RSAC 2019 Predictions: Container Security, Application Security, AI and More
Security has been shifting left into software development for a decade. DevOps has unified development and operations teams around delivering a value stream, and each believes security is its responsibility. Are we at the point where DevOps successfully takes on security as part of its culture? Will the host instead reject the transplant, leaving security to the governance and audit folks in the other building? Or are organizations already so saturated with vendors repackaging legacy solutions as the latest SecDevOps silver bullet that they're beginning to tune the space out?
OK, we can do more than just posit burning questions for you to think about as you’re on your way to the big show. Below are our team’s top four predictions for RSA Conference 2019.
#1: More Silver Bullets.
As in past years, we’ll see vendors hawking this year’s silver bullet. This year attendees can reasonably expect to be inundated by pitches from certain emerging technology plays. To name a few:
- Application Security—IAST: IAST vendors will assert that they’re the only reasonable way to meet a DevOps cadence and/or that it just makes sense for you to swap out SAST or DAST tools for IAST. Some claim “zero false positives.”
IAST can “supercharge” dynamic testing, boosting coverage and improving both accuracy around, and visibility into, new vulnerabilities. For well-calcified programs, IAST can re-energize vulnerability discovery practices. In light of this genuine potential, the vendor positioning strikes us as falling short.
Customers have quickly debunked claims of perfect accuracy, and many note that their existing tool regimens often already pick up the findings IAST tools claim to uniquely surface. While credible improvements in accuracy and coverage are valuable, vendor claims that these features alone make DevOps cultural compatibility a reality ring hollow to many customers.
To our eyes, facilitating DevOps culture is about supporting the software lifecycle in a software-defined manner. Organizations should ask themselves: “What solutions will help us deploy and use vulnerability discovery tools without a human to integrate and operate them?” “How can the output of these tools be prioritized and routed appropriately for remediation?” “Can the pipeline determine on its own that remediation has occurred?” Answering these questions will address the DevOps cultural fit—not IAST alone.
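As a hypothetical sketch of what “software-defined” tool operation might look like, the fragment below ingests scanner findings, prioritizes them by severity and exposure, and routes them to a remediation queue with no human in the loop. All names, fields, and queue labels are illustrative, not drawn from any specific product.

```python
# Hypothetical scanner output: findings with severity and asset metadata.
SAMPLE_FINDINGS = [
    {"id": "F-1", "severity": "critical", "internet_facing": True,  "component": "auth-svc"},
    {"id": "F-2", "severity": "low",      "internet_facing": False, "component": "batch-job"},
    {"id": "F-3", "severity": "high",     "internet_facing": True,  "component": "api-gw"},
]

SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def prioritize(findings):
    """Order findings so the riskiest (most severe + internet-exposed) land first."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f["severity"]], f["internet_facing"]),
        reverse=True,
    )

def route(finding):
    """Decide which remediation queue a finding goes to, policy-driven, no human."""
    if SEVERITY_RANK[finding["severity"]] >= SEVERITY_RANK["high"]:
        return "remediate-now"
    return "backlog"

for f in prioritize(SAMPLE_FINDINGS):
    print(f["id"], "->", route(f))
```

A real pipeline would feed this from tool output (SARIF, JSON reports) and push routing decisions into a ticketing system, but the shape of the problem—machine-readable findings in, prioritized work out—is the same.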
- Cloud—Container Security: Gartner predicted that fragmentation would plague cloud security* and the market hasn’t disappointed. Vendors claiming to secure cloud workloads have popped up like a field of dandelions. Within a single category, vendors claim to be able to “completely solve the workload security problem.” Zero-trust network and some orchestration security vendors claim to provide security without the constraints of a container-aligned solution.
One key benefit these tools provide is fairly seamless integration into the “delivery” phase of the pipeline. Tools have the potential to provide valuable security telemetry at the same time they provide proactive protection. Ultimately, some customers we’ve spoken to prefer this single point of instrumentation to a lifecycle full of assurance activities as a means to provide robust visibility into their security posture.
But customers may also be confused as to what features they can rely on from vendors in this and related categories. Some products provide a fabric of hardening and protection widgets, much like mobile device management (MDM) did for endpoints. Others provide the ability to adjust mandatory access control during operation from a single point of control. Still others provide integrity and/or provenance guarantees on image distribution.
The software composition analysis (SCA) these tools provide as categorical “table stakes” remains too superficial to replace pure-play products in that space. OSS security is recursive: shaky OSS is built on top of more shaky OSS. The lack of clarity as to what other capabilities each tool offers means that adopting organizations use only a subset of existing product capabilities, and are sometimes flummoxed as to how to divide security responsibilities between their tools when those capabilities overlap with other categories’ products.
Until Function-as-a-Service (FaaS) takes off, containers represent an exceptional single point of control for organizations that have been able to standardize on them as a means of development and delivery. This is an ideal point for organizations to assert control over composition, configuration and hardening, and injection of monitoring or controls.
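To make the “point of control” idea concrete, here is a minimal, illustrative check—not any vendor’s product—that inspects a Dockerfile for two common hardening gaps: running as root and floating on an unpinned base image. The rules are deliberately simplistic assumptions for the sketch.

```python
def audit_dockerfile(text):
    """Return a list of hardening findings for Dockerfile text (illustrative rules only)."""
    findings = []
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    # Rule 1: the image should drop root by declaring a non-root USER.
    users = [l.split()[1] for l in lines if l.upper().startswith("USER ")]
    if not users or users[-1] in ("root", "0"):
        findings.append("container runs as root (no non-root USER directive)")
    # Rule 2: the base image should be pinned, not floating on :latest or no tag.
    for l in lines:
        if l.upper().startswith("FROM "):
            image = l.split()[1]
            if image.endswith(":latest") or ":" not in image:
                findings.append(f"unpinned base image: {image}")
    return findings

dockerfile = "FROM ubuntu:latest\nRUN apt-get update\n"
for issue in audit_dockerfile(dockerfile):
    print(issue)
```

Because container definitions are just text under version control, checks like this can run at the same single point in the pipeline where composition, configuration, and monitoring injection are asserted.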
#2: Orchestration Tools Introduce Complexity. Complexity Bears Risk.
Kubernetes, Docker, even Envoy are becoming standards, each in their own right, and developers see these tools as silver bullets for managing cloud infrastructure. And, for good reason: these tools dramatically reduce time to market while providing scalability and multi-cloud, even hybrid-cloud, support.
Yet these tools may be victims of their own success. Both Docker and Kubernetes have recently had high-profile vulnerabilities publicized, with working exploits quickly following to weaponize them. At a base level, these tools add complexity to an architecture, often opening holes between production environments and the development/staging environments that control deployment. They open ports and protocols developers may not be aware of or understand, expanding an invisible attack surface that often includes discovery and labeling services—ripe for attackers to leverage in mapping the victim’s topology. And, of course, with standardization comes concentrated risk: a vulnerability in Docker’s compartmentalization of containers, in Kubernetes application/network orchestration, or in VPC/zero-trust networking identity proofing potentially gives attackers the same scale and leverage in exploitation that DevOps engineers enjoy in development.
#3: Security Initiatives Begin to Break Under the Weight of Solution Fragmentation and Cost
In many areas of the security market, prices are going up. More than one vulnerability discovery tool vendor will ask customers to pay more for their solutions. Sometimes, the additional features or value might not be obvious. No concrete, let alone verifiable, ROI case will be offered.
Security initiatives will struggle to absorb these price increases. Mature security groups are struggling to justify sustained and growing seven-figure, even eight-figure, annual spend for vulnerability discovery and management alone. Their job is getting harder. Cloud is joining an already full club of technology stacks (legacy, web, mobile, …) that they have to support. Boards are demanding a portfolio-wide risk management strategy where a “prioritize and address the top 10 percent of apps” stance sufficed years ago. And though these groups have more work to do, their dissatisfaction with vendor product improvement and support has reached a breaking point. Many firms have switched tool vendors, knocking incumbents out of decades-long renewals, only to find themselves just as frustrated with the replacements.
Elsewhere in the market, new executives have joined firms to take security to the next level—fresh off tenures in larger, more mature organizations. But while expectations are high, and these executives’ experience and appetite for challenge are up to the task, budget and staff availability are not. These firms are simply not able to acquire the tools and solutions that the top 10 percent of the market has, nor are they able to hire the subject matter experts necessary to implement and maintain those tools at scale.
As cliché as it sounds, firms in all segments of the market need to do more with less: fewer staff, less expertise, and, as always, less spend. Relying on anointed security champions within development was a common refrain over the last decade. But distributing the work from a security group into development only alleviates some aspects of the strain—such as the shortage of security experts. It’s not as if development is flush with bandwidth. From where we sit, CISOs need to be able to “see” more of their security operations from one vantage point—the so-called single pane of glass. But this “single pane” can’t be specific to one company’s tools. It’s got to give the executive a consolidated and coherent view of all the sensors that inform risk posture on a business application or value stream—from the beginning of the software lifecycle to deep within its operation and use. These CISOs need not only comprehensive visibility but also the ability to control their security policy without relying on several delegates, each with deep subject matter expertise in a particular type of vulnerability discovery or tech stack.
If a software pipeline goes GA within their organization, they need to be able to update the security scanners that run in that pipeline: the frequency, the configuration, the scoring. If operational or threat intelligence data warrant it, the CISO must be able to turn up visibility into code quality or vulnerability, and query which assets within the organization may be vulnerable and exposed given the new data. On the show floor, look for vendors with solutions to these problems.
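One way to imagine that kind of central control is security policy as data: a single policy object the CISO’s team edits, which every pipeline consults to decide which scanners run, how often, and when findings gate a build. The sketch below is purely hypothetical—scanner names, cadences, and thresholds are illustrative assumptions, not any product’s schema.

```python
# A hypothetical org-wide policy, editable in one place by the security lead.
POLICY = {
    "scanners": {
        "sast": {"enabled": True, "every_n_builds": 1},
        "sca":  {"enabled": True, "every_n_builds": 1},
        "dast": {"enabled": True, "every_n_builds": 5},  # heavier scan, run less often
    },
    "fail_build_at_or_above": "high",
}

SEVERITY = ["low", "medium", "high", "critical"]

def scanners_for_build(policy, build_number):
    """Scanners the pipeline should run on this build, per the central policy."""
    return [
        name for name, cfg in policy["scanners"].items()
        if cfg["enabled"] and build_number % cfg["every_n_builds"] == 0
    ]

def build_should_fail(policy, finding_severity):
    """Gate the pipeline on the policy's severity threshold."""
    threshold = policy["fail_build_at_or_above"]
    return SEVERITY.index(finding_severity) >= SEVERITY.index(threshold)

print(scanners_for_build(POLICY, 10))        # dast joins on every 5th build
print(build_should_fail(POLICY, "medium"))   # below the "high" threshold
```

Tightening visibility after new threat intelligence then becomes a one-line policy change—say, lowering `fail_build_at_or_above` to `"medium"`—rather than reconfiguring each tool through its own delegate.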
#4: AI… Of Course
As in recent years, there will be more “talk” about combining AI with (insert “threat intelligence” or “threat hunting” here) to prevent breaches automatically. Of course, since true AI doesn’t really exist yet and won’t for at least another decade, some vendors will propose that they have a magic bullet to level the playing field until they deliver that product. Runtime Application Self-Protection (RASP) seems poised to fill this gap—but will it be the solution organizations want for automatic breach prevention?
Disparate tools produce voluminous results. On the backend, AI can help with triaging this volume but only to the extent that there is a huge body of data from which to train AI. You need people to generate the volume of data to train AI. And if you don’t have enough people doing vulnerability discovery, you aren’t generating this volume of data. Vicious. Cycle. We will get there but it will take several years to bootstrap AI, to make it more “human.”
That said, security is different. Once trained, AI can and will be used to predict failures in cars and failures in software. But it’s almost impossible to use AI to predict threats, because a threat is a person. Systems built of inanimate objects obey the rules of statistics; people have intention, which is harder to predict. Intention allows people to break out of statistical models: they observe the models and then adapt to them. It becomes a game of cat and mouse—true today, and still true tomorrow. As you’re walking the show floor and having conversations, ask this question: “How are you going to supply me with a durable advantage in this game?”
Will any of these come to fruition? Watch this space for a post-game wrap-up.