A CARTA-based, Zero Trust Approach to Workload Security


Featuring Tal Klein (Chief Marketing Officer, Rezilion) & Neil MacDonald (Distinguished VP Analyst, Gartner)

Key Takeaways Include:

  1. What is CARTA and why is it important to cloud workload protection?
  2. Why is continuous risk assessment at the core of the Gartner Adaptive Security Architecture?
  3. How can we “shift-left” security without impacting developer productivity?
  4. How to apply a CARTA approach to vulnerability assessment and management
  5. Why automation is a key component of cloud workload protection

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved

Supporting DevOps in an environment of advanced threats requires a new approach for all facets of security. Gartner’s continuous adaptive risk and trust assessment (CARTA) framework has risen to the forefront of cloud workload protection and vulnerability management for DevOps because, in cloud workloads, risk is fluid, not static, and needs to be discovered, continuously assessed, and mitigated. To address these needs, cloud workload protection platforms should be context-aware, adaptable to different levels of risk, and autonomous.

Join featured speaker Neil MacDonald, Distinguished Vice President Analyst at Gartner, and Tal Klein, Chief Marketing Officer at Rezilion, for a free, informational webinar as they explain how adopting a CARTA approach provides a zero trust foundation for security and risk management leaders to gain more context, more visibility and more intelligence for dynamic and contextual risk-based decision making.


Transcript

Interviewer: To address the evolving needs of digital trust, enterprises are adopting cloud workload protection platforms that can operate and scale at the speed of DevOps. But how can these cloud workload protection platforms enable infrastructure to adaptively protect itself from threats?

In this webcast, Gartner Research Distinguished Vice President Analyst, Neil MacDonald, and Rezilion Chief Marketing Officer Tal Klein, explain how adopting a continuous adaptive risk and trust assessment, or CARTA approach, provides a zero trust foundation for security and risk management leaders to gain more context, more visibility and more intelligence for dynamic and contextual risk based decision making.

First up in our program is Neil. Neil, welcome.

Neil: Hello and welcome. We’re here today to talk about a CARTA-based zero trust approach to workload security. Now, I’m sure that title raises all sorts of questions. So the first thing I want to explain is what is this concept called CARTA and why is it important? And why do we highlight it in Gartner research?

The bottom line is this. Our worldview in security has been flawed. We thought the world was fairly straightforward. We can blacklist and block the things that are bad. We can allow and whitelist the things that are good, and all we have to do is manage just a little bit of gray in the middle. The reality is that these static, predefined lists, whether blacklists or whitelists, fail. On the blacklist side, they fail to stop zero-day and targeted attacks.

And whitelists fail as well. Let’s say you have a user. They’ve logged in with credentialed access; that’s a form of a whitelist. Well, if the bad guy gets the credentials, then for all intents and purposes they look like a good guy. Or you might have an application that you’ve whitelisted, but it has an embedded vulnerability, and now it’s behaving maliciously.

So the conclusion you reach is, you’ve got to watch everything, all the time. Continuous assessment of relative levels of risk and trust. And we need security infrastructure that can adapt accordingly. And that is where CARTA comes from. It stands for Continuous Adaptive Risk and Trust Assessment, watching all the time everywhere, everything, wherever possible. Looking and measuring — does it represent risk, do I have trust in this particular entity, and should it be doing whatever it’s doing? All the time. Continuous Adaptive Risk and Trust Assessment.

So let’s bring that concept to workload protection. Now, in the past, I would say information security over-invested in blocking and prevention technologies. But if you assume the bad guy gets in, you must have the equivalent ability to detect and respond once they’re in your systems, or in your applications, or on your networks.

So, with CARTA we built out a broader framework, we call it the Adaptive Security Architecture. And, as you can see in the diagram, we’ve broken it into four pieces. The upper right being prevent, of course, where you can. But you must assume the bad guy gets through and you have to have the ability to detect and respond once they’re in our systems. And in the upper left, can we proactively identify, or predict or anticipate where the attacker will target next? And all of this works together as an integrated system. So we fill out these models and you can see here I’ve mapped three capabilities to each one of those four phases. Together collectively, that is the Adaptive Security Architecture. CARTA is that little risk trust engine at the center of the slide, watching everything all the time — users, packets, networks, applications, behaviors, all of it, identifying where excessive risk resides.

With this diagram, we can start looking at new and emerging areas in information security. For example, in the detection and response categories, technologies like endpoint detection and response, network detection and response, network traffic analysis, XDR (extended detection and response), or SOAR technologies, mapping on the response side. Or breach and attack simulation in the upper left.

All of these are important and emerging. But what about prevention? Good security always starts in the upper right when it comes to attacks. And how do we evolve our prevention capabilities?

That’s why I want to draw your attention to better hardening of systems and better isolation of systems. And that’s why you see interest in technologies like microsegmentation in the data center. Or, in this case, renewed interest in application control and whitelisting and zero trust approaches to information security as a preventative security stance to improve our overall security posture.

So the question is — how will we take this CARTA model, this framework, and bring this approach to workload protection strategies?

Well, the first question you have to ask is, what’s a workload? In Gartner research, we talk about a unit of work. It used to be a physical machine. Ten years ago, VMware transformed the data center and virtual machines became the common building block. Over the past several years, containers have taken off, and even more recently, serverless functions have been widely adopted.

All of these are units of work. So when I talk about workload protection strategies, I’m talking about, ideally, an approach that can protect all of these, whether it’s a physical machine, a virtual machine, or a container, and potentially serverless code as well.

So we talked about the need to have preventative controls, detection, response, and predictive capabilities, all working together as a system.

We’ve mapped the set of security controls we believe are necessary for workload protection into a hierarchy. As you see on the slide, it’s a pyramid. The most important things, the ones that reduce the most risk, are at the bottom, and the technologies in a layered defense-in-depth strategy that reduce less risk are toward the top.

Well, at the bottom, the foundational technologies are rooted in good operational and security hygiene, like privileged account management and making sure those images are hardened and unnecessary code is removed. But as you start building your way up the pyramid, what I want you to note is the very important role of application control and whitelisting. And right above it is another layer called Exploit Prevention and Memory Protection. They’re very low in this pyramid, meaning, if we could use a default deny — or you could call it Zero Trust — approach to what’s allowed to execute on a server, our attack protection, our security posture would be greatly improved. It’s a very powerful security paradigm.

Now, I mentioned Zero Trust. A lot of you think of Zero Trust as only a networking approach. And in fact, if you look at the NIST definition in their latest draft of zero trust architectures, they define it, but when you read it, it talks about network-based perimeters and network location. What I’m saying is that with application control and whitelisting and CARTA, we can take this zero trust, default-deny mentality and bring it to workloads.

So just like in zero trust networking, where your position on the network gives you no advantage, I’m saying the same is true here with executable code on a server. Just because code runs on a server, or is located on a server, doesn’t mean it’s trusted. You need to understand the identity of the code. Where did it come from? Who, or what process, created it? And even once the code is running, it shouldn’t be trusted. We need to monitor for unusual or risky behaviors, and we need to monitor this in the spirit of CARTA, all the time, in a continuous and automated fashion.

That would be a very powerful security paradigm and security posture for protecting server workloads. So why don’t we do this, if it’s so powerful? This default-deny idea is not new, but there are a couple of reasons why people haven’t widely adopted it. Number one, it’s hard to build the initial allow list, or whitelist. We don’t know what’s actually running on our servers and all of these workloads. But I’d say the bigger problem is how you maintain that whitelist as things change, especially in the rapidly changing world of cloud-native applications and DevOps. Things are changing all the time, possibly multiple times a day.

So you have to identify trusted sources of change. What are those trusted sources? The Achilles heel of whitelisting, as I mentioned earlier, is if you get a whitelisted application and it itself contains an embedded vulnerability, now you have a good application that’s gone bad. That’s an extremely difficult problem to solve. And in order to solve it, you actually need to get more granular and observe and monitor the application’s behavior, the processes, its network communications and so on.

We believe this can be done, but to address these issues and achieve this vision of a continuously adaptive default deny Zero Trust security posture for server workloads, we need to shift left into development.

What we’ve talked about so far — the prevent, detect, respond, predict — that’s on the right side of this slide. That’s all runtime security.

To solve this problem for default-deny CARTA-inspired application control and whitelisting brought to server workloads, we’ve got to get into development. And so you see here, CARTA’s in the middle on the right, and as we shift left into development CARTA’s on the left side as well, meaning we’re always watching, always observing what’s going on in the development process.

Now, in this picture, we call it DevSecOps, essentially bringing security seamlessly and transparently into modern development processes, into modern DevOps-style workflows. But in order to do this, security needs to be seamless and transparent to the developers. They write code in their favorite IDE and check it into a build server. There are automation tools like Chef and Puppet and Ansible and CloudFormation. They may be using Docker or Docker Hub. They may be using GitHub. They may be using JIRA for bug tracking and Slack for collaboration.

It’s their world, not ours. And that needs to be one of the guiding principles as you look at any tools that integrate into the development pipeline from a security perspective. We need to make sure we integrate into the developer’s world, not the other way around. We’re not going to ask the developer to go to some security console, or to write a manifest of all of the applications that are supposed to be on a server. They want to write code, they want to do it quickly, and they want to get it into the hands of your customers. We can’t slow them down, and that needs to be a guiding principle for any tools that integrate into the development process.

Let them code. Get out of their way. But in modern development, we can take advantage of the declarative nature of modern development languages and modern frameworks and modern architectures, cloud-native architectures based on microservices and APIs and containers. There’s a lot of declarative information in all of that. It’s in Chef, it’s in Puppet. It’s in Ansible. It’s in YAML files.

Why don’t we analyze those files and take all of that declarative intent and use it to build the whitelist we were talking about, that allow list? We can automatically generate a profile of what should be running on that server. Why? Because we already know this. It’s actually being done by the developers as they’re putting together these applications.
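To make that idea concrete, here is a minimal sketch of deriving an allow list from declarative artifacts. It is not Rezilion’s implementation; it assumes PyYAML is available, that the manifest follows a standard Kubernetes Deployment layout, and the file path is purely illustrative. A real platform would also ingest Chef, Puppet, Ansible, and Dockerfile artifacts.

```python
# Minimal sketch: derive a runtime allow list from declarative manifests.
# Assumes PyYAML and a standard Kubernetes Deployment structure; a real CWPP
# would also parse Chef/Puppet/Ansible recipes, Dockerfiles, and lockfiles.
import yaml

def allow_list_from_k8s_manifest(path):
    """Collect the images and entrypoint commands a manifest declares."""
    allowed = {"images": set(), "commands": set()}
    with open(path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc:
                continue
            pod_spec = doc.get("spec", {}).get("template", {}).get("spec", {})
            for container in pod_spec.get("containers", []):
                allowed["images"].add(container.get("image", ""))
                for part in container.get("command", []) + container.get("args", []):
                    allowed["commands"].add(part)
    return allowed

if __name__ == "__main__":
    # Hypothetical manifest path, for illustration only.
    profile = allow_list_from_k8s_manifest("deploy/payments-service.yaml")
    print(profile)
```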

But unfortunately, most security treats this as a runtime problem, and we’re largely blind to what goes on in development.

What we need to do, and as this picture shows, is start linking these. Development and operations and security, they should be fundamentally intertwined. That’s why we call it DevSecOps, fundamentally intertwined. You can’t have any of these without the other. And if we do our job right, that Sec part of DevSecOps is silent, it’s DevOps done right, done securely.

So we can take this declarative intent from all of these sources, and we can use that to then populate and build that allow list, that whitelist, that we enforce at runtime.

So, as you can see, what we’re talking about is solving some of the fundamental problems of why application control in general has not been widely adopted, because we were blind to what’s going on in development. We can solve this problem, but we need to shift left, back into development to gather this information.

And we can not only take information from the left side of this picture to the right side; we can also take information from the right side back to the left, from runtime back into development.

So, for example, what if you see behavior in an application that is very odd? What if that behavior, in fact, shows you there is a vulnerability, maybe even a zero-day, as yet undiscovered? You find it because the application itself at runtime is behaving in ways it should not behave; it’s very odd, it has never been observed before, and it’s very risky. You can take that visibility and push it back into the development process and say: there might be a vulnerability here, there might be a zero-day.

And in fact, depending on what your policies are in the application control technology, you might even block those risky behaviors. If you see, for example, a command shell being spawned, and you know the application does not do that because you scanned it back in development, there’s no need for it, and it has never done this, then we can block that.

So you can see that this type of approach actually starts to mitigate the risk of attacks on vulnerabilities, because the attacker can’t get the application to behave the way they want; we’ve set up these application control and whitelisting policies.
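Here is a minimal sketch of that default-deny enforcement step: compare an observed process event against the profile built in development and decide whether to allow, block, or alert. The event shape, profile format, and binary paths are illustrative assumptions, not a real CWPP API.

```python
# Minimal sketch of default-deny enforcement: check an observed process spawn
# against the profile derived from development and decide what to do.
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    parent: str      # the application binary that spawned the process
    executable: str  # e.g. "/bin/sh"

def evaluate(event, profile, mode="block"):
    """Return the action to take for a process event under default-deny."""
    expected = profile.get(event.parent, set())
    if event.executable in expected:
        return "allow"
    # The developer never declared this behavior, so treat it as risky.
    return "block" if mode == "block" else "alert"

# Example: the (hypothetical) payments service was never declared to spawn a shell.
profile = {"/usr/bin/payments-service": {"/usr/bin/python3"}}
print(evaluate(ProcessEvent("/usr/bin/payments-service", "/bin/sh"), profile))  # block
```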

This is why application control and whitelisting is so low in our hierarchy. It’s so low in the pyramid, which means it’s very important and it reduces a great amount of risk if we can adopt this type of approach.

Let me summarize with a few key recommendations. First, we need to take a continuously adaptive, risk- and trust-based approach to security, everywhere. And we need to use security approaches whose protection adapts in real time to change and does not rely on predefined static lists. They don’t work. Not in the world we’re living in.

We believe this default-deny, zero trust security posture can be brought to server workloads, physical machines, virtual machines, containers. And we’re saying, as you think about zero trust architectures, don’t just stop at the network, bring it up into the application layer, into workload execution.

Next recommendation. Stop treating development and run time as if they’re separate problems. They’re not. They’re fundamentally intertwined, especially for modern cloud-native application development, and modern cloud-native application security.

And if you’re going to integrate security testing and security scanning into these modern development pipelines, you better integrate natively into the developer’s world, not the other way around.

And the final point, especially on support of Linux, which is the most widely adopted operating system for these cloud-native applications. If you have a vendor that says they’re going to support Linux, they better explicitly support cloud Linux distributions, they better support containers, and they better have support and integration with Kubernetes, which is the de facto container orchestration standard.

So with that, it’s been my pleasure today to talk to you about building a CARTA-based zero trust approach to securing your workloads. Thank you.

Interviewer: Thank you, Neil. Now I’ll hand it over to Tal.

Tal: Thanks for those insights, Neil.

I think it might be useful to take a step back for a moment and look at how we got here. I’ve tried to encapsulate the state of enterprise workload security on this slide. What we’re showing here is that as modern operations environments scale and change at an overwhelming pace by adopting automated DevOps technologies, security teams are struggling to keep up, because they’re using security and monitoring tools that require manual tuning and administration. The chart here shows the exponential growth rate of nodes and code in enterprise IT versus the rate of security hiring in the same organizations.

The point is, it’s not a problem we can solve with more people or better heuristics. From a continuous adaptive risk and trust assessment perspective, none of the tools the security team has at its disposal are relevant to this problem. That’s why I think it’s important that when we think about taking a CARTA approach to cloud workload protection, we need to assume that we’re not looking to evolve things like EDR, RASP, or any of the legacy tools you see listed here. But rather, we need a wholesale new approach for cloud security.

Enter cloud workload protection platforms. CWPPs draw from a wide range of security capabilities to protect servers, virtual machines, containers and serverless workloads. CWPPs enable security and risk management technical professionals to reduce cloud risk by addressing numerous potential threats in their cloud workloads. Customers usually start by trying to fix their vulnerabilities. Some also try to do it as early as possible in the development cycle, shifting their vulnerability management process left.

The problem is, with the amount of code being pushed, both homegrown and open source, this is simply impractical. Developers won’t fix all those vulnerabilities. Sometimes they simply can’t. Patches are not always available. Some applications can’t be shut down for patching. And, of course, there are zero-day vulnerabilities that you simply don’t know about until it’s too late.

So the journey starts with accepting that vulnerabilities in production are inevitable. Then prioritizing them based on risk, reducing the attack surface and automating remediation. Cloud workload protection platforms make it possible to gain visibility into vulnerabilities that are loaded in memory and are exploitable. And that’s important, because it can dramatically reduce the time and effort necessary to assess and triage patches. CWPPs can also ensure applications can run in production with unpatched and zero-day vulnerabilities.

In the world of security, time is everything. If an application either can’t be patched, or the patch impacts performance, integrating a cloud workload protection platform with your existing DevOps tools lets that platform act as a security health check, marking compromised services as unhealthy and automatically bringing them back to a known good state.
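As a rough illustration of that health-check pattern, the sketch below leans on the orchestrator teams already trust: when a workload is flagged as compromised, delete the pod so its Deployment recreates it from the known good image. It uses plain kubectl via subprocess; the pod and namespace names are hypothetical, and this is not Rezilion’s mechanism.

```python
# Minimal sketch of the "security health check" idea: replace a compromised
# workload with a fresh instance using the orchestrator already in place.
import subprocess

def restore_to_known_good(pod: str, namespace: str = "default") -> None:
    """Delete the compromised pod so its Deployment recreates it from the image."""
    subprocess.run(
        ["kubectl", "delete", "pod", pod, "-n", namespace],
        check=True,
    )

# Example: a detection hook could call this in addition to alerting.
# restore_to_known_good("payments-service-7d4b9c-xk2lq", "prod")
```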

Which brings me to our primary differentiator here at Rezilion. It’s that we believe that all these things should happen autonomously. As soon as you turn us on, we militarize your CI/CD pipeline by enforcing the left side of your CI/CD pipeline on the right side, putting your existing DevOps tools to work, protecting your applications and services.

Since it’s a deterministic technology, Rezilion assesses with absolute certainty whether something running in production is supposed to be there and whether it’s doing the things developers intended it to do. It’s a cloud workload protection platform that understands the interwoven relationships, dependencies and recipes in the CI/CD pipeline and enforces them in production. Because of that, when Rezilion events on something, it’s 100% meaningful. And even if it can’t mitigate an issue, Rezilion informs you of what happened, how it happened and what its impact was.

Furthermore, we don’t only alert, we actually mitigate the threat by using existing orchestration mechanisms that DevOps teams already trust, such as Kubernetes, AWS, Chef and so on, to restore the compromised host, VM, or container to a known good state.

DevOps teams love us because we don’t require them to jump through hoops, or change or slow down the way they push stuff into production. Our mitigation acts like a health check, curing compromised services the same way DevOps would if the service had a memory leak or a bug. Most importantly, we do this autonomously, without the need for a human administrator.

Finally, I want to leave some time for Neil to answer some questions at the end, so I’ll just close with a brief overview of how all of this comes together from a UI perspective.

Here you can see how the Rezilion management console is a productization of the CARTA principles we’ve laid out in this webinar. It maps out the active risk associated with services in your environment, which includes manual access, whether acceptable, such as an admin making a change in production outside the CI/CD pipeline, or unsanctioned, as in the case of an insider threat or a cowboy admin doing something they shouldn’t be doing.

Code bloat. That is, components that are deployed with the service but are unused and may have vulnerabilities associated with them. I believe some call these one-hit wonders because they’re installed as part of the image but are never executed.

Unloaded vulnerabilities. These are vulnerabilities that exist in a container or VM but are never executed and therefore do not represent an actual threat. We’ve recently published research demonstrating that at least half of the vulnerabilities reported by vulnerability assessment tools fall into this unloaded category.

Then we move on to exploitable vulnerabilities. These are vulnerabilities that are loaded but for which Rezilion acts as a compensating control, meaning we buy you time in the event of an attack. So if an attacker exploits that vulnerability, Rezilion alerts, events, and automatically disrupts the attack.

And last, unmitigated vulnerabilities. These are vulnerabilities that Rezilion can’t compensate for, which, if critical, should be triaged for patching above all else.
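To tie these categories together, here is a minimal triage sketch based on the buckets just described: unloaded, exploitable but compensated, and unmitigated. The finding fields and CVE identifiers are illustrative assumptions about what a scanner plus runtime visibility would provide, not a Rezilion data model.

```python
# Minimal sketch of the triage buckets: unloaded, exploitable (compensated),
# and unmitigated vulnerabilities, driven by runtime visibility.
def triage(finding):
    if not finding["loaded_in_memory"]:
        return "unloaded"        # present on disk, never executed
    if finding["compensating_control"]:
        return "exploitable"     # loaded, but a compensating control buys time
    return "unmitigated"         # loaded and uncompensated: patch first

findings = [
    {"cve": "CVE-2019-0001", "loaded_in_memory": False, "compensating_control": False},
    {"cve": "CVE-2019-0002", "loaded_in_memory": True,  "compensating_control": True},
    {"cve": "CVE-2019-0003", "loaded_in_memory": True,  "compensating_control": False},
]
for f in findings:
    print(f["cve"], "->", triage(f))
```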

And with that, I thank you for your time.

And let’s hand it back to Neil for any questions our audience may have.

Interviewer: Thank you, Tal.

Neil, we have some questions for you. The first one is, how does CARTA apply to vulnerability management?

Neil: If you remember, one of the slides actually said this in the upper left-hand corner; it talked about a CARTA-inspired approach to vulnerability management. Well, what does that mean? It means that despite our best efforts, we’ll never have our systems entirely patched and up to date. Never. It’s not possible. New vulnerabilities are emerging faster than we can patch.

So once you acknowledge that, you can take a risk-based approach, a continuous risk-based approach, that’s where CARTA comes in, to how you prioritize these vulnerabilities.

So how would we do that? We can look at things like the network topology, for example. Is the system exposed to the outside world? Are there attacks in the wild on that known vulnerability? Are there mitigating controls in the network path, like a firewall or a network-based IPS? All of these can be fed into a mathematical, analytical model that helps you prioritize the vulnerabilities that represent the most risk. You could even include things like the business value of the asset, or the sensitivity of the data being protected. All of this creates a model that can help you prioritize your patching efforts.

But if you also take into consideration what’s happening at runtime, you can further refine the model. If the vulnerability is in a library or module that you’re not actually using, because you can see that at runtime, then you can de-prioritize that finding. And likewise, if you know that module is being used, you can raise the priority of that finding.
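Here is a minimal sketch of that kind of risk-based prioritization. The factors and weights are illustrative assumptions only, not a Gartner or Rezilion formula; the point it shows is that runtime context (is the vulnerable module actually loaded?) adjusts the priority produced by static context.

```python
# Minimal sketch of CARTA-inspired vulnerability prioritization: start from
# base severity, then adjust for exposure, active exploitation, mitigating
# controls, business value, and whether the module is actually loaded.
def priority_score(vuln):
    score = vuln["cvss"]                     # base severity
    if vuln["internet_exposed"]:
        score *= 1.5                         # reachable from the outside world
    if vuln["exploited_in_wild"]:
        score *= 1.5                         # attacks observed in the wild
    if vuln["mitigating_control_in_path"]:
        score *= 0.5                         # e.g. firewall or IPS in front
    score *= vuln["asset_business_value"]    # e.g. 0.5 (low) .. 2.0 (critical)
    if not vuln["module_loaded_at_runtime"]:
        score *= 0.1                         # library present but never used
    return score

example = {
    "cvss": 9.8, "internet_exposed": True, "exploited_in_wild": False,
    "mitigating_control_in_path": True, "asset_business_value": 2.0,
    "module_loaded_at_runtime": False,
}
print(round(priority_score(example), 2))  # de-prioritized despite a high CVSS
```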

So runtime visibility at a very granular level can be pulled into these CARTA-inspired vulnerability management prioritization efforts.

Interviewer: What are some of the compensating controls for unpatched systems?

Neil: There’s a variety of compensating controls that might come into play for protection from attacks on unpatched systems. Some of the obvious ones: if you have inline network security controls, like a firewall, and the attack can be blocked at the firewall, stop it there. Or it could be an inline network-based IPS; you could stop it there. Potentially, if it’s an application-layer attack, something like a web application firewall.

But what we talked about today extends these compensating controls to what’s allowed to run on the server itself. When an attacker gets a foothold, they try to do something. They want to exfiltrate data, they want to steal something, they want to plant a file, a Trojan, and we can observe that behavior if we have a compensating control in the form of application control and very granular process monitoring on the server. So that becomes yet another layer in a defense-in-depth strategy for the protection of unpatched systems.

But I’ll take it a step further, because you could have a zero-day attack. You don’t know what the vulnerability is, but you know the attacker is going to try to do something with it. And that’s where you catch them. Not because you’re aware that there’s a missing patch, it’s a zero-day, but because you can see and observe the risky, unexpected behavior: this is not what the application should be doing, it’s not what we built into the profile, and it’s not what the developer intended. Therefore we can highlight it. We could either block it, send an alert, or create a trouble ticket back to the development organization.

Interviewer: What’s the role of identity in this approach to cloud workload protection?

Neil: It’s an interesting question, because most people think of identities as users. People have identities. Well, of course they do, but so do applications, so do containers, so do microservices, so do APIs. All of these entities have identities.

So in the presentation, when we were talking about bringing zero trust concepts out of the network and to the application layer for workloads, one of the pillars of Zero Trust is switching to identity-based policies.

And that’s absolutely the case here, except here the identity is the entity: the application, the service, the process, the microservice. And with these identities, we can start defining policies. Just like we would in the network with zero trust networking, what I’m saying is we can bring these concepts to the application execution layer, with zero trust application execution, or simply an evolved form of application control and whitelisting.

Interviewer: How is this approach different from legacy application control?

Neil: As I mentioned earlier, application control, default-deny, call it zero trust, these are not new ideas. These technologies have been around quite a while.

So what is different here is that we’re trying to move away from these static predefined lists of what is good and what is bad. I said that at the very beginning in the presentation. That’s one of the tenets of the CARTA approach. These static lists fail.

And as I mentioned, when you try to build an allow list, a whitelist, you’re inheriting a mess. How do you build it? And most importantly, how do you account for change? These are hard problems, and a lot of people gave up because they thought of application control and whitelisting as a runtime problem.

And this is one of the key insights. You have to go back into development. Look at the declarative intent of what the developer is trying to do and what that application should be doing. Look at all of those artifacts and build the whitelist, build the model, at the very genesis of where the application is born. That is new. And that is where some of the cutting-edge approaches are focusing their efforts: getting that declarative intent from the development pipeline and using it to automatically and deterministically enforce expected behaviors at runtime. That is a key difference.

Interviewer: Thank you, Neil. That’s all the time we have. I’d like to thank both Neil and Tal for their great insights today.

A quick program note. Gartner is an impartial, independent analyst of the information technology industry. All content provided by other enterprises expressly represents the views of those enterprises and their speakers. The information should not be construed as a Gartner endorsement of said enterprises, products or services.

This concludes our presentation. Thank you for your time and interest today. If you would like to learn more about Rezilion, please visit rezilion.com.

Thank you.

Get Started Now

Rezilion is a true turnkey SaaS solution for your cloud workload headaches. You are three clicks away from continuous protection.

© 2020 Rezilion. All Rights Reserved.