The Product Versus Skill Pendulum In Security And The Need For Better Solutions

This post was originally published on Forbes

Security used to be easy: a fairly binary condition. You were either protected or not, patched or not, and a port either was or wasn’t reachable from outside IP addresses.

And then came complexity. Overlaying the different aspects of vulnerabilities, factoring in application issues, platform bugs, OS patches, network configurations and user access controls, shifted that rather binary situation to an exponential one. In response, we as security practitioners learned to lean on skills: threat modeling, secure development, honeypots and honeytokens for earlier detection, data-centric decision-making and an increased focus on education and training.

We’ve reached a point where products matter less and less. Remember when the first thing you did with a new PC was install antivirus software and race the clock before the machine got exploited? Now, PCs are pretty much secure out of the box thanks to the native malware detection and mitigation tools that are part of the operating system.

However, when looking at the security industry, we still see a lot of relics of the old-school way of operating. I’m not looking to explore who’s to blame (VCs? Startups? Consumers? Analysts? Your guess is as good as mine), but a lot of security vendors still treat the world in a binary fashion. Look at marketing claims, for instance: either you have their product, or you are not secure.

This brings me to my main point: A lot of security organizations are already through the pendulum shift. They are much more data- and customer-focused and base their risk decisions on that rather than on a binary checklist of products. If that’s the case, where does that leave most industry vendors, especially those with products that are not designed around the customer’s actual needs?

As an example, our security organization at Cimpress has been pretty adamant about practicing this customer-focused approach. We make it clear what our needs are and the features/capabilities we’d like to have based on our threat models and current capabilities. However, this leads to several problems.

First, a lot of vendors don’t know how to address that. They have a list of features and their marketing pitch, and that’s it. We’re looking for specific answers (possibly answers that include road map milestones) and are not expecting a single product to address all of our needs. Vendors, on the other hand, find it difficult to adjust their sales process (and pricing) to customers’ specific needs, and they walk away frustrated after being told that we’re not using 80% of the product’s capabilities but would love to pay for the 20% we actually need.

Second, few vendors truly start by identifying needs on the customer side; most still start by finding a novel technical problem and building a solution to address it. So we’re left with niche products that don’t address actual needs but get snazzy marketing backing. We end up pushing the pendulum further into the skills territory, forcing security teams to rely on their own skills and in-house tooling.

To top it off, this increased reliance on skill is deepening the skills gap we already have in the industry. Our education process focuses on narrow areas of the security field, and training programs are often product-focused. Meanwhile, generalists are becoming rarer and more expensive as demand grows for less product-centric and more data- and process-centric expertise.

What to do? Simple: Enjoy the sound of silence for a moment, especially as a vendor. Don’t incessantly ask what a potential customer’s current challenges are while calculating which part of the answer you can anchor your product pitch to. We need to “shift left” that question from sales back to product design and even company inception. We need more smart people listening to what customers say about their needs and, rather than identifying where existing solutions can address them, trying to identify where there are partial or no solutions.

I’ve been fortunate to work with a few VCs and startups that do just that and have the foresight to validate these needs and keep driving their solution to address them or pivot their product so that it truly addresses the core issues.

On the other hand, I get frustrated with vendors who latch on to a minor detail and blow it out of proportion or, worse, resort to speaking ill of their competition. Statements like, “We see a lot of customers of vendor X come to us after two years, and now, with our product, they are happy,” should be banned from sales discussions and replaced with, “Vendor X? Yes! I hear that their product is really good and addresses a certain set of problems really well. I’d love to know what your threat model and priorities are to better understand whether it is X you should go with or maybe find a different set of solutions.”

And as much as innovative products sometimes need to educate the market (I’ve been there and am actively working with companies like that as well), most times, the reverse is what’s needed: truly understanding what the industry needs right now and providing minimum viable products (MVPs) that solve these often basic problems. There’s money in solving seemingly simple issues, especially ones that have been around for a long time and are considered “the norm” or something people just have to accept as suboptimal.

Trust-Building For Security

This post was originally published on Forbes

Trust is a fickle thing. And, weirdly enough, the basic assumption of a lot of security practices seems to include a certain level of trust in users that is pretty hard to justify these days. This is why we see so many successful breaches that can be traced back to compromised accounts, default passwords and social engineering. One then asks, “How should I reduce or eliminate inherent trust from the equation?” Good question!

By rethinking trust in a modern environment, we can get to a stage where the starting point of any security-related decision (e.g., granting access, allowing/disallowing a certain action) is a state of no trust or zero trust. Before jumping into a marketplace where zero-trust solution providers will happily part you from a considerable chunk of your budget, let’s take a look at the core concepts of trust in our environments.

Our starting point, as I mentioned, should be that we do not trust anything: not users, networks, devices or even third parties. From there, trust is built up from evidence until the organization reaches a sufficient level of confidence. At that point, we can ensure that the decision around the action requested by the actor (user) is backed with actual facts, in a way that’s relevant to the scenario in question.

Taking into account factors such as the user’s environment, credentials, secondary identification elements and behavioral history (to name a few) builds much better context for deciding whether to grant the user a certain level of access, including for the specific action they are trying to perform (i.e., are they trying to simply access their own files on a shared storage service, or are they trying to delete a table from a production database?). Being able to differentiate between actions also allows us to set different thresholds for the level of trust we require. In the traditional model, the rule is simple: If you have the right username and password, you’re in. No matter what you intend to do, if you have access to a system, you can fully utilize whatever roles and permissions you’re granted. In a trustless scenario, every action can be associated with a different level of trust.
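
To make this concrete, here is a minimal sketch of what per-action trust thresholds could look like. The signal names, weights and threshold values are all illustrative assumptions rather than a reference implementation:

```python
# Sketch: accumulate trust from context signals; require more trust for
# riskier actions. All weights and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    known_device: bool       # can we identify the device?
    patched: bool            # is it adequately patched/protected?
    known_network: bool      # can we identify the environment?
    mfa_passed: bool         # fresh multifactor authentication?
    behavior_typical: bool   # consistent with behavioral history?

# How much each signal contributes to the overall trust score.
WEIGHTS = {"known_device": 0.25, "patched": 0.15, "known_network": 0.10,
           "mfa_passed": 0.30, "behavior_typical": 0.20}

# Different actions demand different levels of accumulated trust.
THRESHOLDS = {"read_own_files": 0.40, "drop_production_table": 0.90}

def trust_score(ctx: Context) -> float:
    return sum(w for name, w in WEIGHTS.items() if getattr(ctx, name))

def allowed(action: str, ctx: Context) -> bool:
    return trust_score(ctx) >= THRESHOLDS[action]

ctx = Context(known_device=True, patched=True, known_network=True,
              mfa_passed=False, behavior_typical=True)
print(allowed("read_own_files", ctx))         # True  (0.70 >= 0.40)
print(allowed("drop_production_table", ctx))  # False (0.70 < 0.90)
```

The same session can thus read its own files freely but would need to step up (e.g., pass MFA) before touching the production database.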

Take a finance user, for example. On a typical day, the user may be accessing transactional data such as ledgers, payment processing and accounting. However, monthly or quarterly reporting (especially in public companies) requires a completely different set of activities and permissions. Why should those activities be constantly accessible to our user if they are exercised only once in a long while?

In a trustless world, our user still has the defined relationship with said reporting functionality, but exercising it would be scrutinized both from a temporal perspective (“Is it the right time of the quarter?”) and on a trustworthiness level (“Can we accumulate enough evidence to ensure this is indeed the user in question?”). From the user’s perspective, the process is still pretty much the same. They may be required to present additional validation of their identity (responding to a multifactor authentication challenge), but the rest of the elements can be gathered and analyzed automatically (“Can we identify the device? Is it adequately patched/protected? Can we identify the environment?”).
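
Here is a rough sketch of how that reporting scenario could be gated, assuming a hypothetical 10-day reporting window at the start of each quarter; both the window and the evidence checks are made-up parameters for illustration:

```python
# Sketch: gate a sensitive action on time-of-quarter plus accumulated
# evidence, stepping up to MFA only when passive signals fall short.
from datetime import date

def in_reporting_window(today: date, days: int = 10) -> bool:
    """True only during the first `days` days of each quarter."""
    quarter_starts = [date(today.year, m, 1) for m in (1, 4, 7, 10)]
    return any(0 <= (today - qs).days < days for qs in quarter_starts)

def authorize_reporting(today: date, evidence: dict) -> str:
    if not in_reporting_window(today):
        return "deny: outside reporting window"
    # Gather passive evidence automatically; challenge only if needed.
    passive = sum(evidence.get(k, False)
                  for k in ("known_device", "patched", "known_environment"))
    if passive == 3 or evidence.get("mfa_passed", False):
        return "allow"
    return "challenge: request multifactor authentication"

print(authorize_reporting(date(2024, 4, 3),
                          {"known_device": True, "patched": True,
                           "known_environment": False}))
# -> challenge: request multifactor authentication
```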

Through this simple process, we can significantly lower our attack surface. Even when users get compromised (and our working assumption is that every environment will get compromised), access to sensitive assets in our environment will be highly limited, and we will have better visibility into adversarial actions that cannot meet our trust requirements.

I realize that the concept of throwing away any shred of initial trust may sound too harsh or counterproductive to some, but when you start to think about the kinds of environments we work in these days, the idea makes sense. I can tell you from personal experience that at my own company, we’ve made (and are still making) significant changes in the way we perceive trust.

Our latest milestone in that journey has been to rid ourselves of the concept of the “enterprise network” as we all move to a guest network. Granting inherent trust for simply being on a certain network no longer applies to the majority of assets we access (this is, of course, in a scenario where most assets are cloud-based). This doesn’t mean that we are done with our trust journey: I keep finding myself questioning and challenging paradigms where inherent trust exists, and by thinking on behalf of our customers, our businesses and, yes, shareholders, we keep simplifying the security approach to managing risk and keep ourselves nimble.

So here’s a quick takeaway for you: Look at the common ways in which users in your organization access assets. Now, figure out what trust assumptions are being made (implicitly and explicitly) through the process. Scrutinize each trust assumption, and ask yourself, “Should this be an explicit trust assertion that’s based on evidence relevant to this context?” Repeat periodically to make sure you are comfortable with the kinds of risks you are taking when leaving these implied trust assertions in place.
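
One hypothetical way to structure that exercise is a simple worksheet pairing each access path’s implicit trust assumptions with the explicit, evidence-based assertions that could replace them. The paths and assertions below are placeholders:

```python
# Sketch: an inventory of implicit trust assumptions per access path.
access_paths = [
    {"path": "engineer -> production database",
     "implicit": ["being on the VPN means trusted",
                  "valid credentials mean it's the right person"],
     "explicit": ["device identity and patch level verified",
                  "MFA within the last hour",
                  "action consistent with behavioral history"]},
    {"path": "finance user -> quarterly reporting system",
     "implicit": ["role grants standing access year-round"],
     "explicit": ["access only inside the reporting window",
                  "step-up MFA on each use"]},
]

for ap in access_paths:
    print(ap["path"])
    for assumption in ap["implicit"]:
        print(f"  scrutinize: {assumption}")
```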

Why You Should Go Beyond The Typical Penetration Test

This post was originally published on Forbes

If you’ve ever run across penetration test reports, you know they usually look bleak. I should know; I’ve authored hundreds of them. By their very nature, they focus on the most egregious security issues within a system or network. Understanding how an actual adversary would perceive the organization (which is a lot different from how a hired penetration tester does) and grasping the business and operational relevance of the tested environment are crucial to driving an effective result.

In combination with a good understanding of the business risks and how they relate to cybersecurity (see my previous article on utilizing two frameworks for securing a decentralized organization), a company’s security organization can provide context for such technical and scope-focused penetration tests. As we go through a report and align it with business risks, we should keep asking “why/what.” Why is this finding relevant? Why is this issue so critical? What is the root cause that allowed an asset to be exploited? What are the environmental factors that are part of this scenario?

By connecting the technical report to the specific business environment and contextualizing it with the actual valuation of assets and risks, we end up with a highly enriched map. This map contains the original report findings along with additional elements that the penetration test did not address, such as processes, out-of-scope systems, environmental elements, compensating controls and awareness/practices.

How does this play out? Consider a report finding related to the ability to steal credentials, escalate them to a higher privilege and gain access to a sensitive server. A penetration tester may suggest addressing this through technical means (hardening the server to allow access only from predetermined hosts, locking down workstations to reduce privilege escalation effectiveness and adding MFA). However, this could also be complemented by enforcing two-person authorization for critical actions (breaking the ability to abuse a compromised account), using password managers (reducing the chances of reused or guessable passwords) or even increasing logging visibility (to provide the SOC with insights on privileged activities across the systems). These remediations are less likely to turn up in a penetration test report; however, they are just as effective as the classic ones that only address the common technical aspects of the issue, if not more so.
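
To illustrate, an “enriched” finding record for that credential-theft scenario might look something like the sketch below. The schema and field values are assumptions; the point is pairing the tester’s technical fixes with the compensating controls and business context that a scope-limited test rarely captures:

```python
# Sketch: a pentest finding enriched with business context and
# compensating controls. The report ID and values are hypothetical.
finding = {
    "id": "PT-2024-007",
    "title": "Credential theft -> privilege escalation -> sensitive server",
    "severity_reported": "critical",
    "technical_remediations": [
        "harden server: allow access only from predetermined hosts",
        "lock down workstations to blunt privilege escalation",
        "add MFA to server access",
    ],
    "compensating_controls": [
        "two-person authorization for critical actions",
        "password manager rollout to curb reused/guessable passwords",
        "expanded logging of privileged activity feeding the SOC",
    ],
    "business_context": "server holds quarterly financials; exposure "
                        "matters most during reporting periods",
    "severity_in_context": "high",  # after compensating controls
}
```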

By no means should this be read as an encouragement to ignore the report findings from a penetration test, but rather, it should compel you to enhance them within the business context and make them more applicable for your organization. Many of these tests are done through narrow scoping, so as you get more proficient in contextualizing the results, you’ll be able to work more closely with your penetration testers and guide them to provide tests and results that are attuned to your business’ needs.

By not taking these penetration test results as gospel, and by instead accounting for the specific business environment and risks (and, yes, the culture, practices and tolerance), security organizations can provide more effective and longer-lasting mitigations to security gaps, perhaps even lowering the severity of seemingly critical issues to negligible ones. By being a true partner in the organization, instead of limiting ourselves to the role of a technical watchdog, we can more easily earn acceptance and cooperation, not only from the technical teams we work with but also from business leadership.

Two Frameworks For Securing A Decentralized Enterprise

This post was originally published on Forbes

Many modern enterprises no longer operate in a highly centralized manner. Traditionally, cybersecurity in enterprise environments consisted of defining trust boundaries, placing controls over these boundaries, setting standards and policies for the safe and secure handling of data, enforcing said policies and scrutinizing any code/applications that were developed for flaws that may be exploited by adversaries — inside or out.

But now, as part of a decentralized model that optimizes for performance and independence, the kind of control and scrutiny that security organizations once had is becoming irrelevant. This leaves the separate business operations within an enterprise free to choose their own implementation (and sometimes prioritization) of how they run their security.

If the CSO/CISO role was challenging before, it is even more complex now. We need to strike a delicate balance between enforcing some elements and providing guidance or general direction for most others.

As the parent company of multiple independently operated businesses, our approach at Cimpress is to emphasize a shared-security-responsibility model, where each business unit is responsible not only for choosing and operating its technical stack but also for its security and risk management. As such, our security organization has two main tasks: providing clear and transparent metrics for security maturity and providing a means of measuring (really, quantifying) risk in a way that supports decision making around changing controls. We have chosen to utilize two well-known frameworks to achieve this.

For security maturity metrics, we chose NIST-CSF: the NIST Cybersecurity Framework. This framework provides a consistent and clear indication of security maturity across over 100 categories and subcategories of an organization’s defense capabilities. There are many approaches to using this framework, ranging from self-attested surveys to fully automated and continuously updating platforms. The technique matters less than the ability to create a clear reflection of each business’s maturity levels and, of course, to set a minimal or sought-after maturity level (again, this doesn’t have to exist across all the subcategories). This baselining allows businesses to better align themselves with existing policies (which translate to the minimal required maturity levels) and map out their tactical security gaps.
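
As a sketch of what that baselining can boil down to, consider comparing each subcategory’s current score against its target level. The subcategory IDs below are genuine CSF identifiers, but the 0-4 scale, scores and targets are illustrative assumptions:

```python
# Sketch: per-subcategory maturity gaps against a chosen baseline.
current = {"ID.AM-1": 3, "PR.AC-4": 2, "DE.CM-1": 1, "RS.RP-1": 2}
target  = {"ID.AM-1": 3, "PR.AC-4": 3, "DE.CM-1": 3, "RS.RP-1": 2}

gaps = {sub: target[sub] - score
        for sub, score in current.items() if score < target[sub]}

# The tactical gap map for this business, biggest gaps first:
for sub, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{sub}: {gap} level(s) below baseline")
# DE.CM-1: 2 level(s) below baseline
# PR.AC-4: 1 level(s) below baseline
```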

For our risk measurements, we picked FAIR: Factor Analysis of Information Risk. We use FAIR at the enterprise risk management (ERM) level, combined with the rest of the ERM functions, to ensure relevancy and context for our business leaders. We do not deploy FAIR analysis across a broad range of scenarios; rather, we focus on the top three to five that each business identifies as the most relevant for itself.

The goal of using the framework this way is to get immediate buy-in from business leaders while providing them with the means to make informed decisions about their risks. After identifying the scenarios, we perform a FAIR-based analysis and arrive at an expected loss. Behind the scenes, we map the relevant controls to the already-identified security maturity measures.
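
For a flavor of what sits behind that analysis, here is a deliberately simplified Monte Carlo sketch in the spirit of FAIR: expected annual loss as loss event frequency times loss magnitude, each sampled from calibrated ranges. The ranges below are placeholders, and a real FAIR analysis decomposes both factors much further:

```python
# Sketch: FAIR-style expected loss via Monte Carlo sampling.
import random

def sample(low, likely, high):
    # Triangular as a stand-in for the PERT distributions FAIR tools use.
    return random.triangular(low, high, likely)

N = 100_000
losses = sorted(
    sample(0.1, 0.5, 2.0)                    # loss events per year
    * sample(50_000, 200_000, 1_500_000)     # dollars per event
    for _ in range(N)
)

print(f"expected annual loss: ${sum(losses) / N:,.0f}")
print(f"90th percentile:      ${losses[int(0.9 * N)]:,.0f}")
```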

Beyond providing a more realistic reflection of risk (while shying away from qualitative measures such as high/medium or red/green), we also create an immediate feedback loop. At this point, we’ve turned security and risk management into a business problem, one that’s more “easily” solved through financial measurement of recommended changes and their impact on previously expected losses.

If you are asking yourself, “Great, but where do I get started with this?” I would suggest first defining what a shared security responsibility model would look like in your organization. Where do you draw the line between the core responsibilities of the security organization, and where do you expect other parts of the business to own their share of security?

Once that’s defined, utilizing the different frameworks to facilitate it becomes more natural. We’ve used the NIST-CSF to provide metrics for everyone to gauge their maturity level and, based on the shared responsibility model, understand which areas need to improve. I’d suggest tailoring the use of the framework to your needs. In our specific implementation, we simplified the framework in terms of the number of maturity levels and focused on several specific subcategories that we defined as “basic security hygiene.”

Lastly, in order to prioritize the tasks of closing maturity level gaps, choose a risk model that works for you. In our case, it was FAIR, and even then we allowed ourselves to use it in a way that works for us rather than trying to cover all our use cases and scenarios. This allowed us to stay more agile and have an easier adoption period while our businesses were getting used to the methodology and the framework.

At the end of the day, it’s about what works for your organization rather than about sticking to the letter of how a specific framework is structured. And of course, always make sure to build closed-loop feedback cycles in order to facilitate continuous improvement for how your security program works.

One of the biggest challenges of running a security organization is balancing ongoing efforts with strategic direction, all while keeping the “pressure” on to increase maturity across the prioritized elements that give you the most risk reduction over time.

That may sound like a bunch of management words, I admit, but this is truly one of the more exciting areas to run. It combines technical depth with a business understanding of not only what matters now but also how to open up opportunities by enabling the business to take risks.