Category Archives: Opinion

Elastic Permissions

Over the past two years my colleagues and friends have heard me talk about Elastic Permissions, and at some point I started hearing other people mention the term (yay for planting the seeds through consistently using a new term…). So I figured – for the sake of clarity, let’s put this out there for posterity.

The goal of applying the Elastic Permission model is to reduce the effective attack surface for an organization.

Elastic Permissions (can you tell I am an Amazon AWS veteran? 😉 ) is a concept where permissions are constantly and dynamically evaluated against their actual use, and reduced until they match that usage. This requires a few phases: mapping, measuring, and elasticity.

Mapping – in order to apply Elastic Permissions effectively, we first need a system that identifies all the users, accounts, roles, and assets, as well as all the relations between them. Think of a graph where you can traverse from users, through their accounts, to the assets (whether data or functional/compute) along the effective set of permissions that ties them together. Now apply that to your organization, across all systems and platforms – and across platform types, for example IaaS, SaaS, etc.
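To make the mapping phase concrete, here is a minimal sketch of such a permission graph. All the entity names (users, roles, assets) are hypothetical illustrations, not a real organizational model, and a production system would back this with a real graph store:

```python
# Minimal sketch of a permission graph: nodes are identities and assets,
# edges carry the effective permissions that connect them.
from collections import defaultdict

class PermissionGraph:
    def __init__(self):
        # edges[(source, target)] -> set of permission names on that edge
        self.edges = defaultdict(set)

    def grant(self, source, target, permission):
        self.edges[(source, target)].add(permission)

    def reachable_assets(self, start):
        """Traverse outward from an identity to everything it can touch."""
        seen, frontier = set(), [start]
        while frontier:
            node = frontier.pop()
            for (src, dst) in self.edges:
                if src == node and dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen

graph = PermissionGraph()
graph.grant("alice", "role:analyst", "assume")          # user -> role
graph.grant("role:analyst", "db:reports", "read")       # role -> data asset
graph.grant("role:analyst", "bucket:exports", "write")  # role -> IaaS asset
graph.grant("alice", "saas:crm", "login")               # cross-platform edge

print(sorted(graph.reachable_assets("alice")))
# -> ['bucket:exports', 'db:reports', 'role:analyst', 'saas:crm']
```

The traversal is the point: the effective attack surface of "alice" is everything reachable from her node, not just the permissions granted to her directly.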

Measuring – once you have the graph, the actual use of every permission should be measured. This needs to be recorded in a way that reflects usage over time, in order to identify patterns of seasonality, volume, and misuse/under-use. In essence, this phase applies weights to the graph – the higher the weight, the more use that set of permissions gets.
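A sketch of the weighting idea, assuming timestamped use events per edge (the actors, assets, and window size are illustrative):

```python
# Sketch: attach usage weights to permission edges by recording each use
# over time, so under-use and seasonality become visible.
from collections import defaultdict
from datetime import date

usage = defaultdict(list)  # (actor, asset, permission) -> list of use dates

def record_use(actor, asset, permission, when):
    usage[(actor, asset, permission)].append(when)

def weight(actor, asset, permission, window_days, today):
    """Count uses within the window; 0 marks a revocation candidate."""
    uses = usage[(actor, asset, permission)]
    return sum(1 for d in uses if (today - d).days <= window_days)

record_use("alice", "db:reports", "read", date(2021, 3, 1))
record_use("alice", "db:reports", "read", date(2021, 3, 28))
record_use("alice", "bucket:exports", "write", date(2020, 11, 2))

today = date(2021, 3, 30)
print(weight("alice", "db:reports", "read", 90, today))       # -> 2
print(weight("alice", "bucket:exports", "write", 90, today))  # -> 0
```

Keeping the raw dates (rather than a single counter) is what lets the next phase distinguish a truly dead permission from a seasonal one.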

Elasticity – based on the mapping and the difference between granted and used permissions, the system should revoke access dynamically. This means that unused (or even rarely used) permissions are revoked. Additionally, permissions identified as seasonal should be revoked while unused, and re-granted in preparation for the usage period (then revoked again). Lastly – since some friction is expected when permissions are incorrectly revoked, the system should offer a way to natively escalate and regain privileges (for example – through an MFA challenge) and feed that outcome back into future decisions to revoke that permission set. Systems should also consider several grades of escalation based on the confidence level of the revocation (from granting access immediately, through a challenge-response to verify the user's intent, to an out-of-band escalation to a manager/SOC).
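The decision logic above can be sketched as a small function. The thresholds and grade names are hypothetical assumptions for illustration – a real system would learn them from the measured usage:

```python
# Sketch of the elasticity decision: revoke what isn't used, and pick an
# escalation grade based on how confident we are in the revocation.
def revocation_decision(uses_in_window, seasonal, in_season):
    """Return (revoke?, escalation path if the user asks for it back)."""
    if seasonal and not in_season:
        # Off-season: revoke now, re-grant ahead of the usage period.
        return (True, "re-grant automatically at season start")
    if uses_in_window == 0:
        # High confidence the permission is truly unused: a request to
        # regain it goes out-of-band to a manager/SOC.
        return (True, "out-of-band escalation")
    if uses_in_window < 3:
        # Lower confidence: verify intent with the user directly.
        return (True, "MFA challenge")
    return (False, "keep granted")

print(revocation_decision(0, seasonal=False, in_season=False))
# -> (True, 'out-of-band escalation')
print(revocation_decision(5, seasonal=False, in_season=False))
# -> (False, 'keep granted')
```

Note the inversion: the more confident the system is that a permission is dead, the harder it should be to silently regain it – that is exactly what makes the revoked permission behave like a canary.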

The end result, as stated in the Elastic Permissions goal, is a reduction in the effective attack surface. In my personal experience working with several systems like this, organizations should expect somewhere around a 70-80% reduction, though these numbers will depend on the complexity of the organization (how many platforms are used, and the relationships between them). The approach also allows the security organization to focus on real incidents, since the elasticity acts as a sort of canary/honeypot: any attempt to exercise a revoked permission is a strong signal.

Incentives and metrics

“you have to be very careful of what you incent people to do, because various incentive structures create all sorts of consequences that you can’t anticipate”

Steve Jobs

Observation 1: As more companies enforce a work-from-home (WFH) policy these days, a new trend is starting to emerge. I’ve already observed at least three companies that started adding a “work from home” supplement to their employees’ payroll, and with even more amusement I’ve heard employees gloat about it.

Observation 2: People and managers tell me how happy they are to see productivity not only remain stable, but improve. People are finding their “zone” much more easily without the distractions of the office environment, with less time wasted on water cooler chats, less time spent on lunch breaks, and of course huge amounts of time saved on the commute.

I think that it’s only a matter of time until people start connecting the dots.

I’ll start with the first (quite miserable) realization: if you work for a company that boasts fancy office perks, you are underpaid, and you are incentivized to spend more time in the office. The latter is pretty obvious, and I’m sure you realized it when you signed up. The former usually takes a while to sink in. Look around the office, and count the money. All the people there catering to your needs, all the food, dry cleaning, cooks, massage therapists, and whatever other ostentatious perks you see, cost a lot of money. And all this money is taken out of your pocket.
I had the luxury of deciding between two jobs – one with Amazon, and another with a large company that offered said perks – and quickly realized how the Amazon frugality translates into more employee satisfaction and more take-home pay. Simple – as an Amazonian, your perks amount to a $100 discount on Amazon shopping per year (manifested as a 10% discount on the first $1,000 spent). And for the same role you get significantly more in your total compensation than with the other “fancy perks” company.

Question 1: Is this the moment when both employees and companies realize that it’s time to start treating employees like adults? (You need a massage mid-day – great, you are paid like an adult, so go ahead and book one.) This realization will have some significant effects on the workforce – less human capital needed to maintain the office environment (all those service jobs basically gone), but with the obvious other side of the coin: an opportunity for small businesses to offer the same services at actual cost (not subsidized), since all those employees who are now paid more can consume them based on actual need.
It’s also the greener, less wasteful path – again, we’re switching to a model where consumption is actual, not pre-planned based on the number of employees.

Question 2: Setting aside the office environment, and the realization that companies need to pay more to employees who counted on getting fed and having their asses wiped at the office (maybe that’s why people are buying stupid amounts of toilet paper), the productivity boost is hard to ignore. There are already entire companies working fully remote. My question is how many of the ones that have not embraced work from home (even on a partial basis) will realize this is something they need to start planning for and embracing, rather than trying to force butts-in-seats-at-the-office per some ’80s-era productivity book?

So to get back to Steve Jobs’ quote from earlier – it’s pretty clear how creating the wrong incentives has driven a culture that’s less productive and brings less value to companies and employees alike, all in the name of “competitiveness”. Time to check again what problems you are trying to solve, and act a bit more like the engineers and scientists we are…

Full disclosure – I’ve been a proud Amazonian and still support many of the work culture elements from there. I’ve also been working remotely for over a decade, both as an IC and managing remote teams globally (pretty successfully as well).

The Product Versus Skill Pendulum In Security And The Need For Better Solutions

This post was originally published on Forbes

Security used to be easy–a fairly binary condition over whether you are protected or not, whether you are patched or not, or whether the port is accessible to outside IP addresses or not.

And then came complexity: Overlaying different aspects of vulnerabilities. Factoring in application issues, platform bugs, OS patches, network configurations and user access controls has shifted the rather binary situation to an exponential one. As such, we, as security practitioners, learned to use more skills in terms of threat modeling, secure development, honeypots and honeytokens for earlier detections, data-centric decision-making and increased focus on education and training.

We’ve reached a point where products matter less and less. Remember when the first action when getting a new PC was to install an AV on it and try to beat the clock before it got exploited? Now, PCs are pretty much secure out of the box thanks to the native malware detection and mitigation tools that are part of the operating system.

However, when looking at the security industry, we still see a lot of relics of the old-school way of operating. I’m not looking to explore who’s to blame (VCs? Startups? Consumers? Analysts? Your bet is as good as mine), but a lot of security vendors still treat the world in a binary fashion. If you look at marketing claims, for instance, it’s either you have their product, or you are not secure.

This brings me to my main point: A lot of security organizations are already through the pendulum shift. They are much more data- and customer-focused and are prioritizing their risk decisions around this rather than around the binary checklist of products. If that’s the case, where does that leave most industry vendors, especially with products that are not designed around the customer’s actual needs?

As an example, our security organization at Cimpress has been pretty adamant about practicing this customer-focused approach. We make it clear what our needs are and the features/capabilities we’d like to have based on our threat models and current capabilities. However, this leads to several problems.

First, a lot of vendors don’t know how to address that. They have a list of features and their marketing pitch, and that’s it. We’re looking for specific answers–possibly answers that include road map milestones–and are not expecting a single product to address all of our needs. Vendors, on the other hand, find it difficult to adjust their sales process (and pricing) to address customers’ specific needs, leaving them frustrated after being told that we’re not using 80% of the product capabilities but would love to pay for the 20% we actually need.

Second, there seems to be a lack of vendors who truly adopt the approach of identifying needs on the customer side; most still take the approach of finding novel technical problems and building solutions to address them. So, we’re left with niche products that don’t address actual needs but get snazzy marketing backing. We end up pushing the pendulum further into the skills territory, forcing security teams to rely on their own skills and in-house tooling.

To top it off, this increased reliance on skill is deepening the skills gap we already have in the industry. We have an education process that focuses on specific areas of the security field and training and education programs that are often product-focused. Meanwhile, generalists are becoming rarer and more expensive as demand for less product-centric and more data/process-centric expertise increases.

What to do? Simple: Enjoy the sound of silence for a moment, especially as a vendor. Don’t incessantly ask what a potential customer’s current challenges are while trying to calculate what part of the answer you can anchor onto and sell your product through. We need to “shift left” that question from sales back to product design and even company inception. We need more smart people listening to what customers say about their needs and, rather than identifying where existing solutions can address them, trying to identify where there are partial or no solutions.

I’ve been fortunate to work with a few VCs and startups that do just that and have the foresight to validate these needs and keep driving their solution to address them or pivot their product so that it truly addresses the core issues.

On the other hand, I get frustrated with vendors who succumb to trying to latch on to a minor detail and blow it out of proportion or, worse, resort to speaking ill of their competition. Statements like, “We see a lot of customers of vendor X come to us after two years, and now, with our product, they are happy,” should be banned from sales discussions and replaced with, “Vendor X? Yes! I hear that their product is really good and addresses a certain set of problems really well. I’d love to know what your threat model and priorities are to better understand whether it is X you should go with or maybe find a different set of solutions.”

And as much as innovative products sometimes need to educate the market (I’ve been there and am actively working with companies like that as well), most times, the reverse is what’s needed: truly understanding what the industry needs right now and providing true minimal viable products (MVPs) that solve these often basic problems. There’s money in solving seemingly simple issues, especially if they have been around for a long time and are considered “the norm” or something that people need to just accept as suboptimal.

Trust-Building For Security

This post was originally published on Forbes

Trust is a fickle thing. And, weirdly enough, the basic assumption of a lot of security practices seems to include a certain level of trust in users that is pretty hard to justify these days. This is why we see so many successful breaches that can be traced back to compromised accounts, default passwords and social engineering. One then asks, “How should I reduce or eliminate inherent trust from the equation?” Good question!

By rethinking trust in a modern environment, we can get to a stage where the starting point of any security-related decision (e.g., granting access, allowing/disallowing a certain action) is a state of no trust or zero trust. Before jumping into a marketplace where zero-trust solution providers will happily part you from a considerable chunk of your budget, let’s take a look at the core concepts of trust in our environments.

Our starting point, as I mentioned, should be that we do not trust anything: not users, networks, devices or even third parties. Building trust from this point on can be done by using elements that allow the organization to get to a place where a level of confidence is achieved. At this point, we can ensure that the decision around the action requested by the actor (user) can be backed with actual facts and in a way that’s relevant to the scenario in question.

Taking into account factors such as the user’s environment, credentials, secondary identification elements and behavioral history (to name a few) can build much better context for the decision of whether to grant the user a certain level of access, including which action they are trying to perform (i.e., are they trying to simply access their own files on a shared storage service, or are they trying to delete a table from a production database?). Being able to differentiate between different actions also allows us to set different thresholds to the level of trust we require. In the traditional model of keeping trust, the rule follows that if you have the right username and password, you’re in. No matter what you intend to do, if you have access to a system, you can fully utilize whatever roles and permissions you’re granted. In a trustless scenario, every action can be associated with a different level of trust.
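The per-action thresholds described above can be sketched as a simple scoring model. The signal names, weights, and thresholds are illustrative assumptions, not a real scoring scheme:

```python
# Sketch: score trust signals and compare against a per-action threshold,
# instead of making one binary decision at login time.
TRUST_SIGNALS = {
    "valid_credentials": 30,
    "known_device": 25,
    "mfa_passed": 30,
    "typical_behavior": 15,
}

ACTION_THRESHOLDS = {
    "read_own_files": 50,     # low-risk action, modest bar
    "delete_prod_table": 95,  # high-risk action needs nearly every signal
}

def allowed(action, signals_present):
    score = sum(TRUST_SIGNALS[s] for s in signals_present)
    return score >= ACTION_THRESHOLDS[action]

session = {"valid_credentials", "known_device"}  # score 55
print(allowed("read_own_files", session))        # -> True
print(allowed("delete_prod_table", session))     # -> False
print(allowed("delete_prod_table",
              session | {"mfa_passed", "typical_behavior"}))  # -> True
```

The same session that is good enough to read your own files is nowhere near good enough to drop a production table – which is exactly the differentiation the traditional username-and-password model cannot express.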

Take a financial user for example. On a typical day, the user may be accessing transactional data, such as ledgers, payment processing and accounting. However, monthly or quarterly reporting (especially in public companies) requires a completely different set of activities and permissions. Why should these activities be constantly accessible to our user if they are only used once in a long period of time?

In a trustless world, our user will have the defined relationship with said reporting functionality, but exercising this functionality would be scrutinized both on a temporal perspective (“Is it the right time of the quarter?”) as well as on a trustworthiness level (“Can we accumulate enough evidence to ensure this is indeed the user in question?”). From the user’s perspective, the process is still pretty much the same. They may be required to present additional validation for their identity (responding to a multifactor authentication process), but the rest of the elements can be gathered and analyzed automatically (“Can we identify the device? Is it adequately patched/protected? Can we identify the environment?”).
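The reporting scenario above combines a temporal check with an evidence check, which can be sketched as follows. The reporting window (last two weeks of a quarter) and the evidence names are hypothetical choices for illustration:

```python
# Sketch: exercising a rarely-used permission is gated both on timing
# ("is it the right time of the quarter?") and on accumulated evidence
# ("is this really our user?").
from datetime import date

def reporting_allowed(today, evidence):
    # Temporal check: only within two weeks before a quarter end.
    quarter_ends = [date(today.year, m, d)
                    for m, d in [(3, 31), (6, 30), (9, 30), (12, 31)]]
    in_window = any(0 <= (q - today).days <= 14 for q in quarter_ends)
    # Trustworthiness check: enough independent evidence about the user.
    enough_evidence = len(evidence) >= 3
    return in_window and enough_evidence

evidence = {"mfa", "known_device", "patched_os"}
print(reporting_allowed(date(2021, 3, 25), evidence))  # -> True (quarter end)
print(reporting_allowed(date(2021, 5, 10), evidence))  # -> False (mid-quarter)
```

Either check failing on its own is enough to deny – and, importantly, enough to alert, since an out-of-window attempt against a seasonal permission is a strong adversarial signal.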

Through this simple process, we can significantly lower our attack surface, and even when users get compromised (and our working assumption is that every environment will get compromised), access to sensitive assets in our environment will be highly limited, and we will have better visibility into adversarial actions that cannot achieve our trust requirements.

I realize that the concept of throwing away any shred of initial trust may sound too harsh or counterproductive to some, but when you start to think about the kinds of environments we work in these days, the idea makes sense. I can tell you from personal experience that at my own company, we’ve gone through (and still make) significant changes in the way we perceive trust.

Our latest milestone in that journey has been to rid ourselves of the concept of the “enterprise network” as we all move to a guest network. The concept of providing an inherent trust value to simply be on a certain network doesn’t apply anymore to the majority of assets we access (this is, of course, in a scenario where most assets are cloud-based). This doesn’t mean that we are done with our trust journey: I keep finding myself questioning and challenging paradigms where inherent trust exists, and by going through a process of thinking on behalf of our customers, our businesses and, yes, shareholders, we keep simplifying the security approach to managing risk and keep ourselves nimble.

So here’s a quick takeaway for you: Look at the common ways in which users in your organization access assets. Now, figure out what trust assumptions are being made (implicitly and explicitly) through the process. Scrutinize each trust assumption, and ask yourself, “Should this be an explicit trust assertion that’s based on evidence relevant to this context?” Repeat periodically to make sure you are comfortable with the kinds of risks you are taking when leaving these implied trust assertions in place.

Why You Should Go Beyond The Typical Penetration Test

This post was originally published on Forbes

If you’ve ever run across a penetration test report, they usually look bleak. I should know; I’ve authored hundreds of them. By their very nature, they try to focus on the most egregious security issues within a system or network. Having an understanding of how an actual adversary would perceive the organization (which is a lot different than how a hired penetration tester does) and a grasp of the business and operational relevance of the tested environment are crucial to driving an effective result.

In combination with a good understanding of the business risks and how they relate to cybersecurity (see my previous article on utilizing two frameworks for securing a decentralized organization), a company’s security organization can provide context for such technical and scope-focused penetration tests. As we go through a report and align it with business risks, we should keep asking “why/what.” Why is this finding relevant? Why is this issue so critical? What is the root cause that allowed an asset to be exploited? What are the environmental factors that are part of this scenario?

Through connecting the technical report to the specific business environment and contextualizing it with the actual valuation of assets/risks, we end up with a highly enriched map. This map contains the original report findings along with additional elements that the penetration test did not address, such as processes, out-of-scope-systems, environmental elements, compensating controls and awareness/practices.

How does this play out? Consider a report finding related to the ability to steal credentials, escalate them to a higher privilege and gain access to a sensitive server. A penetration tester may suggest addressing this through technical means (hardening the server to allow access from predetermined hosts, locking down workstations to reduce privilege escalation effectiveness and adding MFA). However, this could also be amended by enforcing two-person authorization for critical actions (breaking the ability to abuse a compromised account), using password managers (reducing the chances of reused or guessable passwords) or even increasing logging visibility (to provide the SOC with insights on privileged activities through the systems). These remediations are less likely to turn up on a penetration test report; however, they are just as effective — if not more so — as the classic ones that only address the common technical aspects of the issue.

By no means should this be read as an encouragement to ignore the report findings from a penetration test, but rather, it should compel you to enhance them within the business context and make them more applicable for your organization. Many of these tests are done through narrow scoping, so as you get more proficient in contextualizing the results, you’ll be able to work more closely with your penetration testers and guide them to provide tests and results that are attuned to your business’ needs.

By taking these penetration test results not as gospel but as a starting point, and by accounting for the specific business environment and risks (and, yes, the culture, practices and tolerance), security organizations can provide more effective and longer-lasting mitigations to security gaps, perhaps even lowering the severity of seemingly critical issues to negligible ones. By being a true partner in the organization, instead of limiting ourselves to a technical watch-guard role, we can more easily earn acceptance and cooperation — not only from the technical teams we work with, but also from business leadership.