Tag Archives: security policy

Elastic Permissions

Over the past two years my colleagues and friends have heard me talk about Elastic Permissions, and at some point I started hearing other people mention the term (yay for planting the seeds through consistently using a new term…). So I figured – for the sake of clarity, let’s put this out there for posterity.

The goal of applying the Elastic Permission model is to reduce the effective attack surface for an organization.

Elastic Permissions (can you tell I am an Amazon AWS veteran? 😉 ) is a concept where permissions are constantly and dynamically evaluated against the actual use of the granted permissions, and reduced so that they match said usage. This requires a few phases: mapping, measuring, and elasticity.

Mapping – in order to apply Elastic Permissions effectively, we first need a system to identify all the users, accounts, roles, and assets, as well as all the relations between them. Think of a graph where you can traverse between users, their accounts, and the assets (whether data or functional/compute) using the effective set of permissions that tie them together. Now apply that to your organization, across all systems and platforms (including across platforms – for example IaaS, SaaS, etc…).
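
To make this more concrete, here is a minimal sketch (in Python, using networkx) of what the mapping phase could look like. The inventory source, the node and edge names, and the sample grants are all illustrative assumptions, not any specific product’s data model.

```python
import networkx as nx  # pip install networkx

def build_permission_graph(grants):
    """grants: iterable of (user, role, asset, permission) tuples pulled from
    whatever inventory sources you have (IaaS, SaaS, IdP, ...)."""
    g = nx.MultiDiGraph()
    for user, role, asset, permission in grants:
        g.add_node(user, kind="user")
        g.add_node(role, kind="role")
        g.add_node(asset, kind="asset")
        g.add_edge(user, role, permission=permission)   # user can assume role
        g.add_edge(role, asset, permission=permission)  # role grants access to asset
    return g

# Traverse from a user to every asset they can effectively reach.
grants = [
    ("alice", "role/dev", "s3://payroll-data", "s3:GetObject"),
    ("alice", "role/dev", "ec2:i-0abc123", "ec2:StartInstances"),
]
g = build_permission_graph(grants)
print([n for n in nx.descendants(g, "alice") if g.nodes[n]["kind"] == "asset"])
```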

Measuring – once you have the graph, the actual use of every permission should be measured. This needs to be recorded in a way that reflects usage over time, in order to identify patterns of seasonality, volume, and misuse/under-use. In essence, this phase applies weights to the graph – the higher the weight, the more use that set of permissions gets.
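
A rough sketch of the measuring phase, assuming access-log events come in as simple dictionaries with a user, asset, permission, and timestamp – the event shape and the time bucketing are illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime

def weigh_usage(events, bucket="month"):
    """Count how often each (user, asset, permission) tuple is actually exercised,
    bucketed over time so seasonal patterns stay visible."""
    usage = defaultdict(lambda: defaultdict(int))
    for e in events:
        ts = datetime.fromisoformat(e["ts"])
        period = ts.strftime("%Y-%m") if bucket == "month" else ts.strftime("%Y-W%W")
        usage[(e["user"], e["asset"], e["permission"])][period] += 1
    return usage

events = [{"user": "alice", "asset": "s3://payroll-data",
           "permission": "s3:GetObject", "ts": "2024-01-15T09:30:00"}]
print(weigh_usage(events))
```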

Elasticity – based on the mapping and the difference between granted and used permissions, the system should revoke access dynamically. This means that unused (or even lightly used) permissions are revoked. Additionally, permissions that have been identified as being used seasonally should be revoked while unused and re-granted in preparation for the usage period (then revoked again). Lastly – since some friction is expected when permissions are incorrectly revoked, the system should offer a way to natively escalate and regain privileges (for example, through an MFA challenge) and feed that outcome back into future revocation decisions for that permission set. Systems should also consider several grades of escalation based on the confidence level of the revocation (from granting access immediately, through a challenge-response to verify the user’s intent, to an out-of-band escalation to a manager/SOC).
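
And a sketch of how the elasticity decision itself could be made, comparing granted permissions against the measured usage from the previous step. The thresholds and the “revoke-and-schedule” action are illustrative assumptions, not a prescription:

```python
def elasticity_decisions(granted, usage, periods, min_uses=1):
    """granted: set of (user, asset, permission) tuples.
    usage: output of weigh_usage(). periods: the time buckets under review."""
    decisions = {}
    for grant in granted:
        counts = usage.get(grant, {})
        total = sum(counts.values())
        active = sum(1 for p in periods if counts.get(p, 0) >= min_uses)
        if total == 0:
            decisions[grant] = "revoke"                # never used: take it away
        elif active < len(periods) // 2:
            decisions[grant] = "revoke-and-schedule"   # seasonal: revoke now, re-grant before the window
        else:
            decisions[grant] = "keep"
    return decisions

# e.g. elasticity_decisions(granted, usage, periods=["2024-01", "2024-02", "2024-03"])
```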

The end result, as stated in the Elastic Permissions goal, is a reduction in the effective attack surface. In my personal experience working with several systems like this, organizations should expect somewhere around a 70-80% reduction, though the numbers will depend on the organization’s complexity (how many platforms are used, and the relationships between them). It also allows the security organization to focus on real incidents, since the elasticity acts as a sort of canary/honeypot: attempts to use revoked permissions stand out.

Two Frameworks For Securing A Decentralized Enterprise

This post was originally published on Forbes

Many modern enterprises no longer operate in a highly centralized manner. Traditionally, cybersecurity in enterprise environments consisted of defining trust boundaries, placing controls over these boundaries, setting standards and policies for the safe and secure handling of data, enforcing said policies and scrutinizing any code/applications that were developed for flaws that may be exploited by adversaries — inside or out.

But now, as part of a decentralized model that optimizes for performance and independence, the kind of control and scrutiny that security organizations once had is becoming irrelevant. This leaves the separate business operations within an enterprise free to choose their own implementation (and sometimes prioritization) of how they run their security.

If the CSO/CISO role was challenging before, now it is even more complex. We need to strike a delicate balance between enforcing some elements and providing guidance or general direction for most others.

As the parent company of multiple independently operated businesses, our approach at Cimpress is to emphasize a shared-security-responsibility model, where each business unit is not only responsible for choosing and operating its technical stack but also its security and risk management. As such, our security organization has two main tasks: providing clear and transparent metrics for security maturity and providing a means for measuring (really, this means quantifying) risk in a way that supports decision making around changing controls. We have chosen to utilize two well-known frameworks to achieve this.

For security maturity metrics, we chose NIST-CSF: the NIST Cybersecurity Framework. This framework provides a consistent and clear indication of security maturity across over 100 categories and subcategories of an organization’s defense capabilities. There are many approaches to using this framework, ranging from self-attested surveys to fully automated and continuously updating platforms. The technique is less of an issue than the ability to create a clear reflection of each business’ maturity levels and, of course, set a minimal or sought-after maturity level (again, this doesn’t have to exist across all the subcategories). This baselining allows businesses to better align themselves with existing policies (which are translated to the minimal required maturity levels) and map out their tactical security gaps.
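
As an illustration of the kind of gap analysis this enables, here is a tiny sketch that compares self-attested maturity scores against a baseline. The 0-4 scale, the chosen subcategories, and the scores themselves are made up for the example:

```python
# Minimum required maturity per CSF subcategory (the "basic hygiene" baseline).
baseline = {"ID.AM-1": 2, "PR.AC-1": 3, "DE.CM-1": 2}

# One business unit's self-attested (or measured) maturity.
business_scores = {"ID.AM-1": 3, "PR.AC-1": 1, "DE.CM-1": 2}

# Gaps: subcategories where the business falls short of the baseline, and by how much.
gaps = {sub: baseline[sub] - score
        for sub, score in business_scores.items()
        if score < baseline[sub]}

print(gaps)  # {'PR.AC-1': 2} -> identity & access control needs work
```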

For our risk measurements, we picked FAIR: Factor Analysis of Information Risk. We use FAIR at the enterprise risk management (ERM) level and combine it with the rest of the ERM functions to ensure relevancy and context for our business leaders. We do not deploy FAIR analysis across a broad range of scenarios; rather, we focus on the top three to five that each business identifies as the most relevant for itself.

The goal of using the framework this way is to get immediate buy-in from business leaders while providing them with the means to make informed decisions about their risks. After identifying the scenarios, we perform a FAIR-based analysis and come up with the expected loss. Behind the scenes, we map the relevant controls to the already-identified security maturity measures.
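
For the curious, here is a deliberately simplified, FAIR-style Monte Carlo sketch for a single scenario. Real FAIR analyses use calibrated estimates and richer distributions; the uniform ranges and the scenario below are made-up inputs for illustration only:

```python
import random

def simulate_annual_loss(freq_min, freq_max, loss_min, loss_max, trials=10_000):
    """Very rough FAIR-style simulation: annual loss = event frequency x magnitude."""
    losses = []
    for _ in range(trials):
        events = random.uniform(freq_min, freq_max)      # loss event frequency per year
        magnitude = random.uniform(loss_min, loss_max)   # loss magnitude per event
        losses.append(events * magnitude)
    losses.sort()
    return {"expected_annual_loss": round(sum(losses) / trials),
            "p90_annual_loss": round(losses[int(trials * 0.9)])}

# Example scenario: "ransomware takes down the order-fulfillment platform".
print(simulate_annual_loss(freq_min=0.1, freq_max=0.5,
                           loss_min=200_000, loss_max=2_000_000))
```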

Beyond providing a more realistic reflection of risk (while shying away from qualitative measures like high/medium or red/green), we also create an immediate feedback loop. At this point, we’ve turned security and risk management into a business problem that’s more “easily” solved through financial measurements of recommended changes and their impact on previously expected losses.

If you are asking yourself, “Great, but where do I get started with this?” I would suggest first defining what a shared security responsibility model would look like in your organization. Where do you draw the line between the core responsibilities of the security organization, and where do you expect other parts of the business to own their share of security?

Once that’s defined, utilizing the different frameworks to facilitate this becomes more natural. We’ve used the NIST-CSF to provide metrics for everyone to gauge their maturity level and, based on the shared responsibility model, understand what areas need to improve. I’d suggest tailoring the use of the framework to your needs. In our specific implementation, we simplified the framework in terms of the number of maturity levels and provided focus on several specific subcategories that we defined as “basic security hygiene.”

Lastly, in order to prioritize the tasks of closing maturity-level gaps, choose a risk model that works for you. In our case it was FAIR, and even then we allowed ourselves to use it in a way that works for us rather than trying to cover all our use cases and scenarios. This allowed us to stay more agile and have an easier adoption period while our businesses were getting used to the methodology and the framework.

At the end of the day, it’s about what works for your organization rather than about sticking to the letter of how a specific framework is structured. And of course, always make sure to build closed-loop feedback cycles in order to facilitate continuous improvement for how your security program works.

Basic is great

Encouraged by the response to my last post (https://www.iamit.org/blog/2018/06/the-ian-amit-spectrum-of-pentesting-efficacy/ for those who missed it), and following up on a couple of recent Twitter/LinkedIn/WhatsApp conversations, I’d like to emphasize the importance of doing basic and simple work (in security, but it probably also applies to everything else).

We are working in a weird industry. The industry encourages unique thinking, contrarian views, and creativity. Guess what? The kind of person who finds themselves “fitting in” is more often than not your typical ‘hacker’ with the stereotypical social baggage that comes with it. It also means (and of course, I’m generalizing) a short fuse and a lack of respect/patience for people who are not as immersed in the cybers as they are, which often creates the scenarios that Phil describes in his post.

Moreover, those of us who have been around the block a couple of times, also know and realize that there is no silver bullet solution to security. We are in it because we realize it is a constantly moving and evolving practice. Because we love the challenge, the changing landscape, and the multitude of domains involved in practicing security properly.

Which gets me to the basics. 

This, and other conversations (the notorious “Cyberberet” WhatsApp channel for the Israeli guys), revolve around the latest and greatest [insert cyber-marketing-buzz/fud] solution. So here is my old-man eye-roll for you…

I earned the right to roll my eyes. 20+ years and counting 😉

The reason being, I still see a lot of organizations trying to decipher how they are going to integrate said [insert cyber-marketing-buzz/fud] product, while failing to have a basic security program.

They often don’t have one because they never bothered to perform a proper threat modeling exercise where they “dare” ask their executive leadership what they care about (i.e., what they are afraid of). I’ve seen companies invest huge $ in fancy SIEM solutions while not having a basic authentication mechanism for their employees (dare I say MFA?). Then there’s the inability to maintain a somewhat consistent asset inventory, which comes with the usual excuse: all this cloud stuff is very dynamic, we don’t have racked servers like in the olden days. To which my rebuttal is: all this cloud stuff makes it easier to track your assets. You are just lazy and incompetent.

Compound that with an approach I sadly see some of my colleagues take, which says: forget about all those products, you are going to get breached anyway. You need to embrace the [other-fud-buzz-cyber] approach where attackers are [pre-cog / deceived / lost / identified before they get to you / hacked_back / …]. Hmmmm, let me guess – you must have a company operating in that space, right?

So no. Neither precog, nor deception, nor hacking back will save you. I’ve played attacker against these things in the past, and (shocked face) always won against them. What you should be doing is getting back to basics.

You know – the stuff they teach in intro to infosec 101. Layered security. Logging, monitoring, and anomaly detection (behavioral – after baselining and such). Getting the basics of authentication and authorization done properly. Having a patch management practice coupled with a vulnerability scanning one. Knowing what your threat model is. What assets you are protecting. Which you need to prioritize over others. What your threat landscape is (and no – no matter how fancy or ninja/8200/NSA the threat feed is, it most likely has zero applicability to your threat landscape). What controls you have in place (technological and others) and how effective they are.
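
To show how un-fancy the basics can be, here is a toy sketch of “baseline, then detect anomalies” applied to daily login counts. The threshold and the data shape are illustrative assumptions:

```python
import statistics

def build_baseline(daily_counts):
    """daily_counts: {user: [logins_day1, logins_day2, ...]} from your own logs."""
    return {user: (statistics.mean(c), statistics.pstdev(c) or 1.0)
            for user, c in daily_counts.items()}

def is_anomalous(baseline, user, todays_count, threshold=3.0):
    mean, stdev = baseline.get(user, (0.0, 1.0))
    return abs(todays_count - mean) / stdev > threshold

baseline = build_baseline({"alice": [4, 5, 6, 5, 4], "bob": [1, 0, 1, 2, 1]})
print(is_anomalous(baseline, "bob", 25))  # True -> worth a look
```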

[Image: polishing a turd]

“Playing” with these basic elements can, and will, have a huge impact on your security posture. Much more than trying to fit a fancy “cyber” solution without any context for what you are trying to get done (see the image to the left…). But you know what – don’t take my word for it. Ask any competent pentester who’s faced both a properly executed security program and, for comparison, one of the latest buzz-worthy products. You’ll get the same response: it’s harder to overcome the security program, while dealing with a magic product requires a one-time effort to render it moot.

Now go back to the basics. There’s no shame in it, you’ll get to play with the fancy stuff a bit later, and when you do, make sure to come up with what YOU need from it rather than get starry-eyed while listening to the sales folk try to wow you 😉

the art of not thinking about elephants

We have been quite busy here at Security Art in the last few weeks (as the blog posting frequency suggests), but I figured I would provide a quick preview of some of the elements we have been working on in terms of risk management.

Now, I suppose you have read Yoram’s earlier post about risk-informed decision making, so I won’t elaborate on this for too long. Nevertheless, we are often posed with the question “so how does this apply to my organization?” This usually comes from someone who did spend a lot of time and resources on the technical aspects of their network security. The answer is usually “let’s take a look at how you do your business”, which is what we usually do anyway…

Having that in mind, we set off to investigate in a few recent engagements how some of our clients would actually fare against an informed and skilled attacker commissioned to break into the organization. These engagements were prompted by a few incidents in which the organization in question was basically left in the dark, as they were basing their forensics on the tools that commercial security vendors provided them with, and not much more than that (remember the ever-expressive “generic” detection from your AV vendor… Ever wonder what it really means?).

And so, with a network to steal data from as the target, we accepted the challenge. The only caveat: the network was disconnected. For real. No Internets…

But (and there’s always a “but”), there was a voice network that went out through the PSTN to provide the office with telephony connectivity. Bingo. Ever seen a complete separation of the VoIP network and the internal network? Yeah, neither have I. To make a long story short, we managed to get the data out in the most old-fashioned way possible… we beeped it away (actually transmitted it over a VoIP connection using a custom-written simulated trojan that encoded the data into audible voice signals and left them as a message on one of our voice mailboxes). Done deal. (The PoC code can be found here if you’d like to play with some of the concepts.)
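
For those curious about the general idea (this is not the actual PoC linked above), here is a toy sketch of encoding bytes into audible tones that could survive a voice channel. The frequencies, symbol length, and WAV output are illustrative assumptions:

```python
import math, struct, wave

SAMPLE_RATE = 8000          # typical narrowband telephony rate
SYMBOL_SECONDS = 0.1        # 100 ms per nibble -> slow, but it survives a voice call
BASE_FREQ, STEP = 600, 60   # nibble value n maps to BASE_FREQ + n * STEP Hz

def encode_bytes_to_wav(data: bytes, path: str) -> None:
    """Encode each nibble of the payload as a short audible tone and write a WAV file."""
    samples = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_FREQ + nibble * STEP
            for i in range(int(SAMPLE_RATE * SYMBOL_SECONDS)):
                samples.append(int(16000 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

encode_bytes_to_wav(b"exfil test", "payload.wav")
```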

Bottom line – always remember that when you think of solutions, you should not be “blinded” by what’s available out there and the accompanying marketing materials. That’s basically the “pink elephant” that vendors tell you not to think about when pitching their solutions. You usually end up thinking about it anyway (and buying the product thinking that you’ll never see that elephant again because you just bought the best “anti-elephant” solution…).

Always challenge the way you think of networks and processes (we did have to get the code INTO the network somehow… but that’s for another post 🙂 ), and ALWAYS test your assumptions and protections. You’d be surprised how easy it may be to outmaneuver you just because you were boxed into taking care of just a single aspect of the security (and yes – that even applies to CIOs, CISOs, etc…).

Identity crisis

Here’s a common question I get asked a lot: “What technology should I use to secure my server/network/[some technology]?”

The question is usually presented by someone who’s in charge of “Security” in an organization. Now, I wouldn’t have had a problem with this if this was a technician, or a pen-tester of sorts, but I get really nervous when the CISO/CIO/Security manager is the one asking.

I think that this question is highly inappropriate for two reasons:

  1. You should not be looking for “technology”. Buying a product is not going to make you more secure or less secure.
  2. You should not be trying to protect a technology. Your servers, networks, routers, PCs, etc… are not the focus of information security. The information is…

Having worked with senior management – sometimes as an advisor/consultant, and sometimes as a “virtual CISO” – I know that this is not what we expect the CISO or security manager to ask. We expect business savvy, we expect an understanding of what the information assets are, what the information critical paths are, who owns the information, and what the impact of every asset on the business is. We expect that whoever is in charge of securing an asset clearly understands how it fits into the grand scheme of things, and we expect them to take into account the potential damage related to each of these assets (in terms of losing it, having it fall into the wrong hands, etc…).
For me (or us when talking as management) this is the only way to approach security. Funny how things get a little unclear when all you thought you needed to know was which vendor/product fits where in your topology, huh?

What strikes me as most peculiar is the fact that a lot of these security “professionals” find themselves in a self-proclaimed identity crisis, having to deal with business requirements and a financial understanding of how the business operates. And the weirdest thing is that they often choose to fall back to what they “know” best – the technology side of things. Definitely not the way to make a move…

I’m really hoping that all this preaching of “know thyself before you know your enemy” will help somehow, because right now, unfortunately, the situation at hand only brings us more business (not that I’m complaining). But seriously now – technology is fine and cool, but having the aptitude to know where it fits, not at an architectural level but from a business perspective, is the key to what we do. Get back to the drawing board, erase the network topology, and start drawing the business one!