Tag Archives: risk management

The Ian Amit Spectrum of Pentesting Efficacy

It’s been a while since I posted (duh), but recently I’ve had something brewing in my mind that appeared to not have been clearly discussed before, so here goes.

I’ve been seeing some discussions and ambiguity around pentesting, vulnerability assessment, and red teaming (again – no huge shocker for those of us in the industry). As much as the “look at our shiny new red team” marketing BS coming from big companies (read: “we do pentesting, we have a new name for it so you can pay more”) pisses me off, what bugs me even more is the lack of clarity as to where and when pentesting can/should be used, and through which means.

I offer you this – my simplified spectrum of pentesting efficacy.

In short, here’s how this works: first identify the actual need for the test. There should be three categories as follows:

  1. Testing because you have to (i.e. compliance). PCI is a good example here. It’s something you can’t avoid, and it doesn’t really provide any real value to you (because of the way it is structured – and, as we all know, compliance/regulation has nothing to do with security, so you might as well treat the testing as a checkbox exercise).
  2. Testing because you want to make sure that your controls are effective, and that your applications are built properly. This is where the “meat” of your pentesting should come into play. This is where you see direct value in identifying gaps and fixing them to adjust your risk exposure and tolerance (based on your threat model and risk management, which you should have, or if you don’t, just go ahead and quit your job).
  3. Testing to see how you fare against an actual adversary. Bucket 2 above was fairly technical in its scope and nature. This bucket is threat focused – more specifically, threat actor focused. Your risk management strategy should identify the adversaries you are concerned about, and here you want to see how you fare against them in a “live fire” scenario.

Once you have these, you are almost done. Here’s how you approach “solving” for each category:

  1. Compliance: find the cheapest vendor that’ll check off the box for you. Don’t be tempted by cheap marketing tricks (“we red team”, “we thoroughly test your controls”, “we bring in the heavy hitters who have spoken at BlackHat and DEFCON and the super-underground conferences”). Remember – you are getting no security value here, so shop around and see who will tick the box for you. Be heavy handed on the reporting and remediation: if you are doing things correctly, the scope should be very minimal (remember – just enough to cover the compliance requirements) and you should easily have these covered as part of your standard security practice.
    Also – there is no point in putting any internal resources into this, since it won’t challenge them; it is menial work that should be outsourced.
  2. Controls and Applications: this is where you should invest in your own resources. Send your security engineers to training. Build up an SDLC that includes security champions and involves the dev teams (DevSecOps anyone?). This is where you should see the most value out of your investment and where your own resources are better equipped to operate as they are more familiar with your operating environment, threats, and overall risk prioritization. In this category you’ll also sometimes include testing of 3rd parties – from supply chain to potential M&As. Use your discretion in choosing whether to execute internally or engage with a trusted pentest company (make sure you utilize the Penetration Testing Execution Standard when you do).
  3. Adversarial Simulation: this is where you shift from pentesting to red teaming. The distinction is clear: pentesting focuses on one domain (technical), and sometimes claims to be a red team when phishing is involved (social). A red team engagement covers three domains (technical, social, physical), and more importantly, the convergence of two or three of them (social/physical, technical/social, technical/physical, or all three). This is where you should engage with a company that can actually deliver on a red team (again – use the PTES sparingly in helping you scope and set expectations for the engagement), and that can work with you to identify what kind of adversary they should be simulating, how open or closed the intelligence gathering should be, how engaged they will get with targets, and to what degree you are OK with potential disruptions. This is where you’ll see value in affirming your security maturity across several domains, and in seeing how these fare against your threat communities. It will also enable you to more clearly align your investments in reducing exposure and increasing your controls, monitoring, and mitigations for actual loss scenarios.

I did mention vulnerability assessments initially, and if you made it this far you noticed they’re not on the spectrum. Well, they kind of are – vulnerability assessment should be part of all of the engagement types, rather than an independent engagement in its own right. Hint – never outsource this. It’s extremely cheap to run VA yourself, and there are plenty of tools/programs/services that are highly effective at continuously performing VA for you, ranging from free to very cheap. Don’t be tempted to pay any premium for it.

That’s all for now – hope this made some sense, and helped you in prioritizing and understanding how pentesting (and red teaming) are best applied through your security program!

So, there’s this new (for me) LinkedIn “publishing” thing that prompted me to try it as I was posting a semi-rant there.

Let’s see how well that works out:

https://www.linkedin.com/today/post/article/20140531211959-1510435-security-and-maturity-beating-the-averages?trk=prof-post

Information Security, Homeland Security, and finding someone to pin it on

In the recent spree of cyber attacks on a plethora of US and international government and federal establishments, a lot of speculation is being thrown around as authorities try to find the threat community behind them.

As computer systems take the reins of most of our daily lives – from transportation, through financial systems, and up to government facilities that provide research, analysis, and even the critical infrastructure supporting what we now know as “modern life” – attackers find it easier and easier to poke at such systems, as their security is left mostly as an afterthought. When the relevant organizations approach the forensics and remediation of such breaches, most of the focus goes first to recovering any lost data, and then to identifying not the root cause of the breach, but the attacker.

As the blame game runs amok, the actual privacy and confidentiality of the core (digital) elements of our modern society are left up for grabs. When groups such as LulzSec, Anonymous, and any other book-reading, internet-browsing, anonymous-under-several-proxies infosec warriors find it as easy as running a few scripted tools against their target list to find easy-to-exploit issues, we face a very tough job of figuring out who to blame.

Nevertheless, blame by itself (or attribution, as we like to refer to it in the more politically correct industry circles) won’t help us mitigate such attacks. It may be helpful for organizations to have someone to pin the “adversary” tag on – especially defense/government/federal institutions, whose budgets can be manipulated more easily under the threat of a foreign nation. But when looking for the ability to actually come up with evidence to support such claims, we are often left empty handed, behind a thick smokescreen of assumptions, prejudice, and incompetence.

On the other hand, when viewed from a strategic/political stance, it can be easily seen how a string of breaches in facilities that share a common ground (such as the one presented by Rafal Los of HP in his great article “DOE Network Under Siege”) can be attributed more to a nation state than to a fun-seeking internet-bored group.

This simple reality – intricate connections that are often only visible when looking at the bigger picture of security incidents – allows state-sponsored attacks to happen without much scrutiny, and without the ability to thwart them from a more strategic position.

The bottom line remains the same – chasing after excuses and online enemies won’t get us to a more secure state. Investing in proper education, training, exercises, people and (lastly) technologies, will. Instead of trying to investigate breaches from an attribution standpoint, we should be investigating root causes to the deepest level (i.e. not stopping at “a 0-day vulnerability we didn’t know of”, or the bit-bucket of “It’s an APT”) that involves how we manage our electronic infrastructure and how we keep track of what’s going on in it after the initial setup is complete and the contractors/integrators pack up their people and leave.

The curious case of Dropbox security

After the disclosure of the host_id authentication issues that plagued the popular Dropbox service last week, a new issue came up: Dropbox can detect whether the files you are trying to upload to their cloud already exist there, and will “save you the bandwidth” of uploading them if it already has a copy in hand.

So – the Dropbox client probably checks for the hash of the file being uploaded against a list of hashes of existing files that are already stored on the cloud. It may also be that the files stored online are encrypted. So… what’s the big deal?

One has to remember that when using a service such as Dropbox (and I’m an avid user myself), you clearly do not have full control over the material you upload, and the server-side encryption is only a fraction of the protection you may be seeking. There is no key management visible to the user. There is no indication that each client you use has its own key, or that your clients share keys between them – and even if such keys exist, Dropbox is the one managing them. This gives Dropbox the ability to decrypt your data at any given time. Subsequently, it also gives them the ability to hand you the file of another user if you tried to upload it yourself (hence saving you the bandwidth) – for example, when you want to access it from a client which does not have the synced copy of your account (or through the web interface). They just decrypt the other user’s file and serve it back to you. After all – you have the same one back on your home/work/whatever PC (remember, you showed “proof” by providing the hash before).
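The client-side deduplication described above can be sketched roughly as follows. This is a toy model, not Dropbox’s actual protocol – `FakeServer`, `sync`, and the other names are hypothetical, for illustration only:

```python
import hashlib


class FakeServer:
    """In-memory stand-in for the storage backend (illustration only)."""

    def __init__(self):
        self.blobs = {}       # hash -> file contents, across ALL users
        self.account = set()  # hashes linked to this user's account

    def has_hash(self, digest):
        return digest in self.blobs

    def link_to_account(self, digest):
        self.account.add(digest)

    def upload(self, data, digest):
        self.blobs[digest] = data
        self.account.add(digest)


def sha256_of(data):
    return hashlib.sha256(data).hexdigest()


def sync(data, server):
    """Client-side dedup: only send bytes the server hasn't seen yet."""
    digest = sha256_of(data)
    if server.has_hash(digest):
        # The server already stores a blob with this hash -- possibly
        # uploaded by a *different* user. Link it and skip the upload.
        server.link_to_account(digest)
        return "deduplicated"
    server.upload(data, digest)
    return "uploaded"
```

Note the design consequence: because the hash check runs against blobs from all users, the second “upload” of a given file costs almost nothing – and proving you have the file amounts to nothing more than presenting its hash.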

Which brings us back to reality – what are we really exposed to here in terms of risk?

  1. Dropbox has the ability to access the contents of my files.
  2. If I can come up with a hash of a file that I know someone else has, and that file may be confidential in some way, I can potentially claim to upload the same file, and then download the real one (as I don’t really have the original) from another client or through the web interface.

Clearly, the media attention to point 1 is important – but still not really interesting, as people should have had a clue when they sent their files to the “cloud”.

However, point 2 makes for a more interesting argument… It would be interesting to see when the first “hack” comes along that starts “uploading” files (by hacking the client protocol – hint: start here, here, and here) based on hashes alone, and then downloading them as if from another client to see what you get (if they were already “cached” on the Dropbox cloud). Now that would be an interesting little experiment…
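The hypothetical “hack” boils down to very little code. Again a toy model – `STORE`, `claim_upload`, and `fetch` are made-up names standing in for the real client protocol:

```python
import hashlib

# Toy model of a dedup service that trusts client-supplied hashes.
# STORE represents blobs already uploaded by *other* users.
_victim_file = b"someone else's confidential report"  # hypothetical content
STORE = {hashlib.sha256(_victim_file).hexdigest(): _victim_file}


def claim_upload(digest):
    """The attacker 'uploads' a file by hash alone; no bytes are sent.
    The server answers 'already have it' and links it to the account."""
    return digest in STORE


def fetch(digest):
    """...after which the attacker syncs 'their' file from another
    client (or the web interface) and receives the real contents."""
    return STORE[digest]
```

If you can guess or otherwise obtain the hash of a confidential file (predictable documents, leaked hash lists), a `claim_upload` followed by a `fetch` hands you contents you never actually had.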

Happy hacking!

SCADA, control systems and security – not necessarily enemies

Insights from the NISA International SCADA Security Forum conference (NISA stands for National Information Security Authority, which is a division of the Israeli Security Agency).

We all know that SCADA has been considered a security nightmare for a long time. Admittedly, I only have short experience with such systems and control systems in general (just short of two years), but the topic is fascinating. The main challenge in securing control systems, from my point of view, is the ability to “connect” with the domain experts and understand the systems and processes properly.
Unfortunately, we as a security community are far from that (at least based on what I have seen over the past couple of days at the conference). The rush to force traditional IT solutions and ways of thinking onto control systems just does not work: from “learning” firewalls that monitor the industrial control protocols, to systems designed to ADD complexity to the threat model by layering network and Internet related threats into SIEM mechanisms and simply bolting the “SCADA” data on top. These are all solutions that are bound to fail, as they do not understand the actual needs and operational state of mind of control systems engineering.

If we take a new and unbiased look at the kinds of data and processes involved in such systems, we (as in the security community) would be thrilled to learn that there are a lot of untapped intelligence resources that would substantially help us build more appropriate and relevant detection and alerting mechanisms. Trying to force an IT solution on these would be an exercise in fitting a square peg into a round hole, and as exciting as that may be, we all know what the outcome would be.

To sum things up – just as you would not pretend to know the environment of a financial or commercial customer when approaching the task of securing it, control systems pose an even more distinct challenge. Open up, keep the critical thinking, and most of all LISTEN. You’ll find out that long before you can start pushing the “cyber” agenda, you have much to work with just from the basic data and processes already at hand, and that there is a lot of value a security practitioner can bring to such an organization.

P.S. I’m specifically refraining from addressing any product or vendor as I do not think it’s fair to “out” them (however big or small they may be) as these have obviously been rushed to the market in an attempt to get an initial foothold in the industry. Nevertheless, I do encourage such vendors to do some more homework, and work WITH the industry rather than just try to capitalize on their existing expertise in IT and “cyber”.