Category Archives: Opinion

Basic is great

Encouraged by the response to my last post (https://www.iamit.org/blog/2018/06/the-ian-amit-spectrum-of-pentesting-efficacy/ for those who missed it), and following up on a couple of recent Twitter/LinkedIn/WhatsApp conversations, I’d like to emphasize the importance of doing basic and simple work (in security, but it probably also applies to everything else).

We are working in a weird industry. The industry encourages unique thinking, contrarian views, and creativity. Guess what? The kind of people who find themselves “fitting in” are more often than not your typical ‘hacker’ with the stereotypical social baggage that comes with it. It also means (and of course, I’m generalizing) a short fuse, and a lack of respect or patience for people who are not as immersed in the cybers as they are, which often creates the scenarios that Phil describes in his post.

Moreover, those of us who have been around the block a couple of times also know that there is no silver-bullet solution to security. We are in it because we realize it is a constantly moving and evolving practice. Because we love the challenge, the changing landscape, and the multitude of domains involved in practicing security properly.

Which brings me to the basics.

This, and other conversations (the notorious “Cyberberet” WhatsApp channel for the Israeli guys among them), keep revolving around the latest and greatest [insert cyber-marketing-buzz/fud] solution. So here is my old-man eye-roll for you…

I earned the right to roll my eyes. 20+ years and counting 😉

The reason being, I still see a lot of organizations trying to decipher how they are going to integrate said [insert cyber-marketing-buzz/fud] product, while failing to have a basic security program.

They often don’t have one because they never bothered to perform a proper threat modeling exercise in which they “dare” ask their executive leadership what they care about (i.e. what they are afraid of). I’ve seen companies invest huge $ in fancy SIEM solutions while not having a basic authentication mechanism for their employees (dare I say MFA?). Or the inability to get a somewhat consistent asset inventory and tracking, which comes with the usual excuse – all this cloud stuff is very dynamic, we don’t have racked servers like in the olden days. To which my rebuttal is – all this cloud stuff makes it easier to track your assets. You are just lazy and incompetent.
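To make the point concrete, here is a minimal asset-inventory sketch (assuming AWS and the boto3 SDK – swap in your own provider’s API) showing how little code it takes to enumerate what you are actually running:

    # Minimal asset-inventory sketch: list every EC2 instance in every region.
    # Assumes AWS credentials are already configured for boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

    for region in regions:
        client = boto3.client("ec2", region_name=region)
        for reservation in client.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                print(region,
                      instance["InstanceId"],
                      instance["State"]["Name"],
                      tags.get("Name", "<untagged>"))

Dump that into a spreadsheet on a schedule and you already have more of an asset inventory than the excuse above admits to.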

Compound that with an approach I sadly see some of my colleagues take, which says – forget about all those products, you are going to get breached anyway; you need to embrace the [other-fud-buzz-cyber] approach where attackers are [pre-cog / deceived / lost / identified before they get to you / hacked_back / …]. Hmmmm, let me guess – you must have a company operating in that space, right?

So no. Neither precog, nor deception, nor hacking back will save you either. And I’ve played attacker against these things in the past, and (shocked face) always won against them. What you should be doing is getting back to basics.

You know – the stuff they teach in intro to infosec 101. Layered security. Logging, monitoring, and anomaly detection (behavioral – after baselining and such). Getting the basics of authentication and authorization done properly. Having a patch management practice coupled with a vulnerability scanning one. Knowing what your threat model is. What assets you are protecting. Which you need to prioritize over others. What your threat landscape is (and no – no matter how fancy or ninja/8200/NSA the threat feed is, it most likely has zero applicability to your threat landscape). What controls you have in place (technological, and others) and how effective they are.
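Since baselining keeps coming up, here’s a minimal sketch of what behavioral anomaly detection boils down to once you actually collect logs (toy numbers, plain Python, nothing vendor-specific):

    # Toy baseline-and-alert sketch: flag a day where failed logins deviate
    # wildly from the historical mean. Hypothetical data; feed it your own logs.
    from statistics import mean, stdev

    failed_logins_per_day = [12, 9, 15, 11, 10, 13, 14, 8, 12, 11]  # baseline window
    today = 57

    baseline_mean = mean(failed_logins_per_day)
    baseline_stdev = stdev(failed_logins_per_day)

    # Anything more than 3 standard deviations above the baseline gets flagged.
    if today > baseline_mean + 3 * baseline_stdev:
        print(f"Anomaly: {today} failed logins vs. baseline {baseline_mean:.1f} +/- {baseline_stdev:.1f}")

The statistics aren’t the point; the point is that none of this requires a “next-gen” anything, just logs you actually collect and look at.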

[Image: polishing a turd]

“Playing” with these basic elements can, and will, have a huge impact on your security posture. Much more than trying to fit a fancy “cyber” solution in without any context for what you are trying to get done (see the image above…). But you know what – don’t take my word for it. Ask any competent pentester who’s faced both a properly executed security program and, for comparison, one of the latest buzz-worthy products. You’ll get the same response: it’s harder to overcome the security program, while dealing with a magic product requires a one-time effort to render it moot.

Now go back to the basics. There’s no shame in it, you’ll get to play with the fancy stuff a bit later, and when you do, make sure to come up with what YOU need from it rather than get starry-eyed while listening to the sales folk try to wow you 😉

The Ian Amit Spectrum of Pentesting Efficacy

It’s been a while since I posted (duh), but recently I’ve had something brewing in my mind that appeared to not have been clearly discussed before, so here goes.

I’ve been seeing some discussions and ambiguity around pentesting, vulnerability assessment, and red teaming (again – no huge shocker for those of us in the industry). However, as much as the “look at our shiny new red team” marketing BS coming from big companies (read: “we do pentesting, we have a new name for it so you can pay more”) pisses me off, what bugs me even more is the lack of clarity as to where and when pentesting can/should be used, and through which means.

I offer you this – my simplified spectrum of pentesting efficacy.

In short, here’s how this works: first identify the actual need for the test. There should be three categories as follows:

  1. Testing because you have to (i.e. compliance). PCI is a good example here. It’s something you can’t avoid, and it doesn’t really provide any real value to you (because of the way it is structured, and as we all know, compliance/regulation has nothing to do with security, so you might as well ignore it).
  2. Testing because you want to make sure that your controls are effective, and that your applications are built properly. This is where the “meat” of your pentesting should come into play. This is where you see direct value in identifying gaps and fixing them to adjust your risk exposure and tolerance (based on your threat model and risk management, which you should have, or if you don’t, just go ahead and quit your job).
  3. Testing to see how you fare against an actual adversary. Bucket 2 above was fairly technical in its scope and nature. This bucket is threat focused. More specifically – threat actor focused. Your risk management strategy should identify the adversaries you are concerned about, and here you want to see how you fare against them in a “live fire” scenario.

Once you have these, you are almost done. Here’s how you approach “solving” for each category:

  1. Compliance: find the cheapest vendor that’ll check the box for you. Don’t be tempted by cheap marketing tricks (we red team, we thoroughly test your controls, we bring in the heavy hitters who have spoken at BlackHat and DEFCON and the super-underground conferences). Remember – you are getting no security value here, so shop around and see who will tick the box for you. Do be heavy-handed on the reporting and remediation, because if you are doing things correctly, the scope should be very minimal (remember – just enough to cover compliance requirements) and you should easily have the findings covered as part of your standard security practice.
    Also – there’s no point in putting any internal resources into this, since it won’t challenge them and it’s menial work that should be outsourced.
  2. Controls and Applications: this is where you should invest in your own resources. Send your security engineers to training. Build up an SDLC that includes security champions and involves the dev teams (DevSecOps anyone?). This is where you should see the most value out of your investment and where your own resources are better equipped to operate as they are more familiar with your operating environment, threats, and overall risk prioritization. In this category you’ll also sometimes include testing of 3rd parties – from supply chain to potential M&As. Use your discretion in choosing whether to execute internally or engage with a trusted pentest company (make sure you utilize the Penetration Testing Execution Standard when you do).
  3. Adversarial Simulation: This is where you shift from pentesting to red teaming. The distinction is clear: pentesting focuses on one domain (technical), and sometimes claims to be a red team when phishing is involved (social). A red team engagement covers three (technical, social, physical), and more importantly, the convergence of two or three domains (social/physical, technical/social, technical/physical, or all three). This is where you should engage with a company that can actually deliver on a red team (again – use the PTES sparingly in helping you scope and set expectations for the engagement), and that can work with you to identify what kind of adversary they should be simulating, how open or closed the intelligence gathering should be, how engaged they will get with targets, and to what degree you are OK with potential disruptions. This is where you’ll see value in affirming your security maturity across several domains, and how these fare against your threat communities. This will also enable you to more clearly align your investments in reducing exposure and increasing your controls, monitoring, and mitigations for actual loss scenarios.

I did mention vulnerability assessments initially, and if you made it this far you noticed they’re not on the spectrum. Well, they kind of are – vulnerability assessment should be part of all of the engagement types rather than an independent engagement of its own. Hint – never outsource this. It’s extremely cheap to do VA yourself, and there are plenty of tools/programs/services, ranging from free to very cheap, that are highly effective at continuously performing VA for you. Don’t be tempted to pay any premium for it.
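For what it’s worth, the poor man’s version of “continuous VA” is just a scheduled scan plus a diff – a minimal sketch (hypothetical scope and file names, with nmap standing in for whatever free scanner you prefer):

    # Toy "continuous VA" sketch: run a scheduled nmap scan and diff it against
    # the previous run so newly exposed services show up. Hypothetical scope/paths.
    import subprocess
    from pathlib import Path

    targets = ["10.0.0.0/24"]  # hypothetical scope
    current = Path("scan_current.txt")
    previous = Path("scan_previous.txt")

    if current.exists():
        current.replace(previous)  # keep the last run around for comparison

    subprocess.run(
        ["nmap", "-sV", "--open", "-oN", str(current)] + targets,
        check=True,
    )

    def meaningful(path):
        # Ignore nmap's comment lines (timestamps) so only real changes alert.
        return [l for l in path.read_text().splitlines() if not l.startswith("#")]

    if previous.exists() and meaningful(previous) != meaningful(current):
        print("Scan results changed since the last run - go look at what's new.")

Put that (or any of the free/cheap scanners) on a scheduler and you’ve covered the VA piece without paying a premium for it.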

That’s all for now – hope this made some sense, and helped you in prioritizing and understanding how pentesting (and red teaming) are best applied through your security program!

When great ideas go to the wrong places

Or: why attribution is not a technical problem.

TL;DR: hacking is an art and a science; computer attacks (“cyber” these days) are only one manifestation of an aggressor, and one that carries very few traits that can be traced back to its origin. Relying on technical evidence alone is not enough to apply attribution, and when it is done anyway, attackers can use it to deflect attribution onto other actors.

Context: Experts, Microsoft push for global NGO to expose hackers

So, apparently, some really smart people at the RAND Corporation and Microsoft have decided that they are going to solve the world’s computer-borne attack problems by creating a new global NGO to unmask and apply attribution to hacking incidents. They claim the organization will be responsible for authoritatively publishing the identities of attackers behind major cyber attacks.

Which is really cute when you think about it – a bunch of brainiacs (and Microsoft people) sit around and analyze network, storage, and memory dumps to trace attacks back to their origins. Sounds like a really great service, which can be used by companies and governments to trace back who attacked them and act on it (either by suing, or by means of diplomatic recourse).

The only problem is that the attribution game is not won on technical merit alone. And guess what? Attackers know that very well. Even the US government knows that (or at least the organization responsible for launching such attacks does), and its operators are trained to study other attackers’ traits and tactics so they can replicate them in their own attacks – hence throwing off attribution if/when the attacks are detected.

The reality of it is that companies are often hired to provide incident response and forensics, and in a rush, and under pressure, to give value to their clients, they come up with attribution claims based on technical merit alone. Cyrillic words will point the blame at Eastern Europe (RUSSIA!). Chinese characters in a binary will lead to claiming Chinese hackers are behind an attack. An Iranian IP address linked to a command and control server that trojans connect to will point to an Iranian government operation. Which is all a big steaming pile of horse feces, because everyone who’s been on the offense in the last couple of decades (probably more – I can only attest to my own experience) also knows that. And can easily create such traces in their attack. Furthermore, for the ones following at home thinking “oh, they know that I know…” – yes, we play that game too, and attackers are also “nesting” their red herrings to trace back to several different blamed parties, and it all depends on how deep the forensic analyst wants to dive in.
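Just to illustrate how cheap those “clues” are to manufacture, here’s a toy sketch (entirely hypothetical strings and file name) of planting misleading language artifacts into a binary blob – exactly the kind of thing a rushed analyst later runs strings over and calls attribution:

    # Toy sketch: plant misleading "attribution" artifacts in a payload.
    # Entirely hypothetical; the point is that strings, paths, and timestamps
    # inside a binary are whatever the attacker wants them to be.
    planted_artifacts = [
        "Отладочная сборка".encode("utf-8"),  # Russian for "debug build"
        "C:\\Users\\张伟\\Desktop\\payload.pdb".encode("utf-8"),  # Chinese username in a PDB path
    ]

    payload = b"\x90" * 64  # stand-in for the actual malicious code

    with open("totally_attributable.bin", "wb") as f:
        f.write(payload + b"\x00".join(planted_artifacts))

Run strings on that file and you’ll “find” Russian and Chinese fingerprints in the same binary, which is roughly how much weight such fingerprints deserve.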

The bottom line is that the technical artifacts of a computer attack are ALL FULLY CONTROLLED BY THE ATTACKER. Almost all forensic evidence that can be found is controlled by a knowledgeable attacker, and should be considered tainted.

Now consider an NGO that has no “skin in the game” and relies on technical artifacts to come up with attribution. No financial evidence, no political ties, no social or physical artifacts, no profiling of suspected targets or persons of interest in the victim organization. Anyone who’s been somewhat involved in the intelligence community can tell you that without these, an investigation is not worth the paper or the bits that are produced during it.

So, sorry to burst another bubble. And actually, if you read the article, you’ll see that I’m not alone: at the CyCon conference where this initiative was announced, several others expressed pretty firm opinions on its futility. So as much as I appreciate the initiative and the willingness to act and “fix the problem”, perhaps it’s best to actually step out of the fluorescent light and really understand how things work in the real world 😉

Infosec conferences/talks redux

Don’t mind me, just poking my head in here to make sure the cobwebs haven’t taken over this place yet 😛
So yes – I’m going to be blogging waaay less than before because of, well, life? But I recently saw a post from Daniel Miessler discussing how (in)effective modern security talks at conferences are.
He brings up a couple of great points, and talks about what a good talk would look like in his mind. Figured I’d share my 2c on this based on a couple of conferences and talks I’ve been to and delivered.

So, neither approach is useful IMHO (i.e. neither the essay nor the entertainment).
A Dan Geer-style essay reading has zero added value for the participants. Go read it yourself at your own pace and you’ll be better equipped to take something from it.

A handwaving “look at my marketing schtick” presentation has no value without any insight into the thought process behind it. Nor does a talk focused solely on entertainment value, even if it veils itself as “but through this you’ll get awareness/education”. Especially if it’s mostly self-serving and designed to make you look good. Go away.

Slides that are visually appealing (cat pics) and that support the narrative of what the speaker is saying would be the best experience for me personally (given that there is actual content, and not just the same regurgitated BS that a lot of talks pass off as “innovation/research”).

So, first – get something new in place.

Ok – go and google that shit. Double time. Because most of what’s been out there recently – from “unveiling” cyber criminal tools and forums, to “new” ways to avoid data exfiltration mitigations – is OLD FUCKING NEWS. You are supposed to be this OSINT Google-foo master. Prove it by not embarrassing yourself with a re-branding of old research.

Now, realizing that you may have no idea how to present this new thing, do the following:

  1. Write a paper that describes said new thing. Keep it fairly academic or white-paper style. This is the “essay” style you keep hearing about. DO NOT TRY TO PRESENT IT. It’ll be boring as fuck, and people will go into hibernation in the crowd.
  2. Start writing the story of how you found said new thing. Take note of the following:
    1. Why did you go out to invent/find said new thing? What was the motivation? What gap does this fill?
    2. How did you go about researching and finding the new thing? What challenges did you face doing so? What didn’t work through your process (much more interesting and relevant than what did work)?
    3. How do you use this new thing? How can I use it (assuming I don’t have to sell a kidney to do so. If so, pass this along to your marketing guys so they can get ready for RSA)?
    4. Show relevant data on how this new thing improved your life (professional life included). Show the situation before, and after new thing was applied. Data is cool, and you can’t argue with it (as opposed to “hey, look at me doing this thing one time with no context and no goal and how badass I am”).
    5. Give credit. Understanding that you are probably not alone in researching the new thing in a complete void – give some props to the people/projects who have inspired you, helped you move along your research, or have done similar things that you have built on to get to your new thing (i.e. don’t be an asshole).
  3. Take this story now, and tell it. This is your talk. Find visuals that support the narrative of this story. These don’t have to be the verbatim text of what you are saying (please, for the love of god, stop it with the bullet wars). They can be cat pictures, they can be graphs, or funny graphics. Make sure there’s some context between your slides and your story narrative.
  4. Practice going through your talk and telling your story. After a couple of tries, try turning off the slides. Can you still make it work? Do you keep trying to read from the slides? (Of course not, because they should only have minimal text on them.)
  5. Go talk. It’s going to be great. You are going to stumble on your words sometimes, utter an “Ummm”, and an “Ahhh” from time to time. Nobody really cares. Because they are listening to your story, which is awesome, and interesting, and not reading out of your slides before you can recite them.
    1. (oh, and of course – don’t memorize the thing. You need to be able to tell that story again and again, and never sound the same. Otherwise you could have just sent a pre-recorded and edited copy of you doing this).

I guess it’s easier to say this from where I’m standing (here’s my bias declaration: I’ve done this many times, including bad presentations, and am about to deliver my last talks by the end of the month). But trust me – do yourself a favor and think about what you’d want to see/hear at a conference. It’s that simple. Don’t think about some “rock star” researcher and look up their presentation (they might suck at public speaking), just put yourself in the crowd and think “this is what would have worked for me if I’d want to learn about something”.

Thoughts about the Apple vs FBI iPhone firmware case

Not trying to provide the full story here, just a few thoughts and directions as to security, privacy and civil rights. (for the backdrop – Apple’s Tim Cook letter explains it best: https://www.apple.com/customer-letter/)

From a technical perspective, Apple is fully capable of alleviating a lot of the barriers the FBI is currently facing with unlocking the phone (evidence) in question. It is an iPhone 5C, which does not have the enhanced security features implemented in iPhones from the 5S onward (the Secure Enclave – see Dan Guido’s technical writeup here: http://blog.trailofbits.com/2016/02/17/apple-can-comply-with-the-fbi-court-order/).

Additionally, when dealing with more modern versions, it is also feasible for Apple to provide updates to the Secure Enclave firmware without erasing the contents of the phone.

But from a legal perspective we are facing not only a slippery slope, but a cliff, as someone eloquently noted on Twitter. Abiding by a legal claim based on an archaic law (the All Writs Act – originally part of the Judiciary Act of 1789), coupled with a just-as-shaky probable cause claim, basically opens the door for further requests that will build on the precedent set here if Apple complies with the court’s order.
One can easily imagine how “national security” (see how well that worked out with the PATRIOT Act) will be used to trump civil rights and provide access to anyone’s private information.

We have finally reached a time where technology, which was an easy crutch for law enforcement to rely on, is no longer there to enable spying (legal, or otherwise) on citizens. We are back to a time now where actual hard work needs to be done in order to act on suspicions and real investigations have to take place. Where HUMINT is back on the table, and law enforcement (and non-LE forces) have to step up their game, and again – do proper investigative work.

Security is obviously a passion for me, and supporting (and sometimes helping) its advancement in order to provide everyone with privacy and comfort has been part of my ethics for as long as I can remember dealing with it (technology, security, and privacy). So is national security and the pursuit of anything that threatens it, and I don’t need to show any credentials for either.

This is an interesting case, where these two allegedly face each other. But it’s clear-cut from where I’m standing. I’ve said it before, and I’ll say it again: Tim Cook and Apple drew a line in the sand. A very clear line. It is a critical time now to understand which side of the line everybody stands on. Smaller companies that lack Apple’s legal and market force, and which have so far bent over to similar “requests” from the government, can find solace in a market leader drawing such a clear line. Large companies (I’m looking at you, Google!) should also make their stand very clear – to support that line. Crossing that line means taking a step further towards being one of the regimes we protect ourselves from. Dark and dangerous ones, which do not value life, and which treat people differently based on their social, financial, racial, gender, or belief standing. That’s not where or who we want to be.

Or at least I’d like to think so.

Update: Apparently Google is standing on the right side of the line:

Update 2 (2/20/16): Seems like the story is developing more rapidly, so figured I’d add a couple more elements here.

First – a good review of the FBI’s request from a forensic perspective puts the entire thing on even shadier legal standing if the data from the phone were to be used in such a way: http://www.zdziarski.com/blog/?p=5645

Second – Apple today (2/20) disclosed that while the phone was in the FBI’s custody, its iCloud ID was reset, basically eliminating one of the easier paths to recovering data from the phone (http://abcnews.go.com/US/san-bernardino-shooters-apple-id-passcode-changed-government/story?id=37066070). This would have been a major oversight by the FBI, who would have failed to establish a clear “hands-off” policy on anything related to the terrorist’s assets – including his employer’s digitally controlled assets. Later in the day, and probably after coming under scrutiny for allegedly performing the iCloud account reset “on their own accord”, San Bernardino County’s official account acknowledged that it had essentially tampered with the evidence at the FBI’s request.

If this indeed is the case, we are looking at a much more problematic practice that exceeds incompetence, and moves into malpractice.

[Image: a line drawn in the sand]

p.s. additional reading on this, from a couple of different authors who I wholeheartedly agree with:

http://www.macworld.com/article/3034355/ios/why-the-fbis-request-to-apple-will-affect-civil-rights-for-a-generation.html

And the EFF’s stand: https://www.eff.org/deeplinks/2016/02/eff-support-apple-encryption-battle