How to Vendor/Sales in the Security Industry

I’ve been on the receiving end of sales pitches for years now. Ever since I took on senior leadership roles, the constant trickle of sales pitches has kept increasing.

These vary from completely out-of-the-blue “cold calls” that attempt to push some solution, through the slightly better-informed ones that take some of the business context into account, to highly relevant and targeted ones (not too many of those, unfortunately).

This topic recently came up in a conversation with one of my friends (who happens to be in sales), and we were comparing notes on the atrocities we’ve seen as far as those pitches go. So I brushed up on one of my favorite vendor rebuffs, courtesy of Andy Ellis, and was pointed to an interesting post by Mike Johnson from Lyft.

Both provide a good approach to dealing with vendors, but I found that there’s something missing, so here’s my additional take on it:

  1. Don’t pitch me. I’m probably not the right first contact for you. In every organization I’ve managed or built, I had subject matter experts (security architect, managers, etc) who were provided with the responsibility, autonomy and budget for their domains. You are looking for them.
  2. Don’t try to skip levels. Skipping over my SMEs and coming directly to me will, in the best case, get you re-routed to them. Skipping me and trying to go to my CEO/CFO/etc. will get your company blacklisted.
  3. Context is king. Trust me to do the minimal amount of work required to get my job done – which means I’m well aware of the areas where we need products/services. There’s zero chance of you educating me about a completely new domain where I need help (if there were, I’d be fired, or quit before that). You also need to trust me to know the market well enough to have done the due diligence and found the relevant providers in that domain – from the well-known names to five-person startups. We do not discriminate, and I always make sure to cover the market properly (I actually prefer working with startups, where we can have better control over the product features and roadmap…).
    If you aren’t in the running, then it’s one of two options:

    1. Your solution is known, but isn’t good enough for us or doesn’t fit our requirements (we work based on our requirements, rather than on what the vendors have to offer).
    2. You have an opportunity to educate us on your solution and we’d love to hear about it and see whether it fits our needs or not.
  4. As per the vendor rebuff from Andy – you don’t need to follow up on your email. Not even once (and definitely not 3 times). I do read all my email, and yes – I’ve been ignoring yours because I reached the conclusion that this is the best way to get rid of you. Past experience has shown that “unsubscribe”, “don’t contact me again”, and “this isn’t relevant” responses end up being perceived as “sure – reach out again in 6/12 months to see if my memory is shoddy”.
    Absolutely do not try to actually call me on my phone. You are wasting my time (and compounding the waste, since your analogue call forces me to drop my multitasking and deal with you continuously).
  5. I maintain a blacklist of vendors. It’s not easy to get onto it (I extend the benefit of the doubt and assume best intentions from everyone), but it’s impossible to get off of it. Just ask the vendors who suddenly saw an immediate halt to all orders from my organization once I started working there.

So there you have it – a roadmap for getting those 15 minutes of attention. Yes – it means you need to do your homework first. No – “clever” attention-grabbing pitches will not get it (unless you were fortunate enough to catch me on a bad day, in which case I’ll probably post your egregious, sleazy, pushy email anonymously somewhere).

And even when you do your homework, remember that you are dealing with a market that’s pretty mature and educated (or at least, my organization is). Your attempt to “educate” us on a need is likely to be ineffective. Keep to the above and you might have a chance of getting thrown into our POCs, where we evaluate solutions for our needs.

Basic is great

Encouraged by the response to my last post (https://www.iamit.org/blog/2018/06/the-ian-amit-spectrum-of-pentesting-efficacy/ for those who missed it), and following up on a couple of recent Twitter/LinkedIn/WhatsApp conversations, I’d like to emphasize the importance of doing basic and simple work (in security, but it probably also applies to everything else).


We work in a weird industry. The industry encourages unique thinking, contrarian views, and creativity. Guess what? The kind of person who finds themselves “fitting in” is more often than not your typical ‘hacker’, with the stereotypical social baggage that comes with it. It also means (and of course, I’m generalizing) a short fuse and a lack of respect/patience for people who are not as immersed in the cybers as they are, which often creates the scenarios that Phil describes in his post.

Moreover, those of us who have been around the block a couple of times, also know and realize that there is no silver bullet solution to security. We are in it because we realize it is a constantly moving and evolving practice. Because we love the challenge, the changing landscape, and the multitude of domains involved in practicing security properly.

Which gets me to the basics. 


This, and other conversations (on the notorious “Cyberberet” WhatsApp channel for the Israeli guys), keep revolving around the latest and greatest [insert cyber-marketing-buzz/fud] solution. So here is my old-man eye-roll for you…

I earned the right to roll my eyes. 20+ years and counting 😉

The reason: I still see a lot of organizations trying to decipher how they are going to integrate said [insert cyber-marketing-buzz/fud] product while failing to have a basic security program.

They often don’t have one because they never bothered to perform a proper threat-modeling exercise in which they “dare” ask their executive leadership what they care about (i.e. what they are afraid of). I’ve seen companies invest huge $ in fancy SIEM solutions while lacking a basic authentication mechanism for their employees (dare I say MFA?). I’ve even seen the inability to maintain a somewhat consistent asset inventory and tracking, which comes with the usual excuse: all this cloud stuff is very dynamic, we don’t have racked servers like in the olden days. To which my rebuttal is: all this cloud stuff makes it easier to track your assets. You are just lazy and incompetent.
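To make the cloud-inventory point concrete: every major provider exposes its inventory through an API, so "dynamic" is no excuse. Here's a minimal sketch of the idea – the record shape loosely mirrors an EC2 `DescribeInstances` response, but the data, tag names, and helper functions are all my own hypothetical choices; in a real environment you'd feed this from your provider's API:

```python
# Minimal asset-inventory sketch. The record shape loosely mirrors an EC2
# DescribeInstances response; in practice you would feed it from your cloud
# provider's API (assumption -- adapt field names to your environment).

def flatten_inventory(reservations):
    """Normalize raw API records into a flat list of tracked assets."""
    assets = []
    for reservation in reservations:
        for inst in reservation.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            assets.append({
                "id": inst["InstanceId"],
                "owner": tags.get("Owner", "UNKNOWN"),  # untagged == untracked
                "env": tags.get("Environment", "UNKNOWN"),
            })
    return assets

def untracked(assets):
    """Assets with no owner tag -- the gap a basic program must close."""
    return [a for a in assets if a["owner"] == "UNKNOWN"]

# Hypothetical sample data standing in for a live API call.
sample = [{"Instances": [
    {"InstanceId": "i-001", "Tags": [{"Key": "Owner", "Value": "payments"}]},
    {"InstanceId": "i-002", "Tags": []},
]}]

inventory = flatten_inventory(sample)
print(untracked(inventory))  # -> [{'id': 'i-002', 'owner': 'UNKNOWN', 'env': 'UNKNOWN'}]
```

A dozen lines of glue like this, run on a schedule, already beats the "we can't track cloud assets" excuse.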

Compound that with an approach I sadly see some of my colleagues take, which says: forget about all those products, you are going to get breached anyway. You need to embrace the [other-fud-buzz-cyber] approach, where attackers are [pre-cogged / deceived / lost / identified before they get to you / hacked_back / …]. Hmmmm, let me guess – you must have a company operating in that space, right?

So no. Neither precog, nor deception, nor hacking back will save you either. I’ve played attacker against all of these in the past, and (shocked face) always won against them. What you should be doing is getting back to basics.

You know – the stuff they teach in intro to infosec 101. Layered security. Logging, monitoring, and anomaly detection (behavioral – after baselining and such). Getting the basics of authentication and authorization done properly. Having a patch-management practice coupled with a vulnerability-scanning one. Knowing what your threat model is. What assets you are protecting. Which you need to prioritize over others. What your threat landscape is (and no – no matter how fancy or ninja/8200/NSA the threat feed is, it most likely has zero applicability to your threat landscape). What controls you have in place (technological and otherwise) and how effective they are.
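The "baseline first, then detect" part of that list is genuinely simple to start. A toy sketch of the idea – learn a mean and standard deviation from normal activity, then flag anything far outside it; the 3-sigma threshold and the sample numbers are my own illustrative assumptions, and real systems use much richer features:

```python
# Toy "baseline then detect" sketch: learn mean/stddev from a window of
# normal activity, then flag observations beyond 3 sigma. Threshold and
# data are illustrative assumptions, not a production design.
import statistics

def build_baseline(samples):
    """Return (mean, population stddev) of the observed normal behavior."""
    return statistics.mean(samples), statistics.pstdev(samples)

def anomalies(baseline, observations, sigma=3.0):
    """Keep only observations further than sigma stddevs from the mean."""
    mean, stdev = baseline
    return [x for x in observations if abs(x - mean) > sigma * stdev]

# e.g. daily login counts for a service account during a quiet week
normal_week = [101, 98, 103, 97, 100, 102, 99]
baseline = build_baseline(normal_week)
print(anomalies(baseline, [100, 104, 430]))  # -> [430]
```

Trivial, yes – but a trivial baseline you actually run beats a behavioral-AI appliance nobody ever tuned.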

[Image: polishing a turd]

“Playing” with these basic elements can, and will, have a huge impact on your security posture – much more than trying to fit a fancy “cyber” solution without any context on what you are getting done (see the equivalent image to the left…). But you know what – don’t take my word for it; ask any competent pentester who’s faced both a properly executed security program and one of the latest buzz-worthy products. You’ll get the same response: it’s harder to overcome the security program, while a magic product requires a one-time effort to render it moot.

Now go back to the basics. There’s no shame in it, you’ll get to play with the fancy stuff a bit later, and when you do, make sure to come up with what YOU need from it rather than get starry-eyed while listening to the sales folk try to wow you 😉

The Ian Amit Spectrum of Pentesting Efficacy

It’s been a while since I posted (duh), but recently I’ve had something brewing in my mind that appeared to not have been clearly discussed before, so here goes.

I’ve been seeing some discussions and ambiguity around pentesting, vulnerability assessment, and red teaming (again – no huge shocker for those of us in the industry). However, as much as the “look at our shiny new red team” marketing BS coming from big companies (read: “we do pentesting, but we have a new name for it so you can pay more”) pisses me off, what bugs me even more is the lack of clarity as to where and when pentesting can/should be used, and through which means.

I offer you this – my simplified spectrum of pentesting efficacy.

In short, here’s how this works: first identify the actual need for the test. There should be three categories as follows:

  1. Testing because you have to (i.e. compliance). PCI is a good example here. It’s something you can’t avoid, and it doesn’t really provide any real value to you (because of the way it is structured – and as we all know, compliance/regulation has nothing to do with security, so you might as well ignore it).
  2. Testing because you want to make sure that your controls are effective, and that your applications are built properly. This is where the “meat” of your pentesting should come into play. This is where you see direct value in identifying gaps and fixing them to adjust your risk exposure and tolerance (based on your threat model and risk management, which you should have, or if you don’t, just go ahead and quit your job).
  3. Testing to see how you fare against an actual adversary. Bucket 2 above was fairly technical in its scope and nature. This bucket is threat focused – more specifically, threat-actor focused. Your risk management strategy should identify the adversaries you are concerned about, and here you want to see how you fare against them in a “live fire” scenario.

Once you have these, you are almost done. Here’s how you approach “solving” for each category:

  1. Compliance: find the cheapest vendor that’ll check off the box for you. Don’t be tempted by cheap marketing tricks (“we red team”, “we thoroughly test your controls”, “we bring in the heavy hitters who have spoken at BlackHat and DEFCON and the super-underground conferences”). Remember – you are getting no security value here, so shop around and see who will tick the box for you. Remember to be heavy-handed on the reporting and remediation: if you are doing things correctly, the scope should be very minimal (remember – just enough to cover compliance requirements) and you should easily have it covered as part of your standard security practice.
    Also – there’s no point in putting any internal resources into this, since it won’t challenge them; it’s menial work that should be outsourced.
  2. Controls and Applications: this is where you should invest in your own resources. Send your security engineers to training. Build up an SDLC that includes security champions and involves the dev teams (DevSecOps anyone?). This is where you should see the most value out of your investment and where your own resources are better equipped to operate as they are more familiar with your operating environment, threats, and overall risk prioritization. In this category you’ll also sometimes include testing of 3rd parties – from supply chain to potential M&As. Use your discretion in choosing whether to execute internally or engage with a trusted pentest company (make sure you utilize the Penetration Testing Execution Standard when you do).
  3. Adversarial Simulation: this is where you shift from pentesting to red teaming. The distinction is clear: pentesting focuses on one domain (technical), and sometimes claims to be a red team when phishing is involved (social). A red team engagement covers three (technical, social, physical) and, more importantly, the convergence of two or three domains (social/physical, technical/social, technical/physical, or all three). This is where you should engage with a company that can actually deliver on a red team (again – use the PTES to help you scope and set expectations for the engagement), one that can work with you to identify what kind of adversary it should be simulating, how open or closed the intelligence gathering will be, how engaged it will get with targets, and to what degree you are OK with potential disruptions. This is where you’ll see value in affirming your security maturity across several domains, and in seeing how those fare against your threat communities. This will also enable you to more clearly align your investments in reducing exposure and increasing your controls, monitoring, and mitigations for actual loss scenarios.

I did mention vulnerability assessments initially, and if you made it this far you noticed they’re not on the spectrum. Well, they kind of are – a VA should be part of all of the engagement types, rather than an independent engagement in its own right. Hint – never outsource this. It’s extremely cheap to do VA yourself, and there are plenty of tools/programs/services, ranging from free to very cheap, that are highly effective at continuously performing VA for you. Don’t be tempted to pay any premium for it.
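Running VA in-house really is mostly glue work: run a free scanner on a schedule, then triage its findings so they feed your patching queue. A sketch of the triage half – the findings' shape, field names, and the 7.0 CVSS cutoff are my own illustrative assumptions; adapt them to whatever scanner you actually run:

```python
# Triage sketch for home-grown vulnerability assessment. The finding shape
# and the CVSS cutoff are hypothetical assumptions -- adapt to the output
# of whatever free/cheap scanner you run. Scores below are illustrative.
from collections import Counter

def triage(findings, cvss_cutoff=7.0):
    """Split findings into actionable-now vs backlog, plus top offenders."""
    counts = Counter("actionable" if f["cvss"] >= cvss_cutoff else "backlog"
                     for f in findings)
    worst = sorted(findings, key=lambda f: f["cvss"], reverse=True)
    return counts, worst[:3]  # top offenders go straight to the patch queue

findings = [
    {"host": "10.0.0.4", "issue": "outdated app framework", "cvss": 10.0},
    {"host": "10.0.0.9", "issue": "weak TLS cipher suite", "cvss": 5.9},
]
counts, top = triage(findings)
print(counts)         # Counter({'actionable': 1, 'backlog': 1})
print(top[0]["host"]) # 10.0.0.4
```

The point isn't the ten lines of Python – it's that this whole loop is cheap enough that paying a premium for it makes no sense.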

That’s all for now – hope this made some sense, and helped you in prioritizing and understanding how pentesting (and red teaming) are best applied through your security program!

Dumpster fires and security incidents

Full disclosure: this post isn’t about security per se. It’s here because of recent conversations I’ve had with people outside the immediate security “industry” who wondered about Equifax from a technical perspective, but mostly from a “WTF are these guys smoking” one ;-). I’m also happily not selling any of this (although I did in the past, and ran crisis management for a few major incidents – ones that happily did not end up like Equifax, although they had every opportunity to…)

A lot has been said and written about the Equifax hack (check out Brian’s coverage for most of it): how badly Equifax handled their security, blaming Apache Struts and having 3 executives dump stock before the public announcement; how they handled the incident response, working with a 3rd party that leaked the event by registering equihax.com without proper OPSEC; and the obvious security issues in their customer communications, such as pointing people to a newly registered domain name, hosting a fairly static site on WordPress, leaving trails of user config from said WordPress, etc…

But I’m not here to talk about that (again – enough has been said). I’m here to talk about how all of this came to be in the first place. Well, not the original breach (that took years of neglect, IMHO), but how a major company like this comes to commit to a string of poor decisions in a time of crisis.

And the simple answer is crisis management. They don’t have any. I’m not talking about a “cyber incident plan” that includes communication strategies (which I’m sure all the vendors are hawking these days in light of the major fail Equifax is exhibiting). I’m talking about properly handling a crisis from a company-management perspective.

Companies this size that go through a major breach without very strong internal leadership tend to fall into a mode of operation where all the executives are taking care of themselves, with zero to negative cooperation. Everyone is trying to CYA and rushing to the closest “action” they can hide behind while saying “I did my job”. That’s how you end up registering a new domain to handle the incident and running it on WordPress (instead of using your established, “credible” domain).

Crisis management is often best done by bringing in an outsider. It can be someone from the advisory board who doesn’t have a direct stake in the company, or simply someone whose JOB is to manage crisis scenarios. Much like the security consultants (whom Equifax probably didn’t use as much as they should have), crisis management people come in and represent the company’s best interests.

Unlike security consultants, the crisis manager is responsible for making decisions and vetting every action the company takes. Everything goes through them – from communications, through legal, to technical remediation. This ensures that there is a clear line in how the company operates, that there’s an owner of these actions, and that the owner can report back to the board with accountability and represent the company’s best interests. Clearly, Equifax had none of this. I’m sure they were advised by the incident responders they hired, and potentially by other security consultants. But those don’t have crisis management experience. They lack the perspective and the breadth of thinking about all the implications, and the solutions they propose are usually scoped to a technical element.

That’s how security incidents turn into dumpster fires – even when you have the best security professionals working the incident for you. Companies need to learn that, regardless of their size, for situations that exceed the typical “shit’s broken”, they need professional crisis management help. Just like they get help performing incident response (because they don’t have the skill-set) or forensics (because they don’t have the skill-set). See the trend?

When great ideas go to the wrong places

Or: why attribution is not a technical problem.

TL;DR: hacking is an art and a science; computer attacks (“cyber” these days) are only one manifestation of an aggressor, and carry very limited traits that can be traced back to their origin. Relying on technical evidence alone is not enough to apply attribution, and when attribution is done that way, attackers can use it to deflect blame onto other actors.

Context: Experts, Microsoft push for global NGO to expose hackers

So, apparently, some really smart people at RAND Corporation and Microsoft have decided that they are going to solve the world’s computer-borne attack problems by creating a new global NGO to unmask and apply attribution to hacking incidents. They claim the organization will be responsible for authoritatively publishing the identities of attackers behind major cyber attacks.

Which is really cute when you think about it – a bunch of brainiacs (and Microsoft people) sitting around analyzing network, storage, and memory dumps to trace attacks back to their origins. Sounds like a really great service that companies and governments could use to trace back who attacked them, and act on it (either by suing, or by means of diplomatic recourse).

The only problem is that the attribution game is not won on technical merit alone. And guess what? Attackers know that very well. Even the US government knows it (or at least the organization responsible for launching such attacks does), and its operators have been trained to study different attackers’ traits and tactics so that they can replicate them in their own attacks – throwing off attribution if/when the attacks are detected.

The reality is that companies are often hired to provide incident response and forensics, and in the rush/pressure to give value to their clients, come up with attribution claims based on technical merit. Cyrillic words will point the blame at Eastern Europe (RUSSIA!). Chinese characters in a binary will lead to claims that Chinese hackers are behind an attack. An Iranian IP address linked to a command-and-control server that trojans connect to will point to an Iranian government operation. Which is all a big steaming pile of horse feces, because everyone who’s been on the offense in the last couple of decades (probably more – I can only attest to my own experience) also knows that. And can easily create such traces in their attack. Furthermore, for the ones following at home thinking “oh, they know that I know…” – yes, we play that game too, and attackers are also “nesting” their red herrings to trace back to several different blamed parties; it all depends on how deep the forensic analyst wants to dive.
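How trivial is planting such traces? Any string a forensic analyst might carve out of a payload is just bytes the author chose to put there. A harmless sketch of the idea – the function, the "payload", and every string in it are invented for illustration:

```python
# Illustration that "forensic" strings are attacker-chosen bytes, nothing
# more: a payload author can embed Cyrillic text, a fake developer path,
# or any other red herring at will. Everything here is invented.

def build_payload(body: bytes, false_flags: list[str]) -> bytes:
    """Append misleading artifacts exactly where string-carving tools look."""
    planted = b"\x00".join(flag.encode("utf-8") for flag in false_flags)
    return body + b"\x00" + planted

payload = build_payload(
    b"\x90\x90\x90",              # stand-in bytes for the actual code
    ["Отладочная сборка",         # Cyrillic for "debug build"
     "C:\\Users\\дима\\proj\\"],  # fake developer path
)

# A naive analyst carving strings would now "find" Russian-language traces.
print("Отладочная сборка".encode("utf-8") in payload)  # True
```

Two lines of code, and the "evidence" now points wherever the author wants it to.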

The bottom line is that the technical artifacts of a computer attack are ALL FULLY CONTROLLED BY THE ATTACKER. Almost all forensic evidence that can be found is controlled by a knowledgeable attacker, and should be considered tainted.

Now consider an NGO that has no “skin in the game” and relies on technical artifacts to come up with attribution. No financial evidence, no political ties, no social or physical artifacts, no profiling of suspected targets or persons of interest in the victim organization. Anyone who’s been somewhat involved in the intelligence community can tell you that without these, an investigation is not worth the paper or the bits produced during it.

So, sorry to burst another bubble. And actually, if you read the article, you’ll see that I’m not alone: at the CyCon conference where this initiative was announced, several others expressed pretty firm opinions on its futility. So as much as I appreciate the initiative and the willingness to act and “fix the problem”, perhaps it’s best to actually step out of the fluorescent light and really understand how things work in the real world 😉