Tag Archives: Ethics

Breaking news: Spy agencies are spying!

Please say it ain’t so! Spy agencies are spying?

I’m actually going to go out on a limb here and present my (again – MY) opinion, which might strike people with very deterministic views (or those being spoon-fed said views through the media of their choice) as complicated.

First – I think that the Der Spiegel article that covers the “latest” NSA spying capabilities (http://www.spiegel.de/netzwelt/netzpolitik/quantumtheory-wie-die-nsa-weltweit-rechner-hackt-a-941149.html) is very important, and I applaud Jake and the crew that covered this. If you haven’t yet, go read it and go over the slides. Also make sure to read through the “product catalog” here: http://leaksource.wordpress.com/2013/12/30/nsas-ant-division-catalog-of-exploits-for-nearly-every-major-software-hardware-firmware/

So you are back? Great! That being said, I do think that spy agencies should continue spying. BLAM! And yes, it makes total sense to me. Because I do think that spy agencies should keep spying in order to keep their corresponding nations safe. It’s all about the tradecraft and trying to keep a step ahead of your potential enemies.

Yes, that WILL entail walking (and falling over) a very fine line between legal implications and privacy. It means that as always – agencies will spy on foreign nationals AND citizens. Because yes – terrorists and adversaries do not have boundaries that are defined by the color of your passport. And as opposed to Jake’s claim in his CCC talk, “carpet bombing” is a totally legitimate way to collect and analyze data. I’m not saying that it’s nice, or legal, or ethical, but it’s effective. It’s up to the agency using this technique to justify and qualify what they do. And yes – to keep it quiet – precisely because of the delicate nature of that collection.

Now, back to the data. Yes – agencies (and I’m not picking on the NSA here, these kinds of capabilities exist with lots of other agencies), have these kinds of capabilities to wiretap, modify, exploit and persist on a lot of kinds of accounts and systems. It’s what they are tasked with doing. That’s not even news. But I think that the fact that this comes up again is critical because of something completely different: OPSEC. Operational Security.

The NSA has fallen (again) to the oldest sin of spying – getting cocky. You can see the same behavior from anyone who’s picking up a new capability – be it a script kiddie picking up Metasploit for the first time, someone getting to be decent at martial arts, or any other skill. They get cocky. And think they are unbeatable. And that’s when mistakes start to show up. Basic OPSEC. And I believe that this is an important lesson to learn. Again. Because OPSEC is not a compliance thing that you check off once and forget about it. It’s a basic practice that (should be) taught to everyone that participates in tradecraft. And practiced. And apparently the NSA isn’t that great at it (surprise!). Hence their powerpoint slides are all over the Internet now.

So that’s my little 2c on the topic. Yes – I support spy agencies’ continued practice, and yes – I support anonymity and privacy, and yes – I support the law and the need to keep improving it. I support the creation of free and open source software designed to enhance your anonymity and privacy. I have actually met Jacob a couple of times (and found it funny that he freaks out every time we do meet), and actually think he’s a great guy. Same for Moxie. Complicated? I mentioned it at the beginning. So there you have it. Deal with it.

Now go watch Jake’s talk from CCC. You have to. Because I said so. And for crying out loud – get your OPSEC together.

Information Security, Homeland Security, and finding someone to pin it on

In the recent spree of cyber attacks on a plethora of US and international government and federal establishments, a lot of speculation is being thrown around as authorities try to identify the threat community behind them.

As computer systems take over most of the control of our daily lives – from transportation, through financial systems, and up to government facilities that provide research, analysis, and even the critical infrastructure supporting what we now know as “modern life” – attackers find it easier and easier to poke at such systems, as their security is left mostly as an afterthought. When the relevant organizations approach the forensics and remediation of such breaches, most of the focus goes first to recovering any lost data, and then to identifying not the root cause of the breach, but the attacker.

As the blame game runs amok, the actual privacy and confidentiality of the core (digital) elements of our modern society are left up for grabs. When groups such as LulzSec, Anonymous, and any other book-reading, internet-browsing, anonymous-under-several-proxies infosec warrior can simply run a few scripted tools against their target list to find easily exploitable issues, we face a very tough job of figuring out who to blame.

Nevertheless, blame by itself (or attribution, as we like to refer to it in the more politically-correct industry circles) won’t help us in mitigating such attacks. It may be helpful for organizations to have someone to pin the “adversary” tag on – especially when dealing with defense/government/federal institutions, whose budgets can be manipulated more easily under the threat of a foreign nation. But when looking for evidence to actually support such claims, we are often left empty-handed, facing a thick smokescreen of assumptions, prejudice, and incompetence.

On the other hand, when viewed from a strategic/political stance, it can be easily seen how a string of breaches in facilities that share a common ground (such as the one presented by Rafal Los of HP in his great article “DOE Network Under Siege”) can be attributed more to a nation state than to a fun-seeking internet-bored group.

This simple reality – intricate connections that are often only visible when looking at the bigger picture of security incidents – allows state-sponsored attacks to happen without much scrutiny, and without the ability to thwart them at a more strategic level.

The bottom line remains the same – chasing after excuses and online enemies won’t get us to a more secure state. Investing in proper education, training, exercises, people and (lastly) technologies, will. Instead of trying to investigate breaches from an attribution standpoint, we should be investigating root causes to the deepest level (i.e. not stopping at “a 0-day vulnerability we didn’t know of”, or the bit-bucket of “It’s an APT”) that involves how we manage our electronic infrastructure and how we keep track of what’s going on in it after the initial setup is complete and the contractors/integrators pack up their people and leave.

7 Steps to consider when running a Vulnerability Assessment

Today I’m proud to give this stage to some friends from GFI (I have some good friends among the former Sunbelt folks who were acquired by GFI last year). Vanessa is our guest blogger, and she’s got a great post on how to run a more effective Vulnerability Assessment process in your organization.


Do you know how your server measures up to potential threats? If you haven’t performed a vulnerability assessment on your servers yet, you may not be aware of issues that may leave you exposed to hackers and web-based attacks. A vulnerability assessment is the process of inventorying systems to check for possible security problems, and is an important part of system management and administration.

Vulnerabilities are weaknesses within a server or network that can be exploited in order to gain unauthorized access to a system, usually with the intention of performing malicious activities. The most common way to address many software-related vulnerabilities is through patches, which are usually provided by the software manufacturer to correct security weaknesses or other bugs within a program. However, there may be times when a patch is not available to address a possible security hole, and not all vulnerabilities are software-related to begin with, so no patch would be offered. This is where the concept of vulnerability assessment comes into play. Minimizing the attack surface, and the effect that a potential hacking attempt could have on your system, is a proactive way of effectively managing a server network.

While there is no way to protect your servers against vulnerabilities 100% of the time, there are some steps you can take when performing a vulnerability assessment to minimize your risk:

  1. Close unused ports
    Ideally, your server network setup should include at least a network firewall and a server-level firewall to block undesired traffic. Undesired traffic would include traffic to ports that are unused or that correspond with services that shouldn’t be publicly-available. These ports should be blocked in your firewall(s).
  2. Don’t over-share
    If servers on your network are set up to share files with others, or to access network shares (such as file servers and other resources), make sure that those shares are configured to only allow access as appropriate. Hosts that don’t participate in sharing resources should have that capability turned off completely.
  3. Stop unnecessary service
    The more services you have on your server, especially those that listen on network ports, the more avenues a hacker has to get into your system. This is especially true if you have services running that aren’t being monitored or used, and therefore are unmaintained. Stop services that are not in use or necessary, and restrict access to others that are not intended for public access.
  4. Remove unnecessary applications
    Many operating systems come with a wide set of programs that may not be necessary for normal server operations. Find out what software is installed on your system, and then determine which of those applications are not necessary and remove them.
  5. Change your passwords
    Using default vendor passwords is more common than you may think – but since those passwords are usually publicly-known, they are often the first ones used during hacking attempts. Secure passwords should always be used in favor of the vendor defaults, and industry experts recommend changing them every 30-60 days.
  6. Do some research
    When software or new applications are installed, users often neglect to take the time required to review their settings to ensure that everything is up to par with modern security standards. Take some time to research what you are installing and any security implications that it may have, including what features may be enabled that could introduce security problems, and what settings need to be adjusted.
  7. Encrypt when possible
    Many services and network hardware have the capability of encrypting traffic, which decreases the likelihood of information being “sniffed” out of your network. When transmitting sensitive data, such as passwords, always use an encrypted connection.

Regular vulnerability assessment is a vital part of maintaining system security. Not only will it help diminish the success or possible effects of malicious activity against your servers, but it’s also a requirement for many compliance standards such as PCI DSS, HIPAA, SOX, GLB/GLBA, among others.

This guest post was provided by Vanessa Vasile on behalf of GFI Software Ltd. GFI is a leading software developer that provides a single source for network administrators to address their network security, content security and messaging needs. More information on vulnerability assessment

All product and company names herein may be trademarks of their respective owners.