
Guest post: Why you need patch management

Today we have another guest post from our friends at GFI – this time on patch management (which, unfortunately, is one of the reasons so many pentests succeed so easily…)

Every organization runs several types of software: operating systems, servers, clients and many other third-party applications. Each software package is a complicated piece of engineering. Programs are built from hundreds of thousands to millions of lines of code, and each line is further converted by the compiler into many low-level instructions for the computer to execute. With all this complexity, it is not surprising that issues arise that affect the smooth running of the system on which the software is installed.

Be it a coding mishap, a combination of inputs the developer didn’t foresee, or even an unexpected interaction between billions of low-level instructions, an application can contain a vulnerability that someone could maliciously exploit to perform some undesired action.

Malicious hackers are constantly looking for these software deficiencies both for fame and profit. A zero-day vulnerability in a popular application may be sold for hundreds of thousands of dollars on the black market. Such vulnerabilities could be used to infect a large number of computers before vendors get a chance to detect and address the problem.

At the other end of the spectrum, security researchers are in a daily race with malicious hackers to find vulnerabilities first. When they do come across a vulnerability, they notify the vendor and give it time to fix the issue before making their findings public.

How does all this affect your need for patch management?

Malware like the Code Red worm and Sasser exploited software vulnerabilities to deliver their malicious payloads, so failing to patch software leaves your systems open to such attacks. Payloads can be anything from botnet software that wastes bandwidth and computer resources to send spam, to spyware designed to steal credentials, financial details and intellectual property.

While security researchers play an important role in discovering vulnerabilities, disclosing them to the public indirectly increases the risk for organizations: even though a patch will be available, not every organization is organized enough to address the problem immediately.

Hackers are aware that many IT admins take their time patching their systems, sometimes waiting months to deploy a released patch. Code Red is a perfect example: it exploited a vulnerability for which a patch had been available for over a month, yet countless systems were compromised because many had not yet deployed that critical patch.

One also needs to take a holistic approach to patch management. As in all security matters, it is the weakest link in the system that gets exploited. An attacker does not need to compromise every vulnerability in your systems; compromising just one is enough to gain access. That is why selective patch management is not recommended. While operating system patch management is essential and the easiest to perform, it alone will not protect you against vulnerabilities in third-party products, such as a malicious PDF file exploiting a vulnerability in a PDF reader.

It is obviously preferable to patch only the operating system rather than have no patch management at all, but the more applications you keep up to date, the more effective your security will be.
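The core of any patch management process is comparing what is installed against what the vendors say is current. Here is a minimal sketch of that comparison; the inventory and advisory data are invented for illustration, and a real tool would query the OS package database and a vendor advisory feed instead of hard-coded dictionaries.

```python
# Hypothetical sketch: flag installed applications that are below the
# minimum patched version. All names and versions here are made up.

def parse_version(v):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def find_unpatched(installed, advisories):
    """Return names of applications older than the advised minimum version."""
    outdated = []
    for name, version in installed.items():
        required = advisories.get(name)
        if required and parse_version(version) < parse_version(required):
            outdated.append(name)
    return sorted(outdated)

installed = {"pdf-reader": "9.3.1", "web-server": "2.4.51", "media-player": "3.0.8"}
advisories = {"pdf-reader": "9.4.0", "web-server": "2.4.41"}

print(find_unpatched(installed, advisories))  # only the PDF reader is behind
```

Note that the third-party PDF reader, not the server software, is what gets flagged here – exactly the kind of application that OS-only patch management misses.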


This guest post was provided by Emmanuel Carabott on behalf of GFI Software Ltd. GFI is a leading software developer that provides a single source for network administrators to address their network security, content security and messaging needs.

Find out more about what should be included in your patch management.

7 Steps to consider when running a Vulnerability Assessment

Today I’m proud to give this stage to some friends from GFI (have some good friends from the former Sunbelt guys that were acquired by GFI last year). Vanessa is our guest blogger, and she’s got a great post on how to run a more effective Vulnerability Assessment process in your organization.

 

Do you know how your server measures up to potential threats? If you haven’t performed a vulnerability assessment on your servers yet, you may not be aware of issues that may leave you exposed to hackers and web-based attacks. A vulnerability assessment is the process of inventorying systems to check for possible security problems, and is an important part of system management and administration.

Vulnerabilities are weaknesses within a server or network that can be exploited to gain unauthorized access to a system, usually with the intention of performing malicious activity. The most common way to address software-related vulnerabilities is through patches, which the software manufacturer usually provides to correct security weaknesses or other bugs within a program. However, a patch is not always available for a given security hole, and not every vulnerability is software-related in a way a patch could fix. This is where vulnerability assessment comes into play. Minimizing the attack surface, and the effect that a potential hacking attempt could have on your system, is a proactive way of effectively managing a server network.

While there is no way to protect your servers against vulnerabilities 100%, there are some steps you can take while performing a vulnerability assessment to minimize your risk:

  1. Close unused ports
    Ideally, your server network setup should include at least a network firewall and a server-level firewall to block undesired traffic. Undesired traffic would include traffic to ports that are unused or that correspond with services that shouldn’t be publicly-available. These ports should be blocked in your firewall(s).
  2. Don’t over-share
    If servers on your network are set up to share files with others, or to access network shares (such as file servers and other resources), make sure that those shares are configured to only allow access as appropriate. Hosts that don’t participate in sharing resources should have that capability turned off completely.
  3. Stop unnecessary services
    The more services you have on your server, especially those that listen on network ports, the more avenues a hacker has to get into your system. This is especially true if you have services running that aren’t being monitored or used, and therefore are unmaintained. Stop services that are not in use or necessary, and restrict access to others that are not intended for public access.
  4. Remove unnecessary applications
    Many operating systems come with a wide set of programs that may not be necessary for normal server operations. Find out what software is installed on your system, and then determine which of those applications are not necessary and remove them.
  5. Change your passwords
    Using default vendor passwords is more common than you may think – but since those passwords are usually publicly known, they are often the first ones tried during hacking attempts. Secure passwords should always be used in place of the vendor defaults, and industry experts recommend changing them every 30-60 days.
  6. Do some research
    When software or new applications are installed, users often neglect to take the time required to review their settings to ensure that everything is up to par with modern security standards. Take some time to research what you are installing and any security implications that it may have, including what features may be enabled that could introduce security problems, and what settings need to be adjusted.
  7. Encrypt when possible
    Many services and network hardware have the capability of encrypting traffic, which decreases the likelihood of information being “sniffed” out of your network. When transmitting sensitive data, such as passwords, always use an encrypted connection.

Regular vulnerability assessment is a vital part of maintaining system security. Not only will it help limit the success and impact of malicious activity against your servers, it is also a requirement of many compliance standards such as PCI DSS, HIPAA, SOX and GLBA.

This guest post was provided by Vanessa Vasile on behalf of GFI Software Ltd. GFI is a leading software developer that provides a single source for network administrators to address their network security, content security and messaging needs. More information on vulnerability assessment

All product and company names herein may be trademarks of their respective owners.

 

The curious case of Dropbox security

After the disclosure of the host_id authentication issues that plagued the popular Dropbox service last week, a new issue came up: Dropbox can detect whether a file you are trying to upload to their cloud already exists there, and will “save you the bandwidth” of uploading it if they already have a copy in hand.

So – the Dropbox client probably checks the hash of the file being uploaded against a list of hashes of files already stored in the cloud. The files stored online may well be encrypted, too. So… what’s the big deal?

One has to remember that when using a service such as Dropbox (and I’m an avid user myself), you clearly do not have full control over the material you upload, and the server-side encryption is only a fraction of the protection you may be seeking. There is no key management visible to the user. There is no indication that each client you use has its own key, or that clients share keys; if keys exist at all, Dropbox is managing them for you. This gives Dropbox the ability to decrypt your data at any given time. Subsequently, it also gives them the ability to serve you another user’s file if you try to upload an identical copy (hence saving you the bandwidth) – for example, when you access it from a client that does not have a synched copy of your account, or through the web interface. They just decrypt the other user’s file and serve it back to you. After all, you have the same one back on your home/work/whatever PC (remember that you showed “proof” by providing the hash beforehand).

Which brings us back to reality – what are we really exposed to here in terms of risk?

  1. Dropbox has the ability to access the contents of my files.
  2. If I can come up with the hash of a file that I know someone else has, and that file is confidential in some way, I can potentially claim to upload the same file (even though I don’t really have the original), and then download the real one from another client or through the web interface.
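The mechanics behind point 2 are easy to model. Below is a toy sketch of hash-based deduplication; the class and method names are invented for illustration and have nothing to do with Dropbox’s actual protocol, but they show why a server that trusts a hash as “proof” of possession will happily serve the real bytes to anyone who merely knows the hash.

```python
import hashlib

# Hypothetical model of a dedup-by-hash storage service.
class DedupStore:
    def __init__(self):
        self._blobs = {}  # sha256 hex digest -> file bytes

    def has(self, digest):
        """The client asks first: do you already have this content?"""
        return digest in self._blobs

    def upload(self, data):
        digest = hashlib.sha256(data).hexdigest()
        if not self.has(digest):       # bytes only cross the wire if unknown
            self._blobs[digest] = data
        return digest                  # the client keeps this as its "claim"

    def download(self, digest):
        return self._blobs[digest]

store = DedupStore()

# A victim uploads a confidential file.
secret = b"quarterly results: not yet public"
victim_digest = store.upload(secret)

# An attacker learns only the digest (out of band), never the content.
attacker_digest = victim_digest
assert store.has(attacker_digest)          # server offers to skip the upload...
leaked = store.download(attacker_digest)   # ...then serves the real bytes back
print(leaked == secret)
```

The fix, incidentally, is for the server to demand proof of possession of the actual bytes (e.g. a challenge over random file ranges) rather than accepting a bare hash.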

Clearly, the media attention to point 1 is important – but still not really interesting, as people should have had a clue before they sent their files to the “cloud”.

However, point 2 makes a more interesting argument… It would be interesting to see when the first “hack” comes along that starts “uploading” files (by hacking the client protocol – hint: start here, here, and here) based only on hashes, and then downloading them as if from another client to see what you get (if they were already “cached” on the Dropbox cloud). Now that would be an interesting little experiment…

Happy hacking!

Defining Penetration Testing

I have been fortunate enough to be working with a group of peers from the security industry over the past few months (since November 2010) on finally creating a solid definition of what a penetration test is.

The topic has been abused, cannibalized, and lowered to a level where we (as in, people in the industry) could not relate to it anymore. It was time to get the fake stuff out and focus on content. We were all getting tired of “penetration tests” that were nothing more than a Nessus scan printed out and slapped with a “security consultancy” logo.

Enter – the Penetration Testing Execution Standard.

This is our attempt to define what a penetration test should include – both from the tester side (vendors) as well as from the client side (the business/organization being tested).

It is the fruit of a huge collaborative effort from people who I consider to be some of the best in the industry. Getting together people who on their day-jobs often compete with each other, and come from different areas of the industry, all together and working on something as big as this has been a humbling experience. For that – you guys all ROCK!

Onwards to the content – remember that this is pre-alpha, and is aimed mainly at getting feedback from everyone. A lot of branches do not yet appear there in their full glory, and some will surely not make it into the final edition. We welcome everyone to take a close look at this, contribute, criticize, assist, comment and generally get involved. Some of you may have been watching this and thinking we are holding back; that could not be further from the truth… To get something this big off the ground we had to cap the number of participants in this revision in order to keep things somewhat organized, so this is a chance to get back in and offer your assistance. We promise to keep this as open as possible.

This is really exciting – for me at least. I hope some of you will share this enthusiasm and help weed out the bad form the industry has gotten into.

About CyberWar, Deterrence, and Espionage

It’s been a long time since my last post, but trust me, it was for all the good reasons (i.e. work). This one is long overdue, and was recently fueled by a chance to attend RAND’s Martin Libicki’s brief at Tel-Aviv University.

Spy vs. Spy – copyright Kigs, DeviantArt.

Martin is a great source for debate and thought exercises as he is fluent in many realms of the subject at hand, and has been trained as an economist which makes it much easier to broaden the debate into politics and diplomacy.

I’ll address a few key elements of the brief – at least those that speak to me the most in terms of research and ongoing work we are engaged in at the national, international and local levels.

First – the ever-provoking “there is no CyberWar” statement, immediately followed by “this is the definition of CyberWar as I see it”… Obviously, with a definition that closely resembles war as defined in the other domains (land, sea, air, space), it’s hard to see how one could state that CyberWar was ever engaged in (or ever will be, for that matter). But the key here is not to treat the Cyber domain as just “another” domain and apply the template of the traditional domains when defining it. Cyber is a game-changer; it is not a domain like any other. It has its own rules: territorial issues are moot here, jurisdiction is a mess, and accessibility is even worse. It’s almost impossible to define what a conflict is in Cyber, what an engagement is in terms of forces colliding, or how aggression is defined. Nevertheless, all of these issues have come up many times over the last decade, and yet some refuse to recognize that on several occasions it was indeed a state or form of warfare.

The second issue is deterrence. Here I almost completely agree with Martin’s approach, which questions whether real deterrence can be applied to the domain. Nevertheless, I do believe that a sustained and proven threat over the opponent’s critical infrastructure, financial systems and base production facilities can serve as a deterrence factor. You do not need missile silo counts to prove deterrence in the Cyber domain; you need sustainable access to critical systems, and proof that you can retain such access even as some vulnerabilities and key access elements are taken off the table by the defensive strategy. For that – enter espionage… With a combination of cyber-domain capabilities and a solid intelligence practice (i.e. gathering as well as proactive work), one side can create a situation where such access to critical elements of the other side’s Cyber domain is kept consistently under surveillance and accessible for modification or sabotage.

Which leads to the last issue, one that has surprisingly raised a lot of eyebrows lately – even from people I consider proficient in the “Art” of international relations and diplomacy: the “legality” of espionage. Face it – espionage has been, and will always be, a fully accepted part of a nation’s strategy. It is accepted at all levels of diplomacy, and by every nation. Everyone knows that everyone else is engaged in it, and puts a lot of resources into making sure their own efforts succeed while trying to frustrate everyone else’s efforts on their own territory. The same applies to the Cyber domain. It’s no big surprise that the US finds itself dealing with a major espionage case (at the commercial level) almost every year, and just think about all the cases in the government and military sectors that are never made public… But have no fear – the other side is being spied on just as well, with skills that do not fall short of (and usually surpass) what the US is subjected to. It’s a fact of life, so stop whining about it (and excuse the Burn Notice cameo).

To conclude – I truly think that dealing with such a young and ever-evolving domain is a great challenge, both technologically and from the diplomacy and international relations aspects of it. And until we have some shape or form of formalized discourse on this domain (such as the efforts put in by NATO, the UN and a few of the world’s largest nations), it’s a free-for-all playground that is going to keep providing us with moral, technological and sociological challenges. BRING IT ON!