May 14, 2014 / Jared DeMott

Understanding how Fuzzing Relates to a Vulnerability like Heartbleed

Fuzzing is a security-focused testing technique in which a compiled program is executed so that its attack surface can be tested as it actually runs.  The attack surface consists of the components of code that accept user input.  Since this is the most exposed part of the code, it should be rigorously tested with anomalous data.  During testing, the application is monitored for known bad states, such as an application crash, often the result of an out-of-bounds memory access.  If a crash is found, the input and application state are stored for later review, and the flaw is entered as a security bug for developers to repair.  A high-level view of fuzzing is shown in Figure 1.


Figure 1: Fuzzing Overview
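
To make the loop in Figure 1 concrete, here is a minimal sketch in C: it mutates a valid seed input one byte at a time, runs the target on each mutant, and saves any input that makes the target die on a signal.  The target binary ./parser and the seed data are hypothetical stand-ins; a real fuzzer would also record the application state at the time of the crash.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical valid sample input for the program under test. */
        unsigned char seed[] = "GET /index.html HTTP/1.1\r\n\r\n";
        for (int i = 0; i < 1000; i++) {
            unsigned char buf[sizeof seed];
            memcpy(buf, seed, sizeof seed);
            buf[rand() % (sizeof seed - 1)] = (unsigned char)rand(); /* corrupt one byte */

            FILE *f = fopen("testcase.bin", "wb");
            if (!f)
                return 1;
            fwrite(buf, 1, sizeof seed - 1, f);
            fclose(f);

            pid_t pid = fork();
            if (pid == 0) { /* child: run the (hypothetical) target on the mutant */
                execl("./parser", "./parser", "testcase.bin", (char *)NULL);
                _exit(127);
            }
            int status;
            waitpid(pid, &status, 0);
            if (WIFSIGNALED(status)) { /* crash: keep the input for later review */
                printf("iteration %d: died on signal %d, saving testcase\n",
                       i, WTERMSIG(status));
                rename("testcase.bin", "crash.bin");
                break;
            }
        }
        return 0;
    }

Real fuzzers are far smarter about input generation, but the mutate-run-monitor-save skeleton is the same.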

Fuzzing is often an effective way to find bugs that are missed by manual and automated code reviews.  Fuzzing also finds real flaws: unlike automated code-auditing tools, it has little problem with false positives, because every finding comes with a concrete input that triggers the bad state.  However, fuzzing tends to achieve only shallow penetration of complex protocols, and thus relatively weak code coverage.  This is particularly true for code paths that require specialized input unlikely to appear in automatically generated test cases.  That is why both static and dynamic testing approaches are critical to any well-formed secure development lifecycle (SDL).

I have not been briefed on exactly how the OpenSSL code is secured.  The Heartbleed wiki page specifies how and when the bug was introduced into the code base, but it does not discuss how often code reviews and runtime testing are conducted.  It would not be surprising to find out that a vulnerable version of the OpenSSL code had been fuzzed and yet the Heartbleed bug had been missed.  Information disclosure bugs are easy to miss when fuzzing, because there may be no crash associated with the flaw.

However, crashes are not the only exceptional condition fuzzers can look for.  Access violations have been the popular target in the past because they can sometimes lead to host-compromise exploits.  But with the rise of low-level systems protections like ASLR (address space layout randomization), leaking memory contents is a growing field of offensive study, because leaked contents help attackers defeat the ASLR protection.  The Heartbleed disclosure was particularly serious, since cryptographic material could also potentially be leaked.
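
Why does a leak defeat ASLR?  ASLR randomizes only a module's load base; the layout inside the module is fixed.  So disclosing a single code pointer lets an attacker recover the base by subtraction, as in this small illustration (all addresses and offsets here are made up):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uintptr_t leaked_ptr   = 0x7f3a9c245678; /* function address revealed by an info leak */
        uintptr_t known_offset = 0x45678;        /* that function's fixed offset within the library */
        uintptr_t lib_base     = leaked_ptr - known_offset; /* randomized load base recovered */
        printf("library base: 0x%llx\n", (unsigned long long)lib_base);
        return 0;
    }

With the base in hand, the attacker knows the address of every function and gadget in the module, as if ASLR were not enabled.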

There are ways that fuzzing tools can observe potential memory disclosures.  For example, in the case of a network protocol like TLS (which OpenSSL implements), the size of each return packet can be recorded, and anything larger than expected should be reported.  Likewise, if the returned data is of a well-known format, unknown data should be treated with suspicion.  Detecting leaked memory is also commonly accomplished by using a debug heap that places each allocation next to an unmapped page, as shown in Figure 2.
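
The first of those checks, watching response sizes, bolts onto an existing network fuzzer in a few lines.  The helper below is a hypothetical sketch; expected_max would come from the fuzzer's protocol model, e.g. a heartbeat response should be no larger than the payload actually sent plus fixed header and padding overhead.  The debug-heap approach in Figure 2 follows.

    #include <stddef.h>
    #include <stdio.h>

    /* Flag replies that are larger than the protocol model says they can be. */
    static int check_response_size(size_t reply_len, size_t expected_max) {
        if (reply_len > expected_max) {
            fprintf(stderr, "possible info leak: %zu-byte reply, expected at most %zu\n",
                    reply_len, expected_max);
            return 1; /* save this test case for manual review */
        }
        return 0;
    }

    int main(void) {
        /* A Heartbleed-style exchange: we sent a 1-byte payload but the peer
         * echoed back 16 KB.  The 19-byte allowance for header and minimum
         * padding is an assumption for illustration. */
        check_response_size(16384, 1 + 19);
        return 0;
    }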


Figure 2: Debug Heap

If data is read beyond the bounds of one allocation, a page fault will occur.  Not all information leaks read outside the bounds of an allocated buffer, but this approach is a good start.  Most operating systems have a debug heap that can be optionally enabled.  It seems clear that the heartbeat extension was not fuzzed using a debug heap, or this bug would have been detected.  The beauty of using a debug heap for fuzzing is that your existing tools can continue to focus on catching crashes, and they will now find this bug type as well.  The tradeoff is that the application, and therefore the fuzzer, will run slower.
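
Here is a minimal sketch of the idea in Figure 2, assuming a POSIX system: each allocation is placed flush against an inaccessible (PROT_NONE) guard page, so the first out-of-bounds read or write faults immediately.  Production debug heaps (Windows PageHeap enabled via gflags, Electric Fence, AddressSanitizer) do much more, such as honoring alignment, which this sketch ignores.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Allocate 'size' bytes ending exactly where an inaccessible page begins. */
    static void *guard_alloc(size_t size) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data_pages = (size + page - 1) / page;   /* pages holding user data */
        size_t total = (data_pages + 1) * page;         /* plus one guard page */

        unsigned char *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return NULL;

        /* Make the final page the "guard": any touch of it faults. */
        mprotect(base + data_pages * page, page, PROT_NONE);
        return base + data_pages * page - size;
    }

    int main(void) {
        char *buf = guard_alloc(16);
        memset(buf, 'A', 16);      /* in bounds: fine */
        puts("in-bounds access ok");
        char c = buf[16];          /* one byte past the end: immediate SIGSEGV */
        printf("%c\n", c);         /* never reached under the guard heap */
        return 0;
    }

Under a fuzzer, that fault surfaces as a crash, which the existing crash monitor already knows how to catch and triage.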

So was the heartbeat extension not fuzzed at all?  Or was improper technique or tooling used?  Either is plausible; in practice, both happen often.

  • Overworked testers create fuzzing tools that test most of a protocol, but do not go back and retool the fuzzers for each code check-in.  To combat this possibility, existing tools that test SSL could be used.  Oftentimes proprietary fuzzers are required for special-purpose code, but SSL is a well-documented standard.
  • Improper technique is just as likely: the people charged with testing are smart developers or testers, but may lack critical security training.  That is why many major projects now have a central security team that helps with security testing such as fuzzing.

To close the loop on this bug, the OpenSSL team should determine where the testing failure occurred and address the issue to ensure a similar situation does not recur.

2 Comments

  1. Sanjay- securitylearner / May 14 2014 11:14 pm

    Good to know about the debug heap option. I too was curious whether static analysis could have detected this kind of bug (Codenomicon has a commercial fuzzer, so perhaps they used it to detect the bug) … as a result, I threw a few thoughts together at http://lnkd.in/b36VCp6

  2. Sami / Codenomicon / May 27 2014 5:52 am

    Hi Sanjay, we call our fuzzers model-based, or generational, fuzzers, meaning that they have a built-in protocol model. Breaking the relationships of TLV structures is one of the regular scenarios included for all applicable protocols, and it is doable because the fuzzer ‘understands’ the protocol. The latest generation of fuzzers has checks for return values, amplification, weak crypto, faulty authentication implementations (well, this is related to return values), etc. That is what caught Heartbleed.
