This week I presented at Black Hat USA. The talk is titled “Poacher turned gatekeeper: lessons learned from eight years of breaking hypervisors”. The main points were:
- Describe the attack surface of Type 1 and Type 2 hypervisors
- Show that despite not being 100% bulletproof, hypervisors are still the best usable way to isolate potentially malicious code
- Describe a few generic methods to harden a hypervisor
- Discuss four new VirtualBox vulnerabilities
- Discuss DMA attacks against DeepSafe
The whitepaper is here, enjoy.
Tom Sutcliffe and Thomas Coudray
You walk into a coffee shop and take a seat. While waiting for your coffee, you take out your smartphone and start playing a game you downloaded the other day. Later, you go to work and check your email in the elevator. Without you knowing, an attacker has just gained a foothold in your corporate network and is steadily infecting all your colleagues’ smartphones too.
We don’t talk about Android much here on the Bromium Labs Blog, but now and again we like to tinker. Recently my colleague Thomas Coudray and I have been looking at an Android remote code execution vulnerability to see how much of a problem it is in real-world usage.
While privilege-escalation techniques are common on Android (and form the basis for the common practice of ‘rooting’ a device), remote code execution is a rarer and much more dangerous type of vulnerability. It allows an attacker to run code of their choosing on a user’s device without their knowledge or permission. This bug was particularly interesting because it appeared to still be exploitable even on a fully-patched latest-model Android device, a full 18 months after it was fixed. We wanted to see if this was true and if so, how much effort was required to exploit it. We found that the scenario described above is an all-too-real possibility.
The Bromium Labs team got together and came up with a summary of notable exploitation trends that we observed in the first 6 months of 2014. There were some interesting takeaways:
- Internet Explorer set a record for the highest number of security patches in over a decade, and that feat was accomplished in just the first 6 months of 2014!
- As release timelines for new versions of Internet Explorer shrink, the intervals between security patches have shrunk as well. Expect more to come.
- In 2013, Oracle Java was the notorious favorite of malware authors and zero-day attacks, and exploit kits had a field day with it. Notably, there were no reported zero-day exploits targeting Java in H1 2014.
- Adobe Flash continued to be exploited via zero days and also provided attackers with new avenues to exploit the browser (Internet Explorer).
- Attackers developed novel ways to attack browsers, leveraging ‘ActionScript spraying’ to bypass ASLR. This technique was used by several zero-day exploits in the wild.
It’s evident that attackers continue to shift focus among ubiquitous internet-facing applications, but there’s a common theme throughout – attacking the end users. Will Java attacks continue to decline this year? Will attackers continue to focus on exploiting Internet Explorer? We’ll soon find out.
The full report is available to download here.
“Lasker does not play chess, he plays dominoes.” – David Janowski, 1910
Alice sees her “browser” as the computer, not the network or operating system. This trend will only get stronger as BYOD takes hold. We’re in a brave new world, where traditional security models based on clear boundaries break down. Modern browsers’ cornerstone security approach, same-origin policy, entails significant complexity and controls only a limited subset of cross-domain interactions. Highly granular isolation within this context ends up breaking existing web-apps. In an odd way, one can see flashbacks and parallels to the old MS-DOS era.
However, the state of the art in exploitation is now more dominoes than chess. Eve can bypass the sandbox via techniques ranging from kernel exploits to plugins to get to Alice. We now take a look at what happens with just a few tricks from an evergreen bag.
Let’s install the latest:
Alice then confidently proceeds to browse to her favorite videos. Unfortunately, Eve has compromised the site and planted an exploit with a custom payload. Conventional layers of defense including AV, sandboxes and firewalls fail to stop the attack:
If Alice had been protected by vSentry, the attack would have been captured within a micro-VM. Her SOC team would then be able to trace Eve’s tracks via LAVA:
Perhaps, Chrome should dance with Bromium.
Training is an important part of the secure development lifecycle. It’s something EVERY security-conscious organization should be engaged in. But good training that is relevant to your developers and testers, as well as your security engineers and researchers, is hard to find. That’s why I developed a full two-day course called “Application Security: for Hackers and Developers”. In the course I cover the 4 pillars of application security: code auditing, fuzzing, reverse engineering, and exploit development. 5 years ago, when I realized there was no course that taught all 4 subjects in a balanced and deeply technical manner, I resolved to create one.
Over the last 4 years, I have offered the course to hundreds of students, who afterwards were equipped and energized to face their security threats and research goals. To give people a small taste of this course, I’ve partnered with BlackHat to offer a free 35min webinar on application security. The link to join the web training is here: https://www.blackhat.com/html/webcast/06192014-appsec-overview-deep-dive-and-trends.html The event happens on June 19th, 2014.
I hope you’ll join us for the webcast, and perhaps I’ll see you in Las Vegas for the next offering of the full, hands-on course:
The training is Aug 2-3 or Aug 4-5, 2014. Sign up, and keep securing your code!
If you see a text and/or HTML document on your Desktop called HOW_TO_DECRYPT with the following contents, then you have unfortunately become the victim of another piece of crypto-ransomware. Recently, several of our field engineers encountered this ransomware encrypting files on victim machines. It is being distributed via Java drive-by downloads, and it’s likely that more victims could be targeted with this new attack vector. We were able to capture the sample inside a micro-VM implemented in our product, so we have all the files and traffic involved in the attack. This allows us to skip the Java exploit and first-layer dropper for now and focus on the actual malware dropped. If we find something noteworthy in other parts of the attack, we’ll post a follow-up article.
Fuzzing is a security-focused testing technique in which a compiled program is executed so that the attack surface can be tested as it actually runs. The attack surface consists of the components of code that accept user input. Since this is the most vulnerable part of the code, it should be rigorously tested with anomalous data. During testing, the application is monitored for known bad states, such as an application crash, often the result of an out-of-bounds memory access. If a crash is found, the input and application state are stored for later review. Such a flaw will be entered as a security bug for developers to repair. A high-level view of fuzzing is shown in Figure 1.
Figure 1: Fuzzing Overview
Fuzzing is often an effective way to find bugs that are missed by manual and automated code reviews. Fuzzing also finds real flaws: unlike automated code-auditing tools, it has little problem with false positives. However, fuzzing tends to achieve only shallow penetration of complex protocols, and thus relatively weak code coverage. This is particularly true for code paths that require specialized input unlikely to be present in automatically generated test cases. That is why both static and dynamic testing approaches are critical to any well-formed secure development lifecycle (SDL).
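The loop in Figure 1 can be sketched in a few lines of Python. This is a minimal mutation fuzzer, not a production tool; the `parse` target and its bug are hypothetical, invented purely to illustrate how a crash is caught and recorded.

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Produce a test case by randomly overwriting bytes in a seed input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Run the target against mutated inputs, saving any that crash it."""
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:
            # A "known bad state": record the input and the failure for triage.
            crashes.append((case, exc))
    return crashes

# Hypothetical target with an input-dependent bug, for illustration only.
def parse(data: bytes):
    if data[0] > 0x7F:
        raise MemoryError("simulated out-of-bounds read")

random.seed(1)  # fixed seed so the run is reproducible
found = fuzz(parse, seed=b"\x00" * 8)
```

A real harness would run the target out-of-process and watch for faults rather than Python exceptions, but the structure of the loop is the same.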
I have not been briefed on exactly how the OpenSSL code is secured. The Heartbleed wiki page specifies how and when the bug was introduced into the code base. However, the wiki page does not discuss how often code reviews and runtime testing are conducted. It would not be surprising to find out that a vulnerable version of the OpenSSL code had been fuzzed and yet the Heartbleed bug had been missed. Information disclosure bugs are easy to miss when fuzzing, because there may be no crash associated with this bug.
However, crashes are not the only exceptional condition fuzzers can look for. Access violations have been popular to search for in the past because they can sometimes lead to host-compromise exploits. But with the rise of low-level system protections like ASLR (address space layout randomization), leaking memory contents is a growing field of offensive study. Leaking the contents of memory helps attackers defeat the ASLR protection. The Heartbleed disclosure was particularly exciting because cryptographic material could also potentially be leaked.
There are ways that fuzzing tools can observe potential memory leaks. For example, in the case of a network protocol like OpenSSL’s, the size of return packets could be recorded, and anything larger than expected should be reported. Likewise, if the returned data is of a well-known format, unknown data should also be treated with suspicion. Also, detecting leaked memory is commonly accomplished by using a debug heap that places each allocation next to an unmapped page, as shown in Figure 2.
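The response-size heuristic above can be sketched as a simple oracle a fuzzing harness might apply to each reply. The function and its `expected_overhead` parameter are assumptions for illustration; real protocols would need a per-message model of the expected reply size.

```python
def check_response_size(request_payload_len: int, response: bytes,
                        expected_overhead: int = 16) -> bool:
    """Return True if the response size looks normal, False if suspicious.

    For an echo-style exchange like the TLS heartbeat, the reply should be
    roughly the payload that was sent plus a small fixed overhead; anything
    larger suggests the server may be leaking adjacent memory.
    """
    return len(response) <= request_payload_len + expected_overhead

# Normal echo: 3-byte payload, small reply within the allowed overhead.
assert check_response_size(3, b"x" * 19)

# Heartbleed-style exchange: 3-byte payload, ~16 KB reply gets flagged.
assert not check_response_size(3, b"x" * 0x4000)
```

A fuzzer that logged every flagged exchange, rather than only crashes, would have had a chance of surfacing Heartbleed automatically.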
Figure 2: Debug Heap
If data is read beyond the scope of one allocation, a page fault will occur. Not all information leaks occur outside the bounds of an allocated buffer, but this approach is a good start. Most operating systems have a debug heap that can be optionally enabled. It is clear that the heartbeat extension was not fuzzed using a debug heap, or else this bug would have been detected. The beauty of using debug heaps for fuzzing is that your existing tools can continue to focus on catching crashes, and they will now find this bug type as well. The tradeoff is that the application, and therefore the fuzzer, will run slower.
So was the heartbeat extension not fuzzed at all? Or was improper technique or tooling used? Either is likely; both often happen.
- Overworked testers create fuzzing tools that test most of a protocol, but do not go back and retool the fuzzers for each code check-in. To combat this possibility, existing tools that test SSL could be used. Oftentimes proprietary fuzzers are required for special-purpose code, but SSL is a well-documented standard.
- Improper technique is just as likely: the people charged with testing are smart developers or testers, but may lack critical security training. That is why many major projects now have a central security team that helps with security testing such as fuzzing.
To close the loop on this bug, the OpenSSL team should determine where the testing failure occurred and address the issue to ensure a similar situation does not occur again.