Secure Development Is Much Easier Than You Think


We plan our features, write our elegant and efficient code, and test it to make sure it does everything the customer would want. Then, after the application ships and everyone involved pats each other on the back for a job well done, we start getting reports — sometimes within days, sometimes much later on — that there is something wrong with the application. It lets people harvest personal data, or exposes the customer to compromise, or worse yet, it is wormable and can be used to attack other devices with the app on them. It appears we have forgotten to include some simple, easily integrated security development practices.

This scenario happens all too often, largely because it is still rare for developers to consider security as a feature. Instead it is an afterthought (often, so far after that it arises only when customers start to complain). While security may not be the feature that wins you an award for your newest phone or tablet application, it is the one that will keep your customers from banging down your proverbial door with complaints about theft and fraud. Based on recent estimates, fixing security issues during development returns upwards of four times as much as waiting until a vulnerability ships and requires an update.

The good news is that automated code-time and test-time security activities are easy to integrate into development practices. While security should ideally be considered throughout the development lifecycle, from design to sustained engineering and response, I will focus on code-time (implementation) and test-time (verification) activities that are simple to apply and pay huge dividends by delivering more-resilient applications and fewer exploitable security bugs.

Automation Is a Developer's (and Tester's) Best Friend

While a solid process, such as an application security process based on ISO 27034-1, is a giant leap forward for the development of trustworthy applications, developers tend to struggle with the one small step of applying basic techniques in a rigorous way without tools and automation. At Microsoft, our development teams rely on several tools to get the job done in each part of the Security Development Lifecycle (SDL), ranging from process templates in Visual Studio (so you know when you've finished the security work for a given set of targets) to binary analyzers, fuzzers, and attack surface analyzers.

Tools are good; automated tools are better. If a developer must stop coding or break out of the IDE to run a test, the odds are high it won't be run, or at least won't be run consistently. Most IDEs these days have plug-ins or hard-wired security capabilities; it is a matter of knowing the capability exists and using it. In fact, all of the switches I call out below can be enabled in a project template in Visual Studio, and set as defaults in gcc environments, so it is a "one and done" solution for developers who build a custom template for all future projects. Test tools are a bit less integrated, but some snap right into the IDE as plug-ins (such as MiniFuzz), and some can be made to run as part of a test harness, like Ubuntu's Python scripts.

Security at Code-Time

When it comes to code-time security, compiler flags are your best friend for identifying low-hanging fruit and are still vastly underutilized for the benefit they provide.

Visual Studio

It is important to automate your build to use the right flags and switches. Visual Studio provides security switches that mitigate a wide variety of known, dangerous issues, including the most common types of attacks seen against client (non-Web) software: memory corruption and error-handling flaws. To effectively mitigate memory-corruption issues, there are two tacks to take. The first, and easiest, is to randomize the stack and heap so that even if an app triggers an overflow condition, the randomized location of the program in memory makes it extremely difficult to predict the execution path for the injected instructions. Randomizing the heap and stack, and mitigating other common exploits, can be done simply by enabling the /GS, /DYNAMICBASE, /NXCOMPAT, and /SAFESEH options (and, for managed code, the APTCA attribute) in Visual Studio. To break these down a bit, let's examine each in turn:

Buffer Security Check

The /GS switch provides a "speed bump" or cookie between the buffer and the return address. If an overflow writes over the return address, it will have to overwrite the cookie put in between it and the buffer, resulting in a new stack layout:

  • Function parameters
  • Function return address
  • Frame pointer
  • Cookie
  • Exception handler frame
  • Locally declared variables and buffers
  • Callee save registers

Note that /GS is enabled by default in Visual Studio 2012.
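To make the cookie mechanism concrete, here is a minimal sketch of the stack-smashing pattern /GS detects (the file, function, and buffer names are mine, purely for illustration):

    #define _CRT_SECURE_NO_WARNINGS   /* quiet the strcpy deprecation warning */
    #include <stdio.h>
    #include <string.h>

    static void greet(const char *input)
    {
        char name[16];
        strcpy(name, input);          /* unbounded copy: the classic overflow */
        printf("Hello, %s\n", name);
    }   /* with /GS, a corrupted cookie is caught here, before the return executes */

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);
        return 0;
    }

Built with cl /GS overflow.c and run with an argument longer than 16 bytes, the cookie check terminates the process rather than letting the overwritten return address be used.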

/DYNAMICBASE modifies the header of an executable to indicate whether the application should be randomly rebased at load time by the OS. This random rebase is better known as Address Space Layout Randomization (ASLR). The option also implies "/FIXED:NO," which generates a relocation section in the executable.

/NXCOMPAT marks an executable as compatible with Data Execution Prevention (DEP). This option matters for x86 binaries only; non-x86 versions of desktop Windows (e.g., x64, ARM) always enforce DEP when the executable is not running in WOW64 mode. There is little value in /NXCOMPAT without the complementary /DYNAMICBASE switch, as the two work in concert.

Exception Handler Safety

/SAFESEH allows software-enforced DEP to perform additional checks on exception-handling mechanisms in Windows. If the program's image files are built with Safe Structured Exception Handling (SafeSEH), software-enforced DEP ensures that, before an exception is dispatched, the exception handler is registered in the function table located within the image file.
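Taken together, these options fit on a single build line. A sketch for a 32-bit (x86) build, with app.c as a placeholder source file (compiler options go before /link, linker options after):

    cl /GS app.c /link /DYNAMICBASE /NXCOMPAT /SAFESEH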

Finally, there is an easy way! Visual Studio 2012 introduced the /sdl switch, which enables these protections and more automatically. In addition to enabling the flags already discussed, /sdl causes a number of compiler warnings to be treated as errors; the Microsoft SDL treats these warnings as mandatory for native code:

Warning  Switch   Description
C4146    /we4146  A unary minus operator was applied to an unsigned type, resulting in an unsigned result
C4308    /we4308  A negative integral constant converted to unsigned type, resulting in a possibly meaningless result
C4532    /we4532  Use of "continue", "break" or "goto" keywords in a __finally/finally block has undefined behavior during abnormal termination
C4533    /we4533  Code initializing a variable will not be executed
C4700    /we4700  Use of an uninitialized local variable
C4703    /we4703  Potential use of an uninitialized local pointer variable
C4789    /we4789  Buffer overrun when specific C runtime (CRT) functions are used
C4995    /we4995  Use of a function marked with pragma deprecated
C4996    /we4996  Use of a function marked as deprecated
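As an illustration of the stricter mode, here is a minimal sketch that builds with only a warning normally but is rejected under /sdl (the file and variable names are mine):

    #include <stdio.h>

    int main(void)
    {
        int total;                /* declared but never initialized */
        printf("%d\n", total);    /* C4700: uninitialized local variable used */
        return 0;
    }

cl sdl_demo.c reports C4700 as a warning and still produces a binary; cl /sdl sdl_demo.c promotes it to an error and stops the build.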

GCC

While the preceding are Microsoft-centric approaches to security at compile time, other compilers are not lacking such features. GCC, for example, has a suite of capabilities providing much the same coverage, from stack protection to read-only memory sections to ASLR. Let's look at each in turn.

gcc -fstack-protector

-fstack-protector inserts a randomized stack "canary" that detects stack overflows before a function returns, reducing the chances of arbitrary code execution via a corrupted return address.
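Incidentally, the overflow sketch shown earlier for /GS trips this canary in exactly the same way when built with gcc -fstack-protector, so the same test case can do double duty across toolchains.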

RELRO

ld -z relro hardens ELF programs against loader memory-area overwrites: the loader marks the relocation-table areas for symbols resolved at load time as read-only ("read-only relocations"), reducing GOT-overwrite-style memory-corruption attacks.
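A quick way to confirm the hardening took effect is to inspect the program headers; a sketch, assuming a placeholder app.c:

    gcc -Wl,-z,relro -o app app.c
    readelf -l app | grep GNU_RELRO

The GNU_RELRO segment appears in the output only when the flag was passed (-Wl,-z,relro is how the ld option is spelled when linking through gcc).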

PIE

ld -pie / gcc -fPIE: The gcc equivalent of /DYNAMICBASE is -fPIE, which provides ASLR support. This protects against "return-to-text" attacks and provides good mitigation of memory-corruption attacks.
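A position-independent executable can be verified the same way; its ELF header type reads DYN rather than EXEC. A sketch, using the same placeholder app.c:

    gcc -fPIE -pie -o app app.c
    readelf -h app | grep Type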

FORTIFY_SOURCE

gcc -D_FORTIFY_SOURCE=2 -O2: Programs built with -D_FORTIFY_SOURCE=2 (and -O1 or higher) gain several compile-time and runtime protections in glibc:

  • Replaces unbounded calls to sprintf, strcpy, and similar functions with length-checked variants when the size of the destination buffer is known (protects against memory overflows; see the sketch after this list).
  • Stops format string "%n" attacks when the format string is in a writable memory segment.
  • Requires checking various important function return codes and arguments (e.g., system, write, open).
  • Requires explicit file mask when creating new files.
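Here is the promised sketch of the first protection: an always-overflowing strcpy that the checked variant catches at runtime (the file name and buffer size are mine):

    #include <string.h>

    int main(void)
    {
        char buf[8];
        /* with _FORTIFY_SOURCE, glibc's checked strcpy variant knows
           sizeof(buf) here and aborts with "buffer overflow detected" */
        strcpy(buf, "this string is far longer than eight bytes");
        return 0;
    }

Built with gcc -O2 -D_FORTIFY_SOURCE=2 fortify.c, the process terminates at the copy instead of silently corrupting memory; without the define, the overflow may go unnoticed.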
-Wformat-security

gcc -Wformat -Wformat-security: If -Wformat is specified, the compiler also warns about uses of format functions that represent possible security problems. At present, this warns about calls to printf and scanf functions where the format string is not a string literal and there are no format arguments, as in printf(foo);. This could be a security hole if the format string came from untrusted input and contains "%n."
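A minimal sketch of the pattern these warnings flag (the names are mine):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc > 1) {
            printf(argv[1]);        /* flagged: user input used as the format string */
            printf("%s", argv[1]);  /* safe: user input passed as data, not format */
        }
        return 0;
    }

Taken together, a hardened GCC build line combining the options in this section might look like gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector -fPIE -pie -Wformat -Wformat-security -Wl,-z,relro -o app app.c.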

Security at Test Time

BinScope enables developers to verify that the basic set of security controls was applied at build time to Visual Studio projects. For ELF binaries, Ubuntu provides a useful set of Python scripts for regression-testing the same properties.

Testing isn't just about functionality anymore; it is about whether an application plays nicely with the system on which it runs and whether it does what the user expects, both in what they see and in what the app does beneath the covers. Attack Surface Analyzer lets a tester see whether the app installs and uninstalls cleanly, whether it opens up vulnerabilities in the file system or RPC interfaces, and whether it creates security vulnerabilities in the registry. None of these things is usually seen by the user, but all are vital for an app to be a good citizen on the user's system. Ubuntu's previously mentioned Python scripts provide similar coverage for known filesystem issues such as symlink and hardlink race conditions, and help reduce the need for setuid applications.

Testing for known and easy issues such as compile-time switches and good behavior with system permissions is all well and good, but often the most insidious vulnerabilities are the ones you don't expect. Testing with a good fuzzing library can help you find issues you didn't know existed: logic flaws, race conditions, and memory leaks. Fuzzing is not only good for security; it can improve general reliability as well. There are numerous free fuzzers available for both UNIX and Windows platforms. On the Windows side, MiniFuzz integrates well with Visual Studio and is a good introductory fuzzer for developers and testers who have never fuzzed before. For UNIX, there is a wide variety of fuzzers, some of which require a significant investment of time to integrate into your test and build environments. My preferred choices among the commonly available ones are SPIKE and its variants (SPIKEFile, SPIKE Proxy), Sharefuzz, and lxapi.
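To give a feel for what these tools drive, here is a minimal sketch of a file-parsing target of the shape file fuzzers such as MiniFuzz or SPIKEFile expect: the fuzzer launches the program repeatedly with a mutated file as its argument and watches for crashes (the four-byte header check is a made-up placeholder):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f)
            return 1;

        unsigned char header[4];
        /* the parsing logic under test: a crash anywhere below on
           malformed input is exactly what the fuzzer is hunting for */
        if (fread(header, 1, sizeof header, f) == sizeof header
            && header[0] == 'D' && header[1] == 'D') {
            /* ... parse the rest of the (hypothetical) format ... */
        }

        fclose(f);
        return 0;
    }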

The Future is Bright…

When you are writing code or testing apps, there are numerous ways across platforms to make your app more secure. I have covered only a small portion of the possibilities here, and only two aspects of the application lifecycle. Other automated solutions, such as threat modeling and attack surface analysis, can make your app more secure by design. At the very least, this article should give you a list of things to do to ensure your app has few, if any, of the coding issues commonly exploited today, and it should give your tests a clear area of focus for finding security issues before you release.


Arjuna Shunn is a Principal Security Group Program Manager on the Secure Development Customer Solutions (SDCS) Team in Microsoft's Trustworthy Computing (TwC) Group.

