By Adam Kolawa and Yakov Fain
June 13, 2005 10:00 AM EDT
The vast majority of corporate developers believe that application security is not their concern, assuming that the network and engineering groups will build a secure environment for them. But what about the security of the application itself? Are you ready for the code audit?
Application Security Isn't Getting the Attention It Deserves
When most people in the corporate world talk about "security," they mean the security of the network, operating system, and servers. Organizations that want to protect their systems against hacker attacks invest a lot of time, effort, and money ensuring that these three components are secure. Without this secure foundation, systems cannot operate securely.
However, even if the network, server, and operating system are 100% secure, vulnerabilities in the application itself leave a system just as exposed to dangerous attacks as an unprotected network, operating system, or server would. In fact, a security vulnerability in an application can allow an attacker to access privileged data, delete critical data, and even break into the system and operate at the same privilege level as the application - essentially giving the attacker the power to destroy the entire system. Consequently, the security of the application is even more important than the security of the system on which it runs. Building an insecure application on top of a secure network, OS, and server is akin to building an elaborate fortress but leaving the main entryway wide open and unguarded.
There is a simple explanation for why this happens: tight project deadlines and unawareness of the potential consequences. Project managers believe that answering that annoying review from the corporate security group takes care of everything. Not every project is reviewed by experienced enterprise architects, and even when it is, Java security is rarely one of a Java architect's major skills.
Most Developers Don't Know How To Write Secure Code
Most developers have no idea what writing secure code involves. Most have never even thought about it - probably because the corporate world virtually ignores application security - and very few have ever had to try. Some developers have heard that buffer overflows and SQL injection can cause security problems, but that's about the extent of most developers' security knowledge.
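To make the one vulnerability class most developers have heard of concrete, here is a minimal sketch of a classic SQL injection (class and method names are invented for illustration). Concatenating untrusted input into a query string lets an attacker rewrite the query's logic; the standard fix is a parameterized query via java.sql.PreparedStatement.

```java
// Hypothetical illustration: SQL injection via string concatenation.
public class InjectionDemo {
    // Vulnerable: builds SQL by concatenating untrusted user input.
    static String unsafeQuery(String user) {
        return "SELECT * FROM accounts WHERE name = '" + user + "'";
    }

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // The attacker's input changes the query's meaning - the WHERE
        // clause now matches every row in the table:
        System.out.println(unsafeQuery(attack));
        // prints: SELECT * FROM accounts WHERE name = 'x' OR '1'='1'

        // The fix is a parameterized query, e.g. with java.sql.PreparedStatement:
        //   PreparedStatement ps = conn.prepareStatement(
        //       "SELECT * FROM accounts WHERE name = ?");
        //   ps.setString(1, user); // bound as data, never parsed as SQL
    }
}
```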
When developers are asked to make applications secure, they start trying to find security bugs in the application - after it has been built. For example, they might look for dangerous method calls and remove them, run an application vulnerability scanner, or deploy a security mechanism such as mod_security or an application firewall to prevent exploitation. However, this bug-finding strategy isn't sufficient to meet today's complex security requirements, such as those mandated by the Sarbanes-Oxley Act. Testing problems out of the application is both inefficient and largely ineffective; independent, end-of-process bug finding alone can't and won't expose all possible security vulnerabilities.
With penetration testing, which involves trying to mimic an attacker's actions and checking if any tested scenarios result in security breaches, security vulnerabilities will go unnoticed unless the tester has the skill and luck to design the precise attack scenarios required to expose them. Considering that there are thousands, if not millions, of possible scenarios for even a basic application, odds are some vulnerabilities will be overlooked. However, it takes only one security vulnerability to compromise the security of an application and its related systems - opening the door to attacks, as well as fines for not complying with security mandates.
Furthermore, penetration testing can fail to catch the most dangerous types of problems. Assume you have a Web application to test, and this application has a backdoor that grants admin privileges to anyone who knows to supply a secret argument, like h4x0rzRgr8 = true. A typical penetration test against a Web application uses known exploits and sends modified requests to probe for common coding problems. It could take years of such testing to stumble on this kind of vulnerability; even an expert security analyst would have a tough time trying to exploit it. What about a difficult-to-reach section of code in an error-handling routine that performs an unsafe database query? Or the lack of an effective audit trail for monitoring security functions? These kinds of problems are often entirely overlooked by even a diligent penetration test.
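A hypothetical sketch of the kind of backdoor described above (all names invented) shows why black-box testing is nearly hopeless here: the tester would have to guess the exact parameter name and value, while a code review or a policy-driven static scan spots the check immediately.

```java
import java.util.Map;

// Hypothetical backdoor: a secret request parameter grants admin access.
public class BackdoorDemo {
    static boolean isAdmin(Map<String, String> requestParams, String user) {
        if ("true".equals(requestParams.get("h4x0rzRgr8"))) {
            return true; // the backdoor: secret parameter bypasses the real check
        }
        return "admin".equals(user); // the legitimate check
    }

    public static void main(String[] args) {
        // A penetration test sending ordinary malformed requests never
        // triggers the first branch; only the magic parameter does.
        System.out.println(isAdmin(Map.of("h4x0rzRgr8", "true"), "guest")); // true
        System.out.println(isAdmin(Map.of(), "guest"));                     // false
    }
}
```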
Other popular end-of-process security testing techniques - such as using static analysis to check whether code follows a standard set of security rules like "Do not use java.util.Random; use java.security.SecureRandom" - might expose some of the vulnerabilities that penetration testing overlooks, but they come with their own share of problems. One weakness is that these patterns don't consider the nuances of actual operation; they don't factor in business rules or general security principles. If your Web application lets a customer see a competitor's account by adding one to the session ID, that is a very serious problem. However, this kind of problem escapes static analysis because it doesn't involve a dangerous function call. Security assessment, in this sense, isn't always a bug to find, but a design problem to verify.

Another problem is false positives. Static analysis can't actually exploit vulnerabilities; it can only report potential problems. Consequently, the developer or tester must review every reported error and determine whether it indicates a true problem or a false positive. Sophisticated static analysis methods can improve accuracy, but ultimately a significant amount of time and resources must be spent reviewing and investigating reported problems and determining which actually need to be corrected.
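The coding-standard rule quoted above is worth a short sketch. java.util.Random is a predictable linear congruential generator - an attacker who observes its output can reconstruct its internal state and predict every subsequent "random" value - while java.security.SecureRandom draws from a cryptographically strong source. The class and method names below are invented for illustration.

```java
import java.security.SecureRandom;
import java.util.Random;

// Sketch of the "Do not use java.util.Random" rule for security tokens.
public class TokenDemo {
    // Flagged by the rule: predictable, unsuitable for session IDs or tokens.
    static String weakToken() {
        return Long.toHexString(new Random().nextLong());
    }

    // Compliant: 16 bytes of cryptographically strong randomness, hex-encoded.
    static String strongToken() {
        byte[] bytes = new byte[16];
        new SecureRandom().nextBytes(bytes);
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(strongToken()); // e.g. 32 hex characters, unpredictable
    }
}
```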
Complying with Sarbanes-Oxley
Public companies are now required by the Sarbanes-Oxley Act (SOX) to implement and verify effective security for their financial and record-keeping applications. To comply, it's necessary to establish an effective application security policy and verify that the policy is actually implemented in the code and reflected in the system's functionality. By security policy we mean a document that defines best-practice secure coding standards, secure application design rules, security testing benchmarks, privacy requirements, and any custom security requirements.
Under SOX, having a security policy has evolved from a "nice-to-have" into an essential business requirement. Companies that don't establish and implement effective security policies can now be found negligent and face significant fines for failing to comply. Many developers and managers still treat security the way they treat quality: they get as much of it as they can to the best of their knowledge, but often settle short of completeness. However, systems that aren't fully secure aren't acceptable under SOX. Development managers who don't recognize this could expose their companies to tremendous liability.
Defining a security policy alone doesn't satisfy SOX requirements; the items defined in the policy must actually be implemented in the code. In other words, the specification must truly be treated as a set of requirements - not as suggestions or guidelines, as is typically the case with functionality specifications. The specifications defined in the security policy must be implemented...no ifs, ands, or buts. If your corporate information group doesn't have the resources to enforce this, your architecture group may have to take on this responsibility.
What's required to ensure that the security policy is implemented in the code? First, code should be statically analyzed to enforce the organization's security policy on the client and server sides. Static analysis typically looks for potentially dangerous function call patterns and tries to infer if they represent security vulnerabilities (for instance, to determine if code has unvalidated inputs, and if unvalidated inputs are passed to specific functions that can be vulnerable to attack).
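The input-validation pattern that such static analysis looks for can be sketched as follows (identifier names are invented): untrusted input is checked against a whitelist before it is allowed to reach any sensitive sink, such as a database query.

```java
import java.util.regex.Pattern;

// Minimal input-validation sketch: whitelist untrusted input before use.
public class InputValidator {
    // Accept only short alphanumeric account IDs; reject everything else,
    // including SQL metacharacters, spaces, and over-long values.
    private static final Pattern ACCOUNT_ID = Pattern.compile("[A-Za-z0-9]{1,16}");

    static boolean isValidAccountId(String input) {
        return input != null && ACCOUNT_ID.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidAccountId("ACCT42"));   // true: passes whitelist
        System.out.println(isValidAccountId("1 OR 1=1")); // false: spaces and '='
    }
}
```

A scanner enforcing the policy would flag any code path where request data flows into a query without passing through a validator like this one.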
Next, thorough automated penetration testing should be done to confirm that the security policy has been implemented correctly and operates properly. In addition, security should be verified through unit testing, runtime error detection, and SQL monitoring.