
The History of Application Security Testing – Part 2

Last week, we discussed the early history of computer security, tracing back to World War II and the “bombe”. This week, we’re looking back to the origins of the internet and how application security testing became an invaluable part of enterprise security. Here we go!

Read Part 1 of The History of Application Security Testing HERE

The Dot Com Era: Emerging Technologies + Emerging Hacking Techniques


By the late 1980s, the internet was slowly catching on when the Morris Worm took the then much smaller internet by storm, infecting over 6,000 systems around the world. The Worm, which was unleashed in 1988, prompted action:  DARPA created CERT, an organization dedicated to solving cybersecurity challenges, and firewall security tools were first released.

With the 90s came a flood of both internet access and new websites. In 1990, the first HTML code was written and reliance on the internet by both enterprises and individuals began to increase significantly. Internet security also got better, though Static Code Analysis tools still hadn’t improved since Lint.


One of the first software vulnerability scanners was SATAN, released in 1995 (it even shipped with a script to rename it to the less provocative SANTA). The tool was free, making it widely available for both legitimate and malicious uses – and it saw plenty of both. It paved the way for newer tools like Nmap and Metasploit, released in the late 90s and early 2000s, respectively. Black-box testing with these scanners became the primary way for organizations to probe their own systems for vulnerabilities – and also a primary way for hackers to find vulnerabilities in everyone else’s.


1995 was a big year for the WWW. Netscape was the key player in the game at the time, and the company released both JavaScript and the original SSL protocol that year. JavaScript let developers create dynamic pages by running embedded scripts client-side, while the SSL protocol provided a secure way to send information over public networks.


Netscape also launched the first bug bounty in ‘95, organized to help find vulnerabilities in its products before they could be exploited maliciously. Unfortunately, bug bounties didn’t catch on as quickly as they could have, becoming common only in the mid-2000s. The next year saw the introduction of Flash animations, making the web brighter and more dynamic – for better and for worse.


By 1999, web applications using HTML, JavaScript, and Flash began appearing online. Web applications made the internet a living, breathing thing – and opened it up to a hacking bonanza. SQL injection was first publicly described in 1998, and injection techniques quickly became the most widely used attack method as web apps gained popularity. To help combat the rise in hacking, security tools evolved from testing single source files to scanning entire codebases, thanks to integration with build platforms.
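To see why injection became so dominant, here is a minimal sketch of the classic SQL injection tautology attack (the table, credentials, and payload are invented for illustration):

```python
import sqlite3

# Hypothetical in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so the input can rewrite the query's logic.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Safe: parameterized queries keep data out of the SQL grammar.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# The classic tautology payload bypasses the password check entirely.
print(len(login_vulnerable("alice", "' OR '1'='1")))  # 1 row: logged in without the password
print(len(login_safe("alice", "' OR '1'='1")))        # 0 rows: payload treated as a literal string
```

The vulnerable version turns the payload into `... AND password = '' OR '1'='1'`, which is always true; the parameterized version never lets data become SQL.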


2000 – 2010: Application Security and Hacking Become Mainstream


The turn of the century was an intense time for both the hacking and security communities. The second generation of Static Code Analysis tools began popping up, with commercial solutions finally becoming viable for organizations with large codebases. But as new security tools were created to help organizations prevent vulnerabilities such as those in the OWASP Top 10, responsibility for security shifted away from developers and onto dedicated security teams, as well as testing teams later in the SDLC.


As a result, developers wrote their code and moved on to other projects, only to have to return and fix the original code once security testing was done – often right before a scheduled release. In addition, these tools produced high false-positive rates, and the noise had to be weeded out manually. Many bugs were therefore not fixed, or not fixed completely, before the software shipped. Patches became a viable way to release fixed code, but quickly proved unsustainable for large enterprises.


The Open Web Application Security Project, or OWASP, was established in 2001 to help improve awareness of web application security. OWASP released its first OWASP Top Ten list of AppSec risks in 2003. The list, updated every few years since, is an awareness document that many organizations have used to mitigate the riskiest vulnerability classes. OWASP also released its first OWASP Testing Guide in 2003, in an effort to help developers and pentesters better secure their code.


In 2002, Bill Gates sent out his famous “Trustworthy Computing” memo, driven by embarrassing and costly vulnerabilities like Code Red. The note laid out a need for systems to be inherently secure, available, and reliable, and set the stage for Microsoft’s Security Development Lifecycle (SDL) approach to development and security processes. In 2004, the payment card industry released the first PCI DSS, the first regulatory compliance standard to promote the use of automated security testing. Further standards followed soon after.


During this period, development teams were growing and more and more code was being written – but security staff remained small and siloed in most organizations. The inability to scale security testing became the number one obstacle to adopting application security testing solutions.


With security spread so thin, these years were filled with major incidents: the ‘friendly’ Samy worm of 2005, which used Cross-Site Scripting (XSS) to infect over one million MySpace profiles; AOL’s 2006 exposure of search data tied to over 650,000 users; and, worst of all, the 2007 TJX breach that exposed over 94 million credit card numbers.
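The stored-XSS class the Samy worm belonged to comes down to rendering user input as markup. A minimal sketch of the flaw and its standard fix (the function names and payload below are invented for illustration):

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: user input is interpolated straight into the page markup,
    # so any <script> tag one visitor posts runs in other visitors' browsers.
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment: str) -> str:
    # Safe: html.escape turns markup characters into inert entities,
    # so the payload is displayed as text instead of executed.
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>stealCookies()</script>"  # hypothetical attack payload
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # rendered as harmless &lt;script&gt;... text
```

Output encoding like this (plus input validation and, later, Content Security Policy) is what eventually blunted this attack class.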


2010 to Today: Source Code Analysis 3.0 & DevSecOps


The last decade has seen even more major shifts in the security industry. Applications, both web and mobile, have become critical components of organizations, and the sensitive data contained within them is more important than ever to secure.


To keep pace with growing development demands, new methodologies took hold. The Agile Manifesto was written in 2001, but only started catching on in the late 2000s. As Agile approaches spread among organizations, DevOps became the most widely adopted development methodology. Built on collaboration between teams, DevOps allows for faster, more efficient development. But as we’ve seen in more recent hacks, development at speed and scale MUST incorporate security in order to survive.


In the last five years, the term DevSecOps has been used more and more to describe this new way of tightly integrating security within the SDLC. Responsibility for security has “shifted left”, into the first stages of development, with the security team closely involved throughout the process.


We’re still dealing with major breaches – Target, Adobe, Snapchat, and Evernote in the past several years – but with more structured compliance standards and better communication between teams, Application Security Testing is beginning to look up.


Security testing tools that integrate with complex development and testing platforms and processes are gaining traction, as they facilitate agile development without putting an extra burden on non-security personnel. This new generation of application security testing solutions combines the best of the first two generations: the ability to analyze full codebases while minimizing false positives. And by tightly integrating with development environments, the new generation of Application Security Testing goes above and beyond what we previously thought possible.


What will the next decade bring? We’d love to hear your insights below!


Click here to read “8 Chrome Extensions Every Security Pro Needs”