Dept. of Energy Breach: What Went Wrong & Key Takeaways

Dec 17, 2013 By Sarah Vonnegut

The Department of Energy (DOE) has released more details about the July 2013 DOE Employee Data Repository (DOEInfo) incident, in which the Personally Identifiable Information (PII) of at least 100,000 – and possibly as many as 150,000 – past and current federal employees was exposed.

According to the 28-page review conducted by Gregory H. Friedman, the DOE’s inspector general, the leaked details included full names, Social Security numbers, birth dates and places, security questions and answers, education records and even details of employee disabilities.

What Went Wrong?

In the case of the DOE breach, an easier question to answer might be “what didn’t go wrong?” Noting extensive failures, Friedman concluded that the incident was the result of a “combination of the technical and managerial problems” that “set the stage for individuals with malicious intent to access the system with what appeared to be relative ease.”

The following were identified as glaring technical errors that contributed to the kind of environment that almost invited a breach:

  • The frequent use of complete Social Security numbers and other PII without encryption, a practice that runs counter to Federal guidance and key industry best practices.
  • Allowing direct web access to highly sensitive information and systems lacking adequate security controls. Friedman noted that email access at the DOE was more secure than the DOEInfo system.
  • Failing to securely integrate the Department’s Management Integration System (MIS) with the DOEInfo system, and never conducting a thorough review of its security controls despite the system operating for over 10 years prior to the breach.
  • A failure to remediate known vulnerabilities in its systems in a reasonable amount of time.
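The first failure above – storing full Social Security numbers and other PII unencrypted – is avoidable with field-level encryption at rest. As a minimal illustrative sketch (not the DOE’s actual architecture), the Go standard library’s AES-GCM can encrypt a sensitive field before it ever reaches a database; in practice the key would be loaded from an HSM or key-management service, never generated or hardcoded in the application:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// encryptPII encrypts a sensitive field (e.g., an SSN) with AES-256-GCM,
// prepending the random nonce so the ciphertext is self-describing.
func encryptPII(key []byte, plaintext string) (string, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}
	ct := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
	return hex.EncodeToString(ct), nil
}

// decryptPII reverses encryptPII: it splits off the nonce and
// authenticates the ciphertext before returning the plaintext.
func decryptPII(key []byte, encoded string) (string, error) {
	data, err := hex.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	if len(data) < gcm.NonceSize() {
		return "", fmt.Errorf("ciphertext too short")
	}
	nonce, ct := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	pt, err := gcm.Open(nil, nonce, ct, nil)
	if err != nil {
		return "", err
	}
	return string(pt), nil
}

func main() {
	key := make([]byte, 32) // illustrative only: load from a KMS in production
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	enc, _ := encryptPII(key, "123-45-6789")
	dec, _ := decryptPII(key, enc)
	fmt.Println(dec == "123-45-6789")
}
```

GCM is used here because it authenticates as well as encrypts, so tampered ciphertext is rejected at decryption rather than silently returning garbage.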

More critical than the technical errors, though, was the failure of upper management to perform security assessments or correct these issues, even as they were pointed out. Competing security priorities, blurred lines of responsibility between departments, and the lack of any real sense of urgency regarding security concerns are also noted in the report as having helped create the vulnerable environment.

In fact, Friedman noted in the report that members of the department’s Office of the Chief Information Officer failed to implement essential security patches, sometimes for years.

“We found that the Department had not taken appropriate action to remediate known vulnerabilities in its systems either through patches, system enhancements, or upgrades. Critical security vulnerabilities in certain software supporting the MIS application had not been patched or otherwise hardened for a number of years. Specifically, an operating system utility and third-party development application that were installed on the MIS server had not been updated since early 2011.”

What’s worse, the vulnerability exploited in July had been identified nearly six months earlier, in January, and a $4,200 software update purchased in March could have closed it entirely…had the update been implemented. Instead, the financial impact of the breach is around $3.7 million, covering a year of credit monitoring for the victims as well as losses in employee productivity.

Key Takeaways for CIOs and CISOs:

Friedman had several recommendations for the DOE in light of the breach, and we’ve broken them down so they apply more widely to security professionals at any organization:

  • All databases should be secured and the information within them encrypted. If multiple systems are connected, the integration should be done securely, in a way that prevents information leakage as well as unauthorized access.
  • Known vulnerabilities should be fixed ASAP – there’s no reason to be sitting on unused solutions the way the DOE did.
  • Responsibilities of those in charge of security and fixing vulnerabilities need to be clearly defined and adhered to. Overlaps in responsibility are usually much better than gaps in responsibility.
  • A process and central authority for shutting down and repairing compromised/vulnerable systems should be implemented, at least on a temporary basis, to ensure the safety of vulnerable data.

You can read the whole report here.
