President of Fortify Software Public Sector, an HP company, Kelly Collins has a mantra for software security: Find, fix, and (true to the name of her firm) fortify.
For too long, software security assurance has slipped under the radar. But now, Collins said, the government is beginning to move to patch up "gaping holes in security policy" and to pay as close attention to software security as it does to hardware and network security.
A 30-year veteran of the IT-vendor industry, Collins spoke about recent federal security legislation, the threat of cyber war and Fortify’s acquisition by HP. Fortify Federal CTO Rob Roy also joined the conversation to lend his expertise.
The New New Internet: New federal legislation for software security was recently signed into law. Does this mean that software security’s time has finally come?
Kelly Collins: Having the new software assurance provisions in the National Defense Authorization Act of 2011 is certainly an important milestone. Working at Fortify, many of us have felt for a while that because of the work we in the software security industry do and the realities we’re exposed to, we’ve accumulated a great deal of knowledge that could possibly save lives. What we’ve been trying to communicate to people within the government and federal agencies is all fairly straightforward, but it’s just technical enough that software security hasn’t gotten the policy and governance attention we think it deserves. That’s beginning to change with initiatives like those in the NDAA legislation. We’ve long believed that if people truly understood the very serious security issues at stake where software is concerned, and the technical underpinnings behind them, the approaches we advocate would be strongly embraced by anyone inside or outside of government. And we’re seeing that more and more each day. One reason why is the rising concern over what we call the Advanced Persistent Threat, where nation states such as Russia, China, North Korea, Middle Eastern countries and others are aggressively targeting our defense and critical infrastructure systems here in the U.S. as well as those of our allies.
The New New Internet: Why did Congress feel it was necessary to introduce these policies now?
Kelly Collins: There’s a quote from the federal government’s first Chief Technology Officer, Aneesh Chopra, that I think sums up the value of software security as a distinct practice. When he was asked the proverbial question, ‘What keeps you up at night?’ he said, ‘It is not the recent denial of service attacks that have occurred within the government that have brought down government websites. It is sloppy software implementation that has left holes open for hacking.’ We would never characterize what’s happening as ‘sloppy implementation,’ but until this legislation came along, there hadn’t been much in the way of official, focused policy requirements for software developers to build more secure software. As a consequence, security holes are inadvertently left open that third parties can then exploit to steal data, introduce malware, disrupt the efficacy of software systems, and place mission objectives and personnel at risk. In effect, the defense committees in Congress acknowledged that there were some gaping holes in security policy, and they set about addressing those by adding very specific language on software security assurance to the Defense Authorization bill. Congress finally understood that, for our nation’s defense systems at least, the same level of governance we already have in place over things like hardware and network security needs to be applied to software as well.
The New New Internet: Can you give us some specifics about these new requirements?
Kelly Collins: One of the key elements in the measure has to do with automating the process for identifying and remediating potential security vulnerabilities in software. In our work, we’ve come up with an easy-to-remember expression that captures the essence of what that entails. We say that organizations need to find, fix and fortify their applications against attack. This means that when you perform an assessment on software that somebody has developed, the first step is to scan the code. Here’s where you’ll find all of the places where there are critical vulnerabilities in the software that need to be fixed in order to ensure that no one can exploit them in ways that can bring harm. Now, with this new security guidance for software on Defense Department systems, that find-fix-fortify requirement is being passed down to developers, most of whom, I should point out, are third-party systems integrators, who will now also have to follow these guidelines. While I can imagine that some in the development community will look on these security policies as just one more hoop to jump through, the good news is that there are technologies that can help developers and security teams find those vulnerabilities and fix them in a cost-effective way so that it becomes much, much harder for anyone to attack the software layer that is running mission critical applications.
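To make the "find" and "fix" steps concrete, here is a minimal sketch of the kind of flaw a code scan typically flags, alongside its remediation. The table schema, function names, and payload are hypothetical illustrations, not anything from the interview or from Fortify's tooling; the example uses a classic SQL injection, one of the most common vulnerability classes static analyzers report.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # FLAW (what a scan would "find"): building SQL by string
    # concatenation lets attacker-controlled input rewrite the query.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_fixed(conn, username):
    # FIX: a parameterized query keeps user input out of the SQL text,
    # so the payload is matched as a literal string, not executed.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Small in-memory demo
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_vulnerable(conn, payload)))  # 2 -- injection dumps every row
print(len(find_user_fixed(conn, payload)))       # 0 -- payload treated as a literal
```

The "fortify" step is then policy: making sure every query in the codebase follows the fixed pattern, which is exactly the kind of rule automated scanning can enforce across a large third-party codebase.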
The New New Internet: You are saying the technologies to analyze the software already exist, but it is about creating the will among policymakers to make that a requirement?
Kelly Collins: Exactly. You hit the nail on the head in the sense that there has not been strong policy guidance on the software layer, and that has been true for the last 20 years. At least in our research, the software assurance language in this year’s NDAA is the first time prescriptive language has been offered to the Department of Defense to help protect and harden the software layer that runs our national defense systems. You’ll see that oftentimes, for system components to get clearance to operate on a government network, they need to go through a certification and accreditation process, or C&A. There have been pretty strict requirements for the hardware and the network for some time now, and yet it seemed in our industry that software was given a pass: the same expectations for security weren’t being levied upon the software layer as they were upon the hardware and the network. But the reality today is that when somebody breaches a system, you don’t hear the refrain, ‘My network was stolen.’ Instead, you hear that thousands of highly sensitive data records were stolen or that a worm disrupted some vital computer network. When hackers break into a system, they are not sitting on the network firewall like Humpty Dumpty watching packets fly by. They are there to get into the software system so that they can do damage at that time, plant something that will create damage later, or surreptitiously take data from the system through that software layer. Point of fact: it makes much more sense to protect the software layer first, from the inside out, so that should intruders get past perimeter defenses, it would be extremely difficult for them to get at critical data inside applications or to insert a virus that would affect operations.
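The "inside out" protection Collins describes means the application itself validates what it is handed, rather than trusting that the firewall stopped anything hostile. The sketch below is a hypothetical illustration (the directory path and function name are invented for this example, not taken from the interview): an in-application check that rejects path-traversal input, a defense that still holds even after an intruder is already past the perimeter.

```python
from pathlib import Path

# Hypothetical application data directory (assumption for this sketch).
ALLOWED_ROOT = Path("/srv/app/reports").resolve()

def open_report(requested: str) -> Path:
    """Resolve a user-supplied filename and refuse anything outside the
    allowed directory. Because the check lives inside the application,
    it does not depend on firewalls or network ACLs holding."""
    candidate = (ALLOWED_ROOT / requested).resolve()
    # Reject traversal attempts such as "../../etc/passwd".
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"refusing path outside report root: {requested}")
    return candidate

print(open_report("q3.txt"))  # a path safely inside /srv/app/reports
```

`Path.is_relative_to` requires Python 3.9 or later; on older versions the same check can be written by comparing the resolved path's parents against the allowed root.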