
Journal: Ask Slashdot: Incentives to prevent software problems
After the recent CrowdStrike debacle, I got to thinking about software failures and culpability and such, and was wondering if there was a way to incentivize better software practices.
I'm thinking mostly big security breaches, where someone breaks into a system and downloads the personal information of millions of users, or ransomware attacks where an entire hospital system is taken offline, but also straight up software failures such as a Y2K problem that takes an airline offline for days. That sort of thing.
Let's imagine a US federal government agency whose job it is to incentivize best practices in software.
Yeah, I know... another federal agency. Efficiency and incompetence and all that. But read on.
The agency publishes a document outlining a set of software "best practices" that are known to help prevent security breaches. I'm thinking something of the scale of ISO-9000 series for quality of manufacturing. Call it the ISO-Software standard.
The point of the standard is this: it's optional. Being ISO-Software compliant is just a stamp granted by the agency that tells customers that the software has been vetted and known to abide by the best practices. It's like the USDA stamp on beef: it lets you know the product has been inspected and that it is safe, wholesome, and correctly labeled. Like the USDA stamp, it doesn't guarantee you won't get sick, only that the product was produced under best practices.
The agency gives out the stamp but doesn't actually do the compliance audit. That's left up to professionals and companies in the industry that meet the agency's standards, something on the order of the FAA's DER (Designated Engineering Representative) program. The FAA doesn't have the technical expertise to tell whether the electronics of an altimeter comply with the standards, so the company making the device hires one or more DERs to sign off on the technical details.
Here's how the stamp works:
1) You get accredited by the agency after a compliance audit. You can then advertise your product as ISO-Software compliant.
2) If a data breach happens and your software is at fault, you are protected from liability. You followed all the correct procedures, and either something was missed or a new breach technique was invented.
2a) If something was missed, the agency can look at the compliance audit and take appropriate action. DERs with a lot of breaches under their belt can be dropped from the program.
2b) If it's a new type of breach, the agency analyzes the incident the way the NTSB analyzes a plane crash, then updates the standards to account for the new technique.
3) If you release an update to your software, you're still considered compliant. Even if you make major changes and/or rewrite the code, you still retain the stamp.
4) If you have a breach on an updated version of the software:
4a) If the breach exists in the original software, you're protected. You retain the stamp and have no liability for the breach.
4b) If the breach arose because of changes in your software, you immediately lose the stamp *and* can be held liable for the breach. If you want to be compliant again, you have to go through the entire certification process from scratch.
5) The software package can use compliant and non-compliant libraries.
5a) If the breach came from a compliant library, then it's that library's problem.
5b) If the breach came from a non-compliant library, then it's your fault. You should have tested that library more thoroughly.
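The liability rules above can be sketched as a small decision function. This is just a sketch of the logic as described (the names `Breach` and `assess` are made up for illustration, not part of any real standard):

```python
from dataclasses import dataclass

@dataclass
class Breach:
    in_original_code: bool           # was the flaw present in the audited version?
    from_library: bool = False       # did the breach originate in a third-party library?
    library_compliant: bool = False  # if so, was that library itself stamped?

def assess(certified: bool, breach: Breach) -> tuple[str, str]:
    """Return (who is liable, stamp status) under the rules sketched above."""
    if not certified:
        return "vendor liable", "no stamp"          # no stamp, no protection
    if breach.from_library:
        if breach.library_compliant:
            return "library liable", "keep stamp"   # rule 5a: the library's problem
        return "vendor liable", "keep stamp"        # rule 5b: should have tested it
    if breach.in_original_code:
        return "protected", "keep stamp"            # rules 2 and 4a
    return "vendor liable", "stamp revoked"         # rule 4b: flaw introduced by changes
```

Note how the whole scheme hinges on one question: was the flaw present in the version that passed the audit, or introduced afterward?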
As mentioned, this type of compliance stamp is completely optional. It won't interfere with open source projects or labor-of-love packages on GitHub, and it shouldn't affect anything that has a license with a disclaimer.
But the stamp would be a strong selling point for commercial software, and that would incentivize companies to do compliance audits and implement best practices. It would incentivize CEOs and sales departments to spend money making their software safer.
As to the actual compliance requirements, we can start by taking all the data breaches we know about, putting them into categories, and then writing general rules that would have prevented each breach: sanitizing user input, checking for dangling pointers, that sort of thing.
(And of note: the FAA has lots of requirements, such as design documents or code-coverage targets, that are completely useless for software. Every ISO-Software requirement should be a direct consequence of some breach that actually happened, phrased in general terms that would have prevented that breach.)
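As an illustration of what a rule in the "sanitize user input" category prevents, here's a minimal Python sketch using a parameterized query instead of string interpolation (the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

user_input = "alice' OR '1'='1"  # a classic SQL-injection attempt

# Unsafe: string interpolation lets the input rewrite the query itself.
# rows = conn.execute(f"SELECT email FROM users WHERE name = '{user_input}'")

# Safe: a parameterized query treats the input purely as data.
rows = conn.execute("SELECT email FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # [] -- the injection string matches no user name
```

A compliance rule phrased in general terms ("all externally supplied data must be passed as parameters, never concatenated into queries") would have prevented a whole category of historical breaches.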