During the last two months, it's safe to say that the biggest stories in technology have been the series of illegal hacks at companies holding millions of user records. In some instances, such as the Sony and Citigroup break-ins, tens of millions of records were accessed. Swift response by law enforcement appears to have disrupted the LulzSec band of hackers who were behind the highest-profile jobs, although we won't know for a while how much their activities were affected.
While law enforcement has done the heavy lifting, it appears the business community has done, well, nothing at all to protect its data. Many of the breaches were simple, garden-variety hacks that would have been prevented by disciplined application of best practices. Given that these hacks were nothing new (every month seems to bring forth a new one), you'd have to conclude that many businesses don't view themselves as having an obligation to their customers to make sure data is secure.
I believe this peculiar impunity is due to a lack of legislative will to impose strong penalties on companies whose data is compromised. When regulation to this effect appears, companies will finally be induced to take security seriously (but only for fear of punishment, alas). In other business domains where fines can today be imposed for data violations by third parties, security has greatly improved. For example, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) opened the eyes of the healthcare industry to the need for confidentiality and protection of information. And the industry responded with expensive initiatives that put an end, by and large, to inappropriate data access.
The financial industry has long argued that imposing fines on it doubly victimizes it and would interfere with the rapid movement of trade and commerce. Because of this resistance to accountability, the principal response required of these companies by legislation is to notify account holders of a breach. In real life, these notifications sometimes occur weeks after the breach was detected, thereby depriving the customer of the ability to monitor suspicious activity when it's most likely to happen.
The other legally imposed response is to remove any fraudulent charges from customers' accounts. Most of those charges, however, are bounced back to the original vendor, and not swallowed by the institution. To add insult to injury, the financial institutions charge the vendors fees for these charge-backs. So, the institution whose lax security caused the problem shifts the cost to other parties and profits from the process. Could there be any less incentive to enforce security?
Arshad Noor, the CTO of StrongAuth, a company that specializes in key-based security, points out that the situation is actually worse than it appears. Because there is no clearinghouse for information on break-ins and no requirement that breaches be publicized (in the event no customer records were accessed), it is certain that far more successful hacks are occurring than we know about. And chances are good that many firms that think no customer records were accessed are wrong. In addition, as Noor points out, customers have no realistic recourse against the institution whose breach may seriously inconvenience them.
The system tacitly says that it is willing to accept the current situation, thereby forcing everyone to deal with the fallout from the hacks by themselves. Lovely, eh?
Moving to software development, the situation is not wholly different. Certainly, we've become attuned enough to problems like buffer overflow and SQL injection to know how to protect ourselves. (Successfully, it appears, as most recent big hacks were not the result of either of these defects.) But other practices continue to make software easy pickings. For example, one key point of entry is that many software packages continue to ship with a default password at installation. If vendors can formulate unique license numbers, why can they not use that license number, or some other unique value, as the default password?
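To make the suggestion concrete, here is a minimal sketch of how a per-install default password could be derived from a license key. This is purely illustrative, not any vendor's actual scheme; the function name and the vendor salt are hypothetical:

```python
import hashlib

def default_password(license_key: str, vendor_salt: str = "example-salt") -> str:
    """Derive a unique default password from an install's license key.

    The vendor salt (hypothetical here) is mixed in so the password
    cannot be computed by anyone who merely sees the license number.
    """
    digest = hashlib.sha256((vendor_salt + license_key).encode("utf-8")).hexdigest()
    # Truncate to a short, typeable value; the installer should still
    # force the user to change it on first login.
    return digest[:12]
```

Two different license keys yield two different default passwords, so an attacker can no longer try one well-known default against every installation of the product.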
On a more systematic scale, companies that sell developer tools to reduce the attack surface of released software have never thrived, no matter how appalling the hacks reported in the news. For example, Fortify Software mostly muddled along for seven years before being bought by HP. And other ISVs in the security development tools business have met similarly unremarkable fates.
Their lack of success stems from the difficulty of convincing managers to invest the dollars and programmer time to adopt these products and address the security issues they raise. Only managers badly burned by previous hacks will value these investments. And, odd as it might sound, there are too few managers who have been burned this way. Rather, many cling to the naïve belief that because they haven't been hacked yet, they're handling security correctly, or at least well enough. Good luck with that!