Recent posts about security protocols like SSL and DNSSEC got me thinking. In an orthogonal direction.
You know what's wrong with protocols like SSL, SSH and PGP/GPG? They let users pick the stupid option. Bruce Schneier has trained me to call this the "dancing pigs" problem, though I'm too lazy to go look up the guy Bruce says he got it from.
It goes like this: "There's a problem with the security gizmo; click OK to see the dancing pigs."
Unless you're a security researcher who lives for the chance to investigate a malicious server, you just click OK to see the dancing pigs.
All my kids have been computer users since before they could read. They don't know what the dialog says, but they learn to click OK to see the dancing pigs. Even when they do learn how to read, they aren't necessarily so concerned with expired certificates or DNS name mismatches.
The reason these protocols all suck is that they let just anybody make the security policy decisions. Stupid.
(OK, so it's not the protocols/file formats themselves, just every app that implements them.)
So, what am I suggesting instead?
Dan Kaminsky wrote an excellent chapter on spoofing for me for the first edition of Hack Proofing Your Network. I hope to have it available for download one of these days. In it, he makes a perfect case that reliability == security. If your service is going down all the time, you are being trained to live with unreliability and to ignore strange problems. Your judgment is shot.
So much for the idea that a DoS isn't a security problem.
As a QA guy, I really want bugs to crash, and crash hard. Crash dump or core file too, please. The alternative is random behavior, unreproducible issues, silently caught exceptions that I really needed to know about, and maybe memory scribbling that shows up as random symptoms. You don't want a "kinda tolerable, not that big a deal, haven't seen it in a while, I guess it's gone" kind of problem.
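Here's the difference in miniature, as a Python sketch: the first version swallows the error and limps along, so the bug surfaces later as somebody else's mystery; the second crashes on the spot, with a traceback, which is exactly what I want.

```python
import json

blob = '{"user": "alice"'   # deliberately malformed input

# Anti-pattern: swallow the error and limp along. The failure surfaces
# later, far from the cause, as an unreproducible mystery.
try:
    record = json.loads(blob)
except Exception:
    record = None

# Fail-fast: let it crash here, loudly, while the evidence still exists.
record = json.loads(blob)   # raises json.JSONDecodeError -- a hard, reproducible stop
```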
So security protocols need to break and break hard.
If there's a problem with the certificate, then just drop the connection. Don't prompt the user. Don't try to rate how bad a problem it is. Don't toss a yield sign in the corner, don't show me a key with fewer teeth. Just stop.
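To make it concrete, here's a minimal sketch of fail-closed TLS using Python's ssl module (the host name is just a placeholder): verification is mandatory, and any certificate problem kills the connection instead of asking the user anything.

```python
import socket
import ssl

def fetch(host: str, port: int = 443) -> ssl.SSLSocket:
    # The default context verifies the chain and the hostname;
    # there is no "click OK anyway" path here.
    ctx = ssl.create_default_context()
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    try:
        return ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
    except ssl.SSLCertVerificationError as e:
        # Expired cert, name mismatch, unknown CA: same answer. Drop it.
        raise SystemExit(f"Certificate problem, connection dropped: {e}")

conn = fetch("example.com")   # placeholder host
print(conn.getpeercert()["subject"])
conn.close()
```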
If the SSH server keys have changed, don't connect. Don't offer to connect anyway. Don't ask if I want to save over my keys. Don't tell me the command-line switch to disable my security.
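Same idea in code, sketched here with the paramiko library (host and username are placeholders): reject unknown keys outright, and treat a changed key as a hard stop rather than a prompt.

```python
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()  # trust only keys already in known_hosts

# Unknown host key? Refuse. No prompt, no "add anyway" option.
client.set_missing_host_key_policy(paramiko.RejectPolicy())

try:
    client.connect("example.com", username="alice")  # placeholder host/user
except paramiko.BadHostKeyException as e:
    # The server's key changed. Don't connect, don't offer to overwrite.
    raise SystemExit(f"Host key changed, refusing to connect: {e}")
except paramiko.SSHException as e:
    raise SystemExit(f"Refusing to connect: {e}")
```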
If the GPG email signature doesn't verify, don't let me read it anyway. Don't invite me to keep searching keyservers until I happen to find one with keys that agree.
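A sketch of the same policy for mail, assuming a plain gpg binary on the path and a hypothetical clearsigned message.asc file: if the exit code says the signature didn't verify, the message never gets displayed. Period.

```python
import subprocess
import sys

def read_signed(path: str) -> str:
    # gpg exits nonzero when the signature fails to verify, for any reason.
    result = subprocess.run(["gpg", "--verify", path],
                            capture_output=True, text=True)
    if result.returncode != 0:
        # Bad signature, missing key, expired key: same answer. No message for you.
        sys.exit(f"Signature did not verify; not showing the message.\n{result.stderr}")
    with open(path) as f:
        return f.read()

print(read_signed("message.asc"))  # placeholder file name
```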
Why? Because if it breaks properly, people will be forced to get someone competent to fix it. And they will HAVE to fix it.
Examples: if someone's SSL cert expires, right now they can ignore it for a little while, or tell people to click OK, and so on. Do it my way, and it breaks entirely, and the person who should have renewed the cert does so, right now. Don't get me started on self-signed certs. Or say you've blown away your server's SSH keys: today you figure it's no big deal, you just tell everyone to accept the new ones. Do this enough, and what have you trained your users to do? If instead SSH refused to work at all, how much more careful would you be about restoring the original keys?
But this is painful for people? That's the point. People learn through pain. Some things should be punished. Some events should be disruptive.
People should be trained to take security seriously.