1. Public vulnerability information (e.g. CVE, disclosure info, etc.) provides data about the activities of the hacker/bugfinder/security researcher community; it tells us nothing about the absolute or relative level of vulnerability of software.

On the contrary, I think the effort required to find bugs and the rate and volume at which they are discovered are the best indicators of the relative level of security of a software package. I will agree that this doesn't tell us the absolute number of vulnerabilities left. There's always the chance that the researchers found the very last bug in a package on the 31st while doing their Month of x Bugs.
The past is not necessarily a predictor of the future, but the past may be a predictor of the more recent past. Or maybe "correlator" is the better word. I believe the data is all there for anyone who wants to, say, take the bug counts for packages from 2005 and see how they correlate with the bug counts in 2006. At least for known bugs.
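Something like the following is all it would take, once someone pulls per-package disclosure counts out of CVE/NVD. The file name and column names here are hypothetical; this is just a sketch of the comparison, not a real data set:

```python
# Sketch: do per-package bug counts in one year track the next year's?
# Assumes a hypothetical CSV ("cve_counts.csv") with columns
# package, cves_2005, cves_2006, built from public disclosure data.
import csv
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

counts_2005, counts_2006 = [], []
with open("cve_counts.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts_2005.append(int(row["cves_2005"]))
        counts_2006.append(int(row["cves_2006"]))

print("2005 vs. 2006 correlation:", round(pearson(counts_2005, counts_2006), 3))
```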
2. The defining aspect of a software program's vulnerable state is the number of vulnerabilities (known or unknown) that exist in the software. It is not how hard programmers try not to program vulnerabilities nor how hard others try to find the vulnerabilities.

The first sentence is a fine definition. The second sentence seems to be trying to distance itself from the first, though. If you try hard to create fewer vulnerabilities (and have some talent and experience at it), don't you think you will create fewer vulnerabilities? And if you missed some, and others find them and you fix them, don't you mostly end up with fewer vulnerabilities?
So no, using the definition of "vulnerable" to mean there is at least one vulnerability left, there's probably no amount of effort you can expend that will get that count to zero. But don't we want software packages that have fewer vulnerabilities, if we can't have zero?
Because if there's no value to that, I know lots of people who could be doing something else with their time.
3. The contribution of a patch to the vulnerable state of a software program is a tradeoff between the specific vulnerability (or set of vulnerabilities) it fixes and the potential new vulnerabilities it introduces.

Sure. Do you mean to imply that patches often introduce new problems? I'm kinda under the impression that's relatively uncommon, but I'd be willing to be proven wrong.
4. There is currently no known measurement that determines or predicts the vulnerable state of a software program.

False. If you use the definition of "vulnerable" meaning that there is at least one vulnerability, then I have a program that will read any other program of some minimum complexity, and return the probability that it is vulnerable. The answer is usually 1. I'm very confident in my low false-positive rate.
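In that facetious spirit, the "measurement" might look something like this (the minimum-complexity threshold is, of course, made up):

```python
# Tongue-in-cheek vulnerability "predictor": any program above some
# minimum complexity is almost certainly vulnerable somewhere.
def probability_vulnerable(source_code: str, min_lines: int = 100) -> float:
    lines = source_code.count("\n") + 1
    return 1.0 if lines >= min_lines else 0.5
```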
Facetiousness aside, I agree that there is no metric or program to find or even count all of the vulnerabilities in a program. Maybe not even most of them.
But there are programs, services and consulting that will find "some". Is there value in finding "some"? Is it useful to know how hard it was to find "some"?
5. We don't know how many "undercover" vulnerabilities are possessed and/or in use by the bad guys, therefore we must develop solutions that don't rely on known vulnerabilities for protection.

Once again, I agree with your opening statement, and am left wondering where you got that particular conclusion. Why not "therefore we must find and fix as many vulnerabilities as possible" or "therefore we must infiltrate the underground and gather intelligence"?
6. The single best thing any developer can do today to assist in protecting a software program is to systematically, comprehensively describe how the software is intended to operate in machine (and preferably human) readable language.

As a QA guy, I'd have to say that would be really, really awesome. Yes, can I have that please? But if I had that, isn't that the same as programmers trying hard, à la your point 2?
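For what it's worth, one flavor of "machine-readable intended operation" is executable pre- and postconditions that tools (tests, fuzzers, static analyzers) can check against the code. The function and limits below are purely illustrative, not anything from the original post:

```python
# A sketch of intended behavior written as checkable assertions.
# copy_username and its limits are hypothetical, for illustration only.
def copy_username(dest_size: int, username: str) -> str:
    """Copy a username into a fixed-size field."""
    # Preconditions: what callers are expected to guarantee.
    assert 0 < dest_size <= 64, "destination field is 1..64 characters"
    assert username.isprintable(), "usernames are printable characters only"

    result = username[:dest_size]

    # Postcondition: the promise the rest of the program can rely on.
    assert len(result) <= dest_size, "output never exceeds the field size"
    return result
```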