Sunday, June 08, 2008

Little Brother

I just finished reading Little Brother by Cory Doctorow while on a plane to Seattle for a Windows Secrets meetup.

There are a few audiences one might rate this book against. Probably the only fair one is the one Cory wrote for, young adult readers who need an introduction to electronic civil rights (and civil rights in general, for that matter.) For that audience, I think he has succeeded admirably. I will make my copy available to my kids, and see if any of them have an opinion.

To be sure, the book tries to indoctrinate readers to the cyber libertarian way of thinking. Since I happen to agree with that doctrine, I have no problem with that. (And yes, I gave up fighting the use of "cyber". I lose.)

Another audience I might rate this book against is the one I put myself in. Middle-aged infosec people. Perhaps with a little amateur writer thrown in. I still recommend the book, but now I have to start breaking out caveats and picking nits.

Spoilers ahoy.

First off, how's the tech? This is a sliding scale. Compared to the vast majority of the books in the world, Cory's technical accuracy is quite high. There are extreme ends of this scale. For example, Dan Brown (The Da Vinci Code author) writes with basically zero tech accuracy. Amazingly good, page-turning drama. Horrible tech. So Dan's down at the great writing, lousy tech corner.

If I may give my ego a backhanded stroke for a moment, I place myself up at the opposite corner. In the Stealing the Network series, I went way out of my way to make my tech 100% accurate. I also acknowledge that my writing probably sucks, so I like to think of myself as the anti-Dan Brown. Mercifully, my books are shelved in the Computer section of book stores.

Cory's writing in Little Brother is good and his tech is very good (for a not-specifically-tech, non-hacking book). So he's in the upper-right quadrant of the graph.

But of course I'm compelled to point out specific problems. Cory sacrifices some accuracy for plot in a few key places. And appropriately so, I think. The plot flows better this way. Biggest example is the RFID rewriting. The majority of the tags are not rewritable. Cory has kids running around doing non-contact rewrites of FasTrak and other cheap RFID tags. Doesn't work in real life. Nor, I believe, in the near future.

Speaking of time, I can't recall spotting anything in the book that would indicate a specific year. I'm sure that's intentional. I've had my books described as being 10 minutes into the future. I think Cory's at 60 minutes. It reads like now plus 5 to 10 years.

Cory's writing also snags in a few places. (Keep in mind, just because I can spot someone else doing it doesn't mean I can avoid doing it myself.) One of his purposes is to instruct. He doesn't assume the reader knows what an RFID tag is in the first place. This is where there's a big difference between random YA reader and someone like me who has been doing security for years.

For me, he's way over-explaining, and the story grinds to a halt. It's mostly first-person, and so are the explanations. But the first person goes from being aimed at someone in the story to being aimed at the reader. It's as if the main character turns to look straight out of the page at you. For someone who knows these things, it's like saying "money can be used for goods and services." So this lessened the enjoyment of the story aspect for me somewhat. But again, probably a tradeoff he made.

I also am already caught up on all the technical and political aspects the book covers, so I didn't learn anything new there. But then I read Boing Boing, was around when the EFF was founded, have been going to various hacking conferences for over a decade, and know half of the people Cory used for source material.

In my case, that leaves the story. On to the parts I did like. I find the overall plot, sadly, believable. It's almost entirely set in San Francisco and the Bay Area, where I live. So he gets local color points. He came up with a number of characters I care about. He made me angry about what was happening in the story. After the first couple of chapters, I had to spend all my spare time reading it.

Let me see if I can help you categorize yourself as a person who would agree with the politics of this book, and would be ok sharing with a YA reader. Do you get mad every time Thomas Hawk links to a story about a photographer getting hassled by the police or a security guard? Do you want to call up and scream at a school board or principal when Fark links to a story about some kid getting expelled for a t-shirt or haircut? Do you have nothing but contempt for the TSA every time you find yourself removing your shoes at the airport?

If the answer is yes, then you will probably "enjoy" the plot and be right on board with the political implications. Be prepared to spend the first half of the book angry.

You know what else I liked? Cory didn't shy away from the other points of view in the discussion. He goes ahead and points out how his main character is just like a terrorist. He gets screwed over by his parents for most of the book. Some of his own friends give up on him. Some of his trusted circle betray him. He doubts constantly. He suffers for it. It's not like Cory's position still isn't clear, but I appreciate him exposing all the costs.

The big moral of the story is that intrusive government sucks. But the smaller moral is that you have to stand up for your own rights, and it's going to hurt.

Little Brother download page
Little Brother posts
on Boing Boing
Cory's review of one of my books
(seems only fair)

Saturday, May 31, 2008

Race to Zero

The Race to Zero contest.

So, people are going to write some new packers? OK, no problem then.

Thursday, July 19, 2007

The Ladies of Infosec

I was at an event not long ago, and the woman in the group was really pissed. In a room full of nothing but security geeks, someone asked her "Oh, do you do security work?"

This didn't happen with any of the guys. The question they got was "Where do you work?"

I was thinking about this today, and I realized that every woman I know who works in infosec has told me a similar story. That might be a slight exaggeration, but not much. Literally every one I can think of right now has told me one of these stories.

They get things like:
  • Are you here with your boyfriend?
  • She used to be a man
  • Take your shirt off
Yes, sadly I have heard jerks yell out "take your shirt off" when a woman was trying to give a talk.

How much do women hate this? You can read what Raven thinks about it.

Let me tell you a little about this particular woman in question who reminded me of all this. She has worked in some of the most important software companies in the world, in the security groups. She has worked at at least two security companies that I know of. Pick just about any well-known male in security, and he knows who she is and respects her work.

If you've been paying attention to the infosec world, you probably know who I'm talking about. Keep it to yourself, because this particular woman is not the point.

I have met a number of women at various conferences. I'd look really foolish if I went around assuming they weren't attendees or didn't know what they were doing. I've met a woman who works for the CIA. I've met one who was a heavy-duty cryptographer. I've met one who does BGP vulnerability research. Yes, the women are rare. Staring and asking stupid questions doesn't help improve that.

Because of how hostile the infosec world is to women, the ones who manage to survive tend to really love what they do, and have worked very hard to stay in the field. This may mean that the woman you just met is better at security than 90% of the men. That probably includes you (and I'll happily concede that includes me.)

Keep that in mind.

Wednesday, July 18, 2007

BaySec 3 Tonight!

BaySec 3 is tonight, July 18 2007.

Per Nate:
July 18th, 7-11 pm or so.
O'Neills Irish Pub
747 3rd St (at King)
http://www.tisoneills.com

Wednesday, June 20, 2007

BaySec 2

BaySec 2 is tonight, June 20 2007.

Details here.

Hope to see you there!

Wednesday, June 06, 2007

Attention Jed Pickel

It appears that I owe you a big apology. You were right, I was wrong.

(It's amazing the stuff you find when googling yourself.)

Monday, June 04, 2007

Open Source Remorse

So rather than continuing to carry on in the Matasano blog comments (1, 2) and being mirrored in Alan's blog, I figure I should gather my thoughts on this subject in my own long-winded blog entry.

Now, my recent comments have been prompted by Alan's and Tom's comments at each-other, but they aren't about that per se. I gather the background there is that StillSecure has released Cobia which includes Snort (and other open-source bits?), but the Cobia bits aren't GPL. I really don't know anything about whether there's any inappropriate linking or anything going on, I haven't looked at it. The StillSecure guys raise some legal doubts about the GPL, and Tom points to Marty's post about the "clarifications" in the Snort license.

(Update: Alan tells me that Cobia does NOT include Snort. Leaving me wondering what Tom was upset about in the first place. Shrug. Sorry about further muddying things with my incorrect claim, Alan.)

The key point that Tom raises that I want to take issue with is this:
Why do I care? Because companies like StillSecure are driving open-source projects “underground”, into proprietary licenses. Wow, that sucks.
Now, let's hang on a second there. It looks more to me like a basic desire to make money has caused the open-source security tools developers to start changing their licenses.

They have open source remorse.

It looks more to me like they are finding it difficult to get people to pay them when their stuff is licensed only under a GPL license. Obviously, if the software is only available under the GPL, then anything else it goes into also needs to be GPL. (Modulo calling vs. linking vs. straight source modification, etc... I'm not here to try to hash that mess out.)

I've watched this happen with BitTorrent, Nessus, nmap, and Snort.

Is there anything wrong with making money with software? Certainly not. I've worked at Sybase, contracted at ArcSight, tried my own hand with Enforcer for AnchorIS, and am currently about 4 years in at BigFix. BigFix, by the way, has licensed nmap for commercial use, and Fyodor's licensing terms were very reasonable. All those companies I worked at are traditional, closed-source software vendors. So I fully stand behind profiting from software licensing.

We are salesmen, and completely up-front about that.

But I believe there is a different standard if you're going to go the open-source route. Maybe I'm too much of an idealist, but then, the GPL is kind of an idealist license.

So here's the game: You create some very early, proof-of-concept open-source security tool. Maybe you're early to the market, or maybe you have some genuinely nifty feature, but you're a known concept, an IDS or a scanner.

How do you gain popularity? Well, frankly, being free can be a huge help. And if you're not doing it for a living anyway, it works for everyone. What do most open-source projects want? Help. For the packages I've mentioned, they got it.

Maybe it wasn't in the form of (much) code. But it was in the form of signatures, QA, people running mailing lists, people submitting fingerprints and banners for obscure software, filing bug reports and feature requests, help compiling on weird unixes, packet captures, books, articles, and other general evangelism. The license also allows every Linux distro in the world to ship your stuff, further cementing you as a de-facto standard.

Those things are absolutely massive contributions for a young project. I don't wish to discount the efforts of the key developers on each of those projects. The packages would most certainly have fallen into obscurity without their leadership. But even then, you don't maintain such a project for years without a positive feedback loop.

But for the projects mentioned, the maintainers eventually decided they would like to make a living off the project.

This is where I admit that I don't know what's in the hearts and minds of the people who are now selling commercial licenses for these projects. I can only judge based on their actions and published licenses.

But it sure looks like they're taking the combination of their own work and the community support, and selling it for a profit.

Why do I care? Because I believe that a lot of people, myself included, gave support because they thought they were helping out a project that was only under a GPL license. Changing it after the fact strikes me as a kind of dishonesty. If you help out a commercial software company, great. You knew what you were helping. I know a lot of people who do free QA for Microsoft.

But if you think you're contributing to a project because your help will always be available to the world, and you'll find it in your favorite latest Linux distro, sorry. Nessus is all the way there: no new Nessus for anyone who doesn't want to register, download, and install it themselves, and so on. And no source. Snort and nmap can still be shipped around, but we'll see if it stays that way. No more free Snort sig feeds for you though, if I recall correctly.

I should clarify a point. I keep talking like these projects aren't GPL anymore. That's because I don't think they are, at least not entirely. Nessus clearly isn't anymore. No question there. How about Snort and nmap which have commercial versions available for licensing?

Marty asks in the Matasano blog comments next to me "Snort isn't GPL?"

No.

So you can take Snort and code on it or mix it with other code, and your users can demand the source from you under the GPL terms. That seems pretty GPL, right? So what if your code is in Snort, and SourceFire sells a license to a commercial software vendor. Can you make that vendor give you a copy of their source?

Nope.

Anyone remember the point of the GPL? It's so that no one can take your code away from you.

So you might be wondering, how can they take your GPL code and sell it under another license? Am I accusing these projects of stealing code? No, not really. I assume that they have acquired the rights to all the bits of code or have purged the stuff they can't track down.

Yes, this does mean they had to have planned this for a while. They had to stop taking contributions from all the outsiders or people who will only submit GPL code. I believe these guys are smart enough to get this right, though I wouldn't mind seeing how they went about auditing the codebase.

Does this mean they can never take outside code again? Well, it means the submitter has to be willing to give them a license to do whatever they want with it, including selling it non-GPL'd. This would include, say, people working on it for the Google Summer of Code.

SourceFire has that part tied up rather neatly, too. If you read Marty's "clarifications", you'll see that if you get your code near any SourceFire people, then you automagically grant them the right to sell it as closed-source.

So no, not GPL.

Another interesting thing about the GPL, it only covers code and maybe some docs. If you made some other kind of contribution like the ones I mentioned earlier, not covered. They can just take it and sell it.

So who is really killing GPL'd projects? If you think StillSecure is stealing without giving back, I'm not seeing how SourceFire isn't doing some of the same.

I've met Fyodor and a bunch of the SourceFire guys a number of times. I don't have anything against them personally, and it's not like I don't wish them financial success. I just wish they had either had the license they really wanted in the first place, or didn't go changing it late in the game.

Saturday, June 02, 2007

That's your manifesto?

Pete Lindstrom posts his Secure Software Manifesto. Pete, you'll have to do better than that. I guess a manifesto is not a thesis, it's not intended to be a self-contained set of assertions and evidence. But I feel it necessary to call out what look like some glaring factual errors and inconsistencies to me.

1. Public vulnerability information (e.g. CVE, disclosure info, etc.) provides data about the activities of the hacker/bugfinder/security researcher community; it tells us nothing about the absolute or relative level of vulnerability of software.
On the contrary, I think the effort required to find bugs, and the rate and volume at which they are discovered, are the best indicators of the relative level of security of a software package. I will agree that this doesn't tell us the absolute number of vulnerabilities left. There's always the chance that the researchers found the absolute last bug in a package on the 31st while doing their Month of x Bugs.

The past is not necessarily a predictor of the future, but the past may be a predictor of the more recent past. Or you might prefer correlator. I believe the data is all there for someone who wants to, say, take the bugs for packages from 2005 and see how they correlated with bugs in 2006. At least for known bugs.
2. The defining aspect of a software program's vulnerable state is the number of vulnerabilities (known or unknown) that exist in the software. It is not how hard programmers try not to program vulnerabilities nor how hard others try to find the vulnerabilities.
The first sentence is a fine definition. The second sentence seems to be trying to distance itself from the first, though. If you try hard to create fewer vulnerabilities (and have some talent and experience in that), don't you think you will create fewer vulnerabilities? And if you missed some, and others find them and you fix them, don't you mostly end up with fewer vulnerabilities?

So no, using the definition of "vulnerable" to mean there is at least one vulnerability left, there's probably no amount of effort you can expend that is going to get that count to zero. But don't we want software packages that have fewer vulnerabilities, if you can't have zero?

Because if there's no value to that, I know lots of people who could be doing something else with their time.
3. The contribution of a patch to the vulnerable state of a software program is a tradeoff between the specific vulnerability (or set of vulnerabilities) it fixes and the potential new vulnerabilities it introduces.
Sure. Do you mean to imply that patches often introduce new problems? I'm kinda under the impression that's relatively uncommon, but I'd be willing to be proven wrong.
4. There is currently no known measurement that determines or predicts the vulnerable state of a software program.
False. If you use the definition of "vulnerable" meaning that there is at least one vulnerability, then I have a program that will read any other program of some minimum complexity, and return the probability that it is vulnerable. The answer is usually 1. I'm very confident in my low false-positive rate.
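In the spirit of full disclosure, here's my measurement tool in its entirety, as a Python sketch. The complexity threshold is arbitrary; everything about it is a joke rendered as code:

```python
def vulnerable_probability(program: str, min_complexity: int = 1000) -> float:
    """Return the probability that a program of some minimum complexity
    contains at least one vulnerability. Note the impressively low
    false-positive rate."""
    if len(program) < min_complexity:
        # Too small to judge -- the only hedge this tool ever makes.
        return 0.5
    return 1.0
```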

Facetiousness aside, I agree that there is no metric or program to find or even count all of the vulnerabilities in a program. Maybe not even most of them.

But there are programs, services and consulting that will find "some". Is there value in finding "some"? Is it useful to know how hard it was to find "some"?
5. We don't know how many "undercover" vulnerabilities are possessed and/or in use by the bad guys, therefore we must develop solutions that don't rely on known vulnerabilities for protection.
Once again, I agree with your opening statement, and am left wondering where you got that particular conclusion. Why not "therefore we must find and fix as many vulnerabilities as possible" or "therefore we must infiltrate the underground and gather intelligence"?
6. The single best thing any developer can do today to assist in protecting a software program is to systematically, comprehensively describe how the software is intended to operate in machine (and preferably human) readable language.
As a QA guy, I'd have to say that would be really, really awesome. Yes, can I have that please? But if I had that, isn't that the same as programmers trying hard, a la your point 2?

Wednesday, May 16, 2007

BaySec 1 Tonight!

BaySec is this evening. Hope to see you there!

Also, there is now a CitySec site for organizing these things. I know it's unlikely that you're aware of or care about the city meetups and yet are not reading the Matasano blog, so you probably know this already. But for completeness' sake, and for the search engines and so on.

Saturday, May 05, 2007

BaySec!

The first San Francisco-area Matasano-inspired BaySec get together is Wednesday May 16 2007, 7:00 PM at Zeitgeist. They tell me they don't do reservations, and the best thing is to show up early and stake out your seats. Sounds like an invitation to take over the place to me.

Likely attendees (aka those of us who have been conspiring to get BaySec started) are Raffael Marty, Anton Chuvakin, Nate Lawson, and more importantly, you.


There's a mailing list, courtesy of Tom Ptacek:
baysec at sockpuppet dot org
baysec-subscribe at sockpuppet dot org


Hope to see you there, and please spread the word.

Tuesday, April 03, 2007

Why SSL sucks

Recent posts about security protocols like SSL and DNSSEC got me thinking. In an orthogonal direction.

You know what's wrong with protocols like SSL, SSH and PGP/GPG? They let users pick the stupid. Bruce Schneier has trained me to call this the "dancing pigs" problem, though I'm too lazy to go look up the guy Bruce says he got it from.

It goes like this: "There's a problem with the security gizmo; click OK to see the dancing pigs."

Unless you're a security researcher who lives for the chance to investigate a malicious server, you just click OK to see the dancing pigs.

All my kids have been computer users since before they could read. They don't know what the dialog says, but they learn to click OK to see the dancing pigs. Even when they do learn how to read, they aren't necessarily so concerned with expired certificates or DNS name mismatches.

The reason these protocols all suck is because they let just anybody make the security policy decisions. Stupid.

(OK, so it's not the protocols/file formats themselves, just every app that implements them.)

So, what am I suggesting instead?

Dan Kaminsky wrote an excellent chapter on spoofing for me, for the first edition of Hack Proofing your Network. I hope to have it available for download one of these days. In it, he makes a perfect case for reliability == security. If your service is going down all the time, then you are being trained to live with unreliability and ignore strange problems. Your judgment is shot.

So much for the idea that a DoS isn't a security problem.

As a QA guy, I really want bugs to crash, and crash hard. Crash dump or core file too, please. The alternative is random behavior, unreproducible issues, caught exceptions that I really needed to know about, and maybe memory scribbling that could exhibit random symptoms. You don't want a kinda-tolerable, not-that-big-of-a-deal, haven't-seen-it-in-a-while-I-guess-it's-gone kind of problem.

So security protocols need to break and break hard.

If there's a problem with the certificate, then just drop the connection. Don't prompt the user. Don't try to rate how bad of a problem it is. Don't toss a yield sign in the corner, don't show me a key with fewer teeth. Just stop.
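For what it's worth, this isn't hypothetical purity. Here's a minimal sketch in Python of a client that behaves the way I want; the ssl module's default context already refuses to continue on a bad certificate, with no dialog box in sight:

```python
import socket
import ssl

def connect_or_die(host: str, port: int = 443) -> str:
    """Open a TLS connection that fails hard on any certificate problem."""
    ctx = ssl.create_default_context()
    # These are already the defaults; spelled out to show there is no
    # "click OK" path anywhere in the policy.
    ctx.check_hostname = True             # name mismatch -> exception
    ctx.verify_mode = ssl.CERT_REQUIRED   # untrusted/expired cert -> exception
    with socket.create_connection((host, port), timeout=10) as sock:
        # wrap_socket performs the handshake; a certificate failure raises
        # and propagates to the caller. No prompt, no yield sign, no override.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

The library got the default right. It's the applications layered on top that add the "connect anyway" button back in.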

If the SSH server keys have changed, don't connect. Don't offer to connect anyway. Don't ask if I want to save over my keys. Don't tell me the command-line switch to disable my security.

If the GPG email signature doesn't verify, don't let me read it anyway. Don't invite me to keep searching keyservers until I happen to find one with keys that agree.

Why? Because if it breaks properly, people will be forced to get someone competent to fix it. And they will HAVE to fix it.

Examples. If someone's SSL cert expires, right now they can sort of ignore it for a little while, or tell people to click OK, and so on. Do it my way, and it breaks entirely, and the person who should have renewed the cert does so, right now. Don't get me started on self-signed certs. If you've done something and blown away your server SSH keys, you think no big deal, just tell everyone to accept the new ones. Do this enough, and what have you trained users to do? If instead SSH doesn't work at all, how much more careful would you be about bothering to restore the original SSH keys?

But this is painful for people? That's the point. People learn through pain. Some things should be punished. Some events should be disruptive.

People should be trained to take security seriously.

Tuesday, March 27, 2007

I'm glad you got your kid back

Erik takes his kids to Disneyland, but manages to lose the 3-year-old. But that's OK, he had hung a USB flash drive around the kid's neck, and had him back within 13 minutes.
Our three year old did just what we thought he would do - Disappeared. Within 13 minutes of being ‘lost’ though, my cellphone rang.
The little scamp.

Anyway, I actually am glad he got his kid back so quickly. Nothing is worse than having your young child go missing. But...

So the father planned ahead, that's good. If you'd like to do the same yourself, I see that SurplusComputers has a 2-pack of similar-sounding drives for about $8.

But I can't say I recommend you do that. Instead, I recommend that you plant the equivalent of a dog tag on your kid. It's no worse than the USB version, and you're much more likely to get someone with a cell phone and no computer handy to just read the tag and call you.

Heck, if you know you're probably going to lose your kid at Disneyland, I bet you could get them back in just 5 minutes with the dog tag.

Oh, and I see the lost USB drive thing just relies on Autorun to pop up the message. Disneyland Security, you just got pwned by a 3-year-old. Pentesters, are you paying attention?
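For the curious, the drive almost certainly carried a classic autorun.inf at its root, something along these lines. This is my guess at the mechanism, and the filenames are my invention:

```ini
[autorun]
open=notepad.exe contact_parents.txt
action=Lost child - open contact info
label=IF FOUND PLEASE PLUG IN
```

Which is exactly the mechanism that made USB drives such a handy malware vector, and why Windows eventually stopped honoring open= from removable drives.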

Found via The Disney Blog.

Saturday, March 24, 2007

Owning up

If you're a software vendor and a researcher comes along and claims there's a problem with one of your offerings, and you (the vendor) think there is not, you issue a public statement to the contrary. That's fair.

However, if the researcher persists and manages to prove his or her case to you, what do you do?

If you're Microsoft, you own up to the problem, and thank the researcher for making you understand.

Exhibit 1
Exhibit 2

That sure looks like the right way to do things to me. At least, the drama will probably only last about a week.

Thursday, February 22, 2007

Julie Amero add'l

Brian Livingston gave me permission to write my Windows Secrets article this time about Julie Amero. I'm grateful that he allowed me to use my space there (which is a paid gig for me) to help spread the word. Brian is sympathetic to her situation as well, and you may have seen him quoted in the New York Times story about it. In addition, he made it the Top Story, which means that it goes to ALL subscribers, not just paid subscribers. It also means I can link to it from anywhere, like I just did.

If you don't know about Julie's situation, you can read my article, and there are some links in it to others that give more background. If you read security blogs at all, you probably already know all this, so I won't cover it here. The reason I haven't mentioned it before is because I was preparing that article, and because I have been working behind the scenes with others, as hinted at in the article.

I can be long-winded, so my article was over twice the length it was supposed to be, and had to be cut down a bit for the newsletter. I wanted to use the extra material here, and make an update or two.

In the ComputerCOP Pro section, I originally had this:

So what did the detective use to examine the "image"? He used a program called Computer COP Pro. Here's an example entry from the FAQ:
Q. Does Professional require training to use?

A. For a competent computer user, Professional truly does not need training to use as the detailed search applications are performed automatically by the software and the product does come with a Getting Started manual. However, because you may need to testify in court or in a hearing, it would be best to receive the company's training and certification.
So, training would be nice, but you can get away with not doing it if it's inconvenient. I'm told that training consists of an hour on the phone.

Needless to say, this program really doesn't sound like it would meet my standards for a forensics utility.

[and]

Since this is a key portion of the prosecution's case, Alex Shipp contacted a representative from the makers of ComputerCOP about this aspect of their software. Alex tells me:
Allison Whitney, director of communications for ComputerCOP, confirmed that the product was unable to distinguish between URLs visited as a result of malicious software, and URLs visited by direct user action.

She also confirmed that this point is not made clear during the ComputerCOP training. At this point in time, ComputerCOP have no plans to contact the Connecticut court to point out the errors in interpretation of the ComputerCOP output made by the prosecution attorney and prosecution expert witness.
[and]

Why didn't the defense present these kinds of findings? They tried. There appears to have been a procedural error on the defense's part, and the judge would not allow the defense to enter their evidence. The defense expert has publicly stated that his analysis of the computer files would have revealed that spyware was causing the pop-ups to appear and he feels the evidence would have totally exonerated Julie.

[end of extra material]

Speaking of procedural errors on the defense attorney's part, it appears that Julie is getting a new lawyer, and this may delay sentencing. This is good news. The article makes the new lawyer out to be a hot shot, which is exactly what Julie needs. Despite the fact that she has been declared guilty already, from what I understand there are still a couple of small chances for the case to be resolved before sentencing. The prosecution could realize that there has been an error in the facts presented, and request that the verdict be vacated, for example. I'm obviously not a lawyer, so apologies if I have abused the terminology.

Despite the TV shows you see, I'm learning that appeals aren't as easy to get as you would think, so anything that helps slow this train wreck down and bring some sanity into the situation is a welcome development.

Saturday, February 10, 2007

Apple vs. Maynor update

I had a great time chatting with people at the security bloggers meetup the other night. There were any number of "I didn't know you blogged" moments all around. Two of the guys I spent some time talking with were David Maynor and Robert Graham who have recently formed Errata Security. And yes! they are blogging too.

We chatted about all kinds of things. We chatted about Robert moving on after IBM acquired ISS. It seems that David found some reason to move on from his position at Secureworks, too. And then we went to dinner at some Mediterranean tapas food place, and chatted some more. They bought. Thanks for the dinner, guys!

So when I got back home, I tracked down their blog, and there's some good stuff there. Hey look, there's this one particular entry from David. Looks like he's tired of keeping his mouth shut about the Mac wireless hack thing. Short version of my take on the issue: I believe David and Johnny.

But at this point, I do have to agree that some opportunities have been lost. The Matasano guys propose some hoops that researchers should be going through. Frankly, I thought that was a little silly and totally unnecessary. Even in David's case. I never thought for a second that Apple would ship the patch while still claiming that David and Johnny found nothing. I was wrong on both counts.

So unfortunately, this leaves room for the next bit of stupidity. If/when David ever decides to demo owning the built-in wireless, or release an exploit, etc... then the Mac zealots will claim that he must have reverse-engineered the Apple patch, and that he never found anything ahead of time.

Because David can reverse engineer the patch and write a working exploit, but he's not capable of finding the hole in the first place, right? And the hole that Apple fixed just coincidentally is in the area that the original Black Hat talk covered. And the holes in other OSes that they found of the same class aren't related. And HD Moore using their fuzzer and finding a similar hole in OS X has nothing to do with it.

One of these days, I hope David drops more info. At this point though, it looks like Apple has been largely successful. They have managed to drag things out long enough and tell enough half-truths that their customers believe Apple. So it's likely that few zealots will be swayed when David finally presents proof. There will just be further dismissals from people who really don't understand security very well. I still look forward to it, though.

Hey look, David is speaking a couple of times at Black Hat Federal later this month.

I'm in ur package, playing with ur puzzlez

So one of the developers I work with, Dave, is quite the twisty-puzzle fanatic. Take a look at some of his photos on Flickr, and you'll see what I mean. Here's something like 1/3 to 1/2 of what he has in his office at work:





As you might imagine, Dave is also on all the various puzzle sites, and knows which puzzles are rare, which are worth the most, and which ones he doesn't have. Recently, he worked out a trade with some other puzzle collector in another country. He shipped a Square-1 in exchange for a few other puzzles. This is what arrived in the mail:



Yes, go ahead and look at the larger version of that pic. That's the Department of Homeland Security logo. So what was inside that caused such alarm that they had to open his package in transit to inspect it?



We suspect it was the rare and unusual Rubik's Hat that caught their attention. Had it been your run-of-the-mill 3x3, I doubt they would have felt it necessary to play with it. Or maybe they saw The Da Vinci Code recently, and it looked like a cryptex on the x-ray?

Rubik-sniffing dogs?

Dave did note that whoever was fondling his hat didn't seem to have any luck solving it. Good thing he didn't trade for something with batteries and wires.

Update: As if to further prove his cube-geekiness (did I mention that he placed fairly well at the recent cube-solving time trials?) Dave writes:

Nice, although technically it was my custom modified Square-1 that I traded, as a vanilla Square-1 is only worth $20-$30. I hear that the maker may even be doing another round of production, in which case the price might go back down to $9.99 or so. Here's a flickr picture of my custom modification:

http://farm1.static.flickr.com/135/362992417_66e4734f8a.jpg

I stand corrected.

Thursday, February 08, 2007

Second best ad at RSA

I hereby declare the second-best ad at RSA:

DSC01377

"Beware of False Positives"

Awesome!

(I give "best" to my company's own ad, of course. It holds a special place in my heart. However, if you think this one is first place, and ours only second, I'll forgive you.)

The woman working the booth tells me that it was "obtained" in Seattle, and is authentic. They were raffling it off in their booth. Excellent job CyberDefender.

Friday, February 02, 2007

MoAB to the BillG

You know your month of bugs is good when Bill Gates is out there pimping them for you.

Hey Bill, are you daring people to do a MoVB?

Thursday, February 01, 2007

Mooninites

I, for one, am outraged at the ridiculous over-reaction of the Boston authorities to what amounts to a battery-powered Lite-Brite.

Wait, I can get the bomb squad to come detonate things by attaching LEDs to them?

I can tie up the entire police force of a major metropolitan city for an entire day with $100 worth of parts from Radio Shack? Completely distracting them from anything else that might be planned for that day?

Wait wait... I can get national news coverage on every major news outlet, and get away with it by just not admitting it was me in the first place?

Carry on.

Friday, January 19, 2007

Web of trust

I continue to add little bits of other people's JavaScript on the side of my blog. I just added some code from Technorati. Earlier, I added a hit tracker from Sitemeter and am publishing my RSS feed via FeedBurner. The Technorati and Sitemeter things are raw JavaScript includes. Oh, and I've started using Zooomr pictures, more JavaScript. I haven't added the dozen "pick me!" buttons from Digg et al, yet. But I'm not ruling it out in the future. I don't plan to turn on the ads, but that's just more of the same.

The point is, if you want to 0wn my readers, just compromise Blogger, Technorati, Sitemeter, Zooomr or FeedBurner. Or maybe something they depend on. Then you can hand out all of the browser exploits in my name you want.
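The mechanism is worth spelling out: a third-party script include runs with the full privileges of the page that embeds it, not the site that serves it. A minimal sketch of what these widget snippets boil down to (the hostname and filename here are hypothetical, not the actual vendor URLs):

```html
<!-- A widget include like the Sitemeter or Technorati snippets is
     essentially this: the browser fetches remote code and executes it
     in the context of *this* page. (Hostname is made up for
     illustration.) -->
<script src="https://widgets.example.com/tracker.js"></script>
<!-- Whatever tracker.js contains now has full access to this page's
     DOM and its cookies. If the widget host is compromised, the
     attacker can inject more script tags, rewrite links, or serve a
     browser exploit to every visitor of every embedding site. -->
```

So each include is a grant of full trust to that vendor, and transitively to everything that vendor depends on.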

It's not like attacking one site to compromise another has never been done, or that I haven't been targeted before. I'm just saying.

Web 2.0 is looking a lot like a huge interconnected chain of transitive trust. See also: MySpace.