Thursday, November 30, 2006

Apple gives credit to evil haxors?

(OK, even I have to admit up front that I'm just giving Apple a kick in the joy department for the fun of it over this one.)

So, I'm having a glance at daringfireball.net this evening, to see what kind of Mac security zealotry I should be enraged about lately. Gruber says they gave HD Moore credit. Hey, look at that! He's right.

Now, you would think that with all of Apple's recent history around vulnerability disclosure, Apple would have a policy of not crediting researchers who don't "properly" report vulnerabilities, wouldn't you? After all, not even Microsoft will give you credit if you don't play nice. But for Apple, maybe that's not the case? Here's what I believe is their official policy:
For the protection of our customers, Apple does not disclose, discuss or confirm security issues until a full investigation has occurred and any necessary patches or releases are available. Apple usually distributes information about security issues in its products through this site and the mailing list below.
Near as I can tell, that's the entirety of their official written policy. If there's a better version, I'd love a link. Note that it doesn't say anything about giving credit or not.

So maybe it's Apple's policy to give credit to the discoverer of the vuln, regardless of how it is disclosed? If so, then kudos to Apple! You've done one better than Microsoft. Honestly, I don't see why you wouldn't. It's a simple acknowledgment of a fact.

Now, if we could just get proper credit attached to an earlier wireless vuln, and work on not pretending a problem doesn't exist until "any necessary patches or releases are available", then I'd be that much happier.

Friday, November 10, 2006

MS says not exploitable

Hey, look at Microsoft dropping the disassembly to demonstrate that something isn't exploitable. Nice. I don't think I've seen them do that before.

Monday, November 06, 2006

Shut down the Internet

A Sybase story this time. I was a network & security guy at Sybase for just under 5 years, between 1995 and 2000.

Speaking of 2000, I was at Sybase for the Y2K rollover. Like most IT shops, we had spent a couple of years planning for Y2K, and as it got closer, we got busier. Then I get a visit from the director of telecom. He tells me that for the rollover weekend, we will be shutting down all Internet, dialup and ISDN links.

What?!?

Yes, his boss, one of the co-CIOs we had at the time, told him to cut off all outside communications because of hackers. The request got dropped in my lap, because I was in charge of all the Internet links, firewalls, and dialup lines.

What?!?

It was explained to me that our CEO, John Chen, had been golfing with one of his buddies from HP, and he had heard through the grapevine that the hackers were going to be out in force on the Y2K weekend, and were saving their attacks for when companies were at their most vulnerable. Therefore, we were going to preemptively take down our communications links, just like HP!

(Remember the scandal about HP taking themselves off the net over the Y2K weekend, screwing their customers? No, you don't. It didn't happen.)

I tried, briefly, to deal with my upstream management on the issue. Nope, I was told it was a done deal. This was several days before the rollover.

I didn't wait long to go over everyone's heads and email the CEO explaining why he was making a mistake. Reasons included things like "You're going to make SURE you have a major outage on the chance that you MIGHT have an attacker-driven outage." "I know a lot of these 'hackers'; they will either be working on Y2K at their day jobs, or drunk for New Year's." "What about all of our customers who need last-minute Y2K patches? What about all of our OWN people who need the same from other vendors?" "Do you have any idea what level of attack we already get and live through every day? We get over a million failed connection attempts every day. Literally!"

And he started to relent. I had a reasonable explanation for each of his concerns.

The "deal" was that I would build a monitoring team, so that we had 24-hour around-the-clock coverage of the firewalls and other logs, looking for anything suspicious. I had to report in every so often. Anything really bad, and we would have to pull the plug.

Of course, nothing happened. After about 12 hours, the CEO got really, really bored looking at attack reports. Oh look, a port scan. Oooh... a distributed port scan! Hey, 100,000 attempts to connect to a telnet port that isn't listening.

But I had had to make 8 network & security people work the entire Y2K weekend, 8 hours on, 8 hours off, to be allowed to keep the links up. These were 8 people who had done their jobs ahead of time, like they should have, and by all rights should have had a nice relaxing New Year's Eve for the big millennium switch.

And Sybase was just going to screw their customers. Not to mention making us as a company look like idiots.

So, I got my way, forced Sybase to do the right thing, and had to suffer for it. And naturally, I got the warning email afterward about "going through channels" (which would have gotten me exactly nowhere; I had had about 2 days).

I left Sybase on January 31st to go work for SecurityFocus. Sybase had made a corporate decision to essentially spam people, also over my objections. (Did I mention that I was abuse@sybase.com?) Plus, I was starting to get the kind of treatment that made it clear I was being punished for going over people's heads. This just after I had tracked down a rogue sysadmin who was embezzling (a story for another time.)

Since then, I've taken jobs with people and companies that actually care about security.

(No, don't lecture me about what year the millennium rolled over. I have my own ideas about that.)

Saturday, November 04, 2006

Crashing MailWorks

Another Bechtel story.

Bechtel had standardized on DEC MailWorks for their corporate email standard. Previous standard was PROFS on the mainframe. We had enough MailWorks users going that we needed a VMS cluster to deal with the volume, and have some redundancy in case of an outage, maintenance, etc... all the stuff you want a cluster for. I'm actually DEC certified on some of this stuff. I get lots of use out of that now, let me tell you.

One day, mail goes down. The senior VMS admins determine that the MailWorks server process had gone down. On all the machines in the cluster. At the same time.

OK. So they try to run it again, and it comes up. As they are trying to bring it up on another machine in the cluster, they both go down again. It had only been up for a few minutes. So they try one machine by itself. It runs for a minute or two, and goes down again.

They do some dump analysis, and can see that the process is crashing. Not that this helps with how to fix it. After a bit of in-house fiddling, DEC is called. Some phone support doesn't help, must be a hardware problem somewhere. On every box in the cluster? OK, a hardware problem in the cluster interconnect (CI), then. Waste time, cannibalize hardware, break cluster, determine that problem happens on one server, no cluster, and machine works fine for all other software. Dispatch DEC technician to site.

Reload OS, MailWorks software, runs clean. Problem solved? No, when you give it the mail spool, it crashes again. And yes, we DO need our old mail, thanks anyway.

But now we know it's something in our mail files that is causing it. Maybe we can figure that out and surgically remove it? OK, so they binary split the files, and determine that a single email is causing the problem.
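(If you've never had to do that kind of surgery: the binary split is just a halving search over the spool. Here's a minimal C sketch of the idea; the crashes_server() predicate is made up, standing in for "feed this half of the messages to a scratch server and see if it falls over," and the message count and index are invented.)

#include <stdio.h>

static const int BAD_INDEX = 4242;  /* pretend message #4242 is the killer email */

/* Hypothetical stand-in for "load messages [lo, hi) and see whether the
 * server dies". In real life this step is the slow, manual part. */
static int crashes_server(int lo, int hi)
{
    return lo <= BAD_INDEX && BAD_INDEX < hi;
}

/* Narrow the crash down to one message by repeatedly halving the range. */
static int find_bad_message(int lo, int hi)
{
    while (hi - lo > 1) {
        int mid = lo + (hi - lo) / 2;
        if (crashes_server(lo, mid))
            hi = mid;   /* the culprit is in the lower half */
        else
            lo = mid;   /* otherwise it has to be in the upper half */
    }
    return lo;          /* index of the single offending message */
}

int main(void)
{
    printf("offending message: %d\n", find_bad_message(0, 100000));
    return 0;
}

A hundred thousand messages narrows to one in about 17 test runs, which beats reading the spool by hand.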

This is several days into an outage, mind you.

Email is examined, and it turns out to have a really long subject line, like thousands of characters, almost all spaces. Some experimentation shows that once you hit a subject line of 1K or so in length, MailWorks takes a dive. (Ah yes, I saw that light bulb go off over your head.) And if you have a cluster, when one server crashes, the next one dutifully takes over mail processing, until it hits that same message.
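(For the curious, here's roughly what that kind of bug looks like. This is a toy C sketch of my guess at the general shape of it, not anything from the actual MailWorks source: a fixed-size subject buffer, copied with no length check.)

#include <stdio.h>
#include <string.h>

#define SUBJECT_MAX 1024   /* assumed fixed-size subject field */

/* Copies the subject into a fixed-size stack buffer with no length check.
 * A subject longer than SUBJECT_MAX overruns buf, corrupts the stack, and
 * (at best) crashes the process. */
static void handle_message(const char *subject)
{
    char buf[SUBJECT_MAX];
    strcpy(buf, subject);               /* the bug: no bounds check */
    printf("Subject: %.40s...\n", buf);
}

int main(void)
{
    char killer[4000];                        /* thousands of characters... */
    memset(killer, ' ', sizeof(killer) - 1);  /* ...almost all spaces */
    killer[sizeof(killer) - 1] = '\0';
    handle_message(killer);                   /* takes a dive right here */
    return 0;
}

Feed that a subject a few thousand characters long and the copy runs right off the end of the buffer. And with a carefully chosen payload instead of spaces, that sort of bug can potentially be more than just a crash.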

Message is purged, and people can actually get back to work.

They track down the user who sent the killer email, to find out what the heck she was thinking. Turns out she was eating breakfast, and reading her email. A piece of Grape Nuts cereal lodged in her keyboard, and managed to hold the spacebar down. She still sent the email after that, but remembered having to dislodge the offending Grape Nut.

So an entire VMS MailWorks cluster got taken out for days by a piece of Grape Nuts. But that's not the punchline.

After DEC support had been largely useless for days and our guys had more or less had to fix it themselves, we submitted a fix request. We didn't want this happening again. We were able to send a specific problem description, number of characters, sample email, the whole bit.

DEC's response was: Oh yeah, we know about that! Here, we've had a patch available for a while. Why weren't we (one of the largest MailWorks installations in the world) told about that? Oh, uh... you have to call with a problem description that indicates that patch is needed. OK, and when we DID call with a problem like that? And you sent out a technician, why didn't he know? Why don't you publish the patch list? Uh.. well...

And I believe that was my first practical introduction to buffer overflows and vendor patching.

Thursday, November 02, 2006

Threat vs. vulnerability

Inspired in part by Richard Bejtlich, I present Yet Another Horrible Information Security Analogy (YAHISA): A tale of bunnies and kitties.

Imagine a lush green field of grass and clover, where bunnies frolic and play. These are cute white bunnies, with pink eyes. And the occasional black bunny, which inexplicably costs more. The bunnies in this field have no natural predators. The wolves don't know about this field.

Now, picture a city cat that roams the streets, getting into fights, disappearing for days at a time. When it comes home, it's missing a little more of its ear, or occasionally needs to be stitched up. If it gets into a fight, sometimes it wins, sometimes it loses. It will eventually be run over by a car. Its bloated carcass will be poked by children with sticks.

The bunnies are vulnerable. The kitty is vulnerable, and has threats.

Wednesday, November 01, 2006

You want Mac wireless bugs?

So, the Month of Kernel Bugs (MoKB) begins today. They start by releasing a live exploit for a remote kernel bug in older PPC Macs with Orinoco-based chipsets. "1999-2003 PowerBooks, iMacs". (Note: I've done no independent verification of the bug, I just trust the people reporting it.)

With no official notification to Apple, and no patch available.

Even though the machine I'm typing this on right now is vulnerable to the exploit, I believe this is the appropriate way to handle this release. Why? Because of the way Apple handled the same kind of issue with David Maynor and Johnny Cache, of course.

Apple thinks it should not even acknowledge unpatched bugs. It (apparently) thinks that it should issue press releases denying the issue and use vague legal threats against researchers to "protect customers".

This kind of release is the result. If Apple doesn't want to play responsible disclosure, then the researchers will be happy to oblige. I trust there will be no denial of the problem by any interested parties this time?

(No, not really. The Mac zealots still won't believe it. But it sounds good, anyway.)

Update: I'm taking my Mac off the list of affected machines. It's an iBook G4 with an Airport Extreme that was purchased separately. It appears that the Extreme (802.11g) version of the Airport isn't affected by this particular bug. I might as well try to be careful about technical accuracy. I've seen how the Mac community reacts to any little inaccuracy.