Saturday, December 30, 2006

Security models

This has been bouncing around in my head for years. I still can't quite get it to form properly, maybe someone else can help.

I've worked on a bunch of operating systems over the years. I've forgotten all of the 8-bit ROM-based "operating systems", RSTS/E, CP/M, VMS, and most of Netware. And of course I've forgotten the other ones I've forgotten. That leaves, more or less, DOS, Windows, and unix. And by "Windows" and "unix", I mean a bunch of different flavors. Though anymore if you sit me down in front of even Win9X, I'm a bit rusty there.

I wouldn't really have any basis for understanding the security model of an OS from more than 10 or 15 years ago, not beyond the overt features. "Overt" being stuff like usernames, passwords, file permissions and ACLs. Mostly because I just didn't know that stuff back then, but also because the majority of stuff I touched by volume had no real protected memory model. There would have been a little bit in NT 3.1, and it was there in unix and VMS, but again, I had no real basis for grokking it.

Point being, I learned my file permission stuff on Netware and VMS, and the more complicated things like process permissions and kernel and user separation on NT4 and later, and unix. Picking up NT-based Windows and unix was fortuitous, since that's where the market went. But that was a self-fulfilling thing, since I worked on what was popular. I actually managed to leave Netware behind about when I would have needed to learn about NDS, and I stopped doing any heavy-duty Windows admin about when AD became the dominant domain model for Windows. So I'm missing some directory services brain damage, too.

I took all of the NT4 classes, but only bothered to take a couple of the tests. I took some user-level unix classes, but mostly picked it up on my own. I would say that the majority of my useful experience is on-the-job self-taught stuff.

So why do I feel like I have the entire unix security model in my head, but I only have a tenuous grasp on some significant chunks of the Windows security model?

I've done DOS and Windows far longer, though to be fair anything before about NT4 isn't really pertinent. I've done more Windows, too, in terms of hours spent.

Obviously, part of the answer is that the unix model is just simpler. Everything's a file, you get owner, group, and world. There are a few special sticky bits. Everything runs as a user and gets its permissions the same way the filesystem works. Even the pipes are relatively simple. Kernel and user separation are clean. I understand what happens with environment and handle inheritance for child processes. I find the user database very easy to deal with. The typical init startup process is nice and simple. Signals are easy.
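The whole permission model really does fit in a few lines. Here's a minimal sketch of the owner/group/world bits (the example mode is made up, and this is only the classic nine-bits-plus-specials view, not ACLs):

```python
# The classic unix mode: three bits each for owner, group, and world,
# plus the handful of special bits (setuid, setgid, sticky).
import stat

mode = 0o2750  # hypothetical example: setgid + rwxr-x---

owner = (mode >> 6) & 0o7   # rwx for the owning user
group = (mode >> 3) & 0o7   # r-x for the owning group
world = mode & 0o7          # --- for everyone else

assert (owner, group, world) == (0o7, 0o5, 0o0)
assert mode & stat.S_ISGID  # one of the few "special sticky bits"
```

That's essentially the entire thing; processes get checked against the same three classes the filesystem uses.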

Even when you add on things like NIS, full ACLs and SELinux, I think I'm following along just fine.

Windows on the other hand...

File permissions I'm good with. Same with the Registry, it's basically just another filesystem. Process permissions? I gather that they each have a privilege token, and sometimes privileges to change privileges and so on... I know there are a couple of different types of pipes, not sure what's going on with the security model there. Services can be running as different users or local system. There's the event system, I vaguely recall hearing about there being ACLs on that. And I get this sense that there's a ton of other things that I don't even know about.

What happens when Windows boots? When I log in, what process(es) are creating my processes, and what, I get a set of tokens as well? When I change a password, what is happening with permissions that ultimately write my new hash out?

It just starts falling apart for me.

Now, I don't think I'm incapable of learning it. But part of the point is that I never made any concerted effort to learn either security model. Yet, unix feels like it's right there with minimal effort, and Windows is making me work for it. If I ever catch up with Windows, it will be because I made the effort to track down and study some serious documentation. And I'm not opposed to that, I just haven't done it. Pointers to favorite docs welcome.

Part of what I've concluded about this is that the unix model is superior. And that's "superior" in the practical sense that if you can understand it, it's going to work better for you. I'll happily admit that the Windows model might be more expressive, maybe allowing you finer-grain control. But that does me no good.

I think another reason that the unix model works much better is because unix itself is far, FAR more modular. I can strip a unix box down to the floorboards, leaving it with no functionality other than the purpose for which it exists. I've done this before with firewalls, various servers, and so on.

Many unix functions do not have a lot of interdependency. I can kill off portmapper and not have it disable the majority of my administrative tools. Configuration storage is just as likely to be a text file, which I can change by hand without needing a front-end tool. I can turn off the damn GUI.

And this is where I start to not be able to articulate it much better. Anyone else have a more elegant way to explain what I'm trying to get at? Does anyone else's experience even match mine? I have to assume so, since there's a big correlation between security people and unix fans.

OS X is a slightly different beast. It has (for me) a lot of the obscurity of Windows layered on top of unix. I got nothing when it comes to the Mach kernel or the window manager. But even so, with the unix underpinnings, I am in a much better position to pick up the rest. I can already see where they have made horrible permissions mistakes.

In any case, I'd like to have this be a proper essay someday, and I could use the help explaining it better. I'd love some feedback, even if it's just "me too" or "you're high."

Sunday, December 10, 2006

Negative vulnerabilities?

As part of the recent discussion having to do with agents introducing new vulnerabilities to a system, I've been thinking. Everyone agrees that adding software means adding vulnerabilities. Even if you're one of the best at writing bug-free software like Knuth or DJB, you still make the occasional mistake. (Even if DJB hates to admit it.)

Richard Bejtlich mentions it again while commenting on the agent discussion.
Worse, as is the case any time you add code to a platform, you are adding vulnerabilities.
Of course, I understand fully what people are saying when they say this, and I'm not trying to disagree. But that got me thinking.

Now, you can't have a negative number of vulnerabilities in your own code. At best, the theoretical minimum is 0. But could you, by adding code, remove someone else's vulnerabilities? I think it might be possible.

First off, there's the obvious case of a patch. You added something, and are now (hopefully) down one vulnerability, right? Well, not exactly. Modern patches work by replacing an entire chunk of code with one just like it, (hopefully) minus the vulnerability. It will, for example, replace an entire DLL. So that's not more code, that's different code. And sure, it might introduce more vulnerabilities too, but I'm trying to make a point here, work with me.

So fair or not, I just removed the vendor from being able to go negative on vulnerabilities. At least, not within the same software package. Nothing that says Microsoft couldn't add something outside of Word to remove Word vulnerabilities, though.

How about third-party patches? Well, depending on how they work, they might qualify for what I'm thinking. If there is something that sticks around all the time, and is somehow removing a vulnerability from another piece of software on that system (and adds none of its own, of course), then maybe it just achieved negative vulnerability.

Now, it's unlikely that a program of any size is not going to have its own vulnerabilities, so it had better fix a large number of someone else's. This is what I think the HIPS category is trying to do. If you look at Blink or Determina, they dig around in the guts of someone else's software, and do things that try to remove vulnerabilities. Or specifically, make them nonexploitable, downgrading them from vulnerabilities to just bugs. For purposes of this discussion, let's call the process exiting instead of running shellcode "no longer vulnerable."

Briefly, a sidebar: I'm quite annoyed by the current use of the term "HIPS". If you happen to be a Windows Secrets paid subscriber, you may have seen my article about that. There is a bunch of stuff calling itself HIPS, including traditional stuff like AV and file integrity checkers (read: Tripwire.) Don't do that. Otherwise, you don't leave me a good name for things like Blink, Determina, W^X, DEP, and those categories of protection. Naming suggestions welcome.

So let's assume these things work, at least partially. If so, and they remove a substantial number of vulnerabilities (more than they add), then you have added software and removed vulnerabilities. Discuss.

(For those wondering if I have an ulterior motive on this one; not really. BigFix doesn't do this kind of thing. We might someday partner with a vendor that does, or add management of such products to our agent, but we don't muck about in the binaries of other software. I'm just interested in exploring the idea of adding software to remove vulnerabilities.)

Friday, December 08, 2006

Nothing wrong with agents

This post is mostly in reaction to a post from Thomas Ptacek on the Matasano blog, one of my favorites. Tom in turn says his was in reaction to a post from Alan Shimel, who is replying to an article by Ray Wizbowski. That gives you some idea where in the food chain I am. But it doesn't get interesting for me until Tom's post, so that's where I start.

I'm not trying to feed any kind of Matasano/BigFix war but hey, they started it. (For the seriously humor impaired, I'm kidding. I've known most of the Matasano guys casually for years, and Amrit has worked with Tom, and so on. We're just disagreeing with each other using technical points, that's how it is supposed to work.)

If you're just running across this post via some other blog, I'm the QA Manager at BigFix. We are a vendor of agent-based systems management software. Though, I'm sure our lawyers would like me to point out that I'm providing my own opinions here, and I'm not a company spokesperson. Anyway, probably because of where I work and what I know, I'm a fan of the agent-based approach, and naturally I think we do a fine job, our stuff is secure, and we can do it all. So if you're thinking "Ryan is biased", well duh.

Onto the bloggery. I make a few comments of little substance on Tom's blog entry, and he emails me to politely suggest that if I want to disagree, I should quit making snide jabs, and get to the point-by-point.

Here's the premise I start from:
  • You have a large number of machines (an "enterprise")
  • You wish to have mass control over them (you want "management")
  • The software that comes with the OS is insufficient for this purpose (you're going to buy some "software")
In other words, let's assume that the built-ins like WU, UP2DATE, YAST, ports, Software Update, and so on are not going to cut it. Point being, you have decided you have to add something on, and not having extra software isn't an option. If you disagree with this, then you probably have less than a few thousand machines, and the rest of this will be quite boring.

Tom's assertion is that agent-based software is bad, m'kay? and you should avoid it. To be completely fair, I'm seriously summarizing and putting words in his mouth. But take a look at the title of his post I'm responding to "Matasano Security Recommendation #001: Avoid Agents" and this slide which says "Enterprise Management Applications - Threat or Menace?", and you sense a theme. Yes, Tom is quite fair in the details, and will tell you he can only make claims about stuff he has tried, which does not yet include BigFix.

I understand good storytelling, yet I'm getting covered by these blanket statements. So hopefully it is understandable if I feel it necessary to respond.

So, you need some enterprise management software. Your basic choices are agents and scanners, because I've already ruled out any kind of one-by-one method as impractical by the time you get to a certain size. I'm of the opinion that you can only get so far with pure scanners. For example, they can only determine things, they can't change them. If it can change, then what you have is a scanner-driven part-time agent. Yes, they push an agent onto the box long enough to do their business, and then get off again. And yes, there is value to only having the agent on there the absolute minimum amount of time, so I don't want to totally dismiss that benefit.

Let's check Tom... OK, he's not advocating scanners. In fact, if I'm not reading too much into it, he's not even saying you shouldn't run agents at all, he just wants fewer. But wait, are we talking per machine, or what?
  1. Minimizing the number of machines that run agent software.

Do you mean that some machines shouldn't run any agents? Then how are you going to manage them? Nothing wrong with hand-maintaining a small number of critical machines, of course, but I don't think that is what is being suggested. So this might be basic choice number 1: Are you at more risk by not having management of your machines, or by having an agent, even if it is a "bad" one? I still have to go with agent. Simple math will get you there. Count all the various threats out there, and only a small handful of them have been aimed at agents.

  2. Minimizing the number of different agents supported in the enterprise as a whole.
I think this point is far more central to Tom's message. And I don't disagree with him. Again, no huge surprise, since BigFix replaces a number of other agents. See next point.

Endpoint agents are programs that run silently in the background, usually as Windows Services or Unix daemons, which communicate back to a central management system. Well known examples include:

  • Systems Management (BMC Patrol, CA Unicenter, Microsoft MOM)

  • Antivirus (McAfee, Symantec)

  • Patch Management (Novell ZenWorks, SDS, BigFix)

  • Data Leakage Prevention

Good definition, I agree. But not on the categories, not for BigFix. We do systems management, AV management (we manage something like 6 or more AV vendors' code and signature updates), AV & antispyware engines (OEM'd), patch, software distribution, power management, inventory, etc... We do NOT do the HIPS functions a la Blink and Determina. That would be an example of someone else's software we would manage.

So it's incorrect to stick BigFix just in patch. It's a common mistake, that's all we used to emphasize up until a few years ago. And, hey, not Tom's job to make sure our marketing is properly conveyed. But I make a big deal out of it precisely because BigFix is exactly the kind of thing he's calling for to help reduce the number of agents running around.
Agent-based architectures are a severe security risk.
So now Tom makes one of these leaps I object to. He's drawing mass conclusions based on (significant) experience actually looking at a bunch of agent systems. But you can't make a factual statement about all N software products by looking at N-M of them, if M is greater than zero. You can only state generalities.

He gives specific classes of examples. While I still owe the world a BigFix architecture document (I know you're all anxiously waiting), let me give some short previews as responses.
Listening Network Services on Agents
You can disable the BigFix notification protocol, and go full polling if you want. We are client pull. Even with the listener in the default listening state, the protocol is simple. It's just a 12-16 byte (payload) UDP packet. It suggests to the agent that there is something upstream that it should check for.
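To make the "small UDP nudge, then client pull" shape concrete, here's a sketch. The port number and payload contents are invented for illustration; the real BigFix protocol details differ, and the point is only that the datagram carries a hint, not commands:

```python
# Sketch of a notification "ping": a tiny UDP datagram that merely
# suggests the agent poll its server. Port and payload are hypothetical.
import socket

NOTIFY_PORT = 52311  # hypothetical default port

def nudge_agent(host: str, port: int = NOTIFY_PORT,
                payload: bytes = b'new-content\x00') -> None:
    # The datagram carries no commands, only a hint; the agent then pulls
    # content itself and verifies signatures on whatever it fetches.
    assert 12 <= len(payload) <= 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
```

Even if an attacker forges the ping, the worst case is the agent polling its server and verifying signatures a little early.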

Listening Network Services on Management Servers

OK, got me there. We've discovered that either the agent or the servers need to have something listening on the network, as a general design principle. Are you suggesting that people go without management at all again? I'm pretty sure any alternative to an agent will want a listener, too.

Client of Agent Service on Management Server
That's our default, but it's not necessary if you don't want it. We use the agent on the server to do software upgrade on the server. But you can do it manually if you choose. I, for one, expect a software distribution system to be self-upgrading. But here, you're implying that the server is security-critical. I.e. crack the server, and you have the agents. BigFix doesn't work that way. All the security is in the signing keys.

Confidentiality and Integrity of Agent/Server Protocol
Ah, this is where we're especially awesome. Everything the agent pulls down has been signed by the private key of an administrator, and is verified by the agent before it will save it or look at it. We use OpenSSL and zlib libs, and of course track vulnerabilities in those, and re-release when they re-release.
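The verify-before-use flow can be sketched in a few lines. Note the HMAC here is just a compact stand-in for the real public-key signatures made via OpenSSL (an HMAC is a shared-secret scheme, not what BigFix actually uses); the key and messages are invented:

```python
# Simplified stand-in for the sign-then-verify pattern: the agent refuses
# to save, parse, or act on anything that fails verification.
import hashlib
import hmac

SIGNING_KEY = b'hypothetical-admin-key'  # illustration only

def sign(content: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()

def agent_accept(content: bytes, signature: bytes) -> bool:
    # Constant-time comparison; unsigned or tampered content never
    # reaches the parts of the agent that interpret it.
    return hmac.compare_digest(sign(content), signature)
```

The design consequence is the one described above: an attacker has to find flaws in the verification layer itself before any crafted content gets looked at.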
Web Application on Management Server
I'm suspicious that we're talking about different animals here, but we have an optional Web Reports component that can be run on the server or on its own server. And it can do SSL if you like. It will be there if you do an install taking all the defaults. And again, getting the server for BigFix doesn't get you the agents.
Javascript on Browser Client of Management Server
This is what makes me think we might be talking different animals. We don't have a web-based management interface. Or rather, to be completely up-front, we use the IE libs in our MFC app which is our Console, and everything is run in restricted zones or comes from signed content.
Listening Network Services for Management Clients on Management Server
Isn't this one a dupe?
Middleware Frameworks and RPC
I think, from having listened to your Black Hat talk, this is referring to complicated protocols between the agents and server. We use a subset of HTTP, and move files around.

Client of Management Server Service on Agent

What? Can't parse.

Display Logic for Agent-Sourced Data on Management Client

Ah, we could potentially suffer from this class of problem if we have bugs there. You got one.

Confidentiality and Integrity of Client/Server Protocol

Isn't this a dupe? If not, which client and server are we talking about, if not the agent and server? The management console? Ours speaks the minimal HTTP and TDS, the MS SQL Server protocol. (Well, the Sybase protocol, but now I'm just being pedantic because I used to work at Sybase.)


Yep, got one of those. You can't compromise our agents if you get the database.

Agents tend to be installed en masse. Attacks that offer uniform compromise of all installed agents provide attackers with thousands of hijacked machines.

Yep. True of any central management system, if you find a flaw that allows control of the endpoints. How is this particularly the fault of agents?

Even in the absence of an exploit that compromises agent software directly, it is impractical to ensure the security of thousands of endpoints. But every machine running an agent must be secured if the management components are to be shielded from attacks.

Ah, you assume that only agents can attack the server? Not so for BigFix. Unless the customer has done some extra firewalling, anyone with IP connectivity can talk to the server. Attack away.

In a majority of surveyed agent-based systems, compromise of a single management server allows code execution on every agent, exposing the enterprise to a single point of failure.

For our system, let's call this "stolen keys". Yes, if you steal some keys (and the passphrase), you can act as the owner of those keys. That's why we have key revocation. We've got a whole PKI built-in, it works quite well. Something can always be stolen, spoofed, or impersonated. We went with what we felt has the best security, and has attestation to boot. Our financials customers love the audit trail.

This class of problem is true of any central management system. Steal the important authentication thingy, and you control the endpoints. Why is this particularly an agent problem? Do you prefer some sort of scanner thing that gives the admin creds to every IP it hits? Are you proposing no central management again?

Agent implementations are often substantially homogenous, even across operating systems, enabling uniformly effective attacks against desktops, Windows servers, and Unix servers.

We prefer to think of it as uniform management, but guilty as charged. So yes, if you steal some keys, we have a cross-platform language you can use to command the agents with. Admittedly, for other central management systems, you would have to craft your commands in a number of different shells.

Workstations of management operators are high-value IT targets, and compromised agents can inject poisonous data to exploit a myriad of clientside and XSS-style attacks to hijack their machines.

This is a potentially viable technique if we have bugs in that area. But like I said, you needn't be an agent to attack there, go for it. One of the points from your Black Hat talk was that apps that weren't Internet-facing didn't have to survive those attacks, and were weaker for it (my wording.) So far, our customers with Internet-facing servers and relays where attackers could try feeding bogus data haven't fallen over. Maybe we're just enjoying some obscurity.

[Section on the kinds of things Matasano has found elsewhere.]

No doubt about it, Matasano is good at what they do. I'm looking to have more outside auditing Real Soon Now. I have no illusions that we'll have a 100% flawless clean bill of health when we put it in front of someone of Tom's caliber. What I AM confident about is that we will do far better than the others Tom talks about (but can't name, because they don't have their patches out yet.)

First off, my programmers can beat up your programmers. Second, our architecture is designed to eliminate huge swaths of problems. That thing where we have everything that hits the agents be signed? Right. It means you can't throw attacks at the agents unless they are signed. You have to find flaws in OpenSSL or zlib to try attacks before that stage. While not perfect, we use those libraries for a reason. Third, when something is found, we get our patches out in a timely manner. The last big thing we had? 3 days. And since our system does software distribution and patch management, you could be fully patched about 10 minutes after that, or as soon as your change management allows.

[Tom's mitigating factors]
If I did point-by-point here, a lot of it would be redundant. Hopefully, some summarizing will suffice.
  • I fully disagree with removing the most important assets from management.
  • You don't need to segregate classes of managed machines if it's not important to "be an agent"
  • My protocols are as simple as can be. They just move files around. The files are all signed though, that's going to cause the attackers some trouble.
  • Suggesting "use SSL" alone is a boondoggle. The key point, easy to miss, is that Tom is suggesting that agents sign reports. While that has value, and BigFix will likely offer that as an option in the future, it shouldn't be key to the system surviving.
  • Use third-party auditing? Actually, I agree with you there, and I will be doing more. But is that recommendation a huge surprise, given Tom's job? ;)
[Full-snark mode, Tom's conclusions]
Agent-based architectures are incredibly convenient and can be a significant cost-saver for IT operations teams.
You forgot: And if you don't have one, or even some other central management system with the exact same class of problems, then you are in FAR, FAR worse shape than having a few agent holes to deal with.
In all circumstances, enterprises should seek to minimize the number of agent installations within their enterprise.

Indeed. And BigFix sales people are standing by.

In all circumstances, enterprises should seek to minimize the number of different agent-based vendors their enterprises must support.

Still right with you there.

Agent-based software should be treated as a high-risk target for attacks. Agent software warrants intensive security testing and analysis and rigorous access control.
Treat us that way if you like, we won't hold it against you. And then we will replace all the vendors who didn't hold up to scrutiny. After all, Tom's talking about our competition.

Thursday, November 30, 2006

Apple gives credit to evil haxors?

(OK, even I have to admit up front that I'm just giving Apple a kick in the joy department for the fun of it over this one.)

So, I'm having a glance around this evening, to see what kind of Mac security zealotry I should be enraged about lately. Gruber says they gave HD Moore credit. Hey, look at that! He's right.

Now, you would think that with all the recent past history on Apple and vulnerability disclosure, Apple would have a policy of not crediting researchers who don't "properly" report vulnerabilities, wouldn't you? After all, not even Microsoft will give you credit if you don't play nice. But for Apple, maybe that's not the case? Here's what I believe is their official policy:
For the protection of our customers, Apple does not disclose, discuss or confirm security issues until a full investigation has occurred and any necessary patches or releases are available. Apple usually distributes information about security issues in its products through this site and the mailing list below.
Near as I can tell, that's the entirety of their official written policy. If there's a better version, I'd love a link. Note that it doesn't say anything about giving credit or not.

So maybe it's Apple's policy to give credit to the discoverer of the vuln, regardless of how it is disclosed? If so, then kudos to Apple! You've done one better than Microsoft. Honestly, I don't see why you wouldn't. It's simply an acknowledgment of a fact.

Now, if we could just get proper credit attached to an earlier wireless vuln, and work on not pretending a problem doesn't exist until "any necessary patches or releases are available", then I'd be that much happier.

Friday, November 10, 2006

MS says not exploitable

Hey, look at Microsoft dropping the disassembly to demonstrate that something isn't exploitable. Nice. I don't think I've seen them do that before.

Monday, November 06, 2006

Shut down the Internet

A Sybase story this time. I was a network & security guy at Sybase for just under 5 years, between 1995 and 2000.

Speaking of 2000, I was at Sybase for the Y2K rollover. Like most IT shops, we had spent a couple of years planning for Y2K, and as it got closer, we got busier. Then I get a visit from the director of telecom. He tells me that for the rollover weekend, we will be shutting down all Internet, dialup and ISDN links.


Yes, his boss, one of the co-CIOs we had at the time, told him to cut off all outside communications because of hackers. The request got dropped in my lap, because I was in charge of all the Internet links, firewalls, and dialup lines.


It was explained to me that our CEO, John Chen, had been golfing with one of his buddies from HP, and he had heard through the grapevine that the hackers were going to be out in force on the Y2K weekend, and were saving their attacks for when companies were at their most vulnerable. Therefore, we were going to preemptively take down our communications links, just like HP!

(Remember the scandal about HP taking themselves off the net over the Y2K weekend, screwing their customers? No, you don't. It didn't happen.)

I tried, briefly, to deal with my upstream management on the issue. Nope, I was told it was a done deal. This was several days before the rollover.

I didn't wait long to go over everyone's heads and email the CEO explaining why he was making a mistake. Reasons included things like: "You're going to make SURE you have a major outage on the chance that you MIGHT have an attacker-driven outage." "I know a lot of these 'hackers'; they will either be working on Y2K at their day jobs, or drunk for New Years." "What about all of our customers who need last-minute Y2K patches? What about all of our OWN people who need the same from other vendors?" "Do you have any idea what level of attack we already get and live through every day? We get over a million failed connection attempts every day. Literally!"

And he started to relent. I had a reasonable explanation for each of his concerns.

The "deal" was that I would build a monitoring team, so that we had 24-hour around-the-clock coverage of the firewalls and other logs, looking for anything suspicious. I had to report in every so often. Anything really bad, and we would have to pull the plug.

Of course, nothing happened. After about 12 hours, the CEO got really, really bored looking at attack reports. Oh look, a port scan. Oooh... a distributed port scan! Hey, 100,000 attempts to connect to a telnet port that isn't listening.

But I had had to make 8 network & security people work the entire Y2K weekend, 8 hours on, 8 hours off, to be allowed to keep the links up. These were 8 people who had done their jobs ahead of time, like they should have, and by all rights should have had a nice relaxing New Year's Eve for the big millennium switch.

And Sybase was just going to screw their customers. Not to mention making us as a company look like idiots.

So, I got my way, forced Sybase to do the right thing, and had to suffer for it. And naturally, I got the warning email afterward about "going through channels" (which would have gotten me exactly nowhere; I had had about 2 days.)

I left Sybase on January 31st to go work for SecurityFocus. Sybase had made a corporate decision to essentially spam people, also over my objections. Plus, I was starting to get the kind of treatment that made it clear I was being punished for going over people's heads. This was just after I had tracked down a rogue sysadmin who was embezzling (a story for another time.)

Since then, I've taken jobs with people and companies that actually care about security.

(No, don't lecture me about what year the millennium rolled over. I have my own ideas about that.)

Saturday, November 04, 2006

Crashing MailWorks

Another Bechtel story.

Bechtel had standardized on DEC MailWorks for their corporate email standard. Previous standard was PROFS on the mainframe. We had enough MailWorks users going that we needed a VMS cluster to deal with the volume, and have some redundancy in case of an outage, maintenance, etc... all the stuff you want a cluster for. I'm actually DEC certified on some of this stuff. I get lots of use out of that now, let me tell you.

One day, mail goes down. The senior VMS admins determine that the MailWorks server process had gone down. On all the machines in the cluster. At the same time.

OK. So they try to run it again, and it comes up. As they are trying to bring it up on another machine in the cluster, they both go down again. It had only been up for a few minutes. So they try one machine by itself. It runs for a minute or two, and goes down again.

They do some dump analysis, and can see that the process is crashing. Not that this helps with how to fix it. After a bit of in-house fiddling, DEC is called. Some phone support doesn't help, must be a hardware problem somewhere. On every box in the cluster? OK, a hardware problem in the cluster interconnect (CI), then. Waste time, cannibalize hardware, break cluster, determine that problem happens on one server, no cluster, and machine works fine for all other software. Dispatch DEC technician to site.

Reload OS, MailWorks software, runs clean. Problem solved? No, when you give it the mail spool, it crashes again. And yes, we DO need our old mail, thanks anyway.

But now we know it's something in our mail files that is causing it. Maybe we can figure that out and surgically remove it? OK, so they binary split the files, and determine that a single email is causing the problem.

This is several days into an outage, mind you.

Email is examined, and it turns out to have a really long subject line, like thousands of characters, almost all spaces. Some experimentation shows that once you hit a subject line of 1K or so in length, MailWorks takes a dive. (Ah yes, I saw that light bulb go off over your head.) And if you have a cluster, when one server crashes, the next one dutifully takes over mail processing, until it hits that same message.

Message is purged, and people can actually get back to work.

They track down the user who sent the killer email, to find out what the heck she was thinking. Turns out she was eating breakfast, and reading her email. A piece of Grape Nuts cereal lodged in her keyboard, and managed to hold the spacebar down. She still sent the email after that, but remembered having to dislodge the offending Grape Nut.

So an entire VMS MailWorks cluster got taken out for days by a piece of Grape Nuts. But that's not the punchline.

After DEC support had been largely useless for days and our guys had to more or less fix it themselves, we submitted a fix request. We didn't want this happening again. We were able to send a specific problem description, number of characters, sample email, the whole bit.

DEC's response was: Oh yeah, we know about that! Here, we've had a patch available for a while. Why weren't we (one of the largest MailWorks installations in the world) told about that? Oh, uh... you have to call with a problem description that indicates that patch is needed. OK, and when we DID call with a problem like that? And you sent out a technician, why didn't he know? Why don't you publish the patch list? Uh.. well...

And I believe that was my first practical introduction to buffer overflows and vendor patching.

Thursday, November 02, 2006

Threat vs. vulnerability

Inspired in part by Richard Bejtlich, I present Yet Another Horrible Information Security Analogy (YAHISA): A tale of bunnies and kitties.

Imagine a lush green field of grass and clover, where bunnies frolic and play. These are cute white bunnies, with pink eyes. And the occasional black bunny, which inexplicably costs more. The bunnies in this field have no natural predators. The wolves don't know about this field.

Now, picture a city cat that roams the streets, getting into fights, disappearing for days at a time. When it comes home, it's missing a little more of its ear, or occasionally needs to be stitched up. If it gets into a fight, sometimes it wins, sometimes it loses. It will eventually be run over by a car. Its bloated carcass will be poked by children with sticks.

The bunnies are vulnerable. The kitty is vulnerable, and has threats.

Wednesday, November 01, 2006

You want Mac wireless bugs?

So, the Month of Kernel Bugs (MoKB) begins today. They start by releasing a live exploit for a remote kernel bug in older PPC Macs with Orinoco-based chipsets. "1999-2003 PowerBooks, iMacs". (Note: I've done no independent verification of the bug, I just trust the people reporting it.)

With no official notification to Apple, and no patch available.

Even though the machine I'm typing this on right now is vulnerable to the exploit, I believe this is the appropriate way to handle this release. Why? Because of the way Apple handled the same kind of issue with David Maynor and Johnny Cache, of course.

Apple thinks it should not even acknowledge unpatched bugs. It (apparently) thinks that it should issue press releases denying the issue and use vague legal threats against researchers to "protect customers".

This kind of release is the result. If Apple doesn't want to play responsible disclosure, then the researchers will be happy to oblige. I trust there will be no denial of the problem by any interested parties this time?

(No, not really. The Mac zealots still won't believe it. But it sounds good, anyway.)

Update: I'm taking my Mac off the list of affected machines. It's an iBook G4 with an Airport Extreme that was purchased separately. It appears that the Extreme (802.11g) version of the Airport isn't affected by this particular bug. I might as well try to be careful about technical accuracy. I've seen how the Mac community reacts to any little inaccuracy.

Saturday, October 28, 2006

Purpose of a firewall

Periodically, I see statements like "firewalls are useless" or "firewalls are dead". (Or IDS, or antivirus, pick your favorite security product category.) Does that mean you no longer need a firewall? Of course not. What it really means is a couple of things: one, a firewall is such an obvious requirement that it is just a given. And two, client-side holes are exploited so frequently that firewalls are not considered to contribute significantly as a preventative measure anymore.

Allow me to remind everyone what the purpose of a firewall is. A firewall exists so that you can do something risky on the protected side. That's it. You want to use Windows networking? You want to use cleartext protocols? You want to use enterprise software? (Or is that Enterprise Software.) Then you do that kind of thing behind a firewall.

If the systems, software, and protocols were hardened enough that they could be on a bare Internet connection, you wouldn't need a firewall. But I've never seen a company that didn't use at least one piece of software that couldn't make that cut. So they have a firewall.

Firewalls exist so that you can do risky things on the protected side.

Monday, October 23, 2006

Microsoft vs. McAfee & Symantec

I write for the Windows Secrets newsletter. Usually, you can only see my articles if you're a paid subscriber. Every once in a while, I end up writing a special update, or the featured article. Meaning, you can read them for free. I figure if you read this blog, then you probably have some interest in my writing.

This article is my take on the whole debate about Microsoft locking vendors out of the Vista 64-bit kernel.

Saturday, October 21, 2006

Nicolas Brulez analyses a virus

Nice example of a virus/bot analysis by Nicolas Brulez at Websense:

Has some good IDA Pro tips. Nicolas is a really good reverse engineer. He taught me the proper way to unpack a file, and helped me give a presentation at the first RECon.

RSS Feed

I suspect that no one has subscribed as of this writing, but if you've subscribed to my blog with the default Blogger Atom feed, I would appreciate it if you switched to my Feedburner one. You can see it in the upper-right if you're reading this in a browser, or use this link: .

This is so I can keep track of you if you read this via RSS. I've also added a Site Meter counter. I'm pretty new to Blogger. If I screwed up something, please let me know.

So what's up with Digg?

First thing: I am an utter newb when it comes to Digg. So a lot of this post amounts to stupid user questions. But hey, maybe I'll get some answers. I did try to do some searching, but the sheer volume of "digg" hits with any given keyword makes this somewhat challenging. The volume is one of the things that makes Digg useful, but I normally read it through an RSS feed.

Yesterday, I was in a debating mood. So I waded into a Mac security discussion on Digg, here:
This is me on Digg:
(And no argument from me that the original article there is inflammatory and inaccurate. I wanted to argue with the people who don't know the difference between threats and vulnerabilities, and so think the lack of threats means there are no vulnerabilities.)

A few brief observations. First off, tons of Mac fanboys who aren't particularly knowledgeable about security, but have a lot of blind faith. No surprise. If I make a post to the contrary, give a counterexample, or ask someone to explain their position, it gets dugg down. Also no real surprise, I've seen this happen before with other users. But I find the volume and consistency of that behavior interesting. It appears that if you don't like or don't agree with someone's post, you give it a negative digg. Well, I don't, at least not yet. If you're discussing something, simply shouting down the other person is pointless and rude. But I see that that is how it works. I'm guessing it's gameable, too? Just by having multiple accounts?

I see that as broken. And this is from the point of view of a longtime Slashdot user. Sure, I'm used to seeing unpopular opinions modded down on Slashdot in a similar fashion. But not nearly to the same degree. Why is that? Because Slashdot has caps on both mod points, and how high or low something can be modded? And most people don't get mod points often? Because you have to supply a reason for the moderation (interesting, flamebait, etc...)? Because you can see someone's ID number, and can tell how long they have been on Slashdot? Because you can't both moderate and participate in the same discussion? I'm not sure, probably some combination of those and other factors I haven't observed.

I will throw one opinion out there, that it's probably a bad idea to simply give people an infinite supply of anonymous red buttons to shout down someone they disagree with. Especially if those buttons don't obviously represent some objective quality of the post in question.

Now, some regular bugs/questions/feature requests:

  • Why aren't discussions threaded? Why, in order to reply to a particular comment, do I have to go find the parent to the whole thread? Then I probably have to click "show comment" because it was dugg down too far. Then click reply. Then scroll all the way back up and find the post I wanted to reply to. Then manually copy the person's username into my post to show which person I'm replying to. Then cut-and-paste the text I want to quote. Doesn't seem very Web2.0y. How about if there's just a "reply" button on every post so that it's clear who I'm replying to, and it could even autoquote. You know, like every email client for the last 20 years.
  • How do I know when a discussion I'm participating in has been responded to? Part of this is related to the previous threading issue, I'm sure. It's hard to track who is talking to whom, when the discussion is almost entirely flat. So, fix that, and then give me the option of getting an email when someone responds to me. Or at least a link somewhere on the site where I can see new responses I haven't viewed yet. Where's my "subscribe to this thread" button?
  • There's no way for me to link to a particular comment?
  • When digging stories, I can filter by particular topics and properties of the submissions (age, popularity, etc..) How do I filter out the ones I've already dugg?
  • Where is Digg's todo/upcoming features list?
  • Where is the bug database for Digg, so I can see if these things are known or have been requested?
I'm not just trying to complain. Some of these things must have simple answers, and if someone would supply those for me, I would appreciate it. I have tried to do some searching, but "digg" and any keyword you can think of will simply pull up a list of stories that have been linked from Digg for that topic. There needs to be a keyword that indicates that it's about Digg itself, "metadigg" perhaps.

And, naturally, I have dugg this blog entry, so I can see some of the rest of the process, and maybe some answers to my questions. If you diggers do end up coming and helping me out, then I thank you in advance.

OS X Malware

Just to be up front about it: Yes, this entry was created in the spirit of stabbing OS X zealots in the eye with a lit cigarette. Why? It drives me absolutely insane when people who clearly have no concept of how these things work insist that Macs can't get malware, don't have vulnerabilities, or have some magic security model. Yes, I realize trying to educate someone like that is masochistic. However, I wanted to have a more convenient place to point to when some clueless Mac fanboy says "show me even one virus for OS X!!".

I don't care to claim that the problem of malware on OS X has in any way reached significant levels. Nor am I trying to say that it is imminent. I do mean to say that it is not non-existent, and that it is certainly not impossible that it could happen.

So I'm going to try to maintain a list. I'm doing "malware" here, not exploits nor vulnerabilities. For my purposes, that includes viruses, trojan horses, worms, rootkits and spyware. I'm also going to limit this list to malware designed for OS X. There is a long list of macro/Office based stuff, things for OS 9 and below, and so on. Yes, I realize that some of it still probably works fine on OS X under the right circumstances.

(Sony Rootkit)
Not malware:
But I put it here for reference. This is to address the people who want to claim that malware would have to ask for your admin password. Not that there is any requirement that malware be root, of course. In the OS X security model, any admin user can write to everything in /Applications.

According to the author, .D is no longer a worm, but is an autorooter. Unless I have time to look at it later and change my mind, it does not appear to meet my definition of malware.

Saturday, September 16, 2006

What makes a good programmer

Aha! I just found a quote from Joel which puts into words what makes a good programmer.

You need training to think of things at multiple levels of abstraction simultaneously, and that kind of thinking is exactly what you need to design great software architecture.
The quote can be found in this blog post.

I didn't realize it before, but this is what makes the good programmers at BigFix, good.

Saturday, September 09, 2006

Ruth's Chris Steak House

It was my birthday the other day (37. Thanks for asking.) I wasn't really into a party or cake or presents or anything, so my wife took me to dinner. We went to Ruth's Chris Steak House. The food and service were both excellent! It's just a little expensive, though. It actually ended up being a bit more expensive than we even thought it was going to be, because our waitress misquoted the price on one of the specials about $40 too low. It wasn't a big deal, and the correct price wasn't really out of line with the rest of the items. So, two of us, I had the American kobe beef special and the Australian lobster tail special, wife had a filet, we had 3 sides, and cheesecake for dessert (dessert was free, because of the birthday.)

The total was $192 before tip and valet. I wouldn't have spent quite that much on purpose, but man that was good.

ELER Mention

One of the on-line comic strips I like to read is Everybody Loves Eric Raymond. I got a mention there the other day, as "Famous security dude Blue Boar (Ryan Russell)". (Yes, I'll accept that description :) ).

The comic that day, "Bruce Schneier Facts" is also quite hilarious, as is the database that goes with it.

He has a "Knuth is my homeboy" t-shirt, which I purchased. I happen to be wearing it as I type this. It's just a funny shirt all-around, but you wouldn't enjoy it on as many levels as I do, flavin.

One reason is that my main character from the "Stealing the Network" series uses "Knuth" as a handle, mostly to piss off the other hackers. (Which worked pretty well on Fyodor.) Another reason is because the picture used was taken by Jake Appelbaum, whom I have met a number of times.

So I wore the shirt to Black Hat one of the days, and had Johnny Long take a picture of me in it with Darci and Jaime.

Why can't I print this email?

Many years ago (about 1990-1995) I worked at Bechtel in San Francisco. It was the kind of place that made you wonder if Dilbert creator Scott Adams worked there. So I will likely have a number of stories that are set there, if I can remember them.

Bechtel was a long-time DEC VAX shop, so we ended up using a lot of strange DEC products, probably long after we should have been. For example, DEC PathWorks, which was DEC's weird LANMAN-based NETBIOS over DECNet (Phase IV). We were also using the DEC email product, I want to say it was called "MailWorks", but I can't actually remember. A lot of their stuff had "works" in the name, I think they were trying to convince themselves. And this is back in the day when Windows wasn't a given, and we're talking Windows 3.1x.

So one of the executives calls the helpdesk, and wants to know why he can't print his email. We thought that was a little strange, since the printing generally worked well. We troubleshot the usual queue problems and such, and then sent someone up to see him. OK, so the problem turned out to be that the print function just wasn't there in the program mode he was in, which was the compose mode. In other words, he wanted to print the note while he was still typing it up.

OK, so why did he want to do that, we asked him. He said he couldn't send it unless he printed it out. Huh? Of course he could, just press the "send" button. And you can even print it out from your "sent" items, if you want. No, he doesn't want to send it that way, and he needs to print it!

After backing up several steps, the person finally gets the full story out of him. What he wanted to do was type it in the compose window, print it out, and then FAX it to the person it was addressed to. That's right, he just wanted to use the email program as a word processor.

When questioned as to why he didn't just send it via email, he said that he couldn't be sure it got there that way. OK, so why is FAX any better? You can't tell that it got sent for sure that way either.

Yes you can, he replied. You can see the paper going into the machine, so you know it got sent.

Tap Whistle

(Originally from a writing exercise I did in my Slashdot Journal. There's a writer there by the name of SolemnDragon, and she occasionally gives out said exercises. This "universe" is one I've had in mind for a while. I haven't been satisfied with the level of tech detail in other steampunk stuff I've read. I may do more in this vein, we'll see.)

Tap Whistle hated to work in the rain. It loosened the black from the streets and buildings, and made the manhole diving unbearable. You didn't want to get caught by the Plumbers when it was raining. If you tried to run, you'd just end up slipping in the black runoff. After a couple of hours of rain like today, the sewers would be full up to your knees.

It also seeped into the battery jars, and the top layer of grease would short out the 'nodes, leaving you no voltage. Anyway, you didn't want to get caught with a jar if you could help it, or else you would be charged under the Tesla ban.

The rain made it too noisy to scope the street for audio, too. Not that sound would do him any good at the machine point he planned to monitor today, not from the topside.

Whistle wouldn't even bother on a day like today, except that he had a rare motivation, a paying customer. It seems that several of the local "plumbers apprentices" had named him as the best when the norm had come around looking to hire a hole diver. He was even more nervous than Whistle, and it made him laugh inside to think how paranoid the norm was about getting caught. Whistle wasn't worried, why not get paid for some of his fun? He suspected he wouldn't be able to get the message anyway.

Whistle didn't actually have to pop any holes today, so he had left the crowbar at home. This junction point was big enough that it had its own housefront. Most of the major machine points had a little house-like building on top of them. The house part was little more than a single-story box with a front door. Inside was just some storage, a wall of valves that ran below, and the circular metal staircase that led down to the workroom. Whistle had a key that he had traded for, that would open the front door. It was a simple warded key, not one of the newer pin tumblers. Those were not thought to be reliable enough, though the lockers considered them more secure. That was about the extent of Whistle's lock knowledge, which he had mostly picked up from trade pamphlets and a couple informal demos from the lockers at the meetings.

Whistle checked for any of the copper-clad Plumbers carriages on the street before letting himself in the door. Once inside with the door closed behind him, he headed straight downstairs.

At the bottom of the stairs, he stepped right into the water, feeling the cold grip on his calves, dragging at his pant legs. The rain was seeping from the walls, and dripping from the curved ceiling, between the bricks. Parts of the sewers under the city went back to Roman times, though not under a machine point. In a machine point like this, they had typically been dug down two stories worth, and rebuilt, like a mini Underground station in the dark. They didn't carry any trains though, just pipes and conduit.

Whistle's target today was Lloyd's. They were an old user, so they still mostly used the pneumatics. Usually, only the newer users used rods, because they didn't have as many feeds to convert. There were a couple of exotic hydraulics in town, used in local building carrys, but that was only the standard in America. You wouldn't find a hydraulic in an official machine point. Whistle had a few catalogs from Edison's Hydrologic Manufacturing Company, describing what they had over there.

He lit the gaslight, and pulled a couple of books from his pack. One was the city feed directory, which would give him the numbers he needed to check for. Customers would use these to look up the endpoint and route. The other was a stolen PCL manual, which would give him the stamped numbers he would need to read off the pipe he wanted. He looked up the machine station he was in, and found the list of Lloyd's serials. Lloyd's had mostly low numbers, they had been around longer.

One challenge was that, through this particular station, Lloyd's had no less than 21 tubes, too many to monitor at once. Whistle knew to check which switch they went to, though. And only one switch down here led to the destination he was supposed to watch for.

He found that only four of the tubes went through that switch, so that was the set he would have to watch. From his bag he pulled a set of lodestones and reed flags.

Carefully, he found the places in the middle of the tubes where the plungers would have to cross. The places where, when the plunger went back and forth, it would flip the flag one way and then the other, giving him a visual means of watching the bits. Down here, you could use a horn to listen to one pipe, if you only had one to watch. Well, maybe two. He had heard of one blind kid that could do two at once.

For a lot of beginners, tapping by ear was easier. Especially if you were used to decoding by ear at a legitimate endpoint anyway.

But that didn't help if you needed to watch four. Whistle set up the reeds so that the reflective sides were to the right, where the gaslight was. Once the plunger started going, the flashes would let him read the message right off the pipe.

Monday, September 04, 2006

When, where, how and for how much, to reveal your vulnerability

You know, I can do these long logic chains based on a lot of assumptions as well. Can I get some vitriol?

So, you're a researcher, and you've got some sexy new class of exploitable flaws you've found. You do your presentation at con, but it seems like everyone's employers nowadays don't appreciate presenters dropping 0-day. Therefore, you decide to show a video clip instead.

You decide to playfully pick on a group of smug OS users who generally think they are more secure. (I know. Security researchers bursting the bubble of someone with a false sense of security? I'm shocked too.)

Trying hard to be "responsible" (as defined by the software vendors), you give the vendor some heads up that you're going to be showing a video demo of yourself 0wning their kernel driver. Lo! This vendor, who happens to actively cultivate this perception that their stuff is more secure, takes exception.

Let's talk about this particular software vendor for a sec. They have repeatedly demonstrated a willingness to sue anyone who reveals anything they aren't ready to reveal. They are willing to sue every time. Even if it's true. To the point where you might have to take them to the state supreme court to try and keep them from going after your sources.

Of course, that's for news, which theoretically has some constitutional protection in the U.S. How do they feel about vulnerability disclosure? "We don't feel that our customers are better served by public disclosure of potential issues". Oh.

So, maybe picking on Darth Litigious isn't such a hot idea. They decide to instead demo one of the third-party cards with its own vulnerable driver. And not even identify the card, so that vendor can't complain either. Yeah, it kind of weakens their demo, but they don't have a lot of choice.

Maybe they could just mention in passing that the sue-happy vendor's built-in card and driver have similar problems?

Surely, the masses won't ignore the impressive 802.11 research presented that made up 80% of the talk, and only focus on the demo? And pick the demo to death only because it affected their favorite platform? Surely, it can't be possible that rational, sane people would believe that the problem is demonstrable on FreeBSD, Windows, and even their own platform, but with a third-party driver... and then not believe that there is any chance whatsoever that the same kind of problem exists on the driver that ships with the OS?

No, clearly, the researchers must have faked the video. It seems MUCH more likely that they would use a third-party card ONLY as a red herring. Not because the OS vendor breaks out the lawyers at the drop of a hat. No, they faked the video, and they didn't show themselves popping the native card because, well... that's more believable. Or something.

So, clearly the zealots were right all along, the researchers are frauds. Wasn't it stupid of them to get up in front of all their friends and peers, and pull a scam? Especially since at least one of them had proven himself more than competent over the years. Oh well, no accounting for stupidity.

But zealots are rarely willing to let things go at victory. No, how about the zealots taunt the researchers with promises of prizes, on the off chance that the researchers have something to actually show? Maybe all they were waiting for was a shiny thing. And not the threat of lawsuit.

Let's examine the offer from the zealot.
  • Zealot will buy said vulnerable (Ha! As if!) shiny thing
  • Zealot will not permit researchers to put their filthy paws on the shiny thing
  • Researchers will use their exploit, which they have promised to keep private until the patch is out
  • If the exploit doesn't work flawlessly on the first try, then researchers will either have to give the zealot the cost of the shiny thing, or it will be called "even". Where "even" is the researchers have to pay no money, but zealot will crow about victory, and researchers will have proven themselves untrustworthy by using the exploit they said they would keep private, and maybe get sued.
  • However, if it does work flawlessly, the researchers will be up one shiny thing, and will only have proven themselves untrustworthy, and maybe get sued. Plus, zealot will have some excuse as to why it doesn't matter because, well, whatever, nuh-uh!
  • All judging will be done by zealot, who would be out the cost of one shiny thing, and prove himself completely wrong if he declares the researchers the winners.
So, back to my opening question, if you're a researcher in this position. You've got this sexy vuln, what do you do with it? Here are some options:

  • Ignore any potential gain from your work, don't present it, sell it, use it as a resume item, etc... just post it, and take a chance the vendor will be really mad about that. Tick off potential employers. Anger some peers who think that is irresponsible.
  • Sell it. TippingPoint and iDefense will offer $10,000 or more. That's like, enough for 9 shiny things! Note that you will be required to keep the exploit private until the patch ships.
  • Present it and try to warn people about this class of problem. (Also, you get some travel expenses, and maybe enough money for 1 shiny thing. Woo!) Note that this does not necessarily prevent you from releasing the exploit if you want. Unless maybe your employer paid for some of your time, and insists that you don't. Or maybe your peers and potential employers and customers wouldn't like that. Or maybe the conference itself got sued for that sort of thing last year, and it wouldn't be cool.
  • Unless you tried to be nice to the vendor by giving them some advance notice, who then turns around and makes you change your presentation and hold your tongue. Even if they later issue a public half-denial that they know about the problem. Because, you know, presenters and conferences get sued for that kind of thing now...
So, holding all details until a patch is released looks like a pretty good strategy. The researchers probably would have had more options if they hadn't tried to give any vendors advance notice, but it's a bit late for that now.

Maybe the vendor is trying really hard to communicate to the researchers that the best strategy is to just blindside the vendor? Maybe they like a challenge.

In case it's not obvious, I don't believe that David and Johnny faked anything. They are being really big about the whole thing, despite taunts, derision and bribes. I believe they will be proven correct when Apple puts out the patch (which is, of course, completely on Apple's schedule.) And I also believe that the same people who are calling them frauds now will probably still be grasping at any little detail which might help them keep from admitting they were wrong.

Second Coder Wins

I subscribe to the school of thought that says the second coder always wins. By that, I mean that after you write your "undetectable" rootkit, someone will analyze it, and find a way to detect it. If your malware kills all the protection mechanisms on a victim, then the AV vendors will recode their apps so that the technique you used to kill them no longer works. IDS vendors will find a way to detect your IDS evasion, and so on.

Exceptions: Crypto might be an exception, though I've been surprised by the number of crypto algorithms that have fallen in recent years.

Wednesday, August 16, 2006

In IDA Pro, to defeat a simple IsDebuggerPresent check

  • Set a breakpoint at the top of the start function
  • F9 to run the program
  • Shift-F2 to open the IDC window
  • PatchByte(EBX+2, 0); (in this binary, EBX held the PEB pointer at the breakpoint; which register does is binary-specific, so check the disassembly first)