Saturday, December 30, 2006
I've worked on a bunch of operating systems over the years. I've forgotten all of the 8-bit ROM-based "operating systems", RSTS/E, CP/M, VMS, and most of Netware. And of course I've forgotten the other ones I've forgotten. That leaves, more or less, DOS, Windows, and unix. And by "Windows" and "unix", I mean a bunch of different flavors. Though these days, if you sit me down in front of even Win9x, I'm a bit rusty.
I wouldn't really have any basis for understanding the security model of an OS more than 10 or 15 years ago, not beyond the overt features. "Overt" being stuff like usernames, passwords, file permissions, and ACLs. Mostly because I just didn't know that stuff back then, but also because the majority of stuff I touched, by volume, had no real protected memory model. There would have been a little bit in NT 3.1, and it was there in unix and VMS, but again, I had no real basis for grokking it.
Point being, I learned my file permission stuff on Netware and VMS, and the more complicated things like process permissions and kernel and user separation on NT4 and later, and unix. Picking up NT-based Windows and unix was fortuitous, since that's where the market went. But that was a self-fulfilling thing, since I worked on what was popular. I actually managed to leave Netware behind about when I would have needed to learn about NDS, and I stopped doing any heavy-duty Windows admin about when AD became the dominant domain model for Windows. So I'm missing some directory services brain damage, too.
I took all of the NT4 classes, but only bothered to take a couple of the tests. I took some user-level unix classes, but mostly picked it up on my own. I would say that the majority of my useful experience is on-the-job self-taught stuff.
So why do I feel like I have the entire unix security model in my head, but I only have a tenuous grasp on some significant chunks of the Windows security model?
I've done DOS and Windows far longer, though to be fair anything before about NT4 isn't really pertinent. I've done more Windows, too, in terms of hours spent.
Obviously, part of the answer is that the unix model is just simpler. Everything's a file, you get owner, group, and world. There are a few special sticky bits. Everything runs as a user and gets its permissions the same way the filesystem works. Even the pipes are relatively simple. Kernel and user separation are clean. I understand what happens with environment and handle inheritance for child processes. I find the user database very easy to deal with. The typical init startup process is nice and simple. Signals are easy.
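That whole model fits in a few bits per file. Here's a quick sketch of just the permission-bits piece, in Python; the scratch file is a stand-in, and nothing here is specific to any particular unix:

```python
import os
import stat
import tempfile

# Create a scratch file and give it a typical owner/group/world mode.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)   # owner: rw-, group: r--, world: ---

mode = stat.S_IMODE(os.stat(path).st_mode)
fm = stat.filemode(os.stat(path).st_mode)
print(fm)   # -rw-r-----

# The three permission triplets are just bit fields, three bits each.
owner = (mode >> 6) & 0o7   # 0o6 -> rw-
group = (mode >> 3) & 0o7   # 0o4 -> r--
world = mode & 0o7          # 0   -> ---

os.unlink(path)
```

That's essentially the entire filesystem side of the model: three 3-bit fields plus a few special bits, and processes get checked against them the same way.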
Even when you add on things like NIS, full ACLs and SELinux, I think I'm following along just fine.
Windows on the other hand...
File permissions I'm good with. Same with the Registry, it's basically just another filesystem. Process permissions? I gather that they each have a privilege token, and sometimes privileges to change privileges and so on... I know there are a couple of different types of pipes, not sure what's going on with the security model there. Services can be running as different users or local system. There's the event system, I vaguely recall hearing about there being ACLs on that. And I get this sense that there's a ton of other things that I don't even know about.
What happens when Windows boots? When I log in, what process(es) are creating my processes, and what, I get a set of tokens as well? When I change a password, what is happening with permissions that ultimately write my new hash out?
It just starts falling apart for me.
Now, I don't think I'm incapable of learning it. But part of the point is that I never made any concerted effort to learn either security model. Yet, unix feels like it's right there with minimal effort, and Windows is making me work for it. If I ever catch up with Windows, it will be because I made the effort to track down and study some serious documentation. And I'm not opposed to that, I just haven't done it. Pointers to favorite docs welcome.
Part of what I've concluded about this is that the unix model is superior. And that's "superior" in the practical sense that if you can understand it, it's going to work better for you. I'll happily admit that the Windows model might be more expressive, maybe allowing you finer-grain control. But that does me no good.
I think another reason that the unix model works much better is because unix itself is far, FAR more modular. I can strip a unix box down to the floorboards, leaving it with no functionality other than the purpose for which it exists. I've done this before with firewalls, various servers, and so on.
Many unix functions do not have a lot of interdependency. I can kill off portmapper, and not have it disable the majority of my administrative tools. Configuration storage is just as likely to be a text file, which I can change by hand without needing a front-end tool. I can turn off the damn GUI.
And this is where I start to not be able to articulate it much better. Anyone else have a more elegant way to explain what I'm trying to get at? Does anyone else's experience even match mine? I have to assume so, since there's a big correlation between security people and unix fans.
OS X is a slightly different beast. It has (for me) a lot of the obscurity of Windows layered on top of unix. I've got nothing when it comes to the Mach kernel or the window manager. But even so, with the unix underpinnings, I am in a much better position to pick up the rest. I can already see where they have made horrible permissions mistakes.
In any case, I'd like to have this be a proper essay someday, and I could use the help explaining it better. I'd love some feedback, even if it's just "me too" or "you're high."
Sunday, December 10, 2006
Richard Bejtlich mentions it again while commenting on the agent discussion.
Worse, as is the case any time you add code to a platform, you are adding vulnerabilities.
Of course, I understand fully what people are saying when they say this, and I'm not trying to disagree. But that got me thinking.
Now, you can't have a negative number of vulnerabilities in your own code. At best, the theoretical minimum is 0. But could you, by adding code, remove someone else's vulnerabilities? I think it might be possible.
First off, there's the obvious case of a patch. You added something, and are now (hopefully) down one vulnerability, right? Well, not exactly. Modern patches work by replacing an entire chunk of code with one just like it, (hopefully) minus the vulnerability. It will, for example, replace an entire DLL. So that's not more code, that's different code. And sure, it might introduce more vulnerabilities too, but I'm trying to make a point here, work with me.
So, fair or not, I've just ruled out the vendor going negative on vulnerabilities. At least, not within the same software package. Nothing says Microsoft couldn't add something outside of Word to remove Word vulnerabilities, though.
How about third-party patches? Well, depending on how they work, they might qualify for what I'm thinking. If there is something that sticks around all the time, and is somehow removing a vulnerability from another piece of software on that system (and adds none of its own, of course), then maybe it just achieved negative vulnerability.
Now, it's unlikely that a program of any size is not going to have its own vulnerabilities, so it had better fix a large number of someone else's. This is what I think the HIPS category is trying to do. If you look at Blink or Determina, they dig around in the guts of someone else's software, and do things that try to remove vulnerabilities. Or specifically, make them nonexploitable, downgrading them from vulnerabilities to just bugs. For purposes of this discussion, let's call the process exiting instead of running shellcode "no longer vulnerable."
Briefly, a sidebar: I'm quite annoyed by the current use of the term "HIPS". If you happen to be a Windows Secrets paid subscriber, you may have seen my article about that. In short, there is a bunch of stuff calling itself HIPS, including traditional stuff like AV and file integrity checkers (read: Tripwire). Don't do that. Otherwise, you don't leave me a good name for things like Blink, Determina, W^X, DEP, and those categories of protection. Naming suggestions welcome.
So let's assume these things work, at least partially. If so, and they remove a substantial number of vulnerabilities (more than they add), then you have added software and removed vulnerabilities. Discuss.
(For those wondering if I have an ulterior motive on this one; not really. BigFix doesn't do this kind of thing. We might someday partner with a vendor that does, or add management of such products to our agent, but we don't muck about in the binaries of other software. I'm just interested in exploring the idea of adding software to remove vulnerabilities.)
Friday, December 08, 2006
This one started with a post on Tom's blog, one of my favorites. Tom in turn says his was in reaction to a post from Alan Shimel, who is replying to an article by Ray Wizbowski. That gives you some idea where in the food chain I am. But it doesn't get interesting for me until Tom's post, so that's where I start.
I'm not trying to feed any kind of Matasano/BigFix war but hey, they started it. (For the seriously humor impaired, I'm kidding. I've known most of the Matasano guys casually for years, and Amrit has worked with Tom, and so on. We're just disagreeing with each other using technical points, that's how it is supposed to work.)
If you're just running across this post via some other blog, I'm the QA Manager at BigFix. We are a vendor of agent-based systems management software. Though, I'm sure our lawyers would like me to point out that I'm providing my own opinions here, and I'm not a company spokesperson. Anyway, probably because of where I work and what I know, I'm a fan of the agent-based approach, and naturally I think we do a fine job, our stuff is secure, and we can do it all. So if you're thinking "Ryan is biased", well duh.
Onto the bloggery. I make a few comments of little substance on Tom's blog entry, and he emails me to politely suggest that if I want to disagree, I should quit making snide jabs, and get to the point-by-point.
Here's the premise I start from:
- You have a large number of machines (an "enterprise")
- You wish to have mass control over them (you want "management")
- The software that comes with the OS is insufficient for this purpose (you're going to buy some "software")
Tom's assertion is that agent-based software is bad, m'kay? and you should avoid it. To be completely fair, I'm seriously summarizing and putting words in his mouth. But take a look at the title of his post I'm responding to "Matasano Security Recommendation #001: Avoid Agents" and this slide which says "Enterprise Management Applications - Threat or Menace?", and you sense a theme. Yes, Tom is quite fair in the details, and will tell you he can only make claims about stuff he has tried, which does not yet include BigFix.
I understand good storytelling, yet I'm getting covered by these blanket statements. So hopefully it is understandable if I feel it necessary to respond.
So, you need some enterprise management software. Your basic choices are agents and scanners, because I've already ruled out any kind of one-by-one method as impractical by the time you get to a certain size. I'm of the opinion that you can only get so far with pure scanners. For example, they can only inspect, they can't change. If a "scanner" can change things, then what you have is a scanner-driven part-time agent: they push an agent onto the box long enough to do their business, and then get off again. And yes, there is value to only having the agent on there the absolute minimum amount of time, so I don't want to totally dismiss that benefit.
Let's check Tom... OK, he's not advocating scanners. In fact, if I'm not reading too much into it, he's not even saying you shouldn't run agents at all, he just wants fewer. But wait, are we talking per machine, or what?
Do you mean that some machines shouldn't run any agents? Then how are you going to manage them? Nothing wrong with hand-maintaining a small number of critical machines, of course, but I don't think that is what is being suggested. So this might be basic choice number 1: Are you at more risk by not having management of your machines, or by having an agent, even if it is a "bad" one? I still have to go with agent. Simple math will get you there. Count all the various threats out there, and only a small handful of them have been aimed at agents.
1. Minimizing the number of machines that run agent software.
2. Minimizing the number of different agents supported in the enterprise as a whole.
I think this second point is far more central to Tom's message. And I don't disagree with him. Again, no huge surprise, since BigFix replaces a number of other agents. See next point.
Endpoint agents are programs that run silently in the background, usually as Windows Services or Unix daemons, which communicate back to a central management system. Well known examples include:
- Systems Management (BMC Patrol, CA Unicenter, Microsoft MOM)
- Antivirus (McAfee, Symantec)
- Patch Management (Novell ZenWorks, SDS, BigFix)
- Data Leakage Prevention
Good definition, I agree. But not on the categories, not for BigFix. We do systems management, AV management (we manage something like 6 or more AV vendors' code and signature updates), AV & antispyware engines (OEM'd), patch, software distribution, power management, inventory, etc. We do NOT do the HIDS functions a la Blink and Determina. That would be an example of someone else's software we would manage.
So it's incorrect to stick BigFix just in patch. It's a common mistake; patch is all we used to emphasize up until a few years ago. And, hey, it's not Tom's job to make sure our marketing is properly conveyed. But I make a big deal out of it precisely because BigFix is exactly the kind of thing he's calling for to help reduce the number of agents running around.
Agent-based architectures are a severe security risk.
So now Tom makes one of these leaps I object to. He's drawing mass conclusions based on (significant) experience actually looking at a bunch of agent systems. But you can't make a factual statement about all N software products by looking at N-M of them, if M is greater than zero. You can only state generalities.
He gives specific classes of examples. While I still owe the world a BigFix architecture document (I know you're all anxiously waiting), let me give some short previews as responses.
Listening Network Services on Agents
You can disable the BigFix notification protocol, and go full polling if you want. We are client-pull. Even with the listener in the default listening state, the protocol is simple. It's just a 12-16 byte (payload) UDP packet. It suggests to the agent that there is something upstream that it should check for.
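That pattern is easy to sketch. To be clear, this is only an illustration of the "tiny UDP ping means go poll" idea, not the real wire format, and the payload bytes are made up:

```python
import socket

# "Agent" side: a UDP listener whose only job is to wake the poller up.
agent = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
agent.bind(("127.0.0.1", 0))      # ephemeral port, for this demo only
agent.settimeout(5)
port = agent.getsockname()[1]

# "Server" side: the notification is a tiny opaque payload. It carries no
# commands; it only suggests "something upstream changed, come check."
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.sendto(b"\x01" * 16, ("127.0.0.1", port))

data, _ = agent.recvfrom(64)
should_poll = 12 <= len(data) <= 16   # any well-formed ping means "go poll"
agent.close()
server.close()
print(should_poll)   # True
```

The design point: because the packet can't carry instructions, an attacker who spoofs it can at worst make the agent poll its (verified) upstream source a little early.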
Listening Network Services on Management Servers
OK, got me there. We've discovered that either the agent or the servers need to have something listening on the network, as a general design principle. Are you suggesting that people go without management at all again? I'm pretty sure any alternative to an agent will want a listener, too.
Client of Agent Service on Management Server
That's our default, but it's not necessary if you don't want it. We use the agent on the server to do software upgrade on the server. But you can do it manually if you choose. I, for one, expect a software distribution system to be self-upgrading. But here, you're implying that the server is security-critical. I.e. crack the server, and you have the agents. BigFix doesn't work that way. All the security is in the signing keys.
Confidentiality and Integrity of Agent/Server Protocol
Ah, this is where we're especially awesome. Everything the agent pulls down has been signed by the private key of an administrator, and is verified by the agent before it will save it or look at it. We use OpenSSL and zlib libs, and of course track vulnerabilities in those, and re-release when they re-release.
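The "verify before you even look at it" gate is the important part. A standard-library-only sketch, with one loud caveat: the real design uses public-key signatures (the agent holds only the public half), while this toy uses HMAC as a symmetric stand-in so it stays self-contained; the key and content strings are made up:

```python
import hashlib
import hmac

# Stand-in secret. In the real design this would be an administrator's
# *private* key, and the agent would verify with the public key; HMAC is
# only a symmetric simplification for this sketch.
SIGNING_KEY = b"stand-in secret"

def sign(content: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()

def agent_accepts(content: bytes, signature: bytes) -> bool:
    # The gate: the agent verifies before it saves or parses anything.
    return hmac.compare_digest(sign(content), signature)

action = b"patch: upgrade zlib"
sig = sign(action)
print(agent_accepts(action, sig))                  # True
print(agent_accepts(b"evil: run shellcode", sig))  # False
```

Anything unsigned or tampered with gets rejected before any parser touches it, which is why the parsing attack surface collapses down to the signature-checking code itself.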
Web Application on Management Server
I'm suspicious that we're talking about different animals here, but we have an optional Web Reports component that can be run on the server or on its own server. And it can do SSL if you like. It will be there if you do an install taking all the defaults. And again, getting the server for BigFix doesn't get you the agents.
Listening Network Services for Management Clients on Management Server
Isn't this one a dupe?
Middleware Frameworks and RPC
I think, from having listened to your Black Hat talk, this is referring to complicated protocols between the agents and server. We use a subset of HTTP, and move files around.
Client of Management Server Service on Agent
What? Can't parse.
Ah, we could potentially suffer from this class of problem if we have bugs there. You got one.
Display Logic for Agent-Sourced Data on Management Client
Isn't this a dupe? If not, which client and server are we talking about, if not the agent and server? The management console? Ours speaks the minimal HTTP and TDS, the MS SQL Server protocol. (Well, the Sybase protocol, but now I'm just being pedantic because I used to work at Sybase.)
Confidentiality and Integrity of Client/Server Protocol
Yep, got one of those. You can't compromise our agents if you get the database.
Agents tend to be installed en-masse. Attacks that offer uniform compromise of all installed agents provide attackers with thousands of hijacked machines.
Yep. True of any central management system, if you find a flaw that allows control of the endpoints. How is this particularly the fault of agents?
Even in the absence of an exploit that compromises agent software directly, it is impractical to ensure the security of thousands of endpoints. But every machine running an agent must be secured if the management components are to be shielded from attacks.
Ah, you assume that only agents can attack the server? Not so for BigFix. Unless the customer has done some extra firewalling, anyone with IP connectivity can talk to the server. Attack away.
In a majority of surveyed agent-based systems, compromise of a single management server allows code execution on every agent, exposing the enterprise to a single point of failure.
For our system, let's call this "stolen keys". Yes, if you steal some keys (and the passphrase), you can act as the owner of those keys. That's why we have key revocation. We've got a whole PKI built-in, it works quite well. Something can always be stolen, spoofed, or impersonated. We went with what we felt has the best security, and has attestation to boot. Our financials customers love the audit trail.
This class of problem is true of any central management system. Steal the important authentication thingy, and you control the endpoints. Why is this particularly an agent problem? Do you prefer some sort of scanner thing that gives the admin creds to every IP it hits? Are you proposing no central management again?
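The revocation part of that answer is simple to sketch: before any signature from a key is honored, the verifier consults a revocation list. This is just an illustration of the lookup; the function names and key material are made up:

```python
import hashlib

revoked = set()   # fingerprints of revoked operator keys

def fingerprint(public_key: bytes) -> str:
    # Identify keys by a hash so the list stays small and fixed-size.
    return hashlib.sha256(public_key).hexdigest()

def revoke(public_key: bytes) -> None:
    revoked.add(fingerprint(public_key))

def key_is_trusted(public_key: bytes) -> bool:
    # Checked before any signature from this key is honored.
    return fingerprint(public_key) not in revoked

stolen = b"operator-key-material"
print(key_is_trusted(stolen))   # True
revoke(stolen)                  # operator reports the laptop stolen
print(key_is_trusted(stolen))   # False
```

The point being that stolen keys are a contained failure: revoke them, and the rest of the PKI keeps working.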
Agent implementations are often substantially homogenous, even across operating systems, enabling uniformly effective attacks against desktops, Windows servers, and Unix servers.
We prefer to think of it as uniform management, but guilty as charged. So yes, if you steal some keys, we have a cross-platform language you can use to command the agents with. Admittedly, for other central management systems, you would have to craft your commands in a number of different shells.
Workstations of management operators are high-value IT targets, and compromised agents can inject poisonous data to exploit a myriad of clientside and XSS-style attacks to hijack their machines.
This is a potentially viable technique if we have bugs in that area. But like I said, you needn't be an agent to attack there, go for it. One of the points from your Black Hat talk was that apps that weren't Internet-facing didn't have to survive those attacks, and were weaker for it (my wording.) So far, our customers with Internet-facing servers and relays where attackers could try feeding bogus data haven't fallen over. Maybe we're just enjoying some obscurity.
[Section on the kinds of things Matasano has found elsewhere.]
No doubt about it, Matasano is good at what they do. I'm looking to have more outside auditing Real Soon Now. I have no illusions that we'll have a 100% flawless clean bill of health when we put it in front of someone of Tom's caliber. What I AM confident about is that we will do far better than the others Tom talks about (but can't name, because they don't have their patches out yet.)
First off, my programmers can beat up your programmers. Second, our architecture is designed to eliminate huge swaths of problems. That thing where we have everything that hits the agents be signed? Right. It means you can't throw attacks at the agents unless they are signed. You have to find flaws in OpenSSL or zlib to try attacks before that stage. While not perfect, we use those libraries for a reason. Third, when something is found, we get our patches out in a timely manner. The last big thing we had? 3 days. And since our system does software distribution and patch management, you could be fully patched about 10 minutes after that, or as soon as your change management allows.
[Tom's mitigating factors]
If I did point-by-point here, a lot of it would be redundant. Hopefully, some summarizing will suffice.
- I fully disagree with removing the most important assets from management.
- You don't need to segregate classes of managed machines if it's not important to "be an agent"
- My protocols are as simple as can be. They just move files around. The files are all signed though, that's going to cause the attackers some trouble.
- Suggesting "use SSL" alone is a boondoggle. They key point, easy to miss, is Tom is suggesting that agents sign reports. While that has value, and BigFix will likely offer that as an option in the future, it shouldn't be key to the system surviving.
- Use third-party auditing? Actually, I agree with you there, and I will be doing more. But is that recommendation a huge surprise, given Tom's job? ;)
Agent-based architectures are incredibly convenient and can be a significant cost-saver for IT operations teams.
You forgot: And if you don't have one, or even some other central management system with the exact same class of problems, then you are in FAR, FAR worse shape than having a few agent holes to deal with.
In all circumstances, enterprises should seek to minimize the number of agent installations within their enterprise.
Indeed. And BigFix sales people are standing by.
In all circumstances, enterprises should seek to minimize the number of different agent-based vendors their enterprises must support.
Still right with you there.
Agent-based software should be treated as a high-risk target for attacks. Agent software warrants intensive security testing and analysis and rigorous access control.
Treat us that way if you like, we won't hold it against you. And then we will replace all the vendors who didn't hold up to scrutiny. After all, Tom's talking about our competition.