Date:	Mon, 27 Feb 2012 01:01:29 +0100
From:	"Henrik Rydberg" <>
Cc:	Bobby Powers <>, Ted Ts'o <>,
	Greg KH <>,
	Guenter Roeck <>,
	Jidong Xiao <>,
	Kernel development list <>
Subject: Re: Can we move device drivers into user-space?

Hi David,

> the point that you seem to be missing is that the interfaces between
> the different areas of the kernel are not stable, they change over
> time.

The argument was based on the idea that those interfaces would
stabilize over time. However, I realize this may not be true, a point
also touched upon in a later reply. The heavy-tailed size distribution
of changes in open-source projects seems to put some hard numbers
behind that claim [1].

> When both sides of the interface are in the kernel, this is
> not a problem, both sides get changed, but if one side was out of
> the kernel, then you either can't make the change, or have a flag
> day change where both sides need to change in lock-step (and
> downgrading is hard as both sides need to change again)

Assuming the interfaces do change, this follows naturally, of course.

> This is completely ignoring the performance and security aspects of
> userspace components vs kernel components.


> Ted is explaining the performance aspects well, but let's look at
> the security aspects as well.
> It's not just a case of "if something in userspace crashes, it
> doesn't crash the kernel", it's also a case that "if you have a
> userspace component, then the kernel must sanity check the userspace
> interface to defend against rogue userspace". Doing these checks is
> not cheap (adding to performance overhead), and may not even be
> possible (how do you know if the command being sent to the SCSI bus
> is safe or not?)

No doubt, an open-ended system has its own set of problems. At any
given system size, the question is how these balance against those of
a closed system. The assumption I made was that as the system grows,
the balance would shift in favor of an open-ended system. This may not
be the case at all, as you are saying. It would be nice to see this
quantified, if possible.
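As an illustration of the kind of defensive checking being described, here
is a minimal sketch in userspace C. The command format, field names, and
limits are all hypothetical (a real driver would first pull the buffer in
with copy_from_user() and use interface-specific limits); the point is only
that every field crossing the boundary must be range-checked before the
kernel acts on it:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical command header a driver might receive from userspace. */
struct cmd_hdr {
	uint32_t opcode;	/* which operation is requested */
	uint32_t len;		/* payload length following the header */
};

enum { CMD_READ = 0, CMD_WRITE = 1, CMD_MAX = 2 };
#define MAX_PAYLOAD 4096u

/* Returns 0 if the command is well-formed, -1 otherwise.  Nothing from
 * userspace can be trusted: the buffer may be too short, the opcode
 * unknown, or the claimed payload length inconsistent with the buffer. */
static int validate_cmd(const void *buf, size_t buflen)
{
	const struct cmd_hdr *hdr = buf;

	if (buflen < sizeof(*hdr))
		return -1;			/* short buffer */
	if (hdr->opcode >= CMD_MAX)
		return -1;			/* unknown opcode */
	if (hdr->len > MAX_PAYLOAD)
		return -1;			/* oversized payload */
	if (buflen - sizeof(*hdr) < hdr->len)
		return -1;			/* payload truncated */
	return 0;
}
```

Even these cheap checks add branches on every call, and for opaque
payloads (say, a raw SCSI CDB) no amount of header validation tells the
kernel whether the command itself is safe, which is exactly the problem
raised above.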

Thanks for taking the time to respond.

