Message-ID: <9b2b86520905010319g4b1e915ejc5c17baa519f3d35@mail.gmail.com>
Date:	Fri, 1 May 2009 11:19:49 +0100
From:	Alan Jenkins <sourcejedi.lkml@...glemail.com>
To:	Kay Sievers <kay.sievers@...y.org>
Cc:	linux-kernel <linux-kernel@...r.kernel.org>,
	Greg KH <greg@...ah.com>, Jan Blunck <jblunck@...e.de>
Subject: Re: [PATCH] driver-core: devtmpfs - driver core maintained /dev tmpfs

On 4/30/09, Kay Sievers <kay.sievers@...y.org> wrote:
> From: Kay Sievers <kay.sievers@...y.org>
> Subject: driver-core: devtmpfs - driver core maintained /dev tmpfs
>
> Devtmpfs lets the kernel create a tmpfs very early at kernel
> initialization, before any driver core device is registered. Every
> device with a major/minor will have a device node created in this
> tmpfs instance. After the rootfs is mounted by the kernel, the
> populated tmpfs is mounted at /dev. In initramfs, it can be moved
> to the manually mounted root filesystem before /sbin/init is
> executed.
>
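
If I follow, the initramfs hand-over would look something like this
(untested sketch; I'm assuming the filesystem type is registered as
"devtmpfs", and the device/path names below are placeholders):

  # mount the kernel-populated instance over the initramfs /dev
  mount -t devtmpfs none /dev

  # mount the real root, move the populated /dev onto it, run init
  mount /dev/root /real-root
  mount --move /dev /real-root/dev
  exec switch_root /real-root /sbin/init
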
> The tmpfs instance can be changed by userspace at any time,
> and in any way needed - just like today's udev-mounted tmpfs. Unmodified
> udev versions will run just fine on top of it, and will recognize an
> already existing kernel-created device node and use it.
> The default node permissions are root:root 0600. The driver core will
> remove the device node when the device goes away only if none of these
> values have been changed by userspace. If the device node was altered
> by udev, by applying the appropriate permissions and ownership, it will
> need to be removed by udev - just as it usually works today.
>
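
So a freshly created node would look like this before and after udev
touches it (hypothetical device and output):

  $ ls -l /dev/sda
  brw------- 1 root root 8, 0 May  1 10:00 /dev/sda   # kernel default 0600

  # after udev applies a typical rule (e.g. GROUP="disk", MODE="0660"):
  $ ls -l /dev/sda
  brw-rw---- 1 root disk 8, 0 May  1 10:00 /dev/sda   # udev now removes it
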
> This makes init=/bin/sh work without any further userspace support.
> /dev will be fully populated and dynamic, and always reflect the current
> device state of the kernel. Especially in the face of the already
> implemented dynamic device numbers for block devices, this can be very
> helpful in a rescue situation, where static device nodes no longer
> work.
> Custom, embedded-like systems should be able to use this as a dynamic
> /dev directory without any need for additional userspace tools.
>
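
That rescue case would go something like this (sketch; the device names
are made up):

  # booted with: linux root=/dev/sda1 init=/bin/sh; no udev running at all
  ls -l /dev/sd*          # nodes already exist with this boot's majors/minors
  mount /dev/sdb1 /mnt    # works even if sdb's numbers changed across boots
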
> With the kernel-populated /dev, existing initramfs or kernel-mount
> bootup logic can be made more efficient, no longer requiring the full
> coldplug run that is currently needed to bootstrap the initial /dev
> directory content before the rest of the system is brought up.
> There will be no missed events to replay, because /dev is
> available before the first kernel device is registered with the core.
> A coldplug run can take, depending on the speed of the system and the
> number of devices which need to be handled, from one to several seconds.
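
(For comparison, the coldplug run being avoided is essentially this
replay loop over sysfs, which is roughly what udevtrigger does:)

  # ask the kernel to re-send "add" uevents for every existing device,
  # so a freshly started udev can create the nodes it never saw
  for uevent in /sys/class/*/*/uevent /sys/block/*/uevent; do
      echo add > "$uevent" 2>/dev/null
  done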

Aren't you overreaching in your claims here?  I'm sure you can't avoid
at least one coldplug run on a contemporary general-purpose system,
because you lose so much of the functionality provided by udev.  It
would be nice if you could address that in the changelog.  And modern
initramfs images require udev RUN rules to read UUIDs and set up LVM.
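
(I mean per-device helpers like these, which nothing runs if udev isn't
started; paths and exact invocations vary by distro:)

  # what those rules end up executing for each block device:
  /lib/udev/vol_id --export /dev/sda1   # prints ID_FS_UUID=, ID_FS_TYPE=, ...
  /sbin/lvm vgchange -a y               # activate volume groups as PVs appear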

I'm loving this for embedded, init=/bin/sh, and rescue floppies :-).
But I can't understand how you plan to use this as an optimisation.

And - I'm sure you must have considered this in a moment of madness -
do you know why we couldn't just start _udev_ "before the first kernel
device is registered with the core"?

Regards
Alan
