Message-Id: <20060907162800.e92b5c7c.akpm@osdl.org>
Date: Thu, 7 Sep 2006 16:28:00 -0700
From: Andrew Morton <akpm@...l.org>
To: Greg KH <greg@...ah.com>
Cc: Alexey Dobriyan <adobriyan@...il.com>,
linux-kernel@...r.kernel.org, Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@....de>
Subject: Re: Naughty ramdrives
On Thu, 7 Sep 2006 16:01:30 -0700
Greg KH <greg@...ah.com> wrote:
> On Thu, Sep 07, 2006 at 03:20:37PM -0700, Andrew Morton wrote:
> > On Fri, 8 Sep 2006 02:08:53 +0400
> > Alexey Dobriyan <adobriyan@...il.com> wrote:
> >
> > > > So I assume udev is still madly crunching on its message backlog while
> > > > this is happening?
> > > >
> > > > If so, ug.
> > >
> > > OK. I'll let it stabilize, sorry.
> >
> > You shouldn't have to.
>
> You shouldn't have to what? You purposefully add and remove a block
> driver as fast as is possible, creating a ton of new events and you
> expect userspace processing of those events to be able to keep up in
> real-time with it?
Absolutely. sys_init_module() should not return until the device nodes
have stabilised. There is no other sane interface the kernel can offer.
ho hum.
Perhaps there's some hacklet we can put into modprobe, to allow it to peek
at the udev sequence numbering and wait until all the events which were
associated with this modprobe have been serviced? Or maybe a standalone
tool?
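Something along these lines, perhaps, as a first stab. A minimal sketch,
not a real tool: /sys/kernel/uevent_seqnum does exist, but the udev-side
path below is only a guess at wherever the running udevd records the last
event it has finished with, and that depends on the udev version.

/* cut-down wait-for-udev: poll until udevd's seqnum catches up with the kernel's */
#include <stdio.h>
#include <unistd.h>

#define KERNEL_SEQNUM_FILE  "/sys/kernel/uevent_seqnum"
#define UDEV_SEQNUM_FILE    "/dev/.udev/uevent_seqnum"  /* assumed location */

static unsigned long long read_seqnum(const char *path)
{
        FILE *f = fopen(path, "r");
        unsigned long long n = 0;

        if (f) {
                if (fscanf(f, "%llu", &n) != 1)
                        n = 0;
                fclose(f);
        }
        return n;
}

int main(void)
{
        /* every event the kernel has emitted so far has seqnum <= this */
        unsigned long long kernel_seq = read_seqnum(KERNEL_SEQNUM_FILE);

        /* spin until udevd has serviced them all; a real tool wants a timeout */
        while (read_seqnum(UDEV_SEQNUM_FILE) < kernel_seq)
                usleep(100 * 1000);

        return 0;
}
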
Say, just a loopback message: send it into the kernel, knowing that it will
be appended to the queue, then wait until a reply comes back, so you know
that all preceding events in the queue have been serviced?
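The userspace half of that might look something like the sketch below.
Only the ordering half, mind: writing "add" into some device's sysfs
uevent file (assuming a kernel with the writable uevent attribute) queues
a marker event behind everything already pending, and seeing it arrive on
the netlink uevent socket proves the kernel has emitted all the earlier
ones. Knowing that udev has actually _serviced_ them would still need
udevd itself to send the reply, which is the missing piece. The
/sys/class/mem/null path is just an example device.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define MARKER_UEVENT_FILE  "/sys/class/mem/null/uevent"  /* example device */
#define MARKER_DEVPATH      "/class/mem/null"

int main(void)
{
        struct sockaddr_nl snl;
        char buf[4096];
        ssize_t len;
        int sock, fd;

        /* listen to the kernel uevent multicast group, like udevd does */
        sock = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
        memset(&snl, 0, sizeof(snl));
        snl.nl_family = AF_NETLINK;
        snl.nl_groups = 1;
        if (sock < 0 || bind(sock, (struct sockaddr *)&snl, sizeof(snl)) < 0) {
                perror("netlink");
                return 1;
        }

        /* queue our marker event behind whatever is already pending */
        fd = open(MARKER_UEVENT_FILE, O_WRONLY);
        if (fd < 0 || write(fd, "add", 3) < 0) {
                perror(MARKER_UEVENT_FILE);
                return 1;
        }
        close(fd);

        /* drain events until the marker shows up */
        while ((len = recv(sock, buf, sizeof(buf) - 1, 0)) > 0) {
                buf[len] = '\0';
                if (strstr(buf, MARKER_DEVPATH))
                        break;
        }

        close(sock);
        return 0;
}
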
Or whatever. Right now, there's no sane way to do

  modprobe rd
  mkfs /dev/ram0

so instead we could do

  modprobe rd
  /sbin/wait-for-udev-to-catch-up
  mkfs /dev/ram0

Or something.
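Presumably /sbin/wait-for-udev-to-catch-up would be little more than the
seqnum-polling loop sketched above, plus a timeout and a loud complaint
for the case where udev never does catch up.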