Message-ID: <Pine.LNX.4.64.0811141412290.12992@hs20-bc2-1.build.redhat.com>
Date: Fri, 14 Nov 2008 14:34:11 -0500 (EST)
From: Mikulas Patocka <mpatocka@...hat.com>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
cc: linux-kernel@...r.kernel.org, mingo@...e.hu, rml@...h9.net,
Alasdair G Kergon <agk@...hat.com>,
Milan Broz <mbroz@...hat.com>
Subject: Re: Active waiting with yield()
On Fri, 14 Nov 2008, Alan Cox wrote:
> > * driver unload --- check the count of outstanding requests and call
> > yield() repeatedly until it goes to zero, then unload.
>
> Use a wakeup when the request count hits zero
>
> > * reduced size of data structures (and reduced cache footprint for the hot
> > path that actually processes requests)
>
> The CPU will predict the non-wakeup path if that is normal. You can even
> make the wakeup use something like
>
> if (exiting && count == 0)
>
> to get the prediction right
>
> > The downside of yield is slower unloading of the driver by a few tens
> > of milliseconds, but the user doesn't really care about fractions of a
> > second when unloading drivers.
>
> And more power usage, plus extremely rude behaviour when virtualising.
How can these unlikely cases be rude?
If I have a race condition that gets triggered for just one user in the
world when repeatedly loading & unloading a driver for an hour, and I use
yield() to solve it, what's wrong with that? A wait queue increases the
cache footprint for every user. (Even if I use a preallocated hashed wait
queue, it still eats a cache line just to access it and find out that it's
empty.)
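
To make the comparison concrete, here is a purely illustrative sketch (all
identifiers such as nr_requests, exiting and unload_wq are invented, this
is not the real dm code, and memory-ordering subtleties around the exiting
flag are glossed over):

/*
 * Sketch of the two unload strategies under discussion.  All names are
 * invented for illustration only.
 */
#include <linux/atomic.h>
#include <linux/compiler.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/wait.h>

static atomic_t nr_requests;		/* outstanding requests */
static bool exiting;			/* set when the driver is unloading */
static DECLARE_WAIT_QUEUE_HEAD(unload_wq);

/*
 * Variant 1: busy waiting with yield() -- nothing extra is touched on the
 * hot path, but the unloading task keeps getting rescheduled until the
 * count drops to zero.
 */
static void wait_for_requests_yield(void)
{
	while (atomic_read(&nr_requests))
		yield();
}

/*
 * Variant 2: sleep on one driver-wide wait queue; the completion path
 * pays for a (well-predicted) extra test plus the cache line holding the
 * wait queue head.
 */
static void wait_for_requests_sleep(void)
{
	exiting = true;
	wait_event(unload_wq, atomic_read(&nr_requests) == 0);
}

/* Hot path: called once per completed request. */
static void request_done(void)
{
	if (atomic_dec_and_test(&nr_requests) && unlikely(exiting))
		wake_up(&unload_wq);
}

The unlikely(exiting) test in request_done() is where the branch-prediction
argument applies: on the normal path it falls straight through without a
wakeup, and the single driver-wide unload_wq is the "one wait queue per
driver" idea mentioned below.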
Mikulas
> There are cases you have to use cpu_relax/spins or yield simply because
> the hardware doesn't feel like providing interrupts when you need them,
> but for the general case it's best to use proper sleeping.
>
> Remember also you can use a single wait queue for an entire driver for
> obscure happenings like unloads.
>
> Alan
>