Message-ID: <20080321165405.GC5766@kroah.com>
Date: Fri, 21 Mar 2008 09:54:05 -0700
From: Greg KH <greg@...ah.com>
To: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michael Buesch <mb@...sch.de>,
Alan Stern <stern@...land.harvard.edu>,
Henrique de Moraes Holschuh <hmh@....eng.br>,
David Brownell <david-b@...bell.net>,
Richard Purdie <rpurdie@...ys.net>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
netdev@...r.kernel.org,
Martin Schwidefsky <schwidefsky@...ibm.com>,
linux-usb@...r.kernel.org, linux-wireless@...r.kernel.org,
video4linux-list@...hat.com,
Stefan Richter <stefanr@...6.in-berlin.de>,
lm-sensors@...sensors.org
Subject: Re: use of preempt_count instead of in_atomic() at leds-gpio.c
On Fri, Mar 21, 2008 at 02:47:50PM +0100, Heiko Carstens wrote:
> On Thu, Mar 20, 2008 at 07:27:19PM -0700, Andrew Morton wrote:
> > On Fri, 21 Mar 2008 02:36:51 +0100 Michael Buesch <mb@...sch.de> wrote:
> > > On Friday 21 March 2008 02:31:44 Alan Stern wrote:
> > > > On Thu, 20 Mar 2008, Andrew Morton wrote:
> > > > > On Thu, 20 Mar 2008 21:36:04 -0300 Henrique de Moraes Holschuh <hmh@....eng.br> wrote:
> > > > >
> > > > > > Well, so far so good for LEDs, but what about the other users of in_atomic
> > > > > > that apparently should not be doing it either?
> > > > >
> > > > > Ho hum. Lots of cc's added.
> > > >
> > > > ...
> > > >
> > > > > The usual pattern for most of the above is
> > > > >
> > > > > if (!in_atomic())
> > > > > do_something_which_might_sleep();
> > > > >
> > > > > problem is, in_atomic() returns false inside a spinlock on non-preemptible
> > > > > kernels. So if anyone calls those functions inside a spinlock they will
> > > > > incorrectly schedule and another task can then come in and try to take the
> > > > > already-held lock.
> > > > >
> > > > > Now, it happens that in_atomic() returns true on non-preemptible kernels
> > > > > when running in interrupt or softirq context. But if the above code really
> > > > > is using in_atomic() to detect am-i-called-from-interrupt and NOT
> > > > > am-i-called-from-inside-spinlock, it should be using in_irq(),
> > > > > in_softirq() or in_interrupt().
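A minimal sketch of the two patterns being contrasted here, in kernel-style C; my_might_sleep_path() and my_atomic_path() are placeholders invented for illustration, not functions from any real driver:

#include <linux/hardirq.h>

/*
 * Broken: spin_lock() does not raise the preempt count on
 * non-preemptible kernels, so in_atomic() cannot see a held
 * spinlock and this may end up sleeping with the lock held.
 */
static void broken_check(void)
{
	if (!in_atomic())
		my_might_sleep_path();	/* placeholder */
	else
		my_atomic_path();	/* placeholder */
}

/*
 * If the real question is "was I called from interrupt context?",
 * then in_irq()/in_softirq()/in_interrupt() answer exactly that and
 * are maintained on every kernel configuration.
 */
static void interrupt_aware_check(void)
{
	if (in_interrupt())		/* hardirq or softirq */
		my_atomic_path();
	else
		my_might_sleep_path();	/* still unsafe if a spinlock is held */
}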
> > > >
> > > > Presumably most of these places are actually trying to detect
> > > > am-i-allowed-to-sleep. Isn't that what in_atomic() is supposed to do?
> > >
> > > No, I think there is no such check in the kernel. Most likely for performance
> > > reasons, as it would require a global flag that is set on each spinlock.
> >
> > Yup. non-preemptible kernels avoid the inc/dec of
> > current_thread_info->preempt_count on spin_lock/spin_unlock
> >
> > > You simply must always _know_ whether you are allowed to sleep or not. This is
> > > done by defining an API. The call-context is part of any kernel API.
> >
> > Yup. 99.99% of kernel code manages to do this...
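As a sketch of that convention (my_send() and struct my_msg are invented names, not from any real driver), a callee can take a gfp_t from its caller, the same way kmalloc() does, instead of guessing its own context:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

struct my_msg {			/* invented example structure */
	char data[64];
};

static int my_send(const char *text, gfp_t gfp)
{
	struct my_msg *msg = kmalloc(sizeof(*msg), gfp);

	if (!msg)
		return -ENOMEM;
	strlcpy(msg->data, text, sizeof(msg->data));
	/* ... hand msg off for transmission ... */
	kfree(msg);		/* placeholder for the real hand-off */
	return 0;
}

/*
 * Process context, sleeping allowed:   my_send("hello", GFP_KERNEL);
 * In interrupt or under a spinlock:    my_send("hello", GFP_ATOMIC);
 */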
>
> This is difficult for console drivers. They get called and are supposed to
> print something, but don't have the slightest clue which context they are
> running in or whether they are allowed to schedule.
> This is the problem with e.g. s390's sclp driver. If no write buffers are
> available anymore it tries to allocate memory if scheduling is allowed, or
> otherwise has to wait until a request finally finishes and memory becomes
> available again.
> And now we always have to busy-wait if we are out of buffers, since we
> cannot tell which context we are in?
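A rough sketch of that dilemma in a console write callback; the my_* helpers are invented and do not reflect the real sclp code:

#include <linux/console.h>

static void my_console_write(struct console *con, const char *s, unsigned count)
{
	struct my_buffer *buf = my_get_free_buffer();	/* invented helper */

	if (!buf) {
		/*
		 * Out of buffers.  If we knew we were allowed to sleep we
		 * could allocate with GFP_KERNEL or wait for an outstanding
		 * request to complete and free its buffer.  A console
		 * callback cannot tell, so the only safe fallback is to
		 * busy-wait until a buffer comes back.
		 */
		buf = my_busy_wait_for_buffer();	/* invented helper */
	}
	my_copy_and_queue(buf, s, count);		/* invented helper */
}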
This is the reason why the drivers/usb/misc/sisusbvga driver is trying
to test for in_atomic():
	/* We can't handle console calls in non-schedulable
	 * context due to our locks and the USB transport.
	 * So we simply ignore them. This should only affect
	 * some calls to printk.
	 */
	if (in_atomic())
		return NULL;
So how should this be "fixed" if in_atomic() is not a valid test?
thanks,
greg k-h