Message-ID: <20080321134750.GB4128@osiris.boeblingen.de.ibm.com>
Date:	Fri, 21 Mar 2008 14:47:50 +0100
From:	Heiko Carstens <heiko.carstens@...ibm.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Michael Buesch <mb@...sch.de>,
	Alan Stern <stern@...land.harvard.edu>,
	Henrique de Moraes Holschuh <hmh@....eng.br>,
	David Brownell <david-b@...bell.net>,
	Richard Purdie <rpurdie@...ys.net>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	Geert Uytterhoeven <geert@...ux-m68k.org>,
	netdev@...r.kernel.org,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	linux-usb@...r.kernel.org, linux-wireless@...r.kernel.org,
	video4linux-list@...hat.com,
	Stefan Richter <stefanr@...6.in-berlin.de>,
	lm-sensors@...sensors.org
Subject: Re: use of preempt_count instead of in_atomic() at leds-gpio.c

On Thu, Mar 20, 2008 at 07:27:19PM -0700, Andrew Morton wrote:
> On Fri, 21 Mar 2008 02:36:51 +0100 Michael Buesch <mb@...sch.de> wrote:
> > On Friday 21 March 2008 02:31:44 Alan Stern wrote:
> > > On Thu, 20 Mar 2008, Andrew Morton wrote:
> > > > On Thu, 20 Mar 2008 21:36:04 -0300 Henrique de Moraes Holschuh <hmh@....eng.br> wrote:
> > > > 
> > > > > Well, so far so good for LEDs, but what about the other users of in_atomic
> > > > > that apparently should not be doing it either?
> > > > 
> > > > Ho hum.  Lots of cc's added.
> > > 
> > > ...
> > > 
> > > > The usual pattern for most of the above is
> > > > 
> > > > 	if (!in_atomic())
> > > > 		do_something_which_might_sleep();
> > > > 
> > > > The problem is, in_atomic() returns false inside a spinlock on
> > > > non-preemptible kernels.  So if anyone calls those functions inside a
> > > > spinlock they will incorrectly schedule and another task can then come
> > > > in and try to take the already-held lock.
> > > > 
> > > > Now, it happens that in_atomic() returns true on non-preemptible
> > > > kernels when running in interrupt or softirq context.  But if the above
> > > > code really is using in_atomic() to detect am-i-called-from-interrupt
> > > > and NOT am-i-called-from-inside-spinlock, it should be using in_irq(),
> > > > in_softirq() or in_interrupt().
> > > 
> > > Presumably most of these places are actually trying to detect 
> > > am-i-allowed-to-sleep.  Isn't that what in_atomic() is supposed to do?  
> > 
> > No, I think there is no such check in the kernel. Most likely for performance
> > reasons, as it would require a global flag that is set on each spinlock.
> 
> Yup.  Non-preemptible kernels avoid the inc/dec of
> current_thread_info()->preempt_count on spin_lock/spin_unlock.
> 
> > You simply must always _know_, if you are allowed to sleep or not. This is
> > done by defining an API. The call-context is part of any kernel API.
> 
> Yup.  99.99% of kernel code manages to do this...

This is difficult for console drivers. They get called, are supposed to print
something, and don't have the slightest clue which context they are running in
or whether they are allowed to schedule.
This is the problem with e.g. s390's sclp driver: if no write buffers are
available anymore, it tries to allocate memory when scheduling is allowed, or
otherwise has to wait until a request finally finishes and memory becomes
available again.
And now we always have to busy wait when we are out of buffers, since we
cannot tell which context we are in?
