Message-ID: <20120311143615.GB13336@n2100.arm.linux.org.uk>
Date: Sun, 11 Mar 2012 14:36:15 +0000
From: Russell King - ARM Linux <linux@....linux.org.uk>
To: santosh prasad nayak <santoshprasadnayak@...il.com>
Cc: FlorianSchandinat@....de, linux-fbdev@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-janitors@...r.kernel.org
Subject: Re: [PATCH] Video : Amba: Use in_interrupt() in clcdfb_sleep().
On Sun, Mar 11, 2012 at 07:47:27PM +0530, santosh prasad nayak wrote:
> in_atomic() should not be used in driver code.
>
> The following article inspired me to make this change:
> http://lwn.net/Articles/274695/
>
> "in_atomic() is for core kernel use only. Because in special
> circumstances (ie: kmap_atomic()) we run inc_preempt_count() even on
> non-preemptible kernels to tell the per-arch fault handler that it was
> invoked by copy_*_user() inside kmap_atomic(), and it must fail.
> In other words, in_atomic() works in a specific low-level situation,
> but it was never meant to be used in a wider context. Its placement in
> hardirq.h next to macros which can be used elsewhere was, thus, almost
> certainly a mistake. As Alan Stern pointed out, the fact that Linux
> Device Drivers recommends the use of in_atomic() will not have helped
> the situation. Your editor recommends that the authors of that book be
> immediately sacked. "
>
> In the present case, we only need to know whether we are in IRQ
> context or in process context, so in_interrupt() is sufficient
> (see the sketch below).
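>
> Roughly, the change I have in mind is of this shape (only a sketch,
> assuming clcdfb_sleep() keeps its current structure and that the
> non-atomic path uses msleep()):
>
> 	#include <linux/delay.h>	/* mdelay(), msleep() */
> 	#include <linux/hardirq.h>	/* in_interrupt() */
>
> 	static inline void clcdfb_sleep(unsigned int ms)
> 	{
> 		if (in_interrupt()) {
> 			/* hard/soft IRQ context: cannot sleep, busy-wait */
> 			mdelay(ms);
> 		} else {
> 			/* process context: let other tasks run meanwhile */
> 			msleep(ms);
> 		}
> 	}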
>
> Greg makes the same point in the following mail:
> http://www.spinics.net/lists/newbies/msg43402.html
In which case, we'll just have to do mdelay() and forget about allowing
anything else to run for the 20ms that we need to sleep. Sucky but
that's the way things are.
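Concretely, that would reduce the helper to something like this (a
sketch only, assuming busy-waiting for the full delay is acceptable):

	#include <linux/delay.h>	/* mdelay() */

	static inline void clcdfb_sleep(unsigned int ms)
	{
		/*
		 * Always busy-wait: safe in any context, but nothing else
		 * runs on this CPU for the whole delay (20ms here).
		 */
		mdelay(ms);
	}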