Message-ID: <20160302175751.GJ17997@ZenIV.linux.org.uk>
Date: Wed, 2 Mar 2016 17:57:51 +0000
From: Al Viro <viro@...IV.linux.org.uk>
To: "majun (F)" <majun258@...wei.com>
Cc: ebiederm@...ssion.com, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, akpm@...ux-foundation.org,
dhowells@...hat.com, Waiman.Long@...com, dingtianhong@...wei.com,
guohanjun@...wei.com, fanjinke1@...wei.com
Subject: Re: [PATCH] Change the spin_lock/unlock_irq interface in
proc_alloc_inum() function
On Wed, Mar 02, 2016 at 05:29:54PM +0000, Al Viro wrote:
> And no, it doesn't save the irq state anywhere - both disable and enable
> are unconditional. schedule() always returns with irqs enabled.
PS: look at it this way: how would you expect a context switch to behave?
Suppose we blocked because we needed to write some dirty pages to disk
to be able to free them; we *can't* keep irqs disabled through all of that,
right? After all, the disk controller needs to be able to tell us it's done
writing; that's hard to do with interrupts disabled, not to mention that
keeping them disabled for the typical duration of a disk write would be
rather antisocial. So no matter how schedule() behaves wrt irqs, calling it
with irqs disabled would either invite deadlocks or enable irqs at least for
a while. Even if it remembered that you used to have them disabled and
re-disabled them when switching back, you would still lose whatever protection
you were getting from having them disabled in the first place.

If e.g. you call request_irq() before you are done setting things up for the
interrupt handler and count on finishing that before reenabling irqs, you
would need irqs to _stay_ disabled through all of that. And with any
blocking allocation there's no way to guarantee that.