Message-ID: <alpine.DEB.2.11.1411272246110.3961@nanos>
Date: Thu, 27 Nov 2014 22:52:08 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: David Hildenbrand <dahi@...ux.vnet.ibm.com>
cc: Heiko Carstens <heiko.carstens@...ibm.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, benh@...nel.crashing.org,
paulus@...ba.org, akpm@...ux-foundation.org,
schwidefsky@...ibm.com, mingo@...nel.org
Subject: Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when
atomic
On Thu, 27 Nov 2014, David Hildenbrand wrote:
> > OTOH, there is no reason why we need to disable preemption over that
> > page_fault_disabled() region. There are code paths which really do
> > not require disabling preemption for that.
> >
> > We have that separated in preempt-rt for obvious reasons and IIRC
> > Peter Zijlstra tried to disentangle it in mainline some time ago. I
> > forgot why that never got merged.
> >
>
> Of course, we can completely separate that in our page fault code by doing
> pagefault_disabled() checks instead of in_atomic() checks (even in add-on
> patches later).
>
> > We tie way too much stuff to the preemption count already, which is a
> > nightmare because we have no clear distinction of protection
> > scopes.
>
> It might not be optimal, but keeping a separate counter for
> pagefault_disable() as part of the preemption counter seems to be the only
> doable thing right now.
It needs to be separate if it is to be useful. Otherwise we just
have an extra accounting in preempt_count() which does exactly the same
thing as we have now: disabling preemption.
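To make that concrete, the current scheme is roughly this (a simplified
sketch of the uaccess helpers and the fault-path check, not a verbatim
quote of any particular file or arch):

	static inline void pagefault_disable(void)
	{
		preempt_count_inc();	/* faults are now handled atomically */
		barrier();		/* order the increment before a possible fault */
	}

	static inline void pagefault_enable(void)
	{
		barrier();
		preempt_count_dec();
	}

	/* arch fault handler, schematically: */
	if (in_atomic() || !mm)
		goto no_context;	/* exception fixup, never sleep */

So disabling pagefaults and disabling preemption are indistinguishable
at the point where the fault handler looks.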
Now you might say that we could mask out that part when checking
preempt_count, but that won't work on x86, as x86 keeps the preempt
counter in a per-cpu variable and not in a per-thread one.
But if you want to disentangle pagefault disable from preempt disable,
then you must move it to the thread, because it is a property of the
thread. The preempt count is very much a per-cpu counter, as you can
only go through schedule() when it becomes 0.
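Sketched out (illustrative only; the per-thread field name is made up
here, and pagefault_disabled() is the check your series talks about):

	/* pagefault disable becomes a pure per-thread property: */
	static inline void pagefault_disable(void)
	{
		current->pagefault_disabled++;	/* illustrative field */
		barrier();
	}

	static inline void pagefault_enable(void)
	{
		barrier();
		current->pagefault_disabled--;
	}

	static inline bool pagefault_disabled(void)
	{
		return current->pagefault_disabled != 0;
	}

	/* the fault handler checks the thread, not the preempt counter: */
	if (pagefault_disabled() || !mm)
		goto no_context;

That way preempt_disable() stays what it is, and whether the faulting
task wants atomic uaccess is visible wherever that task happens to run.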
Btw, I find the x86 representation way clearer, because it documents
that the preempt count is a per-cpu BKL and not a magic thread
property. And sadly that is how the preempt count is used ...
> I am not sure a completely separate counter is even possible, as it
> would increase the size of thread_info.
And adding a ulong to thread_info is going to create exactly which
problem?
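Schematically it is one extra word (illustrative layout only; the exact
fields of thread_info vary per arch):

	struct thread_info {
		struct task_struct	*task;
		unsigned long		flags;
		int			preempt_count;		/* where archs still keep it */
		unsigned long		pagefault_disabled;	/* one extra word */
	};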
Thanks,
tglx