Message-ID: <20141126170447.GC11202@redhat.com>
Date: Wed, 26 Nov 2014 19:04:47 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Christian Borntraeger <borntraeger@...ibm.com>
Cc: David Hildenbrand <dahi@...ux.vnet.ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, benh@...nel.crashing.org,
paulus@...ba.org, akpm@...ux-foundation.org,
heiko.carstens@...ibm.com, schwidefsky@...ibm.com, mingo@...nel.org
Subject: Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when
atomic
On Wed, Nov 26, 2014 at 05:51:08PM +0100, Christian Borntraeger wrote:
> > But this one was giving users in the field false positives.
>
> So let's try to fix those, ok? If we can't, then tough luck.
Sure.
I think the simplest way might be to make spinlocks disable
preemption when CONFIG_DEBUG_ATOMIC_SLEEP is enabled.
As a result, the userspace access will fail and the caller will
get a nice error.
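
Something like this on the caller side shows the effect I'm after (a
rough sketch only, not a tested patch; lock/uptr/val/err are just
placeholders):

        spin_lock(&lock);
        /*
         * With CONFIG_DEBUG_ATOMIC_SLEEP selecting PREEMPT_COUNT, taking
         * the spinlock bumps the preempt count even on !CONFIG_PREEMPT,
         * so the fault handler sees we are atomic and will not sleep.
         */
        if (copy_to_user(uptr, &val, sizeof(val))) {
                /*
                 * Page not resident: copy_to_user() returns the number
                 * of bytes it could not copy instead of faulting the
                 * page in, and we report a clean error.
                 */
                err = -EFAULT;
        }
        spin_unlock(&lock);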
> But coming up with wrong statements is not helpful.
True. Sorry that I did that.
> >
> > The point is that *_user is safe with preempt off.
> > It returns an error gracefully.
> > It does not sleep.
> > It does not trigger the scheduler in that context.
>
> There are special cases where your statement is true. But it's not true
> in general.
> copy_to_user might fault, and that fault might sleep and reschedule.
Yes. But not if called in atomic context.
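
The way to touch user memory when you know you might be atomic is the
pagefault_disable()/_inatomic pattern, which is guaranteed not to sleep.
Roughly (sketch only, untested; uptr/val/ret are placeholders):

        pagefault_disable();
        ret = __copy_to_user_inatomic(uptr, &val, sizeof(val));
        pagefault_enable();
        if (ret) {
                /*
                 * The page was not resident and the fault handler bailed
                 * out instead of sleeping.  Fall back to a non-atomic
                 * path, e.g. drop the lock and retry with plain
                 * copy_to_user().
                 */
        }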
> For example, handle_mm_fault might go down to pud_alloc, pmd_alloc etc.,
> and all these functions could do a GFP_KERNEL allocation. Which might
> sleep. Which will schedule.
>
> > David's patch makes it say it does, so it's wrong.
>
Absolutely.
I think you can already debug your case easily by enabling CONFIG_PREEMPT.
This seems counter-intuitive, and distro debug kernels don't seem to do this.
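
For reference, the debug setup I mean is roughly this config fragment
(dependencies may pull in more options):

        # Full preemption: spinlocks really disable preemption
        CONFIG_PREEMPT=y
        # might_sleep()/might_fault() checks; selects PREEMPT_COUNT, so
        # in_atomic() is reliable and the checks can actually fire
        CONFIG_DEBUG_ATOMIC_SLEEP=y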
--
MST