Date:	Wed, 26 Nov 2014 18:32:16 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Christian Borntraeger <borntraeger@...ibm.com>
Cc:	David Hildenbrand <dahi@...ux.vnet.ibm.com>,
	linuxppc-dev@...ts.ozlabs.org, linux-arch@...r.kernel.org,
	linux-kernel@...r.kernel.org, benh@...nel.crashing.org,
	paulus@...ba.org, akpm@...ux-foundation.org,
	heiko.carstens@...ibm.com, schwidefsky@...ibm.com, mingo@...nel.org
Subject: Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when atomic

On Wed, Nov 26, 2014 at 05:07:13PM +0100, Christian Borntraeger wrote:
> Am 26.11.2014 um 16:47 schrieb Michael S. Tsirkin:
> > On Wed, Nov 26, 2014 at 04:32:07PM +0100, David Hildenbrand wrote:
> >>> On Wed, Nov 26, 2014 at 05:17:29PM +0200, Michael S. Tsirkin wrote:
> >>>> On Wed, Nov 26, 2014 at 11:05:04AM +0100, David Hildenbrand wrote:
> >>>>>> What's the path you are trying to debug?
> >>>>>
> >>>>> Well, we had a problem where we held a spin_lock and called
> >>>>> copy_(from|to)_user(). We experienced seemingly random deadlocks that took some guy
> >>>>> almost a week to debug. The simple might_sleep() check would have shown this
> >>>>> error immediately.
> >>>>
> >>>> This must have been a very old kernel.
> >>>> A modern kernel will return an error from copy_to_user.
> >>>> Which is really the point of the patch you are trying to revert.
> >>>
> >>> That's assuming you disabled preemption. If you didn't and you take
> >>> a spinlock, you can deadlock even without any userspace access.
> >>>
> >>
> >> (Thanks for your resend; my first email was sent directly to you ... grml)
> >>
> >> This is what happened on our side (very recent kernel):
> >>
> >> spin_lock(&lock)
> >> copy_to_user(...)
> >> spin_unlock(&lock)
> > 
> > That's a deadlock even without copy_to_user - it's
> > enough for the thread to be preempted and another one
> > to try taking the lock.
> 
> Huh? With CONFIG_PREEMPT, spin_lock() will disable preemption. (We had preempt = server anyway.)

Are you sure? Can you point me where it does this please?

> But please: one step back. The problem is not the good path. The problem is that we lost a debugging aid for a known-to-be-broken case. In other words: our code had a bug. Older kernels detected that kind of bug. With your change we no longer saw the sleeping-while-atomic warning. That's it. See my other mail.
> 
> Christian

You want to add more debugging tools, fine. But this one was
giving users in the field false positives.

The point is that *_user is safe with preempt off.
It returns an error gracefully.
It does not sleep.
It does not trigger the scheduler in that context.
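A minimal sketch of that behavior (hypothetical function and lock names,
not code from this thread): when the fault path cannot sleep,
copy_to_user() reports the number of bytes it could not copy, and the
caller maps that to -EFAULT instead of hanging:

#include <linux/spinlock.h>
#include <linux/uaccess.h>

static DEFINE_SPINLOCK(example_lock);

/* Copy a kernel buffer to user space while holding a spinlock.
 * copy_to_user() does not sleep here; on a fault it returns the
 * number of uncopied bytes and we fail gracefully with -EFAULT.
 */
static long copy_stats_to_user(void __user *ubuf, const void *kbuf, size_t len)
{
	long ret = 0;

	spin_lock(&example_lock);
	if (copy_to_user(ubuf, kbuf, len))
		ret = -EFAULT;
	spin_unlock(&example_lock);

	return ret;
}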


David's patch makes the might_fault() annotation say that it does, so it's wrong.



-- 
MST
