Date:	Wed, 26 Nov 2014 17:51:08 +0100
From:	Christian Borntraeger <borntraeger@...ibm.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
CC:	David Hildenbrand <dahi@...ux.vnet.ibm.com>,
	linuxppc-dev@...ts.ozlabs.org, linux-arch@...r.kernel.org,
	linux-kernel@...r.kernel.org, benh@...nel.crashing.org,
	paulus@...ba.org, akpm@...ux-foundation.org,
	heiko.carstens@...ibm.com, schwidefsky@...ibm.com, mingo@...nel.org
Subject: Re: [RFC 0/2] Reenable might_sleep() checks for might_fault() when
 atomic

On 26.11.2014 at 17:32, Michael S. Tsirkin wrote:
[...]
>>>> This is what happened on our side (very recent kernel):
>>>>
>>>> spin_lock(&lock)
>>>> copy_to_user(...)
>>>> spin_unlock(&lock)
>>>
>>> That's a deadlock even without copy_to_user - it's
>>> enough for the thread to be preempted and another one
>>> to try taking the lock.
>>
>> Huh? With CONFIG_PREEMPT spin_lock will disable preemption. (we had preempt = server anyway).
> 
> Are you sure? Can you point me where it does this please?

spin_lock --> raw_spin_lock --> _raw_spin_lock --> __raw_spin_lock

static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
---->   preempt_disable();   <-----
        spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
        LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
}


Michael, please be serious. The whole kernel would be broken if spin_lock did not disable preemption.


> 
>> But please: one step back. The problem is not the good path. The problem is that we lost a debugging aid for a known-to-be-broken case. In other words: our code had a bug. Older kernels detected that kind of bug. With your change we no longer saw the sleeping-while-atomic warning. That's it. See my other mail.
>>
>> Christian
> 
> You want to add more debugging tools, fine.

We don't want to add anything; we want to fix something that used to work.

> But this one was giving users in the field false positives.

So let's try to fix those, OK? If we can't, then tough luck. But coming up with wrong statements is not helpful.

> 
> The point is that *_user is safe with preempt off.
> It returns an error gracefully.
> It does not sleep.
> It does not trigger the scheduler in that context.

There are special cases where your statement is true, but it does not hold in general.
copy_to_user might fault, and that fault might sleep and reschedule. For example, handle_mm_fault might go down to pud_alloc, pmd_alloc, etc., and all of these functions can do a GFP_KERNEL allocation, which might sleep and therefore schedule.
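
To make this concrete, here is a minimal sketch of the pattern we are talking about (the lock and function names are made up for illustration, this is not our actual code): copy_to_user under a held spinlock. spin_lock disables preemption, so if the user page is not resident, the fault path described above can end up sleeping while atomic - exactly the case the might_fault/might_sleep check used to flag:

#include <linux/spinlock.h>
#include <linux/uaccess.h>

static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock, illustration only */

/* Hypothetical helper: copies to user space while holding demo_lock. */
static int copy_while_locked(void __user *uptr, const void *kbuf, size_t len)
{
	unsigned long left;

	spin_lock(&demo_lock);			/* preempt_disable() via __raw_spin_lock() */
	left = copy_to_user(uptr, kbuf, len);	/* may fault -> handle_mm_fault() ->
						 * pud_alloc()/pmd_alloc() -> GFP_KERNEL
						 * allocation -> may sleep while atomic */
	spin_unlock(&demo_lock);

	return left ? -EFAULT : 0;
}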


> 
> 
> David's patch makes it say it does, so it's wrong.
> 
> 
> 

