Message-ID: <55F9326A.9070509@redhat.com>
Date:	Wed, 16 Sep 2015 11:12:10 +0200
From:	Paolo Bonzini <pbonzini@...hat.com>
To:	Christian Borntraeger <borntraeger@...ibm.com>,
	paulmck@...ux.vnet.ibm.com
Cc:	Peter Zijlstra <peterz@...radead.org>, Tejun Heo <tj@...nel.org>,
	Ingo Molnar <mingo@...hat.com>,
	"linux-kernel@...r.kernel.org >> Linux Kernel Mailing List" 
	<linux-kernel@...r.kernel.org>, KVM list <kvm@...r.kernel.org>,
	Oleg Nesterov <oleg@...hat.com>
Subject: Re: [4.2] commit d59cfc09c32 (sched, cgroup: replace
 signal_struct->group_rwsem with a global percpu_rwsem) causes regression for
 libvirt/kvm



On 16/09/2015 10:57, Christian Borntraeger wrote:
> Am 16.09.2015 um 10:32 schrieb Paolo Bonzini:
>>
>>
>> On 15/09/2015 19:38, Paul E. McKenney wrote:
>>> Excellent points!
>>>
>>> Other options in such situations include the following:
>>>
>>> o	Rework so that the code uses call_rcu*() instead of *_expedited().
>>>
>>> o	Maintain a per-task or per-CPU counter so that every so many
>>> 	*_expedited() invocations instead uses the non-expedited
>>> 	counterpart.  (For example, synchronize_rcu instead of
>>> 	synchronize_rcu_expedited().)
>>
>> Or just use ratelimit (untested):
> 
> One of my tests was to always replace synchronize_sched_expedited with 
> synchronize_sched and things turned out to be even worse. Not sure if
> it makes sense to test your in-the-middle approach?

I don't think it applies here, since down_write/up_write is a
synchronous API.

If the revert isn't easy, I think backporting rcu_sync is the best bet.
The issue is that rcu_sync doesn't eliminate synchronize_sched, it only
makes it rarer.  So it's possible that it doesn't eliminate the root
cause of the problem.

Paolo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
