Date:	Tue, 15 May 2012 17:40:36 +0530
From:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To:	David Rientjes <rientjes@...gle.com>
CC:	a.p.zijlstra@...llo.nl, mingo@...nel.org, pjt@...gle.com,
	paul@...lmenage.org, akpm@...ux-foundation.org, rjw@...k.pl,
	nacc@...ibm.com, paulmck@...ux.vnet.ibm.com, tglx@...utronix.de,
	seto.hidetoshi@...fujitsu.com, tj@...nel.org, mschmidt@...hat.com,
	berrange@...hat.com, nikunj@...ux.vnet.ibm.com,
	vatsa@...ux.vnet.ibm.com, liuj97@...il.com,
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH v3 0/5] CPU hotplug, cpusets: Fix issues with cpusets
 handling during suspend/resume

On 05/15/2012 05:28 AM, David Rientjes wrote:

> On Mon, 14 May 2012, Srivatsa S. Bhat wrote:
> 
>> Currently the kernel doesn't handle cpusets properly during suspend/resume.
>> After a resume, all non-root cpusets end up having only 1 cpu (the boot cpu),
>> causing massive performance degradation of workloads. One major user of cpusets
>> is libvirt, which means that after a suspend/hibernation cycle, all VMs
>> suddenly end up running terribly slow!
>>
>> Also, the kernel moves the tasks from one cpuset to another during CPU hotplug
>> in the suspend/resume path, leading to a task-management nightmare after
>> resume.
>>
> 
> To deal with mempolicy rebinding when a cpuset changes, I made a change to 
> mempolicies to store the user nodemask passed to set_mempolicy() or 
> mbind() so the intention of the user could be preserved.  It seems like 
> you should do the same thing for cpusets to store the "intended" set of 
> cpus and respect that during cpu online?
> 


Well, I think Nishanth already addressed this one. As he said, that idea was
implemented in v2 of the patchset [1], but it turned out to go against hotplug
semantics, as Peter Zijlstra pointed out.

[1]. http://thread.gmane.org/gmane.linux.documentation/4805
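
For illustration, that v2 idea amounted to roughly the following (a rough
sketch only; the user_cpus_allowed field and cpuset_restore_cpu() are names
made up for this example, not taken from the actual v2 code):

	/* Remember what the user explicitly asked for, alongside the
	 * effective mask. */
	struct cpuset {
		...
		cpumask_var_t cpus_allowed;       /* effective cpus */
		cpumask_var_t user_cpus_allowed;  /* cpus the user asked for */
	};

	/* On CPU online, hand the cpu back to every cpuset whose user
	 * had asked for it. */
	static void cpuset_restore_cpu(struct cpuset *cs, unsigned int cpu)
	{
		if (cpumask_test_cpu(cpu, cs->user_cpus_allowed))
			cpumask_set_cpu(cpu, cs->cpus_allowed);
	}

The catch is that cpus then get silently re-added to cpusets on every
online, and that is the behaviour which was deemed to go against hotplug
semantics.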

>> Patches 1 & 2 are cleanups that separate out hotplug handling so that we can
>> implement different logic for different hotplug events (CPU/Mem
>> online/offline). This also leads to some optimizations and more importantly
>> prepares the ground for any further work dealing with cpusets during hotplug.
>>
>> Patch 3 is a bug fix - it ensures that the tasks attached to the root cpuset
>> see the updated cpus_allowed mask upon CPU hotplug.
>>
>> Patches 4 and 5 implement the fix for cpusets handling during suspend/resume.
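
(To make patch 3 a little more concrete, it conceptually boils down to
something like the simplified sketch below; locking is omitted and the
helper names only approximate those in kernel/cpuset.c, they are not the
exact patch:)

	static void cpuset_update_root_on_hotplug(void)
	{
		/* Keep the root cpuset in sync with the active cpus... */
		cpumask_copy(top_cpuset.cpus_allowed, cpu_active_mask);

		/*
		 * ...and propagate the new mask to every task attached
		 * to it, so that each task's cpus_allowed reflects the
		 * hotplug event.
		 */
		update_tasks_cpumask(&top_cpuset);
	}
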
> 
> All of your patches are labeled to stable@...r.kernel.org, but I seriously 
> doubt any of this is stable material since it has been a long-standing 
> issue (and perhaps intentional in some cases)


Yes, it is a long-standing issue (a bug), but it is not intentional.
People have been struggling with this kernel bug around suspend/resume for
a long time, and there have been numerous bug reports all over the place.
It is high time we fixed this in the kernel and got the fix into the stable
kernels too (because they have this bug as well).

> and your series includes 
> cleanups and optimizations that wouldn't be stable candidates, so I'd 
> suggest removing that annotation.
> 


Well, the existing code was so messed up that I had no choice but to clean
it up before fixing the suspend/resume case. Had I tried to implement the
fix without that cleanup, the result would have been absolutely horrible,
I believe.

And the optimizations? Those are just side effects of that cleanup, which
really shows how messed up the code was in the first place!

Regards,
Srivatsa S. Bhat

