Message-ID: <alpine.DEB.2.00.1205142055160.10906@chino.kir.corp.google.com>
Date: Mon, 14 May 2012 21:04:16 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Nishanth Aravamudan <nacc@...ux.vnet.ibm.com>
cc: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>,
a.p.zijlstra@...llo.nl, mingo@...nel.org, pjt@...gle.com,
paul@...lmenage.org, akpm@...ux-foundation.org, rjw@...k.pl,
nacc@...ibm.com, paulmck@...ux.vnet.ibm.com, tglx@...utronix.de,
seto.hidetoshi@...fujitsu.com, tj@...nel.org, mschmidt@...hat.com,
berrange@...hat.com, nikunj@...ux.vnet.ibm.com,
vatsa@...ux.vnet.ibm.com, liuj97@...il.com,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH v3 5/5] cpusets, suspend: Save and restore cpusets during
suspend/resume
On Mon, 14 May 2012, Nishanth Aravamudan wrote:
> > I see what you're doing with this and think it will fix the problem that
> > you're trying to address, but I think it could become much more general
> > to just the suspend case: if an admin sets a cpuset to have cpus 4-6, for
> > example, and cpu 5 goes offline, then I believe the cpuset should once
> > again become 4-6 if cpu 5 comes back online. So I think this should be
> > implemented like mempolicies are which save the user intended nodemask
> > that may become restricted by cpuset placement but will be rebound if the
> > cpuset includes the intended nodes.
>
> Heh, please read the thread at
> http://marc.info/?l=linux-kernel&m=133615922717112&w=2 ... subject is
> "[PATCH v2 0/7] CPU hotplug, cpusets: Fix issues with cpusets handling
> upon CPU hotplug". That was effectively the same solution Srivatsa
> originally posted. But after lengthy discussions with PeterZ and others,
> it was decided that suspend/resume is a special case where it makes
> sense to save "policy" but that generally cpu/memory hotplug is a
> destructive operation and nothing is required to be retained (that
> certain policies are retained is unfortunately now expected, but isn't
> guaranteed for cpusets, at least).
>
If you do set_mempolicy(MPOL_BIND, 2-3) to bind a thread to nodes 2-3 while
it is attached to a cpuset where cpuset.mems == 2-3, and cpuset.mems then
changes to 0-1, what is the expected behavior? Do we immediately oom on
the next allocation? And if cpuset.mems is set back to 2-3, what's the
desired behavior?
I fixed this problem by introducing the MPOL_F_* flags in set_mempolicy(2):
they save the user's intended nodemask as passed to set_mempolicy() and
respect it whenever cpusets allow.
Right now, the behavior of what happens for a cpuset where cpuset.cpus ==
2-3 and then cpus 2-3 go offline and then are brought back online is
undefined. The same is true of cpuset.cpus during resume. So if you're
going to add a cpumask to struct cpuset, then why not respect it for all
offline events and get rid of all this specialized suspend-only stuff?
It's very simple to make this consistent across all cpu hotplug events and
build suspend on top of it from a cpuset perspective.