Message-Id: <1253118916.7180.6.camel@laptop>
Date: Wed, 16 Sep 2009 18:35:16 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: dipankar@...ibm.com
Cc: Gautham R Shenoy <ego@...ibm.com>,
Joel Schopp <jschopp@...tin.ibm.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Balbir Singh <balbir@...ibm.com>,
Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Arun R Bharadwaj <arun@...ux.vnet.ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
"Darrick J. Wong" <djwong@...ibm.com>
Subject: Re: [PATCH v3 0/3] cpu: pseries: Cpu offline states framework
On Wed, 2009-09-16 at 21:54 +0530, Dipankar Sarma wrote:
> No, for this specific case, latency isn't an issue. The issue is:
> how do we cede unused vcpus to the hypervisor for better energy
> management? Yes, it can be done by a hypervisor manager telling the
> kernel to offline and make a bunch of vcpus "inactive". It does have
> to choose offline (release the vcpu) vs. inactive (cede, but
> guaranteed to be available again if needed).
> The problem is that long ago we exported a lot of hotplug stuff to
> userspace through the sysfs interface and we cannot do something
> inside the kernel without keeping the sysfs stuff consistent.
> This seems like a sane way to do that without undoing all the
> virtual cpu hotplug infrastructure in different supporting archs.
I'm still not getting it..
Suppose we have some guest, it booted with 4 cpus.
We then offline 2 of them.
Apparently this LPAR binds guest cpus to physical cpus?
So we use a hypervisor interface to reclaim these 2 offlined cpus and
re-assign them to some other guest.
So far so good, right?
Now if you were to try to online those cpus in the guest, it'd fail
because they aren't backed anymore; the hot-plug attempt simply times
out.
And we're still good, right?
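
(Concretely, the failure mode I mean: the same sysfs write, now
trying to bring the cpu back up, just comes back with an error once
the hypervisor no longer backs the vcpu. Sketch only; exactly which
errno you see is up to the arch code:)

/* online_cpu.c - minimal sketch: try to online cpu2 again and
 * report the failure when the vcpu is no longer backed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/cpu2/online", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	errno = 0;
	/* fflush() forces the write out; this is where e.g. a
	 * hot-plug timeout in the arch code shows up as an error */
	if (fputs("1", f) == EOF || fflush(f) == EOF) {
		fprintf(stderr, "online cpu2: %s\n", strerror(errno));
		fclose(f);
		return 1;
	}
	return fclose(f) == EOF ? 1 : 0;
}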