lists.openwall.net - Open Source and information security mailing list archives

Date:	Wed, 16 Sep 2009 21:54:59 +0530
From:	Dipankar Sarma <dipankar@...ibm.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	Gautham R Shenoy <ego@...ibm.com>,
	Joel Schopp <jschopp@...tin.ibm.com>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Balbir Singh <balbir@...ibm.com>,
	Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Arun R Bharadwaj <arun@...ux.vnet.ibm.com>,
	linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
	"Darrick J. Wong" <djwong@...ibm.com>
Subject: Re: [PATCH v3 0/3] cpu: pseries: Cpu offline states framework

On Wed, Sep 16, 2009 at 05:32:51PM +0200, Peter Zijlstra wrote:
> On Wed, 2009-09-16 at 20:58 +0530, Dipankar Sarma wrote:
> > On Tue, Sep 15, 2009 at 02:11:41PM +0200, Peter Zijlstra wrote:
> > > On Tue, 2009-09-15 at 17:36 +0530, Gautham R Shenoy wrote:
> > > > This patchset contains the offline state driver implemented for
> > > > pSeries. For pSeries, we define three available_hotplug_states. They are:
> > > > 
> > > >         online: The processor is online.
> > > > 
> > > >         offline: This is the default behaviour when the cpu is offlined
> > > > 
> > > >         inactive: This cedes the vCPU to the hypervisor with a cede latency
> > > > 
> > > > Any feedback on the patchset will be immensely valuable.
> > > 
> > > I still think it's a layering violation... it's the hypervisor manager
> > > that should be bothered with what state an off-lined cpu is in.
> > 
> > The problem is that the hypervisor manager cannot figure out what sort
> > of latency the guest OS can tolerate in a given situation. It wouldn't
> > know from what context the guest OS has ceded the vcpu. It has to have
> > some sort of hint, which is what the guest OS provides.
> 
> I'm missing something here, hot-unplug is a slow path and should not
> ever be latency critical..?

You aren't, I did :)

No, for this specific case, latency isn't an issue. The issue is:
how do we cede unused vcpus to the hypervisor for better energy management?
Yes, it can be done by a hypervisor manager telling the kernel to
offline a bunch of vcpus and make them "inactive". It does have to choose
between offline (release the vcpu) and inactive (cede it, but guaranteed
to be available if needed). The problem is that long ago we exported a
lot of hotplug stuff to userspace through the sysfs interface, and we
cannot do something inside the kernel without keeping the sysfs state
consistent. This seems like a sane way to do that without undoing all
the virtual cpu hotplug infrastructure in the supporting archs.
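To make the sysfs-consistency point concrete, here is a sketch of how the
proposed interface might look from userspace. Only "available_hotplug_states"
is named in the patchset description quoted above; the paths, the writable
attribute name, and the example state strings are assumptions for
illustration, not confirmed by this thread.

```shell
# Hypothetical per-cpu attributes alongside the existing "online" file.
CPU=/sys/devices/system/cpu/cpu2

# Existing interface: take the cpu down (what userspace tools do today).
echo 0 > $CPU/online

# Proposed: query which offline states the platform supports...
cat $CPU/available_hotplug_states      # e.g. "offline inactive"

# ...then select one, e.g. cede the vcpu to the hypervisor but keep it
# guaranteed to the partition (the "inactive" state discussed above).
echo inactive > $CPU/current_hotplug_state
```

The point of layering it this way is that tools which only know about the
old "online" file keep working, while a hypervisor-aware manager can
additionally choose between releasing and ceding the vcpu.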

Thanks
Dipankar
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
