Message-ID: <20090731080728.GA25049@rhlx01.hs-esslingen.de>
Date:	Fri, 31 Jul 2009 10:07:28 +0200
From:	Andreas Mohr <andi@...as.de>
To:	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Cc:	Robert Hancock <hancockrwd@...il.com>,
	Andreas Mohr <andi@...as.de>,
	Corrado Zoccolo <czoccolo@...il.com>,
	LKML <linux-kernel@...r.kernel.org>, linux-acpi@...r.kernel.org
Subject: Re: Dynamic configure max_cstate

Hi,

On Fri, Jul 31, 2009 at 03:06:46PM +0800, Zhang, Yanmin wrote:
> On Thu, 2009-07-30 at 21:43 -0600, Robert Hancock wrote:
> > On 07/28/2009 04:11 AM, Andreas Mohr wrote:
> > > Oh, and about the places that submit I/O requests and would have to
> > > set such a flag: are they in any way correlated with the scheduler's
> > > I/O wait accounting? Would the I/O wait mechanism be a place to
> > > indicate, more easily and centrally, that we expect a request to come
> > > back "very soon"? OTOH, I/O requests may have vastly differing delay
> > > expectations, so only replies expected in the short term should be
> > > flagged; otherwise we're wasting lots of ACPI deep-idle opportunities.
> > 
> > Did the results show a big difference in performance between maximum C2 
> > and maximum C3? 
> No big difference. I tried different max C-states via processor.max_cstate.
> Mostly, processor.max_cstate=1 gave results similar to idle=poll.
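
(For reference, both knobs are kernel boot parameters, so the comparison
boils down to booting with

	processor.max_cstate=1		(cap ACPI C-states at C1)

versus

	idle=poll			(never enter any idle state at all)

on the kernel command line.)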

OK, but I'd say this doesn't mean that we should implement a hard-coded
mechanism which simply says "in such cases, don't go deeper than C1".
Instead we should strive for a far-reaching _generic_ mechanism
which gathers average latencies of the various I/O activities/devices
and then uses some formula to derive the maximum (not necessarily ACPI)
idle exit latency that we're willing to endure (e.g. average device I/O
reply latency divided by 10 or so).
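
Something along these lines is what I have in mind -- purely a sketch,
where struct io_latency_stats, io_latency_update() and
io_latency_idle_limit_ns() are all made-up names, not existing kernel API:

#include <linux/types.h>

/* Sketch only -- none of these symbols exist in the kernel. */
struct io_latency_stats {
	u64 avg_reply_ns;	/* EWMA of submit->completion latency */
};

/* Called from the I/O completion path with the measured round trip. */
static void io_latency_update(struct io_latency_stats *s, u64 sample_ns)
{
	/* EWMA with weight 1/8: avg += (sample - avg) / 8 */
	s->avg_reply_ns += (s64)(sample_ns - s->avg_reply_ns) >> 3;
}

/*
 * Deepest idle exit latency we tolerate while a reply is pending:
 * the "average reply latency divided by 10" rule of thumb from above.
 */
static u64 io_latency_idle_limit_ns(const struct io_latency_stats *s)
{
	return s->avg_reply_ns / 10;
}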

And in addition to this, we should take into account (read: skip)
any idle states which kill busmaster DMA completely, at least while
busmaster DMA I/O is actually in flight.
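
Again just a sketch -- struct idle_state, STATE_KILLS_BM_DMA and
bm_dma_in_flight() are invented for illustration: pick the deepest state
under the latency limit, but skip states that disable bus mastering
while DMA is pending:

#include <linux/types.h>

/* All hypothetical, for illustration only. */
struct idle_state {
	u64		exit_latency_ns;
	unsigned int	flags;
};
#define STATE_KILLS_BM_DMA	0x1	/* e.g. C3: bus master arbitration off */

static bool bm_dma_in_flight(void);	/* hypothetical activity indicator */

/* States assumed sorted by increasing depth (and exit latency). */
static int pick_idle_state(const struct idle_state *s, int n,
			   u64 latency_limit_ns)
{
	int i, deepest = 0;

	for (i = 1; i < n; i++) {
		if (s[i].exit_latency_ns > latency_limit_ns)
			break;		/* everything deeper wakes too slowly */
		if ((s[i].flags & STATE_KILLS_BM_DMA) && bm_dma_in_flight())
			continue;	/* would stall the in-flight DMA */
		deepest = i;
	}
	return deepest;
}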

_Lots_ of very nice opportunities for improvement here, I'd say...
(in the 5, 10 or even 40% range for certain network I/O)

Andreas
