Message-ID: <CANRm+Cwtg8=KQW3AAgyDi=8U30F0+TD2EDXU_dEMdB5uHF4MBg@mail.gmail.com>
Date: Mon, 10 Jul 2017 17:29:00 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Aubrey Li <aubrey.li@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Len Brown <len.brown@...el.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>, ak@...ux.intel.com,
Tim Chen <tim.c.chen@...ux.intel.com>, arjan@...ux.intel.com,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Yang Zhang <yang.zhang.wz@...il.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Aubrey Li <aubrey.li@...ux.intel.com>
Subject: Re: [RFC PATCH v1 00/11] Create fast idle path for short idle periods
2017-07-10 16:46 GMT+08:00 Peter Zijlstra <peterz@...radead.org>:
> On Mon, Jul 10, 2017 at 09:38:30AM +0800, Aubrey Li wrote:
>> We measured 3%~5% improvement in disk IO workload, and 8%~20% improvement in
>> network workload.
>
> Argh, what a mess :/
Agreed. This patchset is a variant of
https://lkml.org/lkml/2017/6/22/296. As I mentioned before, we should
not churn the core idle path.
Regards,
Wanpeng Li
>
> So how much of the gain is simply due to skipping NOHZ? Mike used to
> carry a patch that would throttle NOHZ. And that is a _far_ smaller and
> simpler patch to do.