Message-ID: <75ec9e490d1d4854ae2be4ad5b3b24b9@hisilicon.com>
Date: Tue, 1 Jun 2021 08:09:09 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Mel Gorman <mgorman@...e.de>
CC: Peter Zijlstra <peterz@...radead.org>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"mingo@...hat.com" <mingo@...hat.com>,
"dietmar.eggemann@....com" <dietmar.eggemann@....com>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"bsegall@...gle.com" <bsegall@...gle.com>,
"valentin.schneider@....com" <valentin.schneider@....com>,
"juri.lelli@...hat.com" <juri.lelli@...hat.com>,
"bristot@...hat.com" <bristot@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"guodong.xu@...aro.org" <guodong.xu@...aro.org>,
yangyicong <yangyicong@...wei.com>,
tangchengchang <tangchengchang@...wei.com>,
Linuxarm <linuxarm@...wei.com>
Subject: RE: [PATCH] sched: fair: don't depend on wake_wide if waker and wakee
are already in same LLC
> -----Original Message-----
> From: Mel Gorman [mailto:mgorman@...e.de]
> Sent: Tuesday, June 1, 2021 7:59 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>
> Cc: Peter Zijlstra <peterz@...radead.org>; vincent.guittot@...aro.org;
> mingo@...hat.com; dietmar.eggemann@....com; rostedt@...dmis.org;
> bsegall@...gle.com; valentin.schneider@....com; juri.lelli@...hat.com;
> bristot@...hat.com; linux-kernel@...r.kernel.org; guodong.xu@...aro.org;
> yangyicong <yangyicong@...wei.com>; tangchengchang
> <tangchengchang@...wei.com>; Linuxarm <linuxarm@...wei.com>
> Subject: Re: [PATCH] sched: fair: don't depend on wake_wide if waker and wakee
> are already in same LLC
>
> On Mon, May 31, 2021 at 10:21:55PM +0000, Song Bao Hua (Barry Song) wrote:
> > The benchmark of tbenchs is still positive:
> >
> > tbench4
> >
> > 5.13-rc4 5.13-rc4
> > disable-llc-wakewide/
> >
> > Hmean 1 514.87 ( 0.00%) 505.17 * -1.88%*
> > Hmean 2 914.45 ( 0.00%) 918.45 * 0.44%*
> > Hmean 4 1483.81 ( 0.00%) 1485.38 * 0.11%*
> > Hmean 8 2211.62 ( 0.00%) 2236.02 * 1.10%*
> > Hmean 16 2129.80 ( 0.00%) 2450.81 * 15.07%*
> > Hmean 32 5098.35 ( 0.00%) 5085.20 * -0.26%*
> > Hmean 64 4797.62 ( 0.00%) 4801.34 * 0.08%*
> > Hmean 80 4802.89 ( 0.00%) 4780.40 * -0.47%*
> >
> > I guess something which works across several LLC domains
> > causes the performance regression.
> >
> > I wonder what your test results would look like if you pinned
> > the testing to CPUs within one LLC?
> >
>
> While I could do this, what would be the benefit? Running within one LLC
> would be running the test in one small fraction of the entire machine as
> the machine has multiple LLCs per NUMA node. A patch dealing with how the
> scheduler works with respect to LLC should take different configurations
> into consideration as best as possible.
I do agree with this. And I admit this patch lacks consideration
and testing of various configurations. But more numbers will be
helpful for figuring out a better solution, one which can either
extend to wider configurations or, in v2, shrink to specific
machines such as those whose whole NUMA node shares an LLC, or
desktops where all CPUs share one LLC. For example, my PC with
the newest Intel i9 has all 10 CPUs (20 threads) sharing the
LLC.
>
> --
> Mel Gorman
> SUSE Labs
Thanks
Barry