Message-ID: <fa57d6f059ba68cbb2d372c7c63f008e2b6a80cf.camel@gmx.de>
Date: Wed, 10 Nov 2021 16:40:08 +0100
From: Mike Galbraith <efault@....de>
To: Tao Zhou <tao.zhou@...ux.dev>,
Mel Gorman <mgorman@...hsingularity.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
Barry Song <song.bao.hua@...ilicon.com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/fair: Couple wakee flips with heavy wakers
On Wed, 2021-11-10 at 17:53 +0800, Tao Zhou wrote:
> On Fri, Oct 29, 2021 at 09:42:19AM +0100, Mel Gorman wrote:
> > On Fri, Oct 29, 2021 at 12:19:48AM +0800, Tao Zhou wrote:
> > > Hi Mel,
> > >
> > > On Thu, Oct 28, 2021 at 10:48:33AM +0100, Mel Gorman wrote:
> > >
> > > > @@ -5865,6 +5865,14 @@ static void record_wakee(struct task_struct *p)
> > > > }
> > > >
> > > > if (current->last_wakee != p) {
> > > > + int min = __this_cpu_read(sd_llc_size) << 1;
> > > > + /*
> > > > + * Couple the wakee flips to the waker for the case where it
> > > > + * doesn't accrue flips, taking care to not push the wakee
> > > > + * high enough that the wake_wide() heuristic fails.
> > > > + */
> > > > + if (current->wakee_flips > p->wakee_flips * min)
> > > > + p->wakee_flips++;
> > > > current->last_wakee = p;
> > > > current->wakee_flips++;
> > > > }
> > > > @@ -5895,7 +5903,7 @@ static int wake_wide(struct task_struct *p)
> > > >
> > > > if (master < slave)
> > > > swap(master, slave);
> > > > - if (slave < factor || master < slave * factor)
> > > > + if ((slave < factor && master < (factor>>1)*factor) || master < slave * factor)
> > >
> > > So a check like this would cover the range above:
> > >
> > > if ((slave < factor && master < slave * factor) ||
> > >      master < slave * factor)
> > >
> > > The "factor>>1" filters some of that out.
> > >
> > > If "slave < factor" is true and "master < (factor>>1)*factor" is false,
> > > we fall through to check "master < slave * factor" (this is the path
> > > added by the "&& master < (factor>>1)*factor" check). In that latter
> > > check "slave < factor" must be true, so "master < slave * factor" can
> > > only hold when slave is in the range (factor>>1, factor). If slave is
> > > in [0, factor>>1], "master < slave * factor" is absolutely false, and
> > > that evaluation could be skipped by caching the result of
> > > "master < (factor>>1)*factor" in a variable.
> > >
> > > Just my random input; I'm still confused, so moving on.
> > >
> >
> > I'm not sure what point you're trying to make.
>
> Ok, some days later I can't even understand what I was saying myself.
> After going back and forth in my wrecked head, what I was trying to get
> at is this:
>
> if ((slave < factor && master < (factor>>1)*factor) || (slave >= factor>>1) && master < slave * factor)
>
> check "slave > factor>>1" for filter the cases that is calculated if I
> am not wrong. If this have a little effect that will be to not need to
> do "master < slave * factor" for some time not sure.
Take the original:

	if (slave < factor || master < slave * factor)
		return 0;
That is looking for a waker:wakees ratio of sd_llc_size, and does it
the way it does because you can create "flips" galore by waking only
two tasks, but using the two comparisons together makes it more likely
that you're waking sd_llc_size tasks. Take my box's LLC servicing 8
rqs: if the wakee's flips are 8, the multi-waker being at 8 times that
suggests 8 wakees, each having been awakened 8 times by our
multi-waker, qualifying the pair to be considered part of a load too
large to restrict to one LLC.
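To make that concrete, here's the check lifted out into a trivial
standalone sketch (userspace C, not the kernel code; the wide() wrapper
and the sample flip counts are mine, purely illustrative):

	/* original check, factor == sd_llc_size == 8 on this box */
	static int wide(unsigned int master, unsigned int slave,
			unsigned int factor)
	{
		if (master < slave) {
			unsigned int tmp = master;
			master = slave;
			slave = tmp;
		}
		if (slave < factor || master < slave * factor)
			return 0;	/* stay affine */
		return 1;		/* wake wide */
	}

	/* 8 wakees, each flipped 8 times: wide(64, 8, 8) == 1 */
	/* one wakee's flips decayed below 8: wide(64, 7, 8) == 0 */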
But what happens when our multi-waker isn't always waking a uniformly
growing/shrinking set of workers, it's a bit chaotic, and the flip
count of some wakees decays below our magic 8? The right side can be
happy as a clam, because the multi-waker is flipping madly enough to
make wakee * llc_size nothing remotely resembling a hurdle, but there
sits a deal breaker on the left.. so we should wake these threads
affine? I should have left that alone, or at least picked a big
arbitrary stopper, but instead picked half of our magic "I might be
waking a herd" number to say nah: as long as the ratio on the right
looks herd-like AND our multi-waker appears to be waking at least half
a herd, wake it wide.
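Lifting the tweaked check into the same standalone sketch (again
illustration only, same made-up flip counts):

	static int wide_patched(unsigned int master, unsigned int slave,
				unsigned int factor)
	{
		if (master < slave) {
			unsigned int tmp = master;
			master = slave;
			slave = tmp;
		}
		if ((slave < factor && master < (factor >> 1) * factor) ||
		    master < slave * factor)
			return 0;	/* stay affine */
		return 1;		/* wake wide */
	}

	/* the decayed wakee no longer blocks a wide wakeup once the
	 * waker reaches half a herd: wide_patched(64, 7, 8) == 1,
	 * where the original wide(64, 7, 8) returned 0. */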
That not-a-noop probably should die despite having not (yet) shown an
evil side because it dings up an already questionable enough heuristic.
-Mike