Message-ID: <a2a4cd5b398390dcf01b800c964b80c6eba89d18.camel@gmx.de>
Date: Wed, 17 May 2023 21:52:21 +0200
From: Mike Galbraith <efault@....de>
To: Chen Yu <yu.c.chen@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Tim Chen <tim.c.chen@...el.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
K Prateek Nayak <kprateek.nayak@....com>,
Abel Wu <wuyun.abel@...edance.com>,
Yicong Yang <yangyicong@...ilicon.com>,
"Gautham R . Shenoy" <gautham.shenoy@....com>,
Len Brown <len.brown@...el.com>,
Chen Yu <yu.chen.surf@...il.com>,
Arjan Van De Ven <arjan.van.de.ven@...el.com>,
Aaron Lu <aaron.lu@...el.com>, Barry Song <baohua@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] sched/fair: Introduce SIS_PAIR to wakeup task on
local idle core first
On Thu, 2023-05-18 at 00:57 +0800, Chen Yu wrote:
> >
> I'm thinking of two directions based on current patch:
>
> 1. Check the task duration; if it is a high-speed ping-pong pair, let the
> wakee search for an idle SMT sibling on the current core.
>
> This strategy gives the best overall performance improvement, but
> the short-task-duration tweak based on online CPU number would be
> an obstacle.
Duration is pretty useless, as it says nothing about concurrency.
Taking the 500us metric as an example, one pipe ping-pong can meet
that, and toss up to nearly 50% of throughput out the window if you
stack based only on duration.
> Or
>
> 2. Honors the idle core.
> That is to say, if there is an idle core in the system, choose that
> idle core first. Otherwise, fall back to searching for an idle SMT
> sibling rather than choosing an idle CPU in a random half-busy core.
>
> This strategy could partially mitigate the C2C overhead without
> breaking the idle-core-first strategy. So I had a try on it; with
> the above change, I did see some improvement when the system is around
> half busy (after all, idle_has_core has to be false):
If mitigation is the goal, and until the next iteration of socket
growth that's not a waste of effort, continuing to honor idle core is
the only option that has a ghost of a chance.
That said, I don't like the waker/wakee have-met heuristic much either,
because tasks having woken one another before can just as well mean they
met at a sleeping lock; it does not necessarily imply latency-bound IPC.
I haven't met a heuristic I like, and that includes the ones I invent.
The smarter you try to make them, the more precious fast-path cycles
they eat, and there's a never-ending supply of holes in the damn things
that want plugging. A prime example was the SIS_CURRENT heuristic self-
destructing in my box, rendering that patch a not quite free noop :)
-Mike