Message-ID: <b81a0e641e6724c8aaf5b6a4b32fa4b550ecbbcd.camel@surriel.com>
Date: Mon, 22 Mar 2021 22:08:07 -0400
From: Rik van Riel <riel@...riel.com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: linux-kernel@...r.kernel.org, kernel-team@...com,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>
Subject: Re: [PATCH v2] sched/fair: bring back select_idle_smt, but differently

On Mon, 2021-03-22 at 15:33 +0000, Mel Gorman wrote:
> If trying that, I would put that in a separate patch. At one point
> I did play with clearing prev, target and recent but hit problems.
> Initialising the mask and clearing them in select_idle_sibling() hurt
> the fast path and doing it later was not much better. IIRC, the
> problem I hit was that the cost of clearing multiple CPUs before the
> search was not offset by gains from a more efficient search.
I'm definitely avoiding the more expensive operations,
and am only using __cpumask_clear_cpu now :)
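
Roughly the shape of what I mean, for illustration only (a sketch
against the select_idle_cpu() scan path; the mask setup mirrors
current mainline, but where exactly this lands in v3 may differ):

	/*
	 * Sketch: copy the LLC span once, then knock out the CPUs
	 * that were already probed. __cpumask_clear_cpu() is a plain
	 * non-atomic __clear_bit(), so clearing bits in a mask that
	 * is private to this wakeup avoids the locked RMW that the
	 * atomic cpumask_clear_cpu() would do.
	 */
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);

	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	__cpumask_clear_cpu(prev, cpus);
	__cpumask_clear_cpu(target, cpus);
	__cpumask_clear_cpu(recent_used_cpu, cpus);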
> If I had to guess, simply initialising cpumask after calling
> select_idle_smt() will be faster for your particular case because you
> have a reasonable expectation that prev's SMT sibling is idle when
> there are no idle cores. Checking if prev's sibling is free when
> there are no idle cores is fairly cheap in comparison to a cpumask
> initialisation and partial clearing.
>
> If you have the testing capacity and time, test both.
Kicking off more tests soon. I'll get back with a v3 patch
on Wednesday.
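
For reference while testing, the cheap sibling check being discussed
looks roughly like this (a sketch of the select_idle_smt() I'm
carrying; the guard conditions may still change in v3):

	/*
	 * Scan target's SMT siblings for an idle CPU. This is cheap:
	 * the SMT mask is tiny (typically 2-8 CPUs), so walking it
	 * costs far less than initialising and partially clearing an
	 * LLC-wide cpumask.
	 */
	static int select_idle_smt(struct task_struct *p,
				   struct sched_domain *sd, int target)
	{
		int cpu;

		for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
			if (cpu == target)
				continue;
			/* Stay inside the LLC this wakeup targets. */
			if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
				continue;
			if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
				return cpu;
		}

		return -1;
	}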
--
All Rights Reversed.