Message-ID: <55ce0c78-16d4-6e73-87ae-0a88ceca1b28@meta.com>
Date: Tue, 27 Jun 2023 12:31:29 -0400
From: Chris Mason <clm@...a.com>
To: David Vernet <void@...ifault.com>,
"Gautham R. Shenoy" <gautham.shenoy@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, mingo@...hat.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
rostedt@...dmis.org, dietmar.eggemann@....com, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
joshdon@...gle.com, roman.gushchin@...ux.dev, tj@...nel.org,
kernel-team@...a.com, K Prateek Nayak <kprateek.nayak@....com>
Subject: Re: [RFC PATCH 3/3] sched: Implement shared wakequeue in CFS
On 6/26/23 11:17 PM, David Vernet wrote:
> On Mon, Jun 26, 2023 at 11:34:41AM +0530, Gautham R. Shenoy wrote:
>>
>> Observations: there are run-to-run variations with this benchmark. I
>> will try with the newer schbench later this week.
>
> +cc Chris. He showed how you can use schbench to cause painful
> contention with swqueue in [0]. Chris -- any other schbench incantations
> that you'd recommend we try with Gautham's patches?
>
> [0]: https://lore.kernel.org/lkml/c8419d9b-2b31-2190-3058-3625bdbcb13d@meta.com/
Hopefully my command line from this other email will be consistent with
the new schbench; please let me know if you're still seeing variations.
In terms of other things to try, I was creating one worker per messenger
to maximize runqueue lock contention, but variations on -t may be
interesting.
Basically I did -t 1 -m <increasing> until all the idle time on
unpatched Linus was soaked up, and then I compared Linus and swqueue.
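For example, a sweep along those lines might look like the sketch below.
This is just an illustration, not my exact command line; I'm assuming the
usual schbench flags (-m for message threads, -t for worker threads per
messenger, -r for runtime in seconds), so double-check them against the
schbench version you're running:

    # Sweep message-thread counts with one worker per messenger,
    # watching idle time (e.g. via mpstat) on the unpatched kernel.
    for m in 1 2 4 8 16 32 64 128; do
            ./schbench -m $m -t 1 -r 60
    done

Once the unpatched kernel has no idle time left at some -m, rerun the
same points on the swqueue branch and compare the latency percentiles.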
-chris