Message-ID: <909fd5b2-a1c6-49cf-8efa-c71affb1a4fe@efficios.com>
Date:   Mon, 6 Nov 2023 11:32:02 -0500
From:   Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:     K Prateek Nayak <kprateek.nayak@....com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Swapnil Sapkal <Swapnil.Sapkal@....com>,
        Aaron Lu <aaron.lu@...el.com>, Chen Yu <yu.c.chen@...el.com>,
        Tim Chen <tim.c.chen@...el.com>,
        "Gautham R . Shenoy" <gautham.shenoy@....com>, x86@...nel.org
Subject: Re: [RFC PATCH v2 0/2] sched/fair migration reduction features

On 2023-10-26 23:27, K Prateek Nayak wrote:
[...]
> --
> It is a mixed bag of results, as expected. I would love to hear your
> thoughts on the results. Meanwhile, I'll try to get some more data
> from other benchmarks.

I suspect that workloads exhibiting a client-server (1:1) pairing 
pattern are hurt by the bias towards leaving tasks on their prev 
runqueue: they benefit from moving the client and server tasks as close 
to each other as possible, so they share either the same core or a 
common cache.

The hackbench workload is also client-server, but there are N client and 
N server threads, creating an N:N relationship that really does not work 
well when trying to pull tasks on sync wakeup: tasks then bounce all 
over the place.

It's tricky though. If we try to fix the "1:1" client-server pattern 
with a heuristic, we may miss scenarios which are close to 1:1 but don't 
exactly match.

I'm working on a rewrite of select_task_rq_fair, with the aim of 
tackling the more general task placement problem by taking the following 
into account (a rough sketch follows the list):

- We want to converge towards a task placement that moves tasks with
   the most waker/wakee interactions as close together as possible in
   the cache topology.
- We can use the core util_est/capacity metrics to determine whether we
   have capacity left to enqueue a task in a core's runqueue.
- The underlying assumption is that work conserving [1] is not a good
   characteristic to aim for, because it does not take into account the
   overhead associated with migrations and the resulting loss of cache
   locality.

Thanks,

Mathieu

[1] I use the definition of "work conserving" found here:
     https://people.ece.ubc.ca/sasha/papers/eurosys16-final29.pdf

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
