Message-ID: <ZUzzeEptTlgalJIc@chenyu5-mobl2.ccr.corp.intel.com>
Date:   Thu, 9 Nov 2023 22:58:00 +0800
From:   Chen Yu <yu.c.chen@...el.com>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC:     K Prateek Nayak <kprateek.nayak@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        "Vincent Guittot" <vincent.guittot@...aro.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Swapnil Sapkal <Swapnil.Sapkal@....com>,
        Aaron Lu <aaron.lu@...el.com>,
        "Tim Chen" <tim.c.chen@...el.com>,
        "Gautham R . Shenoy" <gautham.shenoy@....com>, <x86@...nel.org>
Subject: Re: [RFC PATCH v2 0/2] sched/fair migration reduction features

On 2023-11-06 at 11:32:02 -0500, Mathieu Desnoyers wrote:
> On 2023-10-26 23:27, K Prateek Nayak wrote:
> [...]
> > --
> > It is a mixed bag of results, as expected. I would love to hear your
> > thoughts on the results. Meanwhile, I'll try to get some more data
> > from other benchmarks.
> 
> I suspect that workloads that exhibit a client-server (1:1) pairing pattern
> are hurt by the bias towards leaving tasks on their prev runqueue: they
> benefit from moving both client/server tasks as close as possible so they
> share either the same core or a common cache.

Yes, this should be true if the wakee's previous runqueue is not idle, at least
on Prateek's machine. Does this mean the change in PATCH 2/2 that "chooses the
previous CPU over the target CPU when all CPUs are busy" might not be a
universal win for 1:1 workloads?

> 
> The hackbench workload is also client-server, but there are N-client and
> N-server threads, creating a N:N relationship which really does not work
> well when trying to pull tasks on sync wakeup: tasks then bounce all over
> the place.
> 
> It's tricky though. If we try to fix the "1:1" client-server pattern with a
> heuristic, we may miss scenarios which are close to 1:1 but don't exactly
> match.
> 
> I'm working on a rewrite of select_task_rq_fair, with the aim to tackle the
> more general task placement problem taking into account the following:
> 
> - We want to converge towards a task placement that moves tasks with
>   most waker/wakee interactions as close as possible in the cache
>   topology,
> - We can use the core util_est/capacity metrics to calculate whether we
>   have capacity left to enqueue a task in a core's runqueue.
> - The underlying assumption is that work conserving [1] is not a good
>   characteristic to aim for, because it does not take into account the
>   overhead associated with migrations, and thus lack of cache locality.

Agreed. One pain point is how to figure out what a wakee actually needs:
does it want an idle CPU, or cache locality? One heuristic I'm considering
to predict whether a task is cache sensitive is to check both the task's
average runtime and its average sleep time. A long runtime usually indicates
that the task has a large cache footprint, in terms of icache/dcache. A short
sleep time means the task is likely to revisit its still-warm cache soon.

thanks,
Chenyu
