Message-ID: <20230210183155.GA11997@ranerica-svr.sc.intel.com>
Date:   Fri, 10 Feb 2023 10:31:55 -0800
From:   Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To:     Valentin Schneider <vschneid@...hat.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Ricardo Neri <ricardo.neri@...el.com>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>,
        Ben Segall <bsegall@...gle.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
        "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Ionela Voinescu <ionela.voinescu@....com>, x86@...nel.org,
        linux-kernel@...r.kernel.org, "Tim C . Chen" <tim.c.chen@...el.com>
Subject: Re: [PATCH v3 06/10] sched/fair: Use the prefer_sibling flag of the
 current sched domain

On Fri, Feb 10, 2023 at 05:12:30PM +0000, Valentin Schneider wrote:
> On 10/02/23 17:53, Peter Zijlstra wrote:
> > On Fri, Feb 10, 2023 at 02:54:56PM +0000, Valentin Schneider wrote:
> >
> >> So something like have SD_PREFER_SIBLING affect the SD it's on (and not
> >> its parent), but remove it from the lowest non-degenerated topology level?
> >
> > So I was rather confused about the whole moving-it-between-levels thing
> > this morning -- conceptually, prefer siblings says you want to try
> > sibling domains before filling up your current domain. Now, balancing
> > between siblings happens one level up, hence looking at child->flags
> > makes perfect sense.
> >
> > But looking at the current domain and still calling it prefer sibling
> > makes absolutely no sense whatsoever.
> >
> 
> True :-)
> 
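For context, the child->flags lookup described above is what
update_sd_lb_stats() in kernel/sched/fair.c does today; roughly
(paraphrased from memory, not a verbatim quote):

	struct sched_domain *child = env->sd->child;
	...
	/* Spread at this level if the child domain prefers siblings */
	sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
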
> > In that confusion I think I also got the polarity wrong, I thought you
> > wanted to kill prefer_sibling for the asymmetric SMT cases, instead you
> > want to force-enable it as long as there is one SMT child around.

Exactly.

> >
> > Whichever way around we do it, I'm thinking perhaps some renaming
> > might be in order to clarify things.
> >
> > How about adding a flag SD_SPREAD_TASKS, which is the effective toggle
> > of the behaviour, but have it be set by children with SD_PREFER_SIBLING
> > or something.
> >
> 
> Or entirely bin SD_PREFER_SIBLING and stick with SD_SPREAD_TASKS, but yeah
> something along those lines.

I sense a consensus towards SD_SPREAD_TASKS.
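
A minimal sketch of that idea, treating SD_SPREAD_TASKS as a
hypothetical flag, with the propagation point picked for illustration
rather than taken from a posted patch:

	/* kernel/sched/topology.c: set the effective toggle at build time */
	if (sd->child && (sd->child->flags & SD_PREFER_SIBLING))
		sd->flags |= SD_SPREAD_TASKS;

	/* kernel/sched/fair.c: the balancer then reads the current domain */
	sds->prefer_sibling = !!(env->sd->flags & SD_SPREAD_TASKS);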

> 
> > OTOH, there's also
> >
> >   if (busiest->group_weight == 1 || sds->prefer_sibling) {
> >
> > which explicitly also takes the group-of-one (the !child case) into
> > account, but that's not consistently done.
> >
> >       sds->prefer_sibling = !child || child->flags & SD_PREFER_SIBLING;

This would need a special provision for SD_ASYM_CPUCAPACITY.
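
Perhaps something along these lines (my guess at what the provision
would look like, untested):

	sds->prefer_sibling = (!child || child->flags & SD_PREFER_SIBLING) &&
			      !(env->sd->flags & SD_ASYM_CPUCAPACITY);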

> >
> > seems an interesting option,
> 
> > except perhaps ASYM_CPUCAPACITY -- I
> > forget, can CPUs of different capacity be in the same leaf domain? With
> > big.LITTLE I think not, they had their own cache domains and so you get
> > at least MC domains per capacity, but DynamiQ might have totally wrecked
> > that party.
> 
> Yeah, newer systems can have different capacities in one MC domain, cf:
> 
>   b7a331615d25 ("sched/fair: Add asymmetric CPU capacity wakeup scan")
> 
> >
> >> (+ add it to the first NUMA level to keep things as they are, even if TBF I
> >> find relying on it for NUMA balancing a bit odd).
> >
> > Arguably it ought to perhaps be one of those node_reclaim_distance
> > things. The thing is that NUMA-1 is often fairly quick, esp. these days
> > where it's basically on-die NUMA.

To preserve the current behavior, the NUMA level would need to have
SD_SPREAD_TASKS. It would then be cleared along with SD_BALANCE_{EXEC, FORK}
and SD_WAKE_AFFINE if the NUMA distance is larger than
node_reclaim_distance, yes?
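
For reference, the existing clearing in sd_init() in
kernel/sched/topology.c looks roughly like this; adding the new flag
to the mask (the SD_SPREAD_TASKS line is the hypothetical part) would
be the provision in question:

	if (sched_domains_numa_distance[tl->numa_level] > node_reclaim_distance) {
		sd->flags &= ~(SD_BALANCE_EXEC |
			       SD_BALANCE_FORK |
			       SD_WAKE_AFFINE |
			       SD_SPREAD_TASKS);	/* hypothetical */
	}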

Thanks and BR,
Ricardo
