Date:   Mon, 1 May 2023 18:42:55 -0700
From:   Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Ricardo Neri <ricardo.neri@...el.com>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>,
        Ben Segall <bsegall@...gle.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Len Brown <len.brown@...el.com>, Mel Gorman <mgorman@...e.de>,
        "Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Ionela Voinescu <ionela.voinescu@....com>, x86@...nel.org,
        linux-kernel@...r.kernel.org,
        Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>,
        Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        naveen.n.rao@...ux.vnet.ibm.com
Subject: Re: [PATCH v4 00/12] sched: Avoid unnecessary migrations within SMT
 domains

On Sat, Apr 29, 2023 at 05:32:19PM +0200, Peter Zijlstra wrote:
> On Thu, Apr 06, 2023 at 01:31:36PM -0700, Ricardo Neri wrote:
> > Hi,
> > 
> > This is v4 of this series. Previous versions can be found in [1], [2],
> > and [3]. To avoid duplication, I do not include the cover letter of
> > the original submission. You can read it in [1].
> > 
> > This patchset applies cleanly on today's master branch of the tip tree.
> > 
> > Changes since v3:
> > 
> > Nobody liked the proposed changes to the setting of prefer_sibling.
> > Instead, I tweaked the solution that Dietmar proposed. Now the busiest
> > group, not the local group, determines the setting of prefer_sibling.
> > 
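For reference, a rough sketch of the idea follows; the type and the flag
value are simplified stand-ins, not the actual patch:

struct sched_group_stats {
	int		sum_nr_running;
	unsigned long	flags;		/* flags of the group's child domain */
};

#define SD_PREFER_SIBLING	0x40	/* illustrative value only */

/*
 * Once the busiest group is known, derive prefer_sibling from the
 * flags of *its* child domain instead of the local group's.
 */
static int compute_prefer_sibling(const struct sched_group_stats *busiest)
{
	return busiest && (busiest->flags & SD_PREFER_SIBLING);
}
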
> > Vincent suggested improvements to the logic to decide whether to follow
> > asym_packing priorities. Peter suggested wrapping that in a helper function.
> > I added sched_use_asym_prio().
> > 
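A minimal sketch of what such a helper looks like, with extern stubs
standing in for the real kernel helpers (not the literal patch):

#include <stdbool.h>

struct sched_domain { unsigned long flags; };
#define SD_SHARE_CPUCAPACITY	0x1	/* illustrative value only */

extern bool sched_smt_active(void);
extern bool is_core_idle(int cpu);

/*
 * Follow asym_packing priorities only when SMT is off, when the domain
 * spans the SMT siblings themselves, or when the whole core of @cpu is
 * idle.
 */
static bool sched_use_asym_prio(struct sched_domain *sd, int cpu)
{
	if (!sched_smt_active())
		return true;

	return (sd->flags & SD_SHARE_CPUCAPACITY) || is_core_idle(cpu);
}
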
> > Ionela found that removing SD_ASYM_PACKING from the SMT domain in x86
> > rendered sd_asym_packing NULL in SMT cores. Now highest_flag_domain()
> > does not assume that all child domains have the requested flag.
> > 
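Roughly, the lookup now walks up the hierarchy and only stops early for
flags that are guaranteed to be shared with all child domains. A
simplified sketch; the shared_child_mask parameter is an illustrative
stand-in for the SDF_SHARED_CHILD metadata:

struct sched_domain { unsigned long flags; struct sched_domain *parent; };

/*
 * Return the highest domain (starting from the lowest one, @sd) that
 * has @flag set. A level without the flag ends the search only when
 * the flag is known to be shared by all child domains.
 */
static struct sched_domain *highest_flag_domain(struct sched_domain *sd,
						unsigned long flag,
						unsigned long shared_child_mask)
{
	struct sched_domain *hsd = NULL;

	for (; sd; sd = sd->parent) {
		if (sd->flags & flag) {
			hsd = sd;
			continue;
		}
		if (flag & shared_child_mask)
			break;	/* cannot reappear further up */
	}

	return hsd;
}
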
> > Tim found that asym_active_balance() needs to also check for the idle
> > states of the SMT siblings of lb_env::dst_cpu. I added such a check.
> > 
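Something along these lines, simplified and omitting the existing idle
and SD_ASYM_PACKING checks (the stubs are stand-ins, not the literal
patch):

#include <stdbool.h>

struct sched_domain;
struct lb_env { int src_cpu; int dst_cpu; struct sched_domain *sd; };

extern bool sched_use_asym_prio(struct sched_domain *sd, int cpu);
extern bool sched_asym_prefer(int a, int b);

/*
 * Only attempt an asym_packing active balance when the destination CPU
 * can actually make use of its priority, i.e. when its SMT siblings
 * are idle.
 */
static bool asym_active_balance(struct lb_env *env)
{
	return sched_use_asym_prio(env->sd, env->dst_cpu) &&
	       sched_asym_prefer(env->dst_cpu, env->src_cpu);
}
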
> > I wrongly assumed that asym_packing could only be used when the busiest
> > group had exactly one busy CPU. This broke asym_packing balancing at the
> > DIE domain. I limited this check to balances between cores at the MC
> > level.
> > 
> > As per suggestion from Dietmar, I removed sched_asym_smt_can_pull_tasks()
> > and placed its logic in sched_asym(). Also, sched_asym() uses
> > sched_smt_active() to skip checks when not needed.
> > 
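In rough terms (the sibling check below is a hypothetical placeholder
for the logic that moved out of sched_asym_smt_can_pull_tasks()):

#include <stdbool.h>

extern bool sched_smt_active(void);
extern bool sched_asym_prefer(int a, int b);
extern bool smt_siblings_allow_pull(int dst_cpu, int src_cpu);	/* placeholder */

/*
 * When SMT is inactive there are no sibling considerations, so skip
 * the SMT-specific checks and fall through to a plain priority
 * comparison.
 */
static bool sched_asym(int dst_cpu, int src_cpu)
{
	if (sched_smt_active() && !smt_siblings_allow_pull(dst_cpu, src_cpu))
		return false;

	return sched_asym_prefer(dst_cpu, src_cpu);
}
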
> > I also added a patch from Chen Yu to enable asym_packing balancing in
> > Meteor Lake, which has CPUs of different maximum frequency in more than
> > one die.
> 
> Is the actual topology of Meteor Lake already public? This patch made me
> wonder if we need SCHED_CLUSTER topology in the hybrid_topology thing,

Indeed, Meteor Lake will need SCHED_CLUSTER, just as Alder Lake does. This
is in addition to multi-die support.


> but I can't remember (one of the reasons why the endless calls are such
> a frigging waste of time) and I can't seem to find the answer using
> Google either.
> 
> > Hopefully, these patches are in sufficiently good shape to be merged?
> 
> Changelogs are very sparse towards the end and I had to reverse engineer
> some of it which is a shame. But yeah, on a first reading the code looks
> mostly ok. Specifically 8-10 had me WTF a bit and only at 11 did it
> start to make a little sense. Mostly they utterly fail to answer the
> very fundamental "why did you do this" question.

I am sorry the changelogs are not sufficiently clear. I thought stating the
overall goal in the cover letter was enough. In the future, would you
prefer that I repeat the cover letter instead of referring to it? Should
individual changelogs state the overall goal?

> 
> Also, you seem to have forgotten to Cc our friends from IBM such that
> they might verify you didn't break their Power7 stuff -- or do you have
> a Power7 yourself to verify and forgot to mention that?

I do not have a Power7 system. I did emulate it on an x86 system by
giving all cores the same sg->asym_prefer_cpu. Within cores, the SMT
siblings had asymmetric priorities. It was only SMT2, though.

> 
> > Chen Yu (1):
> >   x86/sched: Add the SD_ASYM_PACKING flag to the die domain of hybrid
> >     processors
> > 
> > Ricardo Neri (11):
> >   sched/fair: Move is_core_idle() out of CONFIG_NUMA
> >   sched/fair: Only do asym_packing load balancing from fully idle SMT
> >     cores
> >   sched/fair: Simplify asym_packing logic for SMT cores
> >   sched/fair: Let low-priority cores help high-priority busy SMT cores
> >   sched/fair: Keep a fully_busy SMT sched group as busiest
> >   sched/fair: Use the busiest group to set prefer_sibling
> >   sched/fair: Do not even the number of busy CPUs via asym_packing
> >   sched/topology: Check SDF_SHARED_CHILD in highest_flag_domain()
> >   sched/topology: Remove SHARED_CHILD from ASYM_PACKING
> >   x86/sched: Remove SD_ASYM_PACKING from the SMT domain flags
> >   x86/sched/itmt: Give all SMT siblings of a core the same priority
> > 
> >  arch/x86/kernel/itmt.c         |  23 +---
> >  arch/x86/kernel/smpboot.c      |   4 +-
> >  include/linux/sched/sd_flags.h |   5 +-
> >  kernel/sched/fair.c            | 216 +++++++++++++++++----------------
> >  kernel/sched/sched.h           |  22 +++-
> >  5 files changed, 138 insertions(+), 132 deletions(-)
> 
> I'm going to start to queue this and hopefully push out post -rc1 if
> nobody objects.

Thanks! Will this land in v6.4 or v6.5?

BR,
Ricardo
