Message-ID: <jhjzh0dtqf9.mognet@arm.com>
Date:   Tue, 09 Feb 2021 11:46:02 +0000
From:   Valentin Schneider <valentin.schneider@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>,
        Barry Song <song.bao.hua@...ilicon.com>
Cc:     Mel Gorman <mgorman@...e.de>, Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        linuxarm@...neuler.org, "xuwei \(O\)" <xuwei5@...wei.com>,
        "Liguozhu \(Kenneth\)" <liguozhu@...ilicon.com>,
        tiantao6@...ilicon.com, wanghuiqiang@...wei.com,
        "Zengtao \(B\)" <prime.zeng@...ilicon.com>,
        Jonathan Cameron <jonathan.cameron@...wei.com>,
        Guodong Xu <guodong.xu@...aro.org>,
        Meelis Roos <mroos@...ux.ee>
Subject: Re: [PATCH v3] sched/topology: fix the issue groups don't span domain->span for NUMA diameter > 2

On 09/02/21 10:46, Vincent Guittot wrote:
> On Tue, 9 Feb 2021 at 09:27, Barry Song <song.bao.hua@...ilicon.com> wrote:
>> Real servers which suffer from this problem include Kunpeng920 and 8-node
>> Sun Fire X4600-M2, at least.
>>
>> Here we move to use the *child* domain of the *child* domain of node2's
>> domain2 as the newly added sched_group. At the same time, we re-use the
>> lower level sgc directly.
>
> Have you evaluated the impact on the imbalance and next_update fields ?
>

sgc->next_update is safe since it's only touched by CPUs that have the
group's span as their local group (which is never the case for the CPUs
where we do this "grandchildren" trick).

I'm a bit less clear about sgc->imbalance. I think it can be set by remote
CPUs, but it should only be cleared by CPUs running load_balance() that
have that group's span as their local group, as per:

  int *group_imbalance = &sd_parent->groups->sgc->imbalance;
