Message-ID: <CADjb_WR+611uXfPjME4dTeLRPsKTYoR52X4KSuxhZts1SSnrWA@mail.gmail.com>
Date:   Wed, 19 Feb 2020 18:00:05 +0800
From:   Chen Yu <yu.chen.surf@...il.com>
To:     Valentin Schneider <valentin.schneider@....com>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Mel Gorman <mgorman@...e.de>, Tony Luck <tony.luck@...el.com>,
        Aubrey Li <aubrey.li@...ux.intel.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Borislav Petkov <bp@...e.de>
Subject: Re: [RFC] Display the cpu of sched domain in procfs

Hi Valentin,
Thanks very much for looking at my question.
On Wed, Feb 19, 2020 at 4:13 PM Valentin Schneider
<valentin.schneider@....com> wrote:
>
> Hi,
>
> On 19/02/2020 07:15, Chen Yu wrote:
> > Problem:
> > The sched domain topology is not always consistent with the CPU topology
> > exposed at /sys/devices/system/cpu/cpuX/topology, which makes it hard for
> > monitoring tools to distinguish the CPUs among different sched domains.
> >
> > For example, on x86, if there are NUMA nodes within a package, say
> > SNC (Sub-NUMA Cluster), then there would be no DIE sched domain but only
> > NUMA sched domains created. As a result, you don't know what the sched
> > domain hierarchy is by only looking at
> > /sys/devices/system/cpu/cpuX/topology.
> >
> > Although appending sched_debug to the command line shows the sched domain
> > CPU topology, it is only printed once during boot, which makes it hard to
> > track at run time.
> >
>
> It should be (and in my experience, is) printed any time there is a sched
> domain update - hotplug, cpusets; IOW not just at bootup.
>
Right, whenever a domain has changed it will be printed.
> e.g. if I hotplug out a CPU:
>
> root@...sch-juno:~# echo 0 > /sys/devices/system/cpu/cpu3/online
> [40150.882586] CPU3: shutdown
> [40150.885383] psci: CPU3 killed (polled 0 ms)
> [40150.891362] CPU0 attaching NULL sched-domain.
> [40150.895954] CPU1 attaching NULL sched-domain.
> [40150.900433] CPU2 attaching NULL sched-domain.
> [40150.906583] CPU3 attaching NULL sched-domain.
> [40150.910998] CPU4 attaching NULL sched-domain.
> [40150.915444] CPU5 attaching NULL sched-domain.
> [40150.920108] CPU0 attaching sched-domain(s):
> [40150.924396]  domain-0: span=0,4-5 level=MC
> [40150.928592]   groups: 0:{ span=0 cap=444 }, 4:{ span=4 cap=445 }, 5:{ span=5 cap=446 }
> [40150.936684]   domain-1: span=0-2,4-5 level=DIE
> [40150.941207]    groups: 0:{ span=0,4-5 cap=1335 }, 1:{ span=1-2 cap=2041 }
> [40150.948107] CPU1 attaching sched-domain(s):
> [40150.952342]  domain-0: span=1-2 level=MC
> [40150.956311]   groups: 1:{ span=1 cap=1020 }, 2:{ span=2 cap=1021 }
> [40150.962592]   domain-1: span=0-2,4-5 level=DIE
> [40150.967082]    groups: 1:{ span=1-2 cap=2041 }, 0:{ span=0,4-5 cap=1335 }
> [40150.973984] CPU2 attaching sched-domain(s):
> [40150.978208]  domain-0: span=1-2 level=MC
> [40150.982176]   groups: 2:{ span=2 cap=1021 }, 1:{ span=1 cap=1021 }
> [40150.988431]   domain-1: span=0-2,4-5 level=DIE
> [40150.992922]    groups: 1:{ span=1-2 cap=2042 }, 0:{ span=0,4-5 cap=1335 }
> [40150.999819] CPU4 attaching sched-domain(s):
> [40151.004045]  domain-0: span=0,4-5 level=MC
> [40151.008186]   groups: 4:{ span=4 cap=445 }, 5:{ span=5 cap=446 }, 0:{ span=0 cap=444 }
> [40151.016220]   domain-1: span=0-2,4-5 level=DIE
> [40151.020722]    groups: 0:{ span=0,4-5 cap=1335 }, 1:{ span=1-2 cap=2044 }
> [40151.027619] CPU5 attaching sched-domain(s):
> [40151.031843]  domain-0: span=0,4-5 level=MC
> [40151.035985]   groups: 5:{ span=5 cap=446 }, 0:{ span=0 cap=444 }, 4:{ span=4 cap=445 }
> [40151.044021]   domain-1: span=0-2,4-5 level=DIE
> [40151.048512]    groups: 0:{ span=0,4-5 cap=1335 }, 1:{ span=1-2 cap=2043 }
> [40151.055440] root domain span: 0-2,4-5 (max cpu_capacity = 1024)
>
>
> Same for setting up cpusets:
>
> root@...sch-juno:~# cgset -r cpuset.mems=0 asym
> root@...sch-juno:~# cgset -r cpuset.cpu_exclusive=1 asym
> root@...sch-juno:~#
> root@...sch-juno:~# cgcreate -g cpuset:smp
> root@...sch-juno:~# cgset -r cpuset.cpus=4-5 smp
> root@...sch-juno:~# cgset -r cpuset.mems=0 smp
> root@...sch-juno:~# cgset -r cpuset.cpu_exclusive=1 smp
> root@...sch-juno:~#
> root@...sch-juno:~# cgset -r cpuset.sched_load_balance=0 .
> [40224.135466] CPU0 attaching NULL sched-domain.
> [40224.140038] CPU1 attaching NULL sched-domain.
> [40224.144531] CPU2 attaching NULL sched-domain.
> [40224.148951] CPU3 attaching NULL sched-domain.
> [40224.153366] CPU4 attaching NULL sched-domain.
> [40224.157811] CPU5 attaching NULL sched-domain.
> [40224.162394] CPU0 attaching sched-domain(s):
> [40224.166623]  domain-0: span=0,3 level=MC
> [40224.170624]   groups: 0:{ span=0 cap=445 }, 3:{ span=3 cap=446 }
> [40224.176709]   domain-1: span=0-3 level=DIE
> [40224.180884]    groups: 0:{ span=0,3 cap=891 }, 1:{ span=1-2 cap=2044 }
> [40224.187497] CPU1 attaching sched-domain(s):
> [40224.191753]  domain-0: span=1-2 level=MC
> [40224.195724]   groups: 1:{ span=1 cap=1021 }, 2:{ span=2 cap=1023 }
> [40224.202010]   domain-1: span=0-3 level=DIE
> [40224.206154]    groups: 1:{ span=1-2 cap=2044 }, 0:{ span=0,3 cap=890 }
> [40224.212792] CPU2 attaching sched-domain(s):
> [40224.217020]  domain-0: span=1-2 level=MC
> [40224.220989]   groups: 2:{ span=2 cap=1023 }, 1:{ span=1 cap=1020 }
> [40224.227244]   domain-1: span=0-3 level=DIE
> [40224.231386]    groups: 1:{ span=1-2 cap=2042 }, 0:{ span=0,3 cap=889 }
> [40224.238025] CPU3 attaching sched-domain(s):
> [40224.242252]  domain-0: span=0,3 level=MC
> [40224.246221]   groups: 3:{ span=3 cap=446 }, 0:{ span=0 cap=443 }
> [40224.252329]   domain-1: span=0-3 level=DIE
> [40224.256474]    groups: 0:{ span=0,3 cap=889 }, 1:{ span=1-2 cap=2042 }
> [40224.263142] root domain span: 0-3 (max cpu_capacity = 1024)
> [40224.268945] CPU4 attaching sched-domain(s):
> [40224.273200]  domain-0: span=4-5 level=MC
> [40224.277173]   groups: 4:{ span=4 cap=446 }, 5:{ span=5 cap=444 }
> [40224.283291] CPU5 attaching sched-domain(s):
> [40224.287517]  domain-0: span=4-5 level=MC
> [40224.291487]   groups: 5:{ span=5 cap=444 }, 4:{ span=4 cap=446 }
> [40224.297584] root domain span: 4-5 (max cpu_capacity = 446)
> [40224.303185] rd 4-5: CPUs do not have asymmetric capacities
>
> So in short, if you have sched_debug enabled, whatever is reported as the
> last sched domain hierarchy in dmesg will be the one in use.
>
> Now, if you have a userspace that tries to be clever and wants to use this
> information, then yes, this isn't ideal - but that's a different matter.
The dmesg might be lost if someone has used dmesg -c to clear the log, and
/var/log/dmesg might not always be there. Also, in some environments it is
not common to trigger a sched domain update after boot.
But anyway, this information printed by sched_debug is very useful for
knowing the topology.
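For reference, if the messages are still in the kernel ring buffer, something
like this (just a sketch) can pull the last reported hierarchy back out:

  # Show the most recent sched-domain report still in the ring buffer
  # (only helps as long as the log has not been cleared or rotated away).
  dmesg | grep -E 'attaching (NULL )?sched-domain|domain-[0-9]|groups:|root domain span' | tail -n 40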
> I think exposing the NUMA boundaries is fair game - and they already are
> via /sys/devices/system/node/node*/.
It seems that the NUMA sysfs cannot reflect the SNC topology; it only has the
*leaf* NUMA node information. Say, node0 and node1 might together form one
sched domain.
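For illustration (a rough sketch; the exact sysfs file names vary a bit with
kernel version), this is all the leaf-level information those interfaces give:

  # One flat cpulist per NUMA node, with no hierarchy between nodes:
  for n in /sys/devices/system/node/node*/cpulist; do echo "$n: $(cat "$n")"; done

  # The per-CPU topology sysfs similarly stops at thread/core/package level:
  grep . /sys/devices/system/cpu/cpu0/topology/*_list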
> I'm not sure we'd want to expose more
> (e.g. MC span), ideally that is something you shouldn't really have to care
> about - that's the scheduler's job.
I agree; it is just meant to facilitate user space, or to help with
debugging.
But as Dietmar replied, /proc/schedstat already exposes it.
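For completeness, with CONFIG_SCHEDSTATS enabled, something along these lines
lists the per-CPU domain spans from /proc/schedstat (a sketch; the second
field of each domainN line is the hex cpumask of that domain's span):

  # Print "cpuN domainM mask: <hex cpumask>" for every sched domain level:
  awk '/^cpu/ { c = $1 } /^domain/ { print c, $1, "mask:", $2 }' /proc/schedstat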


thanks again,
Chenyu
