Message-ID: <4fb2735a-4e2f-d913-a4ee-4a02f2b0c6b3@arm.com>
Date: Wed, 19 Feb 2020 10:46:52 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Chen Yu <yu.chen.surf@...il.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Mel Gorman <mgorman@...e.de>, Tony Luck <tony.luck@...el.com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Borislav Petkov <bp@...e.de>
Subject: Re: [RFC] Display the cpu of sched domain in procfs
On 19/02/2020 10:00, Chen Yu wrote:
>> Now, if you have a userspace that tries to be clever and wants to use this
>> information then yes, this isn't ideal, but then that's a different matter.
> The dmesg output might be lost if someone has used dmesg -c to clear the
> log, and /var/log/dmesg is not always there. It is also not common to
> trigger a sched domain update after boot in some environments.
> But anyway, the information printed by sched_debug is very useful for
> understanding the topology.
>> I think exposing the NUMA boundaries is fair game - and they already are
>> via /sys/devices/system/node/node*/.
> It seems that the NUMA sysfs cannot reflect the SNC topology; it only has the
> *leaf* NUMA node information. Say, node0 and node1 might form one sched_domain.
Right, but if you have leaves + distance table, then userspace can try to
be clever about it without being exposed to scheduler innards.
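
For illustration, a minimal userspace sketch of that approach (not from the
original message): it reads the stable sysfs distance files under
/sys/devices/system/node/node<N>/ and groups nodes that are "close". The
CLOSE_DIST cutoff and the assumption of densely-numbered online nodes are
illustrative assumptions, not kernel ABI.

	#include <stdio.h>

	#define MAX_NODES	64
	#define CLOSE_DIST	20	/* assumed cutoff; tune per platform */

	int main(void)
	{
		int dist[MAX_NODES][MAX_NODES] = { 0 };
		int nr_nodes = 0;

		/*
		 * Read /sys/devices/system/node/node<i>/distance for each
		 * node. Assumes nodes are numbered densely from 0; stop at
		 * the first missing directory.
		 */
		for (int i = 0; i < MAX_NODES; i++) {
			char path[64];
			FILE *f;

			snprintf(path, sizeof(path),
				 "/sys/devices/system/node/node%d/distance", i);
			f = fopen(path, "r");
			if (!f)
				break;

			for (int j = 0; j < MAX_NODES &&
				     fscanf(f, "%d", &dist[i][j]) == 1; j++)
				;
			fclose(f);
			nr_nodes = i + 1;
		}

		/*
		 * Nodes within CLOSE_DIST of each other are candidates for
		 * the same package/SNC grouping, without peeking at any
		 * scheduler internals.
		 */
		for (int i = 0; i < nr_nodes; i++) {
			printf("node%d close to:", i);
			for (int j = 0; j < nr_nodes; j++)
				if (dist[i][j] <= CLOSE_DIST)
					printf(" node%d", j);
			printf("\n");
		}

		return 0;
	}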