Message-ID: <CADjb_WRg-kbWCoPcds82SGFUfSpkvCQytfjvZV674NxOuTRE3Q@mail.gmail.com>
Date: Wed, 19 Feb 2020 18:01:58 +0800
From: Chen Yu <yu.chen.surf@...il.com>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Valentin Schneider <valentin.schneider@....com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Mel Gorman <mgorman@...e.de>, Tony Luck <tony.luck@...el.com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Borislav Petkov <bp@...e.de>
Subject: Re: [RFC] Display the cpu of sched domain in procfs
Hi Dietmar,
On Wed, Feb 19, 2020 at 4:53 PM Dietmar Eggemann
<dietmar.eggemann@....com> wrote:
>
> On 19/02/2020 09:13, Valentin Schneider wrote:
> > Hi,
> >
> > On 19/02/2020 07:15, Chen Yu wrote:
> >> Problem:
> >> The sched domain topology is not always consistent with the CPU topology
> >> exposed at /sys/devices/system/cpu/cpuX/topology, which makes it hard
> >> for monitoring tools to distinguish the CPUs in different sched domains.
> >>
> >> For example, on x86, if there are NUMA nodes within a package, say with
> >> SNC (Sub-NUMA Clustering), then no die sched domain is created, only
> >> NUMA sched domains. As a result, you cannot tell what the sched domain
> >> hierarchy is by looking only at /sys/devices/system/cpu/cpuX/topology.
> >>
> >> Although appending sched_debug to the command line shows the sched
> >> domain CPU topology, it is only printed once during boot, which makes
> >> it hard to track at run-time.
>
> What about /proc/schedstat?
>
That's it! I had not noticed it before. This should work, although user
space might need to parse the format.
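
In case it helps, here is a rough user-space sketch (my own, not from the
kernel tree) of what that parsing could look like, assuming the
"domainN <hex cpumask> <counters>" layout shown in the quoted
/proc/schedstat output below; the helper names are made up for
illustration:

    #!/usr/bin/env python3
    # Rough sketch: print the CPU span of each sched domain from
    # /proc/schedstat, assuming "domainN <hex cpumask> ..." lines as in
    # the sample output quoted in this thread.

    def mask_to_cpus(mask):
        """Turn a cpumask string like '00,3ff003ff' into a list of CPUs."""
        # Comma-separated 32-bit hex words, most significant word first.
        value = int(mask.replace(",", ""), 16)
        return [cpu for cpu in range(value.bit_length()) if value >> cpu & 1]

    def parse_schedstat(path="/proc/schedstat"):
        """Return {cpu: {domain: [cpus]}} built from the domain lines."""
        spans, cur_cpu = {}, None
        with open(path) as f:
            for line in f:
                fields = line.split()
                if not fields:
                    continue
                if fields[0].startswith("cpu"):
                    cur_cpu = fields[0]
                    spans[cur_cpu] = {}
                elif fields[0].startswith("domain") and cur_cpu is not None:
                    spans[cur_cpu][fields[0]] = mask_to_cpus(fields[1])
        return spans

    if __name__ == "__main__":
        for cpu, domains in parse_schedstat().items():
            for domain, cpus in domains.items():
                print(f"{cpu} {domain}: CPUs {cpus}")

For the sample below, cpu0 domain0 ("00,00100001") would decode to
CPUs [0, 20], i.e. the SMT siblings.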
--
Thanks,
Chenyu
> E.g. on Intel Xeon CPU E5-2690 v2
>
> $ cat /proc/schedstat | head
> version 15
> timestamp 4486170100
> cpu0 0 0 0 0 0 0 59501267037720 16902762382193 1319621004
> domain0 00,00100001 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> domain1 00,3ff003ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> domain2 ff,ffffffff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>
> ^^^^^^^^^^^
>
> cpu1 0 0 0 0 0 0 56045879920164 16758983055410 1318489275
> domain0 00,00200002 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> domain1 00,3ff003ff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> domain2 ff,ffffffff 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> ...
>