Message-ID: <YFR2kwakbcGiI37w@kroah.com>
Date: Fri, 19 Mar 2021 11:01:55 +0100
From: Greg KH <gregkh@...uxfoundation.org>
To: Jonathan Cameron <Jonathan.Cameron@...wei.com>
Cc: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
"tim.c.chen@...ux.intel.com" <tim.c.chen@...ux.intel.com>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"will@...nel.org" <will@...nel.org>,
"rjw@...ysocki.net" <rjw@...ysocki.net>,
"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
"bp@...en8.de" <bp@...en8.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"lenb@...nel.org" <lenb@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"dietmar.eggemann@....com" <dietmar.eggemann@....com>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"bsegall@...gle.com" <bsegall@...gle.com>,
"mgorman@...e.de" <mgorman@...e.de>,
"msys.mizuma@...il.com" <msys.mizuma@...il.com>,
"valentin.schneider@....com" <valentin.schneider@....com>,
"juri.lelli@...hat.com" <juri.lelli@...hat.com>,
"mark.rutland@....com" <mark.rutland@....com>,
"sudeep.holla@....com" <sudeep.holla@....com>,
"aubrey.li@...ux.intel.com" <aubrey.li@...ux.intel.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>, "xuwei (O)" <xuwei5@...wei.com>,
"Zengtao (B)" <prime.zeng@...ilicon.com>,
"guodong.xu@...aro.org" <guodong.xu@...aro.org>,
yangyicong <yangyicong@...wei.com>,
"Liguozhu (Kenneth)" <liguozhu@...ilicon.com>,
"linuxarm@...neuler.org" <linuxarm@...neuler.org>,
"hpa@...or.com" <hpa@...or.com>
Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a
die
On Fri, Mar 19, 2021 at 09:36:16AM +0000, Jonathan Cameron wrote:
> On Fri, 19 Mar 2021 06:57:08 +0000
> "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com> wrote:
>
> > > -----Original Message-----
> > > From: Greg KH [mailto:gregkh@...uxfoundation.org]
> > > Sent: Friday, March 19, 2021 7:35 PM
> > > To: Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>
> > > Cc: tim.c.chen@...ux.intel.com; catalin.marinas@....com; will@...nel.org;
> > > rjw@...ysocki.net; vincent.guittot@...aro.org; bp@...en8.de;
> > > tglx@...utronix.de; mingo@...hat.com; lenb@...nel.org; peterz@...radead.org;
> > > dietmar.eggemann@....com; rostedt@...dmis.org; bsegall@...gle.com;
> > > mgorman@...e.de; msys.mizuma@...il.com; valentin.schneider@....com; Jonathan
> > > Cameron <jonathan.cameron@...wei.com>; juri.lelli@...hat.com;
> > > mark.rutland@....com; sudeep.holla@....com; aubrey.li@...ux.intel.com;
> > > linux-arm-kernel@...ts.infradead.org; linux-kernel@...r.kernel.org;
> > > linux-acpi@...r.kernel.org; x86@...nel.org; xuwei (O) <xuwei5@...wei.com>;
> > > Zengtao (B) <prime.zeng@...ilicon.com>; guodong.xu@...aro.org; yangyicong
> > > <yangyicong@...wei.com>; Liguozhu (Kenneth) <liguozhu@...ilicon.com>;
> > > linuxarm@...neuler.org; hpa@...or.com
> > > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> > > a die
> > >
> > > On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:
> > > > diff --git a/Documentation/admin-guide/cputopology.rst
> > > b/Documentation/admin-guide/cputopology.rst
> > > > index b90dafc..f9d3745 100644
> > > > --- a/Documentation/admin-guide/cputopology.rst
> > > > +++ b/Documentation/admin-guide/cputopology.rst
> > > > @@ -24,6 +24,12 @@ core_id:
> > > > identifier (rather than the kernel's). The actual value is
> > > > architecture and platform dependent.
> > > >
> > > > +cluster_id:
> > > > +
> > > > + the Cluster ID of cpuX. Typically it is the hardware platform's
> > > > + identifier (rather than the kernel's). The actual value is
> > > > + architecture and platform dependent.
> > > > +
> > > > book_id:
> > > >
> > > > the book ID of cpuX. Typically it is the hardware platform's
> > > > @@ -56,6 +62,14 @@ package_cpus_list:
> > > > human-readable list of CPUs sharing the same physical_package_id.
> > > > (deprecated name: "core_siblings_list")
> > > >
> > > > +cluster_cpus:
> > > > +
> > > > + internal kernel map of CPUs within the same cluster.
> > > > +
> > > > +cluster_cpus_list:
> > > > +
> > > > + human-readable list of CPUs within the same cluster.
> > > > +
> > > > die_cpus:
> > > >
> > > > internal kernel map of CPUs within the same die.
> > >
> > > Why are these sysfs files in this file, and not in a Documentation/ABI/
> > > file which can be correctly parsed and shown to userspace?
> >
> > Well. Those ABIs have been there for a very long time. It looks like this:
> >
> > [root@...h1 topology]# ls
> > core_id core_siblings core_siblings_list physical_package_id thread_siblings thread_siblings_list
> > [root@...h1 topology]# pwd
> > /sys/devices/system/cpu/cpu100/topology
> > [root@...h1 topology]# cat core_siblings_list
> > 64-127
> > [root@...h1 topology]#
> >
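For context, a minimal userspace sketch of reading one of these attributes
(the path and file name are the ones shown in the session above; with this
series applied, cluster_cpus_list in the same directory would be read the
same way):

#include <stdio.h>

int main(void)
{
	char buf[256];
	/* same file catted in the session above */
	FILE *f = fopen("/sys/devices/system/cpu/cpu100/topology/core_siblings_list", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("core siblings of cpu100: %s", buf);	/* e.g. "64-127" */
	fclose(f);
	return 0;
}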
> > >
> > > Any chance you can fix that up here as well?
> >
> > Yes. We will send a separate patch to address this, which won't
> > be part of this patchset. This patchset will be based on that one.
> >
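(For illustration only, a rough sketch of what such an entry could look like
in the Documentation/ABI format being asked for; the file name, date, and
wording below are placeholders, and the real entries belong to that separate
patch.)

What:		/sys/devices/system/cpu/cpuX/topology/cluster_cpus_list
Date:		March 2021
Contact:	Linux kernel mailing list
Description:	Human-readable list of CPUs within the same cluster as
		cpuX, in cpulist format (e.g. "0-3" or "0,4,8,12").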
> > >
> > > Also note that "list" is not something that goes in sysfs, sysfs is "one
> > > value per file", and a list is not "one value". How do you prevent
> > > overflowing the buffer of the sysfs file if you have a "list"?
> > >
> >
> > At a glance, the list uses a "-" range rather than an explicit comma-separated list:
> > [root@...h1 topology]# cat core_siblings_list
> > 64-127
> >
> > Anyway, I will take a look at whether it has any chance to overflow.
>
> It could in theory be alternating CPUs as a comma-separated list,
> so it would get interesting around 500-1000 CPUs (guessing).
>
> Hopefully no one has that crazy a CPU numbering scheme, but it's possible
> (note that cluster is fine for this, but I guess it might eventually
> happen for the core-siblings list (CPUs within a package)).
>
> It shouldn't crash or anything like that, but it might terminate early.
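For reference, a minimal sketch (not the exact drivers/base/topology.c code,
where these show functions are macro-generated) of how such a _list attribute
is typically emitted; cpumap_print_to_pagebuf() writes into the single
PAGE_SIZE buffer that sysfs hands it, so an over-long list gets truncated
rather than overflowing:

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/topology.h>

/* "true" selects the "0-3,8-11" cpulist format instead of the hex mask */
static ssize_t core_siblings_list_show(struct device *dev,
				       struct device_attribute *attr,
				       char *buf)
{
	return cpumap_print_to_pagebuf(true, buf,
				       topology_core_cpumask(dev->id));
}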
We already have a broken sysfs API for listing LED numbers that has had
to be worked around in the past, so please do not create a new one with
the same problem; we should learn from that :)
thanks,
greg k-h