Message-ID: <YK4Ho7e+LCqjYA2X@hirez.programming.kicks-ass.net>
Date: Wed, 26 May 2021 10:32:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: kajoljain <kjain@...ux.ibm.com>
Cc: mpe@...erman.id.au, linuxppc-dev@...ts.ozlabs.org,
linux-nvdimm@...ts.01.org, linux-kernel@...r.kernel.org,
maddy@...ux.vnet.ibm.com, santosh@...six.org,
aneesh.kumar@...ux.ibm.com, vaibhav@...ux.ibm.com,
dan.j.williams@...el.com, ira.weiny@...el.com,
atrajeev@...ux.vnet.ibm.com, tglx@...utronix.de,
rnsastry@...ux.ibm.com
Subject: Re: [RFC v2 4/4] powerpc/papr_scm: Add cpu hotplug support for
nvdimm pmu device

On Wed, May 26, 2021 at 12:56:58PM +0530, kajoljain wrote:
> On 5/25/21 7:46 PM, Peter Zijlstra wrote:
> > On Tue, May 25, 2021 at 06:52:16PM +0530, Kajol Jain wrote:
> >> It adds a cpumask to designate a cpu to make the HCALL to
> >> collect the counter data for the nvdimm device, and updates
> >> the ABI documentation accordingly.
> >>
> >> Result on a power9 lpar system:
> >> command:# cat /sys/devices/nmem0/cpumask
> >> 0
> >
> > Is this specific to the papr thing, or should this be in generic nvdimm
> > code?
>
> This code is not specific to the papr device, so we can move it to the
> generic nvdimm interface. But do we need to add some check for whether
> an arch/platform-specific driver wants that support, or can we assume
> that this will be something needed by all platforms?

I'm a complete NVDIMM n00b, but to me it would appear they would have to
conform to the normal memory hierarchy and would thus always be
per-node.
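
Concretely, the sysfs side of that in the generic layer could look
something like the sketch below. The struct and field names are made up
for illustration; cpumap_print_to_pagebuf(), DEVICE_ATTR_RO() and the
pmu drvdata convention are the stock kernel interfaces.

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/perf_event.h>

/* Placeholder generic wrapper; real names TBD. */
struct nvdimm_pmu {
	struct pmu pmu;
	int cpu;		/* CPU designated for counter access */
	struct hlist_node node;	/* cpuhp multi-instance hook */
};

static ssize_t cpumask_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct pmu *pmu = dev_get_drvdata(dev);
	struct nvdimm_pmu *nd_pmu = container_of(pmu, struct nvdimm_pmu, pmu);

	/* Print the single designated CPU, as in the power9 example above. */
	return cpumap_print_to_pagebuf(true, buf, cpumask_of(nd_pmu->cpu));
}
static DEVICE_ATTR_RO(cpumask);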

Also, if/when deviation from this rule is observed, we can always
rework/extend this. For now I think it would make sense to have the
per-node-ness of the thing expressed in the generic layer.
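
A rough sketch of the matching hotplug side, reusing the placeholder
struct from the sketch above; cpuhp_setup_state_multi(),
perf_pmu_migrate_context() and the cpumask helpers are the existing
kernel APIs, everything else is made up:

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/perf_event.h>
#include <linux/topology.h>

static enum cpuhp_state nvdimm_cpuhp_state;

static int nvdimm_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
{
	struct nvdimm_pmu *nd_pmu = hlist_entry_safe(node, struct nvdimm_pmu, node);
	unsigned int target;

	/* Only act when the CPU going down is the designated one. */
	if (cpu != nd_pmu->cpu)
		return 0;

	/*
	 * Prefer another CPU on the same node; a real implementation
	 * would also need to check that the target is actually online.
	 */
	target = cpumask_any_but(cpumask_of_node(cpu_to_node(cpu)), cpu);
	if (target >= nr_cpu_ids)
		target = cpumask_any_but(cpu_online_mask, cpu);

	nd_pmu->cpu = target;
	if (target < nr_cpu_ids)
		perf_pmu_migrate_context(&nd_pmu->pmu, cpu, target);

	return 0;
}

At driver init this would pair with:

	nvdimm_cpuhp_state = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN,
			"nvdimm:cpuhp", NULL, nvdimm_pmu_cpu_offline);

and a cpuhp_state_add_instance_nocalls(nvdimm_cpuhp_state, &nd_pmu->node)
per device, so each nvdimm PMU gets the offline callback above.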