Message-ID: <ab7ee13b-fccf-4366-c18c-f63ddf0552e2@linux.ibm.com>
Date: Fri, 28 May 2021 13:23:00 +0530
From: kajoljain <kjain@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mpe@...erman.id.au, linuxppc-dev@...ts.ozlabs.org,
linux-nvdimm@...ts.01.org, linux-kernel@...r.kernel.org,
maddy@...ux.vnet.ibm.com, santosh@...six.org,
aneesh.kumar@...ux.ibm.com, vaibhav@...ux.ibm.com,
dan.j.williams@...el.com, ira.weiny@...el.com,
atrajeev@...ux.vnet.ibm.com, tglx@...utronix.de,
rnsastry@...ux.ibm.com
Subject: Re: [RFC v2 4/4] powerpc/papr_scm: Add cpu hotplug support for nvdimm pmu device

On 5/26/21 2:02 PM, Peter Zijlstra wrote:
> On Wed, May 26, 2021 at 12:56:58PM +0530, kajoljain wrote:
>> On 5/25/21 7:46 PM, Peter Zijlstra wrote:
>>> On Tue, May 25, 2021 at 06:52:16PM +0530, Kajol Jain wrote:
>
>>>> It adds a cpumask to designate a cpu that makes the HCALL to
>>>> collect the counter data for the nvdimm device, and updates the
>>>> ABI documentation accordingly.
>>>>
>>>> Result on a power9 lpar system:
>>>> command:# cat /sys/devices/nmem0/cpumask
>>>> 0
>>>
>>> Is this specific to the papr thing, or should this be in generic nvdimm
>>> code?
>>
>> This code is not specific to the papr device, so we can move it to
>> the generic nvdimm interface. But do we need to add checks for
>> whether any arch/platform-specific driver wants this support, or can
>> we assume it will be needed on all platforms?
>
> I'm a complete NVDIMM n00b, but to me it would appear they would have to
> conform to the normal memory hierarchy and would thus always be
> per-node.
>
> Also, if/when deviation from this rule is observed, we can always
> rework/extend this. For now I think it would make sense to have the
> per-node-ness of the thing expressed in the generic layer.
>
Hi Peter,
Thanks for the suggestion. I will send a new RFC patchset with these
changes.
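
For reference, a rough (and untested) sketch of what that could look
like in the generic layer; the nvdimm_pmu wrapper and the function
names below are hypothetical placeholders, not an existing interface:

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/perf_event.h>

/* Hypothetical generic wrapper carrying the designated cpu. */
struct nvdimm_pmu {
	struct pmu pmu;
	int cpu;			/* cpu that issues the hcall */
	struct hlist_node node;		/* for cpuhp multi-instance */
};

static ssize_t cpumask_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct pmu *pmu = dev_get_drvdata(dev);
	struct nvdimm_pmu *nd_pmu = container_of(pmu, struct nvdimm_pmu, pmu);

	return cpumap_print_to_pagebuf(true, buf, cpumask_of(nd_pmu->cpu));
}
static DEVICE_ATTR_RO(cpumask);

/*
 * When the designated cpu goes offline, pick another online cpu (a
 * per-node version would intersect with the node's cpumask first)
 * and migrate the perf context so counting continues.
 */
static int nvdimm_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
{
	struct nvdimm_pmu *nd_pmu = hlist_entry(node, struct nvdimm_pmu, node);
	int target;

	if (cpu != nd_pmu->cpu)
		return 0;

	target = cpumask_any_but(cpu_online_mask, cpu);
	if (target >= nr_cpu_ids)
		return 0;

	nd_pmu->cpu = target;
	perf_pmu_migrate_context(&nd_pmu->pmu, cpu, target);
	return 0;
}

The offline callback would be registered per-device through
cpuhp_setup_state_multi()/cpuhp_state_add_instance(), so each nvdimm
pmu instance keeps its own designated cpu.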
Thanks,
Kajol Jain