Date:   Fri, 27 May 2022 21:15:09 +0530
From:   Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
To:     Jonathan Cameron <Jonathan.Cameron@...wei.com>
Cc:     linux-mm@...ck.org, akpm@...ux-foundation.org,
        Huang Ying <ying.huang@...el.com>,
        Greg Thelen <gthelen@...gle.com>,
        Yang Shi <shy828301@...il.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Tim C Chen <tim.c.chen@...el.com>,
        Brice Goglin <brice.goglin@...il.com>,
        Michal Hocko <mhocko@...nel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Hesham Almatary <hesham.almatary@...wei.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Alistair Popple <apopple@...dia.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Feng Tang <feng.tang@...el.com>,
        Jagdish Gediya <jvgediya@...ux.ibm.com>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        David Rientjes <rientjes@...gle.com>
Subject: Re: [RFC PATCH v4 5/7] mm/demotion: Add support to associate rank
 with memory tier

On 5/27/22 8:15 PM, Jonathan Cameron wrote:
> On Fri, 27 May 2022 17:55:26 +0530
> "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com> wrote:
> 
>> The rank approach allows us to keep memory tier device IDs stable even if there
>> is a need to change the tier ordering among different memory tiers. e.g. DRAM
>> nodes with CPUs will always be on memtier1, no matter how many tiers are higher
>> or lower than these nodes. A new memory tier can be inserted into the tier
>> hierarchy for a new set of nodes without affecting the node assignment of any
>> existing memtier, provided that there is enough gap in the rank values for the
>> new memtier.
>>
>> The absolute value of "rank" of a memtier doesn't necessarily carry any meaning.
>> Its value relative to other memtiers decides the level of this memtier in the tier
>> hierarchy.
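
For illustration only (not part of the patch): a tiny sketch of how stable
device ids and relative rank values interact. The struct, the field names, and
the rank-250 tier below are hypothetical.

/*
 * Illustration only: device ids stay fixed, ordering comes from the
 * relative rank values, and a new tier can slot into a rank gap
 * without renumbering any existing tier.
 */
struct tier_example {
        int id;         /* memtierN device id, stable */
        int rank;       /* only its value relative to other tiers matters */
};

static struct tier_example example_tiers[] = {
        { .id = 0, .rank = 100 },
        { .id = 1, .rank = 200 },       /* DRAM nodes with CPUs stay memtier1 */
        { .id = 2, .rank = 300 },
        { .id = 3, .rank = 250 },       /* new tier inserted into the 200..300 gap */
};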
>>
>> For now, this patch supports hardcoded rank values of 100, 200, and 300 for
>> memory tiers 0, 1, and 2 respectively.
>>
>> Below is the sysfs interface to read the rank value of a memory tier:
>> /sys/devices/system/memtier/memtierN/rank
>>
>> This interface is read-only for now. Write support can be added later if there
>> is a need for more memory tiers (> 3) with flexible ordering requirements among
>> them; rank can be used for that, because rank, not the memory tier device id,
>> now decides the memory tiering order.
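
As a sketch for context (not from this patch itself): one way the read-only
rank attribute could look on the kernel side. The to_memory_tier() helper and
the memory_tier->rank field are assumed names for illustration.

static ssize_t rank_show(struct device *dev,
                         struct device_attribute *attr, char *buf)
{
        /* to_memory_tier() and ->rank are assumed names for illustration */
        struct memory_tier *memtier = to_memory_tier(dev);

        return sysfs_emit(buf, "%d\n", memtier->rank);
}
static DEVICE_ATTR_RO(rank);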
>>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@...ux.ibm.com>
> 
> I'd squash a lot of this into the original patch introducing tiers. As things
> stand we have two tricky-to-follow patches covering the same code rather than
> one, which would be simpler.
> 

Sure. Will do that in the next update.

> Jonathan
> 
>> ---
>>   drivers/base/node.c     |   5 +-
>>   drivers/dax/kmem.c      |   2 +-
>>   include/linux/migrate.h |  17 ++--
>>   mm/migrate.c            | 218 ++++++++++++++++++++++++----------------
>>   4 files changed, 144 insertions(+), 98 deletions(-)
>>
>> diff --git a/drivers/base/node.c b/drivers/base/node.c
>> index cf4a58446d8c..892f7c23c94e 100644
>> --- a/drivers/base/node.c
>> +++ b/drivers/base/node.c
>> @@ -567,8 +567,11 @@ static ssize_t memtier_show(struct device *dev,
>>   			    char *buf)
>>   {
>>   	int node = dev->id;
>> +	int tier_index = node_get_memory_tier_id(node);
>>   
>> -	return sysfs_emit(buf, "%d\n", node_get_memory_tier(node));
>> +	if (tier_index != -1)
>> +		return sysfs_emit(buf, "%d\n", tier_index);
> I think failure to get a tier is an error, so if it happens, return an error code.
> It's also preferred to handle errors out of line, as that's more idiomatic and
> lets reviewers read the code quicker.
> 
> 	if (tier_index == -1)
> 		return -EINVAL;
> 
> 	return sysfs_emit()...
> 
>> +	return 0;
>>   }
>>   


That was needed to handle NUMA nodes that are not part of any memory tier,
such as a CPU-only NUMA node or a NUMA node that doesn't want to participate
in memory demotion.
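
A possible middle ground, sketched only for discussion (untested): keep the
"node not in any tier" case non-fatal, but handle it out of line so the normal
path reads straight through:

static ssize_t memtier_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
        int node = dev->id;
        int tier_index = node_get_memory_tier_id(node);

        /* CPU-only node, or a node that opted out of demotion: nothing to report */
        if (tier_index == -1)
                return 0;

        return sysfs_emit(buf, "%d\n", tier_index);
}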



>>   static ssize_t memtier_store(struct device *dev,
>> diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
>> index 991782aa2448..79953426ddaf 100644
>> --- a/drivers/dax/kmem.c
>> +++ b/drivers/dax/kmem.c
>> @@ -149,7 +149,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
>>   	dev_set_drvdata(dev, data);
>>   


...

>>   
>> -static DEVICE_ATTR_RO(default_tier);
>> +static DEVICE_ATTR_RO(default_rank);
>>   
>>   static struct attribute *memoty_tier_attrs[] = {
>> -	&dev_attr_max_tiers.attr,
>> -	&dev_attr_default_tier.attr,
>> +	&dev_attr_max_tier.attr,
>> +	&dev_attr_default_rank.attr,
> 
> hmm. Not sure why rename to tier rather than tiers.
> 
> Also, I think the default should be a tier, not a rank.  If someone later
> wants to change the rank of tier1 that's up to them, but any newly hotplugged
> memory should still end up in there by default.
> 

Didn't we say the tier index/device id is a meaningless entity that controls
just the naming? i.e., for memtier128, 128 doesn't mean anything; instead it
is the rank value associated with memtier128 that controls the demotion order.
If so, what we want to report to userspace is the maximum tier index it can
expect and the default rank value to which hotplugged memory will be added.

But yes, tier index 1 and default rank 200 are reserved and created by 
default.
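
Rough sketch of what that could look like, using the attribute names from the
diff above; MAX_MEMORY_TIERS and DEFAULT_MEMORY_TIER_RANK are assumed constants
for illustration.

static ssize_t max_tier_show(struct device *dev,
                             struct device_attribute *attr, char *buf)
{
        /* highest tier index userspace can expect (assumed constant) */
        return sysfs_emit(buf, "%d\n", MAX_MEMORY_TIERS - 1);
}
static DEVICE_ATTR_RO(max_tier);

static ssize_t default_rank_show(struct device *dev,
                                 struct device_attribute *attr, char *buf)
{
        /* rank that newly hotplugged memory is added to, e.g. 200 here */
        return sysfs_emit(buf, "%d\n", DEFAULT_MEMORY_TIER_RANK);
}
static DEVICE_ATTR_RO(default_rank);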


....

>>   	/*
>>   	 * if node is already part of the tier proceed with the
>>   	 * current tier value, because we might want to establish
>> @@ -2411,15 +2452,17 @@ int node_set_memory_tier(int node, int tier)
>>   	 * before it was made part of N_MEMORY, hence estabilish_migration_targets
>>   	 * will have skipped this node.
>>   	 */
>> -	if (current_tier != -1)
>> -		tier = current_tier;
>> -	ret = __node_set_memory_tier(node, tier);
>> +	if (memtier)
>> +		establish_migration_targets();
>> +	else {
>> +		/* For now rank value and tier value is same. */
> 
> We should avoid baking that in...


Making it dynamic adds a lot of complexity, such as an IDA allocation for the
tier index, etc. I didn't want to go there unless we are sure we need a
dynamic number of tiers.
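
To make that trade-off concrete, a sketch of the extra machinery a dynamic tier
index would need, using the kernel IDA allocator; MAX_MEMORY_TIERS is again an
assumed constant, and this is illustration only, not proposed code.

#include <linux/idr.h>

static DEFINE_IDA(memtier_dev_ida);

static int alloc_memtier_id(void)
{
        /* dynamically pick a free device id instead of deriving it from rank */
        return ida_alloc_range(&memtier_dev_ida, 0, MAX_MEMORY_TIERS - 1,
                               GFP_KERNEL);
}

static void free_memtier_id(int id)
{
        ida_free(&memtier_dev_ida, id);
}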

-aneesh
