Message-ID: <87k07zrx3t.fsf@linux.ibm.com>
Date: Wed, 27 Jul 2022 10:05:50 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
Wei Xu <weixugc@...gle.com>, Yang Shi <shy828301@...il.com>,
Davidlohr Bueso <dave@...olabs.net>,
Tim C Chen <tim.c.chen@...el.com>,
Michal Hocko <mhocko@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Hesham Almatary <hesham.almatary@...wei.com>,
Dave Hansen <dave.hansen@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Alistair Popple <apopple@...dia.com>,
Dan Williams <dan.j.williams@...el.com>,
Johannes Weiner <hannes@...xchg.org>, jvgediya.oss@...il.com
Subject: Re: [PATCH v10 5/8] mm/demotion: Build demotion targets based on
explicit memory tiers
"Huang, Ying" <ying.huang@...el.com> writes:
> Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com> writes:
>
>> On 7/26/22 1:14 PM, Huang, Ying wrote:
>>> "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com> writes:
>>>
....
>>>> + */
>>>> +int next_demotion_node(int node)
>>>> +{
>>>> + struct demotion_nodes *nd;
>>>> + int target;
>>>> +
>>>> + if (!node_demotion)
>>>> + return NUMA_NO_NODE;
>>>> +
>>>> + nd = &node_demotion[node];
>>>> +
>>>> + /*
>>>> + * node_demotion[] is updated without excluding this
>>>> + * function from running.
>>>> + *
>>>> + * Make sure to use RCU over entire code blocks if
>>>> + * node_demotion[] reads need to be consistent.
>>>> + */
>>>> + rcu_read_lock();
>>>> + /*
>>>> + * If there are multiple target nodes, just select one
>>>> + * target node randomly.
>>>> + *
>>>> + * We could instead use round-robin to select the target
>>>> + * node, but that would require another variable in
>>>> + * node_demotion[] to record the last selected target node,
>>>> + * which may cause cache ping-pong as the last target node
>>>> + * changes. Per-cpu data could avoid the caching issue but
>>>> + * seems more complicated. So selecting the target node
>>>> + * randomly seems better for now.
>>>> + */
>>>> + target = node_random(&nd->preferred);
>>>
>>> In one of the most common cases, nodes_weight(nd->preferred) == 1,
>>> where get_random_int() in node_random() just wastes CPU cycles and
>>> random entropy. So the original struct demotion_nodes implementation
>>> appears better:
>>>
>>> struct demotion_nodes {
>>> unsigned short nr;
>>> short nodes[DEMOTION_TARGET_NODES];
>>> };
>>>
>>
>>
>> Is that a measurable difference? Using nodemask_t makes the
>> implementation much simpler. IMHO, if node_random() has a measurable
>> overhead when nodes_weight() == 1, we should fix node_random() to
>> handle that case. If you feel strongly about this, I can open-code
>> node_random() to special-case nodes_weight() == 1.
>
> If there's not much difference, why not just use the existing code?
> IMHO, it's your responsibility to show that your new implementation is
> better via numbers, for example, fewer lines of code with the same or
> better performance.
>
> Another approach is to just use the existing code in the first version,
> then change it later based on measurements.
One of the reasons I switched to nodemask_t is to make the code simpler:
a demotion target is essentially a node mask.
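
For reference, the two representations being compared (the array-based
one as you quoted it above; the nodemask-based one roughly as used in
this series, going by the nd->preferred accesses):

	/* nodemask-based, this series: */
	struct demotion_nodes {
		nodemask_t preferred;
	};

	/* array-based, earlier node_demotion code: */
	struct demotion_nodes {
		unsigned short nr;
		short nodes[DEMOTION_TARGET_NODES];
	};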
>
> In general, I care more about the most common cases, that is, 0 or 1
> demotion target.
How about I switch to the open-coded version below? That should take care
of the above concern.

-	target = node_random(&nd->preferred);
+	node_weight = nodes_weight(nd->preferred);
+	switch (node_weight) {
+	case 0:
+		target = NUMA_NO_NODE;
+		break;
+	case 1:
+		target = first_node(nd->preferred);
+		break;
+	default:
+		target = bitmap_ord_to_pos(nd->preferred.bits,
+				get_random_int() % node_weight, MAX_NUMNODES);
+		break;
+	}
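
FWIW, a rough sketch of the alternative I mentioned earlier, folding the
special cases into node_random() itself so that callers don't need to
open-code them (illustrative only, not a tested patch):

	int node_random(const nodemask_t *maskp)
	{
		int w = nodes_weight(*maskp);

		switch (w) {
		case 0:
			return NUMA_NO_NODE;
		case 1:
			/* single node: skip get_random_int() entirely */
			return first_node(*maskp);
		default:
			return bitmap_ord_to_pos(maskp->bits,
					get_random_int() % w, MAX_NUMNODES);
		}
	}

Either way, the common 0/1-target cases stop consuming CPU cycles and
random entropy in get_random_int().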