Message-ID: <65ab6b3e-82b4-0a4e-bd6e-5869f735a8f7@wanadoo.fr>
Date: Fri, 11 Feb 2022 18:44:39 +0100
From: Christophe JAILLET <christophe.jaillet@...adoo.fr>
To: Yury Norov <yury.norov@...il.com>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Andrew Morton <akpm@...ux-foundation.org>,
Michał Mirosław <mirq-linux@...e.qmqm.pl>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
David Laight <David.Laight@...lab.com>,
Joe Perches <joe@...ches.com>, Dennis Zhou <dennis@...nel.org>,
Emil Renner Berthing <kernel@...il.dk>,
Nicholas Piggin <npiggin@...il.com>,
Matti Vaittinen <matti.vaittinen@...rohmeurope.com>,
Alexey Klimov <aklimov@...hat.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 46/49] mm/mempolicy: replace nodes_weight with
nodes_weight_eq
On 10/02/2022 at 23:49, Yury Norov wrote:
> do_migrate_pages() calls nodes_weight() to compare the weight of a
> nodemask with a given number. We can do this more efficiently with
> nodes_weight_eq(), because it may stop traversing the nodemask early,
> as soon as the condition is (or can no longer be) met.
>
> Signed-off-by: Yury Norov <yury.norov@...il.com>
> ---
> mm/mempolicy.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 7c852793d9e8..56efd00b1b6e 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1154,7 +1154,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
> * [0-7] - > [3,4,5] moves only 0,1,2,6,7.
> */
>
> - if ((nodes_weight(*from) != nodes_weight(*to)) &&
> + if (!nodes_weight_eq(*from, nodes_weight(*to)) &&
> (node_isset(s, *to)))
Hi,
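
For context, my understanding (not verified against the rest of the
series) is that the win comes from nodes_weight_eq() being able to stop
the bitmap traversal as soon as the outcome is known. A minimal
user-space sketch of that idea, with made-up names, not the kernel's
actual bitmap_weight_eq()/nodes_weight_eq() code:

#include <stdbool.h>
#include <stddef.h>

/*
 * Return true iff exactly 'num' bits are set in the first 'nbits'
 * bits of 'bits'. Unlike a full popcount, this bails out as soon as
 * the count exceeds 'num', so it can stop scanning long bitmaps
 * early. A sketch of the idea only, not the kernel implementation.
 */
static bool weight_eq(const unsigned long *bits, size_t nbits, size_t num)
{
	const size_t lbits = 8 * sizeof(unsigned long);
	size_t w = 0;

	for (size_t i = 0; i < nbits; i++) {
		if (bits[i / lbits] & (1UL << (i % lbits)))
			w++;
		if (w > num)	/* can no longer be equal */
			return false;
	}
	return w == num;
}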
I've not looked at this in detail, but would it make sense to hoist the
"(nodes_weight(*from) != nodes_weight(*to))" test out of the
for_each_node_mask() loop, so that it is computed only once?
'from' and 'to' look unmodified in the loop; see the sketch below.
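
Something like this untested sketch, against the hunk above
('same_weight' is just a name I made up for illustration):

	/*
	 * Compute the weight comparison once before the loop rather
	 * than on every iteration, since neither *from nor *to is
	 * modified inside it.
	 */
	const bool same_weight = nodes_weight_eq(*from, nodes_weight(*to));

	for_each_node_mask(s, tmp) {
		/* ... */
		if (!same_weight && node_isset(s, *to))
			continue;
		/* ... */
	}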
Just my 2c,
CJ
> continue;
>