Message-ID: <YYMCI2S03+azi7nK@casper.infradead.org>
Date: Wed, 3 Nov 2021 21:41:55 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Christophe JAILLET <christophe.jaillet@...adoo.fr>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-janitors@...r.kernel.org
Subject: Re: [PATCH] mm/mremap_pages: Save a few cycles in 'get_dev_pagemap()'

On Wed, Nov 03, 2021 at 10:35:34PM +0100, Christophe JAILLET wrote:
> Use 'percpu_ref_tryget_live_rcu()' instead of 'percpu_ref_tryget_live()' to
> save a few cycles when it is known that the rcu lock is already
> taken/released.

If this is really important, we can add an __xa_load() which doesn't
take the RCU read lock.

I honestly think that the xarray is the wrong data structure here,
and we'd be better off with a simple array of (start, pointer)
tuples.

> Signed-off-by: Christophe JAILLET <christophe.jaillet@...adoo.fr>
> ---
> mm/memremap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 84de22c14567..012e8d23d365 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -506,7 +506,7 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
> /* fall back to slow path lookup */
> rcu_read_lock();
> pgmap = xa_load(&pgmap_array, PHYS_PFN(phys));
> - if (pgmap && !percpu_ref_tryget_live(pgmap->ref))
> + if (pgmap && !percpu_ref_tryget_live_rcu(pgmap->ref))
> pgmap = NULL;
> rcu_read_unlock();
>
> --
> 2.30.2
>
>