Message-ID: <CAGudoHGUwS_zY1KWStMtKoy=eogLigy7ucpEQXzTZGANU=35Jw@mail.gmail.com>
Date: Mon, 9 Dec 2024 15:30:13 +0100
From: Mateusz Guzik <mjguzik@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: yuzhao@...gle.com, akpm@...ux-foundation.org, willy@...radead.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: remove an avoidable load of page refcount in page_ref_add_unless
On Mon, Dec 9, 2024 at 3:22 PM David Hildenbrand <david@...hat.com> wrote:
>
> On 09.12.24 13:33, Mateusz Guzik wrote:
> > That is to say, I think this thread has just about exhausted the time
> > warranted by this patch. No hard feelz if it gets dropped, but then I
> > do strongly suggest adding a justification for the extra load.
>
> Maybe it's sufficient for now to simply do your change with a comment:
>
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 8c236c651d1d6..1efc992ad5687 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -234,7 +234,13 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
>
>  	rcu_read_lock();
>  	/* avoid writing to the vmemmap area being remapped */
> -	if (!page_is_fake_head(page) && page_ref_count(page) != u)
> +	if (!page_is_fake_head(page))
> +		/*
> +		 * atomic_add_unless() will currently never modify the value
> +		 * if it already is u. If that ever changes, we'd have to have
> +		 * a separate check here, such that we won't be writing to
> +		 * write-protected vmemmap areas.
> +		 */
>  		ret = atomic_add_unless(&page->_refcount, nr, u);
>  	rcu_read_unlock();
>
>
> It would bail out during testing ... hopefully, such that we can detect any such change.
>
Not my call to make, but looks good. ;)
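
For the archives, the property the new comment relies on is visible in
the generic fallback used when an arch does not provide its own
atomic_fetch_add_unless(). Roughly (a paraphrased sketch, not verbatim
kernel code; the helper name below is made up, and arch-specific
implementations may differ, which is exactly what the comment warns
about):

/* assumes <linux/atomic.h>; sketches the generic fallback behavior */
static inline bool add_unless_sketch(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	do {
		/* value already equals u: bail out without ever storing */
		if (unlikely(c == u))
			return false;
		/* on failure, try_cmpxchg() updates c to the current value */
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return true;
}

The early return is what keeps a write-protected vmemmap page from
being stored to when its refcount already equals u.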
FWIW I don't need any credit, and I would be more than happy if you
just submitted the thing as your own without me being mentioned. *No*
CC would also be appreciated.
--
Mateusz Guzik <mjguzik gmail.com>