Message-ID: <f8f30747-1313-4939-a2ad-3accd14ba01f@redhat.com>
Date: Thu, 18 Apr 2024 17:09:41 +0200
From: David Hildenbrand <david@...hat.com>
To: Lance Yang <ioworker0@...il.com>
Cc: akpm@...ux-foundation.org, cgroups@...r.kernel.org, chris@...kel.net,
corbet@....net, dalias@...c.org, fengwei.yin@...el.com,
glaubitz@...sik.fu-berlin.de, hughd@...gle.com, jcmvbkbc@...il.com,
linmiaohe@...wei.com, linux-doc@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-sh@...r.kernel.org,
linux-trace-kernel@...r.kernel.org, muchun.song@...ux.dev,
naoya.horiguchi@....com, peterx@...hat.com, richardycc@...gle.com,
ryan.roberts@....com, shy828301@...il.com, willy@...radead.org,
ysato@...rs.sourceforge.jp, ziy@...dia.com
Subject: Re: [PATCH v1 04/18] mm: track mapcount of large folios in single value

On 18.04.24 16:50, Lance Yang wrote:
> Hey David,
>
> FWIW, just a nit below.
Hi!

Thanks, but that was done on purpose.

This way, we'll have a memory barrier (due to at least one
atomic_inc_and_test()) between incrementing the folio refcount (which
happens before the rmap change) and incrementing the large mapcount.

Is it required? Not 100% sure; refcount vs. mapcount checks are always a
bit racy. But doing it this way lets me sleep better at night ;)

[With no subpage mapcounts, we'd do the atomic_inc_and_test() on the
large mapcount and have the memory barrier there again; but that's stuff
for the future.]
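
To spell out the ordering argument (a minimal sketch, not the exact
code; it relies on atomic_inc_and_test() being a fully ordered RMW,
while a plain atomic_add() implies no memory ordering, see
Documentation/atomic_t.txt):

	/* Caller takes folio references before the rmap change, e.g.: */
	folio_ref_add(folio, nr_pages);
	...
	/* __folio_add_rmap(), RMAP_LEVEL_PTE: */
	do {
		/* Fully ordered RMW; this is the memory barrier. */
		first = atomic_inc_and_test(&page->_mapcount);
		...
	} while (page++, --nr_pages > 0);
	/*
	 * Unordered on its own, but ordered after the refcount update
	 * by the fully ordered RMW above.
	 */
	atomic_add(orig_nr_pages, &folio->_large_mapcount);

With the atomic_add() moved before the loop, as in the diff below, there
would be no ordered atomic between the refcount update and the large
mapcount update.
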
Thanks!
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2608c40dffad..08bb6834cf72 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1143,7 +1143,6 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> int *nr_pmdmapped)
> {
> atomic_t *mapped = &folio->_nr_pages_mapped;
> - const int orig_nr_pages = nr_pages;
> int first, nr = 0;
>
> __folio_rmap_sanity_checks(folio, page, nr_pages, level);
> @@ -1155,6 +1154,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> break;
> }
>
> + atomic_add(nr_pages, &folio->_large_mapcount);
> do {
> first = atomic_inc_and_test(&page->_mapcount);
> if (first) {
> @@ -1163,7 +1163,6 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> nr++;
> }
> } while (page++, --nr_pages > 0);
> - atomic_add(orig_nr_pages, &folio->_large_mapcount);
> break;
> case RMAP_LEVEL_PMD:
> first = atomic_inc_and_test(&folio->_entire_mapcount);
>
> Thanks,
> Lance
>
--
Cheers,
David / dhildenb