Message-ID: <CAK1f24nO-7QUYxXsYqDH=Hg7J_Hn9rxpkfQzaBBOpqFnzbCATQ@mail.gmail.com>
Date: Fri, 19 Apr 2024 08:31:58 +0800
From: Lance Yang <ioworker0@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: akpm@...ux-foundation.org, cgroups@...r.kernel.org, chris@...kel.net, 
	corbet@....net, dalias@...c.org, fengwei.yin@...el.com, 
	glaubitz@...sik.fu-berlin.de, hughd@...gle.com, jcmvbkbc@...il.com, 
	linmiaohe@...wei.com, linux-doc@...r.kernel.org, 
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org, linux-sh@...r.kernel.org, 
	linux-trace-kernel@...r.kernel.org, muchun.song@...ux.dev, 
	naoya.horiguchi@....com, peterx@...hat.com, richardycc@...gle.com, 
	ryan.roberts@....com, shy828301@...il.com, willy@...radead.org, 
	ysato@...rs.sourceforge.jp, ziy@...dia.com
Subject: Re: [PATCH v1 04/18] mm: track mapcount of large folios in single value

On Thu, Apr 18, 2024 at 11:09 PM David Hildenbrand <david@...hat.com> wrote:
>
> On 18.04.24 16:50, Lance Yang wrote:
> > Hey David,
> >
> > FWIW, just a nit below.
>
> Hi!
>

Thanks for clarifying!

> Thanks, but that was done on purpose.
>
> This way, we'll have a memory barrier (due to at least one
> atomic_inc_and_test()) between incrementing the folio refcount
> (happening before the rmap change) and incrementing the mapcount.
>
> Is it required? Not 100% sure, refcount vs. mapcount checks are always a
> bit racy. But doing it this way let me sleep better at night ;)

Yep, understood :)
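
For my own notes, here's a rough user-space sketch of the ordering you
describe (the names and the C11 stdatomic mapping are mine, and kernel
atomics differ in detail, so this is only an illustration, not the real
code): plain atomic_add() gives no ordering guarantees, while
atomic_inc_and_test() is a fully ordered RMW, so keeping the
_large_mapcount update after the loop leaves at least one full barrier
between the earlier refcount bump and the large mapcount increment.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int refcount;        /* stands in for the folio refcount     */
static atomic_int page_mapcount;   /* stands in for page->_mapcount        */
static atomic_int large_mapcount;  /* stands in for folio->_large_mapcount */

static void add_rmap_sketch(int nr_pages)
{
        /* Refcount is bumped before the rmap change; relaxed, no ordering. */
        atomic_fetch_add_explicit(&refcount, nr_pages, memory_order_relaxed);

        for (int i = 0; i < nr_pages; i++) {
                /*
                 * Stand-in for atomic_inc_and_test(): _mapcount starts at -1,
                 * so hitting 0 means "first mapping"; in the kernel this is a
                 * fully ordered RMW, i.e. it implies a barrier on both sides.
                 */
                bool first = atomic_fetch_add(&page_mapcount, 1) == -1;
                (void)first;
        }

        /*
         * Kept after the loop, so it is ordered after the refcount bump by
         * the RMWs above; hoisting it before the loop (my nit) would lose
         * that ordering.
         */
        atomic_fetch_add_explicit(&large_mapcount, nr_pages,
                                  memory_order_relaxed);
}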

Thanks,
Lance

>
> [with no subpage mapcounts, we'd do the atomic_inc_and_test on the large
> mapcount and have the memory barrier there again; but that's stuff for
> the future]
>
> Thanks!

>
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 2608c40dffad..08bb6834cf72 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1143,7 +1143,6 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> >               int *nr_pmdmapped)
> >   {
> >       atomic_t *mapped = &folio->_nr_pages_mapped;
> > -     const int orig_nr_pages = nr_pages;
> >       int first, nr = 0;
> >
> >       __folio_rmap_sanity_checks(folio, page, nr_pages, level);
> > @@ -1155,6 +1154,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> >                       break;
> >               }
> >
> > +             atomic_add(nr_pages, &folio->_large_mapcount);
> >               do {
> >                       first = atomic_inc_and_test(&page->_mapcount);
> >                       if (first) {
> > @@ -1163,7 +1163,6 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> >                                       nr++;
> >                       }
> >               } while (page++, --nr_pages > 0);
> > -             atomic_add(orig_nr_pages, &folio->_large_mapcount);
> >               break;
> >       case RMAP_LEVEL_PMD:
> >               first = atomic_inc_and_test(&folio->_entire_mapcount);
> >
> > Thanks,
> > Lance
> >
>
> --
> Cheers,
>
> David / dhildenb
>
