Message-ID: <ZiB5wwHpJTCSQb3h@x1n>
Date: Wed, 17 Apr 2024 21:39:15 -0400
From: Peter Xu <peterx@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	David Hildenbrand <david@...hat.com>,
	Mina Almasry <almasrymina@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Muchun Song <muchun.song@...ux.dev>,
	David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH 1/3] fixup! mm: always initialise folio->_deferred_list

On Thu, Apr 18, 2024 at 12:46:39AM +0100, Matthew Wilcox wrote:
> On Wed, Apr 17, 2024 at 05:18:34PM -0400, Peter Xu wrote:
> > Current mm-unstable will hit this when running test_hugetlb_memcg.  This
> > fixes the crash for me.

[1]

> > 
> > Signed-off-by: Peter Xu <peterx@...hat.com>
> > ---
> >  mm/memcontrol.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 1840ba4c355d..7703ced535a3 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -7529,6 +7529,7 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
> >  
> >  	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> >  	VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
> > +			!folio_test_hugetlb(folio) &&
> >  			!list_empty(&folio->_deferred_list), folio);
> 
> Hum.  I thought we didn't get here for hugetlb folios.  What
> stacktrace did you get?

A normal hugetlb free path, iirc.  You can try the test case; I mentioned
the reproducer [1] above.  It crashes consistently.
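
For reference, the call path as I understand it (a rough sketch from
reading the source, not the exact stack I captured):

  free_huge_folio(folio)
    mem_cgroup_uncharge(folio)          /* added by the patch quoted below */
      uncharge_folio(folio, &ug)
        VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
                        !list_empty(&folio->_deferred_list), folio);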

> 
> I'm basing it on comments like this:
> 
>                 /* hugetlb has its own memcg */
>                 if (folio_test_hugetlb(folio)) {
>                         if (lruvec) {
>                                 unlock_page_lruvec_irqrestore(lruvec, flags);
>                                 lruvec = NULL;
>                         }
>                         free_huge_folio(folio);
>                         continue;
>                 }

Hugetlb does have its own memcg handling, but I guess now it's even more
than that; see the patch merged a few months ago:

https://lore.kernel.org/all/20231006184629.155543-4-nphamcs@gmail.com/

Especially:

@@ -1902,6 +1902,7 @@ void free_huge_folio(struct folio *folio)
 				     pages_per_huge_page(h), folio);
 	hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
 					  pages_per_huge_page(h), folio);
+	mem_cgroup_uncharge(folio);
 	if (restore_reserve)
 		h->resv_huge_pages++;

The comment above looks a bit confusing to me, as memcg is not the only
thing that is special about hugetlb pages.  Explicitly mentioning memcg
there made it sound like hugetlb could be freed like a normal compound
page once memcg were taken care of, but it looks like we're not there
yet?
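
To be explicit about why the assert fires at all: if I'm reading the
current struct folio layout right, _deferred_list shares a union with
the hugetlb-specific fields in the third page, so for a hugetlb folio
the "list" bytes are really _hugetlb_subpool/_hugetlb_cgroup and
list_empty() cannot be trusted.  Abridged from include/linux/mm_types.h
(from memory, so please double-check the exact layout):

	union {
		struct {
			unsigned long _flags_2;
			unsigned long _head_2;
			void *_hugetlb_subpool;
			void *_hugetlb_cgroup;
			void *_hugetlb_cgroup_rsvd;
			void *_hugetlb_hwpoison;
		};
		struct {
			unsigned long _flags_2a;
			unsigned long _head_2a;
			struct list_head _deferred_list;
		};
		struct page __page_2;
	};

That's why the fix checks folio_test_hugetlb() instead of relying on
the list head being in a sane state.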

-- 
Peter Xu

