Message-ID: <CAGsJ_4wx7oSzt4vn6B+LRoZetMhH-fDXRFrCFRyoqVOakLidjg@mail.gmail.com>
Date: Tue, 5 Mar 2024 10:57:51 +1300
From: Barry Song <21cnbao@...il.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org, david@...hat.com, 
	chrisl@...nel.org, yuzhao@...gle.com, hanchuanhua@...o.com, 
	linux-kernel@...r.kernel.org, willy@...radead.org, ying.huang@...el.com, 
	xiang@...nel.org, mhocko@...e.com, shy828301@...il.com, 
	wangkefeng.wang@...wei.com, Barry Song <v-songbaohua@...o.com>, 
	Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC PATCH] mm: hold PTL from the first PTE while reclaiming a
 large folio

On Tue, Mar 5, 2024 at 1:21 AM Ryan Roberts <ryan.roberts@....com> wrote:
>
> Hi Barry,
>
> On 04/03/2024 10:37, Barry Song wrote:
> > From: Barry Song <v-songbaohua@...o.com>
> >
> > page_vma_mapped_walk() within try_to_unmap_one() races with other
> > PTE modifications such as break-before-make. While iterating the
> > PTEs of a large folio, it only begins to acquire the PTL after it
> > finds a valid (present) PTE. break-before-make transiently sets
> > PTEs to pte_none. Thus, a large folio's PTEs might be partially
> > skipped in try_to_unmap_one().
>
> I just want to check my understanding here - I think the problem occurs for
> PTE-mapped, PMD-sized folios as well as smaller-than-PMD-size large folios? Now
> that I've had a look at the code and have a better understanding, I think that
> must be the case? And therefore this problem exists independently of my work to
> support swap-out of mTHP? (From your previous report I was under the impression
> that it only affected mTHP).

I think this affects all large folios that are mapped by more than one
PTE. But hugeTLB is handled as a whole in try_to_unmap_one() and its
rmap is removed all at once, so I feel hugeTLB doesn't have this problem.
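
To make the race concrete, here is an illustrative interleaving (my
own reconstruction of the PTE0 example from the commit message, not a
trace):

  CPU0: try_to_unmap_one()              CPU1: break-before-make
  page_vma_mapped_walk(), no PTL yet
    reads PTE0                          ptep_get_and_clear(PTE0)
    pte_none(PTE0) -> PTE0 skipped
                                        set_pte_at(PTE0, new_pte)
    reads PTE1: present -> takes PTL
    converts PTE1..PTE(nr_pages - 1) to swap entries
  => PTE0 remains mapped; the folio is partially unmapped and cannot
     be reclaimed.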

>
> It's just that the problem is becoming more pronounced because with mTHP,
> PTE-mapped large folios are much more common?

Right. Large folios are now becoming a much more common case, and it is
exactly my case, running on millions of phones.

BTW, I feel we can somehow learn from hugeTLB here: for example, we
could reclaim all PTEs of a large folio together rather than iterating
over them one by one. This would improve performance - with a batched
set_ptes_to_swap_entries() we would only need to loop once per large
folio, whereas right now we loop nr_pages times.
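
Something along these lines - just a sketch, with an illustrative name
and signature, and omitting TLB flushing and dirty/accessed-bit
handling:

static void set_ptes_to_swap_entries(struct mm_struct *mm,
				     unsigned long addr, pte_t *ptep,
				     swp_entry_t entry, int nr_pages)
{
	int i;

	/* caller holds the PTL covering all nr_pages entries */
	for (i = 0; i < nr_pages; i++, addr += PAGE_SIZE, ptep++) {
		/* consecutive subpages get consecutive swap offsets */
		swp_entry_t e = swp_entry(swp_type(entry),
					  swp_offset(entry) + i);

		/* replace each PTE with the corresponding swap PTE */
		set_pte_at(mm, addr, ptep, swp_entry_to_pte(e));
	}
}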

>
> > For example, for an anon folio, after try_to_unmap_one(), we may
> > have PTE0 present while PTE1 ~ PTE(nr_pages - 1) are swap entries.
> > So the folio will still be mapped, and it fails to be reclaimed.
> > What's even more worrying is that its PTEs are no longer in a
> > unified state. This might lead to an accidental folio_split()
> > afterwards. And since some of the PTEs are now swap entries,
> > accessing them will incur a page fault - do_swap_page(). It
> > creates both anxiety and more expense. While we can't prevent
> > userspace's unmap from breaking up unified PTEs such as CONT-PTE
> > for a large folio, we can indeed keep the kernel from breaking
> > them up purely due to its own code design.
> > This patch holds the PTL from PTE0, so the folio will either be
> > entirely reclaimed or entirely kept. On the other hand, this
> > approach doesn't increase PTL contention: even without the patch,
> > page_vma_mapped_walk() always acquires the PTL shortly after
> > skipping one or two PTEs, because the intermediate
> > break-before-make windows are short, according to testing. And
> > even without this patch, the vast majority of try_to_unmap_one()
> > calls already get the PTL starting from PTE0; this patch simply
> > makes that number 100%.
> > The other option is to give up in try_to_unmap_one() once we find
> > that PTE0 is not the first entry at which we got the PTL, calling
> > page_vma_mapped_walk_done() to end the iteration in that case.
> > This keeps the PTEs unified while leaving the folio unreclaimed.
> > The result is quite similar to a small folio with one PTE - it is
> > either entirely reclaimed or entirely kept. Reclaiming large
> > folios by holding the PTL from PTE0 seems the better option
> > compared to giving up after detecting that the PTL was first
> > taken at a non-PTE0 entry.
> >
> > Cc: Hugh Dickins <hughd@...gle.com>
> > Signed-off-by: Barry Song <v-songbaohua@...o.com>
>
> Do we need a Fixes tag?

I don't feel a strong need for one, as this doesn't cause a crash, a
memory leak, or anything similarly serious.

>
> > ---
> >  mm/vmscan.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 0b888a2afa58..e4722fbbcd0c 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1270,6 +1270,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >
> >                       if (folio_test_pmd_mappable(folio))
> >                               flags |= TTU_SPLIT_HUGE_PMD;
> > +                     /*
> > +                      * If the page table lock is not held from the first
> > +                      * PTE of a large folio, some PTEs might be skipped
> > +                      * because of races with break-before-make: PTEs can
> > +                      * transiently be pte_none, so one or more PTEs might
> > +                      * be skipped in try_to_unmap_one(), and we might end
> > +                      * up with a large folio that is partially mapped and
> > +                      * partially unmapped after try_to_unmap().
> > +                      */
> > +                     if (folio_test_large(folio))
> > +                             flags |= TTU_SYNC;
>
> This looks sensible to me after thinking about it for a while. But I also have a
> gut feeling that there might be some more subtleties that are going over my
> head, since I'm not an expert in this area. So I will leave it to others to
> provide R-b :)
>

ok, thanks :-)
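
For anyone curious how TTU_SYNC takes effect: roughly as below (my
simplified paraphrase of try_to_unmap_one() and page_vma_mapped_walk(),
not the verbatim upstream code):

	/* try_to_unmap_one(): translate the TTU flag to a walk flag */
	if (flags & TTU_SYNC)
		pvmw.flags |= PVMW_SYNC;

	/*
	 * page_vma_mapped_walk(), simplified: with PVMW_SYNC the walk
	 * maps and locks the page table before reading the first PTE,
	 * so a transiently pte_none entry left by a concurrent
	 * break-before-make can no longer cause that PTE to be skipped.
	 */
	if (pvmw->flags & PVMW_SYNC)
		pvmw->pte = pte_offset_map_lock(vma->vm_mm, pvmw->pmd,
						pvmw->address, &pvmw->ptl);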

> Thanks,
> Ryan
>
> >
> >                       try_to_unmap(folio, flags);
> >                       if (folio_mapped(folio)) {
>

Thanks
Barry
