Message-ID: <3376f8c8-f76d-4f5f-903f-2e9edc968a76@redhat.com>
Date: Tue, 2 Jul 2024 15:24:27 +0200
From: David Hildenbrand <david@...hat.com>
To: Yu Zhao <yuzhao@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>
Cc: Frank van der Linden <fvdl@...gle.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>, Peter Xu
<peterx@...hat.com>, Yang Shi <yang@...amperecomputing.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH mm-unstable v2] mm/hugetlb_vmemmap: fix race with
speculative PFN walkers
On 28.06.24 00:27, Yu Zhao wrote:
> While investigating HVO for THPs [1], it turns out that speculative
> PFN walkers like compaction can race with vmemmap modifications, e.g.,
>
> CPU 1 (vmemmap modifier)         CPU 2 (speculative PFN walker)
> -------------------------------  ------------------------------
> Allocates an LRU folio page1
>                                  Sees page1
> Frees page1
>
> Allocates a hugeTLB folio page2
> (page1 being a tail of page2)
>
> Updates vmemmap mapping page1
>                                  get_page_unless_zero(page1)
>
> Even though page1->_refcount is zero after HVO, get_page_unless_zero()
> can still try to modify this read-only field, resulting in a crash.
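
To make the faulting write explicit, here is a minimal sketch of the
walker side; the helper name below is illustrative (in the spirit of
compaction's isolate_migratepages_block()), not the actual upstream
code:

        /* Hypothetical speculative PFN walker; illustrative only. */
        static bool try_pin_pfn(unsigned long pfn)
        {
                struct page *page = pfn_to_page(pfn);

                /*
                 * No reference is held yet, so the page may have been freed
                 * and reused as a hugeTLB tail in the meantime.
                 * get_page_unless_zero() reads page->_refcount and, if it
                 * saw a non-zero value through the old mapping, attempts an
                 * atomic RMW on it; after HVO that word lives in a read-only
                 * vmemmap page, so the write faults.
                 */
                if (!get_page_unless_zero(page))
                        return false;   /* refcount was zero, nothing pinned */

                /* ... inspect the page, then drop the speculative pin ... */
                put_page(page);
                return true;
        }
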
>
> An independent report [2] confirmed this race.
>
> Two approaches to fixing this race were discussed:
> 1. Make the RO vmemmap RW so that get_page_unless_zero() can fail
>    without triggering a page fault.
> 2. Use RCU to make sure get_page_unless_zero() either sees zero
>    page->_refcount through the old vmemmap or non-zero page->_refcount
>    through the new one.
>
> The second approach is preferred here because:
> 1. It can prevent illegal modifications to struct page[] that has been
>    HVO'ed;
> 2. It can be generalized, in a way similar to ZERO_PAGE(), to fix
>    similar races in other places, e.g., arch_remove_memory() on x86
>    [3], which frees the vmemmap mapping of offlined struct page[].
>
> While adding synchronize_rcu(), the goal is to be surgical rather
> than optimized. Specifically, calls to synchronize_rcu() on the error
> handling paths could be coalesced, but that is not done for the sake
> of simplicity: notably, this fix removes ~50% more lines than it adds.
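
One way the writer side can pair with the RCU read-side section added
in page_ref.h (a sketch only; names are illustrative and this is not
necessarily the exact placement the patch uses in mm/hugetlb_vmemmap.c):

        static void hvo_optimize_example(struct folio *folio)
        {
                /* The tail pages' _refcount is already zero at this point. */

                /*
                 * Wait for walkers that entered page_ref_add_unless()
                 * before the refcounts dropped to zero; their atomic ops
                 * still target the old, writable vmemmap.
                 */
                synchronize_rcu();

                /*
                 * Remap the tail struct pages read-only.  Any walker
                 * starting after this point sees either a zero _refcount
                 * or the fake head and bails out without writing.
                 */
                remap_vmemmap_readonly(folio);  /* illustrative name */
        }
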
>
> According to the hugetlb_optimize_vmemmap section in
> Documentation/admin-guide/sysctl/vm.rst, enabling HVO makes allocating
> or freeing hugeTLB pages "~2x slower than before". Having
> synchronize_rcu() on top makes those operations even slower, and this
> also affects the user interface /proc/sys/vm/nr_overcommit_hugepages.
>
> [1] https://lore.kernel.org/20240229183436.4110845-4-yuzhao@google.com/
> [2] https://lore.kernel.org/917FFC7F-0615-44DD-90EE-9F85F8EA9974@linux.dev/
> [3] https://lore.kernel.org/be130a96-a27e-4240-ad78-776802f57cad@redhat.com/
>
> Signed-off-by: Yu Zhao <yuzhao@...gle.com>
> Acked-by: Muchun Song <muchun.song@...ux.dev>
> ---
> include/linux/page_ref.h | 8 +++++-
> mm/hugetlb.c | 53 ++++++----------------------------------
> mm/hugetlb_vmemmap.c | 16 ++++++++++++
> 3 files changed, 30 insertions(+), 47 deletions(-)
>
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 490d0ad6e56d..8c236c651d1d 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -230,7 +230,13 @@ static inline int folio_ref_dec_return(struct folio *folio)
>
>  static inline bool page_ref_add_unless(struct page *page, int nr, int u)
>  {
> -        bool ret = atomic_add_unless(&page->_refcount, nr, u);
> +        bool ret = false;
> +
> +        rcu_read_lock();
> +        /* avoid writing to the vmemmap area being remapped */
> +        if (!page_is_fake_head(page) && page_ref_count(page) != u)
> +                ret = atomic_add_unless(&page->_refcount, nr, u);
> +        rcu_read_unlock();

The page_is_fake_head() thingy in page_ref.h is a bit suboptimal;
currently it really only works on _refcount. But I get why it is
required right now, hmmm.

(Independent of this patch: all users of page_ref_add_unless() seem to
pass u == 0; maybe we should clean that up at some point. It's hard to
imagine other use cases for refcounts besides "unless 0".)
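
A rough sketch of what such a cleanup could look like (hypothetical,
not part of this patch):

        /* Every current caller passes u == 0, so the parameter could go. */
        static inline bool page_ref_add_unless_zero(struct page *page, int nr)
        {
                bool ret = false;

                rcu_read_lock();
                /* avoid writing to the vmemmap area being remapped */
                if (!page_is_fake_head(page) && page_ref_count(page))
                        ret = atomic_add_unless(&page->_refcount, nr, 0);
                rcu_read_unlock();

                return ret;
        }
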
--
Cheers,
David / dhildenb