Message-ID: <ZivlrMAwRI6xJhc-@casper.infradead.org>
Date: Fri, 26 Apr 2024 18:34:36 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Sidhartha Kumar <sidhartha.kumar@...cle.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, linmiaohe@...wei.com,
jane.chu@...cle.com, nao.horiguchi@...il.com, osalvador@...e.de
Subject: Re: [PATCH] mm/memory-failure: remove shake_page()
On Fri, Apr 26, 2024 at 10:15:11AM -0700, Sidhartha Kumar wrote:
> Use a folio in get_any_page() to save 5 calls to compound_head() and
> convert the last user of shake_page() to shake_folio(). This allows us
> to remove the shake_page() definition.
So I didn't do this before because I wasn't convinced it was safe.
We don't have a refcount on the folio, so the page might no longer
be part of this folio by the time we get the refcount on the folio.
I'd really like to see some argumentation for why this is safe.
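
(For context, the lockless-lookup pattern used elsewhere in the kernel, e.g.
in the page cache, handles this by re-checking the page/folio association
after taking the reference. A rough sketch of that shape, purely
illustrative and not part of this patch:

	struct folio *folio;

retry:
	folio = page_folio(p);		/* no ref held: snapshot only */
	/* p may be freed and become part of a different folio here */
	if (!folio_try_get(folio))
		goto retry;
	if (folio != page_folio(p)) {	/* re-check once pinned */
		folio_put(folio);
		goto retry;
	}

Something along those lines, or an argument for why the existing refcount
handling in get_any_page() already makes the race impossible, is what's
missing here.)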
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@...cle.com>
> ---
> mm/memory-failure.c | 20 ++++++++------------
> 1 file changed, 8 insertions(+), 12 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 16ada4fb02b79..273f6fef29f25 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -385,11 +385,6 @@ void shake_folio(struct folio *folio)
> }
> EXPORT_SYMBOL_GPL(shake_folio);
>
> -static void shake_page(struct page *page)
> -{
> - shake_folio(page_folio(page));
> -}
> -
> static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
> unsigned long address)
> {
> @@ -1433,6 +1428,7 @@ static int get_any_page(struct page *p, unsigned long flags)
> {
> int ret = 0, pass = 0;
> bool count_increased = false;
> + struct folio *folio = page_folio(p);
>
> if (flags & MF_COUNT_INCREASED)
> count_increased = true;
> @@ -1446,7 +1442,7 @@ static int get_any_page(struct page *p, unsigned long flags)
> if (pass++ < 3)
> goto try_again;
> ret = -EBUSY;
> - } else if (!PageHuge(p) && !is_free_buddy_page(p)) {
> + } else if (!folio_test_hugetlb(folio) && !is_free_buddy_page(p)) {
> /* We raced with put_page, retry. */
> if (pass++ < 3)
> goto try_again;
> @@ -1459,7 +1455,7 @@ static int get_any_page(struct page *p, unsigned long flags)
> * page, retry.
> */
> if (pass++ < 3) {
> - shake_page(p);
> + shake_folio(folio);
> goto try_again;
> }
> ret = -EIO;
> @@ -1467,7 +1463,7 @@ static int get_any_page(struct page *p, unsigned long flags)
> }
> }
>
> - if (PageHuge(p) || HWPoisonHandlable(p, flags)) {
> + if (folio_test_hugetlb(folio) || HWPoisonHandlable(p, flags)) {
> ret = 1;
> } else {
> /*
> @@ -1475,12 +1471,12 @@ static int get_any_page(struct page *p, unsigned long flags)
> * it into something we can handle.
> */
> if (pass++ < 3) {
> - put_page(p);
> - shake_page(p);
> + folio_put(folio);
> + shake_folio(folio);
> count_increased = false;
> goto try_again;
> }
> - put_page(p);
> + folio_put(folio);
> ret = -EIO;
> }
> out:
> @@ -1643,7 +1639,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
>
> /*
> * try_to_unmap() might put mlocked page in lru cache, so call
> - * shake_page() again to ensure that it's flushed.
> + * shake_folio() again to ensure that it's flushed.
> */
> if (mlocked)
> shake_folio(folio);
> --
> 2.44.0
>