Message-ID: <900252c7-b16c-49b9-8c01-60e6a7a48683@redhat.com>
Date: Wed, 16 Jul 2025 13:36:28 -0400
From: Luiz Capitulino <luizcap@...hat.com>
To: David Hildenbrand <david@...hat.com>, willy@...radead.org,
akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, shivankg@....com,
sj@...nel.org, harry.yoo@...cle.com
Subject: Re: [PATCH v3 2/4] mm/util: introduce snapshot_page()
On 2025-07-16 06:16, David Hildenbrand wrote:
> [...]
>
>> -dump:
>> -	__dump_folio(foliop, &precise, pfn, idx);
>> +	__dump_folio(&ps.folio_snapshot, &ps.page_snapshot, ps.pfn, ps.idx);
>
> Nit that can be cleaned up later on top:
>
> We should probably call this
>
> __dump_page_snapshot() and then just pass ... the page_snapshot.
>
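
For reference, a rough sketch of how I read that cleanup (hypothetical,
follow-up material; a thin wrapper shown here, but the rename could just
as well absorb __dump_folio()'s body):

static void __dump_page_snapshot(struct page_snapshot *ps)
{
        __dump_folio(&ps->folio_snapshot, &ps->page_snapshot, ps->pfn, ps->idx);
}

with the call site above then becoming __dump_page_snapshot(&ps).
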
>> }
>> void dump_page(const struct page *page, const char *reason)
>> diff --git a/mm/util.c b/mm/util.c
>> index 0b270c43d7d1..f270bf42465b 100644
>> --- a/mm/util.c
>> +++ b/mm/util.c
>> @@ -25,6 +25,7 @@
>> #include <linux/sizes.h>
>> #include <linux/compat.h>
>> #include <linux/fsnotify.h>
>> +#include <linux/page_idle.h>
>> #include <linux/uaccess.h>
>> @@ -1171,3 +1172,81 @@ int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma)
>> return 0;
>> }
>> EXPORT_SYMBOL(compat_vma_mmap_prepare);
>> +
>> +static void set_ps_flags(struct page_snapshot *ps, const struct folio *folio,
>> +                         const struct page *page)
>> +{
>> +        /*
>> +         * Only the first page of a high-order buddy page has PageBuddy() set.
>> +         * So we have to check manually whether this page is part of a high-
>> +         * order buddy page.
>> +         */
>> +        if (PageBuddy(page))
>> +                ps->flags |= PAGE_SNAPSHOT_PG_BUDDY;
>> +        else if (page_count(page) == 0 && is_free_buddy_page(page))
>> +                ps->flags |= PAGE_SNAPSHOT_PG_BUDDY;
>> +
>> +        if (folio_test_idle(folio))
>> +                ps->flags |= PAGE_SNAPSHOT_PG_IDLE;
>> +}
>> +
>> +/**
>> + * snapshot_page() - Create a snapshot of a struct page
>> + * @ps: Pointer to a struct page_snapshot to store the page snapshot
>> + * @page: The page to snapshot
>> + *
>> + * Create a snapshot of the page and store both its struct page and struct
>> + * folio representations in @ps.
>> + *
>> + * Note that creating a faithful snapshot may fail if the compound
>
> Maybe highlight that this is not really expected to happen, ever.
>
>> + * state of the page keeps changing (e.g., due to a folio split). In
>> + * this case, ps->faithful is set to false, and the snapshot assumes
>
> There is no ps->faithful.

Yes, good catch. This was from an earlier version.

Is it fine if I fix only this with a follow-up patch for Andrew in this
thread, or would you prefer that I post a v4 with all the other changes
as well?
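
In case it helps, I'd expect it to be just a doc wording fix along these
lines (tentative):

- * this case, ps->faithful is set to false, and the snapshot assumes
- * that @page refers to a single page.
+ * this case, the PAGE_SNAPSHOT_FAITHFUL flag is cleared in ps->flags,
+ * and the snapshot assumes that @page refers to a single page.
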
>
>> + * that @page refers to a single page.
>> + */
>> +void snapshot_page(struct page_snapshot *ps, const struct page *page)
>> +{
>> +        unsigned long head, nr_pages = 1;
>> +        struct folio *foliop;
>> +        int loops = 5;
>> +
>> +        ps->pfn = page_to_pfn(page);
>> +        ps->flags = PAGE_SNAPSHOT_FAITHFUL;
>> +
>> +again:
>> +        memset(&ps->folio_snapshot, 0, sizeof(struct folio));
>> +        memcpy(&ps->page_snapshot, page, sizeof(*page));
>> +        head = ps->page_snapshot.compound_head;
>> +        if ((head & 1) == 0) {
>> +                ps->idx = 0;
>> +                foliop = (struct folio *)&ps->page_snapshot;
>> +                if (!folio_test_large(foliop)) {
>> +                        set_ps_flags(ps, page_folio(page), page);
>> +                        memcpy(&ps->folio_snapshot, foliop,
>> +                               sizeof(struct page));
>> +                        return;
>> +                }
>> +                foliop = (struct folio *)page;
>> +        } else {
>> +                foliop = (struct folio *)(head - 1);
>> +                ps->idx = folio_page_idx(foliop, page);
>> +        }
>
> Condition could be cleaned up by reversing both things
>
> if (head & 1) {
>         /* Tail page, lookup the actual head. */
>         foliop = (struct folio *)(head - 1);
>         ps->idx = folio_page_idx(foliop, page);
> } else
>         ...
> }
>
> But you're just moving that code, so no need to do that now.
>
>
> I think we could improve some of that in the future a bit to
> make it even more faithful.
>
> But for now this should be just fine.
>
> Acked-by: David Hildenbrand <david@...hat.com>
>
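
As an aside, for anyone skimming the thread: the calling pattern ends up
being roughly the below (a sketch pieced together from the hunks quoted
above, not literal code from the patch):

        struct page_snapshot ps;

        snapshot_page(&ps, page);
        /* illustrative check only; the patch does not warn here */
        if (!(ps.flags & PAGE_SNAPSHOT_FAITHFUL))
                pr_warn("snapshot of pfn 0x%lx may not be faithful\n", ps.pfn);
        __dump_folio(&ps.folio_snapshot, &ps.page_snapshot, ps.pfn, ps.idx);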