Message-ID: <CAGsJ_4zJ519W4GYFo=KF7DUcS+fDhhBCKP7TQz0V-xhR8qiCSw@mail.gmail.com>
Date: Tue, 30 Jul 2024 01:18:58 +1200
From: Barry Song <21cnbao@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Chuanhua Han <chuanhuahan@...il.com>, akpm@...ux-foundation.org, linux-mm@...ck.org,
ying.huang@...el.com, baolin.wang@...ux.alibaba.com, chrisl@...nel.org,
david@...hat.com, hannes@...xchg.org, hughd@...gle.com,
kaleshsingh@...gle.com, kasong@...cent.com, linux-kernel@...r.kernel.org,
mhocko@...e.com, minchan@...nel.org, nphamcs@...il.com, ryan.roberts@....com,
senozhatsky@...omium.org, shakeel.butt@...ux.dev, shy828301@...il.com,
surenb@...gle.com, v-songbaohua@...o.com, xiang@...nel.org,
yosryahmed@...gle.com, Chuanhua Han <hanchuanhua@...o.com>
Subject: Re: [PATCH v5 3/4] mm: support large folios swapin as a whole for
zRAM-like swapfile
On Tue, Jul 30, 2024 at 12:55 AM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Mon, Jul 29, 2024 at 02:36:38PM +0800, Chuanhua Han wrote:
> > Matthew Wilcox <willy@...radead.org> 于2024年7月29日周一 11:51写道:
> > >
> > > On Fri, Jul 26, 2024 at 09:46:17PM +1200, Barry Song wrote:
> > > > - folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> > > > - vma, vmf->address, false);
> > > > + folio = alloc_swap_folio(vmf);
> > > > page = &folio->page;
> > >
> > > This is no longer correct. You need to set 'page' to the precise page
> > > that is being faulted rather than the first page of the folio. It was
> > > fine before because it always allocated a single-page folio, but now it
> > > must use folio_page() or folio_file_page() (whichever has the correct
> > > semantics for you).
> > >
> > > Also you need to fix your test suite to notice this bug. I suggest
> > > doing that first so that you know whether you've got the calculation
> > > correct.
> >
> > >
> > >
> > This is not a problem now: we swap large folios in as a whole, so
> > the head page is used here rather than the individual page being
> > faulted. You can also see from the current code context that
> > swapping in large folios as a whole differs from the previous code,
> > which only supported single-page swap-in.
>
> You have completely failed to understand the problem. Let's try it this
> way:
>
> We take a page fault at address 0x123456789000.
> If part of a 16KiB folio, that's page 1 of the folio at 0x123456788000.
> If you now map page 0 of the folio at 0x123456789000, you've
> given the user the wrong page! That looks like data corruption.
>
> The code in
> if (folio_test_large(folio) && folio_test_swapcache(folio)) {
> as Barry pointed out will save you -- but what if those conditions fail?
> What if the mmap has been mremap()ed and the folio now crosses a PMD
> boundary? mk_pte() will now be called on the wrong page.
Chuanhua understood everything correctly. I think you might have missed
that we have very strict checks both before allocating large folios and
before mapping them in this newly allocated mTHP swap-in case.
To allocate a large folio, we check all alignment requirements: the PTEs
must hold swap entries whose offsets are aligned and all contiguous,
which is how an mTHP is swapped out. If an mTHP has been mremap()ed to an
unaligned address, we won't swap it in as an mTHP, for two reasons:
1. for the non-swapcache case, we have no way to figure out the start
address of the previous mTHP; 2. mremap() to unaligned addresses is rare.
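
Roughly, the allocation-side check is along these lines (a simplified
sketch with a made-up helper name; the real patch code handles more
corner cases than this):

	/*
	 * Sketch: only treat the range as one mTHP if every PTE is a
	 * swap entry of the same type with consecutive offsets, and
	 * the first offset is naturally aligned to the folio size.
	 */
	static bool swap_ptes_aligned_and_contig(pte_t *ptep, int nr)
	{
		pte_t pte = ptep_get(ptep);
		swp_entry_t first, entry;
		pgoff_t off;
		int i;

		if (!is_swap_pte(pte))
			return false;
		first = pte_to_swp_entry(pte);
		off = swp_offset(first);

		/* an mTHP is swapped out to an aligned run of slots */
		if (!IS_ALIGNED(off, nr))
			return false;

		for (i = 1; i < nr; i++) {
			pte = ptep_get(ptep + i);
			if (!is_swap_pte(pte))
				return false;
			entry = pte_to_swp_entry(pte);
			if (swp_type(entry) != swp_type(first) ||
			    swp_offset(entry) != off + i)
				return false;
		}
		return true;
	}
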
To map the large folio, we check that all the PTEs are still in place by
confirming can_swapin_thp() a second time. If the PTEs have changed, we
take the "goto out_nomap" path:
	/* allocated large folios for SWP_SYNCHRONOUS_IO */
	if (folio_test_large(folio) && !folio_test_swapcache(folio)) {
		unsigned long nr = folio_nr_pages(folio);
		unsigned long folio_start = ALIGN_DOWN(vmf->address,
						       nr * PAGE_SIZE);
		unsigned long idx = (vmf->address - folio_start) / PAGE_SIZE;
		pte_t *folio_ptep = vmf->pte - idx;

		if (!can_swapin_thp(vmf, folio_ptep, nr))
			goto out_nomap;

		page_idx = idx;
		address = folio_start;
		ptep = folio_ptep;
		goto check_folio;
	}
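
With the numbers from your example: vmf->address == 0x123456789000 and a
16KiB folio give folio_start == 0x123456788000 and idx == 1, so the
check_folio path proceeds with page_idx == 1 and address/ptep pointing at
the folio start, and the subpage actually mapped at the faulting address
is conceptually folio_page(folio, 1), not the head page.
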
Thanks
Barry