Message-ID: <CAGsJ_4zNxC5u088RRnKeM18skEJvwTd22mB_FWSA67K3S-CKPw@mail.gmail.com>
Date: Tue, 11 Jun 2024 05:00:03 +0800
From: Barry Song <21cnbao@...il.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
Nhat Pham <nphamcs@...il.com>, Chengming Zhou <chengming.zhou@...ux.dev>,
Baolin Wang <baolin.wang@...ux.alibaba.com>, Chris Li <chrisl@...nel.org>,
Ryan Roberts <ryan.roberts@....com>, David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: zswap: handle incorrect attempts to load of large folios
On Tue, Jun 11, 2024 at 4:12 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Mon, Jun 10, 2024 at 1:06 PM Barry Song <21cnbao@...il.com> wrote:
> >
> > On Tue, Jun 11, 2024 at 1:42 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> > >
> > > On Fri, Jun 7, 2024 at 9:13 PM Barry Song <21cnbao@...il.com> wrote:
> > > >
> > > > On Sat, Jun 8, 2024 at 10:37 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> > > > >
> > > > > Zswap does not support storing or loading large folios. Until proper
> > > > > support is added, attempts to load large folios from zswap are a bug.
> > > > >
> > > > > For example, if a swapin fault observes that contiguous PTEs are
> > > > > pointing to contiguous swap entries and tries to swap them in as a large
> > > > > folio, swap_read_folio() will pass in a large folio to zswap_load(), but
> > > > > zswap_load() will only effectively load the first page in the folio. If
> > > > > the first page is not in zswap, the folio will be read from disk, even
> > > > > though other pages may be in zswap.
> > > > >
> > > > > In both cases, this will lead to silent data corruption. Proper support
> > > > > needs to be added before large folio swapins and zswap can work
> > > > > together.
> > > > >
> > > > > Looking at callers of swap_read_folio(), the folios passed in are
> > > > > allocated either in __read_swap_cache_async() or in do_swap_page() in
> > > > > the SWP_SYNCHRONOUS_IO path. Both allocate order-0 folios, so
> > > > > everything is fine for now.
> > > > >
> > > > > However, there is ongoing work to add support for large folio swapins
> > > > > [1]. To make sure new development does not break zswap (or get broken by
> > > > > zswap), add minimal handling of incorrect loads of large folios to
> > > > > zswap.
> > > > >
> > > > > First, move the call to folio_mark_uptodate() inside zswap_load().
> > > > >
> > > > > If a large folio load is attempted, and any page in that folio is in
> > > > > zswap, return 'true' without calling folio_mark_uptodate(). This will
> > > > > prevent the folio from being read from disk, and will emit an IO error
> > > > > because the folio is not uptodate (e.g. do_swap_page() will return
> > > > > VM_FAULT_SIGBUS). It may not be a reliable recovery in all cases, but it
> > > > > is better than nothing.
> > > > >
> > > > > This was tested by hacking the allocation in __read_swap_cache_async()
> > > > > to use order 2 and __GFP_COMP.
> > > > >
> > > > > In the future, to handle this correctly, the swapin code should:
> > > > > (a) Fall back to order-0 swapins if zswap was ever used on the machine,
> > > > > because compressed pages remain in zswap after it is disabled.
> > > > > (b) Add proper support to swap in large folios from zswap (fully or
> > > > > partially).
> > > > >
> > > > > Probably start with (a), then follow up with (b).
> > > > >
> > > > > [1] https://lore.kernel.org/linux-mm/20240304081348.197341-6-21cnbao@gmail.com/
> > > > >
> > > > > Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
> > > > > ---
> > > > >
> > > > > v1: https://lore.kernel.org/lkml/20240606184818.1566920-1-yosryahmed@google.com/
> > > > >
> > > > > v1 -> v2:
> > > > > - Instead of using VM_BUG_ON() use WARN_ON_ONCE() and add some recovery
> > > > > handling (David Hildenbrand).
> > > > >
> > > > > ---
> > > > > mm/page_io.c | 1 -
> > > > > mm/zswap.c | 22 +++++++++++++++++++++-
> > > > > 2 files changed, 21 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/mm/page_io.c b/mm/page_io.c
> > > > > index f1a9cfab6e748..8f441dd8e109f 100644
> > > > > --- a/mm/page_io.c
> > > > > +++ b/mm/page_io.c
> > > > > @@ -517,7 +517,6 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
> > > > > delayacct_swapin_start();
> > > > >
> > > > > if (zswap_load(folio)) {
> > > > > - folio_mark_uptodate(folio);
> > > > > folio_unlock(folio);
> > > > > } else if (data_race(sis->flags & SWP_FS_OPS)) {
> > > > > swap_read_folio_fs(folio, plug);
> > > > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > > > index b9b35ef86d9be..ebb878d3e7865 100644
> > > > > --- a/mm/zswap.c
> > > > > +++ b/mm/zswap.c
> > > > > @@ -1557,6 +1557,26 @@ bool zswap_load(struct folio *folio)
> > > > >
> > > > > VM_WARN_ON_ONCE(!folio_test_locked(folio));
> > > > >
> > > > > + /*
> > > > > + * Large folios should not be swapped in while zswap is being used, as
> > > > > + * they are not properly handled. Zswap does not properly load large
> > > > > + * folios, and a large folio may only be partially in zswap.
> > > > > + *
> > > > > + * If any of the subpages are in zswap, reading from disk would result
> > > > > + * in data corruption, so return true without marking the folio uptodate
> > > > > + * so that an IO error is emitted (e.g. do_swap_page() will sigfault).
> > > > > + *
> > > > > + * Otherwise, return false and read the folio from disk.
> > > > > + */
> > > > > + if (folio_test_large(folio)) {
> > > > > + if (xa_find(tree, &offset,
> > > > > + offset + folio_nr_pages(folio) - 1, XA_PRESENT)) {
> > > > > + WARN_ON_ONCE(1);
> > > > > + return true;
> > > > > + }
> > > > > + return false;
> > > >
> > > > IMHO, this appears to be over-designed. Personally, I would opt to
> > > > use
> > > >
> > > > if (folio_test_large(folio))
> > > > return true;
> > >
> > > I am sure you mean "return false" here. Always returning true means we
> > > will never read a large folio from either zswap or disk, whether it's
> > > in zswap or not. Basically guaranteeing data corruption for large
> > > folio swapins, even if zswap is disabled :)
> > >
> > > >
> > > > Before we address large folio support in zswap, it’s essential
> > > > not to let them coexist. Expecting valid data by lunchtime is
> > > > not advisable.
> > >
> > > The goal here is to enable development for large folio swapin without
> > > breaking zswap or being blocked on adding support in zswap. If we
> > > always return false for large folios, as you suggest, then even if the
> > > folio is in zswap (or parts of it), we will go read it from disk. This
> > > will result in silent data corruption.
> > >
> > > As you mentioned before, you spent a week debugging problems with your
> > > large folio swapin series because of a zswap problem, and even then,
> > > the zswap_is_enabled() check you had is not enough to prevent
> > > problems, as I mentioned before (if zswap was enabled before). So we
> > > need stronger checks to make sure we don't break things when we
> > > support large folio swapin.
> > >
> > > Since we can't just check if zswap is enabled or not, we need to
> > > rather check if the folio (or any part of it) is in zswap or not. We
> > > can only WARN in that case, but delivering the error to userspace is a
> > > couple of extra lines of code (not set uptodate), and will make the
> > > problem much easier to notice.
> > >
> > > I am not sure I understand what you mean. The alternative is to
> > > introduce a config option (perhaps internal) for large folio swapin,
> > > and make this depend on !CONFIG_ZSWAP, or make zswap refuse to get
> > > enabled if large folio swapin is enabled (through config or boot
> > > option). This is until proper handling is added, of course.
> >
> > Hi Yosry,
> > My point is that anybody attempting to do large folio swap-in should
> > either
> > 1. always use small folios if zswap has ever been enabled (before or now)
> > or
> > 2. address the large folio swap-in issues in zswap
> >
> > There is no 3rd way, but a 3rd way is what you are providing.
> >
> > It is over-designed to give users true or false based on whether the data
> > is in zswap, as there is always a chance the data could be in zswap. So
> > before approach 2 is done, we should always WARN_ON large folios and
> > report data corruption.
>
> We can't always WARN_ON for large folios, as this will fire even if
> zswap was never enabled. The alternative is tracking whether zswap was
> ever enabled, and checking that instead of checking if any part of the
> folio is in zswap.
>
> Basically replacing xa_find(..) with zswap_was_enabled(..) or something.
My point is that the mm core should always fall back,

if (zswap_was_or_is_enabled())
        goto fallback;

until zswap fixes the issue. This is the only way to enable large folio swap-in
development before we fix zswap.
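
Just to illustrate, a rough sketch of what I mean (zswap_was_or_is_enabled(),
zswap_ever_enabled and alloc_swap_folio() are hypothetical names here, not
existing code):

/* mm/zswap.c: sticky flag, set wherever zswap gets enabled, never cleared */
static bool zswap_ever_enabled;

bool zswap_was_or_is_enabled(void)
{
        return zswap_ever_enabled;
}

/* hypothetical large folio swap-in path, e.g. somewhere in mm/memory.c */
static struct folio *alloc_swap_folio(struct vm_fault *vmf)
{
        if (zswap_was_or_is_enabled())
                goto fallback;

        /*
         * ... here the future code would try larger orders and return a
         * large folio on success ...
         */

fallback:
        /* order-0 swap-in, as the code effectively does today */
        return folio_alloc(GFP_HIGHUSER_MOVABLE, 0);
}

The exact helper doesn't matter; the point is the check lives in the mm core,
and we simply never attempt a large folio swap-in once zswap has ever been in
use.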
>
> What I don't like about this is that we will report data corruption
> even in cases where data is not really corrupted and it exists on
> disk. For example, if zswap is globally enabled but disabled in a
> cgroup, there shouldn't be any corruption swapping in large folios.
>
> That being said, I don't feel strongly, as long as we either check
> that part of the folio is in zswap or that zswap was ever enabled (or
> maybe check if a page was ever stored, just in case zswap was enabled
> and immediately disabled).
>
> Johannes, Nhat, any opinions on which way to handle this?
Thanks
Barry