Message-ID: <20240608023654.3513385-1-yosryahmed@google.com>
Date: Sat,  8 Jun 2024 02:36:54 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, Baolin Wang <baolin.wang@...ux.alibaba.com>, 
	Barry Song <21cnbao@...il.com>, Chris Li <chrisl@...nel.org>, 
	Ryan Roberts <ryan.roberts@....com>, David Hildenbrand <david@...hat.com>, 
	Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	Yosry Ahmed <yosryahmed@...gle.com>
Subject: [PATCH v2] mm: zswap: handle incorrect attempts to load large folios

Zswap does not support storing or loading large folios. Until proper
support is added, attempts to load large folios from zswap are a bug.

For example, if a swapin fault observes that contiguous PTEs are
pointing to contiguous swap entries and tries to swap them in as a large
folio, swap_read_folio() will pass the large folio to zswap_load(), but
zswap_load() will only effectively load the first page in the folio,
leaving the remaining pages uninitialized. Conversely, if the first page
is not in zswap, the whole folio will be read from disk, even though
other pages may be in zswap.

In both cases, this will lead to silent data corruption. Proper support
needs to be added before large folio swapins and zswap can work
together.
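
To make the failure mode concrete, here is a rough sketch of what the
current zswap_load() amounts to when handed a large folio. This is a
paraphrased simplification, not the actual mm/zswap.c code; names and
signatures are approximate:

	/* Paraphrased pre-patch behavior, for illustration only. */
	bool zswap_load(struct folio *folio)
	{
		struct xarray *tree = swap_zswap_tree(folio->swap);
		pgoff_t offset = swp_offset(folio->swap);
		struct zswap_entry *entry;

		/* Only the folio's first page is looked up... */
		entry = xa_load(tree, offset);
		if (!entry)
			return false;	/* ...so the whole folio is read from disk */

		/*
		 * ...and only the first PAGE_SIZE bytes are decompressed,
		 * leaving pages 1..nr-1 of the folio uninitialized.
		 */
		zswap_decompress(entry, &folio->page);
		return true;
	}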

Looking at the callers of swap_read_folio(), the folios passed to it are
allocated either in __read_swap_cache_async() or in do_swap_page() on
the SWP_SYNCHRONOUS_IO path. Both allocate order-0 folios, so everything
is fine for now.

However, there is ongoing work to add support for large folio swapins
[1]. To make sure new development does not break zswap (or get broken by
zswap), add minimal handling of incorrect loads of large folios to
zswap.

First, move the call to folio_mark_uptodate() into zswap_load().

If a large folio load is attempted, and any page in that folio is in
zswap, return 'true' without calling folio_mark_uptodate(). This will
prevent the folio from being read from disk, and will emit an IO error
because the folio is not uptodate (e.g. do_swap_page() will return
VM_FAULT_SIGBUS). This may not be reliable recovery in all cases, but it
is better than nothing.
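
For reference, this is how the missing uptodate flag turns into a user
visible error: do_swap_page() already treats a folio that comes back
from swap_read_folio() without being marked uptodate as an IO error.
Paraphrased from mm/memory.c (exact context differs across versions):

	/* Paraphrased from do_swap_page(): */
	if (unlikely(!folio_test_uptodate(folio))) {
		ret = VM_FAULT_SIGBUS;
		goto out_nomap;
	}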

This was tested by hacking the allocation in __read_swap_cache_async()
to use order 2 and __GFP_COMP.
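
For illustration, such a hack could look roughly like the following.
The exact allocation call in __read_swap_cache_async() varies by kernel
version, so treat the call below as an assumption about the shape of
the change:

	/*
	 * Hypothetical test hack: force order-2 compound folios into the
	 * swapin path instead of the usual order-0 allocation.
	 */
	folio = (struct folio *)alloc_pages_mpol(gfp_mask | __GFP_COMP, 2,
						 mpol, ilx, numa_node_id());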

In the future, to handle this correctly, the swapin code should:
(a) Fall back to order-0 swapins if zswap has ever been used on the
machine, because compressed pages may remain in zswap after it is
disabled.
(b) Add proper support for swapping large folios in from zswap (fully
or partially).

Probably start with (a), then follow up with (b); a rough sketch of (a)
is below.
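
The sketch of (a) assumes a hypothetical zswap_never_enabled() helper
(no such helper exists today; it would need to latch once zswap is
first enabled, since entries can outlive disabling):

	/* Hypothetical check in the swapin allocation path: */
	if (!zswap_never_enabled())
		goto fallback_to_order0;	/* hypothetical label */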

[1] https://lore.kernel.org/linux-mm/20240304081348.197341-6-21cnbao@gmail.com/

Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
---

v1: https://lore.kernel.org/lkml/20240606184818.1566920-1-yosryahmed@google.com/

v1 -> v2:
- Instead of using VM_BUG_ON(), use WARN_ON_ONCE() and add some recovery
  handling (David Hildenbrand).

---
 mm/page_io.c |  1 -
 mm/zswap.c   | 22 +++++++++++++++++++++-
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index f1a9cfab6e748..8f441dd8e109f 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -517,7 +517,6 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
 	delayacct_swapin_start();
 
 	if (zswap_load(folio)) {
-		folio_mark_uptodate(folio);
 		folio_unlock(folio);
 	} else if (data_race(sis->flags & SWP_FS_OPS)) {
 		swap_read_folio_fs(folio, plug);
diff --git a/mm/zswap.c b/mm/zswap.c
index b9b35ef86d9be..ebb878d3e7865 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1557,6 +1557,26 @@ bool zswap_load(struct folio *folio)
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 
+	/*
+	 * Large folios should not be swapped in while zswap is being used, as
+	 * they are not properly handled. Zswap does not support loading large
+	 * folios, and a large folio may only be partially in zswap.
+	 *
+	 * If any of the subpages are in zswap, reading from disk would result
+	 * in data corruption, so return true without marking the folio uptodate
+	 * so that an IO error is emitted (e.g. do_swap_page() will SIGBUS).
+	 *
+	 * Otherwise, return false and read the folio from disk.
+	 */
+	if (folio_test_large(folio)) {
+		if (xa_find(tree, &offset,
+			    offset + folio_nr_pages(folio) - 1, XA_PRESENT)) {
+			WARN_ON_ONCE(1);
+			return true;
+		}
+		return false;
+	}
+
 	/*
 	 * When reading into the swapcache, invalidate our entry. The
 	 * swapcache can be the authoritative owner of the page and
@@ -1590,7 +1610,7 @@ bool zswap_load(struct folio *folio)
 		zswap_entry_free(entry);
 		folio_mark_dirty(folio);
 	}
-
+	folio_mark_uptodate(folio);
 	return true;
 }
 
-- 
2.45.2.505.gda0bf45e8d-goog

