Message-Id: <20231215081204.8802-1-qiuxu.zhuo@intel.com>
Date: Fri, 15 Dec 2023 16:12:04 +0800
From: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
To: naoya.horiguchi@....com
Cc: linmiaohe@...wei.com,
	akpm@...ux-foundation.org,
	tony.luck@...el.com,
	ying.huang@...el.com,
	qiuxu.zhuo@...el.com,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH 1/1] mm: memory-failure: Re-split hw-poisoned huge page on -EAGAIN

While a hw-poisoned huge page is being split, threads within the affected
process can increase the reference count of the huge page, causing the
split of the hw-poisoned huge page to fail with -EAGAIN.

This issue can be reproduced by injecting a memory error into a
multi-threaded process so that the error lands within a huge page.
The call path that returned -EAGAIN during the testing is shown below:

  memory_failure()
    try_to_split_thp_page()
      split_huge_page()
        split_huge_page_to_list() {
          ...
          Step A: can_split_folio() - Checked that the thp can be split.
          Step B: unmap_folio()
          Step C: folio_ref_freeze() - Failed and returned -EAGAIN.
          ...
        }

The testing logs indicated that some huge pages were split successfully
via the call path above (Step C succeeded for them). However, other huge
pages failed to split because Step C failed, and for those the reference
count of the huge page was observed to increase between Step A and Step C.
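
For illustration, a minimal user-space sketch of this kind of reproduction
could look like the one below. This is an assumption about the test setup,
not the injector actually used in the testing above: it relies on
madvise(MADV_HWPOISON) (CAP_SYS_ADMIN, CONFIG_MEMORY_FAILURE) to feed one
base page of an anonymous THP into memory_failure() while other threads
keep touching the huge page, which is what keeps raising its refcount:

  /*
   * Hypothetical reproducer sketch (assumed setup, not the injector
   * actually used): fault in an anonymous THP, keep a few threads
   * writing to it, then poison one base page of it via
   * madvise(MADV_HWPOISON). Requires CAP_SYS_ADMIN,
   * CONFIG_MEMORY_FAILURE and THP enabled; build with -pthread.
   */
  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define THP_SIZE (2UL << 20)            /* assume 2 MiB huge pages */

  static char *thp;

  static void *toucher(void *arg)
  {
          for (;;)                        /* keep referencing the huge page */
                  memset(thp, 0xa5, THP_SIZE);
          return NULL;
  }

  int main(void)
  {
          pthread_t tid[4];
          char *raw;
          int i;

          /* Map 2 * THP_SIZE so an aligned 2 MiB range is available. */
          raw = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (raw == MAP_FAILED)
                  return 1;
          thp = (char *)(((unsigned long)raw + THP_SIZE - 1) & ~(THP_SIZE - 1));

          madvise(thp, THP_SIZE, MADV_HUGEPAGE); /* ask for a THP */
          memset(thp, 0, THP_SIZE);              /* fault it in */

          for (i = 0; i < 4; i++)
                  pthread_create(&tid[i], NULL, toucher, NULL);

          /* Inject a memory error into one base page inside the huge page. */
          if (madvise(thp + getpagesize(), getpagesize(), MADV_HWPOISON))
                  perror("madvise(MADV_HWPOISON)");

          pause();
          return 0;
  }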

Testing has shown that after receiving -EAGAIN, simply re-splitting the
hw-poisoned huge page within memory_failure() always results in the same
-EAGAIN. This is because memory_failure() runs in the context of the
affected process: before that process exits memory_failure() and is
terminated, its threads can still increase the reference count of the
hw-poisoned page.

To address this issue, use a kernel worker to re-split the hw-poisoned
huge page. By the time this worker begins re-splitting, the affected
process has already been terminated, so its threads can no longer increase
the reference count. Experimental results have consistently shown that the
worker successfully re-splits these hw-poisoned huge pages on its first
attempt.
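
For reference, with the constants introduced in the patch below
(SPLIT_THP_INIT_DELAYED_MS = 1, SPLIT_THP_MAX_RETRY_CNT = 10), the worker's
exponential backoff allows at most 10 split attempts with delays of
1 ms, 2 ms, 4 ms, ..., 512 ms (about 1 s cumulative) before the failure is
finally reported.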

The kernel log (before):
  [ 1116.862895] Memory failure: 0x4097fa7: recovery action for unsplit thp: Ignored

The kernel log (after):
  [  793.573536] Memory failure: 0x2100dda: recovery action for unsplit thp: Delayed
  [  793.574666] Memory failure: 0x2100dda: split unsplit thp successfully.

Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
---
 mm/memory-failure.c | 73 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 71 insertions(+), 2 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 660c21859118..0db4cf712a78 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -72,6 +72,60 @@ atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
 
 static bool hw_memory_failure __read_mostly = false;
 
+#define SPLIT_THP_MAX_RETRY_CNT		10
+#define SPLIT_THP_INIT_DELAYED_MS	1
+
+static bool split_thp_pending;
+
+struct split_thp_req {
+	struct delayed_work work;
+	struct page *thp;
+	int retries;
+};
+
+static void split_thp_work_fn(struct work_struct *work)
+{
+	struct split_thp_req *req = container_of(work, typeof(*req), work.work);
+	int ret;
+
+	/* Split the thp. */
+	get_page(req->thp);
+	lock_page(req->thp);
+	ret = split_huge_page(req->thp);
+	unlock_page(req->thp);
+	put_page(req->thp);
+
+	/* Retry with an exponential backoff. */
+	if (ret && ++req->retries < SPLIT_THP_MAX_RETRY_CNT) {
+		schedule_delayed_work(to_delayed_work(work),
+				      msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS << req->retries));
+		return;
+	}
+
+	pr_err("%#lx: split unsplit thp %ssuccessfully.\n", page_to_pfn(req->thp), ret ? "un" : "");
+	kfree(req);
+	split_thp_pending = false;
+}
+
+static bool split_thp_delayed(struct page *thp)
+{
+	struct split_thp_req *req;
+
+	if (split_thp_pending)
+		return false;
+
+	req = kmalloc(sizeof(*req), GFP_ATOMIC);
+	if (!req)
+		return false;
+
+	req->thp = thp;
+	req->retries = 0;
+	INIT_DELAYED_WORK(&req->work, split_thp_work_fn);
+	split_thp_pending = true;
+	schedule_delayed_work(&req->work, msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS));
+	return true;
+}
+
 static DEFINE_MUTEX(mf_mutex);
 
 void num_poisoned_pages_inc(unsigned long pfn)
@@ -2275,8 +2329,23 @@ int memory_failure(unsigned long pfn, int flags)
 		 * page is a valid handlable page.
 		 */
 		SetPageHasHWPoisoned(hpage);
-		if (try_to_split_thp_page(p) < 0) {
-			res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
+		res = try_to_split_thp_page(p);
+		if (res < 0) {
+			/*
+			 * Re-attempting try_to_split_thp_page() here could consistently
+			 * yield -EAGAIN, as the threads of the process may increment the
+			 * reference count of the huge page before the process exits
+			 * memory_failure() and terminates.
+			 *
+			 * Employ the kernel worker to re-split the huge page. By the time
+			 * this worker initiates the re-splitting process, the affected
+			 * process has already been terminated, preventing its threads from
+			 * incrementing the reference count.
+			 */
+			if (res == -EAGAIN && split_thp_delayed(p))
+				res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_DELAYED);
+			else
+				res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
 			goto unlock_mutex;
 		}
 		VM_BUG_ON_PAGE(!page_count(p), p);
-- 
2.17.1

