Message-Id: <20230817-free_pcppages_bulk-v1-1-c14574a9f80c@kernel.org>
Date:   Thu, 17 Aug 2023 23:05:23 -0700
From:   Chris Li <chrisl@...nel.org>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Kemeng Shi <shikemeng@...weicloud.com>
Cc:     akpm@...ux-foundation.org, baolin.wang@...ux.alibaba.com,
        mgorman@...hsingularity.net, Michal Hocko <mhocko@...e.com>,
        david@...hat.com, willy@...radead.org, linux-mm@...ck.org,
        Namhyung Kim <namhyung@...gle.com>,
        Greg Thelen <gthelen@...gle.com>, linux-kernel@...r.kernel.org,
        Chris Li <chrisl@...nel.org>,
        John Sperbeck <jsperbeck@...gle.com>
Subject: [PATCH RFC 1/2] mm/page_alloc: safeguard free_pcppages_bulk

The current free_pcppages_bulk() can panic when pcp->count
is changed outside of this function by a BPF program
injected at an ftrace function entry.

Commit c66a36af7ba3a628 fixed this on the BPF program side
by not allocating memory inside the spinlock.

But the kernel can still panic when a similar BPF program
without that fix is loaded. Here are the steps to reproduce it:

$ git checkout 19030564ab116757e32
$ cd tools/perf
$ make perf
$ ./perf lock con -ab -- ./perf bench sched messaging

You should be able to see the kernel panic within 20 seconds.

Here is what happened in the panic:

count = min(pcp->count, count);

free_pcppages_bulk() assumes "count" and pcp->count are in
sync, i.e. that pcp->count does not change outside of this function.

That assumption gets broken when the BPF lock contention code
allocates memory inside the spinlock: pcp->count ends up one
less than "count". The loop only checks against "count" and
runs into a deadloop once pcp->count drops to zero and all
lists are empty. In the deadloop, pindex_min can grow bigger
than pindex_max, and pindex_max can drop below zero. The kernel
panics when the resulting pindex is used to access outside the
range of pcp->lists.
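
For illustration, here is a minimal user-space simulation of that
scan (a sketch written for this mail, not kernel code; the list
size, list contents, and the pindex_min/pindex_max names are made
up to match the description above). "count" claims one more page
than the lists actually hold, and pindex leaves the array within
a few iterations:

/*
 * Illustrative sketch only: "count" claims two pages but the
 * lists hold one, mirroring a pcp->count that went stale when
 * the BPF hook allocated memory under the spinlock.
 */
#include <stdio.h>

#define NR_PCP_LISTS 4

int main(void)
{
	int lists[NR_PCP_LISTS] = { 0, 1, 0, 0 }; /* one page in list 1 */
	int count = 2;			/* stale: one more than reality */
	int pindex = 0;
	int pindex_min = 0, pindex_max = NR_PCP_LISTS - 1;

	while (count > 0) {
		/* Round-robin list scan as in free_pcppages_bulk() */
		for (;;) {
			if (++pindex > pindex_max)
				pindex = pindex_min;
			if (pindex < 0 || pindex >= NR_PCP_LISTS) {
				printf("pindex %d out of range (min %d, max %d)\n",
				       pindex, pindex_min, pindex_max);
				return 1; /* the kernel dereferences here */
			}
			if (lists[pindex] > 0)
				break;
			if (pindex == pindex_max)
				pindex_max--;
			if (pindex == pindex_min)
				pindex_min++;
		}
		lists[pindex]--;	/* "free" one page */
		count--;
	}
	return 0;
}

Once every list is empty while "count" is still positive,
pindex_max keeps shrinking and pindex_min keeps growing until the
computed pindex falls outside the array; in the kernel,
&pcp->lists[pindex] then points past the end of pcp->lists and
the subsequent list operations crash.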

Notice that this is just one of the (buggy) BPF programs that
can break it. Besides the spinlock, other function tracepoints
reachable under free_pcppages_bulk() can be hooked up to a BPF
program which allocates memory and changes pcp->count.

One argument is that BPF programs should not allocate memory
under the spinlock. On the other hand, the kernel can simply
check pcp->count inside the loop to avoid the panic.

Signed-off-by: Chris Li <chrisl@...nel.org>
Reported-by: John Sperbeck <jsperbeck@...gle.com>
---
 mm/page_alloc.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1eb3864e1dbc7..347cb93081a02 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1215,12 +1215,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	bool isolated_pageblocks;
 	struct page *page;
 
-	/*
-	 * Ensure proper count is passed which otherwise would stuck in the
-	 * below while (list_empty(list)) loop.
-	 */
-	count = min(pcp->count, count);
-
 	/* Ensure requested pindex is drained first. */
 	pindex = pindex - 1;
 
@@ -1266,7 +1260,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 
 			__free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
 			trace_mm_page_pcpu_drain(page, order, mt);
-		} while (count > 0 && !list_empty(list));
+		} while (count > 0 && pcp->count > 0 && !list_empty(list));
 	}
 
 	spin_unlock_irqrestore(&zone->lock, flags);

-- 
2.42.0.rc1.204.g551eb34607-goog
