Message-ID: <20240318102117.2839904-10-lee@kernel.org>
Date: Mon, 18 Mar 2024 10:21:21 +0000
From: Lee Jones <lee@...nel.org>
To: linux-cve-announce@...r.kernel.org
Cc: Lee Jones <lee@...nel.org>
Subject: CVE-2024-26639: mm, kmsan: fix infinite recursion due to RCU critical section
Description
===========

In the Linux kernel, the following vulnerability has been resolved:

mm, kmsan: fix infinite recursion due to RCU critical section

Alexander Potapenko writes in [1]: "For every memory access in the code
instrumented by KMSAN we call kmsan_get_metadata() to obtain the metadata
for the memory being accessed. For virtual memory the metadata pointers
are stored in the corresponding `struct page`, therefore we need to call
virt_to_page() to get them.

According to the comment in arch/x86/include/asm/page.h,
virt_to_page(kaddr) returns a valid pointer iff virt_addr_valid(kaddr) is
true, so KMSAN needs to call virt_addr_valid() as well.

To avoid recursion, kmsan_get_metadata() must not call instrumented code,
therefore ./arch/x86/include/asm/kmsan.h forks parts of
arch/x86/mm/physaddr.c to check whether a virtual address is valid or not.

But the introduction of rcu_read_lock() to pfn_valid() added instrumented
RCU API calls to virt_to_page_or_null(), which is called by
kmsan_get_metadata(), so there is an infinite recursion now. I do not
think it is correct to stop that recursion by doing
kmsan_enter_runtime()/kmsan_exit_runtime() in kmsan_get_metadata(): that
would prevent instrumented functions called from within the runtime from
tracking the shadow values, which might introduce false positives."
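
Pieced together from the description above and the affected files listed
below, the recursive call path looks roughly like this (a simplified
illustration; the intermediate step through kmsan_virt_addr_valid(), the
helper in arch/x86/include/asm/kmsan.h that forks the physaddr.c checks,
is inferred rather than spelled out in the description):

/*
 * kmsan_get_metadata()
 *   -> virt_to_page_or_null()
 *     -> kmsan_virt_addr_valid()      (forked physaddr.c checks)
 *       -> pfn_valid()                (include/linux/mmzone.h)
 *         -> rcu_read_lock()          (instrumented RCU API call)
 *           -> KMSAN hook on each memory access in that RCU code
 *             -> kmsan_get_metadata() (back to the start)
 */
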
Fix the issue by switching pfn_valid() to the _sched() variant of
rcu_read_lock/unlock(), which does not require calling into RCU. Given
the critical section in pfn_valid() is very small, this is a reasonable
trade-off (with preemptible RCU).
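
The mmzone.h side of the change can be sketched as follows. This is a
condensed paraphrase of the sparsemem pfn_valid() rather than the
verbatim patch (the upfront pfn sanity checks are abbreviated); see the
commits listed below for the authoritative diff:

static inline int pfn_valid(unsigned long pfn)
{
        struct mem_section *ms;
        int ret;

        if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                return 0;
        ms = __pfn_to_section(pfn);

        /*
         * The _sched() flavour only disables preemption and never
         * calls into the (instrumented) RCU core, so KMSAN's metadata
         * lookup can safely reach this code.
         */
        rcu_read_lock_sched();
        if (!valid_section(ms)) {
                rcu_read_unlock_sched();
                return 0;
        }
        ret = early_section(ms) || pfn_section_valid(ms, pfn);
        rcu_read_unlock_sched();

        return ret;
}
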
KMSAN further needs to be careful to suppress calls into the scheduler,
which would be another source of recursion. This can be done by wrapping
the call to pfn_valid() into preempt_disable/enable_no_resched(). The
downside is that this can occasionally break scheduling guarantees; however,
a kernel compiled with KMSAN has already given up any performance
guarantees due to being heavily instrumented.
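
On the KMSAN side, the wrapper in arch/x86/include/asm/kmsan.h ends up
shaped roughly like this (again a sketch rather than the verbatim patch;
the address-translation logic mirrors arch/x86/mm/physaddr.c):

static inline bool kmsan_virt_addr_valid(void *addr)
{
        unsigned long x = (unsigned long)addr;
        unsigned long y = x - __START_KERNEL_map;
        bool ret;

        /* use the carry flag to determine if x was < __START_KERNEL_map */
        if (unlikely(x > y)) {
                x = y + phys_base;
                if (y >= KERNEL_IMAGE_SIZE)
                        return false;
        } else {
                x = y + (__START_KERNEL_map - PAGE_OFFSET);
                /* carry flag will be set if starting x was >= PAGE_OFFSET */
                if ((x > y) || !(x & (1UL << __VIRTUAL_MASK_SHIFT)))
                        return false;
        }

        /*
         * Leaving the RCU-sched critical section in pfn_valid() may
         * call into the scheduler, which would recurse into KMSAN.
         * Disable preemption around the call and re-enable it without
         * rescheduling.
         */
        preempt_disable();
        ret = pfn_valid(x >> PAGE_SHIFT);
        preempt_enable_no_resched();

        return ret;
}

The use of preempt_enable_no_resched() rather than preempt_enable() is
what suppresses the scheduler entry mentioned above.
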
Note, KMSAN code already disables tracing via Makefile, and since mmzone.h
is included, it is not necessary to use the notrace variant, which is
generally preferred in all other cases.

The Linux kernel CVE team has assigned CVE-2024-26639 to this issue.

Affected and fixed versions
===========================

Issue introduced in 6.1.76 with commit 68ed9e333240 and fixed in 6.1.77 with commit dc904345e377
Issue introduced in 6.6.15 with commit 70064241f222 and fixed in 6.6.16 with commit 6335c0cdb2ea
Issue introduced in 6.7.3 with commit 3a01daace71b and fixed in 6.7.4 with commit 5a33420599fa

Please see https://www.kernel.org for a full list of currently supported
kernel versions by the kernel community.

Unaffected versions might change over time as fixes are backported to
older supported kernel versions. The official CVE entry at
https://cve.org/CVERecord/?id=CVE-2024-26639
will be updated if fixes are backported; please check it for the most
up-to-date information about this issue.

Affected files
==============

The file(s) affected by this issue are:
arch/x86/include/asm/kmsan.h
include/linux/mmzone.h

Mitigation
==========

The Linux kernel CVE team recommends that you update to the latest
stable kernel version for this, and many other bugfixes. Individual
changes are never tested alone, but rather are part of a larger kernel
release. Cherry-picking individual commits is not recommended or
supported by the Linux kernel community at all. If, however, updating to
the latest release is impossible, the individual changes to resolve this
issue can be found at these commits:
https://git.kernel.org/stable/c/dc904345e3771aa01d0b8358b550802fdc6fe00b
https://git.kernel.org/stable/c/6335c0cdb2ea0ea02c999e04d34fd84f69fb27ff
https://git.kernel.org/stable/c/5a33420599fa0288792537e6872fd19cc8607ea6