Message-ID: <CAG_fn=UuC=d+jJOor1qMYjP48=mhSf7y=s=gwj6APaFroGqQdA@mail.gmail.com>
Date: Wed, 20 Mar 2024 13:06:33 +0100
From: Alexander Potapenko <glider@...gle.com>
To: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kasan-dev@...glegroups.com, tglx@...utronix.de,
x86@...nel.org, Linus Torvalds <torvalds@...ux-foundation.org>,
Dmitry Vyukov <dvyukov@...gle.com>, Marco Elver <elver@...gle.com>
Subject: Re: [PATCH v1 3/3] x86: call instrumentation hooks from copy_mc.c
On Wed, Mar 20, 2024 at 11:40 AM Tetsuo Handa
<penguin-kernel@...ove.sakura.ne.jp> wrote:
>
> On 2024/03/20 18:29, Alexander Potapenko wrote:
> > But for KASAN/KCSAN we can afford more aggressive checks.
> > First, if we postpone them until after the actual memory accesses
> > happen, the kernel may panic on the invalid access without a decent
> > error report.
> > Second, even if in a particular case only `len-ret` bytes were copied,
> > the caller probably expected both `src` and `dst` to have `len`
> > addressable bytes.
> > Checking for the whole length in this case is more likely to detect a
> > real error than produce a false positive.
>
> So KASAN/KCSAN care about whether the requested address range is
> accessible, but not about whether that range was actually accessed?
I am not exactly sure under which circumstances copy_mc() may fail,
but let's consider how copy_to_user() is handled.
In instrument_copy_to_user()
(https://elixir.bootlin.com/linux/latest/source/include/linux/instrumented.h#L110)
we check the whole ranges before the copy is performed.
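For reference, at the time of writing that helper boils down to
roughly the following (a sketch based on the linked
include/linux/instrumented.h; exact details vary across kernel
versions):

static __always_inline void
instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	/* Check the whole source range before a single byte is copied. */
	kasan_check_read(from, n);
	kcsan_check_read(from, n);
	kmsan_copy_to_user(to, from, n, 0);
}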
Assume there is buggy code in the kernel that allocates N bytes for
some buffer and then copies N+1 bytes from that buffer to userspace.
If we are (un)lucky enough, the userspace code may always allocate
the destination buffer in a way that prevents copy_to_user() from
copying more than N bytes.
Yet it is possible to provide a userspace buffer that is big enough to
trigger an OOB read in the kernel, and reporting this issue is usually
the right thing to do, even if it does not occur during testing.
Moreover, if we decide to perform the check after copying the data,
and the OOB read happens to crash the kernel even though dst can
receive N+1 bytes, we'll get a plain panic instead of a KASAN report.
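To make the bug class concrete, here is a hypothetical, simplified
example (buggy_read() and BUF_LEN are made up for illustration, not
taken from any real driver):

#include <linux/slab.h>
#include <linux/uaccess.h>

#define BUF_LEN 64

/* The kernel buffer holds BUF_LEN bytes, but we copy BUF_LEN+1, so
 * the read of kbuf overruns the allocation by one byte.  Checking
 * the full source range up front lets KASAN report the OOB read even
 * if every caller's ubuf happens to be too small for the extra byte
 * to ever be copied. */
static long buggy_read(char __user *ubuf)
{
	char *kbuf = kzalloc(BUF_LEN, GFP_KERNEL);
	long ret = 0;

	if (!kbuf)
		return -ENOMEM;
	if (copy_to_user(ubuf, kbuf, BUF_LEN + 1)) /* OOB read of kbuf */
		ret = -EFAULT;
	kfree(kbuf);
	return ret;
}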
>
> By the way, we have the same problem for copy_page() and I was thinking about
> https://lkml.kernel.org/r/1a817eb5-7cd8-44d6-b409-b3bc3f377cb9@I-love.SAKURA.ne.jp .
> But given that instrument_memcpy_{before,after} are added,
> how do we want to use instrument_memcpy_{before,after} for copy_page() ?
> Should we rename the assembly version of copy_page() so that we don't
> need to use tricky wrapping like below?
I think renaming the assembly version and providing a `static inline
void copy_page()` in arch/x86/include/asm/page_64.h should be cleaner,
but it is up to x86 people to decide.
The patch below seems to work:
============================================
diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index cc6b8e087192e..70ee3da32397e 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -8,6 +8,7 @@
 #include <asm/cpufeatures.h>
 #include <asm/alternative.h>
 
+#include <linux/instrumented.h>
 #include <linux/kmsan-checks.h>
 
 /* duplicated to the one in bootmem.h */
@@ -58,7 +59,14 @@ static inline void clear_page(void *page)
 			   : "cc", "memory", "rax", "rcx");
 }
 
-void copy_page(void *to, void *from);
+void copy_page_asm(void *to, void *from);
+
+static inline void copy_page(void *to, void *from)
+{
+	instrument_memcpy_before(to, from, PAGE_SIZE);
+	copy_page_asm(to, from);
+	instrument_memcpy_after(to, from, PAGE_SIZE, 0);
+}
 
 #ifdef CONFIG_X86_5LEVEL
 /*
diff --git a/arch/x86/lib/copy_page_64.S b/arch/x86/lib/copy_page_64.S
index d6ae793d08faf..e65b70406d48a 100644
--- a/arch/x86/lib/copy_page_64.S
+++ b/arch/x86/lib/copy_page_64.S
@@ -13,13 +13,13 @@
  * prefetch distance based on SMP/UP.
  */
 	ALIGN
-SYM_FUNC_START(copy_page)
+SYM_FUNC_START(copy_page_asm)
 	ALTERNATIVE "jmp copy_page_regs", "", X86_FEATURE_REP_GOOD
 	movl	$4096/8, %ecx
 	rep	movsq
 	RET
-SYM_FUNC_END(copy_page)
-EXPORT_SYMBOL(copy_page)
+SYM_FUNC_END(copy_page_asm)
+EXPORT_SYMBOL(copy_page_asm)
 
 SYM_FUNC_START_LOCAL(copy_page_regs)
 	subq	$2*8, %rsp
============================================
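For context, the instrument_memcpy_{before,after}() hooks used in the
wrapper above come from the earlier patches in this series;
conceptually they look roughly like this (a sketch, see the actual
series for the final form):

static __always_inline void instrument_memcpy_before(void *to, const void *from,
						     unsigned long n)
{
	/* KASAN/KCSAN check both full ranges before the asm copy runs. */
	kasan_check_write(to, n);
	kasan_check_read(from, n);
	kcsan_check_write(to, n);
	kcsan_check_read(from, n);
}

static __always_inline void instrument_memcpy_after(void *to, const void *from,
						    unsigned long n, unsigned long left)
{
	/* KMSAN propagates initialization state for the bytes actually copied. */
	kmsan_memmove(to, from, n - left);
}

Passing 0 as the last argument of instrument_memcpy_after() in
copy_page() states that no bytes were left uncopied, which holds
because copy_page(), unlike copy_mc, always copies the whole page.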