Message-ID: <ZwT6_gzV2evijOGK@elver.google.com>
Date: Tue, 8 Oct 2024 11:27:26 +0200
From: Marco Elver <elver@...gle.com>
To: Sabyrzhan Tasbolatov <snovitoll@...il.com>
Cc: akpm@...ux-foundation.org, andreyknvl@...il.com, bpf@...r.kernel.org,
dvyukov@...gle.com, glider@...gle.com, kasan-dev@...glegroups.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
ryabinin.a.a@...il.com,
syzbot+61123a5daeb9f7454599@...kaller.appspotmail.com,
vincenzo.frascino@....com
Subject: Re: [PATCH v2 1/1] mm, kasan, kmsan: copy_from/to_kernel_nofault
On Tue, Oct 08, 2024 at 01:46PM +0500, Sabyrzhan Tasbolatov wrote:
> On Tue, Oct 8, 2024 at 1:32 PM Marco Elver <elver@...gle.com> wrote:
> >
> > On Sat, Oct 05, 2024 at 09:48PM +0500, Sabyrzhan Tasbolatov wrote:
> > > Instrument copy_from_kernel_nofault() with KMSAN to check for reads of
> > > uninitialized kernel memory, and copy_to_kernel_nofault() with KASAN and
> > > KCSAN to detect memory corruption.
> > >
> > > syzbot reported that the bpf_probe_read_kernel() helper triggered a KASAN
> > > report via kasan_check_range(), which is not the expected behaviour, as
> > > copy_from_kernel_nofault() is meant to be a non-faulting helper.
> > >
> > > The solution, suggested by Marco Elver, is to replace the KASAN/KCSAN check
> > > in copy_from_kernel_nofault() with KMSAN detection of copying uninitialized
> > > kernel memory. In copy_to_kernel_nofault() we can retain
> > > instrument_write() for the memory corruption instrumentation but before
> > > pagefault_disable().
> >
> > I don't understand why it has to be before the whole copy i.e. before
> > pagefault_disable()?
> >
>
> I was unsure about this decision as well - I should have waited for your
> response to my request for clarification before sending the PATCH. Sorry for
> the confusion; I thought the instrumentation you meant was already done after
> pagefault_disable().
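
For reference, my reading of the copy_from_kernel_nofault() half - "KMSAN
detection of copying uninitialized kernel memory" - is roughly the following
(reconstructed from the changelog, not quoting the actual v2 hunk;
kmsan_check_memory() is the generic KMSAN range check from
<linux/kmsan-checks.h> that reports uninitialized data in a range):

long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
{
	if (!copy_from_kernel_nofault_allowed(src, size))
		return -ERANGE;

	/* Report reads of uninitialized kernel memory to KMSAN. */
	kmsan_check_memory(src, size);

	pagefault_disable();
	/* ... existing copy_from_kernel_nofault_loop() calls, unchanged,
	 * jumping to Efault on a fault ... */
	pagefault_enable();
	return 0;
Efault:
	pagefault_enable();
	return -EFAULT;
}
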
I just did some digging and there is some existing instrumentation, but
not for what we want. The accesses in the loop on x86 do this:
copy_to_kernel_nofault:

#define __put_kernel_nofault(dst, src, type, err_label) \
	__put_user_size(*((type *)(src)), (__force type __user *)(dst), \
			sizeof(type), err_label)
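
For context, the loop that gets there is copy_to_kernel_nofault_loop() in
mm/maccess.c, which (quoting roughly from memory) is:

#define copy_to_kernel_nofault_loop(dst, src, len, type, err_label) \
	while (len >= sizeof(type)) { \
		__put_kernel_nofault(dst, src, type, err_label); \
		dst += sizeof(type); \
		src += sizeof(type); \
		len -= sizeof(type); \
	}

so every 1/2/4/8-byte store funnels through __put_kernel_nofault().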

and __put_user_size:

#define __put_user_size(x, ptr, size, label) \
do { \
	__typeof__(*(ptr)) __x = (x); /* eval x once */ \
	__typeof__(ptr) __ptr = (ptr); /* eval ptr once */ \
	__chk_user_ptr(__ptr); \
	switch (size) { \
	case 1: \
		__put_user_goto(__x, __ptr, "b", "iq", label); \
		break; \
	case 2: \
		__put_user_goto(__x, __ptr, "w", "ir", label); \
		break; \
	case 4: \
		__put_user_goto(__x, __ptr, "l", "ir", label); \
		break; \
	case 8: \
		__put_user_goto_u64(__x, __ptr, label); \
		break; \
	default: \
		__put_user_bad(); \
	} \
	instrument_put_user(__x, __ptr, size); \
} while (0)

which already has an instrument_put_user, which expands to this:

#define instrument_put_user(from, ptr, size) \
({ \
	kmsan_copy_to_user(ptr, &from, sizeof(from), 0); \
})

So this is already instrumented for KMSAN, to check no uninitialized
memory is accessed - but that's only useful if copying to user space.
__put_kernel_nofault is "abusing" the same helper to copy to the kernel,
so adding explicit instrumentation as proposed still makes sense.
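
I.e., something along these lines in mm/maccess.c (sketch only - the current
function body paraphrased from memory, plus the proposed call; whether the
check belongs before pagefault_disable() or right next to the loops is
exactly what I'm asking about above):

long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
{
	unsigned long align = 0;

	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
		align = (unsigned long)dst | (unsigned long)src;

	/*
	 * Proposed explicit KASAN/KCSAN check of the kernel destination
	 * (instrument_write() from <linux/instrumented.h>); v2 places it
	 * here, before pagefault_disable().
	 */
	instrument_write(dst, size);

	pagefault_disable();
	if (!(align & 7) && size >= 8)
		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
	if (!(align & 3) && size >= 4)
		copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
	if (!(align & 1) && size >= 2)
		copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
	copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
	pagefault_enable();
	return 0;
Efault:
	pagefault_enable();
	return -EFAULT;
}
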
Thanks,
-- Marco