Message-ID: <CANpmjNMAVFzqnCZhEity9cjiqQ9CVN1X7qeeeAp_6yKjwKo8iw@mail.gmail.com>
Date: Wed, 2 Oct 2024 17:59:32 +0200
From: Marco Elver <elver@...gle.com>
To: Sabyrzhan Tasbolatov <snovitoll@...il.com>
Cc: ryabinin.a.a@...il.com, glider@...gle.com, andreyknvl@...il.com, 
	dvyukov@...gle.com, vincenzo.frascino@....com, akpm@...ux-foundation.org, 
	kasan-dev@...glegroups.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: instrument copy_from/to_kernel_nofault

On Fri, 27 Sept 2024 at 17:14, Sabyrzhan Tasbolatov <snovitoll@...il.com> wrote:
>
> Instrument copy_from_kernel_nofault(), copy_to_kernel_nofault()
> with instrument_memcpy_before() for KASAN, KCSAN checks and
> instrument_memcpy_after() for KMSAN.

There's a fundamental problem with instrumenting
copy_from_kernel_nofault(): it is meant to be a non-faulting helper,
i.e. it may be asked to read arbitrary kernel addresses, and that is
fine because it will neither fault nor BUG. These helpers may be used
in places that probe random memory, so KASAN may consider some of that
memory invalid and generate a report - but in reality that is not a
problem.
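
To make that concrete, here is a purely hypothetical caller (not taken
from the tree) of the kind these helpers exist for - the address may be
completely bogus, and the only contract is an error return instead of a
fault or a report:

	/*
	 * Hypothetical example: probe whether a word at @addr is readable.
	 * copy_from_kernel_nofault() returns -ERANGE if the range is not
	 * allowed and -EFAULT if the access faulted; neither case should
	 * produce a KASAN report.
	 */
	static bool kernel_word_readable(const void *addr)
	{
		unsigned long word;

		return copy_from_kernel_nofault(&word, addr, sizeof(word)) == 0;
	}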

In the Bugzilla bug, Andrey wrote:

> KASAN should check both arguments of copy_from/to_kernel_nofault() for accessibility when both are fault-safe.

I don't see this patch doing that, or at least it's not explained.
Looking at the code, it calls instrument_memcpy_before() right after
pagefault_disable(), i.e. before we know whether the access faults at
all, which tells me that KASAN and the other tools will complain even
when a page is simply not faulted in. These helpers are meant to be
usable exactly like that - despite their inherent unsafety, there is
little I see that KASAN can help with here.
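
For context - paraphrasing include/linux/instrumented.h from memory, so
the exact details may be slightly off - the before-hook asserts that
both ranges are valid before a single byte is accessed, and only the
after-hook is the KMSAN part:

	static __always_inline void instrument_memcpy_before(void *to, const void *from,
							     unsigned long n)
	{
		/* Fires before the access, even where the nofault copy
		 * would have handled a fault gracefully. */
		kasan_check_write(to, n);
		kasan_check_read(from, n);
		kcsan_check_write(to, n);
		kcsan_check_read(from, n);
	}

	static __always_inline void instrument_memcpy_after(void *to, const void *from,
							    unsigned long n, unsigned long left)
	{
		/* KMSAN only: propagate initialized-ness of the copied bytes. */
		kmsan_memmove(to, from, n - left);
	}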

What _might_ be useful is detecting the copying of faulted-in but
uninitialized memory to user space. So I think the only
instrumentation we want to retain is the KMSAN instrumentation of the
copy_from_kernel_nofault() helper, and only if no fault was
encountered.
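
Roughly the shape I have in mind - an untested sketch, and note that
the copy loops advance their arguments, so the original values (dst0,
src0, size0 below, names made up here) have to be saved for the hook:

	long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
	{
		unsigned long align = 0;
		/* The copy loops advance dst/src and consume size. */
		void *dst0 = dst;
		const void *src0 = src;
		size_t size0 = size;

		if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
			align = (unsigned long)dst | (unsigned long)src;

		if (!copy_from_kernel_nofault_allowed(src, size))
			return -ERANGE;

		pagefault_disable();
		/* No check before the access: a bad src must stay silent. */
		if (!(align & 7))
			copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
		if (!(align & 3))
			copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
		if (!(align & 1))
			copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
		copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
		/*
		 * Only reached when no fault was encountered: let KMSAN
		 * propagate the initialized-ness of src to dst; this hook
		 * expands to KMSAN only, so KASAN/KCSAN stay out of it.
		 */
		instrument_memcpy_after(dst0, src0, size0, 0);
		pagefault_enable();
		return 0;
	Efault:
		pagefault_enable();
		return -EFAULT;
	}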

Instrumenting copy_to_kernel_nofault() may be helpful for catching
memory corruptions, but again only if the accessed memory was actually
faulted in.
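
If we wanted to pursue that, I think it would have to be a check of the
destination on the success path only - something like the following
tail of copy_to_kernel_nofault() (again a made-up sketch; dst0/size0
stand for copies of dst/size saved before the copy loops advance them):

		copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
		/*
		 * Only reached when no fault was encountered, i.e. dst
		 * really was faulted in - now a KASAN/KCSAN write check
		 * can flag e.g. a write into freed or OOB memory.
		 */
		instrument_write(dst0, size0);
		pagefault_enable();
		return 0;
	Efault:
		pagefault_enable();
		return -EFAULT;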



> Tested on x86_64 and arm64 with CONFIG_KASAN_SW_TAGS.
> On arm64 with CONFIG_KASAN_HW_TAGS, the kunit test currently fails;
> this needs more clarification and is disabled in the kunit test for now.
>
> Reported-by: Andrey Konovalov <andreyknvl@...il.com>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=210505
> Signed-off-by: Sabyrzhan Tasbolatov <snovitoll@...il.com>
> ---
>  mm/kasan/kasan_test.c | 31 +++++++++++++++++++++++++++++++
>  mm/maccess.c          |  8 ++++++--
>  2 files changed, 37 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 567d33b49..329d81518 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -1944,6 +1944,36 @@ static void match_all_mem_tag(struct kunit *test)
>         kfree(ptr);
>  }
>
> +static void copy_from_to_kernel_nofault_oob(struct kunit *test)
> +{
> +       char *ptr;
> +       char buf[128];
> +       size_t size = sizeof(buf);
> +
> +       /* Not detecting fails currently with HW_TAGS */
> +       KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_HW_TAGS);
> +
> +       ptr = kmalloc(size - KASAN_GRANULE_SIZE, GFP_KERNEL);
> +       KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
> +       OPTIMIZER_HIDE_VAR(ptr);
> +
> +       if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) {
> +               /* Check that the returned pointer is tagged. */
> +               KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
> +               KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
> +       }
> +
> +       KUNIT_EXPECT_KASAN_FAIL(test,
> +               copy_from_kernel_nofault(&buf[0], ptr, size));
> +       KUNIT_EXPECT_KASAN_FAIL(test,
> +               copy_from_kernel_nofault(ptr, &buf[0], size));
> +       KUNIT_EXPECT_KASAN_FAIL(test,
> +               copy_to_kernel_nofault(&buf[0], ptr, size));
> +       KUNIT_EXPECT_KASAN_FAIL(test,
> +               copy_to_kernel_nofault(ptr, &buf[0], size));
> +       kfree(ptr);
> +}
> +
>  static struct kunit_case kasan_kunit_test_cases[] = {
>         KUNIT_CASE(kmalloc_oob_right),
>         KUNIT_CASE(kmalloc_oob_left),
> @@ -2017,6 +2047,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
>         KUNIT_CASE(match_all_not_assigned),
>         KUNIT_CASE(match_all_ptr_tag),
>         KUNIT_CASE(match_all_mem_tag),
> +       KUNIT_CASE(copy_from_to_kernel_nofault_oob),
>         {}
>  };
>
> diff --git a/mm/maccess.c b/mm/maccess.c
> index 518a25667..2c4251df4 100644
> --- a/mm/maccess.c
> +++ b/mm/maccess.c
> @@ -15,7 +15,7 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
>
>  #define copy_from_kernel_nofault_loop(dst, src, len, type, err_label)  \
>         while (len >= sizeof(type)) {                                   \
> -               __get_kernel_nofault(dst, src, type, err_label);                \
> +               __get_kernel_nofault(dst, src, type, err_label);        \
>                 dst += sizeof(type);                                    \
>                 src += sizeof(type);                                    \
>                 len -= sizeof(type);                                    \
> @@ -32,6 +32,7 @@ long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
>                 return -ERANGE;
>
>         pagefault_disable();
> +       instrument_memcpy_before(dst, src, size);
>         if (!(align & 7))
>                 copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
>         if (!(align & 3))
> @@ -39,6 +40,7 @@ long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
>         if (!(align & 1))
>                 copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
>         copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
> +       instrument_memcpy_after(dst, src, size, 0);
>         pagefault_enable();
>         return 0;
>  Efault:
> @@ -49,7 +51,7 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
>
>  #define copy_to_kernel_nofault_loop(dst, src, len, type, err_label)    \
>         while (len >= sizeof(type)) {                                   \
> -               __put_kernel_nofault(dst, src, type, err_label);                \
> +               __put_kernel_nofault(dst, src, type, err_label);        \
>                 dst += sizeof(type);                                    \
>                 src += sizeof(type);                                    \
>                 len -= sizeof(type);                                    \
> @@ -63,6 +65,7 @@ long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
>                 align = (unsigned long)dst | (unsigned long)src;
>
>         pagefault_disable();
> +       instrument_memcpy_before(dst, src, size);
>         if (!(align & 7))
>                 copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
>         if (!(align & 3))
> @@ -70,6 +73,7 @@ long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
>         if (!(align & 1))
>                 copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
>         copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
> +       instrument_memcpy_after(dst, src, size, 0);
>         pagefault_enable();
>         return 0;
>  Efault:
> --
> 2.34.1
>
