Date:   Wed, 25 Sep 2019 10:48:31 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Aleksa Sarai <cyphar@...har.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Christian Brauner <christian@...uner.io>,
        Rasmus Villemoes <linux@...musvillemoes.dk>,
        Al Viro <viro@...iv.linux.org.uk>,
        GNU C Library <libc-alpha@...rceware.org>,
        Linux API <linux-api@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1 1/4] lib: introduce copy_struct_from_user() helper

On Wed, Sep 25, 2019 at 10:21 AM Aleksa Sarai <cyphar@...har.com> wrote:
>
> Just to make sure I understand: would the following diff solve the
> problem? If so, I'll apply it and re-send in a few hours.

Actually, looking at it more, it's still buggy.

That final "size smaller than unsigned long" doesn't correctly handle
the case of (say) a single byte in the middle of a 8-byte word.

So you need to do something like this:

int is_zeroed_user(const void __user *from, size_t size)
{
        unsigned long val, mask, align;

        if (unlikely(!size))
                return true;

        if (!user_access_begin(from, size))
                return -EFAULT;

        /* Round @from down to a word boundary and grow @size to match */
        align = (uintptr_t) from % sizeof(unsigned long);
        from -= align;
        size += align;

        /* Ignore the @align bytes before the real start of the buffer */
        mask = ~aligned_byte_mask(align);

        while (size >= sizeof(unsigned long)) {
                unsafe_get_user(val, (unsigned long __user *) from, err_fault);
                val &= mask;
                if (unlikely(val))
                        goto done;
                mask = ~0ul;
                from += sizeof(unsigned long);
                size -= sizeof(unsigned long);
        }

        if (size) {
                /* (@from + @size) is unaligned: also ignore bytes past the end */
                unsafe_get_user(val, (unsigned long __user *) from, err_fault);
                mask &= aligned_byte_mask(size);
                val &= mask;
        }

done:
        user_access_end();
        return (val == 0);
err_fault:
        user_access_end();
        return -EFAULT;
}

Note how "mask" carries over from the very beginning all the way to
the end, and "align" itself is no longer used once "mask" has been
calculated.

That's required because of, say, a 2-byte read at offset 5. You end up
with "align=5, size=7" at the beginning, and "mask" needs to be
0x00ffff0000000000 (on little-endian) for that final access.
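
To make that concrete, here is a quick user-space check (not kernel
code) of how the two masks combine for exactly that case, assuming the
little-endian definition of aligned_byte_mask(), i.e.
"(1ul << 8*(n)) - 1", and a 64-bit unsigned long:

#include <assert.h>
#include <stdio.h>

#define aligned_byte_mask(n) ((1ul << 8*(n)) - 1)

int main(void)
{
        /* 2-byte read at offset 5: align = 5, size becomes 5 + 2 = 7 */
        unsigned long mask = ~aligned_byte_mask(5);    /* drop bytes 0..4 */

        mask &= aligned_byte_mask(7);                  /* drop byte 7 */
        assert(mask == 0x00ffff0000000000ul);          /* bytes 5 and 6 remain */
        printf("mask = %#lx\n", mask);
        return 0;
}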

Anyway, I checked, and the above seems to generate OK code quality
too. Sadly, "unsafe_get_user()" cannot use "asm goto" because of a gcc
limitation (no "asm goto" with outputs), so it's not _perfect_, but
that's literally a compiler limitation.

But I didn't actually _test_ the end result. You should probably
verify that it gets the right behavior exactly for those interesting
cases where we mask both the beginning and the end.
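
One way to do that without a kernel is a small user-space harness that
runs the same masking logic over an ordinary buffer and compares it
against a plain byte loop, for every combination of offset, length and
position of a single nonzero byte. The sketch below again assumes the
little-endian aligned_byte_mask() definition; is_zeroed(),
is_zeroed_ref() and the memcpy() stand-in for unsafe_get_user() are
made up for the test:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define aligned_byte_mask(n) ((1ul << 8*(n)) - 1)

/* Same algorithm as is_zeroed_user() above, but over ordinary memory. */
static int is_zeroed(const void *from, size_t size)
{
        unsigned long val = 0, mask, align;

        if (!size)
                return 1;

        align = (uintptr_t) from % sizeof(unsigned long);
        from = (const char *) from - align;
        size += align;

        mask = ~aligned_byte_mask(align);

        while (size >= sizeof(unsigned long)) {
                memcpy(&val, from, sizeof(val)); /* stand-in for unsafe_get_user() */
                val &= mask;
                if (val)
                        return 0;
                mask = ~0ul;
                from = (const char *) from + sizeof(unsigned long);
                size -= sizeof(unsigned long);
        }

        if (size) {
                memcpy(&val, from, sizeof(val));
                mask &= aligned_byte_mask(size);
                val &= mask;
        }
        return val == 0;
}

/* Reference: trivial byte-by-byte check. */
static int is_zeroed_ref(const unsigned char *p, size_t size)
{
        while (size--)
                if (*p++)
                        return 0;
        return 1;
}

int main(void)
{
        static unsigned long storage[16];       /* word-aligned scratch buffer */
        unsigned char *buf = (unsigned char *) storage;

        for (size_t off = 0; off < 16; off++) {
                for (size_t len = 1; len < 24; len++) {
                        /* one nonzero byte before, inside or after the range */
                        for (size_t dirty = 0; dirty < off + len + 8; dirty++) {
                                memset(storage, 0, sizeof(storage));
                                buf[dirty] = 0xff;
                                assert(is_zeroed(buf + off, len) ==
                                       is_zeroed_ref(buf + off, len));
                        }
                }
        }
        printf("all cases match\n");
        return 0;
}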

            Linus
