Message-ID: <20201113191751.rwgv2gyw5dblhe3j@ast-mbp>
Date:   Fri, 13 Nov 2020 11:17:51 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Daniel Xu <dxu@...uu.xyz>, bpf <bpf@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>, kernel-team@...com
Subject: Re: [PATCH bpf v5 1/2] lib/strncpy_from_user.c: Don't overcopy bytes
 after NUL terminator

On Fri, Nov 13, 2020 at 10:08:02AM -0800, Linus Torvalds wrote:
> On Fri, Nov 13, 2020 at 9:03 AM Alexei Starovoitov
> <alexei.starovoitov@...il.com> wrote:
> >
> > Linus,
> > I think you might have an opinion about it.
> > Please see commit log for the reason we need this fix.
> 
> Why is BPF doing this?
> 
> The thing is, if you care about the data after the strncpy, you should
> be clearing it yourself.
> 
> The kernel "strncpy_from_user()" is *NOT* the same as "strncpy()",
> despite the name. It never has been, and it never will be. Just the
> return value being different should have given it away.
> 
> In particular, "strncpy()" is documented to zero-pad the end of the
> area. "strncpy_from_user()", in contrast, is documented to NOT do
> that. You cannot - and must not - depend on it.
> 
> If you want the zero-padding, you need to do it yourself. We're not
> slowing down strncpy_from_user() because you want it, because NOBODY
> ELSE cares, and you're depending on semantics that do not exist, and
> have never existed.
> 
> So if you want padding, you do something like
> 
>    long strncpy_from_user_pad(char *dst, const char __user *src, long count)
>    {
>          long res = strncpy_from_user(dst, src, count);
>
>          if (res >= 0)
>                 memset(dst + res, 0, count - res);
>          return res;
>    }
> 
> because BPF is doing things wrong as-is, and the problem is very much
> that BPF is relying on undefined data *after* the string.
> 
> And again - we're not slowing down the good cases just because BPF
> depends on bad behavior.

You misunderstood.
The BPF side does not depend on zero padding.
The destination buffer was already initialized with zeros before the call.
What BPF didn't expect was strncpy_from_user() copying extra garbage after the NUL byte.
I bet other kernel routines would be equally surprised by such behavior.
Hence I still think the fix belongs in this function.
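
Roughly, the pattern on the BPF side looks like this (a simplified
sketch with made-up names, not the actual BPF helper code):

    #include <linux/string.h>
    #include <linux/uaccess.h>

    /* Simplified sketch; the real BPF helpers are more involved. */
    static long copy_user_name(char *dst, const char __user *src, long size)
    {
            long ret;

            /* the caller zeroed the buffer and expects the tail to stay zero */
            memset(dst, 0, size);
            ret = strncpy_from_user(dst, src, size);
            /*
             * Surprise: the word-at-a-time fast path stores whole longs, so
             * up to sizeof(long) - 1 junk bytes from userspace can land after
             * the NUL terminator and clobber the zeros above.
             */
            return ret;
    }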
I think the performance degradation is exactly that: theoretical.
The v4 approach preserved performance; it wasn't switching to the byte_at_a_time path:
https://patchwork.kernel.org/project/netdevbpf/patch/4ff12d0c19de63e7172d25922adfb83ae7c8691f.1604620776.git.dxu@dxuuu.xyz/
but it caused KASAN splats, since other kernel users of strncpy_from_user()
don't initialize the destination array the way BPF does.
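
The fix under discussion keeps the word-at-a-time loop and, roughly,
masks out the bytes after the terminator in the final word before
storing it. A standalone demo of that masking idea (plain C,
little-endian assumed; the kernel does this with its word-at-a-time
helpers, not a byte loop):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Zero every byte after the first NUL in a word (little-endian). */
    static uint64_t mask_after_nul(uint64_t word)
    {
            uint64_t out = 0;
            unsigned int i;

            for (i = 0; i < sizeof(word); i++) {
                    uint64_t byte = (word >> (8 * i)) & 0xff;

                    out |= byte << (8 * i);
                    if (byte == 0)          /* keep the NUL, drop the rest */
                            break;
            }
            return out;
    }

    int main(void)
    {
            /* "hi\0" followed by junk bytes, packed into one word */
            unsigned char src[8] = { 'h', 'i', 0, 'J', 'U', 'N', 'K', '!' };
            uint64_t w;

            memcpy(&w, src, sizeof(w));
            printf("raw:    %#018llx\nmasked: %#018llx\n",
                   (unsigned long long)w,
                   (unsigned long long)mask_after_nul(w));
            return 0;
    }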
