Message-ID: <20191008201616.GW26530@ZenIV.linux.org.uk>
Date: Tue, 8 Oct 2019 21:16:16 +0100
From: Al Viro <viro@...iv.linux.org.uk>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Guenter Roeck <linux@...ck-us.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH] Convert filldir[64]() from __put_user() to
unsafe_put_user()
On Tue, Oct 08, 2019 at 08:58:58PM +0100, Al Viro wrote:
> That's powerpc. And while the constant-sized bits are probably pretty
> useless there as well, note the allow_read_from_user()/prevent_read_from_user()
> part. Looks suspiciously similar to user_access_begin()/user_access_end()...
>
> The difference is, they have separate "for read" and "for write" primitives
> and they want the range in their user_access_end() analogue. Separating
> the read and write isn't a problem for callers (we want them close to
> the actual memory accesses). Passing the range to user_access_end() just
> might be tolerable, unless it makes you throw up...
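To make the "pass the range" variant concrete, the caller side would look
something like this (just a sketch - the user_write_access_begin/_end names
are invented for the example, and the ppc mapping to prevent_write_to_user()
is from memory, not a patch):

	if (!user_write_access_begin(uaddr, sizeof(u32)))
		return -EFAULT;
	unsafe_put_user(val, uaddr, efault);
	user_write_access_end(uaddr, sizeof(u32));	/* range repeated here */
	return 0;
efault:
	user_write_access_end(uaddr, sizeof(u32));
	return -EFAULT;

On ppc user_write_access_end() would map to prevent_write_to_user(ptr, len);
on x86 it would ignore the arguments and degenerate to plain CLAC.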
BTW, another related cleanup is futex_atomic_op_inuser() and
arch_futex_atomic_op_inuser(). In the former we have
	if (!access_ok(uaddr, sizeof(u32)))
		return -EFAULT;

	ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
	if (ret)
		return ret;
and in the latter we've got STAC/CLAC pairs stuck into the inlined bits
on x86, as well as allow_write_to_user(uaddr, sizeof(*uaddr)) on
ppc...
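To spell out what "stuck into the inlined bits" means, the arch helpers
currently bracket the access themselves, roughly like this (paraphrasing
from memory; futex_op_on() is just a stand-in name for the per-op asm):

	/* x86: each inlined op carries its own pair */
	stac();
	ret = futex_op_on(uaddr, oparg);
	clac();

	/* ppc: same idea, with the range-taking primitives */
	allow_write_to_user(uaddr, sizeof(*uaddr));
	ret = futex_op_on(uaddr, oparg);
	prevent_write_to_user(uaddr, sizeof(*uaddr));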
I don't see anything in the x86 one that objtool would've barfed on if we
pulled STAC/CLAC out and turned access_ok() into user_access_begin(),
with a matching user_access_end() right after the call of
arch_futex_atomic_op_inuser().  Everything is inlined there and
no scary memory accesses would get into the scope (well, we do
have
	if (!ret)
		*oval = oldval;
at the very end of arch_futex_atomic_op_inuser() there, but oval
is the address of a local variable in the sole caller; if we run
with the kernel stack on a ring 3 page, we are deeply fucked *and*
wouldn't have survived that far into futex_atomic_op_inuser() anyway ;-)
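IOW, in futex_atomic_op_inuser() the fragment above would turn into
something like this (sketch only, untested):

	if (!user_access_begin(uaddr, sizeof(u32)))
		return -EFAULT;

	ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
	user_access_end();
	if (ret)
		return ret;

with the STAC/CLAC (resp. allow_write_to_user()/prevent_write_to_user())
pairs dropped from the arch helpers themselves.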