[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20191015194012.GO26530@ZenIV.linux.org.uk>
Date: Tue, 15 Oct 2019 20:40:12 +0100
From: Al Viro <viro@...iv.linux.org.uk>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Guenter Roeck <linux@...ck-us.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Darren Hart <dvhart@...radead.org>,
linux-arch <linux-arch@...r.kernel.org>
Subject: Re: [PATCH] Convert filldir[64]() from __put_user() to
unsafe_put_user()
On Tue, Oct 15, 2019 at 12:00:34PM -0700, Linus Torvalds wrote:
> On Tue, Oct 15, 2019 at 11:08 AM Al Viro <viro@...iv.linux.org.uk> wrote:
> >
> > Another question: right now we have
> > if (!access_ok(uaddr, sizeof(u32)))
> > return -EFAULT;
> >
> > ret = arch_futex_atomic_op_inuser(op, oparg, &oldval, uaddr);
> > if (ret)
> > return ret;
> > in kernel/futex.c. Would there be any objections to moving access_ok()
> > inside the instances and moving pagefault_disable()/pagefault_enable() outside?
>
> I think we should remove all the "atomic" versions, and just make the
> rule be that if you want atomic, you surround it with
> pagefault_disable()/pagefault_enable().
Umm... I thought about that, but ended up with "it documents the intent" -
pagefault_disable() might be implicit (e.g. done by kmap_atomic()) or
several levels up the call chain. Not sure.
> That covers not just the futex ops (where "atomic" is actually
> somewhat ambiguous - the ops themselves are atomic too, so the naming
> might stay, although arguably the "futex" part makes that pointless
> too), but also copy_to_user_inatomic() and the powerpc version of
> __get_user_inatomic().
Eh? copy_to_user_inatomic() doesn't exist; __copy_to_user_inatomic()
does, but...
arch/mips/kernel/unaligned.c:1307: res = __copy_to_user_inatomic(addr, fpr, sizeof(*fpr));
drivers/gpu/drm/i915/i915_gem.c:313: unwritten = __copy_to_user_inatomic(user_data,
lib/test_kasan.c:510: unused = __copy_to_user_inatomic(usermem, kmem, size + 1);
mm/maccess.c:98: ret = __copy_to_user_inatomic((__force void __user *)dst, src, size);
these are all callers it has left anywhere and I'm certainly going to kill it.
Now, __copy_from_user_inatomic() has a lot more callers left... Frankly,
the messier part of API is the nocache side of things. Consider e.g. this:
/* platform specific: cacheless copy */
static void cacheless_memcpy(void *dst, void *src, size_t n)
{
/*
* Use the only available X64 cacheless copy. Add a __user cast
* to quiet sparse. The src agument is already in the kernel so
* there are no security issues. The extra fault recovery machinery
* is not invoked.
*/
__copy_user_nocache(dst, (void __user *)src, n, 0);
}
or this
static void ntb_memcpy_tx(struct ntb_queue_entry *entry, void __iomem *offset)
{
#ifdef ARCH_HAS_NOCACHE_UACCESS
/*
* Using non-temporal mov to improve performance on non-cached
* writes, even though we aren't actually copying from user space.
*/
__copy_from_user_inatomic_nocache(offset, entry->buf, entry->len);
#else
memcpy_toio(offset, entry->buf, entry->len);
#endif
/* Ensure that the data is fully copied out before setting the flags */
wmb();
ntb_tx_copy_callback(entry, NULL);
}
"user" part is bollocks in both cases; moreover, I really wonder about that
ifdef in ntb one - ARCH_HAS_NOCACHE_UACCESS is x86-only *at* *the* *moment*
and it just so happens that ..._toio() doesn't require anything special on
x86. Have e.g. arm grow nocache stuff and the things will suddenly break,
won't they?
Powered by blists - more mailing lists