Message-ID: <alpine.DEB.2.11.1510090946200.6097@nanos>
Date: Fri, 9 Oct 2015 10:06:41 +0100 (IST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Jaccon Bastiaansen <jaccon.bastiaansen@...il.com>
cc: x86@...nel.org, mingo@...hat.com, "H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
h.zuidam@...puter.org, stable@...r.kernel.org
Subject: Re: [RFC]: Possible race condition in kernel futex code
On Mon, 5 Oct 2015, Jaccon Bastiaansen wrote:
> We did some tests with different compilers, kernel versions and kernel
> configs, with the following results:
>
> Linux 3.12.48, x86_64_defconfig, GCC 4.6.1 :
> copy_user_generic_unrolled being used, so race condition possible
> Linux 3.12.48, x86_64_defconfig, GCC 4.9.1 :
> copy_user_generic_unrolled being used, so race condition possible
> Linux 4.2.3, x86_64_defconfig, GCC 4.6.1 : 32 bit read being used, no
> race condition
> Linux 4.2.3, x86_64_defconfig, GCC 4.9.1 : 32 bit read being used, no
> race condition
>
>
> Our idea to fix this problem is use an explicit 32 bit read in
> get_futex_value_locked() instead of using the generic function
> copy_from_user_inatomic() and hoping the compiler uses an atomic
> access and the right access size.
You cannot use an explicit 32-bit read. We need an access which handles
the fault gracefully.

In current mainline this is done properly:
    ret = __copy_from_user_inatomic(dst, src, size = sizeof(u32))
      __copy_from_user_nocheck(dst, src, size)
	if (!__builtin_constant_p(size))
		return copy_user_generic(dst, (__force void *)src, size);

size is constant, so we end up in the switch case:

	switch (size) {
	case 4:
		__get_user_asm(*(u32 *)dst, (u32 __user *)src,
			       ret, "l", "k", "=r", 4);
		return ret;
	....
In 3.12 this is different:

    __copy_from_user_inatomic()
      copy_user_generic()
        copy_user_generic_unrolled()
So this is only an issue for kernel versions < 3.13. It was fixed with
ff47ab4ff3cd: Add 1/2/4/8 byte optimization to 64bit __copy_{from,to}_user_inatomic
but nobody noticed that the race you described can happen, so it was
never backported to the stable kernels.
@stable: Can you please pick up ff47ab4ff3cd plus
df90ca969035d x86, sparse: Do not force removal of __user when calling copy_to/from_user_nocheck()
for stable kernels <= 3.12?
If that's too much churn, then I can come up with an explicit fix
for this. Let me know.
Thanks,
tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/