Message-ID: <20151018002317.GH18971@kroah.com>
Date:	Sat, 17 Oct 2015 17:23:17 -0700
From:	Greg KH <greg@...ah.com>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	Jaccon Bastiaansen <jaccon.bastiaansen@...il.com>, x86@...nel.org,
	mingo@...hat.com, "H. Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <peterz@...radead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	h.zuidam@...puter.org, stable@...r.kernel.org
Subject: Re: [RFC]: Possible race condition in kernel futex code

On Fri, Oct 09, 2015 at 10:06:41AM +0100, Thomas Gleixner wrote:
> On Mon, 5 Oct 2015, Jaccon Bastiaansen wrote:
> > We did some tests with different compilers, kernel versions and kernel
> > configs, with the following results:
> > 
> > Linux 3.12.48, x86_64_defconfig, GCC 4.6.1:
> >     copy_user_generic_unrolled being used, so race condition possible
> > Linux 3.12.48, x86_64_defconfig, GCC 4.9.1:
> >     copy_user_generic_unrolled being used, so race condition possible
> > Linux 4.2.3, x86_64_defconfig, GCC 4.6.1:
> >     32-bit read being used, no race condition
> > Linux 4.2.3, x86_64_defconfig, GCC 4.9.1:
> >     32-bit read being used, no race condition
> > 
> > 
> > Our idea to fix this problem is to use an explicit 32-bit read in
> > get_futex_value_locked() instead of using the generic function
> > copy_from_user_inatomic() and hoping that the compiler uses an atomic
> > access and the right access size.
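
The concern above is read tearing: if the 32-bit futex word is copied with
anything other than a single naturally aligned 4-byte access, a concurrent
writer can be observed half-updated. The userspace sketch below is not kernel
code; it merely emulates a byte-wise copy_user_generic()-style copy, and it
assumes that an aligned volatile 32-bit store compiles to a single instruction
on x86, so the writer itself never tears.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the user-space futex word; naturally aligned 32-bit object. */
static volatile uint32_t futex_word;

/*
 * Writer flips the word between two "consistent" values.  On x86 an
 * aligned volatile 32-bit store is a single movl, i.e. not torn.
 */
static void *writer(void *arg)
{
	(void)arg;
	for (;;) {
		futex_word = 0x00000000u;
		futex_word = 0xffffffffu;
	}
	return NULL;
}

/*
 * Byte-wise copy of the word, emulating a copy routine that is not
 * guaranteed to use one 4-byte access for a 4-byte copy.
 */
static uint32_t bytewise_read(const volatile uint32_t *p)
{
	const volatile unsigned char *src = (const volatile unsigned char *)p;
	uint32_t val;
	unsigned char *dst = (unsigned char *)&val;
	int i;

	for (i = 0; i < 4; i++)
		dst[i] = src[i];
	return val;
}

int main(void)
{
	pthread_t tid;
	unsigned long i;

	pthread_create(&tid, NULL, writer, NULL);

	for (i = 0; i < 100000000UL; i++) {
		uint32_t v = bytewise_read(&futex_word);

		/* Only 0x00000000 and 0xffffffff are ever stored; anything
		 * else means the byte-wise read observed a torn value. */
		if (v != 0x00000000u && v != 0xffffffffu) {
			printf("torn read 0x%08x after %lu iterations\n",
			       (unsigned)v, i);
			return 1;
		}
	}
	printf("no torn read observed (timing dependent)\n");
	return 0;
}

Built with something like "cc -O2 -pthread", a mixed value such as 0x00ffffff
typically shows up quickly; a single 32-bit load can never observe one.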
> 
> You cannot use an explicit 32-bit read. We need an access which handles
> the fault gracefully.
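
For context, get_futex_value_locked() in kernel/futex.c of that era looks
roughly like the sketch below (paraphrased, not verbatim): the read runs with
page faults disabled because the futex hash bucket lock is held, so a fault
cannot be handled inline and must come back as -EFAULT for the caller to drop
the lock, fault the page in, and retry. A bare 32-bit load of the user address
would oops on a fault instead of returning an error.

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

/* Paraphrased sketch of kernel/futex.c (around v3.12..v4.2). */
static int get_futex_value_locked(u32 *dest, u32 __user *from)
{
	int ret;

	/* Faults must not be serviced here: the hash bucket lock is held. */
	pagefault_disable();
	ret = __copy_from_user_inatomic(dest, from, sizeof(u32));
	pagefault_enable();

	return ret ? -EFAULT : 0;
}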
> 
> In current mainline this is done properly:
> 
> ret = __copy_from_user_inatomic(dst, src, size = sizeof(u32))
> 
>     __copy_from_user_nocheck(dst, src, size)
> 
>         if (!__builtin_constant_p(size))
>                 return copy_user_generic(dst, (__force void *)src, size);
> 
>         size is constant, so we end up in the switch case:
> 
>         switch (size) {
>         case 4:
>                 __get_user_asm(*(u32 *)dst, (u32 __user *)src,
>                                ret, "l", "k", "=r", 4);
>                 return ret;
>         ...
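
In other words, for the constant size-4 case the whole copy collapses into one
exception-guarded 32-bit load. The sketch below is a simplified illustration of
what __get_user_asm() boils down to for this case; read_user_u32() is a
hypothetical helper name and the real macro differs in detail:

#include <linux/types.h>
#include <linux/errno.h>
#include <asm/asm.h>		/* _ASM_EXTABLE */

/*
 * Simplified sketch, not the literal __get_user_asm() macro: a single
 * 32-bit mov with an exception-table fixup, so the access is atomic
 * and a fault is turned into -EFAULT rather than an oops.
 */
static inline int read_user_u32(u32 *dst, const u32 __user *src)
{
	int err = 0;
	u32 val;

	asm volatile("1:	movl %2, %1\n"
		     "2:\n"
		     ".section .fixup,\"ax\"\n"
		     "3:	movl %3, %0\n"
		     "	jmp 2b\n"
		     ".previous\n"
		     _ASM_EXTABLE(1b, 3b)
		     : "+r" (err), "=r" (val)
		     : "m" (*(const u32 __force *)src), "i" (-EFAULT));

	*dst = err ? 0 : val;
	return err;
}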
> 
> In 3.12 this is different:
> 
>     __copy_from_user_inatomic()
>         copy_user_generic()
>             copy_user_generic_unrolled()
> 
> So this is only an issue for kernel versions < 3.13. It was fixed with
> 
> ff47ab4ff3cd: Add 1/2/4/8 byte optimization to 64bit __copy_{from,to}_user_inatomic
> 
> but nobody noticed that the race you described can happen, so it was
> never backported to the stable kernels.
> 
> @stable: Can you please pick up ff47ab4ff3cd plus 
> 
> df90ca969035d x86, sparse: Do not force removal of __user when calling copy_to/from_user_nocheck()
> 
> for stable kernels <= 3.12?
> 
> If that's too much churn, then I can come up with an explicit fix
> for this. Let me know.

Now applied to 3.10-stable, thanks.

greg k-h
