Open Source and information security mailing list archives
 
Date:	Tue, 02 Aug 2016 10:47:52 -0700 (PDT)
From:	David Miller <davem@...emloft.net>
To:	mpatocka@...hat.com
Cc:	sparclinux@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] sparc: fix incorrect value returned by
 copy_from_user_fixup

From: Mikulas Patocka <mpatocka@...hat.com>
Date: Tue, 2 Aug 2016 08:20:15 -0400 (EDT)

> On Mon, 1 Aug 2016, David Miller wrote:
> 
>> From: Mikulas Patocka <mpatocka@...hat.com>
>> Date: Sun, 31 Jul 2016 19:50:57 -0400 (EDT)
>> 
>> > @@ -18,9 +25,9 @@
>> >   * of the cases, just fix things up simply here.
>> >   */
>> >  
>> > -static unsigned long compute_size(unsigned long start, unsigned long size, unsigned long *offset)
>> > +static unsigned long compute_size(unsigned long start, unsigned long size, unsigned long *offset, unsigned long prefetch)
>> >  {
>> > -	unsigned long fault_addr = current_thread_info()->fault_address;
>> > +	unsigned long fault_addr = current_thread_info()->fault_address - prefetch;
>> >  	unsigned long end = start + size;
>> >  
>> >  	if (fault_addr < start || fault_addr >= end) {
>> > @@ -36,7 +43,7 @@ unsigned long copy_from_user_fixup(void
>> >  {
>> >  	unsigned long offset;
>> >  
>> > -	size = compute_size((unsigned long) from, size, &offset);
>> > +	size = compute_size((unsigned long) from, size, &offset, COPY_FROM_USER_PREFETCH);
>> >  	if (likely(size))
>> >  		memset(to + offset, 0, size);
>> >  
>> 
>> I think this might cause a problem.  Assume we are not in one of those
>> prefetching loops and are just doing a byte at a time, and therefore
>> hit the fault exactly at the beginning of the missing page.
>> 
>> You will rewind 0x100 bytes and the caller will restart the copy at
>> "faulting address - 0x100".
>> 
>> If someone is using atomic user copies, and using the returned length
>> to determine which page in userspace needs to be faulted in, and
>> then restart the copy, then we will loop forever.
> 
> This isn't guaranteed on x86 either.
> 
> __copy_user_intel reads and writes 64 bytes in one loop iteration (and it
> prefetches the data for the next iteration with "movl 64(%4), %%eax"). If
> it fails, it reports the amount of remaining data at the start of the loop
> iteration, so the reported value may be up to 67 bytes lower than the
> fault location.

That's very interesting, let me do some research into this, as I was
pretty sure something like futexes or similar had some requirement in
this area.

Thanks.
