Message-ID: <528D0D61.1030902@linux.intel.com>
Date: Wed, 20 Nov 2013 11:28:33 -0800
From: "H. Peter Anvin" <hpa@...ux.intel.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>
CC: Peter Anvin <hpa@...or.com>, tglx@...utronix.de,
linux-tip-commits@...r.kernel.org, fenghua.yu@...el.com,
linux-kernel@...r.kernel.org
Subject: Re: [tip:x86/asm] x86-64, copy_user: Remove zero byte check before
copy user buffer.
On 11/16/2013 10:44 PM, Linus Torvalds wrote:
> So this doesn't do the 32-bit truncation in the error path of the
> generic string copy. Oversight?
>
> Linus
I looked at the code again, and it turns out to be a false alarm.
We *do* do 32-bit truncation in every path, still:
> ENTRY(copy_user_generic_string)
> CFI_STARTPROC
> ASM_STAC
> cmpl $8,%edx
> jb 2f /* less than 8 bytes, go to byte copy loop */
-> If we jump here, we will truncate at 2:
> ALIGN_DESTINATION
> movl %edx,%ecx
-> If we don't take the jb 2f, the 32-bit truncation happens here
(a 32-bit register write zero-extends into the full 64-bit register)...
> shrl $3,%ecx
> andl $7,%edx
> 1: rep
> movsq
> 2: movl %edx,%ecx
32-bit truncation here...
> 3: rep
> movsb
> xorl %eax,%eax
> ASM_CLAC
> ret
>
> .section .fixup,"ax"
> 11: lea (%rdx,%rcx,8),%rcx
> 12: movl %ecx,%edx /* ecx is zerorest also */
-> Even if %rdx+%rcx*8 > 2^32 we end up truncating at 12: -- not that it
matters, since both arguments are prototyped as "unsigned" and therefore
the C compiler is supposed to guarantee the upper 32 bits are ignored.
So I think Fenghua's patch is fine as-is.
-hpa