Message-ID: <CAHirt9jWrkBk3btjxe1VVW_XSZYkGJMpc0vM3_OP5ugqrR2G=w@mail.gmail.com>
Date: Mon, 23 Apr 2018 17:45:28 +0800
From: Heiher <r@....cc>
To: Matt Redfearn <matt.redfearn@...s.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-kernel@...r.kernel.org, stable@...r.kernel.org,
James Hogan <jhogan@...nel.org>,
Ralf Baechle <ralf@...ux-mips.org>, linux-mips@...ux-mips.org
Subject: Re: [PATCH 3.18 45/52] MIPS: memset.S: Fix clobber of v1 in last_fixup
Hi,
Now I understand, thank you for explaining.
On Mon, Apr 23, 2018 at 5:36 PM, Matt Redfearn <matt.redfearn@...s.com> wrote:
>
>
> On 23/04/18 08:16, Heiher wrote:
>>
>> Hi,
>>
>> IIRC, The v1 is a temporary register, value is not preserved across
>> function calls.
>
>
> v1 is conventionally used for a function return value and as such can be
> changed by called functions. However, bzero is called from inline assembly
> and v1 is not in the clobbers list
> https://elixir.bootlin.com/linux/v4.17-rc1/source/arch/mips/include/asm/uaccess.h#L652
> So the calling function does not expect that register to have been used and
> can legitimately expect its value to remain after the function call, which
> without this patch, it does not - as demonstrated by the test code.
>
> Thanks,
> Matt
>
>
>>
>> I don't see any functions that generated by compiler to restore values
>> of v1 after clobbered it.
>>
>> On Sun, Apr 22, 2018 at 9:54 PM, Greg Kroah-Hartman
>> <gregkh@...uxfoundation.org> wrote:
>>>
>>> 3.18-stable review patch. If anyone has any objections, please let me
>>> know.
>>>
>>> ------------------
>>>
>>> From: Matt Redfearn <matt.redfearn@...s.com>
>>>
>>> commit c96eebf07692e53bf4dd5987510d8b550e793598 upstream.
>>>
>>> The label .Llast_fixup\@ is jumped to on page fault within the final
>>> byte set loop of memset (on < MIPSR6 architectures). For some reason, in
>>> this fault handler, the v1 register is randomly set to a2 & STORMASK.
>>> This clobbers v1 for the calling function. This can be observed with the
>>> following test code:
>>>
>>> static int __init __attribute__((optimize("O0"))) test_clear_user(void)
>>> {
>>> 	register int t asm("v1");
>>> 	char *test;
>>> 	int j, k;
>>>
>>> 	pr_info("\n\n\nTesting clear_user\n");
>>> 	test = vmalloc(PAGE_SIZE);
>>>
>>> 	for (j = 256; j < 512; j++) {
>>> 		t = 0xa5a5a5a5;
>>> 		if ((k = clear_user(test + PAGE_SIZE - 256, j)) != j - 256) {
>>> 			pr_err("clear_user (%px %d) returned %d\n",
>>> 			       test + PAGE_SIZE - 256, j, k);
>>> 		}
>>> 		if (t != 0xa5a5a5a5) {
>>> 			pr_err("v1 was clobbered to 0x%x!\n", t);
>>> 		}
>>> 	}
>>>
>>> 	return 0;
>>> }
>>> late_initcall(test_clear_user);
>>>
>>> Which demonstrates that v1 is indeed clobbered (MIPS64):
>>>
>>> Testing clear_user
>>> v1 was clobbered to 0x1!
>>> v1 was clobbered to 0x2!
>>> v1 was clobbered to 0x3!
>>> v1 was clobbered to 0x4!
>>> v1 was clobbered to 0x5!
>>> v1 was clobbered to 0x6!
>>> v1 was clobbered to 0x7!
>>>
>>> Since the number of bytes that could not be set is already contained in
>>> a2, the andi placing a value in v1 is not necessary and actively
>>> harmful in clobbering v1.
>>>
>>> Reported-by: James Hogan <jhogan@...nel.org>
>>> Signed-off-by: Matt Redfearn <matt.redfearn@...s.com>
>>> Cc: Ralf Baechle <ralf@...ux-mips.org>
>>> Cc: linux-mips@...ux-mips.org
>>> Cc: stable@...r.kernel.org
>>> Patchwork: https://patchwork.linux-mips.org/patch/19109/
>>> Signed-off-by: James Hogan <jhogan@...nel.org>
>>> Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
>>>
>>> ---
>>> arch/mips/lib/memset.S | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> --- a/arch/mips/lib/memset.S
>>> +++ b/arch/mips/lib/memset.S
>>> @@ -210,7 +210,7 @@
>>>
>>>  .Llast_fixup\@:
>>>  	jr		ra
>>> -	 andi		v1, a2, STORMASK
>>> +	 nop
>>>
>>>  .Lsmall_fixup\@:
>>>  	PTR_SUBU	a2, t1, a0
>>>
>>>
>>>
>>
>>
>>
>
--
Best regards!
Hev
https://hev.cc