Message-ID: <c994091cdc9d42718769f584b7d4a134@AcuMS.aculab.com>
Date: Wed, 7 Oct 2020 08:23:14 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Tony Luck' <tony.luck@...el.com>, Borislav Petkov <bp@...en8.de>
CC: Youquan Song <youquan.song@...el.com>,
"x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v3 4/6] x86/mce: Avoid tail copy when machine check
terminated a copy from user
From: Tony Luck
> Sent: 06 October 2020 22:09
>
> In the page fault case it is ok to see if a few more unaligned bytes
> can be copied from the source address. Worst case is that the page fault
> will be triggered again.
>
> Machine checks are more serious. Just give up at the point where the
> main copy loop triggered the #MC and return from the copy code as if
> the copy succeeded. The machine check handler will use task_work_add() to
> make sure that the task is sent a SIGBUS.
Isn't that just plain wrong?
If the copy is reported as succeeding, the kernel code will use the 'old'
data that is in the buffer as if it had been read from userspace.
This could end up with kernel stack data being written to a file.
Even zeroing the rest of the kernel buffer is wrong.
IIRC the code that tried to maximise the copy has been removed,
so the 'slow' retry won't happen any more.
David
>
> Signed-off-by: Tony Luck <tony.luck@...el.com>
> ---
> arch/x86/lib/copy_user_64.S | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
> index 5b68e945bf65..77b9b2a3b5c8 100644
> --- a/arch/x86/lib/copy_user_64.S
> +++ b/arch/x86/lib/copy_user_64.S
> @@ -15,6 +15,7 @@
> #include <asm/asm.h>
> #include <asm/smap.h>
> #include <asm/export.h>
> +#include <asm/trapnr.h>
>
> .macro ALIGN_DESTINATION
> /* check for bad alignment of destination */
> @@ -221,6 +222,7 @@ EXPORT_SYMBOL(copy_user_enhanced_fast_string)
> * Try to copy last bytes and clear the rest if needed.
> * Since protection fault in copy_from/to_user is not a normal situation,
> * it is not necessary to optimize tail handling.
> + * Don't try to copy the tail if machine check happened
> *
> * Input:
> * rdi destination
> @@ -232,11 +234,24 @@ EXPORT_SYMBOL(copy_user_enhanced_fast_string)
> */
> SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail)
> movl %edx,%ecx
> + cmp $X86_TRAP_MC,%eax /* check if X86_TRAP_MC */
> + je 3f
> 1: rep movsb
> 2: mov %ecx,%eax
> ASM_CLAC
> ret
>
> + /*
> + * Return zero to pretend that this copy succeeded. This
> + * is counter-intuitive, but needed to prevent the code
> + * in lib/iov_iter.c from retrying and running back into
> + * the poison cache line again. The machine check handler
> + * will ensure that a SIGBUS is sent to the task.
> + */
> +3: xorl %eax,%eax
> + ASM_CLAC
> + ret
> +
> _ASM_EXTABLE_CPY(1b, 2b)
> SYM_CODE_END(.Lcopy_user_handle_tail)
>
> --
> 2.21.1
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)