Message-ID: <20190301125718.GS32534@hirez.programming.kicks-ass.net>
Date: Fri, 1 Mar 2019 13:57:18 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Thomas Gleixner <tglx@...utronix.de>, Peter Anvin <hpa@...or.com>,
Julien Thierry <julien.thierry@....com>,
Will Deacon <will.deacon@....com>,
Andy Lutomirski <luto@...capital.net>,
Ingo Molnar <mingo@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
James Morse <james.morse@....com>, valentin.schneider@....com,
Brian Gerst <brgerst@...il.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Andrew Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Denys Vlasenko <dvlasenk@...hat.com>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
Chris Wilson <chris@...is-wilson.co.uk>
Subject: Re: [PATCH 6/8] i915,uaccess: Fix redundant CLAC

On Fri, Mar 01, 2019 at 01:27:45PM +0100, Peter Zijlstra wrote:
> arch/x86/lib/usercopy_64.o: warning: objtool: .altinstr_replacement+0x30: redundant UACCESS disable
> The usercopy one is difficult; that's copy_user_handle_tail(). It is
> buggered though, because it lacks notrace and thus has a __fentry__
> call in it.
>
> Also, afaict all exception jumps into copy_user_handle_tail() will have
> AC=1, but the __{get,put}_user_nocheck() things do STAC/CLAC all over
> again.
>
> So what do we do? Annotate that we start with AC=1 and then immediately
> do the clac, and then let __{get,put}_user_nocheck() do their own thing?
> Or make it use the unsafe stuff?
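For what it's worth, the shape of the problem looks roughly like this. A
heavily simplified sketch of one iteration of the current C tail loop,
not the real __{get,put}_user_nocheck() expansion; the
__user_read_byte()/__user_write_byte() helpers are made up purely for
illustration:

/*
 * Simplified sketch of one iteration of copy_user_handle_tail()'s C
 * loop -- NOT the real uaccess macro expansion.  The two helpers are
 * invented for illustration; the point is only the EFLAGS.AC toggling.
 */
static __always_inline int tail_copy_one_byte(char *to, const char *from)
{
	char c;

	stac();					/* __get_user_nocheck() */
	if (__user_read_byte(from, &c))		/* made-up helper; may fault */
		goto fault;
	clac();

	stac();					/* __put_user_nocheck() */
	if (__user_write_byte(to, c))		/* made-up helper; may fault */
		goto fault;
	clac();

	return 0;

fault:
	clac();
	return -EFAULT;
}

As far as objtool can tell, AC=0 on entry and the stac()/clac() pairs
above all balance, so the trailing clac() at the end of the function
looks like a redundant UACCESS disable; at runtime the exception paths
enter with AC=1, which is the only reason that trailing clac() is there
at all. Any fix has to reconcile those two views.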
Or.. we move the thing to assembly. Of course, I suck at (writing) asm,
so the below is probably broken in various ways.
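In C terms, the asm below is supposed to behave roughly like this. An
illustrative model only: the function name is made up, and the fault
handling can only be described in a comment, since in the real thing it
is done by the exception table entry.

/*
 * Rough C model of the new asm copy_user_handle_tail() -- illustrative
 * only.  In the asm, the _ASM_EXTABLE_UA entry makes a fault inside
 * the 'rep movsb' resume right after the copy with the not-yet-copied
 * byte count still in %rcx; this model pretends the loop simply stops
 * at the faulting byte.
 */
unsigned long copy_user_handle_tail_model(char *to, char *from,
					  unsigned len)
{
	unsigned long remaining = len;

	while (remaining) {
		*to++ = *from++;	/* may fault; the fixup stops the copy */
		remaining--;
	}

	clac();			/* exception paths entered with AC=1 */
	return remaining;	/* 0 on success, bytes left on a fault */
}

Moving it to asm also gets rid of the __fentry__ call, since there is no
C function left for the compiler to instrument.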
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -208,9 +208,6 @@ __copy_from_user_flushcache(void *dst, c
}
unsigned long
-copy_user_handle_tail(char *to, char *from, unsigned len);
-
-unsigned long
mcsafe_handle_tail(char *to, char *from, unsigned len);
#endif /* _ASM_X86_UACCESS_64_H */
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -194,6 +194,29 @@ ENDPROC(copy_user_enhanced_fast_string)
EXPORT_SYMBOL(copy_user_enhanced_fast_string)
/*
+ * Try to copy last bytes and clear the rest if needed.
+ * Since protection fault in copy_from/to_user is not a normal situation,
+ * it is not necessary to optimize tail handling.
+ *
+ * Input:
+ * rdi destination
+ * rsi source
+ * rdx count
+ *
+ * Output:
+ * eax uncopied bytes or 0 if successful.
+ */
+ENTRY(copy_user_handle_tail)
+ movl %edx,%ecx
+1: rep movsb
+2: mov %ecx,%eax
+ ASM_CLAC
+ ret
+
+ _ASM_EXTABLE_UA(1b, 2b)
+END(copy_user_handle_tail)
+
+/*
* copy_user_nocache - Uncached memory copy with exception handling
* This will force destination out of cache for more performance.
*
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -55,26 +55,6 @@ unsigned long clear_user(void __user *to
EXPORT_SYMBOL(clear_user);
/*
- * Try to copy last bytes and clear the rest if needed.
- * Since protection fault in copy_from/to_user is not a normal situation,
- * it is not necessary to optimize tail handling.
- */
-__visible unsigned long
-copy_user_handle_tail(char *to, char *from, unsigned len)
-{
- for (; len; --len, to++) {
- char c;
-
- if (__get_user_nocheck(c, from++, sizeof(char)))
- break;
- if (__put_user_nocheck(c, to, sizeof(char)))
- break;
- }
- clac();
- return len;
-}
-
-/*
* Similar to copy_user_handle_tail, probe for the write fault point,
* but reuse __memcpy_mcsafe in case a new read error is encountered.
* clac() is handled in _copy_to_iter_mcsafe().