Message-ID: <qc5o7g2cz2lmu2ac3bielkhr6novbjhx6k7xxzijag3fcvq4qq@fl76ynhguliw>
Date: Tue, 29 Oct 2024 10:13:59 +0200
From: "Kirill A . Shutemov" <kirill@...temov.name>
To: Josh Poimboeuf <jpoimboe@...nel.org>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>, Borislav Petkov <bp@...en8.de>,
Peter Zijlstra <peterz@...radead.org>, Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Waiman Long <longman@...hat.com>, Dave Hansen <dave.hansen@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>, Linus Torvalds <torvalds@...ux-foundation.org>,
Michael Ellerman <mpe@...erman.id.au>, linuxppc-dev@...ts.ozlabs.org,
Andrew Cooper <andrew.cooper3@...rix.com>, Mark Rutland <mark.rutland@....com>
Subject: Re: [PATCH v3 1/6] x86/uaccess: Avoid barrier_nospec() in 64-bit
copy_from_user()
On Mon, Oct 28, 2024 at 06:56:14PM -0700, Josh Poimboeuf wrote:
> The barrier_nospec() in 64-bit copy_from_user() is slow. Instead, use
> pointer masking to force the user pointer to all 1's if the access_ok()
> check mispredicted true for an invalid address.
>
> The kernel test robot reports a 2.6% improvement in the per_thread_ops
> benchmark (see link below).
>
> To avoid regressing powerpc and 32-bit x86, move their barrier_nospec()
> calls to their respective raw_copy_from_user() implementations so
> there's no functional change there.
>
> Note that for safety on some AMD CPUs, this relies on recent commit
> 86e6b1547b3d ("x86: fix user address masking non-canonical speculation
> issue").
>
> Link: https://lore.kernel.org/202410281344.d02c72a2-oliver.sang@intel.com
> Signed-off-by: Josh Poimboeuf <jpoimboe@...nel.org>
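
For anyone following along, here is a minimal branchless sketch of the
masking idea. It is illustrative only: the USER_PTR_MAX value and the
mask_user_ptr() name are assumptions, and the actual kernel code uses
cmp+sbb inline asm so the compiler cannot reintroduce a predictable
branch here.

	#include <stdint.h>

	/* Assumed upper bound on valid user addresses (x86-64 example). */
	#define USER_PTR_MAX	0x00007fffffffffffUL

	static inline uintptr_t mask_user_ptr(uintptr_t ptr)
	{
		/* 0 for a valid pointer, all 1's for an invalid one. */
		uintptr_t mask = 0UL - (uintptr_t)(ptr > USER_PTR_MAX);

		/*
		 * An invalid pointer becomes ~0UL, which faults when
		 * dereferenced rather than reading kernel memory at an
		 * attacker-chosen address.
		 */
		return ptr | mask;
	}

With the mask applied, even a mispredicted access_ok() leaves the
subsequent load pointing at a faulting address, which is the guarantee
barrier_nospec() was previously providing at much higher cost.
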
Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
--
Kiryl Shutsemau / Kirill A. Shutemov