Open Source and information security mailing list archives
 
Message-ID: <380dc816-1732-90dc-268e-4a8c3e7ccc7d@suse.com>
Date:   Thu, 4 Jan 2018 16:02:06 +0100
From:   Juergen Gross <jgross@...e.com>
To:     David Woodhouse <dwmw@...zon.co.uk>, ak@...ux.intel.com
Cc:     Paul Turner <pjt@...gle.com>, LKML <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Greg Kroah-Hartman <gregkh@...ux-foundation.org>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Dave Hansen <dave.hansen@...el.com>, tglx@...utronix.de,
        Kees Cook <keescook@...gle.com>,
        Rik van Riel <riel@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andy Lutomirski <luto@...capital.net>,
        Jiri Kosina <jikos@...nel.org>, gnomes@...rguk.ukuu.org.uk
Subject: Re: [PATCH v3 10/13] x86/retpoline/pvops: Convert assembler indirect
 jumps

On 04/01/18 15:37, David Woodhouse wrote:
> Convert pvops invocations to use non-speculative call sequences, when
> CONFIG_RETPOLINE is enabled.
> 
> There is scope for future optimisation here — once the pvops methods are
> actually set, we could just turn the damn things into *direct* jumps.
> But this is perfectly sufficient for now, without that added complexity.

I don't see the need to modify the pvops calls.

All indirect calls are replaced by either direct calls or other code
long before any user code is active.

For modules the replacements are in place before the module is used.


Juergen

> 
> Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
> ---
>  arch/x86/include/asm/paravirt_types.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 6ec54d01972d..54b735b8ae12 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -336,11 +336,17 @@ extern struct pv_lock_ops pv_lock_ops;
>  #define PARAVIRT_PATCH(x)					\
>  	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))
>  
> +#define paravirt_clobber(clobber)		\
> +	[paravirt_clobber] "i" (clobber)
> +#ifdef CONFIG_RETPOLINE
> +#define paravirt_type(op)				\
> +	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)),	\
> +	[paravirt_opptr] "r" ((op))
> +#else
>  #define paravirt_type(op)				\
>  	[paravirt_typenum] "i" (PARAVIRT_PATCH(op)),	\
>  	[paravirt_opptr] "i" (&(op))
> -#define paravirt_clobber(clobber)		\
> -	[paravirt_clobber] "i" (clobber)
> +#endif
>  
>  /*
>   * Generate some code, and mark it as patchable by the
> @@ -392,7 +398,11 @@ int paravirt_disable_iospace(void);
>   * offset into the paravirt_patch_template structure, and can therefore be
>   * freely converted back into a structure offset.
>   */
> +#ifdef CONFIG_RETPOLINE
> +#define PARAVIRT_CALL	"call __x86.indirect_thunk.%V[paravirt_opptr];"
> +#else
>  #define PARAVIRT_CALL	"call *%c[paravirt_opptr];"
> +#endif
>  
>  /*
>   * These macros are intended to wrap calls through one of the paravirt
> 

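For reference, the per-register thunk that the CONFIG_RETPOLINE variant of PARAVIRT_CALL targets (e.g. __x86.indirect_thunk.rax when %V[paravirt_opptr] expands to rax) looks roughly like this; a sketch of the retpoline sequence, with illustrative label names:

```asm
__x86.indirect_thunk.rax:
	call	.Ldo_rop	/* push a controlled return address */
.Lspec_trap:
	pause			/* speculation lands here and spins */
	jmp	.Lspec_trap
.Ldo_rop:
	mov	%rax, (%rsp)	/* overwrite return address with target */
	ret			/* "return" to the intended call target */
```

The CPU's return-stack predictor speculates into the pause/jmp trap rather than an attacker-trained indirect branch target, while the architectural path ends up at the function pointer held in %rax.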