Date:   Sun, 17 Sep 2017 17:08:46 +0200
From:   Borislav Petkov <bp@...e.de>
To:     Brijesh Singh <brijesh.singh@....com>
Cc:     linux-kernel@...r.kernel.org, x86@...nel.org, kvm@...r.kernel.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>,
        Andy Lutomirski <luto@...nel.org>,
        Tom Lendacky <thomas.lendacky@....com>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        David Laight <David.Laight@...LAB.COM>,
        Arnd Bergmann <arnd@...db.de>
Subject: Re: [Part1 PATCH v4 13/17] x86/io: Unroll string I/O when SEV is active

On Sat, Sep 16, 2017 at 07:34:14AM -0500, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@....com>
> 
> Secure Encrypted Virtualization (SEV) does not support string I/O, so
> unroll the string I/O operation into a loop operating on one element at
> a time.
> 
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: "H. Peter Anvin" <hpa@...or.com>
> Cc: Borislav Petkov <bp@...e.de>
> Cc: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
> Cc: David Laight <David.Laight@...LAB.COM>
> Cc: Arnd Bergmann <arnd@...db.de>
> Cc: x86@...nel.org
> Cc: linux-kernel@...r.kernel.org
> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
> Signed-off-by: Brijesh Singh <brijesh.singh@....com>
> ---
>  arch/x86/include/asm/io.h | 42 ++++++++++++++++++++++++++++++++++++++----
>  arch/x86/mm/mem_encrypt.c |  8 ++++++++
>  2 files changed, 46 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
> index c40a95c33bb8..07c28ee398d9 100644
> --- a/arch/x86/include/asm/io.h
> +++ b/arch/x86/include/asm/io.h
> @@ -265,6 +265,20 @@ static inline void slow_down_io(void)
>  
>  #endif
>  
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +
> +extern struct static_key_false __sev;
> +static inline bool __sev_active(void)
> +{
> +	return static_branch_unlikely(&__sev);
> +}
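
The rest of the hunk is snipped here, but the unrolling the commit
message describes boils down to something like the sketch below; the
helper name and exact loop shape are a paraphrase of the description,
not the patch text:

	/*
	 * Sketch only: one of the unrolled outs() helpers, assuming the
	 * __sev_active() helper quoted above.  The real patch generates
	 * the b/w/l variants of ins()/outs() from a macro.
	 */
	static inline void outsb_unrolled(int port, const void *addr,
					  unsigned long count)
	{
		if (__sev_active()) {
			/* SEV guest: no string I/O, do one OUT per byte */
			const u8 *p = addr;

			while (count--)
				outb(*p++, port);
		} else {
			/* Normal case: one REP OUTSB moves the buffer */
			asm volatile("rep; outsb"
				     : "+S" (addr), "+c" (count)
				     : "d" (port) : "memory");
		}
	}

With the key false, static_branch_unlikely() compiles down to a patched
NOP that falls through to the REP path, so the non-SEV case doesn't pay
for a runtime check.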

I'm still not happy about the naming of the two, sev_active() and
__sev_active(). Perhaps the __ variant should be called sev_key_active()
or ...

Blergh, my naming sux. In any case, it would be cool if it were more
obvious from the naming which variant uses the static key and which is
the slow one.

I'm also thinking of maybe having a single sev_active() which uses the
static key, but that is perhaps overkill on slow paths...
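
Roughly, the split would look something like the sketch below;
sev_key_active() is just the name floated above, and sev_enabled is a
placeholder flag, not necessarily what the series uses:

	#include <linux/jump_label.h>
	#include <linux/types.h>

	/* Fast check for hot paths like the I/O helpers: static key. */
	extern struct static_key_false __sev;

	static inline bool sev_key_active(void)
	{
		return static_branch_unlikely(&__sev);
	}

	/* Slow-path check for init/setup code: plain variable read. */
	extern bool sev_enabled;	/* placeholder backing flag */

	static inline bool sev_active(void)
	{
		return sev_enabled;
	}

That way the name says which one is backed by the static key and which
one is just a plain load, and callers on slow paths don't need the
static key at all.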

Hrrmmm.

In any case, looking at gcc output, the unrolled variant gets put
out-of-line, as expected.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)