Date:   Tue, 27 Oct 2020 12:08:12 +0100
From:   Borislav Petkov <bp@...en8.de>
To:     Joerg Roedel <joro@...tes.org>
Cc:     x86@...nel.org, Joerg Roedel <jroedel@...e.de>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Kees Cook <keescook@...omium.org>,
        Arvind Sankar <nivedita@...m.mit.edu>,
        Martin Radev <martin.b.radev@...il.com>,
        Tom Lendacky <thomas.lendacky@....com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 3/5] x86/boot/compressed/64: Check SEV encryption in
 64-bit boot-path

On Wed, Oct 21, 2020 at 02:39:36PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
> new file mode 100644
> index 000000000000..5075458ecad0
> --- /dev/null
> +++ b/arch/x86/kernel/sev_verify_cbit.S

Why a separate file? You're using it just like verify_cpu.S, and this is
kinda verifying the CPU, so you could simply add the functionality there...

> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + *	sev_verify_cbit.S - Code for verification of the C-bit position reported
> + *			    by the Hypervisor when running with SEV enabled.
> + *
> + *	Copyright (c) 2020  Joerg Roedel (jroedel@...e.de)
> + *
> + * Implements sev_verify_cbit() which is called before switching to a new
> + * long-mode page-table at boot.
> + *
> + * It verifies that the C-bit position is correct by writing a random value to
> + * an encrypted memory location while on the current page-table. Then it
> + * switches to the new page-table to verify the memory content is still the
> + * same. After that it switches back to the current page-table. If the check
> + * succeeded, it returns; if it failed, the code invalidates the stack pointer
> + * and goes into a hlt loop. The stack pointer is invalidated to make sure no
> + * interrupt or exception can get the CPU out of the hlt loop.
> + *
> + * The new page-table pointer is expected in %rdi (first parameter).
> + *
> + */
> +SYM_FUNC_START(sev_verify_cbit)
> +#ifdef CONFIG_AMD_MEM_ENCRYPT

Yeah, can you please use the callee-clobbered registers in the order
they're used by the ABI, see arch/x86/entry/calling.h.

Because I'm looking at this and wondering whether rsi, rdx and rcx are
somehow live here and you're avoiding them...
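
For reference, the order, roughly paraphrasing the table at the top of
calling.h:

    arguments [callee-clobbered]: rdi rsi rdx rcx r8 r9
    extra caller-saved:           r10 r11
    callee-saved:                 rbx rbp r12-r15
    return:                       rax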

Otherwise nice commenting - I like it when it is properly explained what
the asm does and what it expects as input, cool.
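
FWIW, for the archives, the flow boils down to roughly this in C - just
a sketch, all the helper and variable names below are made up and are
not the patch's actual symbols:

    /*
     * Rough C equivalent of the check. "check_data" stands in for some
     * location in encrypted memory; rdrand_u64(), read_cr3(),
     * write_cr3(), wreck_stack_pointer() and halt() are illustrative
     * wrappers only.
     */
    static volatile unsigned long long check_data;

    void sev_verify_cbit(unsigned long new_cr3)
    {
            unsigned long old_cr3 = read_cr3();
            unsigned long long random, readback;

            if (!sev_active() || !rdrand_u64(&random))
                    return;

            check_data = random;    /* write via the current page-table */

            write_cr3(new_cr3);     /* switch to the new page-table */
            readback = check_data;  /* read it back through it */
            write_cr3(old_cr3);     /* and switch back */

            if (readback != random) {
                    /*
                     * Wrong C-bit position: trash the stack pointer so
                     * no interrupt or exception can get us out of the
                     * hlt loop, then halt forever.
                     */
                    wreck_stack_pointer();
                    for (;;)
                            halt();
            }
    }

If the C-bit the hypervisor reported is wrong, the read through the new
page-table sees ciphertext instead of the written value and the compare
fails.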

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
