Message-ID: <CAJhGHyCMMCY9bZauzrSeQr_62SpJgZQEQy9P7Rh28HXJtF5O5A@mail.gmail.com>
Date: Sun, 24 Jan 2021 22:11:14 +0800
From: Lai Jiangshan <jiangshanlai+lkml@...il.com>
To: Joerg Roedel <joro@...tes.org>
Cc: X86 ML <x86@...nel.org>, Joerg Roedel <jroedel@...e.de>,
"H. Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Jiri Slaby <jslaby@...e.cz>,
Dan Williams <dan.j.williams@...el.com>,
Tom Lendacky <thomas.lendacky@....com>,
Juergen Gross <jgross@...e.com>,
Kees Cook <keescook@...omium.org>,
David Rientjes <rientjes@...gle.com>,
Cfir Cohen <cfir@...gle.com>,
Erdem Aktas <erdemaktas@...gle.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Mike Stunes <mstunes@...are.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Martin Radev <martin.b.radev@...il.com>,
LKML <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH v7 45/72] x86/entry/64: Add entry code for #VC handler
> +
> + /*
> + * No need to switch back to the IST stack. The current stack is either
> + * identical to the stack in the IRET frame or the VC fall-back stack,
> + * so it is definitely mapped even with PTI enabled.
> + */
> + jmp paranoid_exit
> +
>
Hello,

I know we don't enable PTI on AMD, but the comment above doesn't match the code that follows.

Assume PTI is enabled, as the comment itself says ("even with PTI enabled"). If #VC happens after entry_SYSCALL_64 but before it switches to the kernel CR3, vc_switch_off_ist() will switch the stack to the kernel task stack, and then paranoid_exit can't work correctly: it switches back to the user CR3 while still on the kernel stack.

So the comment above misses the case where the current stack may be the kernel task stack, which is not mapped in the user CR3.
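To spell out the sequence I am worried about (a sketch of my reading of the entry paths, not actual kernel code):

```
/* Hypothetical problematic sequence, assuming PTI is enabled: */

1. User space executes SYSCALL; the CPU enters entry_SYSCALL_64
   still on the user CR3 (SWITCH_TO_KERNEL_CR3 has not run yet).
2. A #VC exception hits in that window.
3. vc_switch_off_ist() sees regs->ip inside the
   entry_SYSCALL_64..entry_SYSCALL_64_safe_stack range and
   switches RSP to cpu_current_top_of_stack, i.e. the kernel
   task stack.
4. On the way out, paranoid_exit restores the CR3 saved at
   entry (the user CR3 here) while RSP still points at the
   kernel task stack, which PTI does not map in the user page
   tables -> the return path would fault.
```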
Maybe I missed something.
Thanks
Lai
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *regs)
> +{
> + unsigned long sp, *stack;
> + struct stack_info info;
> + struct pt_regs *regs_ret;
> +
> + /*
> + * In the SYSCALL entry path the RSP value comes from user-space - don't
> + * trust it and switch to the current kernel stack
> + */
> + if (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
> + regs->ip < (unsigned long)entry_SYSCALL_64_safe_stack) {
> + sp = this_cpu_read(cpu_current_top_of_stack);
> + goto sync;
> + }