Message-ID: <CALCETrUqK6hv4AuGL=GtK+12TCmr5nBA7CBy=X7TNA=w_Jk0Qw@mail.gmail.com>
Date: Tue, 19 May 2020 10:06:05 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>,
Andrew Cooper <andrew.cooper3@...rix.com>
Cc: LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
"Paul E. McKenney" <paulmck@...nel.org>,
Andy Lutomirski <luto@...nel.org>,
Alexandre Chartre <alexandre.chartre@...cle.com>,
Frederic Weisbecker <frederic@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Petr Mladek <pmladek@...e.com>,
Steven Rostedt <rostedt@...dmis.org>,
Joel Fernandes <joel@...lfernandes.org>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Juergen Gross <jgross@...e.com>,
Brian Gerst <brgerst@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Will Deacon <will@...nel.org>,
Tom Lendacky <thomas.lendacky@....com>,
Wei Liu <wei.liu@...nel.org>,
Michael Kelley <mikelley@...rosoft.com>,
Jason Chen CJ <jason.cj.chen@...el.com>,
Zhao Yakui <yakui.zhao@...el.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>
Subject: Re: [patch V6 10/37] x86/entry: Switch XEN/PV hypercall entry to IDTENTRY
On Fri, May 15, 2020 at 5:10 PM Thomas Gleixner <tglx@...utronix.de> wrote:
>
>
> Convert the XEN/PV hypercall to IDTENTRY:
>
> - Emit the ASM stub with DECLARE_IDTENTRY
> - Remove the ASM idtentry in 64bit
> - Remove the open coded ASM entry code in 32bit
> - Remove the old prototypes
>
> The handler stubs need to stay in ASM code as they need corner case
> handling and adjustment of the stack pointer.
>
> Provide a new C function which invokes the entry/exit handling and calls
> into the XEN handler on the interrupt stack.
>
> The exit code is slightly different from the regular idtentry_exit() on
> non-preemptible kernels. If the hypercall is preemptible and need_resched()
> is set, then XEN's preempt hypercall scheduling function is invoked. Add it
> as a conditional path to __idtentry_exit() so the function can be reused.
>
> __idtentry_exit() is force-inlined, so on the regular idtentry_exit() path
> the extra condition is optimized out by the compiler.
>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
> Cc: Juergen Gross <jgross@...e.com>
>
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index 882ada245bd5..34caf3849632 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -27,6 +27,9 @@
> #include <linux/syscalls.h>
> #include <linux/uaccess.h>
>
> +#include <xen/xen-ops.h>
> +#include <xen/events.h>
> +
> #include <asm/desc.h>
> #include <asm/traps.h>
> #include <asm/vdso.h>
> @@ -35,6 +38,7 @@
> #include <asm/nospec-branch.h>
> #include <asm/io_bitmap.h>
> #include <asm/syscall.h>
> +#include <asm/irq_stack.h>
>
> #define CREATE_TRACE_POINTS
> #include <trace/events/syscalls.h>
> @@ -539,7 +543,8 @@ void noinstr idtentry_enter(struct pt_regs *regs)
> }
> }
>
> -static __always_inline void __idtentry_exit(struct pt_regs *regs)
> +static __always_inline void __idtentry_exit(struct pt_regs *regs,
> + bool preempt_hcall)
> {
> lockdep_assert_irqs_disabled();
>
> @@ -573,6 +578,16 @@ static __always_inline void __idtentry_exit(struct pt_regs *regs)
> instrumentation_end();
> return;
> }
> + } else if (IS_ENABLED(CONFIG_XEN_PV)) {
> + if (preempt_hcall) {
> + /* See CONFIG_PREEMPTION above */
> + instrumentation_begin();
> + rcu_irq_exit_preempt();
> + xen_maybe_preempt_hcall();
> + trace_hardirqs_on();
> + instrumentation_end();
> + return;
> + }
Ewwwww! This shouldn't be taken as a NAK -- it's just an expression of disgust.
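To be fair, the extra branch should be free on the regular idtentry_exit()
path, since __idtentry_exit() is force-inlined with a constant argument and
the compiler can drop the test. A standalone illustration of that pattern,
with made-up names rather than the real kernel code:

	/*
	 * Illustration only, not kernel code: an always_inline helper
	 * taking a constant bool lets the compiler fold the branch away
	 * in the caller that passes false.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static inline __attribute__((always_inline))
	void exit_common(bool preempt_hcall)
	{
		if (preempt_hcall) {
			/* Only the hypercall exit wrapper passes true. */
			puts("preemptible hypercall exit");
			return;
		}
		puts("regular exit");
	}

	void regular_exit(void)
	{
		exit_common(false);	/* constant false: the if() folds away */
	}

	void hypercall_exit(void)
	{
		exit_common(true);
	}

	int main(void)
	{
		regular_exit();
		hypercall_exit();
		return 0;
	}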
> }
> /*
> * If preemption is disabled then this needs to be done
> @@ -612,5 +627,43 @@ static __always_inline void __idtentry_exit(struct pt_regs *regs)
> */
> void noinstr idtentry_exit(struct pt_regs *regs)
> {
> - __idtentry_exit(regs);
> + __idtentry_exit(regs, false);
> +}
> +
> +#ifdef CONFIG_XEN_PV
> +static void __xen_pv_evtchn_do_upcall(void)
> +{
> + irq_enter_rcu();
> + inc_irq_stat(irq_hv_callback_count);
> +
> + xen_hvm_evtchn_do_upcall();
> +
> + irq_exit_rcu();
> +}
> +
> +__visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
> +{
> + struct pt_regs *old_regs;
> +
> + idtentry_enter(regs);
> + old_regs = set_irq_regs(regs);
> +
> + if (!irq_needs_irq_stack(regs)) {
> + instrumentation_begin();
> + __xen_pv_evtchn_do_upcall();
> + instrumentation_end();
> + } else {
> + run_on_irqstack(__xen_pv_evtchn_do_upcall, NULL);
> + }
Shouldn't this be:

	instrumentation_begin();
	if (!irq_needs_irq_stack(...))
		__blah();
	else
		run_on_irqstack(__blah, NULL);
	instrumentation_end();

or even:

	instrumentation_begin();
	run_on_irqstack_if_needed(__blah, NULL);
	instrumentation_end();
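i.e. something along these lines. Just a sketch: run_on_irqstack_if_needed()
doesn't exist in this series, I'm passing regs explicitly rather than the
NULL above, and the irq_needs_irq_stack()/run_on_irqstack() signatures are
assumed from the hunk being replied to.

	/*
	 * Hypothetical helper: pick the stack in one place instead of
	 * open-coding the check at every call site.
	 */
	static __always_inline void
	run_on_irqstack_if_needed(void (*func)(void), struct pt_regs *regs)
	{
		if (!irq_needs_irq_stack(regs))
			func();
		else
			run_on_irqstack(func, NULL);
	}

which would shrink the call site to:

	instrumentation_begin();
	run_on_irqstack_if_needed(__xen_pv_evtchn_do_upcall, regs);
	instrumentation_end();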
****** BUT *******
I think this is all arse-backwards. This is a giant mess designed to
pretend we support preemption and to emulate normal preemption in a
non-preemptible kernel.  I propose one of two massive cleanups:
A: Just delete all of this code. Preemptible hypercalls on
non-preempt kernels will still process interrupts but won't get
preempted. If you want preemption, compile with preemption.
B: Turn this thing around. Specifically, in the one and only case we
care about, we know pretty much exactly what context we got this entry
in: we're running in a schedulable context doing an explicitly
preemptible hypercall, and we have RIP pointing at a SYSCALL
instruction (presumably, but we shouldn't bet on it) in the hypercall
page. Ideally we would change the Xen PV ABI so the hypercall would
return something like EAGAIN instead of auto-restarting and we could
ditch this mess entirely. But the ABI seems to be set in stone or at
least in molasses, so how about just:
	idtentry_exit(regs);

	if (inhcall && need_resched())
		schedule();
Off the top of my head, I don't see any reason this wouldn't work, and
it's a heck of a lot cleaner. Possibly it should really be:
	if (inhcall) {
		if (!WARN_ON(regs->ip not in hypercall page))
			cond_resched();
	}
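Fleshed out a bit, the exit side might end up looking roughly like the
below. This is only a sketch: get_and_clear_inhcall() and
is_xen_hypercall_page_ip() are names I just made up, the "in hypercall"
flag would have to be recorded by the hypercall stub, and the
stack-selection dance from above is omitted.

	__visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
	{
		struct pt_regs *old_regs;
		bool inhcall;

		idtentry_enter(regs);
		old_regs = set_irq_regs(regs);

		instrumentation_begin();
		__xen_pv_evtchn_do_upcall();	/* stack selection omitted here */
		instrumentation_end();

		set_irq_regs(old_regs);

		/*
		 * Made-up helper: the hypercall stub would have to record
		 * whether this upcall interrupted a preemptible hypercall.
		 */
		inhcall = get_and_clear_inhcall();

		idtentry_exit(regs);

		if (inhcall) {
			instrumentation_begin();
			/*
			 * is_xen_hypercall_page_ip() is also made up: only
			 * resched if RIP really is in the hypercall page.
			 */
			if (need_resched() &&
			    !WARN_ON_ONCE(!is_xen_hypercall_page_ip(regs->ip)))
				cond_resched();
			instrumentation_end();
		}
	}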