Message-Id: <20190919150809.145400160@linutronix.de>
Date: Thu, 19 Sep 2019 17:03:21 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: LKML <linux-kernel@...r.kernel.org>
Cc: x86@...nel.org, Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Marc Zyngier <maz@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-arch@...r.kernel.org
Subject: [RFC patch 07/15] arm64/syscall: Remove obscure flag check
The syscall handling code has an obscure check of pending work which tries to
take a shortcut before returning to user space: it calls into the exit work
code when the thread flags at syscall entry time required entering the slow
path. That does not make sense because the underlying work functions
reevaluate the current flags anyway and do nothing when no work is pending.
Replace that with a straightforward test of the work flags. Preparatory change
for switching to the generic syscall exit work handling code.
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
---
arch/arm64/kernel/syscall.c | 32 ++++++--------------------------
1 file changed, 6 insertions(+), 26 deletions(-)
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -93,8 +93,6 @@ static void cortex_a76_erratum_1463225_s
static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
const syscall_fn_t syscall_table[])
{
- unsigned long flags = current_thread_info()->flags;
-
regs->orig_x0 = regs->regs[0];
regs->syscallno = scno;
/* Set default error number */
@@ -105,33 +103,15 @@ static void el0_svc_common(struct pt_reg
user_exit();
scno = syscall_enter_from_usermode(regs, scno);
- if (scno == NO_SYSCALL)
- goto trace_exit;
-
- invoke_syscall(regs, scno, sc_nr, syscall_table);
+ if (scno != NO_SYSCALL)
+ invoke_syscall(regs, scno, sc_nr, syscall_table);
- /*
- * The tracing status may have changed under our feet, so we have to
- * check again. However, if we were tracing entry, then we always trace
- * exit regardless, as the old entry assembly did.
- */
- if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
- local_daif_mask();
- flags = current_thread_info()->flags;
- if (!has_syscall_work(flags)) {
- /*
- * We're off to userspace, where interrupts are
- * always enabled after we restore the flags from
- * the SPSR.
- */
- trace_hardirqs_on();
- return;
- }
+ local_daif_mask();
+ if (has_syscall_work(current_thread_info()->flags) ||
+ IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
local_daif_restore(DAIF_PROCCTX);
+ syscall_trace_exit(regs);
}
-
-trace_exit:
- syscall_trace_exit(regs);
}
static inline void sve_user_discard(void)