Message-ID: <56786606.6090506@oracle.com>
Date: Mon, 21 Dec 2015 15:50:14 -0500
From: Sasha Levin <sasha.levin@...cle.com>
To: Andy Lutomirski <luto@...nel.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Cc: Frédéric Weisbecker <fweisbec@...il.com>,
Rik van Riel <riel@...hat.com>,
Oleg Nesterov <oleg@...hat.com>,
Denys Vlasenko <vda.linux@...glemail.com>,
Borislav Petkov <bp@...en8.de>,
Kees Cook <keescook@...omium.org>,
Brian Gerst <brgerst@...il.com>, paulmck@...ux.vnet.ibm.com
Subject: Re: [PATCH v5 08/17] x86/entry: Add enter_from_user_mode and use it
in syscalls
On 07/03/2015 03:44 PM, Andy Lutomirski wrote:
> Changing the x86 context tracking hooks is dangerous because there
> are no good checks that we track our context correctly. Add a
> helper to check that we're actually in CONTEXT_USER when we enter
> from user mode and wire it up for syscall entries.
>
> Subsequent patches will wire this up for all non-NMI entries as
> well. NMIs are their own special beast and cannot currently switch
> overall context tracking state. Instead, they have their own
> special RCU hooks.
>
> This is a tiny speedup if !CONFIG_CONTEXT_TRACKING (removes a
> branch) and a tiny slowdown if CONFIG_CONTEXT_TRACKING (adds a layer
> of indirection). Eventually, we should fix up the core context
> tracking code to supply a function that does what we want (and can
> be much simpler than user_exit), which will enable us to get rid of
> the extra call.
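
For reference, the helper described above looks roughly like the sketch
below, reconstructed from the commit text and the context tracking headers
cited in the trace; the CT_WARN_ON()/ct_state() names are assumptions and
may not match the patch verbatim:

/*
 * Sketch, not the verbatim patch: called on syscall entry from user mode
 * with IRQs off, before any work that might use RCU or schedule.
 */
__visible void enter_from_user_mode(void)
{
	/*
	 * Assumed check: warn if context tracking does not already think
	 * we are in CONTEXT_USER. A check of this kind is what fires as
	 * the WARNING at arch/x86/entry/common.c:44 below.
	 */
	CT_WARN_ON(ct_state() != CONTEXT_USER);

	/* Mark the transition to kernel context for context tracking/RCU. */
	user_exit();
}
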
Hey Andy,
I see the following warning in today's -next:
[ 2162.706868] ------------[ cut here ]------------
[ 2162.708021] WARNING: CPU: 4 PID: 28801 at arch/x86/entry/common.c:44 enter_from_user_mode+0x1c/0x50()
[ 2162.709466] Modules linked in:
[ 2162.709998] CPU: 4 PID: 28801 Comm: trinity-c375 Tainted: G B 4.4.0-rc5-next-20151221-sasha-00020-g840272e-dirty #2753
[ 2162.711847] 0000000000000000 00000000f17e6fcd ffff880292d5fe08 ffffffffa4045334
[ 2162.713108] 0000000041b58ab3 ffffffffaf66686b ffffffffa4045289 ffff880292d5fdc0
[ 2162.714544] 0000000000000000 00000000f17e6fcd ffffffffa23cf466 0000000000000004
[ 2162.715793] Call Trace:
[ 2162.716229] dump_stack (lib/dump_stack.c:52)
[ 2162.719021] warn_slowpath_common (kernel/panic.c:484)
[ 2162.721014] warn_slowpath_null (kernel/panic.c:518)
[ 2162.721950] enter_from_user_mode (arch/x86/entry/common.c:44 (discriminator 7) include/linux/context_tracking_state.h:30 (discriminator 7) include/linux/context_tracking.h:30 (discriminator 7) arch/x86/entry/common.c:45 (discriminator 7))
[ 2162.722911] syscall_trace_enter_phase1 (arch/x86/entry/common.c:94)
[ 2162.726914] tracesys (arch/x86/entry/entry_64.S:241)
[ 2162.727704] ---[ end trace 1e5b49c361cbfe8b ]---
[ 2162.728468] BUG: scheduling while atomic: trinity-c375/28801/0x00000401
[ 2162.729517] Modules linked in:
[ 2162.730020] Preemption disabled param_attr_store (kernel/params.c:625)
[ 2162.731304]
[ 2162.731579] CPU: 4 PID: 28801 Comm: trinity-c375 Tainted: G B W 4.4.0-rc5-next-20151221-sasha-00020-g840272e-dirty #2753
[ 2162.733432] 0000000000000000 00000000f17e6fcd ffff880292d5fe20 ffffffffa4045334
[ 2162.734778] 0000000041b58ab3 ffffffffaf66686b ffffffffa4045289 ffff880292d5fde0
[ 2162.736036] fffffffface198f9 00000000f17e6fcd ffff880292d5fe50 0000000000000282
[ 2162.737309] Call Trace:
[ 2162.737718] dump_stack (lib/dump_stack.c:52)
[ 2162.740566] __schedule_bug (kernel/sched/core.c:3102)
[ 2162.741498] __schedule (./arch/x86/include/asm/preempt.h:27 kernel/sched/core.c:3116 kernel/sched/core.c:3225)
[ 2162.742391] schedule (kernel/sched/core.c:3312 (discriminator 1))
[ 2162.743221] exit_to_usermode_loop (arch/x86/entry/common.c:246)
[ 2162.744331] syscall_return_slowpath (arch/x86/entry/common.c:282 include/linux/context_tracking_state.h:30 include/linux/context_tracking.h:24 arch/x86/entry/common.c:284 arch/x86/entry/common.c:344)
[ 2162.745364] int_ret_from_sys_call (arch/x86/entry/entry_64.S:282)
Thanks,
Sasha