Message-ID: <20150818230235.GA13685@lerouge>
Date: Wed, 19 Aug 2015 01:02:37 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
Sasha Levin <sasha.levin@...cle.com>,
Brian Gerst <brgerst@...il.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Oleg Nesterov <oleg@...hat.com>,
Borislav Petkov <bp@...en8.de>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH] x86/entry/64: Context-track syscalls before enabling interrupts

On Tue, Aug 18, 2015 at 03:35:30PM -0700, Andy Lutomirski wrote:
> On Tue, Aug 18, 2015 at 3:16 PM, Frederic Weisbecker <fweisbec@...il.com> wrote:
> > On Tue, Aug 18, 2015 at 12:11:59PM -0700, Andy Lutomirski wrote:
> >> This fixes a couple minor holes if we took an IRQ very early in syscall
> >> processing:
> >>
> >> - We could enter the IRQ with CONTEXT_USER. Everything worked (RCU
> >> was fine), but we could warn if all the debugging options were
> >> set.
> >
> > So this is fixing issues introduced by your changes that call user_exit()
> > from IRQs, right?
>
> Yes. Here's an example splat, courtesy of Sasha:
>
> https://gist.github.com/sashalevin/a006a44989312f6835e7
>
> >
> > But the IRQs aren't supposed to call user_exit(); they have their own hooks.
> > That's where the real issue is.
>
> In -tip, the assumption is that we *always* switch to CONTEXT_KERNEL
> when entering the kernel for a non-NMI reason.
Why? IRQs don't need that! We already have irq_enter()/irq_exit(), and we
don't want to call rcu_user_*() pairs on IRQs; doing so introduces a serious
performance regression. And I'm talking about the code that's currently
in -tip.
> That means that we can
> avoid all of the (expensive!) checks for what context we're in.
If you're referring to context tracking, the context check is a per-cpu
read. Not something that's usually considered expensive.
> It also means that (other than for IRQs, which need further cleanup) we only
> switch context state once per user/kernel transition.
???
>
> The cost for doing so should be essentially zero, modulo artifacts from
> poor inlining.
And modulo rcu_user_*(), which do multiple costly atomic_add_return()
operations, each implying a full memory barrier. Plus the unnecessary vtime
accounting that duplicates the one already done in irq_enter()/irq_exit()
(which currently even takes a lock; that will probably be turned into a
seqcount, but still, full memory barriers...).
I'm sorry, but I'm going to NACK any code that does that in IRQs (and again,
that concerns current tip:x86/asm).