Message-ID: <20150618080106.GA11473@gmail.com>
Date: Thu, 18 Jun 2015 10:01:06 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: "H. Peter Anvin" <hpa@...or.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Denys Vlasenko <vda.linux@...glemail.com>,
Borislav Petkov <bp@...en8.de>, X86 ML <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: For your amusement: slightly faster syscalls
* Andy Lutomirski <luto@...capital.net> wrote:
> On Mon, Jun 15, 2015 at 2:42 PM, H. Peter Anvin <hpa@...or.com> wrote:
> > On 06/15/2015 02:30 PM, Linus Torvalds wrote:
> >>
> >> On Jun 12, 2015 2:09 PM, "Andy Lutomirski" <luto@...capital.net> wrote:
> >>>
> >>> Caveat emptor: it also disables SMP.
> >>
> >> OK, I don't think it's interesting in that form.
> >>
> >> For small cpu counts, I guess we could have per-cpu syscall entry points
> >> (unless the syscall entry msr is shared across hyperthreading? Some msr's are
> >> per thread, others per core, AFAIK), and it could actually work that way.
> >>
> >> But I'm not sure the three cycles is worth the worry and the complexity.
> >
> > We discussed the per-cpu syscall entry point, and the issue at hand is that
> > it is very hard to do that without, with fairly high probability, touching
> > another cache line and quite possibly another page (and hence a TLB entry).
( So apparently I wasn't Cc:ed, or gmail ate the mail - so I can only guess from
the surrounding discussion what this patch does, as my lkml folder is still
doing a long refresh ... )
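
( For reference, a minimal sketch of the per-CPU trampoline variant being
discussed - not the actual patch, which I haven't seen. MSR_LSTAR, wrmsrl()
and smp_processor_id() are real kernel interfaces; the stub layout and
percpu_entry_stub[] are made up here purely for illustration: )

#include <linux/smp.h>          /* smp_processor_id() */
#include <asm/msr.h>            /* wrmsrl(), MSR_LSTAR */

struct syscall_stub {
        /* filled in at boot, e.g. "movq $<cpu>, %r11; jmp entry_SYSCALL_64" */
        char code[64];
} __aligned(64);                /* one cache line per stub */

static struct syscall_stub percpu_entry_stub[NR_CPUS];

/* Must run on the target CPU: LSTAR is a per-logical-CPU MSR, AFAIK. */
static void setup_percpu_syscall_entry(void)
{
        wrmsrl(MSR_LSTAR, (unsigned long)&percpu_entry_stub[smp_processor_id()]);
}

( And this is where hpa's objection bites: the stub sits outside the regular
entry text, so each syscall all but guarantees touching an extra I-cache
line and quite possibly an extra I-TLB entry. )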
>
> I think this isn't actually true. If we were going to do a per-cpu syscall
> entry point, then we might as well duplicate all of the entry code per cpu
> instead of just a short trampoline. That would avoid extra TLB misses and (L1)
> cache misses, I think.
>
> I still think this is far too complicated for three cycles. I was hoping for
> more.
The other problem with duplicating the entry code is that with per-CPU entry
code we split its cache footprint across the higher-level caches (such as the
L2, but also the L3 cache).
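
( Illustrative numbers only, assuming ~1 KB of hot entry text: shared, that
is 1 KB of L2/L3 footprint for everybody; duplicated across 64 CPUs it
becomes 64 x 1 KB = 64 KB of identical code competing for cache - an eighth
of a typical 512 KB L2. )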
The interesting number to check would be cache-cold entry performance, not the
cache-hot one: the NUMA latency advantage of having per-node copies of the
entry code might be worth it.
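
( The per-node variant is cheap to sketch - it is also Andy's "duplicate all
of the entry code" idea, just at node rather than per-CPU granularity.
for_each_online_node(), for_each_online_cpu(), cpu_to_node(),
smp_call_function_single() and wrmsrl() are real interfaces;
copy_entry_text_to_node() is a made-up helper that would have to allocate
node-local executable memory and relocate the entry text into it: )

#include <linux/smp.h>          /* smp_call_function_single() */
#include <linux/cpumask.h>      /* for_each_online_cpu() */
#include <linux/nodemask.h>     /* for_each_online_node(), MAX_NUMNODES */
#include <linux/topology.h>     /* cpu_to_node() */
#include <asm/msr.h>            /* wrmsrl(), MSR_LSTAR */

static unsigned long node_entry_text[MAX_NUMNODES];

/* Runs on the target CPU via IPI; LSTAR has to be written locally. */
static void set_lstar(void *addr)
{
        wrmsrl(MSR_LSTAR, (unsigned long)addr);
}

static int __init setup_pernode_syscall_entry(void)
{
        int nid, cpu;

        for_each_online_node(nid)
                node_entry_text[nid] = copy_entry_text_to_node(nid); /* hypothetical */

        /* Point each CPU's SYSCALL entry at its own node's copy: */
        for_each_online_cpu(cpu)
                smp_call_function_single(cpu, set_lstar,
                                (void *)node_entry_text[cpu_to_node(cpu)], 1);
        return 0;
}

( Whether the cache-cold, first-touch syscall on a remote node gets
measurably cheaper with this is exactly the number to measure. )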
... and that's why UP is the least interesting case ;-)
Thanks,
Ingo