Message-ID: <20081104204400.GC10825@elte.hu>
Date: Tue, 4 Nov 2008 21:44:00 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Alexander van Heukelum <heukelum@...tmail.fm>
Cc: Andi Kleen <andi@...stfloor.org>,
Cyrill Gorcunov <gorcunov@...il.com>,
Alexander van Heukelum <heukelum@...lshack.com>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, lguest@...abs.org,
jeremy@...source.com, Steven Rostedt <srostedt@...hat.com>,
Mike Travis <travis@....com>
Subject: Re: [PATCH RFC/RFB] x86_64, i386: interrupt dispatch changes
* Alexander van Heukelum <heukelum@...tmail.fm> wrote:
> On Tue, 4 Nov 2008 18:05:01 +0100, "Andi Kleen" <andi@...stfloor.org>
> said:
> > > not taking into account the cost of cs reading (which I
> > > don't suspect to be that expensive apart from writing,
> >
> > GDT accesses have an implied LOCK prefix. Especially
> > on some older CPUs that could be slow.
> >
> > I don't know if it's a problem or not but it would need
> > some careful benchmarking on different systems to make sure interrupt
> > latencies are not impacted.
That's not a real issue on anything produced in this decade, as Linux
has had per-CPU GDTs for about a decade as well.
It's only an issue on ancient CPUs that export all their LOCKed cycles
to the bus: roughly the Pentium and older. The PPro already got it
right.
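For reference, the per-CPU GDT arrangement looks roughly like this in
arch/x86 (a sketch from memory, exact macro and field names vary by
kernel version). Each CPU's GDT sits in its own page, so an
implied-LOCK descriptor access never touches a cache line that another
CPU writes to:

  struct gdt_page {
          struct desc_struct gdt[GDT_ENTRIES];
  } __attribute__((aligned(PAGE_SIZE)));

  DECLARE_PER_CPU(struct gdt_page, gdt_page);

  static inline struct desc_struct *get_cpu_gdt_table(unsigned int cpu)
  {
          return per_cpu(gdt_page, cpu).gdt;
  }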
What matters is what I said before: the actual raw cycle count before
and after the patch, on the two main classes of CPUs, and the amount
of icache we can save.
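Something like the sketch below gives a first userspace approximation
(illustrative only: the numbers we actually care about come from
instrumenting the irq entry path in the kernel itself, and cpuid is
used purely as a serializing fence around rdtsc):

  #include <stdio.h>
  #include <stdint.h>

  /* cpuid serializes, so rdtsc cannot be reordered around the probe */
  static inline uint64_t rdtsc_serialized(void)
  {
          uint32_t lo, hi;

          asm volatile("cpuid\n\trdtsc"
                       : "=a" (lo), "=d" (hi) :: "%rbx", "%rcx");
          return ((uint64_t)hi << 32) | lo;
  }

  int main(void)
  {
          unsigned short cs;
          uint64_t t0, t1;

          t0 = rdtsc_serialized();
          asm volatile("mov %%cs, %0" : "=r" (cs)); /* probe under test */
          t1 = rdtsc_serialized();

          printf("cs=%#x, ~%llu cycles\n", cs,
                 (unsigned long long)(t1 - t0));
          return 0;
  }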
> That's good to know. I assume this LOCKed bus cycle only occurs if
> the (hidden) segment information is not cached in some way? How many
> segments are typically cached? In particular, does it optimize
> switching between two segments?
>
> > Another reason I would also be careful with this patch is that it
> > will likely trigger slow paths in JITs like qemu/vmware/etc.
>
> Software can be fixed ;).
Yes, and things like vmware were never a reason to hinder Linux.
> > Also, code segment switching is likely not something that current
> > and future microarchitectures will spend a lot of time
> > optimizing.
> >
> > I'm not sure that risk is worth the small improvement in code
> > size.
>
> I think it is worth exploring a bit more. I feel it should be, at
> worst, performance-neutral, but I really think the new code is more
> readable/understandable.
It's all measurable, so the vague "risk" mentioned above can be
dispelled via hard numbers.
> > An alternative BTW to having all the stubs in the executable would
> > be to just dynamically generate them when the interrupt is set up.
> > Then you would only have the stubs around for the interrupts which
> > are actually used.
>
> I was trying to simplify things, not make it even less transparent
> ;).
Yep, the complexity of dynamic stubs is the last thing we need here.
And as hpa's comments point out, compressing the rather stupid irq
stubs might be a third promising option as well.
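Roughly along these lines (a hedged sketch of the idea only: the
labels, vector values and the common tail are illustrative, not the
kernel's actual entry code). Many short push+jmp stubs share one
32-byte-aligned block and one common tail, instead of each stub being
padded out on its own:

  #include <stdio.h>

  asm(
  "	.pushsection .text\n"
  "	.p2align 5\n"		/* pack stubs per 32-byte icache block */
  "stub0:	push $0x20; jmp common_stub\n"
  "stub1:	push $0x21; jmp common_stub\n"
  "stub2:	push $0x22; jmp common_stub\n"
  "	/* ... one short stub per vector ... */\n"
  "common_stub:\n"
  "	pop %rax\n"		/* the vector that was pushed */
  "	ret\n"		/* real code would save regs and call do_IRQ */
  "	.popsection\n"
  );

  /* call one stub directly, just to show the dispatch in the demo */
  extern unsigned long stub1(void);

  int main(void)
  {
          printf("stub1 dispatched vector %#lx\n", stub1());
          return 0;
  }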
Ingo