Message-ID: <4DEF4101.17745.1C109461@pageexec.freemail.hu>
Date: Wed, 08 Jun 2011 11:29:37 +0200
From: pageexec@...email.hu
To: Ingo Molnar <mingo@...e.hu>
CC: Andrew Lutomirski <luto@....edu>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Jesper Juhl <jj@...osbits.net>,
Borislav Petkov <bp@...en8.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Arjan van de Ven <arjan@...radead.org>,
Jan Beulich <JBeulich@...ell.com>,
richard -rw- weinberger <richard.weinberger@...il.com>,
Mikael Pettersson <mikpe@...uu.se>,
Andi Kleen <andi@...stfloor.org>,
Brian Gerst <brgerst@...il.com>,
Louis Rilling <Louis.Rilling@...labs.com>,
Valdis.Kletnieks@...edu
Subject: Re: [PATCH v5 8/9] x86-64: Emulate legacy vsyscalls
On 8 Jun 2011 at 9:16, Ingo Molnar wrote:
> The thing is, as i explained it before, your claim:
>
> > a page fault is never a fast path
>
> is simply ridiculous on its face and crazy talk.
you didn't *explain* a thing. you *claimed* something but offered *no*
proof. where's your measurement showing the single cycle improvement?
how many times do i get to ask you for it before you're willing to provide
it? does it even exist? you see, i'm beginning to think that you simply
just made up that claim, or in plain english, you lied about it. is that
really the case?
> Beyond all the reasons why we don't want to touch the page fault path
> we have a working, implemented, tested IDT based alternative approach
> here that is faster
btw, the pf based approach can be made as fast as well since the necessary
checks can be moved up early. but then we'll face the single cycle brigade ;).
> and more compartmented
what does that even mean here? the pf based approach is less code btw.
> Even if you do not take my word for it, several prominent kernel
> developers told you already that you are wrong,
you must have been reading a different thread or i wasn't cc'd on those
claims. care to quote them back (i only remember Pekka's mail and he has
yet to back up his claim about single/low cycle counts being important
for the bootup case)?
also claiming something and proving something are different things. as
i told you already, ex cathedra statements don't work here.
> and i also showed you the commits that prove you wrong.
unfounded single cycle improvement claims don't a proof make. show your
measurements instead. provided they exist that is.
> Your reply to that was to try to change the topic,
what change are you talking about? you insisted on calling the pf path
fast and your single cycle improvements relevant, you get to prove it.
> laced with frequent insults thrown at me. You called me an 'asshole' yet
> the only thing i did was that i argued with you patiently.
i wish you had argued (i.e., presented well thought out, true and relevant
statements) but instead you only threw out completely baseless accusations,
insinuations, or even outright lies, never mind the several ad hominem
statements that i generously overlooked since unlike you, i can handle
the heat of a discussion ;). IOW, stop pretending to be the hurt angel
here, you're very far from it.
> Is there *any* point where you are willing to admit that you are
> wrong or should i just start filtering out your emails to save me all
> this trouble?
sure, just prove me wrong on a claim and i'll admit it ;).
> When you comment on technical details you generally
> make very good suggestions so i'd hate to stop listening to your
> feedback, but there's a S/N ratio threshold under which i will need
> to do it ...
you sound like i care about who you listen to. if you're a mature person
you might as well act as one. like start answering the questions i posed
you in the last round of emails then we'll see about that S/N ratio.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/