Message-ID: <20090301020708.GI26292@one.firstfloor.org>
Date: Sun, 1 Mar 2009 03:07:08 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Arjan van de Ven <arjan@...radead.org>
Cc: Andi Kleen <andi@...stfloor.org>, "H. Peter Anvin" <hpa@...or.com>,
David Miller <davem@...emloft.net>,
torvalds@...ux-foundation.org, mingo@...e.hu,
nickpiggin@...oo.com.au, sqazi@...gle.com,
linux-kernel@...r.kernel.org, tglx@...utronix.de
Subject: Re: [patch] x86, mm: pass in 'total' to __copy_from_user_*nocache()
On Sat, Feb 28, 2009 at 05:38:13PM -0800, Arjan van de Ven wrote:
> On Sun, 1 Mar 2009 02:48:22 +0100
> Andi Kleen <andi@...stfloor.org> wrote:
>
> > > the entire point of using movntq and friends was to save half the
> >
> > I thought the point was to not pollute caches? At least that is
> > what I remember being told when I merged the patch.
> >
>
> the reason that movntq and co are faster is because you avoid the
> write-allocate behavior of the caches....
Not faster than rep ; movs, which does similar magic anyway. Of course, it being
magic, it can vary a lot and is somewhat unpredictable.

Also, in my experience movnt is not actually that much faster for small
transfers (which is what the kernel mostly does).
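For context, a minimal userspace sketch (not the kernel's actual
__copy_user_nocache) contrasting the two approaches being discussed: a plain
"rep movsb" copy and an SSE2 streaming-store copy that bypasses the cache and
so avoids write-allocating the destination lines. The function names and the
16-byte alignment assumption are illustrative only.

	/* Illustrative only, not kernel code. */
	#include <emmintrin.h>   /* SSE2: _mm_loadu_si128, _mm_stream_si128 */
	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>

	static void copy_rep_movs(void *dst, const void *src, size_t len)
	{
		/* Microcoded string copy; CPUs with fast-string support
		 * move cache-line-sized chunks internally. */
		asm volatile("rep movsb"
			     : "+D" (dst), "+S" (src), "+c" (len)
			     :
			     : "memory");
	}

	static void copy_nontemporal(void *dst, const void *src, size_t len)
	{
		uint8_t *d = dst;
		const uint8_t *s = src;

		/* Stream 16 bytes at a time with non-temporal stores;
		 * assumes a 16-byte-aligned destination for simplicity.
		 * The stores go around the cache, so the destination
		 * lines are never write-allocated. */
		while (len >= 16) {
			__m128i v = _mm_loadu_si128((const __m128i *)s);
			_mm_stream_si128((__m128i *)d, v);
			d += 16;
			s += 16;
			len -= 16;
		}
		if (len)
			memcpy(d, s, len);      /* copy the tail normally */

		/* Streaming stores are weakly ordered; fence before the
		 * data is relied upon elsewhere. */
		_mm_sfence();
	}

For small copies the streaming version's alignment handling, tail copy, and
sfence overhead tend to eat whatever bandwidth is saved, which matches the
point about small transfers above.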
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.