Message-ID: <1300299787.3128.495.camel@calx>
Date: Wed, 16 Mar 2011 13:23:07 -0500
From: Matt Mackall <mpm@...enic.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: George Spelvin <linux@...izon.com>, penberg@...helsinki.fi,
herbert@...dor.hengli.com.au, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/8] drivers/random: Cache align ip_random better
On Wed, 2011-03-16 at 10:17 -0700, Hugh Dickins wrote:
> On Sun, 13 Mar 2011, George Spelvin wrote:
>
> > Cache aligning the secret[] buffer makes copying from it infinitesimally
> > more efficient.
> > ---
> > drivers/char/random.c | 2 +-
> > 1 files changed, 1 insertions(+), 1 deletions(-)
> >
> > diff --git a/drivers/char/random.c b/drivers/char/random.c
> > index 72a4fcb..4bcc4f2 100644
> > --- a/drivers/char/random.c
> > +++ b/drivers/char/random.c
> > @@ -1417,8 +1417,8 @@ static __u32 twothirdsMD4Transform(__u32 const buf[4], __u32 const in[12])
> > #define HASH_MASK ((1 << HASH_BITS) - 1)
> >
> > static struct keydata {
> > - __u32 count; /* already shifted to the final position */
> > __u32 secret[12];
> > + __u32 count; /* already shifted to the final position */
> > } ____cacheline_aligned ip_keydata[2];
> >
> > static unsigned int ip_cnt;
>
> I'm intrigued: please educate me. On what architectures does cache-
> aligning a 48-byte buffer (previously offset by 4 bytes) speed up
> copying from it, and why? Does the copying involve 8-byte or 16-byte
> instructions that benefit from that alignment, rather than cacheline
> alignment?
I think this alignment exists to minimize the number of cacheline
bounces on SMP, as this can be a pretty hot structure in the network
stack. It could probably benefit from a per-cpu treatment.
--
Mathematics is the supreme nostalgia of our time.