Message-ID: <1290693946.2858.323.camel@edumazet-laptop>
Date: Thu, 25 Nov 2010 15:05:46 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Changli Gao <xiaosuo@...il.com>
Cc: Jozsef Kadlecsik <kadlec@...ckhole.kfki.hu>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
netfilter-devel@...r.kernel.org,
Linus Torvalds <torvalds@...l.org>,
Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [PATCH 2/2] The new jhash implementation
On Thursday, 25 November 2010 at 21:55 +0800, Changli Gao wrote:
> > I suggest :
> >
> > #include <linux/unaligned/packed_struct.h>
> > ...
> > a += __get_unaligned_cpu32(k);
> > b += __get_unaligned_cpu32(k+4);
> > c += __get_unaligned_cpu32(k+8);
> >
> > Fits nicely in registers.
> >
>
> I think you mean get_unaligned_le32().
>
No, I meant __get_unaligned_cpu32().
We do the same thing in jhash2():
a += k[0];
b += k[1];
c += k[2];
We don't care about the byte order of the 32-bit quantity we are adding
to a, b or c, as long as it's consistent for the current machine ;)
get_unaligned_le32() would be slow on big-endian arches.
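
To illustrate the point, here is a minimal userspace sketch; the
get_unaligned_cpu32() helper below is a stand-in for the kernel's
__get_unaligned_cpu32() (the memcpy() form is an assumption for
portability outside the kernel, where compilers lower it to a plain
load on arches that allow unaligned access):

static inline uint32_t get_unaligned_cpu32(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));	/* native (CPU) byte order, any alignment */
	return v;
}

/*
 * jhash-style mixing over one 12-byte block: the three words go straight
 * into a, b, c with no byte swap, so the hash value differs between
 * little- and big-endian machines but stays self-consistent on each.
 * Using get_unaligned_le32() instead would force a swab on big-endian.
 */
static void mix_block(const uint8_t *k, uint32_t *a, uint32_t *b, uint32_t *c)
{
	*a += get_unaligned_cpu32(k);
	*b += get_unaligned_cpu32(k + 4);
	*c += get_unaligned_cpu32(k + 8);
}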