Message-ID: <alpine.LSU.2.00.0903242215590.26397@fbirervta.pbzchgretzou.qr>
Date: Tue, 24 Mar 2009 22:17:17 +0100 (CET)
From: Jan Engelhardt <jengelh@...ozas.de>
To: Eric Dumazet <dada1@...mosbay.com>
cc: David Miller <davem@...emloft.net>, kaber@...sh.net,
netdev@...r.kernel.org, netfilter-devel@...r.kernel.org
Subject: Re: netfilter 07/41: arp_tables: unfold two critical loops in
arp_packet_match()
On Tuesday 2009-03-24 22:06, Eric Dumazet wrote:
>>> +/*
>>> + * Unfortunately, _b and _mask are not aligned to an int (or long int).
>>> + * Some arches don't care; unrolling the loop is a win on them.
>>> + */
>>> +static unsigned long ifname_compare(const char *_a, const char *_b, const char *_mask)
>>> +{
>>> +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>>> + const unsigned long *a = (const unsigned long *)_a;
>>> + const unsigned long *b = (const unsigned long *)_b;
>>
>> I think we can at least give some help for the platforms which
>> require alignment.
>>
>> We can, for example, assume 16-bit alignment and thus loop
>> over u16's
>
>Right. How about this incremental patch ?
>
>Thanks
>
>[PATCH] arp_tables: ifname_compare() can assume 16bit alignment
>
>Arches without efficient unaligned access can still perform a loop
>assuming 16bit alignment in ifname_compare()
Allow me some skepticism, but the code looks pretty much like a
standard memcmp.
> unsigned long ret = 0;
>+ const u16 *a = (const u16 *)_a;
>+ const u16 *b = (const u16 *)_b;
>+ const u16 *mask = (const u16 *)_mask;
> int i;
>
>- for (i = 0; i < IFNAMSIZ; i++)
>- ret |= (_a[i] ^ _b[i]) & _mask[i];
>+ for (i = 0; i < IFNAMSIZ/sizeof(u16); i++)
>+ ret |= (a[i] ^ b[i]) & mask[i];
> #endif
> return ret;
> }
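
To illustrate the point, here is a quick userspace sketch (not the kernel
code; the helper names and the test data are made up) that runs the
byte-wise loop and the u16-chunked variant side by side. It makes the same
assumption as the patch, namely that all three buffers happen to be at
least 16-bit aligned:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define IFNAMSIZ 16	/* same value as in the kernel headers */

	/* byte-wise version, works for any alignment */
	static unsigned long compare_bytewise(const char *a, const char *b,
					      const char *mask)
	{
		unsigned long ret = 0;
		int i;

		for (i = 0; i < IFNAMSIZ; i++)
			ret |= (a[i] ^ b[i]) & mask[i];
		return ret;
	}

	/* u16-chunked version, assumes 2-byte alignment of the buffers */
	static unsigned long compare_u16(const char *_a, const char *_b,
					 const char *_mask)
	{
		const uint16_t *a = (const uint16_t *)_a;
		const uint16_t *b = (const uint16_t *)_b;
		const uint16_t *mask = (const uint16_t *)_mask;
		unsigned long ret = 0;
		int i;

		for (i = 0; i < (int)(IFNAMSIZ / sizeof(uint16_t)); i++)
			ret |= (a[i] ^ b[i]) & mask[i];
		return ret;
	}

	int main(void)
	{
		/* made-up test data: match a device on its "eth" prefix only */
		char dev[IFNAMSIZ]  = "eth0";
		char rule[IFNAMSIZ] = "eth1";
		char mask[IFNAMSIZ];

		memset(mask, 0, sizeof(mask));
		memset(mask, 0xff, 3);	/* compare the first three bytes only */

		printf("bytewise: %lu, u16: %lu\n",
		       compare_bytewise(dev, rule, mask),
		       compare_u16(dev, rule, mask));
		/* both print 0: the bytes covered by the mask are identical */
		return 0;
	}

Both calls return 0 for a masked match, so the two loops are interchangeable
as far as the result goes; the u16 variant merely halves the iteration count
on arches that lack cheap unaligned loads.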