Message-ID: <CALx6S34YosSQV6HSWcJh1Z7mJ_15u=7-rcJQFfmqKPtocvkX7g@mail.gmail.com>
Date: Mon, 4 Jan 2016 15:58:22 -0800
From: Tom Herbert <tom@...bertland.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Kernel Team <kernel-team@...com>
Subject: Re: [PATCH net-next] net: Implement fast csum_partial for x86_64
On Mon, Jan 4, 2016 at 3:52 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Mon, 2016-01-04 at 15:34 -0800, Tom Herbert wrote:
>> On Mon, Jan 4, 2016 at 2:36 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> > On Sun, 2016-01-03 at 15:22 -0800, Tom Herbert wrote:
>> > ...
>> >> +402: /* Length 2, align is 1, 3, or 5 */
>> >> + movb (%rdi), %al
>> >> + movb 1(%rdi), %ah
>> >
>> > Looks like a movw (%rdi),%ax
>> >
>> Wouldn't that be an unaligned access?
>
> x86 does not care. (This is why we have
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
>
> I bet it is faster using a single instruction.
>
Okay, I'll re-implement this without worrying about alignment. If
unaligned access really isn't an issue at all (even for eight-byte
loads), then that will be a speedup.
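
For the length-2 case that would be something like this (untested
sketch; I'm assuming the running sum is kept in %rax, as with the
rest of the accumulation code):

402:	/* Length 2, any alignment */
	movw	(%rdi), %ax	/* one unaligned 16-bit load */

and the eight-byte chunks could likewise use a single unaligned load:

	addq	(%rdi), %rax	/* unaligned 64-bit load and add */
	adcq	$0, %rax	/* fold the carry back into the sum */
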
Thanks,
Tom