Message-ID: <1381847168.2045.45.camel@edumazet-glaptop.roam.corp.google.com>
Date: Tue, 15 Oct 2013 07:26:08 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Sébastien Dugué <sebastien.dugue@...l.net>
Cc: Andi Kleen <andi@...stfloor.org>,
Neil Horman <nhorman@...driver.com>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Subject: Re: [PATCH] x86: Run checksumming in parallel across multiple ALUs
On Tue, 2013-10-15 at 16:15 +0200, Sébastien Dugué wrote:
> Hi Eric,
>
> On Tue, 15 Oct 2013 07:06:25 -0700
> Eric Dumazet <eric.dumazet@...il.com> wrote:
> > But the csum cost applies to both the sender and the receiver?
>
> No, it was only on the receiver side that I noticed it.
>
Yes, as Andi said, on the sender we do the csum while copying the
data. (I disabled hardware-assisted tx checksumming using
'ethtool -K eth0 tx off'.)
    17.21%  netperf  [kernel.kallsyms]  [k] csum_partial_copy_generic
            |
            --- csum_partial_copy_generic
               |
               |--97.39%-- __libc_send
               |
                --2.61%-- tcp_sendmsg
                          inet_sendmsg
                          sock_sendmsg
                          _sys_sendto
                          sys_sendto
                          system_call_fastpath
                          __libc_send
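
(For anyone skimming the thread: the copy-and-checksum trick above is
easy to sketch in portable C. This is only an illustration, with a
made-up helper name, assuming an even length and 2-byte aligned
buffers; the real csum_partial_copy_generic is hand-written assembly
that also handles odd lengths, misalignment and carry propagation.)

    #include <stdint.h>
    #include <stddef.h>

    /*
     * Hypothetical illustration only: copy 16-bit words while folding
     * them into a one's-complement sum. Assumes len is even and both
     * buffers are 2-byte aligned.
     */
    static uint16_t copy_and_csum(void *dst, const void *src, size_t len)
    {
            const uint16_t *s = src;
            uint16_t *d = dst;
            uint32_t sum = 0;

            for (size_t i = 0; i < len / 2; i++) {
                    d[i] = s[i];    /* the copy */
                    sum += s[i];    /* the checksum, almost for free */
            }
            while (sum >> 16)       /* fold carries back into 16 bits */
                    sum = (sum & 0xffff) + (sum >> 16);
            return (uint16_t)~sum;
    }

Since the data is already flowing through the CPU for the copy, adding
it into an accumulator on the way costs little extra, which is why the
kernel fuses the two operations.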
> Sorry, but this is 3-year-old stuff and I no longer have the
> setup to reproduce it.
And the receiver does the same on its side (after 'ethtool -K eth0 rx off'):
    10.55%  netserver  [kernel.kallsyms]  [k] csum_partial_copy_generic
            |
            --- csum_partial_copy_generic
               |
               |--98.24%-- __libc_recv
               |
                --1.76%-- skb_copy_and_csum_datagram
                          skb_copy_and_csum_datagram_iovec
                          tcp_rcv_established
                          tcp_v4_do_rcv
                          |
                          |--73.05%-- tcp_prequeue_process
                          |           tcp_recvmsg
                          |           inet_recvmsg
                          |           sock_recvmsg
                          |           SYSC_recvfrom
                          |           SyS_recvfrom
                          |           system_call_fastpath
                          |           __libc_recv
                          |
So I suspect something is wrong with IPoIB.
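
For reference, the technique the patch in the Subject is about
(spreading the sum across multiple ALUs) boils down to keeping
independent accumulators so the additions form separate dependency
chains. A rough, hypothetical C sketch, with my own names and assuming
the length is a multiple of 8 bytes and 4-byte alignment:

    #include <stdint.h>
    #include <stddef.h>

    /*
     * Hypothetical sketch: two independent accumulators let the CPU
     * issue the additions in parallel instead of serializing them on
     * a single dependency chain.
     */
    static uint16_t csum_two_accum(const void *buf, size_t len)
    {
            const uint32_t *p = buf;
            uint64_t a = 0, b = 0;

            for (size_t i = 0; i < len / 4; i += 2) {
                    a += p[i];      /* chain 1 */
                    b += p[i + 1];  /* chain 2 */
            }
            uint64_t sum = a + b;
            while (sum >> 16)       /* fold 64 -> 16 bits */
                    sum = (sum & 0xffff) + (sum >> 16);
            return (uint16_t)~sum;
    }

Whether the extra chain actually wins depends on the microarchitecture
and on how the loads get scheduled, which is what the benchmarks
earlier in this thread were trying to settle.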