Message-ID: <20180510075840.12840865@xeon-e3>
Date: Thu, 10 May 2018 07:58:40 -0700
From: Stephen Hemminger <stephen@...workplumber.org>
To: Naruto Nguyen <narutonguyen2018@...il.com>
Cc: netdev@...r.kernel.org
Subject: Re: Significant capacity drop on loopback interface
On Thu, 10 May 2018 15:35:59 +0700
Naruto Nguyen <narutonguyen2018@...il.com> wrote:
> Hello everyone,
>
> Recently I used netperf to test TCP performance on the loopback
> interface of my two nodes, one running kernel 4.4.103 and the other
> running kernel 3.12.61:
>
> netperf -l 100 -t TCP_RR
> netperf -l 100 -t TCP_RR -- -D
>
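TCP_RR reports round-trip transactions per second over a single
connection, and the test-specific -D option sets TCP_NODELAY. For a
more controlled comparison, something like the sketch below can help;
the CPU pinning and confidence-interval options are suggestions, not
part of the original runs:

    # Pin server and client to separate cores; ask netperf to repeat
    # until the result is within a 99%/5% confidence interval.
    taskset -c 0 netserver
    taskset -c 1 netperf -l 100 -t TCP_RR -i 10,3 -I 99,5 -- -D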
> In both cases I see that the transaction rate on the 4.4.103 node is
> only about half that of the 3.12.61 node:
>
> # netperf -l 100 -t TCP_RR        (4.4.103 node)
> MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0
> AF_INET to localhost () port 0 AF_INET : first burst 0
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
>
> 16384  87380  1        1       100.00   37714.68
> 16384  87380
>
>
> # netperf -l 100 -t TCP_RR        (3.12.61 node)
> MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0
> AF_INET to localhost () port 0 AF_INET : first burst 0
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
>
> 16384  87380  1        1       100.00   64038.41
> 16384  87380
>
>
> When running tcpdump to capture all packets on the loopback
> interface, I see that during a 200 s capture the number of packets on
> 4.4.103 is double the number on 3.12.61. Could this explain the lower
> transaction rate above? Is there any TCP tuning for loopback that
> would improve performance (the drop also happens with UDP), or is
> there a known performance issue with loopback in the 4.4 kernel?
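One way to pin down where the extra packets come from is to count
packets and syscalls per run side by side; a sketch, assuming netserver
is already running locally (the 200 s window matches the capture
described above):

    # Capture loopback traffic for 200 s while the test runs,
    # then count the packets in the capture file.
    timeout 200 tcpdump -i lo -w lo.pcap &
    netperf -l 100 -t TCP_RR
    wait
    tcpdump -r lo.pcap 2>/dev/null | wc -l

    # Syscall counts for a shorter run; loopback TCP_RR is dominated
    # by send/recv pairs, so this shows where the time goes.
    strace -c -f netperf -l 10 -t TCP_RR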
>
> Thanks a lot,
> Brs,
> Naruto
This might just be the increased overhead of KPTI (kernel page-table
isolation), the Meltdown mitigation.
Loopback is very sensitive to syscall overhead.
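If KPTI is the suspect, it is easy to confirm; a sketch (the sysfs
file and the boot parameter only exist on kernels that carry the x86
mitigation backport):

    # Check whether page-table isolation is active.
    dmesg | grep -i 'page table'
    cat /sys/devices/system/cpu/vulnerabilities/meltdown

    # For an A/B comparison, add 'nopti' to the kernel command line,
    # reboot, and rerun the same netperf tests.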