Message-ID: <5909e281-8b4c-2cbc-3d55-c3f743885f1b@gmail.com>
Date: Mon, 9 Jul 2018 04:34:10 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Paolo Abeni <pabeni@...hat.com>,
Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Florian Westphal <fw@...len.de>, NeilBrown <neilb@...e.com>
Subject: Re: [RFC PATCH] ip: re-introduce fragments cache worker
On 07/09/2018 02:43 AM, Paolo Abeni wrote:
> On Fri, 2018-07-06 at 07:20 -0700, Eric Dumazet wrote:
>> I will test/polish it later; I am coming back from vacation and have a backlog.
>>
>> Here are my results (note that I have _not_ changed /proc/sys/net/ipv4/ipfrag_time):
>>
>> lpaa6:~# grep . /proc/sys/net/ipv4/ipfrag_* ; grep FRAG /proc/net/sockstat
>> /proc/sys/net/ipv4/ipfrag_high_thresh:104857600
>> /proc/sys/net/ipv4/ipfrag_low_thresh:78643200
>> /proc/sys/net/ipv4/ipfrag_max_dist:0
>> /proc/sys/net/ipv4/ipfrag_secret_interval:0
>> /proc/sys/net/ipv4/ipfrag_time:30
>> FRAG: inuse 1379 memory 105006776
>>
>> lpaa5:/export/hda3/google/edumazet# ./super_netperf 400 -H 10.246.7.134 -t UDP_STREAM -l 60
>> netperf: send_omni: send_data failed: No route to host
>> netperf: send_omni: send_data failed: No route to host
>> 9063
>>
>>
>> I would say that it looks pretty good to me.
>
> Is that with an unmodified kernel?
>
> I would be happy if I could replicate such results. With the same
> configuration I see:
>
> [netdev9 ~]# grep . /proc/sys/net/ipv4/ipfrag_*; nstat>/dev/null; sleep 1; nstat|grep IpR; grep FRAG /proc/net/sockstat
> /proc/sys/net/ipv4/ipfrag_high_thresh:104857600
> /proc/sys/net/ipv4/ipfrag_low_thresh:3145728
> /proc/sys/net/ipv4/ipfrag_max_dist:64
> /proc/sys/net/ipv4/ipfrag_secret_interval:0
> /proc/sys/net/ipv4/ipfrag_time:30
> IpReasmReqds 827385 0.0
> IpReasmFails 827385 0.0
> FRAG: inuse 1038 memory 105326208
>
> [netdev8 ~]# ./super_netperf.sh 400 -H 192.168.101.2 -t UDP_STREAM -l 60
> 213.6
>
> Note: this setup is intentionally lossy (on the sender side), to stress
> the frag cache:
>
> [netdev8 ~]# tc -s qdisc show dev em1
> qdisc mq 8001: root
> Sent 73950097203 bytes 49639120 pkt (dropped 2052241, overlimits 0 requeues 41)
> backlog 0b 0p requeues 41
> # ...
>
> The drops here are due to ldelay being higher than fq_codel's target (I use
> fq_codel's default values). Can you please share your sender's tc
> configuration and number of tx queues?
You seem to be self-inflicting losses on the sender, and that is terrible for the
(convoluted) stress test you want to run.
I use mq + fq: no losses on the sender.
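
For reference, that is just mq at the root with an fq child per tx queue,
roughly like this (a sketch only; the device name eth0 and the count of 8 tx
queues are placeholders, adjust to your NIC):

  tc qdisc replace dev eth0 root handle 1: mq
  for i in $(seq 1 8); do
      tc qdisc replace dev eth0 parent 1:$i fq
  done

(Setting net.core.default_qdisc=fq should give the same result for the mq
children without the explicit loop.)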
Do not send patches to solve a problem that does not exist in the field.
If some customers are using netperf and UDP_STREAM with frags, just tell them to
use TCP instead :)
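
Something like this, reusing the same harness and the target host from my run
above, would exercise TCP instead:

  ./super_netperf 400 -H 10.246.7.134 -t TCP_STREAM -l 60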