Message-ID: <1460135072.6473.441.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Fri, 08 Apr 2016 10:04:32 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: David Miller <davem@...emloft.net>
Cc: yangyingliang@...wei.com, netdev@...r.kernel.org,
dingtianhong@...wei.com
Subject: Re: [PATCH RFC] net: decrease the length of backlog queue
immediately after it's detached from sk

On Fri, 2016-04-08 at 12:53 -0400, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@...il.com>
> Date: Fri, 08 Apr 2016 07:44:25 -0700
>
> > On Fri, 2016-04-08 at 19:18 +0800, Yang Yingliang wrote:
> >
> >> I expanded tcp_adv_win_scale and tcp_rmem. It had no effect.
> >
> > Try :
> >
> > echo -2 >/proc/sys/net/ipv4/tcp_adv_win_scale
> >
> > And restart your flows.
>
> I'm honestly beginning to suspect a bug in their driver and how they
> handle skb->truesize.
>
> Yang, until you show us the driver you are using and how it handles
> receive packets, we are largely in the dark about a major component
> of this issue and that is entirely unfair to us.

Apparently their skb->truesize and skb->len combinations are correct.

I suspect an issue with rcvbuf autotuning on bidirectional TCP traffic.
We mostly focus on unidirectional flows, but they seem to be running a
mixed workload.
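
One way to see what autotuning has decided (just a suggestion for debugging,
not something from the report) is to read SO_RCVBUF back on the connected
socket: with tcp_moderate_rcvbuf enabled, tcp_rcv_space_adjust() grows
sk_rcvbuf and getsockopt() reports the current value. print_rcvbuf() below
is an illustrative name:

#include <stdio.h>
#include <sys/socket.h>

/* Print the current receive buffer size of a connected TCP socket;
 * if autotuning is active, this reflects what sk_rcvbuf has grown to
 * so far.
 */
static void print_rcvbuf(int fd)
{
	int rcvbuf = 0;
	socklen_t optlen = sizeof(rcvbuf);

	if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen) == 0)
		printf("sk_rcvbuf is currently %d bytes\n", rcvbuf);
}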

Also, the fact that sendmsg() locks the socket for the duration of the call
is problematic: I suspect their issues would mostly disappear by using
smaller chunk sizes (i.e. 64KB per sendmsg() instead of 256KB).
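
For illustration only (not code from the report): splitting the
application's writes into smaller chunks makes each sendmsg() hold the
socket lock for a shorter time, so incoming packets on the backlog get
processed more often. A minimal user-space sketch using the 64KB figure
from above; send_chunked() and CHUNK are made-up names:

#include <sys/socket.h>
#include <errno.h>
#include <stddef.h>

#define CHUNK	(64 * 1024)	/* 64KB per call instead of one 256KB call */

static ssize_t send_chunked(int fd, const char *buf, size_t len)
{
	size_t off = 0;

	while (off < len) {
		size_t n = len - off > CHUNK ? CHUNK : len - off;
		ssize_t ret = send(fd, buf + off, n, 0);

		if (ret < 0) {
			if (errno == EINTR)
				continue;
			return off ? (ssize_t)off : -1;
		}
		off += (size_t)ret;
	}
	return (ssize_t)off;
}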

We could also add resched points in sendmsg() (processing the backlog if it
grows too large), but I fear this would slow down the fast path.
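
To make the idea concrete (a rough sketch only, not a proposed patch):
such a resched point would drop and re-take the socket lock inside the
tcp_sendmsg() copy loop, since release_sock() goes through
__release_sock() and drains sk_backlog. The helper name and trigger
condition below are invented for the example:

#include <net/sock.h>
#include <linux/sched.h>

/* Sketch of a "resched point": if receive packets piled up on the
 * backlog while we were copying user data, let them be processed
 * before continuing, at the cost of extra lock traffic on the
 * fast path.
 */
static inline void maybe_process_backlog(struct sock *sk)
{
	if (!sk->sk_backlog.tail)
		return;

	release_sock(sk);	/* __release_sock() drains sk_backlog */
	cond_resched();
	lock_sock(sk);
}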