Message-ID: <CAK6E8=ciimMB+Xa2Fp4Rb7NEGsWt_XVVc-UM_68Ux30MwgdZug@mail.gmail.com>
Date: Mon, 29 Aug 2016 09:53:42 -0700
From: Yuchung Cheng <ycheng@...gle.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Neal Cardwell <ncardwell@...gle.com>
Subject: Re: [PATCH net-next] tcp: add tcp_add_backlog()
On Sat, Aug 27, 2016 at 9:25 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>
> On Sat, 2016-08-27 at 09:13 -0700, Yuchung Cheng wrote:
> > On Sat, Aug 27, 2016 at 7:37 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> > >
>
> > > + /* Only socket owner can try to collapse/prune rx queues
> > > + * to reduce memory overhead, so add a little headroom here.
> > > + * Few socket backlogs are likely to be non-empty concurrently.
> > > + */
> > > + limit += 64*1024;
> > Just a thought: only add the headroom if the ofo queue exists (e.g.,
> > signs of losses or recovery).
>
> Testing the ofo queue would add a cache line miss, and likely slow down
> the other cpu processing packets for this flow.
>
> Also, even if the ofo queue does not exist, the sk_rcvbuf budget can be
> consumed by the regular receive queue.
>
> We still need to be able to process incoming ACKs, even if both send and
> receive queues are 'full'.
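
(For anyone following along without the full patch: a rough sketch of the
check being discussed, assuming the limit is built from the receive and send
budgets as Eric describes above. sk_add_backlog(), bh_unlock_sock() and
LINUX_MIB_TCPBACKLOGDROP are existing kernel symbols; the exact body of
tcp_add_backlog() in the patch may differ.)

#include <net/tcp.h>

/* Sketch only: drop-or-queue decision for a packet arriving while
 * another context owns the socket lock.
 */
static bool backlog_sketch(struct sock *sk, struct sk_buff *skb)
{
	/* Budget covers both queues, so ACKs can still get in even when
	 * the send and receive queues are 'full'.
	 */
	u32 limit = sk->sk_rcvbuf + sk->sk_sndbuf;

	/* Headroom: only the socket owner can collapse/prune rx queues. */
	limit += 64 * 1024;

	if (unlikely(sk_add_backlog(sk, skb, limit))) {
		bh_unlock_sock(sk);
		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
		return true;	/* caller drops the skb */
	}
	return false;
}
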
>
> >
> > btw is the added headroom subject to the memory pressure check?
>
> Remember that the backlog check here is mostly to avoid the kind of DOS
> attacks that we had in the past.
>
> While we should definitely prevent DOS attacks, we should also not drop
> legitimate traffic.
>
> Here, the number of backlogged sockets is limited by the number of cpus
> in the host (if CONFIG_PREEMPT is disabled), or by the number of threads
> blocked in a sendmsg()/recvmsg() (if CONFIG_PREEMPT is enabled).
>
> So we do not need to be ultra precise, just have a safeguard.
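
(For scale: assuming, say, a 64-cpu host -- a purely illustrative number --
the worst case is roughly 64 backlogged sockets each carrying the extra
64KB headroom, i.e. about 4MB across the whole host, so precision is
indeed not critical here.)
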
>
> The pressure check will be done when the skbs are added to the
> receive/ofo queue, which will happen in the very near future.
Good to know. Thanks.
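
(For context, the check Eric refers to is roughly of this shape: when the
socket owner later moves skbs from the backlog into the receive/ofo queue,
the memory charge goes through a prune-on-pressure helper. The sketch below
loosely mirrors tcp_try_rmem_schedule() in net/ipv4/tcp_input.c; names and
ordering are an approximation, not the exact code.)

#include <net/tcp.h>

/* Sketch: charge 'size' bytes to the socket before queueing an skb,
 * pruning the receive/ofo queues if over budget or under memory pressure.
 * tcp_prune_queue()/tcp_prune_ofo_queue() are the static helpers in
 * net/ipv4/tcp_input.c.
 */
static int rmem_schedule_sketch(struct sock *sk, struct sk_buff *skb,
				unsigned int size)
{
	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
	    !sk_rmem_schedule(sk, skb, size)) {
		/* Over sk_rcvbuf or under tcp memory pressure:
		 * try to collapse/prune the receive queue first ...
		 */
		if (tcp_prune_queue(sk) < 0)
			return -1;

		/* ... then drop from the ofo queue until the skb fits. */
		while (!sk_rmem_schedule(sk, skb, size)) {
			if (!tcp_prune_ofo_queue(sk))
				return -1;
		}
	}
	return 0;	/* ok to queue the skb */
}
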
>
>
> Thanks !
>
>
>