Message-ID: <CANn89iJY8UypOGqSOJo531ny4isPSiTg2xW-rO_xNmnYVVovQw@mail.gmail.com>
Date: Thu, 7 Sep 2023 19:16:01 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "David S . Miller" <davem@...emloft.net>, Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
eric.dumazet@...il.com, Soheil Hassas Yeganeh <soheil@...gle.com>,
Neal Cardwell <ncardwell@...gle.com>, Yuchung Cheng <ycheng@...gle.com>
Subject: Re: [RFC net-next 4/4] tcp: defer regular ACK while processing socket backlog
On Thu, Sep 7, 2023 at 7:09 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Wed, 6 Sep 2023 20:10:46 +0000 Eric Dumazet wrote:
> > This idea came after a particular workload requested
> > the quickack attribute set on routes, and a performance
> > drop was noticed for large bulk transfers.
>
> Is it okay if I ask why quickack?
> Is it related to delay-based CC?
Note the patch also helps the 'regular' mode, without "quickack 1".

This is not CC related in any way; it is about a TCP tx zerocopy
workload that sends one chunk at a time and waits for the TCP tx
zerocopy completion before proceeding to the next chunk, because the
'next chunk' re-uses the same memory.
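
To make the pattern concrete, here is a minimal sketch of such a sender
loop (assuming SO_ZEROCOPY is already enabled on the socket; the
send/poll flow follows Documentation/networking/msg_zerocopy.rst, error
handling omitted, and the function/parameter names are placeholders):

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>

/* Send one chunk with MSG_ZEROCOPY, then block until the zerocopy
 * completion shows up on the socket error queue. The completion only
 * fires once the kernel drops its references on the pages, which for
 * TCP means the data has been ACKed: a delayed ACK directly stalls
 * this loop before the buffer can be reused for the next chunk.
 */
static void send_chunk_and_wait(int fd, const void *chunk, size_t len)
{
	struct pollfd pfd = { .fd = fd, .events = 0 };
	char control[128];
	struct msghdr msg = { .msg_control = control,
			      .msg_controllen = sizeof(control) };

	if (send(fd, chunk, len, MSG_ZEROCOPY) < 0)
		return;

	/* Error-queue reads never block, so wait for POLLERR
	 * (reported by poll() regardless of .events).
	 */
	while (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0 && errno == EAGAIN)
		poll(&pfd, 1, -1);

	/* The chunk memory can now be safely overwritten. */
}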
The receiver application does not send back a reply (otherwise the
delayed ACK would be piggybacked on it), and it does not know in
advance what size of message to expect (so SO_RCVLOWAT or anything
similar could not be attempted).
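
For reference, the quickack workaround mentioned in the patch
description, sketched here with the per-socket knob rather than the
per-route "quickack 1" attribute the report used (illustrative only;
note that TCP_QUICKACK is not permanent, the stack can switch back to
delayed ACKs, so applications have to keep re-arming it):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Receiver-side workaround: switch the socket to quickack mode so the
 * sender's zerocopy completion is not held back by a delayed ACK.
 * This only toggles the current mode and can be reverted by the stack.
 */
static int force_quickack(int fd)
{
	int one = 1;

	return setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK,
			  &one, sizeof(one));
}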
For this kind of workload, it is crucial that the last ACK is not
delayed at all.