Message-ID: <20230721075013.0d71e21b@kernel.org>
Date: Fri, 21 Jul 2023 07:50:13 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Hannes Reinecke <hare@...e.de>
Cc: Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
 Keith Busch <kbusch@...nel.org>, linux-nvme@...ts.infradead.org,
 Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
 netdev@...r.kernel.org, Boris Pismenny <boris.pismenny@...il.com>
Subject: Re: [PATCH 6/6] net/tls: implement ->read_sock()
On Fri, 21 Jul 2023 15:53:05 +0200 Hannes Reinecke wrote:
> >> +		err = tls_rx_one_record(sk, NULL, &darg);
> >> +		if (err < 0) {
> >> +			tls_err_abort(sk, -EBADMSG);
> >> +			goto read_sock_end;
> >> +		}
> >> +
> >> +		sk_flush_backlog(sk);
> >
> > Hm, could be a bit often but okay.
> >
> When would you suggest doing it?
> (Do I need to do it at all?)
I picked every 128kB for the normal Rx path. I looked through my old
notes and it seems I didn't measure smaller batches :( Only 128kB - 4MB.
Flushing every 128kB showed a 4% throughput hit, but much better TCP
behavior. Not sure how the perf hit would scale below 128kB; maybe
the lower the threshold the lower the overhead, because statistically
only one in every 4 calls will have something to do? (GRO coalesces
to 64kB == 4 TLS records.) Dunno. You'd have to measure.
But it's not a blocker for me, FWIW, you can keep the flushing on every
record.
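
For illustration only, here is a minimal sketch of what batching the flush
could look like in the ->read_sock() loop. This is not from the patch; the
"copied"/"flushed_at" accumulators, the "used" byte count from the read_actor,
and the threshold macro name are all assumptions made for the example:

	/* Hypothetical fragment: flush the socket backlog every 128kB of
	 * decrypted data rather than after every record.
	 */
	#define TLS_RX_FLUSH_THRESHOLD	(128 * 1024)

		size_t copied = 0;	/* total bytes handed to the caller */
		size_t flushed_at = 0;	/* value of "copied" at the last flush */

		...
		err = tls_rx_one_record(sk, NULL, &darg);
		if (err < 0) {
			tls_err_abort(sk, -EBADMSG);
			goto read_sock_end;
		}

		copied += used;		/* bytes consumed by the read_actor */

		/* GRO typically coalesces to 64kB, i.e. four 16kB TLS records,
		 * so a per-record flush mostly finds nothing to do; batching
		 * amortizes the cost at the price of holding the backlog longer.
		 */
		if (copied - flushed_at >= TLS_RX_FLUSH_THRESHOLD) {
			sk_flush_backlog(sk);
			flushed_at = copied;
		}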