Date:   Fri, 29 May 2020 14:50:43 -0700
From:   Jakub Kicinski <>
To:     Saeed Mahameed <>
Cc:     "" <>,
        "" <>,
        Tariq Toukan <>
Subject: Re: [net-next 10/11] net/mlx5e: kTLS, Add kTLS RX resync support

On Fri, 29 May 2020 20:44:29 +0000 Saeed Mahameed wrote:
> > I thought you said that resync requests are guaranteed to never fail?
> I didn't say that :), maybe Tariq did say this before my review.

Boris ;)

> but basically with the current mlx5 arch, it is impossible to guarantee
> this unless we open 1 service queue per kTLS offload, and that is going
> to be overkill!

IIUC every out-of-order (ooo) packet causes a resync request in your
implementation - is that true?

It'd be great to have more information about the operation of the
device in the commit message.

> This is a rare corner case anyway, where more than 1k TCP connections
> sharing the same RX ring all request resync at the exact same moment.

IDK about that. Certain applications are architected for max capacity,
not efficiency under steady load. So it matters a lot how the system
behaves under stress. What if this is the chain of events:

overload -> drops -> TLS streams go out of sync -> all try to resync

We don't want to add extra load on every record if HW offload is
enabled. That's why the next-record hint backs off, checks socket
state, etc.

BTW I also don't understand why mlx5e_ktls_rx_resync() has a
tls_offload_rx_force_resync_request(sk) call at the end. If the update
from the NIC comes with a later seq than the current one, request the
sync for _that_ seq. I don't understand the need to force a callback
on every record here.

Also, if the sync failed because the queue was full, I don't see how
forcing another sync attempt for the next record is going to help?
