Message-ID: <8d866cf2-df52-5085-f0d4-864d15b8667d@suse.de>
Date: Wed, 19 Jul 2023 09:27:43 +0200
From: Hannes Reinecke <hare@...e.de>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Sagi Grimberg <sagi@...mberg.me>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"open list:NETWORKING [GENERAL]" <netdev@...r.kernel.org>
Subject: Re: nvme-tls and TCP window full
On 7/18/23 20:59, Jakub Kicinski wrote:
> On Thu, 13 Jul 2023 12:16:13 +0200 Hannes Reinecke wrote:
>>>> And my reading seems that the current in-kernel TLS implementation
>>>> assumes TCP as the underlying transport anyway, so no harm done.
>>>> Jakub?
>>>
>>> While it is correct that the assumption is TCP-only, I think the
>>> right thing to do would be to store the original read_sock and call
>>> that...
>>
>> Ah, sure. Or that.
>
> Yup, sorry for the late reply. read_sock could also be replaced by BPF
> or some other thing, even if it's always TCP "at the bottom".
Hmm. So what do you suggest?
Remember, the current patch does this:
@@ -377,7 +376,7 @@ static int tls_strp_read_copyin(struct tls_strparser *strp)
desc.count = 1; /* give more than one skb per call */
/* sk should be locked here, so okay to do read_sock */
- sock->ops->read_sock(strp->sk, &desc, tls_strp_copyin);
+ tcp_read_sock(strp->sk, &desc, tls_strp_copyin);
return desc.error;
}
precisely because ->read_sock() gets redirected when TLS engages.
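To spell out why that redirection bites: the strparser needs the
TCP-level (ciphertext) byte stream, but once TLS has installed its own
ops, sock->ops->read_sock points at the TLS-level reader. Roughly (a
sketch, assuming the new TLS reader from this series is named
tls_sw_read_sock):

   tls_strp_read_copyin()
     -> sock->ops->read_sock()   /* now tls_sw_read_sock, not tcp_read_sock */
        -> needs a complete record from the strparser
           -> which needs to read more ciphertext ...

so the strparser has to reach the TCP reader by some other means.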
And also remember: TLS does _not_ use the usual redirection of
intercepting the callbacks in 'struct sock' (sk_data_ready and friends),
but rather replaces the entire ->ops callback table in struct socket.
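Schematically it looks like this (a hand-rolled sketch, not the actual
net/tls code; the real implementation clones the ops per TLS
configuration, and tls_sw_read_sock is the reader added by this series):

   static struct proto_ops tls_sw_proto_ops;

   static void tls_replace_ops(struct sock *sk)
   {
           /* clone the original ops, typically inet_stream_ops ... */
           tls_sw_proto_ops = *sk->sk_socket->ops;
           /* ... and override the entry points TLS has to own */
           tls_sw_proto_ops.splice_read = tls_sw_splice_read;
           tls_sw_proto_ops.read_sock   = tls_sw_read_sock;

           /* note: struct socket, not struct sock */
           sk->sk_socket->ops = &tls_sw_proto_ops;
   }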
So I'm slightly at a loss on how to implement a new callback without
having to redo the entire TLS handover.
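The closest thing to Sagi's suggestion I can come up with is to stash
the original callback before the ops are swapped and have the strparser
call that (saved_read_sock is a made-up field here, it doesn't exist
today):

   /* at TLS setup time, before sk->sk_socket->ops is replaced: */
   ctx->saved_read_sock = sk->sk_socket->ops->read_sock;

   /* and in tls_strp_read_copyin(): */
   ctx->saved_read_sock(strp->sk, &desc, tls_strp_copyin);

but that means plumbing the saved pointer from the tls_context into the
strparser, i.e. touching the handover path after all.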
Hence I vastly prefer the simple patch of calling tcp_read_sock()
directly.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@...e.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman