Date:   Thu, 21 Jul 2022 12:53:32 +0200
From:   Paolo Abeni <pabeni@...hat.com>
To:     Jakub Kicinski <kuba@...nel.org>, davem@...emloft.net
Cc:     netdev@...r.kernel.org, edumazet@...gle.com, borisp@...dia.com,
        john.fastabend@...il.com, maximmi@...dia.com, tariqt@...dia.com,
        vfedorenko@...ek.ru, yoshfuji@...ux-ipv6.org, dsahern@...nel.org
Subject: Re: [PATCH net-next v2 5/7] tcp: allow tls to decrypt directly from
 the tcp rcv queue

On Tue, 2022-07-19 at 16:11 -0700, Jakub Kicinski wrote:
> Expose TCP rx queue accessor and cleanup, so that TLS can
> decrypt directly from the TCP queue. The expectation
> is that the caller can access the skb returned from
> tcp_recv_skb() and up to inq bytes worth of data (some
> of which may be in ->next skbs) and then call
> tcp_read_done() when data has been consumed.
> The socket lock must be held continuously across
> those two operations.
> 
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
> CC: yoshfuji@...ux-ipv6.org
> CC: dsahern@...nel.org
> ---
>  include/net/tcp.h |  2 ++
>  net/ipv4/tcp.c    | 44 +++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 45 insertions(+), 1 deletion(-)
> 
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index 8e48dc56837b..90340d66b731 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -673,6 +673,8 @@ void tcp_get_info(struct sock *, struct tcp_info *);
>  int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
>  		  sk_read_actor_t recv_actor);
>  int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor);
> +struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off);
> +void tcp_read_done(struct sock *sk, size_t len);
>  
>  void tcp_initialize_rcv_mss(struct sock *sk);
>  
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 96b6e9c22068..155251a6c5a6 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1625,7 +1625,7 @@ static void tcp_eat_recv_skb(struct sock *sk, struct sk_buff *skb)
>  	__kfree_skb(skb);
>  }
>  
> -static struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off)
> +struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off)
>  {
>  	struct sk_buff *skb;
>  	u32 offset;
> @@ -1648,6 +1648,7 @@ static struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off)
>  	}
>  	return NULL;
>  }
> +EXPORT_SYMBOL(tcp_recv_skb);
>  
>  /*
>   * This routine provides an alternative to tcp_recvmsg() for routines
> @@ -1778,6 +1779,47 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
>  }
>  EXPORT_SYMBOL(tcp_read_skb);
>  
> +void tcp_read_done(struct sock *sk, size_t len)
> +{
> +	struct tcp_sock *tp = tcp_sk(sk);
> +	u32 seq = tp->copied_seq;
> +	struct sk_buff *skb;
> +	size_t left;
> +	u32 offset;
> +
> +	if (sk->sk_state == TCP_LISTEN)
> +		return;
> +
> +	left = len;
> +	while (left && (skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
> +		int used;
> +
> +		used = min_t(size_t, skb->len - offset, left);
> +		seq += used;
> +		left -= used;
> +
> +		if (skb->len > offset + used)
> +			break;
> +
> +		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) {
> +			tcp_eat_recv_skb(sk, skb);
> +			++seq;
> +			break;
> +		}
> +		tcp_eat_recv_skb(sk, skb);
> +	}
> +	WRITE_ONCE(tp->copied_seq, seq);
> +
> +	tcp_rcv_space_adjust(sk);
> +
> +	/* Clean up data we have read: This will do ACK frames. */
> +	if (left != len) {
> +		tcp_recv_skb(sk, seq, &offset);

I *think* the tcp_recv_skb() call is not needed here: the fully consumed
skbs have already been removed by the loop above. AFAICS tcp_read_sock()
needs it only because its recv_actor can release and re-acquire the
socket lock right after the previous tcp_recv_skb() call, so the skb
lookup must be redone there.
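
IOW the tail of tcp_read_done() could probably be reduced to something
like this (rough, untested sketch, assuming the cleanup otherwise
mirrors tcp_read_sock() and tcp_cleanup_rbuf() is the only part that is
really needed):

	/* Clean up data we have read: This will do ACK frames. */
	if (left != len) {
		/* no extra tcp_recv_skb() lookup here: the fully
		 * consumed skbs were already unlinked by
		 * tcp_eat_recv_skb() in the loop above, and the
		 * socket lock is held across the whole operation
		 */
		tcp_cleanup_rbuf(sk, len - left);
	}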

I guess the reason for not simply calling tcp_read_sock() with a dummy
recv_actor is the retpoline overhead of the indirect call?
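
Something along these lines on the TLS side would be the alternative
(pure sketch, function names made up, not tested):

/* hypothetical helpers: consume @len bytes via tcp_read_sock() with a
 * no-op actor, instead of exporting tcp_recv_skb()/tcp_read_done();
 * the actor is invoked via an indirect call per skb, which is where
 * the retpoline cost would come from
 */
static int tls_dummy_read_actor(read_descriptor_t *desc, struct sk_buff *skb,
				unsigned int offset, size_t len)
{
	size_t used = min_t(size_t, desc->count, len);

	desc->count -= used;
	return used;
}

static void tls_rx_consume(struct sock *sk, size_t len)
{
	read_descriptor_t desc = { .count = len, };

	tcp_read_sock(sk, &desc, tls_dummy_read_actor);
}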

Paolo


