Date:   Sat, 29 Jan 2022 12:37:17 +0800
From:   Tony Lu <tonylu@...ux.alibaba.com>
To:     "D. Wythe" <alibuda@...ux.alibaba.com>
Cc:     kgraul@...ux.ibm.com, kuba@...nel.org, davem@...emloft.net,
        netdev@...r.kernel.org, linux-s390@...r.kernel.org,
        linux-rdma@...r.kernel.org, matthieu.baerts@...sares.net
Subject: Re: [PATCH v2 net-next 2/3] net/smc: Limits backlog connections

On Fri, Jan 28, 2022 at 10:44:37PM +0800, D. Wythe wrote:
> From: "D. Wythe" <alibuda@...ux.alibaba.com>
> 
> The current implementation does not handle backlog semantics; one
> potential risk is that the server will be flooded by an unbounded
> number of connections, even if the client is SMC-incapable.
> 
> This patch puts a limit on backlog connections. Following the TCP
> implementation, we divide SMC connections into two categories:
> 
> 1. Half SMC connections, which include all connections that are
> TCP-established but not yet SMC-established.
> 
> 2. Full SMC connections, which include all SMC-established connections.
> 
> For half SMC connections, since they all start with TCP establishment,
> we can achieve our goal by putting a limit before TCP establishment.
> Following the TCP implementation, this limit is based not only on the
> half SMC connections but also on the full connections, so it also acts
> as a constraint on full SMC connections.
> 
> For full SMC connections, although we know exactly where they start,
> it's quite hard to put a limit before that point. The easiest way is to
> block and wait before receiving the SMC confirm CLC message, but that
> path is protected by smc_server_lgr_pending, a global lock, which would
> apply this limit to the entire host instead of a single listen socket.
> Another way is to drop the full connections, but considering the cost
> of SMC connections, we prefer to keep them.
> 
> Even so, a limit on full SMC connections still exists; see the
> comments about half SMC connections above.
> 
> After this patch, the backlog connection limits look like this:
> 
> For SMC:
> 
> 1. A client with SMC capability can make at most 2 * backlog full SMC
>    connections, or 1 * backlog half SMC connections plus 1 * backlog
>    full SMC connections.
> 
> 2. A client without SMC capability can only make 1 * backlog half TCP
>    connections plus 1 * backlog full TCP connections.
> 
> Signed-off-by: D. Wythe <alibuda@...ux.alibaba.com>
> ---
> changelog:
> v2: fix compile warning
> ---
>  net/smc/af_smc.c | 43 +++++++++++++++++++++++++++++++++++++++++++
>  net/smc/smc.h    |  4 ++++
>  2 files changed, 47 insertions(+)
> 
> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
> index 1b40304..66a0e64 100644
> --- a/net/smc/af_smc.c
> +++ b/net/smc/af_smc.c
> @@ -73,6 +73,34 @@ static void smc_set_keepalive(struct sock *sk, int val)
>  	smc->clcsock->sk->sk_prot->keepalive(smc->clcsock->sk, val);
>  }
>  
> +static struct sock *smc_tcp_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
> +					  struct request_sock *req,
> +					  struct dst_entry *dst,
> +					  struct request_sock *req_unhash,
> +					  bool *own_req)
> +{
> +	struct smc_sock *smc;
> +
> +	smc = (struct smc_sock *)((uintptr_t)sk->sk_user_data & ~SK_USER_DATA_NOCOPY);
> +
> +	if (READ_ONCE(sk->sk_ack_backlog) + atomic_read(&smc->smc_pendings) >
> +				sk->sk_max_ack_backlog)
> +		goto drop;
> +
> +	if (sk_acceptq_is_full(&smc->sk)) {
> +		NET_INC_STATS(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
> +		goto drop;
> +	}
> +
> +	/* passthrough to origin syn recv sock fct */
> +	return smc->ori_af_ops->syn_recv_sock(sk, skb, req, dst, req_unhash, own_req);

I am wondering whether this would introduce more overhead compared with
the original implementation?
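
For reference, a rough sketch of the comparison I have in mind (not part
of this patch): the original path is just the direct indirect call into
the protocol handler, e.g. for IPv4:

	/* original: icsk_af_ops->syn_recv_sock points straight at
	 * tcp_v4_syn_recv_sock(), no extra work per incoming SYN
	 */
	child = inet_csk(sk)->icsk_af_ops->syn_recv_sock(sk, skb, req, dst,
							 req_unhash, own_req);

whereas the wrapper above adds, on every incoming SYN, the sk_user_data
load and mask, a READ_ONCE() of sk_ack_backlog plus an atomic_read() of
smc_pendings, and one more accept-queue check before reaching the same
indirect call.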

> +
> +drop:
> +	dst_release(dst);
> +	tcp_listendrop(sk);
> +	return NULL;
> +}
> +
>  static struct smc_hashinfo smc_v4_hashinfo = {
>  	.lock = __RW_LOCK_UNLOCKED(smc_v4_hashinfo.lock),
>  };
> @@ -1491,6 +1519,9 @@ static void smc_listen_out(struct smc_sock *new_smc)
>  	struct smc_sock *lsmc = new_smc->listen_smc;
>  	struct sock *newsmcsk = &new_smc->sk;
>  
> +	if (tcp_sk(new_smc->clcsock->sk)->syn_smc)
> +		atomic_dec(&lsmc->smc_pendings);
> +
>  	if (lsmc->sk.sk_state == SMC_LISTEN) {
>  		lock_sock_nested(&lsmc->sk, SINGLE_DEPTH_NESTING);
>  		smc_accept_enqueue(&lsmc->sk, newsmcsk);
> @@ -2096,6 +2127,9 @@ static void smc_tcp_listen_work(struct work_struct *work)
>  		if (!new_smc)
>  			continue;
>  
> +		if (tcp_sk(new_smc->clcsock->sk)->syn_smc)
> +			atomic_inc(&lsmc->smc_pendings);
> +
>  		new_smc->listen_smc = lsmc;
>  		new_smc->use_fallback = lsmc->use_fallback;
>  		new_smc->fallback_rsn = lsmc->fallback_rsn;
> @@ -2163,6 +2197,15 @@ static int smc_listen(struct socket *sock, int backlog)
>  	smc->clcsock->sk->sk_data_ready = smc_clcsock_data_ready;
>  	smc->clcsock->sk->sk_user_data =
>  		(void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY);
> +
> +	/* save origin ops */
> +	smc->ori_af_ops = inet_csk(smc->clcsock->sk)->icsk_af_ops;
> +
> +	smc->af_ops = *smc->ori_af_ops;
> +	smc->af_ops.syn_recv_sock = smc_tcp_syn_recv_sock;
> +
> +	inet_csk(smc->clcsock->sk)->icsk_af_ops = &smc->af_ops;

Consider saving only the syn_recv_sock field? There seems to be no need
to save the whole original ops.
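
Something like this is what I had in mind, just as a sketch (the field
name ori_syn_recv_sock is made up here, not from your patch):

	/* in struct smc_sock: remember only the original callback instead
	 * of a pointer to the whole ops table
	 */
	struct sock *	(*ori_syn_recv_sock)(const struct sock *sk,
					     struct sk_buff *skb,
					     struct request_sock *req,
					     struct dst_entry *dst,
					     struct request_sock *req_unhash,
					     bool *own_req);

	/* in smc_listen(): copy the ops once, keep only syn_recv_sock */
	smc->af_ops = *inet_csk(smc->clcsock->sk)->icsk_af_ops;
	smc->ori_syn_recv_sock = smc->af_ops.syn_recv_sock;
	smc->af_ops.syn_recv_sock = smc_tcp_syn_recv_sock;
	inet_csk(smc->clcsock->sk)->icsk_af_ops = &smc->af_ops;

	/* in smc_tcp_syn_recv_sock(): pass through to the saved callback */
	return smc->ori_syn_recv_sock(sk, skb, req, dst, req_unhash, own_req);

That way a single function pointer is kept instead of the ori_af_ops
pointer.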

Thank you,
Tony Lu
