Message-ID: <71705484-46fc-469f-9357-07a076ee0e73@redhat.com>
Date: Tue, 20 Jan 2026 15:13:22 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Xin Long <lucien.xin@...il.com>, network dev <netdev@...r.kernel.org>,
quic@...ts.linux.dev
Cc: davem@...emloft.net, kuba@...nel.org, Eric Dumazet <edumazet@...gle.com>,
Simon Horman <horms@...nel.org>, Stefan Metzmacher <metze@...ba.org>,
Moritz Buhl <mbuhl@...nbsd.org>, Tyler Fanelli <tfanelli@...hat.com>,
Pengtao He <hepengtao@...omi.com>, Thomas Dreibholz <dreibh@...ula.no>,
linux-cifs@...r.kernel.org, Steve French <smfrench@...il.com>,
Namjae Jeon <linkinjeon@...nel.org>, Paulo Alcantara <pc@...guebit.com>,
Tom Talpey <tom@...pey.com>, kernel-tls-handshake@...ts.linux.dev,
Chuck Lever <chuck.lever@...cle.com>, Jeff Layton <jlayton@...nel.org>,
Steve Dickson <steved@...hat.com>, Hannes Reinecke <hare@...e.de>,
Alexander Aring <aahringo@...hat.com>, David Howells <dhowells@...hat.com>,
Matthieu Baerts <matttbe@...nel.org>, John Ericson <mail@...nericson.me>,
Cong Wang <xiyou.wangcong@...il.com>, "D . Wythe"
<alibuda@...ux.alibaba.com>, Jason Baron <jbaron@...mai.com>,
illiliti <illiliti@...tonmail.com>, Sabrina Dubroca <sd@...asysnail.net>,
Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
Daniel Stenberg <daniel@...x.se>,
Andy Gospodarek <andrew.gospodarek@...adcom.com>
Subject: Re: [PATCH net-next v7 08/16] quic: add path management

On 1/15/26 4:11 PM, Xin Long wrote:
> @@ -0,0 +1,524 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/* QUIC kernel implementation
> + * (C) Copyright Red Hat Corp. 2023
> + *
> + * This file is part of the QUIC kernel implementation
> + *
> + * Initialization/cleanup for QUIC protocol support.
> + *
> + * Written or modified by:
> + * Xin Long <lucien.xin@...il.com>
> + */
> +
> +#include <net/udp_tunnel.h>
> +#include <linux/quic.h>
> +
> +#include "common.h"
> +#include "family.h"
> +#include "path.h"
> +
> +static int (*quic_path_rcv)(struct sock *sk, struct sk_buff *skb, u8 err);
It's unclear why an indirect call is needed here. At the very least
some explanation is needed in the commit message; possibly you could
call a static function directly instead.
> +
> +static int quic_udp_rcv(struct sock *sk, struct sk_buff *skb)
> +{
> +	memset(skb->cb, 0, sizeof(skb->cb));
> +	QUIC_SKB_CB(skb)->seqno = -1;
> +	QUIC_SKB_CB(skb)->time = quic_ktime_get_us();
> +
> +	skb_pull(skb, sizeof(struct udphdr));
> +	skb_dst_force(skb);
> +	quic_path_rcv(sk, skb, 0);
> +	return 0;
Why not:
return quic_path_rcv(sk, skb, 0);
?
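
Combining the above with a direct call, the callback could look like
this untested sketch (quic_packet_direct_rcv() is still a made-up name
for the real handler):

	static int quic_udp_rcv(struct sock *sk, struct sk_buff *skb)
	{
		memset(skb->cb, 0, sizeof(skb->cb));
		QUIC_SKB_CB(skb)->seqno = -1;
		QUIC_SKB_CB(skb)->time = quic_ktime_get_us();

		skb_pull(skb, sizeof(struct udphdr));
		skb_dst_force(skb);

		/* propagate the handler's return value instead of
		 * unconditionally returning 0
		 */
		return quic_packet_direct_rcv(sk, skb, 0);
	}
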
> +static struct quic_udp_sock *quic_udp_sock_create(struct sock *sk, union quic_addr *a)
> +{
> +	struct udp_tunnel_sock_cfg tuncfg = {};
> +	struct udp_port_cfg udp_conf = {};
> +	struct net *net = sock_net(sk);
> +	struct quic_uhash_head *head;
> +	struct quic_udp_sock *us;
> +	struct socket *sock;
> +
> +	us = kzalloc(sizeof(*us), GFP_KERNEL);
> +	if (!us)
> +		return NULL;
> +
> +	quic_udp_conf_init(sk, &udp_conf, a);
> +	if (udp_sock_create(net, &udp_conf, &sock)) {
> +		pr_debug("%s: failed to create udp sock\n", __func__);
> +		kfree(us);
> +		return NULL;
> +	}
> +
> +	tuncfg.encap_type = 1;
> +	tuncfg.encap_rcv = quic_udp_rcv;
> +	tuncfg.encap_err_lookup = quic_udp_err;
> +	setup_udp_tunnel_sock(net, sock, &tuncfg);
> +
> +	refcount_set(&us->refcnt, 1);
> +	us->sk = sock->sk;
> +	memcpy(&us->addr, a, sizeof(*a));
> +	us->bind_ifindex = sk->sk_bound_dev_if;
> +
> +	head = quic_udp_sock_head(net, ntohs(a->v4.sin_port));
> +	hlist_add_head(&us->node, &head->head);
> +	INIT_WORK(&us->work, quic_udp_sock_put_work);
> +
> +	return us;
> +}
> +
> +static bool quic_udp_sock_get(struct quic_udp_sock *us)
> +{
> +	return refcount_inc_not_zero(&us->refcnt);
> +}
> +
> +static void quic_udp_sock_put(struct quic_udp_sock *us)
> +{
> +	if (refcount_dec_and_test(&us->refcnt))
> +		queue_work(quic_wq, &us->work);
Why use a workqueue here? AFAICS all the callers are in process
context. Is that to break a possible deadlock due to nested mutexes?
A comment on the refcount/locking scheme would likely help.
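
Something along these lines, purely as an example of the kind of
comment I mean (the locking reasoning is a guess on my part and needs
to be checked):

	/* quic_udp_sock lifetime:
	 * - lookups in the quic_udp_sock_head() bucket take a reference
	 *   via refcount_inc_not_zero()
	 * - the final quic_udp_sock_put() does not release in place; it
	 *   defers to quic_wq so that the UDP tunnel socket is torn down
	 *   in a context where no QUIC socket locks are held
	 */
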
/P