Message-ID: <CADvbK_dEbOvdaMB4jGkaQMO7j0CnnpYUYJXmS-eKxmURybG09w@mail.gmail.com>
Date: Thu, 8 Jan 2026 15:53:38 -0500
From: Xin Long <lucien.xin@...il.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: network dev <netdev@...r.kernel.org>, quic@...ts.linux.dev, davem@...emloft.net,
kuba@...nel.org, Eric Dumazet <edumazet@...gle.com>, Simon Horman <horms@...nel.org>,
Stefan Metzmacher <metze@...ba.org>, Moritz Buhl <mbuhl@...nbsd.org>, Tyler Fanelli <tfanelli@...hat.com>,
Pengtao He <hepengtao@...omi.com>, Thomas Dreibholz <dreibh@...ula.no>, linux-cifs@...r.kernel.org,
Steve French <smfrench@...il.com>, Namjae Jeon <linkinjeon@...nel.org>,
Paulo Alcantara <pc@...guebit.com>, Tom Talpey <tom@...pey.com>, kernel-tls-handshake@...ts.linux.dev,
Chuck Lever <chuck.lever@...cle.com>, Jeff Layton <jlayton@...nel.org>,
Steve Dickson <steved@...hat.com>, Hannes Reinecke <hare@...e.de>, Alexander Aring <aahringo@...hat.com>,
David Howells <dhowells@...hat.com>, Matthieu Baerts <matttbe@...nel.org>,
John Ericson <mail@...nericson.me>, Cong Wang <xiyou.wangcong@...il.com>,
"D . Wythe" <alibuda@...ux.alibaba.com>, Jason Baron <jbaron@...mai.com>,
illiliti <illiliti@...tonmail.com>, Sabrina Dubroca <sd@...asysnail.net>,
Marcelo Ricardo Leitner <marcelo.leitner@...il.com>, Daniel Stenberg <daniel@...x.se>,
Andy Gospodarek <andrew.gospodarek@...adcom.com>
Subject: Re: [PATCH net-next v6 06/16] quic: add stream management
On Thu, Jan 8, 2026 at 3:29 PM Xin Long <lucien.xin@...il.com> wrote:
>
> On Thu, Jan 8, 2026 at 10:36 AM Paolo Abeni <pabeni@...hat.com> wrote:
> >
> > On 1/5/26 3:04 PM, Xin Long wrote:
> > > +/* Create and register new streams for sending or receiving. */
> > > +static struct quic_stream *quic_stream_create(struct quic_stream_table *streams,
> > > + s64 max_stream_id, bool send, bool is_serv)
> > > +{
> > > + struct quic_stream_limits *limits = &streams->send;
> > > + struct quic_stream *stream = NULL;
> > > + gfp_t gfp = GFP_KERNEL_ACCOUNT;
> > > + s64 stream_id;
> > > +
> > > + if (!send) {
> > > + limits = &streams->recv;
> > > + gfp = GFP_ATOMIC | __GFP_ACCOUNT;
> > > + }
> > > + stream_id = limits->next_bidi_stream_id;
> > > + if (quic_stream_id_uni(max_stream_id))
> > > + stream_id = limits->next_uni_stream_id;
> > > +
> > > + /* rfc9000#section-2.1: A stream ID that is used out of order results in all streams
> > > + * of that type with lower-numbered stream IDs also being opened.
> > > + */
> > > + while (stream_id <= max_stream_id) {
> > > + stream = kzalloc(sizeof(*stream), gfp);
> > > + if (!stream)
> > > + return NULL;
> >
> > Do you need to release the allocated ids in case of failure? It would be
> > surprising to find some ids allocated when this call fails/returns NULL.
> I was aware of this but didn't change it. Since streams are always opened
> sequentially, I think it's fine to leave the already-created streams in
> place; it causes no problems even if users assume those streams are not
> yet open.
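>
> If we did decide to roll back on allocation failure, a rough sketch could
> look like the below (quic_stream_delete() is assumed here as a helper that
> undoes quic_stream_add(); it is not part of this patch, and the
> next_*_stream_id/streams_* counters would also need restoring):
>
> /* Undo streams created earlier in the same quic_stream_create() call,
>  * from first_id up to (but not including) the ID whose allocation failed.
>  */
> static void quic_stream_create_rollback(struct quic_stream_table *streams,
>                                         s64 first_id, s64 failed_id)
> {
>         struct quic_stream *stream;
>         s64 id;
>
>         for (id = first_id; id < failed_id; id += QUIC_STREAM_ID_STEP) {
>                 stream = quic_stream_find(streams, id);
>                 if (!stream)
>                         continue;
>                 quic_stream_delete(streams, stream);    /* assumed helper */
>                 kfree(stream);
>         }
> }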
>
> >
> > > +
> > > + stream->id = stream_id;
> > > + if (quic_stream_id_uni(stream_id)) {
> > > + if (send) {
> > > + stream->send.max_bytes = limits->max_stream_data_uni;
> > > + } else {
> > > + stream->recv.max_bytes = limits->max_stream_data_uni;
> > > + stream->recv.window = stream->recv.max_bytes;
> > > + }
> > > + /* Streams must be opened sequentially. Update the next stream ID so the
> > > + * correct starting point is known if an out-of-order open is requested.
> > > + */
> > > + limits->next_uni_stream_id = stream_id + QUIC_STREAM_ID_STEP;
> > > + limits->streams_uni++;
> > > +
> > > + quic_stream_add(streams, stream);
> > > + stream_id += QUIC_STREAM_ID_STEP;
> > > + continue;
> > > + }
> > > +
> > > + if (quic_stream_id_local(stream_id, is_serv)) {
> > > + stream->send.max_bytes = streams->send.max_stream_data_bidi_remote;
> > > + stream->recv.max_bytes = streams->recv.max_stream_data_bidi_local;
> > > + } else {
> > > + stream->send.max_bytes = streams->send.max_stream_data_bidi_local;
> > > + stream->recv.max_bytes = streams->recv.max_stream_data_bidi_remote;
> > > + }
> > > + stream->recv.window = stream->recv.max_bytes;
> > > +
> > > + limits->next_bidi_stream_id = stream_id + QUIC_STREAM_ID_STEP;
> > > + limits->streams_bidi++;
> > > +
> > > + quic_stream_add(streams, stream);
> > > + stream_id += QUIC_STREAM_ID_STEP;
> > > + }
> > > + return stream;
> > > +}
> > > +
> > > +/* Check if a send or receive stream ID is already closed. */
> > > +static bool quic_stream_id_closed(struct quic_stream_table *streams, s64 stream_id, bool send)
> > > +{
> > > + struct quic_stream_limits *limits = send ? &streams->send : &streams->recv;
> > > +
> > > + if (quic_stream_id_uni(stream_id))
> > > + return stream_id < limits->next_uni_stream_id;
> > > + return stream_id < limits->next_bidi_stream_id;
> >
> > I can't recall if I mentioned the following in a past review... it looks
> > like the above assumes wrap-around is not possible, which is realistic
> > given the u64 counters - it would require > 100y on a server allocating
> > 4G ids per second.
> >
> > But it would be nice to explicitly document such an assumption somewhere.
> >
> How about I add a simple comment in quic_stream_create() right above the
> next_uni_stream_id/streams_uni increments, something like:
>
> "Note: overflow of next_uni_stream_id/streams_uni is impossible with u64."
>
> > > +}
> > > +
> > > +/* Check if a stream ID would exceed local (recv) or peer (send) limits. */
> > > +bool quic_stream_id_exceeds(struct quic_stream_table *streams, s64 stream_id, bool send)
> > > +{
> > > + u64 nstreams;
> > > +
> > > + if (!send) {
> > > + if (quic_stream_id_uni(stream_id))
> > > + return stream_id > streams->recv.max_uni_stream_id;
> > > + return stream_id > streams->recv.max_bidi_stream_id;
> > > + }
> > > +
> > > + if (quic_stream_id_uni(stream_id)) {
> > > + if (stream_id > streams->send.max_uni_stream_id)
> > > + return true;
> > > + stream_id -= streams->send.next_uni_stream_id;
> > > + nstreams = quic_stream_id_to_streams(stream_id);
> >
> > It's not clear to me why only send streams have this additional check.
> This is a good question.
>
> recv.max_uni_stream_id is updated based on next_uni/bidi_stream_id,
> max_streams_uni/bidi and streams_uni/bidi in quic_stream_max_streams_update(),
> so there's no need to check them again here. (Maybe I should add a comment
> about this.)
>
> But send.max_uni_stream_id is updated simply from the peer's announced
> recv.max_uni_stream_id, so it must also be checked against the local
> counts and limits.
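>
> For example, the comment in quic_stream_id_exceeds() could read something
> like this (sketch only):
>
>         if (!send) {
>                 /* recv.max_uni/bidi_stream_id are already kept within the
>                  * local counts and limits by quic_stream_max_streams_update(),
>                  * so comparing against them is enough here.
>                  */
>                 if (quic_stream_id_uni(stream_id))
>                         return stream_id > streams->recv.max_uni_stream_id;
>                 return stream_id > streams->recv.max_bidi_stream_id;
>         }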
>
> >
> > > + return nstreams + streams->send.streams_uni > streams->send.max_streams_uni;
> >
> > Possibly it would be more consistent
> >
> > max_uni_stream_id -> max_stream_ids_uni
> >
> > (no strong preferences)
> I actually got the variable name from
> https://datatracker.ietf.org/doc/html/rfc9000.
>
Sorry, I may have misunderstood here. From the variable names:
- max_uni_stream_id: is a stream ID
- max_stream_ids_uni: reads like a count of stream IDs
and the two would be related by:
  max_stream_ids_uni = quic_stream_id_to_streams(max_uni_stream_id);
  max_uni_stream_id  = quic_stream_streams_to_id(max_stream_ids_uni);
I used max_uni_stream_id because in most places it is checked directly
against a stream_id.
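For example (an illustration only, not the patch's actual helpers): with
QUIC_STREAM_ID_STEP == 4, client-initiated uni stream IDs are 2, 6, 10, ...
If the peer allows 3 such streams, then

  max_stream_ids_uni (a count)     == 3
  max_uni_stream_id  (a stream ID) == 10    /* = 2 + (3 - 1) * 4 */

so the ID form can be compared against an incoming stream_id directly,
while the count form would need a conversion first.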
Thanks.
> >
> > > + }
> > > +
> > > + if (stream_id > streams->send.max_bidi_stream_id)
> > > + return true;
> > > + stream_id -= streams->send.next_bidi_stream_id;
> > > + nstreams = quic_stream_id_to_streams(stream_id);
> > > + return nstreams + streams->send.streams_bidi > streams->send.max_streams_bidi;
> > > +}
> >
> > [...]
> > > +/* Get or create a receive stream by ID. Requires sock lock held. */
> > > +struct quic_stream *quic_stream_recv_get(struct quic_stream_table *streams, s64 stream_id,
> > > + bool is_serv)
> > > +{
> > > + struct quic_stream *stream;
> > > +
> > > + if (!quic_stream_id_valid(stream_id, is_serv, false))
> > > + return ERR_PTR(-EINVAL);
> > > +
> > > + stream = quic_stream_find(streams, stream_id);
> > > + if (stream)
> > > + return stream;
> > > +
> > > + if (quic_stream_id_local(stream_id, is_serv)) {
> > > + if (quic_stream_id_closed(streams, stream_id, true))
> > > + return ERR_PTR(-ENOSTR);
> > > + return ERR_PTR(-EINVAL);
> > > + }
> > > +
> > > + if (quic_stream_id_closed(streams, stream_id, false))
> > > + return ERR_PTR(-ENOSTR);
> > > +
> > > + if (quic_stream_id_exceeds(streams, stream_id, false))
> > > + return ERR_PTR(-EAGAIN);
> > > +
> > > + stream = quic_stream_create(streams, stream_id, false, is_serv);
> > > + if (!stream)
> > > + return ERR_PTR(-ENOSTR);
> > > + if (quic_stream_id_valid(stream_id, is_serv, true))
> > > + streams->send.active_stream_id = stream_id;
> >
> > This function is really similar to quic_stream_send_get(); I think it
> > should be easy to factor out a common helper (and possibly use such a
> > helper directly, with no send/recv wrappers).
> >
> I will factor out a common helper quic_stream_get() but keep
> quic_stream_send_get()/quic_stream_recv_get() as:
>
> struct quic_stream *quic_stream_send_get(...)
> {
> return quic_stream_get(streams, stream_id, is_serv, true);
> }
>
> struct quic_stream *quic_stream_recv_get(...)
> {
> return quic_stream_get(streams, stream_id, is_serv, false);
> }
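>
> A rough sketch of what the common helper might look like, based only on
> the recv path quoted above (the send-side branches below are assumptions,
> not taken from the patch):
>
> static struct quic_stream *quic_stream_get(struct quic_stream_table *streams,
>                                            s64 stream_id, bool is_serv, bool send)
> {
>         struct quic_stream *stream;
>
>         if (!quic_stream_id_valid(stream_id, is_serv, send))
>                 return ERR_PTR(-EINVAL);
>
>         stream = quic_stream_find(streams, stream_id);
>         if (stream)
>                 return stream;
>
>         /* recv can't open a locally-initiated stream, and (assumed) send
>          * can't open a peer-initiated one; at most report it as already
>          * closed.  This mirrors the recv path above.
>          */
>         if (quic_stream_id_local(stream_id, is_serv) != send) {
>                 if (quic_stream_id_closed(streams, stream_id, !send))
>                         return ERR_PTR(-ENOSTR);
>                 return ERR_PTR(-EINVAL);
>         }
>
>         if (quic_stream_id_closed(streams, stream_id, send))
>                 return ERR_PTR(-ENOSTR);
>
>         if (quic_stream_id_exceeds(streams, stream_id, send))
>                 return ERR_PTR(-EAGAIN);
>
>         stream = quic_stream_create(streams, stream_id, send, is_serv);
>         if (!stream)
>                 return ERR_PTR(-ENOSTR);
>
>         /* Remaining bookkeeping (e.g. active_stream_id) as in the current
>          * send/recv functions.
>          */
>         return stream;
> }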
>
> Thanks.