Message-ID: <20260129050546.374517-1-kuba@kernel.org>
Date: Wed, 28 Jan 2026 21:05:46 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: achender@...nel.org
Cc: Jakub Kicinski <kuba@...nel.org>,
edumazet@...gle.com,
allison.henderson@...cle.com,
horms@...nel.org,
linux-kselftest@...r.kernel.org,
rds-devel@....oracle.com,
netdev@...r.kernel.org,
pabeni@...hat.com,
linux-rdma@...r.kernel.org
Subject: Re: [net-next,v2,1/7] net/rds: new extension header: rdma bytes
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net/rds: new extension header: rdma bytes
This patch introduces a new RDS extension header, RDS_EXTHDR_RDMA_BYTES,
for RDMA byte accounting, and refactors rds_message_add_extension() to
support multiple extensions per message instead of just one.
> diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
> index f9d28ddd168d..03afe3df902d 100644
> --- a/net/rds/ib_send.c
> +++ b/net/rds/ib_send.c
> @@ -577,11 +577,29 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
> /* If it has a RDMA op, tell the peer we did it. This is
> * used by the peer to release use-once RDMA MRs. */
> if (rm->rdma.op_active) {
> - struct rds_ext_header_rdma ext_hdr;
> + struct rds_ext_header_rdma ext_hdr = {};
> + struct rds_ext_header_rdma_bytes
> + rdma_bytes_ext_hdr = {};
>
> ext_hdr.h_rdma_rkey = cpu_to_be32(rm->rdma.op_rkey);
> rds_message_add_extension(&rm->m_inc.i_hdr,
> - RDS_EXTHDR_RDMA, &ext_hdr, sizeof(ext_hdr));
> + RDS_EXTHDR_RDMA, &ext_hdr);
> +
> + /* prepare the rdma bytes ext header */
> + rdma_bytes_ext_hdr.h_rflags = rm->rdma.op_write ?
> + RDS_FLAG_RDMA_WR_BYTES : RDS_FLAG_RDMA_RD_BYTES;
> + rdma_bytes_ext_hdr.h_rdma_bytes =
> + cpu_to_be32(rm->rdma.op_bytes);
> +
> + if (rds_message_add_extension(&rm->m_inc.i_hdr,
> + RDS_EXTHDR_RDMA_BYTES,
> + &rdma_bytes_ext_hdr)) {
The new rds_ext_header_rdma_bytes struct contains __be32 + u8 = 5 bytes
of data, but without __packed the compiler pads it to 8 bytes: the __be32
member gives the struct 4-byte alignment, so 3 pad bytes follow h_rflags.
This means sizeof(struct rds_ext_header_rdma_bytes) returns 8, not 5.
With the h_exthdr space limited to 16 bytes, adding RDS_EXTHDR_RDMA
(1+4=5 bytes) plus RDS_EXTHDR_RDMA_BYTES (1+8=9 bytes) consumes 14 bytes,
leaving only 2 bytes available.
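To make the padding concrete, here is a minimal standalone sketch (plain
userspace C with made-up struct names; uint32_t stands in for __be32):

#include <assert.h>
#include <stdint.h>

/* same layout as the proposed struct: the uint32_t member gives it
 * 4-byte alignment, so 3 pad bytes follow h_rflags
 */
struct rdma_bytes_unpacked {
	uint32_t h_rdma_bytes;
	uint8_t  h_rflags;
};

/* packed variant: exactly the 5 bytes of real data */
struct rdma_bytes_packed {
	uint32_t h_rdma_bytes;
	uint8_t  h_rflags;
} __attribute__((packed));

static_assert(sizeof(struct rdma_bytes_unpacked) == 8, "padded to 8");
static_assert(sizeof(struct rdma_bytes_packed) == 5, "exactly 5");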
> + /* rdma bytes ext header was added successfully,
> + * notify the remote side via flag in header
> + */
> + rm->m_inc.i_hdr.h_flags |=
> + RDS_FLAG_EXTHDR_EXTENSION;
> + }
> }
> if (rm->m_rdma_cookie) {
> rds_message_add_rdma_dest_extension(&rm->m_inc.i_hdr,
> rds_rdma_cookie_key(rm->m_rdma_cookie),
> rds_rdma_cookie_offset(rm->m_rdma_cookie));
> }
When rm->rdma.op_active and rm->m_rdma_cookie are both set, could this
cause rds_message_add_rdma_dest_extension() to silently fail? The
RDS_EXTHDR_RDMA_DEST extension requires 9 bytes (1+8), but only 2 bytes
remain after the RDMA and RDMA_BYTES extensions are added.
The return value of rds_message_add_rdma_dest_extension() is not checked
here, so when both conditions hold, the RDMA destination cookie would be
silently dropped with no indication to the caller.
Should the struct be marked __packed to use only 5 bytes on the wire, or
should there be handling for the case when extension space runs out?
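If the answer is the latter, one possible shape for the m_rdma_cookie
branch, assuming rds_message_add_rdma_dest_extension() still propagates
the 0-on-failure result of rds_message_add_extension() (a sketch, not a
tested fix):

	if (rm->m_rdma_cookie) {
		if (!rds_message_add_rdma_dest_extension(&rm->m_inc.i_hdr,
				rds_rdma_cookie_key(rm->m_rdma_cookie),
				rds_rdma_cookie_offset(rm->m_rdma_cookie))) {
			/* extension space exhausted: make the drop
			 * visible instead of losing the cookie silently
			 */
			pr_warn_once("RDS/IB: no exthdr space for rdma dest cookie\n");
		}
	}

Whether a warning is enough, or the transmit should fail outright,
depends on how the peer relies on the cookie; that is a design call for
the series.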
> diff --git a/net/rds/rds.h b/net/rds/rds.h
> index 8a549fe687ac..cadfd7ec0ba9 100644
> --- a/net/rds/rds.h
> +++ b/net/rds/rds.h
[ ... ]
> @@ -258,6 +259,19 @@ struct rds_ext_header_rdma_dest {
> __be32 h_rdma_offset;
> };
>
> +/*
> + * This extension header tells the peer about delivered RDMA byte count.
> + */
> +#define RDS_EXTHDR_RDMA_BYTES 4
> +
> +struct rds_ext_header_rdma_bytes {
> + __be32 h_rdma_bytes; /* byte count */
> + u8 h_rflags; /* direction of RDMA, write or read */
> +};
Without __packed, this struct will be 8 bytes due to trailing padding
after h_rflags. The other extension header structs (rds_ext_header_rdma,
rds_ext_header_rdma_dest) contain only __be32 fields, so they have no
padding concerns.
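If the 5-byte wire format is the intent, one option (a sketch; the
static_assert is illustrative) is:

	struct rds_ext_header_rdma_bytes {
		__be32	h_rdma_bytes;	/* byte count */
		u8	h_rflags;	/* direction of RDMA, write or read */
	} __packed;

	static_assert(sizeof(struct rds_ext_header_rdma_bytes) == 5);

Note that even with __packed, by this accounting RDS_EXTHDR_RDMA (5) +
RDS_EXTHDR_RDMA_BYTES (6) + RDS_EXTHDR_RDMA_DEST (9) is 20 bytes, which
still exceeds the 16-byte h_exthdr space, so the exhaustion question
above stands either way.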