Message-ID: <20251109002207.GF1859178@ziepe.ca>
Date: Sat, 8 Nov 2025 20:22:07 -0400
From: Jason Gunthorpe <jgg@...pe.ca>
To: Kriish Sharma <kriish.sharma2006@...il.com>
Cc: Leon Romanovsky <leon@...nel.org>,
Vlad Dumitrescu <vdumitrescu@...dia.com>,
Parav Pandit <parav@...dia.com>, Edward Srouji <edwards@...dia.com>,
linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
syzbot+938fcd548c303fe33c1a@...kaller.appspotmail.com
Subject: Re: [PATCH v2] RDMA/core: Check for missing DGID attribute in
ib_nl_is_good_ip_resp()
On Sat, Nov 08, 2025 at 03:43:36AM +0000, Kriish Sharma wrote:
> KMSAN reported a use of uninitialized memory in hex_byte_pack()
> via ip6_string() when printing %pI6 from ib_nl_handle_ip_res_resp().
> Previously, ib_nl_process_good_ip_rsep() used 'gid' without
> verifying that the LS_NLA_TYPE_DGID attribute was present.
>
> This patch adds a check for the DGID attribute in ib_nl_is_good_ip_resp(),
> returning false if it is missing. This prevents uninitialized memory
> usage downstream in ib_nl_process_good_ip_rsep().
>
> Suggested-by: Vlad Dumitrescu <vdumitrescu@...dia.com>
> Reported-by: syzbot+938fcd548c303fe33c1a@...kaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=938fcd548c303fe33c1a
> Fixes: ae43f8286730 ("IB/core: Add IP to GID netlink offload")
> Signed-off-by: Kriish Sharma <kriish.sharma2006@...il.com>
> ---
> v2:
> - Added check for LS_NLA_TYPE_DGID in ib_nl_is_good_ip_resp() to
> avoid uninitialized 'gid' usage, as suggested by Vlad Dumitrescu.
>
> v1: https://lore.kernel.org/all/20251107041002.2091584-1-kriish.sharma2006@gmail.com
>
> drivers/infiniband/core/addr.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
> diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
> index 61596cda2b65..dde9114fe6a1 100644
> --- a/drivers/infiniband/core/addr.c
> +++ b/drivers/infiniband/core/addr.c
> @@ -93,13 +93,16 @@ static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
>  	if (ret)
>  		return false;
>
> +	if (!tb[LS_NLA_TYPE_DGID])
> +		return false;
> +
>  	return true;
>  }
>
>  static void ib_nl_process_good_ip_rsep(const struct nlmsghdr *nlh)
>  {
>  	const struct nlattr *head, *curr;
> -	union ib_gid gid;
> +	union ib_gid gid = {};
Let's drop this.

Looking at the whole flow, it looks like it is not using
nla_parse_deprecated() properly. I think it should look like this, which
will fix the issue and make it run faster:

diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
index 61596cda2b65f3..35ba852a172aad 100644
--- a/drivers/infiniband/core/addr.c
+++ b/drivers/infiniband/core/addr.c
@@ -80,37 +80,25 @@ static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = {
 		.min = sizeof(struct rdma_nla_ls_gid)},
 };
 
-static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
+static void ib_nl_process_ip_rsep(const struct nlmsghdr *nlh)
 {
 	struct nlattr *tb[LS_NLA_TYPE_MAX] = {};
+	union ib_gid gid;
+	struct addr_req *req;
+	int found = 0;
 	int ret;
 
 	if (nlh->nlmsg_flags & RDMA_NL_LS_F_ERR)
-		return false;
+		return;
 
 	ret = nla_parse_deprecated(tb, LS_NLA_TYPE_MAX - 1, nlmsg_data(nlh),
 				   nlmsg_len(nlh), ib_nl_addr_policy, NULL);
 	if (ret)
-		return false;
+		return;
 
-	return true;
-}
-
-static void ib_nl_process_good_ip_rsep(const struct nlmsghdr *nlh)
-{
-	const struct nlattr *head, *curr;
-	union ib_gid gid;
-	struct addr_req *req;
-	int len, rem;
-	int found = 0;
-
-	head = (const struct nlattr *)nlmsg_data(nlh);
-	len = nlmsg_len(nlh);
-
-	nla_for_each_attr(curr, head, len, rem) {
-		if (curr->nla_type == LS_NLA_TYPE_DGID)
-			memcpy(&gid, nla_data(curr), nla_len(curr));
-	}
+	if (!tb[LS_NLA_TYPE_DGID])
+		return;
+	memcpy(&gid, nla_data(tb[LS_NLA_TYPE_DGID]), sizeof(gid));
 
 	spin_lock_bh(&lock);
 	list_for_each_entry(req, &req_list, list) {
@@ -137,8 +125,7 @@ int ib_nl_handle_ip_res_resp(struct sk_buff *skb,
 	    !(NETLINK_CB(skb).sk))
 		return -EPERM;
 
-	if (ib_nl_is_good_ip_resp(nlh))
-		ib_nl_process_good_ip_rsep(nlh);
+	ib_nl_process_ip_rsep(nlh);
 
 	return 0;
 }
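
i.e. with the hunks applied the reworked function ends up roughly like
the sketch below. The existing req_list walk under the lock is not
touched by the diff, so it is only summarized in comments here; treat
this as a sketch rather than the exact result:

/*
 * Sketch of ib_nl_process_ip_rsep() with the diff above applied.  The
 * pre-existing req_list walk under the lock is elided.
 */
static void ib_nl_process_ip_rsep(const struct nlmsghdr *nlh)
{
	struct nlattr *tb[LS_NLA_TYPE_MAX] = {};
	union ib_gid gid;
	struct addr_req *req;
	int found = 0;
	int ret;

	if (nlh->nlmsg_flags & RDMA_NL_LS_F_ERR)
		return;

	/*
	 * Single validated parse: per the ib_nl_addr_policy entry shown
	 * in the hunk context (.min = sizeof(struct rdma_nla_ls_gid)),
	 * a present DGID attribute is already length-checked.
	 */
	ret = nla_parse_deprecated(tb, LS_NLA_TYPE_MAX - 1, nlmsg_data(nlh),
				   nlmsg_len(nlh), ib_nl_addr_policy, NULL);
	if (ret)
		return;

	/*
	 * A response without a DGID attribute leaves tb[LS_NLA_TYPE_DGID]
	 * NULL, so 'gid' is never read uninitialized.
	 */
	if (!tb[LS_NLA_TYPE_DGID])
		return;
	memcpy(&gid, nla_data(tb[LS_NLA_TYPE_DGID]), sizeof(gid));

	spin_lock_bh(&lock);
	list_for_each_entry(req, &req_list, list) {
		/* existing matching of 'gid' against queued requests,
		 * setting 'found' on a hit -- unchanged by the diff
		 */
	}
	/* rest of the original function body is unchanged */
}

Compared to the v2 approach, the message is parsed and policy-checked
exactly once and the DGID comes out of tb[] with a fixed-size copy,
instead of re-walking the attributes a second time and trusting
nla_len() for the memcpy.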