Message-ID: <e3412f04-c0dd-409e-88b2-16766d361859@intel.com>
Date: Wed, 15 Jan 2025 11:22:34 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Dan Carpenter <dan.carpenter@...aro.org>
CC: Jakub Kicinski <kuba@...nel.org>, Louis Peens <louis.peens@...igine.com>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, "Quentin
Monnet" <qmo@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
<bpf@...r.kernel.org>, <oss-drivers@...igine.com>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <kernel-janitors@...r.kernel.org>
Subject: Re: [PATCH net] nfp: bpf: prevent integer overflow in
nfp_bpf_event_output()
From: Dan Carpenter <dan.carpenter@...aro.org>
Date: Tue, 14 Jan 2025 21:43:26 +0300
> On Tue, Jan 14, 2025 at 06:17:22PM +0100, Alexander Lobakin wrote:
>> From: Dan Carpenter <dan.carpenter@...aro.org>
>> Date: Tue, 14 Jan 2025 13:45:04 +0300
>>
>>> [ I tried to send this email yesterday but apparently gmail blocked
>>> it for security reasons? So weird. - dan ]
>>>
>>> On Mon, Jan 13, 2025 at 01:32:11PM +0100, Alexander Lobakin wrote:
>>>> From: Dan Carpenter <dan.carpenter@...aro.org>
>>>> Date: Mon, 13 Jan 2025 09:18:39 +0300
>>>>
>>>>> The "sizeof(struct cmsg_bpf_event) + pkt_size + data_size" math could
>>>>> potentially have an integer wrapping bug on 32bit systems. Check for
>>>>
>>>> Not in practice I suppose? Do we need to fix "never" bugs?
>>>>
>>>
>>> No, this is from static analysis. We don't need to fix never bugs.
>>>
>>> This is called from nfp_bpf_ctrl_msg_rx() and nfp_bpf_ctrl_msg_rx_raw()
>>> and I assumed that since pkt_size and data_size come from skb->data on
>>> the rx path then they couldn't be trusted.
>>
>> skbs are always valid and skb->len could never cross INT_MAX to provoke
>> an overflow.
>>
>
> True but unrelated. I think you are looking at the wrong code...
>
> drivers/net/ethernet/netronome/nfp/bpf/offload.c
> 445 int nfp_bpf_event_output(struct nfp_app_bpf *bpf, const void *data,
> ^^^^
> This code comes from the network so it cannot be trusted.
>
> 446 unsigned int len)
> 447 {
> 448 struct cmsg_bpf_event *cbe = (void *)data;
> ^^^^^^^^^^^^^^^^^^^
> It is cast to a struct here.
>
> 449 struct nfp_bpf_neutral_map *record;
> 450 u32 pkt_size, data_size, map_id;
> 451 u64 map_id_full;
> 452
> 453 if (len < sizeof(struct cmsg_bpf_event))
> 454 return -EINVAL;
> 455
> 456 pkt_size = be32_to_cpu(cbe->pkt_size);
> 457 data_size = be32_to_cpu(cbe->data_size);
>
> pkt_size and data_size are u32 values which are controlled from
> over the network.
>
> 458 map_id_full = be64_to_cpu(cbe->map_ptr);
> 459 map_id = map_id_full;
> 460
> 461 if (len < sizeof(struct cmsg_bpf_event) + pkt_size + data_size)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> On a 32bit system, then this math can overflow. The pkt_size and
> data_size are too high and we should return -EINVAL but this check
> doesn't work because of integer wrapping.
>
> 462 return -EINVAL;
>    463                 if (cbe->hdr.ver != NFP_CCM_ABI_VERSION)
> 464 return -EINVAL;
> 465
> 466 rcu_read_lock();
> 467 record = rhashtable_lookup(&bpf->maps_neutral, &map_id,
> 468 nfp_bpf_maps_neutral_params);
> 469 if (!record || map_id_full > U32_MAX) {
> 470 rcu_read_unlock();
> 471 cmsg_warn(bpf, "perf event: map id %lld (0x%llx) not recognized, dropping event\n",
> 472 map_id_full, map_id_full);
> 473 return -EINVAL;
> 474 }
> 475
> 476 bpf_event_output(record->ptr, be32_to_cpu(cbe->cpu_id),
> 477 &cbe->data[round_up(pkt_size, 4)], data_size,
> ^^^^^^^^^^^^^^^^^^^^^
> Here we are way out of bounds with regards to the cbe->data[] array.
Aaah okay. Thanks for the detailed explanation! My note was generic,
not aimed at this particular case.
I think a couple of checks here wouldn't hurt: not only for overflow,
but also functional ones to verify the values are sane. That's more of
an ask for the Netronome folks, though.
Regarding your particular patch, it's now pretty clear to me:
Reviewed-by: Alexander Lobakin <aleksander.lobakin@...el.com>
>
> regards,
> dan carpenter
Thanks,
Olek