Message-ID: <202205091614.C55B5D49F@keescook>
Date: Mon, 9 May 2022 16:20:47 -0700
From: Kees Cook <keescook@...omium.org>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>,
netdev <netdev@...r.kernel.org>, Coco Li <lixiaoyan@...gle.com>,
Tariq Toukan <tariqt@...dia.com>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>
Subject: Re: [PATCH v4 net-next 12/12] mlx5: support BIG TCP packets
On Sat, May 07, 2022 at 04:19:06AM -0700, Eric Dumazet wrote:
> On Sat, May 7, 2022 at 12:46 AM Kees Cook <keescook@...omium.org> wrote:
> >
> > On Fri, May 06, 2022 at 06:54:05PM -0700, Jakub Kicinski wrote:
> > > On Fri, 6 May 2022 17:32:43 -0700 Eric Dumazet wrote:
> > > > On Fri, May 6, 2022 at 3:34 PM Jakub Kicinski <kuba@...nel.org> wrote:
> > > > > In function ‘fortify_memcpy_chk’,
> > > > > inlined from ‘mlx5e_sq_xmit_wqe’ at ../drivers/net/ethernet/mellanox/mlx5/core/en_tx.c:408:5:
> > > > > ../include/linux/fortify-string.h:328:25: warning: call to ‘__write_overflow_field’ declared with attribute warning: detected write beyond size of field (1st parameter); maybe use struct_group()? [-Wattribute-warning]
> > > > >   328 |                         __write_overflow_field(p_size_field, size);
> > > > >       |                         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > Ah, my old friend, inline_hdr.start. Looks a lot like another one I fixed
> > earlier in ad5185735f7d ("net/mlx5e: Avoid field-overflowing memcpy()"):
> >
> > 	if (attr->ihs) {
> > 		if (skb_vlan_tag_present(skb)) {
> > 			eseg->inline_hdr.sz |= cpu_to_be16(attr->ihs + VLAN_HLEN);
> > 			mlx5e_insert_vlan(eseg->inline_hdr.start, skb, attr->ihs);
> > 			stats->added_vlan_packets++;
> > 		} else {
> > 			eseg->inline_hdr.sz |= cpu_to_be16(attr->ihs);
> > 			memcpy(eseg->inline_hdr.start, skb->data, attr->ihs);
> > 			^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > 		}
> > 		dseg += wqe_attr->ds_cnt_inl;
> >
> > This is actually two regions, 2 bytes in eseg and everything else in
> > dseg. Splitting the memcpy() will work:
> >
> > memcpy(eseg->inline_hdr.start, skb->data, sizeof(eseg->inline_hdr.start));
> > memcpy(dseg, skb->data + sizeof(eseg->inline_hdr.start), ihs - sizeof(eseg->inline_hdr.start));
> >
> > But this raises the question: what is validating that ihs - 2 is equal to
> > wqe_attr->ds_cnt_inl * sizeof(*dseg)?
> >
> > And how is wqe bounds checked?
>
> Look at the definition of struct mlx5i_tx_wqe
>
> Then mlx5i_sq_calc_wqe_attr() computes the number of ds_cnt (16-byte
> granularity) units needed.
>
> Then look at mlx5e_txqsq_get_next_pi()
Thanks! I'll study the paths.
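In the meantime, spelling out the split from my quoted proposal above, the
else branch would become roughly the following. This is only a sketch:
variable names are taken from the quoted driver code, and it assumes dseg
still points at the first inline data segment at the time of the copy:

	} else {
		eseg->inline_hdr.sz |= cpu_to_be16(attr->ihs);
		/* The first 2 bytes fit inside eseg's inline_hdr.start field. */
		memcpy(eseg->inline_hdr.start, skb->data,
		       sizeof(eseg->inline_hdr.start));
		/* The remaining inlined header bytes land in the data segments. */
		memcpy(dseg, skb->data + sizeof(eseg->inline_hdr.start),
		       attr->ihs - sizeof(eseg->inline_hdr.start));
	}

That keeps both copies as plain bulk memcpy()s; it just makes the
destination of each region explicit.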
> I doubt a compiler can infer that the driver is correct.
Agreed; this layering visibility is a bit strange to deal with. I'll see
if I can come up with a sane solution that doesn't split the memcpy but
establishes some way to do compile-time (or run-time) bounds checking.
If I can't, I suspect I'll have to create an "unsafe_memcpy" wrapper
that expressly ignores the structure layouts, etc. That's basically what
memcpy() currently is, so it's not a regression from that perspective.
I'd just prefer to find a way to refactor things so that the compiler
can actually help us do the bounds checking.
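As a rough sketch of that fallback (the form is a strawman, and it assumes
the __underlying_memcpy() helper that fortify-string.h already uses
internally to reach the unfortified memcpy()):

	/*
	 * Strawman: copy without FORTIFY's field-size checking, but force
	 * callers to spell out why the cross-field write is safe.
	 */
	#define unsafe_memcpy(dst, src, bytes, justification)	\
		__underlying_memcpy(dst, src, bytes)

At least that would make the deliberately cross-field copies grep-able
instead of leaving them indistinguishable from plain memcpy() calls.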
> Basically this is a variable-length structure, quite common in the NIC
> world, given that the number of DMA descriptors can vary from 1 to XX
> and the size of the headers varies as well. (Typically, fast NICs want
> the headers inlined in the TX descriptor.)
Yup; most of the refactoring patches I've sent for the memcpy bounds
checking have been in networking. :) (But then, all the recent
security flaws with memcpy overflows have also been in networking,
so no real surprise, I guess.)
> NIC drivers send millions of packets per second.
> We cannot really afford to copy each component of a frame one byte at a time.
>
> The memcpy() here typically copies an IPv6 header (40 bytes) + a TCP
> header (up to 60 bytes), plus more headers if encapsulation is added.
Right; I need to make sure this gets fixed without wrecking performance.
:)
--
Kees Cook