Message-ID: <YaoeKfmJrDPhMXWp@google.com>
Date: Fri, 3 Dec 2021 13:39:53 +0000
From: Lee Jones <lee.jones@...aro.org>
To: Bjørn Mork <bjorn@...k.no>
Cc: Jakub Kicinski <kuba@...nel.org>, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, Oliver Neukum <oliver@...kum.org>,
"David S. Miller" <davem@...emloft.net>, linux-usb@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH 1/1] net: cdc_ncm: Allow for dwNtbOutMaxSize to be unset or zero

On Fri, 03 Dec 2021, Bjørn Mork wrote:
> >> It's been a while since I looked at this, so excuse me if I read it
> >> wrongly. But I think we need to catch more illegal/impossible values
> >> than just zero here? Any buffer size which cannot hold a single
> >> datagram is pointless.
> >>
> >> Trying to figure out what I possibly meant to do with that
> >>
> >> min = min(min, max);
> >>
> >> I don't think it makes any sense? Does it? The "min" value we've
> >> carefully calculated allows one max-sized datagram plus headers. I don't
> >> think we should ever continue with a smaller buffer than that.
> >
> > I was more confused with the comment you added to that code:
> >
> > /* some devices set dwNtbOutMaxSize too low for the above default */
> > min = min(min, max);
> >
> > ... which looks as though it should solve the issue of an inadequate
> > dwNtbOutMaxSize, but it almost does the opposite.
>
> That's what I read too. I must admit that I cannot remember writing any
> of this stuff. But I trust git...
In Git we trust!
> > I initially
> > changed this segment to use the max() macro instead, but the
> > subsequent clamp_t() macro simply chooses the 'max' (0) value over the now
> > sane 'min' one.
>
> Yes, but what if we adjust max here instead of min?
That's what my patch does.
> > Which is why I chose
> >> Or are there cases where this is valid?
> >
> > I'm not an expert on the SKB code, but in my simple view of the world,
> > if you wish to use a buffer for any amount of data, you should
> > allocate space for it.
> >
> >> So that really should have been catching this bug with a
> >>
> >> max = max(min, max)
> >
> > I tried this. It didn't work either.
> >
> > See the subsequent clamp_t() call a few lines down.
>
> This I don't understand. If we have for example
>
> new_tx = 0
> max = 0
> min = 1514(=datagram) + 8(=ndp) + 2(=1+1) * 4(=dpe) + 12(=nth) = 1542
>
> then
>
> max = max(min, max) = 1542
> val = clamp_t(u32, new_tx, min, max) = 1542
>
> so we return 1542 and everything is fine.
I don't believe so.
#define clamp_t(type, val, lo, hi) \
	min_t(type, max_t(type, val, lo), hi)

So:

	min_t(u32, max_t(u32, 0, 1542), 0)

which becomes:

	min_t(u32, 1542, 0) = 0
So we return 0 and everything is not fine. :)
Perhaps we should use max_t() here instead of clamp?
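To convince myself, I mocked the macros up in userspace (quick hack with
simplified min_t()/max_t(), obviously not driver code):

/*
 * Quick userspace mock-up of the kernel macros, purely to sanity-check
 * the arithmetic above.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* simplified versions of the kernel helpers (no type-checking magic) */
#define min_t(type, x, y)	((type)(x) < (type)(y) ? (type)(x) : (type)(y))
#define max_t(type, x, y)	((type)(x) > (type)(y) ? (type)(x) : (type)(y))
#define clamp_t(type, val, lo, hi) min_t(type, max_t(type, val, lo), hi)

int main(void)
{
	uint32_t new_tx = 0, min = 1542, max = 0;

	/* clamp_t() trusts 'hi', so a bogus dwNtbOutMaxSize of 0 wins */
	printf("clamp_t: %" PRIu32 "\n", clamp_t(uint32_t, new_tx, min, max)); /* 0 */

	/* max_t() ignores the bogus maximum and keeps the calculated minimum */
	printf("max_t:   %" PRIu32 "\n", max_t(uint32_t, new_tx, min));        /* 1542 */

	return 0;
}

With 'max' left at 0 the clamp collapses to 0, whereas max_t() keeps us at
the 1542 minimum.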
> >> or maybe more readable
> >>
> >> 	if (max < min)
> >> 		max = min
> >>
> >> What do you think?
> >
> > So the data that is added to the SKB is ctx->max_ndp_size, which is
> > allocated in cdc_ncm_init(). The code that does it looks like:
> >
> > 	if (ctx->is_ndp16)
> > 		ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp16) +
> > 				    (ctx->tx_max_datagrams + 1) *
> > 				    sizeof(struct usb_cdc_ncm_dpe16);
> > 	else
> > 		ctx->max_ndp_size = sizeof(struct usb_cdc_ncm_ndp32) +
> > 				    (ctx->tx_max_datagrams + 1) *
> > 				    sizeof(struct usb_cdc_ncm_dpe32);
> >
> > So this should be the size of the allocation too, right?
>
> This driver doesn't add data to the skb. It allocates a new buffer and
> copies one or more skbs into it. I'm sure that could be improved too...
"one or more skbs" == data :)
Either way, it's asking for more bits to be copied in than there is
space for. It's amazing that this worked at all. We only noticed it
when we increased the size of one of the SKB headers and some of the
accidentally allocated memory was eaten up.
> Without a complete rewrite we need to allocate new skbs large enough to hold
>
> NTH - frame header
> NDP x 1 - index table, with minimum two entries (1 datagram + terminator)
> datagram x 1 - ethernet frame
>
> This gives the minimum "tx_max" value.
>
> The device is supposed to tell us the maximum "tx_max" value in
> dwNtbOutMaxSize. In theory. In practice we cannot trust the device, as
> you point out. We already deal with too large values (which are
> commonly seen in real products), but we also need to deal with too low
> values.
>
> I believe the "too low" is defined by the calculated minimum value, and
> the comment indicates that this is what I tried to express but failed.
Right, that's how I read it too.
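FWIW, plugging the NCM16 struct sizes into that list gives the same 1542
as your example. Rough sketch, made-up helper name, not meant as a drop-in
for the driver:

#include <linux/if_ether.h>	/* ETH_FRAME_LEN (1514) */
#include <linux/types.h>
#include <linux/usb/cdc.h>	/* usb_cdc_ncm_{nth16,ndp16,dpe16} */

/* Minimum NTB16 able to carry a single max-sized Ethernet frame */
static u32 ncm16_min_tx_max(void)
{
	return sizeof(struct usb_cdc_ncm_nth16) +	/* 12: NTH, frame header */
	       sizeof(struct usb_cdc_ncm_ndp16) +	/*  8: NDP, index table */
	       2 * sizeof(struct usb_cdc_ncm_dpe16) +	/*  8: 1 datagram + terminator */
	       ETH_FRAME_LEN;				/* 1514: one ethernet frame */
}							/* = 1542 */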
> > Why would the platform ever need to over-ride this? The platform
> > can't make the data area smaller since there won't be enough room. It
> > could perhaps make it bigger, but the min_t() and clamp_t() macros
> > will end up choosing the above allocation anyway.
> >
> > This leaves me feeling a little perplexed.
> >
> > If there isn't a good reason for over-riding then I could simplify
> > cdc_ncm_check_tx_max() greatly.
> >
> > What do *you* think? :)
>
> I also have the feeling that this could and should be simplified. This
> discussion shows that refactoring is required.
I'm happy to help with the coding, if we agree on a solution.
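
For the clamping part, something along these lines is what I have in mind
(untested sketch, made-up helper name, just to give us something concrete
to poke at):

#include <linux/minmax.h>	/* clamp_t() */
#include <linux/types.h>

/*
 * Never let a too-small or zero dwNtbOutMaxSize drag the buffer below
 * the calculated single-datagram minimum; only then clamp the requested
 * value.
 */
static u32 ncm_clamp_tx_max(u32 new_tx, u32 min, u32 max)
{
	if (max < min)		/* bogus or unset dwNtbOutMaxSize */
		max = min;

	return clamp_t(u32, new_tx, min, max);
}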
> git blame makes this all too embarrassing ;-)
:D
--
Lee Jones [李琼斯]
Senior Technical Lead - Developer Services
Linaro.org │ Open source software for Arm SoCs
Follow Linaro: Facebook | Twitter | Blog