Message-ID: <878vb3ufnx.fsf@nemi.mork.no>
Date: Fri, 19 Oct 2012 08:41:54 +0200
From: Bjørn Mork <bjorn@...k.no>
To: Alexey Orishko <alexey.orishko@...il.com>
Cc: Oliver Neukum <oliver@...kum.org>, netdev@...r.kernel.org,
linux-usb@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Greg Suarez <gpsuarez2512@...il.com>,
"Fangxiaozhi \(Franko\)" <fangxiaozhi@...wei.com>,
Dan Williams <dcbw@...hat.com>,
Aleksander Morgado <aleksander@...edo.com>
Subject: Re: [PATCH net-next 02/14] net: cdc_ncm: use device rx_max value if update failed
Alexey Orishko <alexey.orishko@...il.com> writes:
> On Fri, Oct 19, 2012 at 12:09 AM, Bjørn Mork <bjorn@...k.no> wrote:
>> Oliver Neukum <oliver@...kum.org> writes:
>>> On Thursday 18 October 2012 22:40:55 Bjørn Mork wrote:
>>>> If the device refuses our updated value, then we must be prepared
>>>> to receive URBs as big as the device wants to send. Set rx_max
>>>> to the device provided value on error.
>>>
>>> Problematic in principle. How do you allocate a buffer of arbitrary size?
>>
>> You cannot of course. You can only try and give up if it doesn't work.
>> rx_submit would end up returning -ENOMEM, but we are not always checking
>> that so it will most likely fail silently.
>>
>> But I don't think we can just continue with the smaller buffer size
>> without having the device agree to that either. That is also likely to
>> fail silently. Note that this patch was added exactly because one of
>> the MBIM test devices did refuse the lower rx_max we tried to enforce.
>> The device insists on using 128kB buffers.
>>
>> Maybe we should cap it at some arbitrary reasonable value, and just bail
>> out from bind if the device insists on a larger buffer? Would that be
>> OK? How big buffers are (semi-)reasonable?
>>
>
> I recommend to drop this.
OK, will drop patch 01 (only necessary if some usbnet minidriver uses
buffers > 60 * 1518) and 02 from the next version of this series.
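For reference, that 60 * 1518 limit is the MAX_QUEUE_MEMORY constant
usbnet uses to size its URB queues. Quoting the macros from memory, so
the exact form may be slightly off:

#define MAX_QUEUE_MEMORY	(60 * 1518)
/* with rx_urb_size above MAX_QUEUE_MEMORY this evaluates to 0 for
 * high speed devices, which is the case patch 01 was meant to handle
 */
#define RX_QLEN(dev)	(((dev)->udev->speed == USB_SPEED_HIGH) ? \
			 (MAX_QUEUE_MEMORY / (dev)->rx_urb_size) : 4)
#define TX_QLEN(dev)	(((dev)->udev->speed == USB_SPEED_HIGH) ? \
			 (MAX_QUEUE_MEMORY / (dev)->hard_mtu) : 4)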
> Vendor has to fix firmware.
I agree in principle, and I'll report the problem to them. But as usual
I believe we have to support any weird firmware we encounter, if at all
possible.
> Current version of the driver supports 16-bit NTB, which means you can address
> (64K only - NTB header). So, how do you plan to use 64K-128K buffer space,
> if it can't be addressed by 16 bit offset?
This is of course true. The device does obey the 16bit header choice,
so I would hope it does not send us buffers larger than a 16bit NTB can
address. But it does send buffers in the range 32K-64K, which makes the
current driver fail in a rather ugly way.
I assume the current CDC_NCM_NTB_MAX_SIZE_RX is set to 32K as a sane
default, but how about allowing up to 64K for devices which do not
accept this? The other options are
- silently failing, or
- refusing to load with an error the user cannot do anything with,
and I don't think either of those is wanted, even if the NCM spec is
quite clear that the device is wrong here.
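To be concrete, what I have in mind for bind is roughly this (untested
sketch; cdc_ncm_set_ntb_input_size() and CDC_NCM_NTB_HARD_MAX_RX are
made-up names standing in for the existing control request code and a
new 64K ceiling):

static int cdc_ncm_setup_rx_max(struct cdc_ncm_ctx *ctx)
{
	/* prefer our own 32K default, but never ask for more than
	 * the device advertises
	 */
	u32 want = min_t(u32, le32_to_cpu(ctx->ncm_parm.dwNtbInMaxSize),
			 CDC_NCM_NTB_MAX_SIZE_RX);
	int err;

	err = cdc_ncm_set_ntb_input_size(ctx, want);
	if (!err) {
		ctx->rx_max = want;
		return 0;
	}

	/* device refused our value: fall back to what it insists on,
	 * but never beyond what a 16bit NTB can address
	 */
	ctx->rx_max = le32_to_cpu(ctx->ncm_parm.dwNtbInMaxSize);
	if (ctx->rx_max > CDC_NCM_NTB_HARD_MAX_RX)
		return -EINVAL;	/* refuse to bind; device is out of spec */

	return 0;
}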
> Another angle to big buffers, even while using 64K buffers your TCP connection
> will suffer, so what's the point making huge buffers?
Agreed. There is no point. It's bloat, which makes you kind of wonder
why they bothered to define a separate 32bit header format at all,
complicating the protocol quite a lot... Or why those writing the MBIM
spec didn't take the opportunity to remove this useless complication?
I am not holding it against you though ;-)
A nice side effect of the refactoring done to support MBIM is that most
of the 16bit header parsing has been isolated in separate functions,
making it trivial to implement the missing 32bit header support. Maybe
we should do that?
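For the record, the 32bit header is the same layout with the length and
index fields widened, so the verify step would look roughly like this
(untested sketch; cdc_ncm_verify_nth32() is a made-up name, while
struct usb_cdc_ncm_nth32 and USB_CDC_NCM_NTH32_SIGN are already defined
in linux/usb/cdc.h):

static int cdc_ncm_verify_nth32(struct cdc_ncm_ctx *ctx,
				struct sk_buff *skb_in)
{
	struct usb_cdc_ncm_nth32 *nth32;

	if (skb_in->len < sizeof(*nth32))
		return -EINVAL;

	nth32 = (struct usb_cdc_ncm_nth32 *)skb_in->data;

	if (nth32->dwSignature != cpu_to_le32(USB_CDC_NCM_NTH32_SIGN))
		return -EINVAL;

	if (le32_to_cpu(nth32->dwBlockLength) > ctx->rx_max)
		return -EINVAL;

	/* return the offset of the first NDP32 */
	return le32_to_cpu(nth32->dwNdpIndex);
}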
Bjørn