Message-ID: <2AC7D4AD8BA1C640B4C60C61C8E520154A8EE18283@EXDCVYMBSTM006.EQ1STM.local>
Date: Tue, 22 Jan 2013 16:51:24 +0100
From: Alexey ORISHKO <alexey.orishko@...ricsson.com>
To: Bjørn Mork <bjorn@...k.no>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-usb@...r.kernel.org" <linux-usb@...r.kernel.org>,
Greg Suarez <gsuarez@...thmicro.com>,
Oliver Neukum <oneukum@...e.de>,
Alexey Orishko <alexey.orishko@...il.com>
Subject: RE: [PATCH net 2/3] net: cdc_mbim: send ZLP after max sized NTBs
Hi Bjørn,
> -----Original Message-----
> From: Bjørn Mork [mailto:bjorn@...k.no]
> Sent: Tuesday, January 22, 2013 10:54 AM
>
> > If you add ZLP for NTBs of dwNtbOutMaxSize, you are heavily affecting
> > CPU load, increasing the interrupt load by a factor of 2 in high load
> > traffic scenarios and possibly decreasing throughput for all other
> > devices which behave correctly.
>
> The current cdc_ncm/cdc_mbim drivers will pad an NTB to the full
> dwNtbOutMaxSize whenever it reaches at least 512 bytes. The reason is
> that this allows more efficient device DMA operation. This is
> something we do to adapt to device hardware restrictions even though
> there are no such recommendations in the NCM/MBIM specs.
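For reference, the padding behaviour described above boils down to
something like the following (a simplified sketch, not the actual
cdc_ncm code; tx_max stands for the negotiated dwNtbOutMaxSize and the
512-byte threshold is the one mentioned above):

#include <linux/skbuff.h>
#include <linux/string.h>

/* Simplified sketch: once the aggregated NTB reaches 512 bytes, pad it
 * out to the full dwNtbOutMaxSize (tx_max) so the device can program a
 * single fixed-size DMA job per NTB.
 */
static void ntb_pad_to_max(struct sk_buff *skb_out, u32 tx_max)
{
	u32 pad = tx_max - skb_out->len;

	if (skb_out->len >= 512 && pad > 0)
		memset(skb_put(skb_out, pad), 0, pad);
}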
There are a lot of things which were discussed during development of
the specifications but did not end up in the final version of the spec
for various reasons. Companies attending USB-IF F2F meetings can benefit
from discussions between member companies and get access to additional
information not visible outside of USB-IF.
> The penalty on
> the host and bus should be obvious: Even with a quite small
> dwNtbOutMaxSize of 4096, we end up sending 8 x 512-byte data packets
> instead of the 2 we could have managed with.
It was intentional and it's the developer's choice (see also the last
comment). It's a straightforward approach, but this limit could be a
dynamic value based on statistics and the current load.
>
> Now you claim that sending 9 packets, where the last one is a zero
> length packet, increases the interrupt load by a factor of 2? How is
> that?
You set up a DMA job to receive a full NTB and get a single interrupt
when the job is done. It means that while the DMA is collecting data,
the CPU can do something else (e.g. send data to the network stack).
In other protocols you need to indicate the end of data, but in
NCM/MBIM we know for sure that the host is not allowed to send more
than dwNtbOutMaxSize bytes, so a ZLP is not needed.
If the host decides to send a ZLP after a full NTB, the CPU must handle
an additional interrupt for every full NTB instead of doing useful work.
For an FTP transfer with constantly full NTBs you get twice the number
of interrupts.
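As context for the interrupt argument, the host-side choice the patch
in the subject is about looks roughly like this (an illustrative
sketch, not the actual patch; URB_ZERO_PACKET is the USB core flag,
the other names are placeholders):

#include <linux/skbuff.h>
#include <linux/usb.h>

/* Illustrative sketch: when the NTB length is an exact multiple of the
 * bulk endpoint size, the host either terminates the transfer with a
 * ZLP or pads by one byte so the last packet is short. An NTB already
 * at dwNtbOutMaxSize has no room for padding, hence the ZLP.
 */
static void ntb_terminate_tx(struct urb *urb, struct sk_buff *skb_out,
			     u32 tx_max, u32 maxpacket)
{
	if ((skb_out->len % maxpacket) == 0) {
		if (skb_out->len >= tx_max) {
			/* full NTB: end the transfer with a zero length packet */
			urb->transfer_flags |= URB_ZERO_PACKET;
		} else {
			/* room left below tx_max: one pad byte keeps the last packet short */
			*(u8 *)skb_put(skb_out, 1) = 0;
		}
	}
	urb->transfer_buffer_length = skb_out->len;
}

On the device side, every such ZLP is one more transfer completion,
which is the extra interrupt per full NTB described above.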
> Why are the device DMA restrictions not a fault, while the device ZLP
> requirement is? Both seem like reasonable device hardware/firmware
> implementation-imposed restrictions to me. Something we'll just have
> to accept.
Both specifications (NCM/MBIM) were written from the point of view of
the host (most likely a PC with a 2/4-core CPU) being more powerful
than the device, and with the requirement for the host to honor device
limitations.
Regards,
Alexey