Message-ID: <f9ded7f6-7f7d-4564-8b06-e73e603ef927@lunn.ch>
Date: Thu, 21 Mar 2024 20:42:22 +0100
From: Andrew Lunn <andrew@...n.ch>
To: Selvamani Rajagopal <Selvamani.Rajagopal@...emi.com>
Cc: "Parthiban.Veerasooran@...rochip.com" <Parthiban.Veerasooran@...rochip.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>,
"horms@...nel.org" <horms@...nel.org>,
"saeedm@...dia.com" <saeedm@...dia.com>,
"anthony.l.nguyen@...el.com" <anthony.l.nguyen@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"corbet@....net" <corbet@....net>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"robh+dt@...nel.org" <robh+dt@...nel.org>,
"krzysztof.kozlowski+dt@...aro.org" <krzysztof.kozlowski+dt@...aro.org>,
"conor+dt@...nel.org" <conor+dt@...nel.org>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"Horatiu.Vultur@...rochip.com" <Horatiu.Vultur@...rochip.com>,
"ruanjinjie@...wei.com" <ruanjinjie@...wei.com>,
"Steen.Hegelund@...rochip.com" <Steen.Hegelund@...rochip.com>,
"vladimir.oltean@....com" <vladimir.oltean@....com>,
"UNGLinuxDriver@...rochip.com" <UNGLinuxDriver@...rochip.com>,
"Thorsten.Kummermehr@...rochip.com" <Thorsten.Kummermehr@...rochip.com>,
Piergiorgio Beruto <Pier.Beruto@...emi.com>,
"Nicolas.Ferre@...rochip.com" <Nicolas.Ferre@...rochip.com>,
"benjamin.bigler@...nformulastudent.ch" <benjamin.bigler@...nformulastudent.ch>
Subject: Re: [PATCH net-next v3 08/12] net: ethernet: oa_tc6: implement
transmit path to transfer tx ethernet frames
> > > This second part is clearly an optimisation. If you have lots of full
> > > MTU packets, 1514 bytes, they take around 24 chunks. Having the last
> > > chunk only 1/2 full does not waste too much bandwidth. But if you are
> > > carrying lots of small packets, say voice, 130 bytes, the wasted
> > > bandwidth starts to add up. But is there a use case for 10Mbps of
> > > small packets? I doubt it.
> > Yes, for sure there is a possibility of getting into this scenario, and the protocol
> > also supports it. But as you propose below, let's implement it later as part of an
> > optimization.
> > >
> > > So if you don't have the ability to combine two packets into one
> > > chunk, I would do that later. Let's get the basics merged first; it
> > > can be optimised later.
> > Yes, I agree with this proposal to get the basic version merged first.
>
> While latency is important, so is using the available bandwidth efficiently. Here is a suggestion. We know that the tx credit available basically tells us
> how many chunks could be transmitted without overflow. Instead of stopping the netif queue based on the number of skbs queued, why not stop the queue based on the
> number of bytes accumulated? Basically, at any given point of time, we enqueue the tx_skb_q until we have enough bytes to cross the threshold of (tc6->tc_credit * OA_TC6_CHUNK_PAYLOAD_SIZE).
> This way, during the next transmit, we could utilize all the available credits. Bandwidth utilization between bigger frames and smaller frames would not be vastly different.
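
A rough sketch of that byte counting idea (only tc6->tc_credit and
OA_TC6_CHUNK_PAYLOAD_SIZE come from the message above; every other
name below is made up for illustration, not taken from the driver):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define OA_TC6_CHUNK_PAYLOAD_SIZE	64	/* assumed 64 byte chunk payload */

struct oa_tc6 {					/* illustrative fields only */
	struct sk_buff_head tx_skb_q;
	unsigned int tx_queued_bytes;
	u8 tc_credit;				/* 5 bit TXC value */
};

static netdev_tx_t oa_tc6_start_xmit(struct sk_buff *skb,
				     struct net_device *netdev)
{
	struct oa_tc6 *tc6 = netdev_priv(netdev);
	unsigned int budget = tc6->tc_credit * OA_TC6_CHUNK_PAYLOAD_SIZE;

	skb_queue_tail(&tc6->tx_skb_q, skb);
	tc6->tx_queued_bytes += skb->len;

	/* Enough bytes queued to fill every credited chunk: stop the
	 * queue until the SPI thread has drained some of the backlog.
	 */
	if (tc6->tx_queued_bytes >= budget)
		netif_stop_queue(netdev);

	return NETDEV_TX_OK;
}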

Please configure your email client to wrap emails at around 70
characters.

tc_credit is 5 bits. So it is a maximum of 32.

A 1514 byte frame takes around 24 chunks. So you only need two full
size frames to consume all your possible credit.

If you happen to have smaller voice packets, say 130 bytes, you need
three chunks to send each one. So you might want to have 10 such
packets on hand in order to make use of all your credit. But if you
have 10 voice packets to send in a burst, your voice quality is going
to be bad; they should be 10ms to 20ms apart, not in a burst...
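
Spelled out with the kernel's DIV_ROUND_UP(), assuming the same 64
byte chunk payload as above (the helper name is made up):

#include <linux/kernel.h>	/* DIV_ROUND_UP() */

#define OA_TC6_CHUNK_PAYLOAD_SIZE	64

static unsigned int oa_tc6_chunks_for_frame(unsigned int frame_len)
{
	return DIV_ROUND_UP(frame_len, OA_TC6_CHUNK_PAYLOAD_SIZE);
}

/*
 * oa_tc6_chunks_for_frame(1514) == 24, so two full size frames more
 * than exhaust the credit.
 * oa_tc6_chunks_for_frame(130) == 3, so around ten such packets are
 * needed to fill the whole credit window.
 */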

I don't like the original idea of having lots of packets in a
transmit queue. But having half a dozen should not be an issue.
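
Capping the backlog at that sort of depth is only a couple of lines
in the xmit path (again, the limit name and queue field are made up
for illustration, not taken from the driver):

#define OA_TC6_TX_SKB_QUEUE_MAX	6	/* ~half a dozen skbs in flight */

	/* In ndo_start_xmit(), after queueing the skb: stop the queue
	 * once the small backlog limit is reached, and wake it again
	 * from the SPI completion path once the backlog drains.
	 */
	if (skb_queue_len(&tc6->tx_skb_q) >= OA_TC6_TX_SKB_QUEUE_MAX)
		netif_stop_queue(netdev);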

In general, we prefer things to be simple. We can then optimise
later, and use benchmarks to show the optimisations really do bring a
benefit to justify the added complexity.

	Andrew