Message-Id: <20181107.220759.1889877577682317113.davem@davemloft.net>
Date: Wed, 07 Nov 2018 22:07:59 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: ruxandra.radulescu@....com
Cc: netdev@...r.kernel.org, ioana.ciornei@....com
Subject: Re: [PATCH net-next] dpaa2-eth: Introduce TX congestion management
From: Ioana Ciocoi Radulescu <ruxandra.radulescu@....com>
Date: Wed, 7 Nov 2018 10:31:16 +0000
> We chose this mechanism over BQL (to which it is conceptually
> very similar) because a) we can take advantage of the hardware
> offloading and b) BQL doesn't match well with our driver fastpath
> (we process ingress (Rx or Tx conf) frames in batches of up to 16,
> which in certain scenarios confuses the BQL adaptive algorithm,
> resulting in too low values of the limit and low performance).
First, this kind of explanation belongs in the commit message.
Second, you'll have to describe better what goes wrong with BQL,
which is the standard mechanism every single driver in the
kernel uses to deal with this issue.
Are you saying that if 15 TX frames are pending, no TX interrupt
will arrive at all?
There absolutely must be some timeout or similar interrupt that gets
sent in that kind of situation. You cannot leave stale TX packets
on your ring unprocessed just because a non-multiple of 16 packets
were queued up and then TX activity stopped.