Message-ID: <20171219190426.GF6122@localhost.localdomain>
Date: Tue, 19 Dec 2017 17:04:27 -0200
From: Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
To: David Miller <davem@...emloft.net>
Cc: michael.chan@...adcom.com, netdev@...r.kernel.org,
andrew.gospodarek@...adcom.com
Subject: Re: [PATCH net-next v5 0/5] Introduce NETIF_F_GRO_HW
On Tue, Dec 19, 2017 at 10:50:24AM -0500, David Miller wrote:
> From: Michael Chan <michael.chan@...adcom.com>
> Date: Sat, 16 Dec 2017 03:09:39 -0500
>
> > Introduce NETIF_F_GRO_HW feature flag and convert drivers that support
> > hardware GRO to use the new flag.
>
> Series applied, thanks for following through with this work.
Can we clarify the meaning/expectations of dev_weight? The
documentation currently says:
The maximum number of packets that kernel can handle on a NAPI
interrupt, it's a Per-CPU variable.
I believe 'packets' here refers to packets on the wire.
For drivers doing LRO, we don't have visibility into how many
packets were aggregated, so each frame counts as 1, aggregated or not.
But drivers implementing GRO_HW will get a bonus on their
dev_weight because instead of pulling 5 packets in a cycle to create 1
gro'ed skb, they will pull 1 big packet (which includes the 5) and
count it as 1.
I understand that, for all practical purposes, the hardware
operations involved in GRO_HW are really for only 1 packet, so it
would make sense to count it as 1. OTOH, this bump may cause
additional pressure in other places, as we are in fact letting more
packets in during a given cycle.
At least the qede driver counts 1 GRO_HW pkt as 1 budget unit.
Thanks,
Marcelo