Message-ID: <20171205181052.GD3327@localhost.localdomain>
Date:   Tue, 5 Dec 2017 16:10:52 -0200
From:   Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
To:     Michael Chan <michael.chan@...adcom.com>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        Or Gerlitz <gerlitz.or@...il.com>,
        David Miller <davem@...emloft.net>,
        Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 2/4] bnxt_en: Use NETIF_F_GRO_HW.

On Mon, Dec 04, 2017 at 04:07:15PM -0800, Michael Chan wrote:
> As already pointed out, GRO_HW is a subset of GRO.  Packets that
> cannot be aggregated in hardware (due to hardware resource limitations
> or protocol types that it doesn't handle) can just be passed to the
> stack for GRO aggregation.

How would the parameters/limits work in this case? I mean, currently
we have the default weight of 64 packets per napi poll cycle, the
budget of 300 per cycle and also the time constraint,
net.core.netdev_budget_usecs.
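
Just to line those three knobs up, here is a rough user-space model of
how I read their interaction. poll_one_napi() and the 10us per-packet
cost are made up for illustration; this is not driver code, and the
2000us default is from memory:

/* Toy model of the softirq budget accounting, not kernel code. */
#include <stdio.h>

#define NAPI_WEIGHT        64    /* per-device packets per poll     */
#define NETDEV_BUDGET      300   /* net.core.netdev_budget          */
#define NETDEV_BUDGET_USEC 2000  /* net.core.netdev_budget_usecs    */

/* Hypothetical stub: how many packets one device hands up this poll. */
static int poll_one_napi(int dev, int weight)
{
	int ready = 40 + dev * 30;              /* fake backlog      */
	return ready < weight ? ready : weight; /* capped at weight  */
}

int main(void)
{
	int budget = NETDEV_BUDGET;
	long elapsed_usec = 0;

	for (int dev = 0; dev < 8; dev++) {
		int done = poll_one_napi(dev, NAPI_WEIGHT);

		budget -= done;
		elapsed_usec += done * 10;   /* assume 10us/packet   */
		printf("dev%d: %d packets, budget left %d\n",
		       dev, done, budget);

		/* the softirq gives up when either limit is hit */
		if (budget <= 0 || elapsed_usec >= NETDEV_BUDGET_USEC) {
			printf("budget exhausted, softirq reschedules\n");
			break;
		}
	}
	return 0;
}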

With GRO_HW, this 64 limit may be exceeded. I'm looking at the qede
code and it works by counting each completion as 1 rcv_pkts
(qede_fp.c:1318). So if it now gets 64 packets, that's roughly up to
64*MTU bytes per poll, GRO'ed or not. But with GRO_HW, it seems it
may be much more than that, which may not be fair to other interfaces
in the system. Drivers supporting GRO_HW probably should account for
this.
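
Back-of-the-envelope, assuming a 1500-byte MTU and up to 64KB per
hardware-aggregated packet (both assumptions on my side): 64 * 1500
is roughly 94KB per poll cycle today, while 64 * 64KB would be 4MB,
i.e. around 40x more bytes charged as the same 64 packets.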

And how can one control how much time a packet may spend on the NIC
queue waiting to be GRO'ed? Does it use the coalescing parameters for
that?
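
If it is the usual rx coalescing knobs, I'd expect something along
the lines of (eth0 as a placeholder, and I'm only guessing these are
the relevant parameters):

  ethtool -c eth0                # show current coalescing settings
  ethtool -C eth0 rx-usecs 25    # shorten the rx coalescing window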

  Marcelo
