Message-ID: <CACKFLi=3u3JjVxY1F1TPBfJ2V-4OO7PKHmdcLTkQumpMpSbgww@mail.gmail.com>
Date:   Wed, 6 Dec 2017 13:04:39 -0800
From:   Michael Chan <michael.chan@...adcom.com>
To:     Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        Or Gerlitz <gerlitz.or@...il.com>,
        David Miller <davem@...emloft.net>,
        Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 2/4] bnxt_en: Use NETIF_F_GRO_HW.

On Tue, Dec 5, 2017 at 10:10 AM, Marcelo Ricardo Leitner
<marcelo.leitner@...il.com> wrote:
> On Mon, Dec 04, 2017 at 04:07:15PM -0800, Michael Chan wrote:
>> As already pointed out, GRO_HW is a subset of GRO.  Packets that
>> cannot be aggregated in hardware (due to hardware resource limitations
>> or protocol types that it doesn't handle) can just be passed to the
>> stack for GRO aggregation.
>
> How would the parameters/limits work in this case? I mean, currently
> we have the default weight of 64 packets per napi poll cycle, the
> budget of 300 per cycle and also the time constraint,
> net.core.netdev_budget_usecs.

Good point.  Currently, it is no different from LRO.  Each aggregated
packet is counted as 1.  With LRO, you don't necessarily know how many
packets were merged.  With GRO_HW, we know, so it's possible to count
the original packets towards the NAPI budget.

>
> With GRO_HW, this 64 limit may be exceeded. I'm looking at qede code
> and it works by counting each completion as 1 in rcv_pkts
> (qede_fp.c:1318). So if it now gets 64 packets, that's up to roughly
> 64*MTU of data, GRO'ed or not. But with GRO_HW, it seems it may be
> much more than that, which may not be fair to other interfaces in the
> system. Drivers supporting GRO_HW probably should account for this.

Right.  We can make this adjustment for GRO_HW in a future patchset.
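As a rough sketch of what such accounting could look like, assuming the
RX completion carries the number of wire packets the hardware merged
(all names below are hypothetical, not bnxt_en or qede API):

/*
 * Hypothetical NAPI poll fragment: charge each HW-aggregated
 * packet against the budget by its original segment count
 * instead of counting it as one packet.
 */
static int foo_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	while (work_done < budget) {
		struct foo_rx_cmpl *cmpl = foo_next_rx_cmpl();

		if (!cmpl)
			break;
		foo_rx_pkt(cmpl);
		/* 1 for a normal packet, N for a GRO_HW aggregate */
		work_done += foo_cmpl_seg_count(cmpl);
	}
	if (work_done < budget)
		napi_complete_done(napi, work_done);
	/* NAPI requires poll to return at most @budget */
	return min(work_done, budget);
}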

>
> And how can one control how much time a packet may spend on the NIC
> queue waiting to be GRO'ed? Does it use the coalescing parameters for
> that?
>

The GRO_HW timer is currently not exposed.  It's different from
interrupt coalescing.  It's possible to make this a tunable parameter
in the future.
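
Until a proper per-device interface exists, a coarse knob could look
something like the following.  This is purely illustrative, not bnxt_en
code; the parameter name and its default are invented for the example:

#include <linux/module.h>

/*
 * Purely illustrative: expose the hardware aggregation timer as
 * a module parameter.  The driver would program this value into
 * the device when configuring RX rings.  A real interface would
 * more likely be per-device (e.g. via ethtool or devlink) rather
 * than a global module parameter.
 */
static unsigned int gro_hw_timer_usecs = 64;	/* hypothetical default */
module_param(gro_hw_timer_usecs, uint, 0644);
MODULE_PARM_DESC(gro_hw_timer_usecs,
		 "Max time a packet may wait in hardware for aggregation (usecs)");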
