Message-ID: <27f7cfa13d1b5e7717e2d75595ab453951b18a96.camel@mellanox.com>
Date:   Fri, 23 Aug 2019 06:00:45 +0000
From:   Saeed Mahameed <saeedm@...lanox.com>
To:     "jakub.kicinski@...ronome.com" <jakub.kicinski@...ronome.com>
CC:     "davem@...emloft.net" <davem@...emloft.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Moshe Shemesh <moshe@...lanox.com>
Subject: Re: [net-next 4/8] net/mlx5e: Add device out of buffer counter

On Thu, 2019-08-22 at 18:33 -0700, Jakub Kicinski wrote:
> On Thu, 22 Aug 2019 23:35:52 +0000, Saeed Mahameed wrote:
> > From: Moshe Shemesh <moshe@...lanox.com>
> > 
> > Added the following packet drop counter:
> > Device out of buffer - counts packets which were dropped due to a
> > full device internal receive queue.
> > This counter will be shown in ethtool as a new counter called
> > dev_out_of_buffer.
> > The counter is read from FW by the QUERY_VNIC_ENV command.
> > 
> > Signed-off-by: Moshe Shemesh <moshe@...lanox.com>
> > Signed-off-by: Saeed Mahameed <saeedm@...lanox.com>
> 
> Sounds like rx_fifo_errors, no? Doesn't rx_fifo_errors count RX
> overruns?

No, that is the port buffer you are looking for, and we already have that
fully covered in mlx5. This counter is different; a rough sketch of how it
is fetched from FW follows below.
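
For reference, a minimal sketch of how such a FW counter can be read
through QUERY_VNIC_ENV. The opcode and the MLX5_SET/MLX5_GET/mlx5_cmd_exec
helpers follow the usual mlx5 conventions; the exact field name for the
out-of-buffer count is an assumption here, the real layout lives in
include/linux/mlx5/mlx5_ifc.h:

#include <linux/mlx5/driver.h>
#include <linux/mlx5/mlx5_ifc.h>

/* Rough sketch only: query the vNIC environment counters from FW and
 * return the device out-of-buffer drop count.
 */
static u64 query_dev_out_of_buffer(struct mlx5_core_dev *mdev)
{
	u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {};
	u32 out[MLX5_ST_SZ_DW(query_vnic_env_out)] = {};

	MLX5_SET(query_vnic_env_in, in, opcode,
		 MLX5_CMD_OP_QUERY_VNIC_ENV);

	if (mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out)))
		return 0;

	/* illustrative field name, see mlx5_ifc.h for the real one */
	return MLX5_GET64(query_vnic_env_out, out,
			  vport_env.internal_rq_out_of_buffer);
}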

This new counter sits deep in the HW data path pipeline and covers rare,
complex scenarios that were only recently introduced with switchdev mode
and some recently added tunnel offloads that are routed between VFs/PFs.

Normally the HW is lossless once a packet passes the port buffers into
the data plane pipeline; let's call that the "fast lane". BUT for SR-IOV
configurations with switchdev mode enabled, and for some special
hand-crafted tc tunnel offloads that require a hairpin between VFs/PFs,
the HW might decide to send some traffic through a "service lane". That
is still fast path, but unlike the "fast lane" it handles traffic through
"HW internal" receive and send queues (just like we do with hairpin),
and those queues might drop packets. The whole thing is transparent to
the driver and is HW implementation specific.
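
From the user's point of view the drop only shows up as the new ethtool
statistic. Just to illustrate the wiring (callback names here are made
up, not the actual mlx5e stats group code), it boils down to plugging the
helper sketched above into the .get_strings/.get_ethtool_stats ops so the
value appears under "ethtool -S <ifname>" as dev_out_of_buffer:

#include <linux/ethtool.h>
#include <linux/string.h>

static const char dev_oob_name[ETH_GSTRING_LEN] = "dev_out_of_buffer";

/* called from the driver's .get_strings ethtool op */
static void dev_oob_fill_strings(u8 *data)
{
	memcpy(data, dev_oob_name, ETH_GSTRING_LEN);
}

/* called from the driver's .get_ethtool_stats ethtool op; the value is
 * refreshed from FW on every read */
static void dev_oob_fill_stats(struct mlx5_core_dev *mdev, u64 *data)
{
	*data = query_dev_out_of_buffer(mdev);
}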

Thanks,
Saeed.

