Date:   Thu, 19 May 2022 14:11:12 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Paolo Abeni' <pabeni@...hat.com>,
        'Pavan Chebbi' <pavan.chebbi@...adcom.com>
CC:     Michael Chan <michael.chan@...adcom.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "mchan@...adcom.com" <mchan@...adcom.com>,
        "David Miller" <davem@...emloft.net>
Subject: RE: tg3 dropping packets at high packet rates

From: Paolo Abeni
> Sent: 19 May 2022 14:29
....
> If the packet processing is 'bursty', you can have idle time and still
> hit, now and then, the 'rx ring is [almost] full' condition. If pause
> frames are enabled, that will cause the peer to stop sending frames:
> drops can happen in the switch, and the local NIC will not notice
> (unless there are counters available for pause frames sent).

The test program sending the data does spread it out.
So it isn't sending 2000 packets with minimal IPG every 10ms.
(I'm sending from 2 systems.)
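
For illustration only, a paced sender shaped roughly like the
hypothetical sketch below (not the actual test program) sleeps
between sends instead of bursting with minimal IPG:

  /* Hypothetical sketch of a paced UDP sender; the address, port
   * and interval are made-up example values, and error checking
   * is omitted. */
  #include <unistd.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  int main(void)
  {
          int fd = socket(AF_INET, SOCK_DGRAM, 0);
          struct sockaddr_in dst = { .sin_family = AF_INET,
                                     .sin_port = htons(9999) };
          char buf[64] = { 0 };

          inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
          for (;;) {
                  sendto(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&dst, sizeof(dst));
                  usleep(5);  /* spread sends out, ~200k pkt/s nominal */
          }
  }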

I don't know if pause frames are enabled (ethtool might suggest they are).
But detecting whether they are sent is another matter.
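
FWIW something like this shows the negotiated pause settings and,
where the driver exports them, pause counters (the counter names
vary per driver, so the grep is only a guess):

  ethtool -a eth0                  # pause autoneg/rx/tx settings
  ethtool -S eth0 | grep -i pause  # pause counters, if exposed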

In any case sending pause frames doesn't fix anything.
They are largely useless unless a cable directly connects
the two computers.

> AFAICS the packet processing is bursty, because enqueuing packets to a
> remote CPU is considerably faster than full network stack processing.

I have taken restricted ftrace traces on the receiving system.
I've not often seen more than 4 frames processed in one napi
callback, and certainly didn't spot the blocks of 100+ you might
expect to see if the driver code were the bottleneck.
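
For reference, on kernels new enough that the napi:napi_poll
tracepoint includes the work/budget fields, the per-poll frame
count can be read directly:

  cd /sys/kernel/debug/tracing
  echo 1 > events/napi/napi_poll/enable
  cat trace_pipe          # each event line shows work=N budget=M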

> Side note: on a not-to-obsolete H/W the kernel should be able to
> process >1mpps per cpu.

Yes, and, IIRC, a 33MHz 486 can saturate 10Mbit/s Ethernet with
small packets.
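
(Back of the envelope: a minimum-size frame is 64 bytes plus 8
bytes of preamble and 12 bytes of inter-frame gap, i.e. 84 bytes
or 672 bits on the wire, so:

  10,000,000 bit/s / 672 bits/frame ~= 14,880 frames/sec

is the classic small-packet line rate for 10Mbit Ethernet.)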

In this case the CPUs are almost twiddling their thumbs:
  model name      : Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
  stepping        : 2
  microcode       : 0x43
  cpu MHz         : 1300.000
CPU 14 (the one taking the interrupts) is running at full speed.

The CPU doesn't seem to be the bottleneck.
The problem seems to be the hardware not using all the buffers
it has been given.
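
FWIW the configured and maximum rx ring sizes can be checked, and
the rx ring grown within the reported maximum, with:

  ethtool -g eth0          # current vs. maximum ring sizes
  ethtool -G eth0 rx 2048  # example value; use what -g reports as the max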

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
