Date:   Fri, 2 Dec 2022 08:53:43 -0800
From:   Dave Taht <dave.taht@...il.com>
To:     Linux Kernel Network Developers <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>, xdp-newbies@...r.kernel.org
Cc:     libreqos <libreqos@...ts.bufferbloat.net>,
        Cake List <cake@...ts.bufferbloat.net>
Subject: questions: LPM -> irq or queue mapping on the card for cake?

all:

After a pretty long layoff from kernel development of any sort on my
part... we're pushing the limits again, trying to get cake to support
tens of thousands of ISP subscribers: leveraging xdp for reads, ebpf
"pping" for inline monitoring of TCP RTTs, then a ton of htb bins +
cake for each subscriber.

shameless plugs:

https://github.com/thebracket/cpumap-pping#cpumap-pping # everybody
needs Kathie Nichols and Simon's pping!!!
https://github.com/LibreQoE

(even more shameless plug - ask your isp to try libreqos.io out -
presently at 10k users, pushing 11Gbit/sec at 24% of 16 cores! I'm
really, really amazed by all this; I remember struggling to get cake
to crack 50Mbit on a 600MHz MIPS box a decade back. Way to go
everyone! - love, rip van winkle)

Anyway... after adopting xdp fully over the past couple of months,
most of the CPU time is now spent in htb. While I see htb has been
successfully offloaded by the mlx5, it's not clear whether that
offload can pull from cake as an underlying qdisc, or only from a
pfifo, nor how much buffering offloads like this introduce. Are there
other cards with an htb offload now?

Secondly -

The xdp path is marvelous, but in trying to drive this transparent
bridge to 100Gbit, I find myself wanting something old-fashioned and,
in my mind, simpler than a match pattern: is there any ethernet card
that lets you do a TCAM mapping of a large (say, 128 thousand) set of
IP addresses to an irq or queue?

1.2.3.4/29 -> irq 8 (or hw queue 8)
1.2.4.1/32 -> irq 9 (or hw queue 9)
a:b:c:d::/56 -> irq 8 (etc., for tens of thousands of other IP
addresses, 1 or more LPM matches per subscriber)

It needn't be big enough to fill a BGP table...
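
Absent that hardware, the fallback I have in mind is doing the same
LPM in software at the xdp layer - a minimal, untested sketch (map
names and sizes are made up), using an LPM trie to pick a CPU and a
cpumap to redirect there:

/* lpm_steer.c - hypothetical sketch: map subscriber prefixes to the
 * CPU running that subscriber's htb+cake tree, via an LPM trie
 * lookup and a cpumap redirect in XDP. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct lpm_key {
	__u32 prefixlen;
	__u32 addr;		/* IPv4; a v6 variant needs a 128-bit field */
};

struct {
	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
	__uint(max_entries, 131072);	/* ~128k subscriber prefixes */
	__type(key, struct lpm_key);
	__type(value, __u32);		/* target CPU */
	__uint(map_flags, BPF_F_NO_PREALLOC);
} subscriber_cpu SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, struct bpf_cpumap_val);	/* qsize set from user space */
} cpu_map SEC(".maps");

SEC("xdp")
int steer(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	struct lpm_key key;
	__u32 *cpu;

	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;	/* v6 handling elided */

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;

	key.prefixlen = 32;	/* longest-prefix match from /32 down */
	key.addr = iph->daddr;

	cpu = bpf_map_lookup_elem(&subscriber_cpu, &key);
	if (!cpu)
		return XDP_PASS;

	/* fall back to XDP_PASS if the cpumap entry is missing */
	return bpf_redirect_map(&cpu_map, *cpu, XDP_PASS);
}

char _license[] SEC("license") = "GPL";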

This is different from RPS in that we don't want an rxhash to spread
the load "evenly", but to always direct a given set of IP addresses
to a particular core, on a particular queue - one that is set up to
do all the htb-ing (32k unique bins per core, e.g. 1.9m bins on 64
cores) and cake-ing.
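
The control-plane side of the software version is then just a loop
over the subscriber database, pushing prefix -> cpu entries into that
trie. Hypothetical snippet, assuming the map from the sketch above is
pinned at /sys/fs/bpf/subscriber_cpu:

#include <arpa/inet.h>
#include <stdio.h>
#include <bpf/bpf.h>

struct lpm_key {	/* must match the bpf-side key layout */
	__u32 prefixlen;
	__u32 addr;
};

int main(void)
{
	/* the pin path and the 1.2.3.4/29 -> cpu 8 assignment are
	 * made-up examples matching the list above */
	int fd = bpf_obj_get("/sys/fs/bpf/subscriber_cpu");
	struct lpm_key key = { .prefixlen = 29 };
	__u32 cpu = 8;

	if (fd < 0)
		return 1;
	inet_pton(AF_INET, "1.2.3.4", &key.addr);
	if (bpf_map_update_elem(fd, &key, &cpu, BPF_ANY))
		perror("update");
	return 0;
}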

Other ideas for stepping stones on the march to 100Gbit forwarding
through tons of cake are gladly considered. We're also going to be
fiddling with the metadata stuff in xdp - the 3 hw hashes for cake,
and an rx timestamp for codel, will probably help there too.
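For the timestamp half, even before the hw metadata formats settle, a
software stamp stashed in the metadata area via bpf_xdp_adjust_meta
gives a codel-aware consumer downstream a sojourn-time reference.
Minimal, untested sketch (struct name is made up; hw rx timestamps
and hashes would come from the driver metadata work still in flux):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct rx_meta {
	__u64 rx_tstamp_ns;
};

SEC("xdp")
int stamp(struct xdp_md *ctx)
{
	struct rx_meta *meta;
	void *data;

	/* grow the metadata area in front of the packet; not all
	 * drivers support data_meta, hence the graceful bail-out */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_PASS;

	data = (void *)(long)ctx->data;
	meta = (void *)(long)ctx->data_meta;
	if ((void *)(meta + 1) > data)	/* verifier bounds check */
		return XDP_PASS;

	/* software timestamp; a hw stamp would slot in here instead */
	meta->rx_tstamp_ns = bpf_ktime_get_ns();
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";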



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
