Message-ID: <ad26a7a3-38b1-5cbc-b4ed-ea5626a74bd8@gmail.com>
Date:   Thu, 9 Jul 2020 15:22:07 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     "YU, Xiangning" <xiangning.yu@...baba-inc.com>,
        Eric Dumazet <eric.dumazet@...il.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v2 2/2] net: sched: Lockless Token Bucket (LTB)
 qdisc



On 7/9/20 11:20 AM, YU, Xiangning wrote:
> 
> 
> On 7/9/20 10:15 AM, Eric Dumazet wrote:
>>
>> Well, at Google we no longer have this issue.
>>
>> We adopted the EDT model, so that rate limiting can be done in eBPF by simply adjusting skb->tstamp.
>>
>> The qdisc is MQ + FQ.
>>
>> Stanislav Fomichev will present this use case at the netdev conference
>>
>> https://netdevconf.info/0x14/session.html?talk-replacing-HTB-with-EDT-and-BPF
>>
> This is cool, I would love to learn more about this!
> 
> Still, please correct me if I'm wrong: this looks more like pacing on a per-flow basis. How do you support overall rate limiting across multiple flows? An individual flow has no global view of the rate usage of the others.
> 


No, this is really per-aggregate rate limiting; multiple TCP/UDP flows can share the same class.

Before that, we would have between 10 and 3000 HTB classes on a host.
We had internal code to bypass HTB (on the bond0 device) for non-throttled packets,
since HTB could hardly cope with more than 1 Mpps.

Now, an eBPF program (run from sch_handle_egress()) uses maps to perform classification
and (optionally) rate limiting based on various rules.
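
To make that concrete, here is a rough sketch of such a tc egress program
(not the actual implementation; the map layout, the agg_map/edt_rate_limit
names, and classifying by skb->mark are made up for illustration). It looks
up the packet's aggregate, computes the earliest departure time at the
aggregate's rate, and stamps it into skb->tstamp; FQ then holds the packet
until that time:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

struct aggregate_state {
	__u64 rate_bps;     /* configured rate for this aggregate */
	__u64 next_tstamp;  /* departure time earned by previous packets */
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 4096);
	__type(key, __u32);                   /* aggregate id */
	__type(value, struct aggregate_state);
} agg_map SEC(".maps");

SEC("tc")
int edt_rate_limit(struct __sk_buff *skb)
{
	/* Classification: assume the aggregate id was placed in skb->mark
	 * (e.g. by the application or an earlier cgroup program). */
	__u32 agg_id = skb->mark;
	struct aggregate_state *st = bpf_map_lookup_elem(&agg_map, &agg_id);

	if (!st || !st->rate_bps)
		return TC_ACT_OK;  /* non-throttled traffic is untouched */

	__u64 now = bpf_ktime_get_ns();
	/* Wire time this packet consumes at the aggregate's rate. */
	__u64 delay_ns = (__u64)skb->len * 8ULL * 1000000000ULL / st->rate_bps;

	__u64 tstamp = st->next_tstamp;
	if (tstamp < now)
		tstamp = now;

	/* EDT: set the departure time; FQ delays the packet until then. */
	skb->tstamp = tstamp;
	/* Simplification: a real program would update this atomically
	 * (e.g. a cmpxchg loop), since many CPUs share the aggregate. */
	st->next_tstamp = tstamp + delay_ns;

	return TC_ACT_OK;
}

char LICENSE[] SEC("license") = "GPL";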

MQ+FQ already does the per-flow pacing (we have been using this for 8 years now).
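
For a single flow, no eBPF is needed at all: TCP computes sk_pacing_rate
itself and FQ enforces it, and an application can cap a flow with the
SO_MAX_PACING_RATE socket option. A minimal sketch (the 100 Mbit/s figure
is arbitrary):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) { perror("socket"); return 1; }

	/* Cap this flow at 100 Mbit/s; the value is in bytes per second. */
	unsigned int rate = 100 * 1000 * 1000 / 8;
	if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
		       &rate, sizeof(rate)) < 0)
		perror("setsockopt(SO_MAX_PACING_RATE)");

	/* ... connect() and send as usual; FQ paces transparently ... */
	close(fd);
	return 0;
}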

The added eBPF code extended this pacing to be per-aggregate as well.
