Message-ID: <ee52c2e4-4199-da40-8e86-57ef4085c968@iogearbox.net>
Date: Fri, 14 Apr 2023 11:34:58 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
Yafang Shao <laoar.shao@...il.com>, davem@...emloft.net,
edumazet@...gle.com, kuba@...nel.org, pabeni@...hat.com,
ast@...nel.org, hawk@...nel.org, john.fastabend@...il.com
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
Jesper Dangaard Brouer <brouer@...hat.com>,
Tonghao Zhang <xiangxia.m.yue@...il.com>, martin.lau@...ux.dev
Subject: Re: [PATCH net-next] bpf, net: Support redirecting to ifb with bpf
On 4/13/23 4:43 PM, Toke Høiland-Jørgensen wrote:
> Daniel Borkmann <daniel@...earbox.net> writes:
>
>>> 2) We can't redirect ingress packets to ifb with bpf.
>>> While analyzing whether it is possible to redirect ingress packets to
>>> ifb with a bpf program, we found that the ifb device is not yet
>>> supported by bpf redirect.
>>
>> You actually can: just have the BPF program return TC_ACT_UNSPEC for
>> this case and then add a matchall filter with higher prio (so it runs
>> after bpf) containing a mirred egress redirect action that pushes to
>> the ifb dev - no kernel change is needed.
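>>
>> Roughly like this (untested sketch; eth0, ifb0 and prog.o are
>> placeholders for whatever devices / object file you actually use):
>>
>>   # tc qdisc add dev eth0 handle ffff: ingress
>>   # tc filter add dev eth0 parent ffff: prio 1 bpf da obj prog.o sec tc
>>   # tc filter add dev eth0 parent ffff: prio 2 matchall \
>>       action mirred egress redirect dev ifb0
>>
>> ... where the program simply falls through for traffic it wants shaped:
>>
>>   #include <linux/bpf.h>
>>   #include <linux/pkt_cls.h>
>>
>>   __attribute__((section("tc"), used))
>>   int tc_ingress(struct __sk_buff *skb)
>>   {
>>           /* ... policy decision here ... */
>>
>>           /* Continue classification, so the prio 2 matchall picks
>>            * the packet up and mirreds it to ifb0.
>>            */
>>           return TC_ACT_UNSPEC;
>>   }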
>
> I wasn't aware that BPF couldn't redirect directly to an IFB; any reason
> why we shouldn't merge this patch in any case?
>
>>> This patch tries to resolve it by supporting redirecting to ifb with bpf
>>> program.
>>>
>>> Ingress bandwidth limiting is useful in some scenarios. For example, a
>>> TCP-based service may have many clients connecting to it, so it is not
>>> practical to limit each client's egress. Limiting the server side's
>>> ingress instead lowers the clients' send rate by shrinking their TCP
>>> cwnd once the ingress bandwidth limit is reached. Without such a limit,
>>> the clients would keep sending requests at a high rate.
>>
>> Adding artificial queueing for the inbound traffic, aren't you worried
>> about DoS'ing your node?
>
> Just as an aside, the ingress filter -> ifb -> qdisc-on-the-ifb-interface
> combination does work surprisingly well, and we've been using it over in
> OpenWrt land for years[0]. It does have some overhead associated with it,
> but I wouldn't expect it to be a source of self-DoS in itself (assuming
> well-behaved TCP traffic).
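>
> For reference, the setup is roughly this (from memory; device names and
> the rate are made up for illustration, SQM generates them per device):
>
>   # ip link add ifb4eth0 type ifb
>   # ip link set ifb4eth0 up
>   # tc qdisc add dev eth0 handle ffff: ingress
>   # tc filter add dev eth0 parent ffff: matchall \
>       action mirred egress redirect dev ifb4eth0
>   # tc qdisc replace dev ifb4eth0 root cake bandwidth 100mbit
>
> i.e., all the actual shaping happens on the egress side of the ifb
> device.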
Out of curiosity, wrt the OpenWrt case, can you elaborate on the use case,
i.e. why do this on ingress via ifb rather than on the egress side? I
presume it's a regular router in this case, so packets would be forwarded
anyway, and in your setup they traverse the qdisc layer / queueing twice
(ingress phys dev -> ifb, egress phys dev), right? What is the rationale
that justifies such a setup, i.e. why can't it be solved differently?
Thanks,
Daniel
> -Toke
>
> [0] https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm