Message-ID: <e2a8ac28-f6ee-25e7-6cb9-cc28369b030a@iogearbox.net>
Date: Tue, 3 Aug 2021 10:08:40 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Cong Wang <xiyou.wangcong@...il.com>
Cc: Peilin Ye <yepeilin.cs@...il.com>,
Jamal Hadi Salim <jhs@...atatu.com>,
Jiri Pirko <jiri@...nulli.us>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Cong Wang <cong.wang@...edance.com>,
Peilin Ye <peilin.ye@...edance.com>,
Alexei Starovoitov <ast@...nel.org>,
John Fastabend <john.fastabend@...il.com>
Subject: Re: [PATCH net-next 1/2] net/sched: sch_ingress: Support clsact egress mini-Qdisc option

On 8/3/21 2:08 AM, Cong Wang wrote:
> On Mon, Aug 2, 2021 at 2:11 PM Daniel Borkmann <daniel@...earbox.net> wrote:
>>
>> NAK, just use the clsact qdisc in the first place, which has both ingress and
>> egress support, instead of adding such a hack. You already need to change your
>> scripts for clsact-on, so just swap 'tc qdisc add dev eth0 ingress' to 'tc qdisc
>> add dev eth0 clsact' w/o needing to change the kernel.
>
> If we were able to change the "script" as easily as you described,
> you would not even see such a patch. The fact is that it is not under
> our control; the most we can do is change the qdisc after it is
> created by the "script", ideally without interfering with its traffic,
> hence we have such a patch.
>
> (BTW, it is actually not a script, it is a cloud platform.)
Sigh, so you're trying to solve a non-technical issue with one cloud provider by
taking a detour and unnecessarily extending the kernel with functionality that
already exists in another qdisc (and potentially waiting a few years until they
eventually upgrade). I presume Bytedance is a big enough entity to make a case
for that provider to change it. After all, swapping ingress with clsact for such
a script is completely transparent and there is nothing that would break. (Fwiw,
across all the major cloud providers we have never seen such an issue in our
deployments.)
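For reference, a minimal sketch of the swap in question (eth0 and the matchall
filter are only illustrative placeholders, not the provider's actual setup):

  # before: ingress-only qdisc, filters attach to the ingress hook
  tc qdisc add dev eth0 ingress
  tc filter add dev eth0 ingress protocol all matchall action pass

  # after: clsact provides the same ingress hook plus an egress one;
  # existing ingress filters attach exactly the same way
  tc qdisc del dev eth0 ingress
  tc qdisc add dev eth0 clsact
  tc filter add dev eth0 ingress protocol all matchall action pass
  tc filter add dev eth0 egress protocol all matchall action pass
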
Thanks,
Daniel