Message-ID: <fe2e13a6-1fb6-c160-1d6f-31c09264911b@iogearbox.net>
Date: Thu, 8 Jun 2023 12:11:58 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Jamal Hadi Salim <jhs@...atatu.com>
Cc: ast@...nel.org, andrii@...nel.org, martin.lau@...ux.dev,
razor@...ckwall.org, sdf@...gle.com, john.fastabend@...il.com,
kuba@...nel.org, dxu@...uu.xyz, joe@...ium.io, toke@...nel.org,
davem@...emloft.net, bpf@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 2/7] bpf: Add fd-based tcx multi-prog infra
 with link support
Hi Jamal,
On 6/8/23 3:25 AM, Jamal Hadi Salim wrote:
[...]
> A general question (which I think I asked last time as well): who
> decides what comes before/after which prog in this setup? And would
> that same entity not have been able to make the same decision using tc
> priorities?
In the first version of the series I initially coded up an option where
tc_run() would basically be a fake 'bpf_prog' with, say, a fixed prio of
1000. It would get executed via tcx_run() when iterating via
bpf_mprog_foreach_prog() where bpf_prog_run() is called, and users could
then pick a prio for their native BPF programs before or after that. The
feedback, however, was that sticking to prio makes for a bad user
experience, which led to the development of what is now in patch 1 of
this series (see the details there).
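To give a rough idea of how that looks from user space (sketch only; the
BPF_F_BEFORE/relative_fd semantics and the bpf_prog_attach_opts() /
BPF_TCX_INGRESS names reflect my reading of the current series and may
still change):

  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>

  /* Sketch: attach prog_b_fd so it runs before the already attached
   * prog_a_fd on tcx ingress of ifindex; no prio numbers involved,
   * the ordering is expressed relative to the other program.
   */
  static int attach_before(int prog_b_fd, int prog_a_fd, int ifindex)
  {
          LIBBPF_OPTS(bpf_prog_attach_opts, opts,
                  .flags       = BPF_F_BEFORE,
                  .relative_fd = prog_a_fd,
          );

          return bpf_prog_attach_opts(prog_b_fd, ifindex,
                                      BPF_TCX_INGRESS, &opts);
  }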
> The idea of protecting programs from being unloaded is very welcome,
> but it feels like it would have made sense as a separate patchset (we
> have good need for it). Would it be possible to use that feature in tc
> and XDP?
BPF links are supported for XDP today; tc BPF is one of the few remaining
spots where that is not the case, hence the work in this series. What XDP
lacks today, however, is multi-prog support. With the bpf_mprog concept
that could be addressed through the same common/uniform API (and Andrii
expressed interest in integrating this for cgroup progs as well), so yes,
various hook points/program types could benefit from it.
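For completeness, the link-based attach for tc BPF then looks roughly
like the below from the libbpf side (sketch only; bpf_program__attach_tcx()
and bpf_tcx_opts reflect the current shape of the libbpf patches and may
still change):

  #include <bpf/libbpf.h>

  /* Sketch: attach prog via a tcx link on ingress of ifindex. As long
   * as the returned link fd is held (or pinned in bpffs), the program
   * cannot be detached or replaced behind the owner's back.
   */
  static struct bpf_link *attach_tcx_link(struct bpf_program *prog,
                                          int ifindex)
  {
          LIBBPF_OPTS(bpf_tcx_opts, opts);

          return bpf_program__attach_tcx(prog, ifindex, &opts);
  }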
>> +struct tcx_entry {
>> +	struct bpf_mprog_bundle bundle;
>> +	struct mini_Qdisc __rcu *miniq;
>> +};
>> +
>
> Can you please move miniq to the front? From where I sit this looks like:
> struct tcx_entry {
> 	struct bpf_mprog_bundle    bundle
> 		__attribute__((__aligned__(64)));        /*     0  3264 */
>
> 	/* XXX last struct has 36 bytes of padding */
>
> 	/* --- cacheline 51 boundary (3264 bytes) --- */
> 	struct mini_Qdisc *        miniq;                /*  3264     8 */
>
> 	/* size: 3328, cachelines: 52, members: 2 */
> 	/* padding: 56 */
> 	/* paddings: 1, sum paddings: 36 */
> 	/* forced alignments: 1 */
> } __attribute__((__aligned__(64)));
>
> That is a _lot_ of cachelines - at the expense of the status-quo
> clsact/ingress qdiscs, which access miniq.
Ah yes, I'll fix this up.
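Presumably along these lines, i.e. just swapping the two members so that
miniq stays in the first cacheline (sketch of the intended change, same
fields as in the hunk above):

  struct tcx_entry {
          struct mini_Qdisc __rcu *miniq;
          struct bpf_mprog_bundle bundle;
  };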
Thanks,
Daniel