Message-ID: <20220517020840.vyfp5cit66fs2k2o@MBP-98dd607d3435.dhcp.thefacebook.com>
Date: Mon, 16 May 2022 19:08:40 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>, Hao Luo <haoluo@...gle.com>,
Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Shuah Khan <shuah@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Michal Hocko <mhocko@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Greg Thelen <gthelen@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
bpf@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [RFC PATCH bpf-next v2 2/7] cgroup: bpf: flush bpf stats on
rstat flush
On Sun, May 15, 2022 at 02:34:59AM +0000, Yosry Ahmed wrote:
> +
> +void bpf_rstat_flush(struct cgroup *cgrp, int cpu)
> +{
> +	struct bpf_rstat_flusher *flusher;
> +	struct bpf_rstat_flush_ctx ctx = {
> +		.cgrp = cgrp,
> +		.parent = cgroup_parent(cgrp),
> +		.cpu = cpu,
> +	};
> +
> +	rcu_read_lock();
> +	migrate_disable();
> +	spin_lock(&bpf_rstat_flushers_lock);
> +
> +	list_for_each_entry(flusher, &bpf_rstat_flushers, list)
> +		(void) bpf_prog_run(flusher->prog, &ctx);
> +
> +	spin_unlock(&bpf_rstat_flushers_lock);
> +	migrate_enable();
> +	rcu_read_unlock();
> +}
> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
> index 24b5c2ab5598..0285d496e807 100644
> --- a/kernel/cgroup/rstat.c
> +++ b/kernel/cgroup/rstat.c
> @@ -2,6 +2,7 @@
> #include "cgroup-internal.h"
>
> #include <linux/sched/cputime.h>
> +#include <linux/bpf-rstat.h>
>
> static DEFINE_SPINLOCK(cgroup_rstat_lock);
> static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
> @@ -168,6 +169,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
>  			struct cgroup_subsys_state *css;
>  
>  			cgroup_base_stat_flush(pos, cpu);
> +			bpf_rstat_flush(pos, cpu);
Please use the following approach instead:
__weak noinline void bpf_rstat_flush(struct cgroup *cgrp, struct cgroup *parent, int cpu)
{
}
and change the above line to:
bpf_rstat_flush(pos, cgroup_parent(pos), cpu);
Then tracing bpf fentry progs will be able to attach to bpf_rstat_flush.
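In context, kernel/cgroup/rstat.c would then look roughly like this
(sketch; only the relevant lines shown):

/* Empty hook for bpf; fentry progs attach here. __weak noinline
 * keeps the compiler from inlining or discarding the empty
 * definition, so the attach point survives in the final image.
 */
__weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
				     struct cgroup *parent, int cpu)
{
}

static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
{
	...
		cgroup_base_stat_flush(pos, cpu);
		bpf_rstat_flush(pos, cgroup_parent(pos), cpu);
	...
}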
Pretty much patches 1, 2, and 3 are not necessary.
In patch 4, add bpf_cgroup_rstat_updated/flush as two kfuncs instead of stable helpers.
This way patches 1-4 will become 2 trivial patches, and we will be
able to extend the interface between cgroup rstat and bpf whenever we need to,
without worrying about uapi stability.
We had a similar discussion with the HID subsystem, which plans to use bpf in HID
with the same approach.
See this patch set:
https://lore.kernel.org/bpf/20220421140740.459558-2-benjamin.tissoires@redhat.com/
You'd need patch 1 from it to enable kfuncs for tracing.
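For reference, the kfunc registration in patch 4 could look roughly like
this (untested sketch; the set and function names are assumptions):

#include <linux/btf.h>
#include <linux/btf_ids.h>

BTF_SET_START(bpf_rstat_kfunc_ids)
BTF_ID(func, bpf_cgroup_rstat_updated)
BTF_ID(func, bpf_cgroup_rstat_flush)
BTF_SET_END(bpf_rstat_kfunc_ids)

/* expose the two helpers above as kfuncs to tracing progs */
static const struct btf_kfunc_id_set bpf_rstat_kfunc_set = {
	.owner     = THIS_MODULE,
	.check_set = &bpf_rstat_kfunc_ids,
};

static int __init bpf_rstat_kfunc_init(void)
{
	return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
					 &bpf_rstat_kfunc_set);
}
late_initcall(bpf_rstat_kfunc_init);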
Your patch 5 is needed as-is.
Yonghong,
please review it.
The different approach for patches 1-4 won't affect patch 5.
Patches 6 and 7 look good.
With this approach patch 7 will mostly stay as-is. Instead of:
+SEC("rstat/flush")
+int vmscan_flush(struct bpf_rstat_flush_ctx *ctx)
+{
+ struct vmscan_percpu *pcpu_stat;
+ struct vmscan *total_stat, *parent_stat;
+ struct cgroup *cgrp = ctx->cgrp, *parent = ctx->parent;
it will become:
SEC("fentry/bpf_rstat_flush")
int BPF_PROG(vmscan_flush, struct cgroup *cgrp, struct cgroup *parent, int cpu)
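and inside the prog the kfuncs from patch 4 would be declared as __ksym
externs, roughly (sketch; assuming the names above):

/* sketch: kfunc externs; actual names/signatures depend on patch 4 */
extern void bpf_cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;
extern void bpf_cgroup_rstat_flush(struct cgroup *cgrp) __ksym;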