Message-ID: <CAP01T77oxbfQnHSX5irq0d=srArq=ZTf_VAMuw0QNhfcjJVdKQ@mail.gmail.com>
Date: Thu, 25 Aug 2022 01:09:08 +0200
From: Kumar Kartikeya Dwivedi <memxor@...il.com>
To: Hao Luo <haoluo@...gle.com>
Cc: linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
cgroups@...r.kernel.org, netdev@...r.kernel.org,
Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
KP Singh <kpsingh@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Jiri Olsa <jolsa@...nel.org>, Michal Koutny <mkoutny@...e.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
David Rientjes <rientjes@...gle.com>,
Stanislav Fomichev <sdf@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>
Subject: Re: [PATCH bpf-next v9 5/5] selftests/bpf: add a selftest for cgroup
hierarchical stats collection
On Thu, 25 Aug 2022 at 01:07, Hao Luo <haoluo@...gle.com> wrote:
>
> On Tue, Aug 23, 2022 at 8:01 PM Hao Luo <haoluo@...gle.com> wrote:
> >
> > From: Yosry Ahmed <yosryahmed@...gle.com>
> >
> > Add a selftest that tests the whole workflow for collecting,
> > aggregating (flushing), and displaying cgroup hierarchical stats.
> >
> > TL;DR:
> > - Userspace program creates a cgroup hierarchy and induces memcg reclaim
> > in parts of it.
> > - Whenever reclaim happens, vmscan_start and vmscan_end update
> > per-cgroup percpu readings, and tell rstat which (cgroup, cpu) pairs
> > have updates.
> > - When userspace tries to read the stats, vmscan_dump calls rstat to flush
> > the stats, and outputs the stats in text format to userspace (similar
> > to cgroupfs stats).
> > - rstat calls vmscan_flush once for every (cgroup, cpu) pair that has
> > updates; vmscan_flush aggregates cpu readings and propagates updates
> > to parents.
> > - Userspace program makes sure the stats are aggregated and read
> > correctly.
> >
> > Detailed explanation:
> > - The test loads tracing bpf programs, vmscan_start and vmscan_end, to
> > measure the latency of cgroup reclaim. Per-cgroup readings are stored in
> > percpu maps for efficiency. When a cgroup reading is updated on a cpu,
> > cgroup_rstat_updated(cgroup, cpu) is called to add the cgroup to the
> > rstat updated tree on that cpu.
> >
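For readers without the patch handy, a minimal sketch of what such a start/end
pair can look like is below. This is not the selftest source: the fentry/fexit
attach point (try_to_free_mem_cgroup_pages), the map names, and the
task-storage bookkeeping are assumptions for illustration; only the
cgroup_rstat_updated() kfunc, exposed earlier in this series, reflects the
description above.

/*
 * Hedged sketch only, not the selftest source. Attach points, map names
 * and the task-storage bookkeeping are assumptions for illustration.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* rstat kfunc exposed earlier in this series; calls like this one are
 * what the s390x JIT rejects (see the CI discussion below). */
extern void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;

/* Reclaim latency accumulated per (cgroup, cpu) since the last flush. */
struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
    __uint(max_entries, 64);
    __type(key, __u64);   /* cgroup id */
    __type(value, __u64); /* nanoseconds */
} pcpu_vmscan SEC(".maps");

/* Start timestamp, kept per task so start/end pair up across cpus. */
struct {
    __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
    __uint(map_flags, BPF_F_NO_PREALLOC);
    __type(key, int);
    __type(value, __u64);
} vmscan_start_ns SEC(".maps");

SEC("fentry/try_to_free_mem_cgroup_pages") /* assumed attach point */
int BPF_PROG(vmscan_start)
{
    __u64 *ts = bpf_task_storage_get(&vmscan_start_ns,
                                     bpf_get_current_task_btf(), 0,
                                     BPF_LOCAL_STORAGE_GET_F_CREATE);
    if (ts)
        *ts = bpf_ktime_get_ns();
    return 0;
}

SEC("fexit/try_to_free_mem_cgroup_pages") /* assumed attach point */
int BPF_PROG(vmscan_end)
{
    struct task_struct *task = bpf_get_current_task_btf();
    struct cgroup *cgrp = task->cgroups->dfl_cgrp;
    __u64 cg_id = cgrp->kn->id;
    __u64 *ts, *pcpu, elapsed;

    ts = bpf_task_storage_get(&vmscan_start_ns, task, 0, 0);
    if (!ts)
        return 0;
    elapsed = bpf_ktime_get_ns() - *ts;

    /* Fold this reclaim's latency into the cpu-local reading ... */
    pcpu = bpf_map_lookup_elem(&pcpu_vmscan, &cg_id);
    if (pcpu)
        *pcpu += elapsed;
    else
        bpf_map_update_elem(&pcpu_vmscan, &cg_id, &elapsed, BPF_NOEXIST);

    /* ... and tell rstat this (cgroup, cpu) pair has pending updates. */
    cgroup_rstat_updated(cgrp, bpf_get_smp_processor_id());
    return 0;
}
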
> > - A cgroup_iter program, vmscan_dump, is loaded and pinned to a file, for
> > each cgroup. Reading this file invokes the program, which calls
> > cgroup_rstat_flush(cgroup) to ask rstat to propagate the updates for all
> > cpus and cgroups that have updates in this cgroup's subtree. Afterwards,
> > the stats are exposed to the user. vmscan_dump returns 1 to terminate
> > iteration early, so that we only expose stats for one cgroup per read.
> >
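A rough sketch of the dump side follows, again with assumed names and output
format rather than the selftest's actual code. It relies on the cgroup_iter
program type added earlier in this series, and is written sleepable on the
assumption that rstat flushing may block; the cgroup_vmscan map holds the
aggregated per-cgroup totals maintained by vmscan_flush (sketched after the
next item).

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* rstat kfunc exposed earlier in this series. */
extern void cgroup_rstat_flush(struct cgroup *cgrp) __ksym;

struct vmscan {
    __u64 total;   /* hierarchical total for this cgroup */
    __u64 pending; /* contributions pushed up by children mid-flush */
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 64);
    __type(key, __u64);           /* cgroup id */
    __type(value, struct vmscan);
} cgroup_vmscan SEC(".maps");

SEC("iter.s/cgroup")
int vmscan_dump(struct bpf_iter__cgroup *ctx)
{
    struct seq_file *seq = ctx->meta->seq;
    struct cgroup *cgrp = ctx->cgroup;
    struct vmscan *v;
    __u64 cg_id;

    /* The end of the walk is signaled with a NULL cgroup. */
    if (!cgrp)
        return 1;

    /* Propagate every pending (cgroup, cpu) update in this subtree. */
    cgroup_rstat_flush(cgrp);

    cg_id = cgrp->kn->id;
    v = bpf_map_lookup_elem(&cgroup_vmscan, &cg_id);
    if (v)
        BPF_SEQ_PRINTF(seq, "cg_id: %llu, total_vmscan_delay: %llu\n",
                       cg_id, v->total);

    /* Returning non-zero stops the iteration: one read() == one cgroup. */
    return 1;
}
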
> > - An ftrace program, vmscan_flush, is also loaded and attached to
> > bpf_rstat_flush. When rstat flushing is ongoing, vmscan_flush is invoked
> > once for each (cgroup, cpu) pair that has updates. cgroups are popped
> > from the rstat tree in a bottom-up fashion, so calls will always be
> > made for cgroups that have updates before their parents. The program
> > aggregates percpu readings to a total per-cgroup reading, and also
> > propagates them to the parent cgroup. After rstat flushing is over, all
> > cgroups will have correct updated hierarchical readings (including all
> > cpus and all their descendants).
> >
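And a sketch of the flush side, with the same assumed map names as above. The
bottom-up ordering described in this item is what makes handing the delta to
the parent's "pending" slot safe: on each cpu, the parent is always flushed
after its children, so it folds their contributions into its own total.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Same maps as in the previous sketches. */
struct vmscan {
    __u64 total;   /* hierarchical total */
    __u64 pending; /* pushed up by children, folded in at our own flush */
};

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
    __uint(max_entries, 64);
    __type(key, __u64);
    __type(value, __u64);
} pcpu_vmscan SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 64);
    __type(key, __u64);
    __type(value, struct vmscan);
} cgroup_vmscan SEC(".maps");

static void add_delta(__u64 cg_id, __u64 delta, bool to_pending)
{
    struct vmscan *v, init = {};

    v = bpf_map_lookup_elem(&cgroup_vmscan, &cg_id);
    if (!v) {
        bpf_map_update_elem(&cgroup_vmscan, &cg_id, &init, BPF_NOEXIST);
        v = bpf_map_lookup_elem(&cgroup_vmscan, &cg_id);
        if (!v)
            return;
    }
    if (to_pending)
        v->pending += delta;
    else
        v->total += delta;
}

/* bpf_rstat_flush() is the empty hook added earlier in this series; rstat
 * calls it once per (cgroup, cpu) pair with updates, children before
 * parents on each cpu. */
SEC("fentry/bpf_rstat_flush")
int BPF_PROG(vmscan_flush, struct cgroup *cgrp, struct cgroup *parent, int cpu)
{
    __u64 cg_id = cgrp->kn->id;
    __u64 delta = 0;
    struct vmscan *v;
    __u64 *pcpu;

    /* Consume what this cpu accumulated since the last flush. */
    pcpu = bpf_map_lookup_percpu_elem(&pcpu_vmscan, &cg_id, cpu);
    if (pcpu) {
        delta += *pcpu;
        *pcpu = 0;
    }

    /* Fold in what our children handed up (they were flushed before us). */
    v = bpf_map_lookup_elem(&cgroup_vmscan, &cg_id);
    if (v) {
        delta += v->pending;
        v->pending = 0;
    }

    /* Update our own hierarchical total and hand the delta to the parent. */
    add_delta(cg_id, delta, false);
    if (parent)
        add_delta(parent->kn->id, delta, true);
    return 0;
}
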
> > - Finally, the test creates a cgroup hierarchy and induces memcg reclaim
> > in parts of it, and makes sure that the stats collection, aggregation,
> > and reading workflow works as expected.
> >
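For completeness, the userspace half of the read path boils down to reading
the per-cgroup pinned files; a tiny sketch with an assumed pin path (the real
test creates one pinned cgroup_iter link per cgroup and parses the output to
verify the aggregated totals):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256] = {};
    /* Assumed pin path; the test pins one cgroup_iter link per cgroup. */
    int fd = open("/sys/fs/bpf/vmscan/cg_test", O_RDONLY);

    if (fd < 0)
        return 1;
    /* Each read() runs the pinned vmscan_dump program, which flushes rstat
     * and emits one "cg_id: ..., total_vmscan_delay: ..." line. */
    if (read(fd, buf, sizeof(buf) - 1) > 0)
        fputs(buf, stdout);
    close(fd);
    return 0;
}
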
> > Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
> > Signed-off-by: Hao Luo <haoluo@...gle.com>
> > ---
>
> I saw this test fail on the s390x CI [0] because it uses kfuncs, and on
> s390x, "JIT does not support calling kernel function". Is there
> anything I can do about it?
>
You can add it to the deny list, like this patch:
https://lore.kernel.org/bpf/20220824163906.1186832-1-deso@posteo.net
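For reference, that deny list is the plain-text DENYLIST.s390x file under
tools/testing/selftests/bpf/. Assuming the prog_tests entry is named
cgroup_hierarchical_stats, the added line would look something like:

  cgroup_hierarchical_stats         # JIT does not support calling kernel function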