Message-ID: <ZLcJ1nH8KzWzoQWj@slm.duckdns.org>
Date: Tue, 18 Jul 2023 11:53:26 -1000
From: Tejun Heo <tj@...nel.org>
To: Hao Jia <jiahao.os@...edance.com>
Cc: lizefan.x@...edance.com, hannes@...xchg.org, mkoutny@...e.com,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH] cgroup/rstat: record the cumulative
	per-cpu time of cgroup and its descendants

On Tue, Jul 18, 2023 at 06:08:50PM +0800, Hao Jia wrote:
> https://github.com/jiaozhouxiaojia/cgv2-stat-percpu_test/tree/main
Isn't that just adding the same numbers twice and verifying that? Maybe I'm
misunderstanding you. Here's a simpler case:

# cd /sys/fs/cgroup
# mkdir -p asdf/test0
# grep usage_usec asdf/test0/cpu.stat
usage_usec 0
# echo $$ > asdf/test0/cgroup.procs
# stress -c 1 & sleep 1; kill %%
[1] 122329
stress: info: [122329] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd
# grep usage_usec asdf/test0/cpu.stat
usage_usec 1000956
[1]+ Terminated stress -c 1
# grep usage_usec asdf/cpu.stat
usage_usec 1002548
# echo $$ > /sys/fs/cgroup/cgroup.procs
# rmdir asdf/test0
# grep usage_usec asdf/cpu.stat
usage_usec 1006338

So, we run `stress -c 1` for 1 second in the asdf/test0 cgroup and
asdf/cpu.stat correctly reports the cumulative usage. After removing
asdf/test0 cgroup, asdf's usage_usec is still there. What's missing here?
What are you adding?
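The behavior above can be sketched as a toy model (hypothetical Python, not the kernel's rstat code): each cgroup carries a cumulative counter, usage is charged up the hierarchy, so removing a child never subtracts from its ancestors.

```python
# Toy model of cumulative cpu.stat accounting (hypothetical names,
# not the kernel implementation): a child's usage is propagated into
# its ancestors, so `rmdir` of the child leaves the parent's
# usage_usec intact.

class Cgroup:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.usage_usec = 0  # cumulative, includes descendants

    def charge(self, usec):
        # Charging walks up the hierarchy, like an rstat flush.
        node = self
        while node is not None:
            node.usage_usec += usec
            node = node.parent

root = Cgroup("asdf")
child = Cgroup("test0", parent=root)
child.charge(1000956)  # ~1s of `stress -c 1` in asdf/test0

# "rmdir asdf/test0": dropping the child does not take the
# already-accumulated usage back from asdf.
del child
print(root.usage_usec)  # still reflects the removed child's usage
```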

Thanks.
--
tejun