Message-ID: <e56570e5-5165-71e1-f4cc-b8ea2063aec8@yandex-team.com>
Date: Fri, 20 Aug 2021 12:37:47 +0300
From: Andrey Ryabinin <arbn@...dex-team.com>
To: Daniel Jordan <daniel.m.jordan@...cle.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Cc: Boris Burkov <boris@....io>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH 3/4] sched/cpuacct: fix user/system in shown
cpuacct.usage*
On 3/18/21 1:22 AM, Daniel Jordan wrote:
> Andrey Ryabinin <arbn@...dex-team.com> writes:
>
>> cpuacct has 2 different ways of accounting and showing user
>> and system times.
>>
>> The first one uses cpuacct_account_field() to account times
>> and the cpuacct.stat file to expose them. This one seems to work ok.
>>
>> The second one uses the cpuacct_charge() function for accounting and a
>> set of cpuacct.usage* files to show times. Despite some attempts to
>> fix it in the past, it still doesn't work. E.g. while running a KVM
>> guest, cpuacct_charge() accounts most of the guest time as
>> system time. This doesn't match the user/system times shown in
>> cpuacct.stat or /proc/<pid>/stat.
>
> I couldn't reproduce this running a cpu bound load in a kvm guest on a
> nohz_full cpu on 5.11. The time is almost entirely in cpuacct.usage and
> _user, while _sys stays low.
>
> Could you say more about how you're seeing this? Don't really doubt
> there's a problem, just wondering what you're doing.
>
Yeah, it's almost unnoticeable if you run some ordinary load in a guest under qemu.
But a simpler case with a busy loop in KVM_RUN triggers it:
# git clone https://github.com/aryabinin/kvmsample
# make
# mkdir /sys/fs/cgroup/cpuacct/test
# echo $$ > /sys/fs/cgroup/cpuacct/test/tasks
# ./kvmsample &
# for i in {1..5}; do cat /sys/fs/cgroup/cpuacct/test/cpuacct.usage_sys; sleep 1; done
1976535645
2979839428
3979832704
4983603153
5983604157
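
For reference, the guest in such a reproducer does nothing but spin, so the
host task sits inside ioctl(KVM_RUN) nearly the whole time. Below is a
minimal host-side sketch of that idea using the standard /dev/kvm ioctl
interface; it is illustrative only (not the actual kvmsample code) and
skips all error handling:

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* One page of guest memory containing "jmp $" (0xeb 0xfe). */
	void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	memcpy(mem, "\xeb\xfe", 2);

	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.guest_phys_addr = 0x1000,
		.memory_size = 4096,
		.userspace_addr = (uint64_t)mem,
	};
	ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
	int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
	struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
				   MAP_SHARED, vcpu, 0);
	(void)run;

	/* Real mode, code at guest physical address 0x1000. */
	struct kvm_sregs sregs;
	ioctl(vcpu, KVM_GET_SREGS, &sregs);
	sregs.cs.base = 0;
	sregs.cs.selector = 0;
	ioctl(vcpu, KVM_SET_SREGS, &sregs);

	struct kvm_regs regs = { .rip = 0x1000, .rflags = 0x2 };
	ioctl(vcpu, KVM_SET_REGS, &regs);

	/*
	 * The guest never exits on its own, so all the CPU time is burned
	 * inside KVM_RUN. cpuacct_charge() attributes most of it to
	 * cpuacct.usage_sys, while /proc/<pid>/stat counts it as user time.
	 */
	for (;;)
		ioctl(vcpu, KVM_RUN, 0);
}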
>> diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
>> index 941c28cf9738..7eff79faab0d 100644
>> --- a/kernel/sched/cpuacct.c
>> +++ b/kernel/sched/cpuacct.c
>> @@ -29,7 +29,7 @@ struct cpuacct_usage {
>> struct cpuacct {
>> struct cgroup_subsys_state css;
>> /* cpuusage holds pointer to a u64-type object on every CPU */
>> - struct cpuacct_usage __percpu *cpuusage;
>
> Definition of struct cpuacct_usage can go away now.
>
Done.
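I.e. the cpuusage pointer is now a plain per-cpu u64; the struct ends up
roughly like this (sketch only, the final version may differ):

struct cpuacct {
	struct cgroup_subsys_state css;
	/* cpuusage holds pointer to a u64-type object on every CPU */
	u64 __percpu *cpuusage;
	struct kernel_cpustat __percpu *cpustat;
};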
>> @@ -99,7 +99,8 @@ static void cpuacct_css_free(struct cgroup_subsys_state *css)
>> static u64 cpuacct_cpuusage_read(struct cpuacct *ca, int cpu,
>> enum cpuacct_stat_index index)
>> {
>> - struct cpuacct_usage *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
>> + u64 *cpuusage = per_cpu_ptr(ca->cpuusage, cpu);
>> + u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
>> u64 data;
>
> There's a BUG_ON below this that could probably be a WARN_ON_ONCE while
> you're here.
>
Sure.
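Something like the below, just under the lines quoted above (sketch only,
not necessarily the exact v2 hunk); index == CPUACCT_STAT_NSTATS stays a
valid argument since it means "read the sum of usages":

	/*
	 * We allow index == CPUACCT_STAT_NSTATS here to read
	 * the sum of usages.
	 */
	if (WARN_ON_ONCE(index > CPUACCT_STAT_NSTATS))
		return 0;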
>> @@ -278,8 +274,8 @@ static int cpuacct_stats_show(struct seq_file *sf, void *v)
>> for_each_possible_cpu(cpu) {
>> u64 *cpustat = per_cpu_ptr(ca->cpustat, cpu)->cpustat;
>>
>> - val[CPUACCT_STAT_USER] += cpustat[CPUTIME_USER];
>> - val[CPUACCT_STAT_USER] += cpustat[CPUTIME_NICE];
>> + val[CPUACCT_STAT_USER] += cpustat[CPUTIME_USER];
>> + val[CPUACCT_STAT_USER] += cpustat[CPUTIME_NICE];
>
> unnecessary whitespace change?
>
yup