Message-ID: <CABk29NuGs_9uxgbv678W=BGGinZNiUHO5T57FHGbOG+HP-FT2g@mail.gmail.com>
Date: Tue, 11 Jan 2022 15:38:20 -0800
From: Josh Don <joshdon@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
cgroups@...r.kernel.org,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] cgroup: add cpu.stat_percpu
On Tue, Jan 11, 2022 at 4:50 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Jan 07, 2022 at 03:41:37PM -0800, Josh Don wrote:
>
> > +	seq_puts(seq, "usage_usec");
> > +	for_each_possible_cpu(cpu) {
> > +		cached_bstat = per_cpu_ptr(&cached_percpu_stats, cpu);
> > +		val = cached_bstat->cputime.sum_exec_runtime;
> > +		do_div(val, NSEC_PER_USEC);
> > +		seq_printf(seq, " %llu", val);
> > +	}
> > +	seq_puts(seq, "\n");
> > +
> > +	seq_puts(seq, "user_usec");
> > +	for_each_possible_cpu(cpu) {
> > +		cached_bstat = per_cpu_ptr(&cached_percpu_stats, cpu);
> > +		val = cached_bstat->cputime.utime;
> > +		do_div(val, NSEC_PER_USEC);
> > +		seq_printf(seq, " %llu", val);
> > +	}
> > +	seq_puts(seq, "\n");
> > +
> > +	seq_puts(seq, "system_usec");
> > +	for_each_possible_cpu(cpu) {
> > +		cached_bstat = per_cpu_ptr(&cached_percpu_stats, cpu);
> > +		val = cached_bstat->cputime.stime;
> > +		do_div(val, NSEC_PER_USEC);
> > +		seq_printf(seq, " %llu", val);
> > +	}
> > +	seq_puts(seq, "\n");
>
> This is an anti-pattern; given enough CPUs (easy) this will trivially
> overflow the 1 page seq buffer.
>
> People are already struggling to fix existing ABI, lets not make the
> problem worse.
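
(For scale: a u64 printed as " %llu" is at most 21 bytes, so the three
per-cpu lines above together overflow a 4096-byte page at roughly 65
CPUs in the worst case, and for_each_possible_cpu() can walk many more
CPUs than are online.)
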
Is the concern here just the extra overhead of making multiple trips
through this handler, re-allocating the buffer until it is large
enough to hold all of the output? If so, we could pre-allocate a
buffer of the right order of magnitude up front, similar to what
/proc/stat does.
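
For reference, /proc/stat sizes its seq_file buffer up front from the
CPU (and IRQ) count via single_open_size(); roughly, from
fs/proc/stat.c:

static int stat_open(struct inode *inode, struct file *file)
{
	unsigned int size = 1024 + 128 * num_online_cpus();

	/* minimum size to display an interrupt count : 2 bytes */
	size += 2 * nr_irqs;

	return single_open_size(file, show_stat, NULL, size);
}

The cgroup read path goes through kernfs rather than procfs, so the
hook would look different, but the sizing idea carries over.
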
Lack of per-cpu stats is a gap between cgroup v1 and v2, and v2 can
easily support this interface given that it already tracks these
stats per-cpu internally. I opted to dump them all in a single file
here, to match the consolidation that occurred when cpuacct was
folded into cpu.stat.
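
To sketch what I have in mind (illustrative only; the seq_show handler
name is a stand-in for the one in this patch, and the exact table
placement is hand-waved), the new file would sit next to the existing
"cpu.stat" entry in cgroup's base cftype table:

	{
		.name = "cpu.stat_percpu",
		.seq_show = cgroup_stat_percpu_show,
	},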