Message-ID: <aBvwFPRwA2LVQJkO@google.com>
Date: Wed, 7 May 2025 16:43:16 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
	Ian Rogers <irogers@...gle.com>,
	Kan Liang <kan.liang@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
	Adrian Hunter <adrian.hunter@...el.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
	linux-perf-users@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>
Subject: Re: [RFC/PATCH] perf report: Support latency profiling in
 system-wide mode

On Tue, May 06, 2025 at 09:40:52AM +0200, Dmitry Vyukov wrote:
> On Tue, 6 May 2025 at 09:10, Namhyung Kim <namhyung@...nel.org> wrote:
> > > > > Where does the patch check that this mode is used only for system-wide profiles?
> > > > > Is it that PERF_SAMPLE_CPU present only for system-wide profiles?
> > > >
> > > > Basically yes, but you can use --sample-cpu to add it.
> > >
> > > Are you sure? --sample-cpu seems to work for non-system-wide profiles too.
> >
> > Yep, that's why I said "Basically".  So it's not 100% guarantee.
> >
> > We may disable latency column by default in this case and show warning
> > if it's requested.  Or we may add a new attribute to emit sched-switch
> > records only for idle tasks and enable the latency report only if the
> > data has sched-switch records.
> >
> > What do you think?
> 
> Depends on what problem we are trying to solve:
> 
> 1. Enabling latency profiling for system-wide mode.
> 
> 2. Switch events bloating trace too much.
> 
> 3. Lost switch events lead to imprecise accounting.
> 
> The patch mentions all 3 :)
> But I think 2 and 3 are not really specific to system-wide mode.
> An active single process profile can emit more samples than a
> system-wide profile on a lightly loaded system.

True.  But we don't need to care about lightly loaded systems as they
won't cause problems.


> Similarly, if we rely on switch events for system-wide mode, then it's
> equally subject to the lost events problem.

Right, but I'm afraid it will practically increase the chance of lost
events in system-wide mode.  The default sample size for system-wide
profiling is 56 bytes and a switch record is 48 bytes.  Also, the
default sample frequency is 4000 Hz, but we cannot control the rate of
switch events.  I saw around 10000 switches per second per CPU in my
work environment.
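
A rough per-CPU estimate with those numbers (back-of-the-envelope
only):

    samples:   4000/s * 56 bytes ~= 224 KB/s
    switches: 10000/s * 48 bytes ~= 480 KB/s

so adding switch records could roughly triple the per-CPU data rate on
a busy system.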

> 
> For problem 1: we can just permit --latency for system wide mode and
> fully rely on switch events.
> It's not any worse than we do now (wrt both profile size and lost events).

This can be an option and it'd work well on lightly loaded systems.
Maybe we can just try it first.  But I think it's better to have an
option to make it work on heavily loaded systems.
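
If we go that route, the workflow would be something like this (just an
illustration, assuming this patch is applied):

    $ perf record -a --switch-events -- sleep 10
    $ perf report --latency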

> 
> For problem 2: yes, we could emit only switches to idle tasks. Or
> maybe just a fake CPU sample for an idle task? That's effectively what
> we want, then your current accounting code will work w/o any changes.
> This should help wrt trace size only for system-wide mode (provided
> that user already enables CPU accounting for other reasons, otherwise
> it's unclear what's better -- attaching CPU to each sample, or writing
> switch events).

I'm not sure how we could add such fake samples.  The switch events
come from the kernel, so we could add a condition in the event
attribute instead (e.g. emit them only for the idle task).

And PERF_SAMPLE_CPU is on by default in system-wide mode.
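
To sketch what I meant about the attribute (hypothetical; the idle-only
bit does not exist today, only .context_switch does):

	#include <linux/perf_event.h>

	/* Hypothetical sketch only: .context_switch is the real, existing
	 * bit that makes the kernel emit PERF_RECORD_SWITCH; the idle-only
	 * condition below would be a new bit. */
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_HARDWARE,
		.config		= PERF_COUNT_HW_CPU_CYCLES,
		.size		= sizeof(struct perf_event_attr),
		.freq		= 1,
		.sample_freq	= 4000,
		.sample_type	= PERF_SAMPLE_TID | PERF_SAMPLE_TIME |
				  PERF_SAMPLE_CPU,
		.context_switch	= 1,
		/* .context_switch_idle_only = 1,  <- hypothetical new bit */
	};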

> 
> For problem 3: switches to idle task won't really help. There can be
> lots of them, and missing any will lead to wrong accounting.

I don't know how severe the situation will be.  On heavily loaded
systems, the idle task won't run much and data size won't increase.
On lightly loaded systems, increased data will likely be handled well.


> A principled approach would be to attach a per-thread scheduler
> quantum sequence number to each CPU sample. The sequence number would
> be incremented on every context switch. Then any subset of CPU should
> be enough to understand when a task was scheduled in and out
> (scheduled in on the first CPU sample with sequence number N, and
> switched out on the last sample with sequence number N).
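
If I follow, the decode side of that idea would be roughly like this
(just a sketch; no such seq field exists in the sample format today):

	#include <stdint.h>

	/* Each CPU sample would carry (tid, seq, time); for a given tid,
	 * all samples sharing a seq belong to one scheduling quantum, and
	 * the first/last sample times bound sched-in/sched-out. */
	struct quantum {
		uint32_t tid;
		uint64_t seq;
		uint64_t sched_in;	/* time of first sample with this seq */
		uint64_t sched_out;	/* time of last sample with this seq */
	};

	/* process a thread's samples in time order */
	static void account_sample(struct quantum *cur, uint32_t tid,
				   uint64_t seq, uint64_t time)
	{
		if (cur->tid != tid || cur->seq != seq) {
			/* previous quantum ended at cur->sched_out */
			cur->tid = tid;
			cur->seq = seq;
			cur->sched_in = time;
		}
		cur->sched_out = time;
	}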

I'm not sure how it can help.  We don't need the switch info itself.
What's needed is when the CPU was idle, right?

Thanks,
Namhyung

