Message-ID: <CAP-5=fUzD8VZRnsxEBNPK_7PAGzdFjzmBAupA-eh=7VCDHBkbA@mail.gmail.com>
Date: Wed, 15 May 2024 21:56:18 -0700
From: Ian Rogers <irogers@...gle.com>
To: Howard Chu <howardchu95@...il.com>
Cc: Namhyung Kim <namhyung@...nel.org>, Arnaldo Carvalho de Melo <acme@...nel.org>, peterz@...radead.org, 
	mingo@...hat.com, mark.rutland@....com, alexander.shishkin@...ux.intel.com, 
	jolsa@...nel.org, adrian.hunter@...el.com, kan.liang@...ux.intel.com, 
	zegao2021@...il.com, leo.yan@...ux.dev, ravi.bangoria@....com, 
	linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org, 
	bpf@...r.kernel.org
Subject: Re: [PATCH v2 0/4] Dump off-cpu samples directly

On Wed, May 15, 2024 at 9:24 PM Howard Chu <howardchu95@...il.com> wrote:
>
> Hello,
>
> Here is a little update on --off-cpu.
>
> > > It would be nice to start landing this work so I'm wondering what the
> > > minimal way to do that is. It seems putting behavior behind a flag is
> > > a first step.
>
> The flag to determine the output threshold for off-cpu samples has been
> implemented. If the accumulated off-cpu time exceeds this threshold, the
> sample is output directly; otherwise, it is saved for later processing by
> off_cpu_write.
>
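> In rough BPF C, the decision on the sched_switch path looks something like
> the sketch below (just an illustration of the idea; the map, variable and
> event names are made up, this is not the actual off_cpu.bpf.c code):
>
>     __u64 delta = bpf_ktime_get_ns() - ts;  /* time this task spent off-CPU */
>
>     if (delta >= off_cpu_threshold) {
>             /* long sleep: emit a sample through the BPF output event now */
>             bpf_perf_event_output(ctx, &offcpu_output, BPF_F_CURRENT_CPU,
>                                   &data, sizeof(data));
>     } else {
>             /* short sleep: keep accumulating in the map and dump it at
>              * off_cpu_write() time, as before */
>             __u64 *total = bpf_map_lookup_elem(&off_cpu, &key);
>
>             if (total)
>                     __sync_fetch_and_add(total, delta);
>             else
>                     bpf_map_update_elem(&off_cpu, &key, &delta, BPF_ANY);
>     }
>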
> But adding an extra pass to handle off-cpu samples introduces performance
> issues. Here's the processing rate of --off-cpu sampling with the extra pass
> to extract raw sample data, and without it. The --off-cpu-threshold is in
> nanoseconds.
>
> +-----------------------------------------------------+---------------------------------------+----------------------+
> | comm                                                | type                                  | process rate         |
> +-----------------------------------------------------+---------------------------------------+----------------------+
> | -F 4999 -a                                          | regular samples (w/o extra pass)      | 13128.675 samples/ms |
> +-----------------------------------------------------+---------------------------------------+----------------------+
> | -F 1 -a --off-cpu --off-cpu-threshold 100           | offcpu samples (extra pass)           |  2843.247 samples/ms |
> +-----------------------------------------------------+---------------------------------------+----------------------+
> | -F 4999 -a --off-cpu --off-cpu-threshold 100        | offcpu & regular samples (extra pass) |  3910.686 samples/ms |
> +-----------------------------------------------------+---------------------------------------+----------------------+
> | -F 4999 -a --off-cpu --off-cpu-threshold 1000000000 | few offcpu & regular (extra pass)     |  4661.229 samples/ms |
> +-----------------------------------------------------+---------------------------------------+----------------------+
>
> It's not ideal. I will find a way to reduce the overhead, for example by
> processing the samples at save time, as Ian mentioned.
>
> > > To turn the bpf-output samples into off-cpu events there is a pass
> > > added to the saving. I wonder if that can be more generic, like a save
> > > time perf inject.
>
> And I will find a default value for such a threshold based on performance
> and common use cases.
>
> > Sounds good.  We might add an option to specify the threshold to
> > determine whether to dump the data or to save it for later.  But ideally
> > it should be able to find a good default.
>
> These will be done before the GSoC kick-off on May 27.

This all sounds good. 100ns seems like quite a low threshold and 1s
extremely high; it's a shame that even such a high threshold makes only a
marginal difference to the processing rate. I wonder whether 100
microseconds may be a more sensible threshold. It's 100 times larger than
the cost of 1 context switch but considerably less than a frame redraw at
60FPS (16 milliseconds).
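
For example (assuming --off-cpu-threshold keeps taking nanoseconds, as in
your table), that would be something like:

    perf record -F 4999 -a --off-cpu --off-cpu-threshold 100000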

Thanks,
Ian

> Thanks,
> Howard
>
> On Thu, Apr 25, 2024 at 6:57 AM Namhyung Kim <namhyung@...nel.org> wrote:
> >
> > On Wed, Apr 24, 2024 at 3:19 PM Ian Rogers <irogers@...gle.com> wrote:
> > >
> > > On Wed, Apr 24, 2024 at 2:11 PM Arnaldo Carvalho de Melo
> > > <acme@...nel.org> wrote:
> > > >
> > > > On Wed, Apr 24, 2024 at 12:12:26PM -0700, Namhyung Kim wrote:
> > > > > Hello,
> > > > >
> > > > > On Tue, Apr 23, 2024 at 7:46 PM Howard Chu <howardchu95@...il.com> wrote:
> > > > > >
> > > > > > As mentioned in: https://bugzilla.kernel.org/show_bug.cgi?id=207323
> > > > > >
> > > > > > Currently, off-cpu samples are dumped when perf record is exiting. This
> > > > > > results in off-cpu samples being after the regular samples. Also, samples
> > > > > > are stored in large BPF maps which contain all the stack traces and
> > > > > > accumulated off-cpu time, but they are eventually going to fill up after
> > > > > > running for an extensive period. This patch fixes those problems by dumping
> > > > > > samples directly into the perf ring buffer, and converting those samples into
> > > > > > the correct format.
> > > > >
> > > > > Thanks for working on this.
> > > > >
> > > > > But the problem of dumping all sched-switch events is that it can be
> > > > > too frequent on loaded machines.  Copying many events to the buffer
> > > > > can result in losing other records.  As perf report doesn't care about
> > > > > timing much, I decided to aggregate the results in a BPF map and dump
> > > > > them at the end of the profiling session.
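> > > > >
> > > > > The aggregation is essentially a BPF hash map keyed per stack/task with the
> > > > > accumulated off-cpu time as the value, roughly along these lines (simplified
> > > > > sketch, not the exact layout in off_cpu.bpf.c):
> > > > >
> > > > >     struct {
> > > > >             __uint(type, BPF_MAP_TYPE_HASH);
> > > > >             __type(key, struct offcpu_key);   /* pid, stack id, ... */
> > > > >             __type(value, __u64);             /* accumulated off-cpu ns */
> > > > >             __uint(max_entries, MAX_ENTRIES);
> > > > >     } off_cpu SEC(".maps");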
> > > >
> > > > Should we try to adapt when there are too many context switches? I.e.
> > > > the BPF program can notice that the interval from the last context
> > > > switch is too small and then avoid adding samples, while if the interval
> > > > is a long one then indeed this is a case where the workload is
> > > > waiting for a long time for something and we want to know what that is,
> > > > and in that case capturing callchains is both desirable and not costly,
> > > > no?
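> > > >
> > > > Something along these lines, just to sketch the idea (names invented,
> > > > and the per-task timestamp would really live in a map):
> > > >
> > > >     __u64 now = bpf_ktime_get_ns();
> > > >     __u64 interval = now - last_switch_ts;  /* time since the previous switch */
> > > >
> > > >     last_switch_ts = now;
> > > >
> > > >     if (interval < MIN_INTERVAL_NS) {
> > > >             /* switching too often: just count it, don't sample */
> > > >             __sync_fetch_and_add(&too_frequent_switches, 1);
> > > >             return 0;
> > > >     }
> > > >
> > > >     /* a long wait: worth the cost of grabbing a callchain */
> > > >     data.stack_id = bpf_get_stackid(ctx, &stacks, BPF_F_USER_STACK);
> > > >     bpf_perf_event_output(ctx, &offcpu_output, BPF_F_CURRENT_CPU,
> > > >                           &data, sizeof(data));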
> >
> > Sounds interesting.  Yeah we could make it adaptive based on the
> > off-cpu time at the moment.
> >
> > > >
> > > > The tool could then at the end produce one of two outputs: the most
> > > > common reasons for being off cpu, or some sort of counter stating that
> > > > there are way too many context switches?
> > > >
> > > > And perhaps we should think about what is best to have as a default, not
> > > > to present just plain old cycles, but point out that the workload is
> > > > most of the time waiting for IO, etc, i.e. the default should give
> > > > interesting clues instead of expecting that the tool user knows all the
> > > > possible knobs and try them in all sorts of combinations to then reach
> > > > some conclusion.
> > > >
> > > > The default should use stuff that isn't that costly, thus not getting in
> > > > the way of what is being observed, but at the same time look for common
> > > > patterns, etc.
> > > >
> > > > - Arnaldo
> > >
> > > I really appreciate Howard doing this work!
> > >
> > > I wonder if there are other cases where we want to synthesize events in
> > > BPF. For example, we may have fast and slow memory on a system; we
> > > could turn memory events into either fast or slow ones in
> > > BPF based on the memory accessed, so that fast/slow memory systems can
> > > be simulated without access to the hardware. This also feels like a perf
> > > script type problem. Perhaps we can add something to the bpf-output
> > > event so it can have multiple uses and not just off-cpu.
> > >
> > >
> > > I worry that by dropping short samples we create a property where
> > > off-cpu time + on-cpu time != wall clock time. Perhaps such short
> > > events can get pushed into Namhyung's "at the end" approach while
> > > longer ones get samples. Perhaps we only do that when the frequency
> > > is too great.
> >
> > Sounds good.  We might add an option to specify the threshold to
> > determine whether to dump the data or to save it for later.  But ideally
> > it should be able to find a good default.
> >
> > >
>
> >
> > Agreed!
> >
> > Thanks,
> > Namhyung
