Message-ID: <YK+LiSdWQngXjior@localhost.localdomain>
Date:   Thu, 27 May 2021 14:07:37 +0200
From:   Juri Lelli <juri.lelli@...hat.com>
To:     Daniel Bristot de Oliveira <bristot@...hat.com>
Cc:     linux-kernel@...r.kernel.org, Phil Auld <pauld@...hat.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Kate Carcia <kcarcia@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Alexandre Chartre <alexandre.chartre@...cle.com>,
        Clark Williams <williams@...hat.com>,
        John Kacur <jkacur@...hat.com>, linux-doc@...r.kernel.org
Subject: Re: [PATCH V3 0/9] hwlat improvements and osnoise/timerlat tracers

Hi,

On 14/05/21 22:51, Daniel Bristot de Oliveira wrote:
> This series proposes a set of improvements and new features for the
> tracing subsystem to facilitate the debugging of low latency
> deployments.
> 
> Currently, hwlat runs on a single CPU at a time, migrating across a
> set of CPUs in a round-robin fashion. This series extends hwlat to
> run on multiple CPUs in parallel, increasing the chances of detecting
> a hardware latency at the cost of using more CPU time.
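
For anyone who wants to poke at this from user space, a minimal, untested
C sketch of selecting hwlat and requesting the parallel per-cpu mode could
look like the following. The tracefs mount point (/sys/kernel/tracing) and
the hwlat_detector/mode knob name are assumptions based on this cover
letter, so they may not match the final interface:

/*
 * Sketch only: select the hwlat tracer and request the per-cpu parallel
 * mode.  The hwlat_detector/mode file name is assumed from this series;
 * tracefs is assumed to be mounted at /sys/kernel/tracing.  Needs root.
 */
#include <stdio.h>
#include <stdlib.h>

static void write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                exit(1);
        }
        fprintf(f, "%s\n", val);
        fclose(f);
}

int main(void)
{
        write_str("/sys/kernel/tracing/current_tracer", "hwlat");
        /* run a sampling thread on every CPU of the tracing cpumask */
        write_str("/sys/kernel/tracing/hwlat_detector/mode", "per-cpu");
        write_str("/sys/kernel/tracing/tracing_on", "1");
        return 0;
}
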
> 
> It also proposes a new tracer named osnoise, which aims to help users
> of isolcpus= (or a similar method) measure how much noise the OS and
> the hardware add to the isolated application. The osnoise tracer is
> based on the hwlat detector code. The difference is that, instead of
> sampling with interrupts disabled, the osnoise tracer samples the CPU
> with interrupts and preemption enabled. In this way, the sampling
> thread is exposed to any source of noise from the OS. The detection
> and classification of the type of noise are then made by observing the
> entry points of NMIs, IRQs, SoftIRQs, and threads. If none of these
> sources of noise is detected, the tool attributes the noise to the
> hardware. The tool periodically prints a status line reporting the
> total noise of the period, the maximum single noise observed, and the
> percentage of CPU time available for the task, along with the counters
> of each noise source. To debug the sources of noise, the tracer also
> adds a set of tracepoints that report each NMI, IRQ, SoftIRQ, and
> thread occurrence. These tracepoints record the starting time and the
> noise's net duration at the end of the noise. In this way, each noise
> occurrence needs only one tracepoint instead of two, and there is no
> need to manually account for the contribution of each noise source
> independently.
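
A similarly hedged sketch of enabling the tracer together with the
per-source tracepoints from a small C helper, again assuming tracefs at
/sys/kernel/tracing and an events/osnoise/ group as described above (both
are assumptions, not something taken from the patches themselves):

/*
 * Sketch only: enable the osnoise tracer plus its noise tracepoints and
 * dump the trace after a few seconds.  The events/osnoise/enable path is
 * assumed from the description above.  Needs root.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0) {
                perror(path);
                return -1;
        }
        close(fd);
        return 0;
}

int main(void)
{
        /* sample with interrupts and preemption enabled */
        if (write_str("/sys/kernel/tracing/current_tracer", "osnoise"))
                return 1;
        /* also emit one event per NMI/IRQ/SoftIRQ/thread noise occurrence */
        if (write_str("/sys/kernel/tracing/events/osnoise/enable", "1"))
                return 1;
        if (write_str("/sys/kernel/tracing/tracing_on", "1"))
                return 1;

        sleep(10);      /* let the per-cpu sampling threads run for a while */

        /* print whatever was collected */
        execlp("cat", "cat", "/sys/kernel/tracing/trace", (char *)NULL);
        perror("execlp");
        return 1;
}
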
> 
> Finally, the timerlat tracer aims to help preemptive kernel developers
> find sources of wakeup latency for real-time threads. The tracer
> creates a per-cpu kernel thread with real-time priority. The tracer
> thread sets a periodic timer to wake itself up and goes to sleep
> waiting for the timer to fire. At the wakeup, the thread computes a
> wakeup latency value as the difference between the current time and
> the absolute time at which the timer was set to expire. The tracer
> prints two lines at every activation. The first is the timer latency
> observed in hardirq context, before the activation of the thread. The
> second is the timer latency observed by the thread, which is the same
> level that cyclictest reports. The ACTIVATION ID field relates the IRQ
> execution to its respective thread execution. The tracer is built on
> top of the osnoise tracer, and the osnoise: events can be used to
> trace the sources of interference from NMIs, IRQs, and other threads.
> It also enables capturing the stacktrace at the IRQ context, which
> helps to identify the code path that can cause thread delays.
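
To make it concrete what the thread-level number corresponds to, here is a
small user-space analogue of that measurement, i.e. the same arithmetic
cyclictest uses. This is only an illustration of the metric, not the
tracer's implementation; running it under a real-time priority (e.g. via
chrt -f) gets closer to what the tracer thread does:

/*
 * Illustration only: sleep until an absolute expiration time, then
 * report how late the wakeup was (latency = now - absolute expiration).
 */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS       1000000L        /* 1 ms period */
#define NSEC_PER_SEC    1000000000L

int main(void)
{
        struct timespec next, now;

        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int i = 0; i < 10; i++) {
                /* arm the next absolute expiration */
                next.tv_nsec += PERIOD_NS;
                if (next.tv_nsec >= NSEC_PER_SEC) {
                        next.tv_nsec -= NSEC_PER_SEC;
                        next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                clock_gettime(CLOCK_MONOTONIC, &now);

                long long lat = (now.tv_sec - next.tv_sec) * (long long)NSEC_PER_SEC
                                + (now.tv_nsec - next.tv_nsec);
                printf("wakeup latency: %lld ns\n", lat);
        }
        return 0;
}
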

FWIW, I've been using the new tracers extensively downstream for a while
now, and I find them very useful and considerably more precise at
detecting problems than what we currently have available.

The fact that one can do almost everything needed to spot latency issues
entirely from inside the kernel, with a simple interface, is a big plus
to me as well.

I wouldn't mind if this gets accepted very soon! :)

Best,
Juri
