Message-ID: <6808245d-208c-c6d2-1c6e-7410df158992@redhat.com>
Date:   Sat, 12 Jun 2021 11:41:41 +0200
From:   Daniel Bristot de Oliveira <bristot@...hat.com>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     linux-kernel@...r.kernel.org, Phil Auld <pauld@...hat.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Kate Carcia <kcarcia@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Alexandre Chartre <alexandre.chartre@...cle.com>,
        Clark Williams <williams@...hat.com>,
        John Kacur <jkacur@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>, linux-doc@...r.kernel.org
Subject: Re: [PATCH V3 9/9] tracing: Add timerlat tracer

On 6/11/21 10:03 PM, Steven Rostedt wrote:
> On Fri, 11 Jun 2021 14:59:13 +0200
> Daniel Bristot de Oliveira <bristot@...hat.com> wrote:
> 
>> ------------------ %< -----------------------------
>> It is worth mentioning that the *duration* values reported
>> by the osnoise: events are *net* values. For example, the
>> thread_noise does not include the duration of the overhead caused
>> by the IRQ execution (which indeed accounted for 12736 ns). But
>> the values reported by the timerlat tracer (timerlat_latency)
>> are *gross* values.
>>
>> The art below illustrates a CPU timeline and how the timerlat tracer
>> observes it at the top and the osnoise: events at the bottom. Each "-"
>> in the timelines means 1 us, and the time moves ==>:
>>
>>      External          context irq                  context thread
>>       clock           timer_latency                 timer_latency
>>       event              18 us                          48 us 
>>         |                  ^                             ^
>>         v                  |                             |
>>         |------------------|                             |       <-- timerlat irq timeline
>>         |------------------+-----------------------------|       <-- timerlat thread timeline
>>                            ^                             ^
>>  ===================== CPU timeline ======================================
>>                    [timerlat/ irq]  [ dev irq ]                          
>>  [another thread...^             v..^         v........][timerlat/ thread]  
>>  ===================== CPU timeline ======================================
>>                    |-------------|  |---------|                  <-- irq_noise timeline
>>                                  |--^         v--------|         <-- thread_noise timeline
>>                                  |            |        |
>>                                  |            |        + thread_noise: 10 us
>>                                  |            +-> irq_noise: 9 us
>>                                  +-> irq_noise: 13 us
>>
>>  --------------- >% --------------------------------  
> 
> That's really busy, and honestly, I can't tell what is what.
> 
> The "context irq timer_latency" is a confusing name. Could we just have
> that be "timer irq latency"? And "context thread timer_latency" just be
> "thread latency". Adding too much text to the name actually makes it harder
> to understand. We want to simplify it, not make people have to think harder
> to see it.
> 
> I think we can get rid of the "<-- .* timeline" to the right.  I don't
> think they are necessary. Again, the more you add to the diagram, the
> busier it looks, and the harder it is to read.
> 
> Could we switch "[timerlat/ irq]" to just "[timer irq]" and explain how
> that "context irq timer_latency"/"timer irq latency" is related?
> 
> Should probably state that the "dev irq" is an unrelated device interrupt
> that happened.
> 
> What's with the two CPU timeline lines? Now there I think it would be
> better to have the arrow text by itself.
> 
> And finally, not sure if you plan on doing this, but have an output of the
> trace that would show the above.
> 
> Thus, here's what I would expect to see:
> 
>       External         
>        clock         timer irq latency                  thread latency
>        event              18 us                          48 us 
>          |                  ^                             ^
>          v                  |                             |
>          |------------------|                             |
>          |------------------+-----------------------------|       
>                             ^                             ^
>   =========================================================================
>                     [timerlat/ irq]  [ dev irq ]                             
>   [another thread...^             v..^         v........][timerlat/ thread]  <-- CPU task timeline
>   =========================================================================
>                     |-------------|  |---------|
>                                   |--^         v--------|
>                                   |            |        |
>                                   |            |        + thread_noise: 10 us
>                                   |            +-> irq_noise: 9 us
>                                   +-> irq_noise: 13 us

It looks good to me!

>  The "[ dev irq ]" above is an interrupt from some device on the system that
>  causes extra noise to the timerlat task.
> 
> I think the above may be easier to understand, especially if the trace
> output that represents it is below.

Ok, I can try to capture a trace sample and represent it in the ASCII art
format above.

> Also, I have to ask, shouldn't the "thread noise" really start at the
> "External clock event"?

To go in that direction, we would need to track the things that delayed the IRQ
execution. We already track other IRQs' execution, but we would have to keep a
history of past executions and "play them back". This adds overhead linear in
the number of past events... and/or some pessimism.
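
Roughly, the bookkeeping would look like this (a hypothetical sketch in plain
C, not kernel code - the names and sizes are made up for illustration):

    #include <stdint.h>

    struct noise_event {
            uint64_t start;         /* when the noise began */
            uint64_t duration;      /* how long it ran */
    };

    #define HISTORY_LEN 128
    static struct noise_event history[HISTORY_LEN];
    static unsigned int nr_events;

    /*
     * Sum the noise that landed inside [period_start, now]. This walk is
     * O(nr_events) on *every* timer activation - that is the linear
     * overhead mentioned above.
     */
    static uint64_t playback(uint64_t period_start, uint64_t now)
    {
            uint64_t noise = 0;

            for (unsigned int i = 0; i < nr_events; i++) {
                    struct noise_event *e = &history[i];

                    if (e->start >= period_start && e->start < now)
                            noise += e->duration;
            }
            return noise;
    }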

We would also have to track IRQ-disabled sections. The problem with tracking
those is that it depends on tracing infrastructure that is not enabled by
default on distros... And there are IRQ delay causes that are not related to
the thread, like idle states... (and all of these add more and more state to
track)...

So, I added the timer irq latency to point out when the problem is caused by
things that delay the IRQ, while the stack trace helps us figure out where the
problem is in the thread context. After the IRQ execution, the thread noise is
still helpful - even without accounting for the thread noise before the IRQ.
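
For reference, the thread side measures a *gross* value the same way a
userspace tool like cyclictest does: arm an absolute timer, sleep, read the
clock, and take the difference. A minimal userspace sketch of the idea (an
analogue only, not the tracer's in-kernel code; period and iteration count
are arbitrary):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
            struct timespec next, now;

            clock_gettime(CLOCK_MONOTONIC, &next);
            for (int i = 0; i < 5; i++) {
                    /* next absolute expiration: 1 s period */
                    next.tv_sec += 1;
                    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                                    &next, NULL);
                    /*
                     * now - next is the *gross* latency: it includes the
                     * IRQ latency, the IRQ handler, any unrelated dev irq,
                     * and the scheduling (thread) noise - everything the
                     * diagram above shows.
                     */
                    clock_gettime(CLOCK_MONOTONIC, &now);
                    long long lat =
                            (now.tv_sec - next.tv_sec) * 1000000000LL +
                            (now.tv_nsec - next.tv_nsec);
                    printf("gross wakeup latency: %lld ns\n", lat);
            }
            return 0;
    }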

Furthermore, if we start trying to abstract the causes of delay, we will end up
with the rtsl :-). The rtsl events and abstractions give us the worst-case
scheduling latency without adding unneeded pessimism (a sound analysis). It
covers all the possible cases, for any scheduler, without even needing a
measuring thread like the one here (or in cyclictest) - and this is a good
thing because it does not change the target system's workload.

The problem is that... rtsl depends on tracing infrastructure that is not
enabled by default on distros, like the preempt_ and irq_ disable events.

So, I see timerlat as a tool for on-the-fly usage, like debugging on customer
systems (as we do at Red Hat). It can be enabled by default on distros because
it only depends on existing, already enabled events and causes no overhead when
disabled. rtsl targets more specific cases, like safety-critical systems, where
the overhead is acceptable in exchange for the sound analysis of the scheduling
bound (which is rooted in a formal specification & analysis of the system).

-- Daniel

> -- Steve
> 
