Message-ID: <e5547e9a-d3d8-2cd1-7cb9-e567c798e78d@redhat.com>
Date: Thu, 15 Apr 2021 15:09:50 +0200
From: Daniel Bristot de Oliveira <bristot@...hat.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, kcarcia@...hat.com,
Jonathan Corbet <corbet@....net>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Alexandre Chartre <alexandre.chartre@...cle.com>,
Clark Williams <williams@...hat.com>,
John Kacur <jkacur@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>, linux-doc@...r.kernel.org
Subject: Re: [RFC PATCH 1/5] tracing/hwlat: Add a cpus file specific for
hwlat_detector
On 4/14/21 4:10 PM, Steven Rostedt wrote:
> On Thu, 8 Apr 2021 16:13:19 +0200
> Daniel Bristot de Oliveira <bristot@...hat.com> wrote:
>
>> Provides a "cpus" interface to the hardware latency detector. By
>> default, it lists all CPUs, allowing hwlatd threads to run on any online
>> CPU of the system.
>>
>> Writing to this interface restricts the execution of hwlatd to the given
>> set of CPUs. Note that hwlatd also respects the "tracing_cpumask," so
>> hwlatd threads will run only on the CPUs allowed both here AND in
>> "tracing_cpumask."
>>
>> Why not keep just "tracing_cpumask"? Because the user might be interested
>> in tracing what is running on other CPUs. For instance, one might run
>> hwlatd on one HT CPU while observing what is running on its HT sibling.
>> The cpu list format is also more intuitive.
>>
>> This is also in preparation for the per-cpu mode.
>
> OK, I'm still not convinced that you couldn't use tracing_cpumask here.
> Because we have instances, and tracing_cpumask is defined per instance, you
> could simply do:
>
> # cd /sys/kernel/tracing
> # mkdir instances/hwlat
> # echo a > instances/hwlat/tracing_cpumask
> # echo hwlat > instances/hwlat/current_tracer
>
> Now the tracing_cpumask above only affects the hwlat tracer.
>
> I'm just reluctant to add more tracing files if the current ones can be
> used without too much trouble. For being intuitive, let's make user space
> tools hide the nastiness of the kernel interface ;-)
[discussing the cpus file in both hwlat and osnoise here...]
I see your point, but having two different instances gives you two
different output "trace" files... and it is not always practical to
merge them when using only the tracefs interface (I like to use it, and
it is very handy when dealing with immutable systems at customers...).
Thinking aloud, one might say: sort the two trace files by timestamp...
and another might reply: but some lines do not have an associated
timestamp, e.g., the stacktrace.
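To make it concrete: a naive merge, assuming two instances named "hwlat"
and "noise" (the names are made up), would be something like:
----- %< -----
# cd /sys/kernel/tracing
# cat instances/hwlat/trace instances/noise/trace | grep -v '^#' | sort -n -k4
----- >% -----
field 4 is the "1820.717780:" timestamp of each event line, but the "=>"
stacktrace lines have no such field, so sort scatters them away from the
events they belong to.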
Anyway, the cpus file on hwlat is not a super essential thing, I agree...
hwlat runs with interrupts disabled, so not much can go wrong (although I
really needed the trace from a sibling CPU in a real case).
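That case looked roughly like this (the CPU numbers are made up, assuming
CPUs 2 and 3 are HT siblings, and using the "hwlat_detector/cpus" file
this patch adds):
----- %< -----
# cd /sys/kernel/tracing
# echo 2 > hwlat_detector/cpus
# echo 1 > events/irq/enable
# echo hwlat > current_tracer
# cat trace_pipe | grep '\[003\]'
----- >% -----
hwlatd samples only on CPU 2, while the irq events are still traced on
all CPUs, including the sibling CPU 3.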
But for the osnoise tracer, the cpus file is really useful. For instance, on
a system with CPU 7 isolated:
----- %< -----
# echo 7 > osnoise/cpus
# echo target_cpu == 7 > events/sched/sched_wakeup/filter
# echo stacktrace if target_cpu == 7 > events/sched/sched_wakeup/trigger
# echo 1 > events/sched/sched_wakeup/enable
# echo osnoise:thread_noise > set_event
# echo osnoise > current_tracer
# cat trace
[find...]
kworker/0:1-7 [000] d..5 1820.717780: <stack trace>
=> trace_event_raw_event_sched_wakeup_template
=> __traceiter_sched_wakeup
=> ttwu_do_wakeup
=> try_to_wake_up
=> __queue_work
=> queue_delayed_work_on
=> vmstat_shepherd
=> process_one_work
=> worker_thread
=> kthread
=> ret_from_fork
kworker/7:1-410 [007] d..3 1820.717790: thread_noise: kworker/7:1:410 start 1820.717786519 duration 3626 ns
osnoise/7-1000 [007] .... 1821.582340: 1000000 90 99.99100 15 1 0 12 6 1
----- >% -----
This made it easy to find that the '1' thread noise was a kworker,
dispatched from CPU 0 by "vmstat_shepherd".
Also, the osnoise dir is not added to new instances... so it only
costs "one" file...
> -- Steve
>