Message-ID: <20230215121638.1e86ffa1@canb.auug.org.au>
Date: Wed, 15 Feb 2023 12:16:38 +1100
From: Stephen Rothwell <sfr@...b.auug.org.au>
To: Steven Rostedt <rostedt@...dmis.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Jonathan Corbet <corbet@....net>
Cc: Bagas Sanjaya <bagasdotme@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Next Mailing List <linux-next@...r.kernel.org>,
Ross Zwisler <zwisler@...omium.org>,
Ross Zwisler <zwisler@...gle.com>
Subject: linux-next: manual merge of the ftrace tree with the jc_docs tree

Hi all,

Today's linux-next merge of the ftrace tree got a conflict in:

  Documentation/trace/histogram.rst

between commit:

  2abfcd293b79 ("docs: ftrace: always use canonical ftrace path")

from the jc_docs tree and commit:

  a2ff84a5d1e6 ("tracing/histogram: Wrap remaining shell snippets in code blocks")

from the ftrace tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
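
If anyone wants to reproduce the conflict (and this resolution) locally,
something like the sketch below should be close; the remote and branch
names are only placeholders for however you happen to track the jc_docs
and ftrace trees:

  $ git fetch jc_docs && git fetch ftrace          # placeholder remote names
  $ git checkout -b test-merge jc_docs/docs-next   # placeholder branch names
  $ git merge ftrace/for-next                      # conflicts in histogram.rst
  $ $EDITOR Documentation/trace/histogram.rst      # resolve as in the diff below
  $ git add Documentation/trace/histogram.rst
  $ git commit                                     # records the merge resolution
  $ git show --cc                                  # shows the combined diff
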
--
Cheers,
Stephen Rothwell
diff --cc Documentation/trace/histogram.rst
index 7b7e4893b8f6,8e95295e39b6..000000000000
--- a/Documentation/trace/histogram.rst
+++ b/Documentation/trace/histogram.rst
@@@ -1861,9 -1864,9 +1864,9 @@@ A histogram can now be defined for the
The above shows the latency "lat" in a power of 2 grouping.
Like any other event, once a histogram is enabled for the event, the
- output can be displayed by reading the event's 'hist' file.
+ output can be displayed by reading the event's 'hist' file::
- # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist
+ # cat /sys/kernel/tracing/events/synthetic/wakeup_latency/hist
# event histogram
#
@@@ -1908,10 -1911,10 +1911,10 @@@
The latency values can also be grouped linearly by a given size with
- the ".buckets" modifier and specify a size (in this case groups of 10).
+ the ".buckets" modifier and specify a size (in this case groups of 10)::
# echo 'hist:keys=pid,prio,lat.buckets=10:sort=lat' >> \
- /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
+ /sys/kernel/tracing/events/synthetic/wakeup_latency/trigger
# event histogram
#
@@@ -2052,13 -2182,13 +2182,13 @@@ The following commonly-used handler.act
# echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
wakeup_new_test($testpid) if comm=="cyclictest"' >> \
- /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger
+ /sys/kernel/tracing/events/sched/sched_wakeup_new/trigger
- Or, equivalently, using the 'trace' keyword syntax:
+ Or, equivalently, using the 'trace' keyword syntax::
- # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
- trace(wakeup_new_test,$testpid) if comm=="cyclictest"' >> \
- /sys/kernel/tracing/events/sched/sched_wakeup_new/trigger
+ # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
+ trace(wakeup_new_test,$testpid) if comm=="cyclictest"' >> \
- /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger
++ /sys/kernel/tracing/events/sched/sched_wakeup_new/trigger
Creating and displaying a histogram based on those events is now
just a matter of using the fields and new synthetic event in the
@@@ -2191,48 -2321,48 +2321,48 @@@
resulting latency, stored in wakeup_lat, exceeds the current
maximum latency, a snapshot is taken. As part of the setup, all
the scheduler events are also enabled, which are the events that
- will show up in the snapshot when it is taken at some point:
+ will show up in the snapshot when it is taken at some point::
- # echo 1 > /sys/kernel/tracing/events/sched/enable
- # echo 1 > /sys/kernel/debug/tracing/events/sched/enable
++ # echo 1 > /sys/kernel/tracing/events/sched/enable
- # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
- if comm=="cyclictest"' >> \
- /sys/kernel/tracing/events/sched/sched_waking/trigger
+ # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
+ if comm=="cyclictest"' >> \
- /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
++ /sys/kernel/tracing/events/sched/sched_waking/trigger
- # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0: \
- onmax($wakeup_lat).save(next_prio,next_comm,prev_pid,prev_prio, \
- prev_comm):onmax($wakeup_lat).snapshot() \
- if next_comm=="cyclictest"' >> \
- /sys/kernel/tracing/events/sched/sched_switch/trigger
+ # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0: \
+ onmax($wakeup_lat).save(next_prio,next_comm,prev_pid,prev_prio, \
+ prev_comm):onmax($wakeup_lat).snapshot() \
+ if next_comm=="cyclictest"' >> \
- /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
++ /sys/kernel/tracing/events/sched/sched_switch/trigger
When the histogram is displayed, for each bucket the max value
and the saved values corresponding to the max are displayed
following the rest of the fields.
If a snapshot was taken, there is also a message indicating that,
- along with the value and event that triggered the global maximum:
+ along with the value and event that triggered the global maximum::
- # cat /sys/kernel/tracing/events/sched/sched_switch/hist
- { next_pid: 2101 } hitcount: 200
- max: 52 next_prio: 120 next_comm: cyclictest \
- prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
- # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
++ # cat /sys/kernel/tracing/events/sched/sched_switch/hist
+ { next_pid: 2101 } hitcount: 200
+ max: 52 next_prio: 120 next_comm: cyclictest \
+ prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
- { next_pid: 2103 } hitcount: 1326
- max: 572 next_prio: 19 next_comm: cyclictest \
- prev_pid: 0 prev_prio: 120 prev_comm: swapper/1
+ { next_pid: 2103 } hitcount: 1326
+ max: 572 next_prio: 19 next_comm: cyclictest \
+ prev_pid: 0 prev_prio: 120 prev_comm: swapper/1
- { next_pid: 2102 } hitcount: 1982 \
- max: 74 next_prio: 19 next_comm: cyclictest \
- prev_pid: 0 prev_prio: 120 prev_comm: swapper/5
+ { next_pid: 2102 } hitcount: 1982 \
+ max: 74 next_prio: 19 next_comm: cyclictest \
+ prev_pid: 0 prev_prio: 120 prev_comm: swapper/5
- Snapshot taken (see tracing/snapshot). Details:
- triggering value { onmax($wakeup_lat) }: 572 \
- triggered by event with key: { next_pid: 2103 }
+ Snapshot taken (see tracing/snapshot). Details:
+ triggering value { onmax($wakeup_lat) }: 572 \
+ triggered by event with key: { next_pid: 2103 }
- Totals:
- Hits: 3508
- Entries: 3
- Dropped: 0
+ Totals:
+ Hits: 3508
+ Entries: 3
+ Dropped: 0
In the above case, the event that triggered the global maximum has
the key with next_pid == 2103. If you look at the bucket that has
@@@ -2310,15 -2440,15 +2440,15 @@@
$cwnd variable. If the value has changed, a snapshot is taken.
As part of the setup, all the scheduler and tcp events are also
enabled, which are the events that will show up in the snapshot
- when it is taken at some point:
+ when it is taken at some point::
- # echo 1 > /sys/kernel/tracing/events/sched/enable
- # echo 1 > /sys/kernel/tracing/events/tcp/enable
- # echo 1 > /sys/kernel/debug/tracing/events/sched/enable
- # echo 1 > /sys/kernel/debug/tracing/events/tcp/enable
++ # echo 1 > /sys/kernel/tracing/events/sched/enable
++ # echo 1 > /sys/kernel/tracing/events/tcp/enable
- # echo 'hist:keys=dport:cwnd=snd_cwnd: \
- onchange($cwnd).save(snd_wnd,srtt,rcv_wnd): \
- onchange($cwnd).snapshot()' >> \
- /sys/kernel/tracing/events/tcp/tcp_probe/trigger
+ # echo 'hist:keys=dport:cwnd=snd_cwnd: \
+ onchange($cwnd).save(snd_wnd,srtt,rcv_wnd): \
+ onchange($cwnd).snapshot()' >> \
- /sys/kernel/debug/tracing/events/tcp/tcp_probe/trigger
++ /sys/kernel/tracing/events/tcp/tcp_probe/trigger
When the histogram is displayed, for each bucket the tracked value
and the saved values corresponding to that value are displayed
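
As a quick sanity check on the resolution, none of the command examples in
the merged file should still reference the legacy debugfs path; something
like the following (run from the top of the merged tree) should come back
empty, or at most show deliberate prose mentions of the old mount point:

  $ grep -n 'debug/tracing' Documentation/trace/histogram.rst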