Message-Id: <20240527090949.70151ecb2e7d98d4f284c2c8@kernel.org>
Date: Mon, 27 May 2024 09:09:49 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org, Masami
Hiramatsu <mhiramat@...nel.org>, Mark Rutland <mark.rutland@....com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Andrew Morton
<akpm@...ux-foundation.org>, Alexei Starovoitov
<alexei.starovoitov@...il.com>, Florent Revest <revest@...omium.org>,
Martin KaFai Lau <martin.lau@...ux.dev>, bpf <bpf@...r.kernel.org>, Sven
Schnelle <svens@...ux.ibm.com>, Alexei Starovoitov <ast@...nel.org>, Jiri
Olsa <jolsa@...nel.org>, Arnaldo Carvalho de Melo <acme@...nel.org>, Daniel
Borkmann <daniel@...earbox.net>, Alan Maguire <alan.maguire@...cle.com>,
Peter Zijlstra <peterz@...radead.org>, Thomas Gleixner
<tglx@...utronix.de>, Guo Ren <guoren@...nel.org>
Subject: Re: [PATCH 20/20] function_graph: Use bitmask to loop on fgraph
entry
On Fri, 24 May 2024 22:37:12 -0400
Steven Rostedt <rostedt@...dmis.org> wrote:
> From: "Steven Rostedt (Google)" <rostedt@...dmis.org>
>
> Instead of looping through all the elements of fgraph_array[] to see if
> there's a gops attached to one and then calling its gops->func(), create
> a fgraph_array_bitmask whose bits are set when an index in the array is
> reserved (via the simple LRU algorithm). Then only the bits set in this
> bitmask need to be looked at, so the loop visits just the elements of
> the array that have ops registered.
>
> Note, we do not care about races. If a bit is set before the gops is
> assigned, the loop only wastes time looking at the element and ignoring
> it (just as it did before this bitmask was added).
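
Just to spell out the effect for readers following the thread: below is
a minimal userspace sketch of the idea (the names and scaffolding are
illustrative, not the kernel API). Reserving an index sets its bit,
releasing clears it, and the hot loop then visits only the set bits
instead of scanning all FGRAPH_ARRAY_SIZE slots.

/*
 * Illustrative stand-in for fgraph_array_bitmask and the
 * for_each_set_bit() walk, open-coded with a compiler builtin.
 */
#include <stdio.h>

static unsigned long demo_bitmask;	/* stand-in for fgraph_array_bitmask */

static void demo_reserve(int idx) { demo_bitmask |=  (1UL << idx); }
static void demo_release(int idx) { demo_bitmask &= ~(1UL << idx); }

int main(void)
{
	unsigned long mask;
	int i;

	demo_reserve(2);
	demo_reserve(7);
	demo_reserve(11);
	demo_release(7);

	/* open-coded equivalent of for_each_set_bit(): visits only 2 and 11 */
	for (mask = demo_bitmask; mask; mask &= mask - 1) {
		i = __builtin_ctzl(mask);	/* index of lowest set bit */
		printf("would call fgraph_array[%d]->func()\n", i);
	}
	return 0;
}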
This is OK because we check gops == &fgraph_stub anyway.

By the way, shouldn't we also make the "if (gops == &fgraph_stub)"
check unlikely()?
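
I mean something along these lines (a sketch only, not the actual fgraph
code; the helper name here is made up, and unlikely() is spelled out the
way <linux/compiler.h> defines it):

#define unlikely(x)	__builtin_expect(!!(x), 0)	/* kernel-style hint */

struct fgraph_ops;	/* opaque for the sketch */

/* sketch: where the branch-prediction hint would sit in the per-index check */
static inline int fgraph_is_stub(struct fgraph_ops *gops,
				 struct fgraph_ops *stub)
{
	return unlikely(gops == stub);
}

With the bitmask already skipping unregistered slots, hitting the stub
entry should be the cold path, so the hint would match the expected
behavior.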
This change looks good to me.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>

Thank you,
>
> Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
> ---
> kernel/trace/fgraph.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
> index 5e8e13ffcfb6..1aae521e5997 100644
> --- a/kernel/trace/fgraph.c
> +++ b/kernel/trace/fgraph.c
> @@ -173,6 +173,7 @@ DEFINE_STATIC_KEY_FALSE(kill_ftrace_graph);
> int ftrace_graph_active;
>
> static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];
> +static unsigned long fgraph_array_bitmask;
>
> /* LRU index table for fgraph_array */
> static int fgraph_lru_table[FGRAPH_ARRAY_SIZE];
> @@ -197,6 +198,8 @@ static int fgraph_lru_release_index(int idx)
>
> fgraph_lru_table[fgraph_lru_last] = idx;
> fgraph_lru_last = (fgraph_lru_last + 1) % FGRAPH_ARRAY_SIZE;
> +
> + clear_bit(idx, &fgraph_array_bitmask);
> return 0;
> }
>
> @@ -211,6 +214,8 @@ static int fgraph_lru_alloc_index(void)
>
> fgraph_lru_table[fgraph_lru_next] = -1;
> fgraph_lru_next = (fgraph_lru_next + 1) % FGRAPH_ARRAY_SIZE;
> +
> + set_bit(idx, &fgraph_array_bitmask);
> return idx;
> }
>
> @@ -632,7 +637,8 @@ int function_graph_enter(unsigned long ret, unsigned long func,
> if (offset < 0)
> goto out;
>
> - for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
> + for_each_set_bit(i, &fgraph_array_bitmask,
> + sizeof(fgraph_array_bitmask) * BITS_PER_BYTE) {
> struct fgraph_ops *gops = fgraph_array[i];
> int save_curr_ret_stack;
>
> --
> 2.43.0
>
>
--
Masami Hiramatsu (Google) <mhiramat@...nel.org>