Message-Id: <20250221083946.ad92f6914b7bc6fe7bcf0423@kernel.org>
Date: Fri, 21 Feb 2025 08:39:46 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
 Masami Hiramatsu <mhiramat@...nel.org>, Mark Rutland <mark.rutland@....com>,
 Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 Andrew Morton <akpm@...ux-foundation.org>, Heiko Carstens <hca@...ux.ibm.com>,
 Sven Schnelle <svens@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
 Alexander Gordeev <agordeev@...ux.ibm.com>, stable@...r.kernel.org
Subject: Re: [PATCH v3 1/5] ftrace: Fix accounting of adding subops to a
 manager ops

On Thu, 20 Feb 2025 15:20:10 -0500
Steven Rostedt <rostedt@...dmis.org> wrote:

> From: Steven Rostedt <rostedt@...dmis.org>
> 
> Function graph uses a subops and manager ops mechanism to attach to
> ftrace.  The manager ops connects to ftrace, and the functions it attaches
> to are defined by the list of subops that it manages.
> 
> The function hash that defines what the above ops attaches to only limits
> the attached functions if the hash has any content. If the hash is empty,
> it means to trace all functions.
> 
> The creation of the manager ops hash is done by iterating over all the
> subops hashes. If any of the subops hashes is empty, it means that the
> manager ops hash must trace all functions as well.
> 
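To make that convention concrete, here is a rough, self-contained userspace
sketch (the toy_hash type and toy_* helpers are invented for this illustration
and are not the kernel's ftrace_hash API): an empty hash means "trace
everything", and the manager hash is the union of the subops hashes,
collapsing to empty as soon as any subops hash is empty.

  #include <stdio.h>

  /* Hypothetical stand-in for struct ftrace_hash: just count filtered functions */
  struct toy_hash { int count; };

  /* In the spirit of ftrace_hash_empty(): NULL or no entries == "trace everything" */
  static int toy_hash_empty(const struct toy_hash *hash)
  {
  	return !hash || hash->count == 0;
  }

  /* Union of the subops filters: if any subops traces everything, so must the manager */
  static struct toy_hash toy_union(const struct toy_hash *subops, int nr)
  {
  	struct toy_hash result = { .count = 0 };

  	for (int i = 0; i < nr; i++) {
  		if (toy_hash_empty(&subops[i]))
  			return (struct toy_hash){ .count = 0 };	/* empty == trace all */
  		result.count += subops[i].count;	/* merge entries (duplicates ignored here) */
  	}
  	return result;
  }

  int main(void)
  {
  	/* e.g. one subops filtering kernel_clone, another filtering schedule_timeout */
  	struct toy_hash subops[] = { { .count = 1 }, { .count = 1 } };

  	printf("manager filters %d function(s)\n", toy_union(subops, 2).count);
  	return 0;
  }
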
> The issue is in the creation of the manager ops hash. When a second subops
> is attached, a new hash is built by starting it as NULL and adding the
> subops one at a time. But the NULL hash is mistaken for an empty hash, and
> once an empty hash is found, the loop over the subops stops and simply
> enables all functions.
> 
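The failure mode can be seen in the same toy model (again only a sketch
reusing the toy_hash helpers above, not the real append_hashes() or
append_hash() code): the freshly created accumulator is itself "empty",
which is exactly the encoding for "trace everything", so the "an empty hash
does everything" shortcut fires before any subops is merged in.

  /* Buggy accumulation: "nothing added yet" and "trace everything" share one encoding */
  static struct toy_hash buggy_union(const struct toy_hash *subops, int nr)
  {
  	struct toy_hash result = { .count = 0 };	/* brand new, therefore "empty" */

  	for (int i = 0; i < nr; i++) {
  		/* "An empty hash does everything": true immediately for the new accumulator */
  		if (toy_hash_empty(&result))
  			break;		/* loop stops before any subops entries are copied */
  		result.count += subops[i].count;
  	}
  	return result;		/* still empty, so the manager ends up tracing every function */
  }
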
>   # echo "f:myevent1 kernel_clone" >> /sys/kernel/tracing/dynamic_events
>   # cat /sys/kernel/tracing/enabled_functions
> kernel_clone (1)           	tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> 
>   # echo "f:myevent2 schedule_timeout" >> /sys/kernel/tracing/dynamic_events
>   # cat /sys/kernel/tracing/enabled_functions
> trace_initcall_start_cb (1)             tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> run_init_process (1)            tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> try_to_run_init_process (1)             tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> x86_pmu_show_pmu_cap (1)                tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> cleanup_rapl_pmus (1)                   tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> uncore_free_pcibus_map (1)              tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> uncore_types_exit (1)                   tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> uncore_pci_exit.part.0 (1)              tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> kvm_shutdown (1)                tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> vmx_dump_msrs (1)               tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> vmx_cleanup_l1d_flush (1)               tramp: 0xffffffffc0309000 (ftrace_graph_func+0x0/0x60) ->ftrace_graph_func+0x0/0x60
> [..]
> 
> Fix this by initializing the new hash to NULL. If the hash is NULL, do not
> treat it as an empty hash, but instead allocate it by copying the content
> of the first subops. Then, on subsequent iterations, the new hash will not
> be NULL but will hold the content of the previous subops. Only if that
> first subops attaches to all functions may the new hash assume that the
> manager ops also needs to attach to all functions.
> 
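Modeled the same way, the fix keeps "not built yet" (NULL) distinct from
"intentionally empty" (trace everything), allocates on the first subops, and
hands back a shared empty sentinel instead of NULL so NULL can keep meaning
failure. This is only a sketch mirroring the patch below; the toy_* names are
invented, and <stdlib.h> plus the toy_hash model from the earlier sketch are
assumed.

  #include <stdlib.h>

  /* Shared, never-freed sentinel, playing the role the kernel's EMPTY_HASH plays */
  static const struct toy_hash toy_empty_hash = { .count = 0 };

  static const struct toy_hash *fixed_union(const struct toy_hash *subops, int nr)
  {
  	struct toy_hash *result = NULL;		/* NULL == "nothing merged yet" */

  	for (int i = 0; i < nr; i++) {
  		if (!result) {
  			/* First subops: allocate instead of mistaking NULL for "empty" */
  			result = calloc(1, sizeof(*result));
  			if (!result)
  				return NULL;	/* NULL is reserved for failure */
  		}
  		if (toy_hash_empty(&subops[i])) {	/* this subops traces everything... */
  			result->count = 0;		/* ...so the manager must as well */
  			break;
  		}
  		result->count += subops[i].count;	/* otherwise merge this subops' entries */
  	}

  	/* An empty result is legitimate ("trace all"): report it as a non-NULL sentinel */
  	if (result && result->count == 0) {
  		free(result);
  		result = NULL;
  	}
  	return result ? result : &toy_empty_hash;
  }
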

Looks good to me.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>

Thanks,

> Cc: stable@...r.kernel.org
> Fixes: 5fccc7552ccbc ("ftrace: Add subops logic to allow one ops to manage many")
> Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
> ---
> Changes since v2: https://lore.kernel.org/20250219220510.888959028@goodmis.org
> 
> - Have append_hashes() return EMPTY_HASH and not NULL if the resulting
>   new hash is empty.
> 
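One small note for readers of the hunk below: "new_hash ? : EMPTY_HASH" uses
the GNU C conditional with an omitted middle operand, where "a ? : b" yields
a if it evaluates to nonzero (evaluating it only once) and b otherwise. A
minimal standalone example (the variable name is made up):

  #include <stdio.h>

  int main(void)
  {
  	const char *name = NULL;

  	/* GNU C extension: "x ? : y" behaves like "x ? x : y" with x evaluated once */
  	printf("%s\n", name ? : "(unset)");	/* prints "(unset)" because name is NULL */
  	return 0;
  }
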
>  kernel/trace/ftrace.c | 33 ++++++++++++++++++++++-----------
>  1 file changed, 22 insertions(+), 11 deletions(-)
> 
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 728ecda6e8d4..bec54dc27204 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -3220,15 +3220,22 @@ static struct ftrace_hash *copy_hash(struct ftrace_hash *src)
>   *  The filter_hash updates uses just the append_hash() function
>   *  and the notrace_hash does not.
>   */
> -static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash)
> +static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash,
> +		       int size_bits)
>  {
>  	struct ftrace_func_entry *entry;
>  	int size;
>  	int i;
>  
> -	/* An empty hash does everything */
> -	if (ftrace_hash_empty(*hash))
> -		return 0;
> +	if (*hash) {
> +		/* An empty hash does everything */
> +		if (ftrace_hash_empty(*hash))
> +			return 0;
> +	} else {
> +		*hash = alloc_ftrace_hash(size_bits);
> +		if (!*hash)
> +			return -ENOMEM;
> +	}
>  
>  	/* If new_hash has everything make hash have everything */
>  	if (ftrace_hash_empty(new_hash)) {
> @@ -3292,16 +3299,18 @@ static int intersect_hash(struct ftrace_hash **hash, struct ftrace_hash *new_has
>  /* Return a new hash that has a union of all @ops->filter_hash entries */
>  static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
>  {
> -	struct ftrace_hash *new_hash;
> +	struct ftrace_hash *new_hash = NULL;
>  	struct ftrace_ops *subops;
> +	int size_bits;
>  	int ret;
>  
> -	new_hash = alloc_ftrace_hash(ops->func_hash->filter_hash->size_bits);
> -	if (!new_hash)
> -		return NULL;
> +	if (ops->func_hash->filter_hash)
> +		size_bits = ops->func_hash->filter_hash->size_bits;
> +	else
> +		size_bits = FTRACE_HASH_DEFAULT_BITS;
>  
>  	list_for_each_entry(subops, &ops->subop_list, list) {
> -		ret = append_hash(&new_hash, subops->func_hash->filter_hash);
> +		ret = append_hash(&new_hash, subops->func_hash->filter_hash, size_bits);
>  		if (ret < 0) {
>  			free_ftrace_hash(new_hash);
>  			return NULL;
> @@ -3310,7 +3319,8 @@ static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
>  		if (ftrace_hash_empty(new_hash))
>  			break;
>  	}
> -	return new_hash;
> +	/* Can't return NULL as that means this failed */
> +	return new_hash ? : EMPTY_HASH;
>  }
>  
>  /* Make @ops trace evenything except what all its subops do not trace */
> @@ -3505,7 +3515,8 @@ int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int
>  		filter_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->filter_hash);
>  		if (!filter_hash)
>  			return -ENOMEM;
> -		ret = append_hash(&filter_hash, subops->func_hash->filter_hash);
> +		ret = append_hash(&filter_hash, subops->func_hash->filter_hash,
> +				  size_bits);
>  		if (ret < 0) {
>  			free_ftrace_hash(filter_hash);
>  			return ret;
> -- 
> 2.47.2
> 
> 


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
