Message-ID: <20240607093254.GN8774@noisy.programming.kicks-ass.net>
Date: Fri, 7 Jun 2024 11:32:54 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ben Gainey <ben.gainey@....com>
Cc: mingo@...hat.com, acme@...nel.org, namhyung@...nel.org,
	james.clark@....com, mark.rutland@....com,
	alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
	irogers@...gle.com, adrian.hunter@...el.com,
	linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 1/4] perf: Support PERF_SAMPLE_READ with inherit

On Thu, Jun 06, 2024 at 03:40:56PM +0100, Ben Gainey wrote:
> This change allows events to use PERF_SAMPLE_READ with inherit
> so long as PERF_SAMPLE_TID is also set.
> 
> In this configuration, an event will be inherited into any
> child processes / threads, allowing convenient profiling of a
> multiprocess or multithreaded application, whilst allowing
> profiling tools to collect per-thread samples, in particular
> of groups of counters.

Perhaps a few words on *WHY* this is important.
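
For context, the configuration under discussion looks something like
this from userspace (the event choice and period here are purely
illustrative):

#include <linux/perf_event.h>

	struct perf_event_attr attr = {
		.type          = PERF_TYPE_HARDWARE,
		.size          = sizeof(attr),
		.config        = PERF_COUNT_HW_CPU_CYCLES,
		.sample_period = 100000,
		/* per-thread values in samples; TID identifies the thread */
		.sample_type   = PERF_SAMPLE_READ | PERF_SAMPLE_TID,
		.read_format   = PERF_FORMAT_TOTAL_TIME_ENABLED |
				 PERF_FORMAT_TOTAL_TIME_RUNNING,
		/* follow the event into forked children/threads */
		.inherit       = 1,
	};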

> The read_format field of both PERF_RECORD_READ and PERF_RECORD_SAMPLE
> is changed by this new configuration, but calls to `read()` on the same
> event file descriptor are unaffected and continue to return the
> cumulative total.

This is unfortunate. Up to this point they were the same. Also, I see no
change to the uapi file. So were you trying to say that only
read_format::value is changed to be the thread-local value as opposed to
the hierarchy total?

Please fix the wording to be unambiguous as to what is actually meant.
Also try and justify why it is okay to break this symmetry.
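
For reference, the non-group read_format layout as documented in
include/uapi/linux/perf_event.h (the optional fields are present only
when the corresponding flag is set in attr.read_format):

	struct read_format {
		u64 value;		/* the field in question */
		u64 time_enabled;	/* if PERF_FORMAT_TOTAL_TIME_ENABLED */
		u64 time_running;	/* if PERF_FORMAT_TOTAL_TIME_RUNNING */
		u64 id;			/* if PERF_FORMAT_ID */
		u64 lost;		/* if PERF_FORMAT_LOST */
	};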

> @@ -3532,11 +3544,18 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
>  			perf_ctx_disable(ctx, false);
>  
>  			/* PMIs are disabled; ctx->nr_pending is stable. */
> -			if (local_read(&ctx->nr_pending) ||
> +			if (ctx->nr_inherit_read ||
> +			    next_ctx->nr_inherit_read ||
> +			    local_read(&ctx->nr_pending) ||
>  			    local_read(&next_ctx->nr_pending)) {

This seems unfortunate: nr_pending and nr_inherit_read are both used
exclusively to inhibit this context-switch optimization. Surely they can
share the exact same counter.

That is, rename nr_pending and use it for both?
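
That is, something like the below, where the combined name is purely
illustrative, not a proposal:

	/* sketch: one counter for every reason to take the slow path */
	if (local_read(&ctx->nr_no_switch_fast) ||
	    local_read(&next_ctx->nr_no_switch_fast)) {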

>  				/*
>  				 * Must not swap out ctx when there's pending
>  				 * events that rely on the ctx->task relation.
> +				 *
> +				 * Likewise, when a context contains inherit +
> +				 * SAMPLE_READ events they should be switched
> +				 * out using the slow path so that they are
> +				 * treated as if they were distinct contexts.
>  				 */
>  				raw_spin_unlock(&next_ctx->lock);
>  				rcu_read_unlock();
> @@ -4552,11 +4571,19 @@ static void __perf_event_read(void *info)
>  	raw_spin_unlock(&ctx->lock);
>  }
>  
> -static inline u64 perf_event_count(struct perf_event *event)
> +static inline u64 perf_event_count_cumulative(struct perf_event *event)

I don't think you need this -- overly long and hard to type function
name...

>  {
>  	return local64_read(&event->count) + atomic64_read(&event->child_count);
>  }
>  
> +static inline u64 perf_event_count(struct perf_event *event, bool self_value_only)
> +{
> +	if (self_value_only && has_inherit_and_sample_read(&event->attr))
> +		return local64_read(&event->count);

... if this @self_value_only argument was actually used as such -- it
isn't, see how you use 'from_sample', which is something else entirely.
Which then also caused you to fix it up and make a mess with that &&
has_inherit_and_sample_read() nonsense. (also, shorter function names,
more good)

> +
> +	return perf_event_count_cumulative(event);
> +}

That is, I would really rather you had:

static inline u64 perf_event_count(struct perf_event *event, bool self)
{
	if (self)
		return local64_read(&event->count);

	return local64_read(&event->count) + atomic64_read(&event->child_count);
}

And then actually use that argument as intended.
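
Such that the sample path computes the flag itself, e.g. (sketch,
reusing the patch's has_inherit_and_sample_read() helper):

	/* only a sample wants the thread-local value, and only for
	 * inherited SAMPLE_READ events */
	bool self = from_sample && has_inherit_and_sample_read(&event->attr);

	values[n++] = perf_event_count(event, self);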

> @@ -7205,13 +7232,14 @@ void perf_event__output_id_sample(struct perf_event *event,
>  
>  static void perf_output_read_one(struct perf_output_handle *handle,
>  				 struct perf_event *event,
> -				 u64 enabled, u64 running)
> +				 u64 enabled, u64 running,
> +				 bool from_sample)
>  {
>  	u64 read_format = event->attr.read_format;
>  	u64 values[5];
>  	int n = 0;
>  
> -	values[n++] = perf_event_count(event);
> +	values[n++] = perf_event_count(event, from_sample);

...observe the fail... from_sample != self-value-only

>  	if (read_format & PERF_FORMAT_TOTAL_TIME_ENABLED) {
>  		values[n++] = enabled +
>  			atomic64_read(&event->child_total_time_enabled);

