Message-ID: <20200106103832.GO2810@hirez.programming.kicks-ass.net>
Date:   Mon, 6 Jan 2020 11:38:32 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     kan.liang@...ux.intel.com
Cc:     mingo@...hat.com, acme@...nel.org, tglx@...utronix.de,
        bp@...en8.de, linux-kernel@...r.kernel.org, eranian@...gle.com,
        alexey.budankov@...ux.intel.com, vitaly.slobodskoy@...el.com,
        ak@...ux.intel.com
Subject: Re: [RFC PATCH V2 2/7] perf: Init/fini PMU specific data

On Fri, Jan 03, 2020 at 11:39:19AM -0800, kan.liang@...ux.intel.com wrote:
> From: Kan Liang <kan.liang@...ux.intel.com>
> 
> The PMU specific data for the monitored tasks is only allocated during
> LBR call stack monitoring.
> 
> When an LBR call stack event is accounted, the perf_ctx_data for the
> related tasks is allocated/initialized by init_perf_ctx_data().
> When an LBR call stack event is unaccounted, the perf_ctx_data for the
> related tasks is detached/freed by fini_task_ctx_data().
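For a single task, what init_perf_ctx_data() / fini_task_ctx_data() amount
to is a refcounted attach/detach of a per-task data block, roughly like the
simplified sketch below (the struct layout and the helper names
attach_task_ctx_data()/detach_task_ctx_data() are illustrative, not the
patch's actual code; error paths and the PMU payload are omitted):

    #include <linux/refcount.h>
    #include <linux/rcupdate.h>
    #include <linux/sched.h>
    #include <linux/slab.h>

    struct perf_ctx_data {
    	struct rcu_head		rcu_head;
    	refcount_t		refcount;
    	/* PMU specific payload (e.g. the LBR call stack) would live here */
    };

    /* Attach ctx data to one task, or just take another reference. */
    static int attach_task_ctx_data(struct task_struct *task)
    {
    	struct perf_ctx_data *cd;

    	mutex_lock(&task->perf_event_mutex);
    	cd = task->perf_ctx_data;
    	if (cd) {
    		/* Already allocated, e.g. by another event: share it. */
    		refcount_inc(&cd->refcount);
    		goto unlock;
    	}

    	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
    	if (!cd) {
    		mutex_unlock(&task->perf_event_mutex);
    		return -ENOMEM;
    	}
    	refcount_set(&cd->refcount, 1);
    	rcu_assign_pointer(task->perf_ctx_data, cd);
    unlock:
    	mutex_unlock(&task->perf_event_mutex);
    	return 0;
    }

    /* Drop one reference; free behind an RCU grace period on the last one. */
    static void detach_task_ctx_data(struct task_struct *task)
    {
    	struct perf_ctx_data *cd;

    	mutex_lock(&task->perf_event_mutex);
    	cd = task->perf_ctx_data;
    	if (cd && refcount_dec_and_test(&cd->refcount)) {
    		rcu_assign_pointer(task->perf_ctx_data, NULL);
    		kfree_rcu(cd, rcu_head);
    	}
    	mutex_unlock(&task->perf_event_mutex);
    }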

I hate that 'fini' name. We don't use that anywhere else.

> +	raw_spin_lock_irqsave(&task_data_sys_wide_events_lock, flags);
> +
> +	/*
> +	 * Allocate perf_ctx_data for all existing threads when the
> +	 * first system-wide event is accounted.
> +	 * The perf_ctx_data for new threads will be allocated in
> +	 * perf_event_fork().
> +	 */
> +	if (atomic_inc_return(&nr_task_data_sys_wide_events) > 1)
> +		goto unlock;
> +
> +	rcu_read_lock();
> +	for_each_process_thread(g, p) {
> +		mutex_lock(&p->perf_event_mutex);
> +		if (p->perf_ctx_data) {
> +		/*
> +		 * The perf_ctx_data for this thread may have been
> +		 * allocated by a per-task event.
> +		 * Only update the refcount in that case.
> +		 */
> +			refcount_inc(&p->perf_ctx_data->refcount);
> +			mutex_unlock(&p->perf_event_mutex);
> +			continue;
> +		}
> +
> +		if (pos < num_thread) {
> +			refcount_set(&data[pos]->refcount, 1);
> +			rcu_assign_pointer(p->perf_ctx_data, data[pos++]);
> +		} else {
> +			/*
> +			 * New threads may have been created while we
> +			 * were allocating space.
> +			 * Track their number in nr_new_tasks.
> +			 */
> +			nr_new_tasks++;
> +		}
> +		mutex_unlock(&p->perf_event_mutex);
> +	}
> +	rcu_read_unlock();
> +
> +	raw_spin_unlock_irqrestore(&task_data_sys_wide_events_lock, flags);

Still NAK. That's some mighty broken code there.
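
Reduced to its essence, the quoted loop nests a sleeping lock inside a raw
spinlock with interrupts disabled, which is not allowed; presumably that is
at least part of what the NAK is about (the mail itself does not spell it
out). A minimal illustration of the invalid nesting, with made-up names
example_lock/example_mutex/broken_nesting:

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    static DEFINE_RAW_SPINLOCK(example_lock);
    static DEFINE_MUTEX(example_mutex);

    static void broken_nesting(void)
    {
    	unsigned long flags;

    	/* Raw spinlock held, interrupts off: this context must not sleep. */
    	raw_spin_lock_irqsave(&example_lock, flags);

    	/*
    	 * mutex_lock() may sleep, so calling it here is invalid; with
    	 * CONFIG_DEBUG_ATOMIC_SLEEP it triggers a "sleeping function
    	 * called from invalid context" warning.
    	 */
    	mutex_lock(&example_mutex);
    	mutex_unlock(&example_mutex);

    	raw_spin_unlock_irqrestore(&example_lock, flags);
    }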
