Message-ID: <874kzz4pb0.fsf@ashishki-desk.ger.corp.intel.com>
Date:   Wed, 23 Oct 2019 15:30:27 +0300
From:   Alexander Shishkin <alexander.shishkin@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>, mingo@...nel.org,
        peterz@...radead.org, linux-kernel@...r.kernel.org
Cc:     acme@...nel.org, mark.rutland@....com, jolsa@...hat.com,
        namhyung@...nel.org, andi@...stfloor.org,
        kan.liang@...ux.intel.com, alexander.shishkin@...ux.intel.com
Subject: Re: [PATCH 1/3] perf: Optimize perf_install_in_event()
Peter Zijlstra <peterz@...radead.org> writes:
> +	/*
> +	 * perf_event_attr::disabled events will not run and can be initialized
> +	 * without IPI. Except when this is the first event for the context, in
> +	 * that case we need the magic of the IPI to set ctx->is_active.
> +	 *
> +	 * The IOC_ENABLE that is sure to follow the creation of a disabled
> +	 * event will issue the IPI and reprogram the hardware.
> +	 */
> +	if (__perf_effective_state(event) == PERF_EVENT_STATE_OFF && ctx->nr_events) {
> +		raw_spin_lock_irq(&ctx->lock);
> +		if (task && ctx->task == TASK_TOMBSTONE) {

Confused: isn't that check redundant? If ctx->task reads TASK_TOMBSTONE,
task is always !NULL, afaict. And in any case, if a task context is going
away, we probably shouldn't be adding events to it. Or am I missing
something?
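
To make the question concrete, this is roughly how I'd expect the branch
to read with the "task &&" dropped. A sketch only: the quote above cuts
off after the TOMBSTONE check, so I'm assuming the rest of the branch
just adds the event and returns (add_event_to_ctx() as in the existing
slow path) rather than quoting your actual hunk:

	if (__perf_effective_state(event) == PERF_EVENT_STATE_OFF && ctx->nr_events) {
		raw_spin_lock_irq(&ctx->lock);
		/*
		 * TASK_TOMBSTONE implies a task context, so testing 'task'
		 * first buys nothing; just bail if the context is going away.
		 */
		if (ctx->task == TASK_TOMBSTONE) {
			raw_spin_unlock_irq(&ctx->lock);
			return;
		}
		/* Assumed remainder: install without the IPI and return. */
		add_event_to_ctx(event, ctx);
		raw_spin_unlock_irq(&ctx->lock);
		return;
	}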
Other than that, this makes sense to me, fwiw.
Regards,
--
Alex