Date:	Fri, 7 Nov 2014 13:06:12 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Matt Fleming <matt@...sole-pimps.org>
Cc:	Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Andi Kleen <andi@...stfloor.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
	Kanaka Juvva <kanaka.d.juvva@...el.com>,
	Matt Fleming <matt.fleming@...el.com>
Subject: Re: [PATCH v3 10/11] perf/x86/intel: Perform rotation on Intel CQM
 RMIDs

On Thu, Nov 06, 2014 at 12:23:21PM +0000, Matt Fleming wrote:
> +/*
> + * Exchange the RMID of a group of events.
> + */
> +static unsigned int
> +intel_cqm_xchg_rmid(struct perf_event *group, unsigned int rmid)
> +{
> +	struct perf_event *event;
> +	unsigned int old_rmid = group->hw.cqm_rmid;
> +	struct list_head *head = &group->hw.cqm_group_entry;
> +
> +	lockdep_assert_held(&cache_mutex);
> +
> +	/*
> +	 * If our RMID is being deallocated, perform a read now.
> +	 */
> +	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
> +		struct intel_cqm_count_info info;
> +
> +		local64_set(&group->count, 0);
> +		info.event = group;
> +
> +		preempt_disable();
> +		smp_call_function_many(&cqm_cpumask, __intel_cqm_event_count,
> +				       &info, 1);
> +		preempt_enable();
> +	}

This suffers from the same issue as before: why not call that one function
instead of reimplementing it?

Also, I don't think we'd ever swap an rmid for another valid one, right?
So we could do this read/update unconditionally.

> +
> +	raw_spin_lock_irq(&cache_lock);
> +
> +	group->hw.cqm_rmid = rmid;
> +	list_for_each_entry(event, head, hw.cqm_group_entry)
> +		event->hw.cqm_rmid = rmid;
> +
> +	raw_spin_unlock_irq(&cache_lock);
> +
> +	return old_rmid;
> +}