Message-ID: <20141110213140.GG1292@console-pimps.org>
Date:	Mon, 10 Nov 2014 21:31:40 +0000
From:	Matt Fleming <matt@...sole-pimps.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Andi Kleen <andi@...stfloor.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-kernel@...r.kernel.org, "H. Peter Anvin" <hpa@...or.com>,
	Kanaka Juvva <kanaka.d.juvva@...el.com>,
	Matt Fleming <matt.fleming@...el.com>
Subject: Re: [PATCH v3 10/11] perf/x86/intel: Perform rotation on Intel CQM
 RMIDs

On Fri, 07 Nov, at 01:34:31PM, Peter Zijlstra wrote:
> On Thu, Nov 06, 2014 at 12:23:21PM +0000, Matt Fleming wrote:
> > +		min_queue_time = entry->queue_time +
> > +			msecs_to_jiffies(__rotation_period);
> > +
> > +		if (time_after(min_queue_time, now))
> > +			continue;
> 
> Why continue? This LRU is time ordered; later entries cannot be
> earlier, right?
 
Good point. We can just break out of the loop here.
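Something like this, i.e. the hunk above with the continue turned into
a break (rough sketch, same variables as in the v3 code):

		min_queue_time = entry->queue_time +
			msecs_to_jiffies(__rotation_period);

		/*
		 * The limbo LRU is time ordered, so once one entry
		 * hasn't been queued for the minimum queue time, no
		 * later entry has been either.
		 */
		if (time_after(min_queue_time, now))
			break;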

> > +		set_bit(entry->rmid, cqm_limbo_bitmap);
> > +		set_bit(entry->rmid, cqm_free_bitmap);
> > +	}
> > +
> > +	/*
> > +	 * Fast return if none of the RMIDs on the limbo list have been
> > +	 * sitting on the queue for the minimum queue time.
> > +	 */
> > +	*available = !bitmap_empty(cqm_limbo_bitmap, nr_bits);
> > +	if (!*available)
> > +		return false;
> > +
> > +	/*
> > +	 * Test whether an RMID is free for each package.
> > +	 */
> > +	preempt_disable();
> > +	smp_call_function_many(&cqm_cpumask, intel_cqm_stable, NULL, true);
> > +	preempt_enable();
> 
> I don't get the whole list -> bitmap -> list juggle.
> 
> enum rmid_cycle_state {
> 	RMID_AVAILABLE = 0,
> 	RMID_LIMBO,
> 	RMID_YOUNG,
> };
> 
> struct cqm_rmid_entry {
> 	...
> 	enum rmid_cycle_state state;
> };
> 
> static void __intel_cqm_stable(void *arg)
> {
> 	list_for_each_entry(entry, &cqm_rmid_limbo_lru, list) {
> 		if (entry->state == RMID_YOUNG)
> 			break;
> 
> 		if (__rmid_read(entry->rmid) > __threshold)
> 			entry->state = RMID_LIMBO;
> 	}
> }
> 
> static void intel_cqm_rmid_stabilize(void)
> {
> 	unsigned long queue_time = jiffies - msecs_to_jiffies(__rotation_period);
> 	unsigned int nr_limbo = 0;
> 	...
> 
> 	list_for_each_entry(entry, &cqm_rmid_limbo_lru, list) {
> 		if (time_after(entry->queue_time, queue_time))
> 			break;
> 
> 		entry->state = RMID_AVAILABLE;
> 		nr_limbo++;
> 	}
> 
> 	if (!nr_limbo)
> 		return;
> 
> 	on_each_cpu_mask(&cqm_cpumask, __intel_cqm_stable, NULL, true);
> 
> 	list_for_each_entry_safe(entry, tmp, &cqm_rmid_limbo_lru, list) {
> 		if (entry->state == RMID_YOUNG)
> 			break;
> 
> 		if (entry->state == RMID_AVAILABLE)
> 			list_move(&entry->list, &cqm_rmid_free_list);
> 	}
> }
> 
> 
> Would not something like that work?

Actually, yeah, that does look like it'd work. Are you OK with me adding
an enum to the cqm_rmid_entry? You had concerns in the past about
growing the size of the struct.
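Roughly what I'm thinking of (sketch only; rmid, list and queue_time
are the fields already referenced in the code above):

	enum rmid_cycle_state {
		RMID_AVAILABLE = 0,	/* below __threshold on all packages */
		RMID_LIMBO,		/* still above __threshold somewhere */
		RMID_YOUNG,		/* hasn't aged for __rotation_period yet */
	};

	struct cqm_rmid_entry {
		u32 rmid;
		enum rmid_cycle_state state;
		struct list_head list;
		unsigned long queue_time;
	};

If the extra bytes really are a worry, the state only ever takes three
values, so it could just as well be a u8.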

-- 
Matt Fleming, Intel Open Source Technology Center
