Date:	Thu, 06 May 2010 16:20:40 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Stephane Eranian <eranian@...gle.com>
Cc:	LKML <linux-kernel@...r.kernel.org>, mingo@...e.hu,
	Paul Mackerras <paulus@...ba.org>,
	Frédéric Weisbecker <fweisbec@...il.com>,
	"David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC] perf_events: ctx_flexible_sched_in() not maximizing PMU 
 utilization

On Thu, 2010-05-06 at 16:03 +0200, Stephane Eranian wrote:
> Hi,
> 
> Looking at ctx_flexible_sched_in(), the logic is that if  group_sched_in()
> fails for a HW group, then no other HW group in the list is even tried.
> I don't understand this restriction. Groups are independent of each other.
> The failure of one group should not block others from being scheduled,
> otherwise you under-utilize the PMU.
> 
> What is the reason for this restriction? Can we lift it somehow?

Sure, but it will make scheduling much more expensive. The current
scheme will only ever check the first N events because it stops at the
first one that fails, and since you can fit at most N events on the PMU
it's constant time.

To fix this issue you'd basically have to always iterate over all events
and only stop once the PMU is fully booked, which makes it an O(n)
worst-case algorithm.

But yeah, I did think of making the thing an RB-tree and basically
scheduling on service received, which should fix the lopsided RR we get
with constrained events.
