Message-ID: <20100506171141.GA5562@nowhere>
Date:	Thu, 6 May 2010 19:11:43 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Stephane Eranian <eranian@...gle.com>,
	LKML <linux-kernel@...r.kernel.org>, mingo@...e.hu,
	Paul Mackerras <paulus@...ba.org>,
	"David S. Miller" <davem@...emloft.net>
Subject: Re: [RFC] perf_events: ctx_flexible_sched_in() not maximizing PMU
	utilization

On Thu, May 06, 2010 at 04:20:40PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-05-06 at 16:03 +0200, Stephane Eranian wrote:
> > Hi,
> > 
> > Looking at ctx_flexible_sched_in(), the logic is that if group_sched_in()
> > fails for a HW group, then no other HW group in the list is even tried.
> > I don't understand this restriction. Groups are independent of each other.
> > The failure of one group should not block others from being scheduled,
> > otherwise you under-utilize the PMU.
> > 
> > What is the reason for this restriction? Can we lift it somehow?
> 
> Sure, but it will make scheduling much more expensive. The current
> scheme only ever checks the first N events because it stops at the
> first one that fails, and since you can fit at most N events on the
> PMU, it's constant time.
> 
> To fix this issue you'd basically have to iterate all the events and
> only stop once the PMU is fully booked, which degrades to an O(n)
> worst-case algorithm.
> 
> But yeah, I did think of making the thing an RB-tree and basically
> scheduling on service received; that should fix the lopsided
> round-robin we get with constrained events.
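
For context, the first-fail scheme described above is roughly this (a
simplified user-space sketch, not the actual kernel code; all types and
helper names below are made up for illustration):

/*
 * Walk the flexible groups in list order and stop at the first
 * hardware group that doesn't fit on the PMU (first-fail).
 */
struct hw_group {
        int nr_counters;        /* hardware counters this group needs */
};

/* try to reserve this group's counters; fail if it doesn't fit whole */
static int try_group_sched_in(struct hw_group *g, int *free_slots)
{
        if (g->nr_counters > *free_slots)
                return -1;
        *free_slots -= g->nr_counters;
        return 0;
}

static void flexible_sched_in(struct hw_group *groups, int n, int slots)
{
        int i;

        for (i = 0; i < n; i++) {
                if (try_group_sched_in(&groups[i], &slots))
                        break;          /* first failure ends the walk */
        }
}

Since at most N events fit on the PMU, the walk visits at most N
successful groups plus one failing one, hence the constant bound;
lifting the restriction means dropping the break and scanning the
whole list, which is the O(n) worst case mentioned above.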


I don't understand what you mean by "schedule on service received",
or why an rbtree would solve that.

Unless you're thinking of giving a weight to each group that has
hardware events? This weight would be the number of "slots" the group
would use on the hardware PMU, and you could then compare it against
the remaining free slots on the PMU.
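
That is, something like the following (hypothetical helpers building
on the sketch above; none of this is existing kernel API):

/* a group's weight = the number of PMU slots it needs */
static int group_weight(struct hw_group *g)
{
        return g->nr_counters;
}

/* schedulable iff its weight fits in the remaining free slots */
static int group_fits(struct hw_group *g, int free_slots)
{
        return group_weight(g) <= free_slots;
}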

So yeah, if the hw event groups are sorted by weight, then once one
fails we know the following ones will fail too. But that doesn't seem
like the right solution, as it would always give more chances to
low-weight groups and far fewer opportunities for the heavy ones to
be scheduled.
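
Concretely, sorting by ascending weight permits an early stop like
this (same made-up helpers as in the sketches above), but with, say,
4 free slots and weights {1, 1, 1, 3}, the three weight-1 groups win
every time and the weight-3 group never gets scheduled:

/* groups[] sorted by ascending weight */
static void sched_in_sorted(struct hw_group *groups, int n, int slots)
{
        int i;

        for (i = 0; i < n; i++) {
                if (!group_fits(&groups[i], slots))
                        break;  /* every later group weighs at least as much */
                slots -= group_weight(&groups[i]);
        }
}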
