Message-ID: <1263825158.4283.590.camel@laptop>
Date:	Mon, 18 Jan 2010 15:32:38 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Stephane Eranian <eranian@...gle.com>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
	davem@...emloft.net, perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH]  perf_events: improve x86 event scheduling (v5)

On Mon, 2010-01-18 at 15:20 +0100, Frederic Weisbecker wrote:
> 
> 
> Well, in appearance things go through one pass.
> 
> But actually it isn't: there is a first iteration that collects
> the events (walking through the group list, filtering out the soft events),
> and a second iteration that checks the constraints and schedules (but
> does not apply) the events.
> 
> And thereafter we schedule the soft events (and revert the whole
> thing if needed).
> 
> This is one pass from group_sched_in()'s POV, but at the cost
> of reimplementing what the core does wrt soft events and iterations.
> And not only is it reinventing the wheel, it also produces more
> iterations than we need.
> 
> If we were using the common pmu->enable() from group/event_sched_in(),
> that would build the collection, with only one iteration through the
> group list (instead of one to collect, and one for the software
> events).
> 
> And the constraints can be validated in a second explicit iteration
> through hw_check_constraint(), as is currently done explicitly
> from hw_perf_group_sched_in(), which calls x86_schedule_event().
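
For reference, the flow you describe is roughly the below (a sketch
only, with made-up helper names, not the actual hw_perf_group_sched_in()
code):

	/* sketch: the two explicit iterations over the group */
	static int group_sched_in_sketch(struct perf_event *leader,
					 struct cpu_hw_events *cpuc)
	{
		struct perf_event *event;
		int n = 0;

		/* iteration 1: collect hw events, filtering out soft events */
		cpuc->event_list[n++] = leader;
		list_for_each_entry(event, &leader->sibling_list, group_entry)
			if (!is_software_event(event))
				cpuc->event_list[n++] = event;

		/*
		 * iteration 2: check constraints and assign counters,
		 * without touching the hardware yet
		 */
		if (x86_schedule_events_sketch(cpuc, n))
			return -EAGAIN;

		/*
		 * then the soft events get scheduled, and the whole
		 * thing is reverted if any of them fails
		 */
		return 0;
	}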

The thing is, we cannot do that, because we currently require ->enable() to
report schedulability. Now we could add an argument to ->enable(), or add
callbacks like I suggested, to convey that state.
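
Roughly one of these two shapes, I mean (a sketch only; the names below
are made up and not part of the actual pmu interface):

	/*
	 * Sketch of the two options, with made-up names:
	 *  (a) pass a flag so ->enable() can either just collect the
	 *      event or also report schedulability, or
	 *  (b) bracket the group with begin/end callbacks and do the
	 *      constraint check once, in the end callback.
	 */
	struct pmu_sketch {
		int  (*enable)(struct perf_event *event, int check_sched); /* (a) */
		void (*group_sched_begin)(struct pmu_sketch *pmu);         /* (b) */
		int  (*group_sched_end)(struct pmu_sketch *pmu);           /* (b) */
	};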

> The fact is, with this patch we have a _lot_ of iterations each
> time x86 gets scheduled. That is really a lot for a fast path.
> But considering the dynamic cpu events / task events series
> we can have, I don't see any other alternative.

Luckily it tries to reuse a previous configuration, so in practice the
schedule phase is really quick, amortized O(1), as long as we don't change
the set.
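
The idea is roughly this (a sketch only, not the actual
x86_schedule_events() code; n_prev, prev_event_list and
solve_constraints() are made-up names):

	/*
	 * Fast path: if the event set is unchanged since the last
	 * successful schedule, keep the cached counter assignment and
	 * skip the full constraint pass.
	 */
	static int schedule_events(struct cpu_hw_events *cpuc, int n)
	{
		if (n == cpuc->n_prev &&
		    !memcmp(cpuc->event_list, cpuc->prev_event_list,
			    n * sizeof(cpuc->event_list[0])))
			return 0;	/* previous assignment still valid */

		return solve_constraints(cpuc, n);	/* full constraint pass */
	}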

> Do you mean this:
> 
> hw_perf_group_sched_in_begin(&x86_pmu);
> 
> for_each_event(event, group) {
>         event->enable();        //do the collection here
> }
> 
> 
> if (hw_perf_group_sched_in_end(&x86_pmu)) {
>         rollback...
> }
> 
> That requires knowing in advance whether we have a hardware pmu event
> in the list, though (that could be a flag in the group).

Good point, but your proposed hw_check_constraint() call needs to know
exactly the same thing.
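
Either way it would look something like this (a sketch only; the
group_has_hw_event flag and the helper name are made up):

	/*
	 * Mark the group leader when a non-software event is attached,
	 * so the hardware pass can be skipped entirely for pure
	 * software groups.
	 */
	static void mark_group_hw(struct perf_event *leader,
				  struct perf_event *event)
	{
		if (!is_software_event(event))
			leader->group_has_hw_event = 1;
	}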

