Date:	Thu, 21 Jan 2010 11:45:15 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Stephane Eranian <eranian@...gle.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
	davem@...emloft.net, perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH] perf_events: improve x86 event scheduling (v5)

On Thu, Jan 21, 2010 at 11:08:12AM +0100, Stephane Eranian wrote:
> >> > Do you mean this:
> >> >
> >> > hw_perf_group_sched_in_begin(&x86_pmu);
> >> >
> >> > for_each_event(event, group) {
> >> >         event->enable();        //do the collection here
> >> > }
> >> >
> >> >
> >> > if (hw_perf_group_sched_in_end(&x86_pmu)) {
> >> >         rollback...
> >> > }
> >> >
> >> > That requires knowing in advance whether we have a hardware pmu
> >> > in the list, though (can be a flag in the group).
> >>
> 
> I don't think this model can work without scheduling for each event.
> 
> Imagine the situation where you have more events than you have
> counters. At each tick you:
>    - disable all events
>    - rotate the list
>    - collect events from the list
>    - schedule events
>    - activate
> 
> Collection is the accumulation of events until you have as many as you
> have counters, given that you defer scheduling until the end (see loop
> above).
> 
> But that does not mean you can schedule what you have accumulated. And then what
> do you do, i.e., rollback to what?



If the scheduling validation fails, then you just need to roll back
the whole group.

That's essentially what you did in your patch, right? Except the loop
is now handled by the core code.


> 
> With incremental scheduling, you can skip a group that conflicts with
> the groups already accumulated. What hw_perf_group_sched_in() gives you
> is simply a way to do incremental scheduling on a whole event group at
> once.


I don't understand why that can't be done with the above model.
In your patch we iterate through the whole group, collect events,
and schedule them.

With the above, the collection is just done on enable(), and the scheduling
is done with the new pmu callbacks.

The thing is essentially the same; where is the obstacle?


> 
> Given the perf_event model, I believe you have no other way but to do
> incremental scheduling of events. That is the only way to guarantee you
> maximize the use of the PMU. Regardless of that, the scheduling model
> has a bias toward smaller and less constrained event groups.


But incremental scheduling is still the purpose of the above model. I
feel confused.

