lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <bd4cb8901001210344n1aea2f78l62848f55ea462e84@mail.gmail.com>
Date:	Thu, 21 Jan 2010 12:44:03 +0100
From:	Stephane Eranian <eranian@...gle.com>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
	davem@...emloft.net, perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH] perf_events: improve x86 event scheduling (v5)

On Thu, Jan 21, 2010 at 11:45 AM, Frederic Weisbecker
<fweisbec@...il.com> wrote:
> On Thu, Jan 21, 2010 at 11:08:12AM +0100, Stephane Eranian wrote:
>> >> > Do you mean this:
>> >> >
>> >> > hw_perf_group_sched_in_begin(&x86_pmu);
>> >> >
>> >> > for_each_event(event, group) {
>> >> >         event->enable();        //do the collection here
>> >> > }
>> >> >
>> >> >
>> >> > if (hw_perf_group_sched_in_end(&x86_pmu)) {
>> >> >         rollback...
>> >> > }
>> >> >
>> >> > That requires to know in advance if we have hardware pmu
>> >> > in the list though (can be a flag in the group).
>> >>
>>
>> I don't think this model can work without scheduling for each event.
>>
>> Imagine the situation where you have more events than you have
>> counters. At each tick you:
>>    - disable all events
>>    - rotate the list
>>    - collect events from the list
>>    - schedule events
>>    - activate
>>
>> Collection is the accumulation of events until you have as many as you
>> have counters
>> given you defer scheduling until the end (see loop above).
>>
>> But that does not mean you can schedule what you have accumulated. And then what
>> do you do, i.e., rollback to what?
>
>
>
> If the scheduling validation fails, then you just need to rollback
> the whole group.
>
> That's sensibly what you did in your patch, right? Except the loop
> is now handled by the core code.
>
>
Ok, I think I missed where you were actually placing that loop.
So you want to do this in group_sched_in(), right?

>
> I don't understand why that can't be done with the above model.
> In your patch we iterate through the whole group, collect events,
> and schedule them.
>
> With the above, the collection is just done on enable(), and the scheduling
> is done with the new pmu callbacks.
>
> The thing is sensibly the same, where is the obstacle?
>
There is none. You've just hoisted the some of the code from
hw_perf_group_sched_in().
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ