Message-ID: <bd4cb8901001291508w421898c2u8e8e7de32b030e67@mail.gmail.com>
Date:	Sat, 30 Jan 2010 00:08:12 +0100
From:	Stephane Eranian <eranian@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	eranian@...il.com, linux-kernel@...r.kernel.org, mingo@...e.hu,
	paulus@...ba.org, davem@...emloft.net, fweisbec@...il.com,
	perfmon2-devel@...ts.sf.net
Subject: Re: [PATCH] perf_events: improve x86 event scheduling (v6 
	incremental)

I think there is a problem with this following code:

void hw_perf_enable(void)
{
                ...
                for (i = 0; i < cpuc->n_events; i++) {

                        event = cpuc->event_list[i];
                        hwc = &event->hw;

                        if (hwc->idx == -1 || hwc->idx == cpuc->assign[i])
                                continue;
Here you are looking for events which are moving. I think the second
part of the if is not strong enough: the fact that hwc->idx matches
the new assignment does not mean the event is still programmed on
that counter. The event may have been there in the past, then
scheduled out and replaced at that idx by another event. When it
comes back, it gets its spot back, but the counter still needs to be
reprogrammed.

That is why, in the v6 incremental patch, I added last_cpu and
last_tag: they enable a stronger check via match_prev_assignment().

Somehow this is missing from the series you have committed, unless
I am overlooking something.


On Mon, Jan 25, 2010 at 6:59 PM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, 2010-01-25 at 18:48 +0100, stephane eranian wrote:
>
>> >> It seems a solution would be to call x86_pmu_disable() before
>> >> assigning an event to a new counter for all events which are
>> >> moving. This is because we cannot assume all events have been
>> >> previously disabled individually. Something like
>> >>
>> >> if (!match_prev_assignment(hwc, cpuc, i)) {
>> >>    if (hwc->idx != -1)
>> >>       x86_pmu.disable(hwc, hwc->idx);
>> >>    x86_assign_hw_event(event, cpuc, cpuc->assign[i]);
>> >>    x86_perf_event_set_period(event, hwc, hwc->idx);
>> >> }
