Message-ID: <7c86c4470912110359i5a4416c2t9075eaa47d25865a@mail.gmail.com>
Date:	Fri, 11 Dec 2009 12:59:16 +0100
From:	stephane eranian <eranian@...glemail.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
	perfmon2-devel@...ts.sf.net,
	"David S. Miller" <davem@...emloft.net>, eranian@...gle.com
Subject: Re: [PATCH] perf_events: improve Intel event scheduling

>
>>> @@ -1120,9 +1136,15 @@ static void amd_pmu_disable_all(void)
>>>
>>>  void hw_perf_disable(void)
>>>  {
>>> +     struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
>>> +
>>>       if (!x86_pmu_initialized())
>>>               return;
>>> -     return x86_pmu.disable_all();
>>> +
>>> +     if (cpuc->enabled)
>>> +             cpuc->n_events = 0;
>>> +
>>> +     x86_pmu.disable_all();
>>>  }
>>
>> Right, so I cannot directly see the above being correct. You fully erase
>> the PMU state when we disable it, but things like single counter
>> add/remove doesn't reprogram the full PMU afaict.
>>

>>> +int hw_perf_group_sched_in(struct perf_event *leader,
>>> +            struct perf_cpu_context *cpuctx,
>>> +            struct perf_event_context *ctx, int cpu)
>>> +{
>>> +     struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
>>> +     int n, ret;
>>> +
>>> +     n = collect_events(cpuhw, leader);
>>> +     if (n < 0)
>>> +             return n;
>>> +
>>> +     ret = schedule_events(cpuhw, n, true);
>>> +     if (ret)
>>> +             return ret;
>>> +
>>> +     /* 0 means successful and enable each event in caller */
>>> +     return 0;
>>> +}
>>
>> This is where powerpc does n_added += n, and it delays the
>> schedule_events() bit to hw_perf_enable() conditional on n_added > 0.
>> When you remove events it simply keeps the current layout and disables
>> the one.
>>
There is a major difference between PPC and x86 here. PPC has a centralized
register to control start/stop. This register uses a bitmask to enable or
disable counters. Thus, in hw_perf_enable(), if n_added == 0, you just reuse
the pre-computed bitmask; otherwise you need to recompute the bitmask to
include the newly assigned counters. The assignment of events to counters and
their validation are done in hw_perf_group_sched_in().
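
To make that concrete, here is a minimal userspace sketch of the PPC-style
flow just described. All names (cpu_pmu_state, ctrl_mask, fake_ctrl_reg, ...)
are hypothetical illustration, not the actual arch/powerpc code:

#include <stdio.h>

static unsigned long fake_ctrl_reg;	/* stands in for the global control register */

struct cpu_pmu_state {
	unsigned long ctrl_mask;	/* cached enable bitmask */
	int n_events;			/* events currently assigned to counters */
	int n_added;			/* events added since the last enable */
};

/* Pretend assignment: give event i counter i and set its enable bit. */
static unsigned long compute_ctrl_mask(struct cpu_pmu_state *s)
{
	unsigned long mask = 0;
	int i;

	for (i = 0; i < s->n_events; i++)
		mask |= 1UL << i;
	return mask;
}

static void sketch_hw_perf_enable(struct cpu_pmu_state *s)
{
	if (s->n_added) {
		/* new events were collected: recompute the assignment
		   and the bitmask before switching the PMU back on */
		s->ctrl_mask = compute_ctrl_mask(s);
		s->n_added = 0;
	}
	/* a single register write (re)enables every assigned counter */
	fake_ctrl_reg = s->ctrl_mask;
	printf("ctrl = %#lx\n", fake_ctrl_reg);
}

int main(void)
{
	struct cpu_pmu_state s = { .n_events = 2, .n_added = 2 };

	sketch_hw_perf_enable(&s);	/* recompute path */
	sketch_hw_perf_enable(&s);	/* n_added == 0: reuse cached mask */
	return 0;
}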

On x86, assignment and validation are likewise done in
hw_perf_group_sched_in(), but activation is done individually for each
counter. There is no centralized control register here, and thus no bitmask
to update.

Disabling a counter does not trigger a complete reschedule of events; that
happens only when hw_perf_group_sched_in() is called.
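
By contrast, an x86-style sketch (again with made-up names; the enable bit
below is modeled on bit 22 of IA32_PERFEVTSELx) writes each assigned counter's
control register individually, with no global mask to maintain:

#include <stdio.h>

#define FAKE_NUM_COUNTERS	4
#define FAKE_EVNTSEL_ENABLE	(1UL << 22)	/* modeled after the EN bit */

struct fake_event {
	unsigned long config;	/* event selector value */
	int counter;		/* counter assigned by the scheduler */
};

static unsigned long fake_evntsel[FAKE_NUM_COUNTERS];	/* per-counter controls */

static void sketch_x86_enable_all(struct fake_event *events, int n_events)
{
	int i;

	/* one write per counter; nothing centralized to update */
	for (i = 0; i < n_events; i++) {
		fake_evntsel[events[i].counter] =
			events[i].config | FAKE_EVNTSEL_ENABLE;
		printf("counter %d: evntsel = %#lx\n",
		       events[i].counter, fake_evntsel[events[i].counter]);
	}
}

int main(void)
{
	struct fake_event ev[2] = { { 0x3c, 0 }, { 0xc0, 1 } };

	sketch_x86_enable_all(ev, 2);
	return 0;
}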

The n_events = 0 in hw_perf_disable() is used to signal that something is
changing; admittedly, hw_perf_disable() is not the right place for it. The
problem is that hw_perf_group_sched_in() needs a way to know that it is being
called for a completely new series of group scheduling, so it can discard any
previous assignment. This goes back to the issue I raised in my previous
email. You could add a parameter to hw_perf_group_sched_in() indicating that
this is the first group; that would reset n_events to 0, and the function
would start accumulating events for the new scheduling period.
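
Roughly, that suggestion would look like the following (a hypothetical
signature, not a tested patch; it just reuses the names from the hunk quoted
above and adds a first_group flag):

int hw_perf_group_sched_in(struct perf_event *leader,
	       struct perf_cpu_context *cpuctx,
	       struct perf_event_context *ctx, int cpu,
	       bool first_group)
{
	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
	int n, ret;

	/* first group of a new scheduling period: discard the previous
	   assignment here instead of doing it from hw_perf_disable() */
	if (first_group)
		cpuhw->n_events = 0;

	n = collect_events(cpuhw, leader);
	if (n < 0)
		return n;

	ret = schedule_events(cpuhw, n, true);
	if (ret)
		return ret;

	/* 0 means successful and enable each event in caller */
	return 0;
}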
