Message-ID: <CABPqkBSH5UBaV7+0JKgr4YmEke8Tu4Hry9GAFYT5C_gsncqf3A@mail.gmail.com>
Date:   Tue, 19 Mar 2019 16:55:16 -0700
From:   Stephane Eranian <eranian@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Ingo Molnar <mingo@...nel.org>, Jiri Olsa <jolsa@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, tonyj@...e.com,
        nelson.dsouza@...el.com
Subject: Re: [RFC][PATCH 7/8] perf/x86: Optimize x86_schedule_events()

On Thu, Mar 14, 2019 at 6:11 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> Now that cpuc->event_constraint[] is retained, we can avoid calling
> get_event_constraints() over and over again.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  arch/x86/events/core.c       |   25 +++++++++++++++++++++----
>  arch/x86/events/intel/core.c |    3 ++-
>  2 files changed, 23 insertions(+), 5 deletions(-)
>
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -844,6 +844,12 @@ int perf_assign_events(struct event_cons
>  }
>  EXPORT_SYMBOL_GPL(perf_assign_events);
>
> +static inline bool is_ht_workaround_active(struct cpu_hw_events *cpuc)
> +{
> +       return is_ht_workaround_enabled() && !cpuc->is_fake &&
> +              READ_ONCE(cpuc->excl_cntrs->exclusive_present);
> +}
> +
>  int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
>  {
>         struct event_constraint *c;
> @@ -858,8 +864,20 @@ int x86_schedule_events(struct cpu_hw_ev
>                 x86_pmu.start_scheduling(cpuc);
>
>         for (i = 0, wmin = X86_PMC_IDX_MAX, wmax = 0; i < n; i++) {
> -               c = x86_pmu.get_event_constraints(cpuc, i, cpuc->event_list[i]);
> -               cpuc->event_constraint[i] = c;
> +               c = cpuc->event_constraint[i];
> +
> +               /*
> +                * Request constraints for new events; or for those events that
> +                * have a dynamic constraint due to the HT workaround -- for
> +                * those the constraint can change due to scheduling activity
> +                * on the other sibling.
> +                */
> +               if (!c || ((c->flags & PERF_X86_EVENT_DYNAMIC) &&
> +                          is_ht_workaround_active(cpuc))) {
> +
> +                       c = x86_pmu.get_event_constraints(cpuc, i, cpuc->event_list[i]);
> +                       cpuc->event_constraint[i] = c;
> +               }
On this one, I think there may be a problem with events with shared_regs
constraints. The constraint is dynamic, since it depends on the other events
sharing the same MSR, yet it is not marked as DYNAMIC. This may be okay
because those other events are all on the same CPU and are therefore
scheduled during the same ctx_sched_in(). But with the swapping in
intel_alt_er(), we need to double-check that we cannot reuse a constraint
which could be stale. I believe this is okay, but please double-check.
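
To make the concern concrete, here is a rough, untested sketch (not a
proposed change) of what a more defensive variant of the hunk above could
look like: it also refreshes the constraint for events that use an extra
(shared) register, so a constraint computed before an intel_alt_er() swap
is never reused. Detecting shared_regs users via hw.extra_reg.idx !=
EXTRA_REG_NONE is my assumption here; the rest follows the patch:

        if (!c ||
            cpuc->event_list[i]->hw.extra_reg.idx != EXTRA_REG_NONE ||
            ((c->flags & PERF_X86_EVENT_DYNAMIC) &&
             is_ht_workaround_active(cpuc))) {
                /*
                 * Re-request constraints for:
                 *  - new events (no cached constraint yet);
                 *  - shared_regs users (extra_reg in use), whose constraint
                 *    depends on what the other events on this CPU have
                 *    programmed into the shared MSR, and on intel_alt_er()
                 *    swaps;
                 *  - HT-workaround dynamic constraints, as in the patch.
                 */
                c = x86_pmu.get_event_constraints(cpuc, i, cpuc->event_list[i]);
                cpuc->event_constraint[i] = c;
        }

If the current code already guarantees that all shared_regs users on a CPU
are rescheduled together, the extra check is redundant, which is what I'd
like confirmed.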

>
>                 wmin = min(wmin, c->weight);
>                 wmax = max(wmax, c->weight);
> @@ -903,8 +921,7 @@ int x86_schedule_events(struct cpu_hw_ev
>                  * N/2 counters can be used. This helps with events with
>                  * specific counter constraints.
>                  */
> -               if (is_ht_workaround_enabled() && !cpuc->is_fake &&
> -                   READ_ONCE(cpuc->excl_cntrs->exclusive_present))
> +               if (is_ht_workaround_active(cpuc))
>                         gpmax /= 2;
>
>                 unsched = perf_assign_events(cpuc->event_constraint, n, wmin,
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -2945,7 +2945,8 @@ intel_get_event_constraints(struct cpu_h
>          * - dynamic constraint: handled by intel_get_excl_constraints()
>          */
>         c2 = __intel_get_event_constraints(cpuc, idx, event);
> -       if (c1 && (c1->flags & PERF_X86_EVENT_DYNAMIC)) {
> +       if (c1) {
> +               WARN_ON_ONCE(!(c1->flags & PERF_X86_EVENT_DYNAMIC));
>                 bitmap_copy(c1->idxmsk, c2->idxmsk, X86_PMC_IDX_MAX);
>                 c1->weight = c2->weight;
>                 c2 = c1;
>
>
