Open Source and information security mailing list archives
Date:   Fri, 5 Apr 2019 10:00:03 -0700
From:   Stephane Eranian <eranian@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Andi Kleen <ak@...ux.intel.com>,
        "Liang, Kan" <kan.liang@...el.com>, mingo@...e.hu,
        nelson.dsouza@...el.com, Jiri Olsa <jolsa@...hat.com>,
        tonyj@...e.com
Subject: Re: [PATCH 3/3] perf/x86/intel: force resched when TFA sysctl is modified

On Fri, Apr 5, 2019 at 12:13 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Thu, Apr 04, 2019 at 11:32:19AM -0700, Stephane Eranian wrote:
> > diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> > index a4b7711ef0ee..8d356c2096bc 100644
> > --- a/arch/x86/events/intel/core.c
> > +++ b/arch/x86/events/intel/core.c
> > @@ -4483,6 +4483,60 @@ static ssize_t freeze_on_smi_store(struct device *cdev,
> >       return count;
> >  }
> >
> > +static void update_tfa_sched(void *ignored)
> > +{
> > +     struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> > +     struct pmu *pmu = x86_get_pmu();
> > +     struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
> > +     struct perf_event_context *task_ctx = cpuctx->task_ctx;
> > +
> > +     /* prevent any changes to the two contexts */
> > +     perf_ctx_lock(cpuctx, task_ctx);
> > +
> > +     /*
> > +      * check if PMC3 is used
> > +      * and if so force schedule out for all event types all contexts
> > +      */
> > +     if (test_bit(3, cpuc->active_mask))
> > +             perf_ctx_resched(cpuctx, task_ctx, EVENT_ALL|EVENT_CPU);
> > +
> > +     perf_ctx_unlock(cpuctx, task_ctx);
>
> I'm not particularly happy with exporting all that. Can't we create this
> new perf_ctx_resched() to include the locking and everything. Then the
> above reduces to:
>
>         if (test_bit(3, cpuc->active_mask))
>                 perf_ctx_resched(cpuctx);
>
> And we don't get to export the tricky bits.
>
The only reason I exported the locking is to protect
cpuc->active_mask. But if you think there is no race, then sure, we
can just export a new perf_ctx_resched() that does the locking and
invokes the ctx_resched() function.
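As a non-compilable sketch (function and flag names assumed from the quoted
code above, not taken from an actual tree), the exported helper Peter is
suggesting would live next to ctx_resched() in kernel/events/core.c and look
roughly like:

```c
/*
 * Hypothetical wrapper sketched from this discussion: take both context
 * locks, force all events (CPU and task) to be rescheduled, then drop
 * the locks, so callers never touch perf_ctx_lock()/perf_ctx_unlock()
 * or the EVENT_* flags directly.
 */
void perf_ctx_resched(struct perf_cpu_context *cpuctx)
{
	struct perf_event_context *task_ctx = cpuctx->task_ctx;

	perf_ctx_lock(cpuctx, task_ctx);
	ctx_resched(cpuctx, task_ctx, EVENT_ALL|EVENT_CPU);
	perf_ctx_unlock(cpuctx, task_ctx);
}
```

With that, update_tfa_sched() reduces to the two-line test-and-resched Peter
quotes above, and none of the locking internals need to be exported to
arch/x86/events/intel/core.c.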

> > +}
> > +
> > +static ssize_t show_sysctl_tfa(struct device *cdev,
> > +                           struct device_attribute *attr,
> > +                           char *buf)
> > +{
> > +     return snprintf(buf, 40, "%d\n", allow_tsx_force_abort);
> > +}
> > +
> > +static ssize_t set_sysctl_tfa(struct device *cdev,
> > +                           struct device_attribute *attr,
> > +                           const char *buf, size_t count)
> > +{
> > +     unsigned long val;
> > +     ssize_t ret;
> > +
> > +     ret = kstrtoul(buf, 0, &val);
>
> You want kstrtobool()
>
ok.
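For reference, kstrtobool() accepts more than a literal "0"/"1". A small
userspace illustration of its semantics (a hypothetical re-implementation for
demonstration, not the kernel source) looks roughly like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MY_EINVAL 22

/*
 * Userspace sketch of kstrtobool() semantics: "1"/"y"/"Y"/"on" parse as
 * true, "0"/"n"/"N"/"off" as false, anything else is -EINVAL. Only the
 * leading character(s) are examined, as in the kernel helper.
 */
static int parse_bool(const char *s, bool *res)
{
	if (s == NULL || res == NULL)
		return -MY_EINVAL;

	switch (s[0]) {
	case 'y': case 'Y': case '1':
		*res = true;
		return 0;
	case 'n': case 'N': case '0':
		*res = false;
		return 0;
	case 'o': case 'O':
		/* distinguish "on" from "off" by the second character */
		switch (s[1]) {
		case 'n': case 'N':
			*res = true;
			return 0;
		case 'f': case 'F':
			*res = false;
			return 0;
		}
	}
	return -MY_EINVAL;
}
```

One side effect worth noting: with kstrtobool() the explicit range check on
val in the patch becomes unnecessary, since any unrecognized input already
fails with -EINVAL.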

> > +     if (ret)
> > +             return ret;
> > +
> > +     /* looking for boolean value */
> > +     if (val > 2)
> > +             return -EINVAL;
> > +
> > +     /* no change */
> > +     if (val == allow_tsx_force_abort)
> > +             return count;
> > +
> > +     allow_tsx_force_abort ^= 1;
>
>         allow_tsx_force_abort = val;
>
> is simpler
>
ok.

> > +
> > +     on_each_cpu(update_tfa_sched, NULL, 1);
> > +
> > +     return count;
> > +}
