Date:   Fri, 5 Apr 2019 17:49:15 -0700
From:   Stephane Eranian <eranian@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Andi Kleen <ak@...ux.intel.com>,
        "Liang, Kan" <kan.liang@...el.com>, mingo@...e.hu,
        nelson.dsouza@...el.com, Jiri Olsa <jolsa@...hat.com>,
        tonyj@...e.com
Subject: Re: [PATCH 3/3] perf/x86/intel: force resched when TFA sysctl is modified

On Fri, Apr 5, 2019 at 1:26 PM Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Apr 05, 2019 at 10:00:03AM -0700, Stephane Eranian wrote:
>
> > > > +static void update_tfa_sched(void *ignored)
> > > > +{
> > > > +     struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> > > > +     struct pmu *pmu = x86_get_pmu();
> > > > +     struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
> > > > +     struct perf_event_context *task_ctx = cpuctx->task_ctx;
> > > > +
> > > > +     /* prevent any changes to the two contexts */
> > > > +     perf_ctx_lock(cpuctx, task_ctx);
> > > > +
> > > > +     /*
> > > > +      * check if PMC3 is used
> > > > +      * and if so force schedule out for all event types all contexts
> > > > +      */
> > > > +     if (test_bit(3, cpuc->active_mask))
> > > > +             perf_ctx_resched(cpuctx, task_ctx, EVENT_ALL|EVENT_CPU);
> > > > +
> > > > +     perf_ctx_unlock(cpuctx, task_ctx);
> > >
> > > I'm not particularly happy with exporting all that. Can't we make the
> > > new perf_ctx_resched() include the locking and everything? Then the
> > > above reduces to:
> > >
> > >         if (test_bit(3, cpuc->active_mask))
> > >                 perf_ctx_resched(cpuctx);
> > >
> > > And we don't get to export the tricky bits.
> > >
> > The only reason I exported the locking is to protect cpuc->active_mask.
> > But if you think there is no race, then sure, we can just export a new
> > perf_ctx_resched() that does the locking and invokes the ctx_resched()
> > function.
>
> It doesn't matter if it races: if it was used and isn't anymore, it's
> a pointless reschedule; if it isn't used and we don't reschedule, it
> cannot be used because we've already set the flag.

True. I will post V2 shortly.
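
For reference, a rough sketch of what I understand you are suggesting: a
perf_ctx_resched() kept in the core that takes the locks itself, so the x86
side only keeps the test_bit() check. The exact name, file placement and
signature below are just my guesses, not the V2 I will post:

	/* kernel/events/core.c -- sketch only */
	void perf_ctx_resched(struct perf_cpu_context *cpuctx)
	{
		struct perf_event_context *task_ctx = cpuctx->task_ctx;

		/* take both contexts' locks, then force a full reschedule */
		perf_ctx_lock(cpuctx, task_ctx);
		ctx_resched(cpuctx, task_ctx, EVENT_ALL|EVENT_CPU);
		perf_ctx_unlock(cpuctx, task_ctx);
	}

	/* arch/x86/events/intel/core.c -- the caller then reduces to */
	static void update_tfa_sched(void *ignored)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
		struct pmu *pmu = x86_get_pmu();
		struct perf_cpu_context *cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);

		/* if PMC3 is in use, force everything to reschedule */
		if (test_bit(3, cpuc->active_mask))
			perf_ctx_resched(cpuctx);
	}

That way perf_ctx_lock()/perf_ctx_unlock(), ctx_resched() and the EVENT_*
flags all stay private to kernel/events/core.c.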
