lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <CAL_JsqKqKKb8uXSxQKT4ZMqMv8dt3ABpP+T8x+A1-zb2RKjCNA@mail.gmail.com>
Date:   Tue, 30 Mar 2021 16:08:11 -0500
From:   Rob Herring <robh@...nel.org>
To:     Will Deacon <will@...nel.org>
Cc:     Catalin Marinas <catalin.marinas@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Jiri Olsa <jolsa@...hat.com>,
        Mark Rutland <mark.rutland@....com>,
        Ian Rogers <irogers@...gle.com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Honnappa Nagarahalli <honnappa.nagarahalli@....com>,
        Zachary.Leaf@....com, Raphael Gault <raphael.gault@....com>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Itaru Kitayama <itaru.kitayama@...il.com>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 02/10] arm64: perf: Enable PMU counter direct access
 for perf event

On Tue, Mar 30, 2021 at 12:09 PM Rob Herring <robh@...nel.org> wrote:
>
> On Tue, Mar 30, 2021 at 10:31 AM Will Deacon <will@...nel.org> wrote:
> >
> > On Wed, Mar 10, 2021 at 05:08:29PM -0700, Rob Herring wrote:
> > > From: Raphael Gault <raphael.gault@....com>
> > >
> > > Keep track of events opened with direct access to the hardware counters
> > > and modify permissions while they are open.
> > >
> > > The strategy used here is the same which x86 uses: every time an event
> > > is mapped, the permissions are set if required. The atomic field added
> > > in the mm_context helps keep track of the different events opened and
> > > de-activates the permissions when all are unmapped.
> > > We also need to update the permissions in the context switch code so
> > > that tasks keep the right permissions.
> > >
> > > In order to enable 64-bit counters for userspace when available, a new
> > > config1 bit is added for userspace to indicate it wants userspace counter
> > > access. This bit allows the kernel to decide if chaining should be
> > > disabled, as chaining and userspace access are incompatible.
> > > The modes for config1 are as follows:
> > >
> > > config1 = 0 or 2 : user access enabled and always 32-bit
> > > config1 = 1 : user access disabled and always 64-bit (using chaining if needed)
> > > config1 = 3 : user access enabled and counter size matches the underlying hardware counter.

[...]

> > > @@ -980,9 +1032,23 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
> > >                                      &armv8_pmuv3_perf_cache_map,
> > >                                      ARMV8_PMU_EVTYPE_EVENT);
> > >
> > > -     if (armv8pmu_event_is_64bit(event))
> > > +     if (armv8pmu_event_want_user_access(event) || !armv8pmu_event_is_64bit(event)) {
> > > +             event->hw.flags |= ARMPMU_EL0_RD_CNTR;
> >
> > Why do you set this for all 32-bit events?
>
> It goes back to the config1 bits as explained in the commit msg. We
> can always support user access for 32-bit counters, but for 64-bit
> counters the user has to request both user access and 64-bit counters.
> We could require explicit user access request for 32-bit access, but I
> thought it was better to not require userspace to do something Arm
> specific on open.
>
> > The logic here feels like it
> > could do with a bit of untangling.
>
> Yes, I don't love it, but couldn't come up with anything better. It is
> complicated by the fact that flags have to be set before we assign the
> counter and can't set/change them when we assign the counter. It would
> take a lot of refactoring with armpmu code to fix that.

How's this instead?

if (armv8pmu_event_want_user_access(event) || !armv8pmu_event_is_64bit(event))
        event->hw.flags |= ARMPMU_EL0_RD_CNTR;

/*
 * At this point, the counter is not assigned. If a 64-bit counter is
 * requested, we must make sure the h/w has 64-bit counters if we set
 * the event size to 64-bit because chaining is not supported with
 * userspace access. This may still fail later on if the CPU cycle
 * counter is in use.
 */
if (armv8pmu_event_is_64bit(event) &&
    (!armv8pmu_event_want_user_access(event) ||
     armv8pmu_has_long_event(cpu_pmu) ||
     (hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES)))
        event->hw.flags |= ARMPMU_EVT_64BIT;

> > > +             /*
> > > +              * At this point, the counter is not assigned. If a 64-bit
> > > +              * counter is requested, we must make sure the h/w has 64-bit
> > > +              * counters if we set the event size to 64-bit because chaining
> > > +              * is not supported with userspace access. This may still fail
> > > +              * later on if the CPU cycle counter is in use.
> > > +              */
> > > +             if (armv8pmu_event_is_64bit(event) &&
> > > +                 (armv8pmu_has_long_event(armpmu) ||
> > > +                  hw_event_id == ARMV8_PMUV3_PERFCTR_CPU_CYCLES))
> > > +                     event->hw.flags |= ARMPMU_EVT_64BIT;
> > > +     } else if (armv8pmu_event_is_64bit(event))
> > >               event->hw.flags |= ARMPMU_EVT_64BIT;
