Message-ID: <20181012095001.GG9867@hirez.programming.kicks-ass.net>
Date:   Fri, 12 Oct 2018 11:50:01 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Song Liu <songliubraving@...com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        lkml <linux-kernel@...r.kernel.org>,
        "acme@...nel.org" <acme@...nel.org>,
        "alexander.shishkin@...ux.intel.com" 
        <alexander.shishkin@...ux.intel.com>,
        "jolsa@...hat.com" <jolsa@...hat.com>,
        "eranian@...gle.com" <eranian@...gle.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "alexey.budankov@...ux.intel.com" <alexey.budankov@...ux.intel.com>,
        "mark.rutland@....com" <mark.rutland@....com>,
        "megha.dey@...el.com" <megha.dey@...el.com>,
        "frederic@...nel.org" <frederic@...nel.org>
Subject: Re: [RFC][PATCH] perf: Rewrite core context handling


Can we please not top-post?

On Thu, Oct 11, 2018 at 10:37:14PM +0000, Song Liu wrote:
> Thanks Peter! These are really really helpful. 
> 
> I am trying to think through the case of a group of two events on two 
> separate hardware PMUs. In current implementation, this will not trigger
> move_group,

Right, currently this is disallowed (or should be, I'll need to double
check the code).

> so they will not be able to rotate together? And actually, 
> they may not be able to run at all? Maybe this case is never supported? 

Indeed, we do not allow mixing events of different PMUs, with the
explicit exception of software events. Since software events must always
schedule, they're allowed to be fitted into any group.
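
For illustration, here is a minimal userspace sketch of that rule: a
hardware leader with a software sibling in one group. The event choices
(cycles, page faults) are arbitrary examples, not anything prescribed by
the thread:

```c
#include <linux/perf_event.h>
#include <string.h>
#include <assert.h>

/* Build attrs for a mixed group: a hardware leader plus a software
 * sibling.  Because software events always schedule, the core allows
 * them to be fitted into any group; two events from two *different
 * hardware* PMUs in one group are rejected instead. */
void make_mixed_group_attrs(struct perf_event_attr *hw,
                            struct perf_event_attr *sw)
{
	memset(hw, 0, sizeof(*hw));
	hw->type = PERF_TYPE_HARDWARE;
	hw->size = sizeof(*hw);
	hw->config = PERF_COUNT_HW_CPU_CYCLES;
	hw->disabled = 1;		/* leader starts disabled */

	memset(sw, 0, sizeof(*sw));
	sw->type = PERF_TYPE_SOFTWARE;	/* software PMU: always schedulable */
	sw->size = sizeof(*sw);
	sw->config = PERF_COUNT_SW_PAGE_FAULTS;
}
```

Opening the leader via perf_event_open() with group_fd = -1 and then the
sibling with group_fd set to the leader's fd puts them in one group;
swapping the software sibling for a second hardware PMU's event would
make the sibling's perf_event_open() fail instead.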

> On the other hand, would something like this work:
> 
>     perf_cpu_context <-[1:2]-> perf_event_context <-[1:n]-> perf_event 
>               |                                                |
>               `----[1:n]---->     pmu    <----- [1:n]----------' 
> 
> 1. Every cpu has only one perf_cpu_context. No perf_cpu_pmu_context. 

The perf_event_pmu_context is currently needed to efficiently track
which events are active. And to determine if rotation is needed at all.

And the perf_cpu_pmu_context is needed because the rotation interval is
per PMU in the ABI.
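
A rough sketch of that split, using the names from the RFC but with the
fields simplified and partly assumed (this is not the patch's actual
layout):

```c
#include <stdint.h>
#include <assert.h>

struct pmu;			/* opaque in this sketch */

/* Per-(context, PMU) bookkeeping: lets the core see at a glance how
 * many events of a given PMU exist, which of them are active, and
 * whether rotation is needed at all. */
struct perf_event_pmu_context {
	struct pmu *pmu;
	int nr_events;
	int nr_active;
	int rotate_necessary;
};

/* Per-(CPU, PMU) state: rotation stays per PMU because the mux
 * interval (perf_event_mux_interval_ms) is per-PMU ABI in sysfs. */
struct perf_cpu_pmu_context {
	struct perf_event_pmu_context epc;
	uint64_t mux_interval_ms;	/* rotation period for this PMU */
};
```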

> 2. perf_cpu_context has two perf_event_context, one for the cpu, the 
>    other for the task. 

That doesn't work (or I'm not understanding): tasks come and go on CPUs,
so at best the perf_cpu_context can hold a reference to the currently
active task's context. But it already had that, and it still does; see
perf_cpu_context::task_ctx.

> 3. Each perf_event_context has 3 perf_event_groups, pinned_groups, 
>    flexible_groups, and software_groups (for sw event only groups). 

So I'm thinking you want to split off the software groups because they
don't need rotation?

While doing this patch I noticed that we need to ignore attr.exclusive
for software events. Not sure whether that was intentional, but it is
certainly inconsistent.

> 4. All flexible_groups of the same cpu rotate at the same time. If 
>    there are two hardware PMUs on the cpu, the rotation will look 
>    like: 1) stop both PMUs; 2) rotate events; 3) start both PMUs. 

The ABI currently precludes that: we have per-PMU rotation intervals
exposed in sysfs.

> I feel this will make the implementation simpler. Is it too broken in 
> some cases? Or did I miss anything obvious? One thing I noticed is 
> that we need to drop per PMU config perf_event_mux_interval_ms. 

Right, that. People added it for a reason (although it eludes me atm);
I don't think we can drop it easily.
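
For reference, that per-PMU interval lives under the standard
event_source location in sysfs; a small shell loop prints it for each
PMU on a running system (it simply prints nothing where the file is
absent):

```shell
# Per-PMU rotation (mux) interval exposed by the ABI; writable, which
# is what ties the rotation period to the individual PMU.
for pmu in /sys/bus/event_source/devices/*; do
	f="$pmu/perf_event_mux_interval_ms"
	if [ -r "$f" ]; then
		printf '%s: %s ms\n' "${pmu##*/}" "$(cat "$f")"
	fi
done
```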
