Message-ID: <d3c82708-dd09-80e0-4e9f-1cbab118a169@amd.com>
Date:   Thu, 5 Mar 2020 16:06:56 -0600
From:   Kim Phillips <kim.phillips@....com>
To:     Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Cc:     Stephane Eranian <eranian@...gle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linuxppc-dev@...ts.ozlabs.org, LKML <linux-kernel@...r.kernel.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Paul Mackerras <paulus@...ba.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Andi Kleen <ak@...ux.intel.com>,
        "Liang, Kan" <kan.liang@...ux.intel.com>,
        Alexey Budankov <alexey.budankov@...ux.intel.com>,
        yao.jin@...ux.intel.com, Robert Richter <robert.richter@....com>,
        maddy@...ux.ibm.com
Subject: Re: [RFC 00/11] perf: Enhancing perf to export processor hazard
 information

On 3/4/20 10:46 PM, Ravi Bangoria wrote:
> Hi Kim,

Hi Ravi,

> On 3/3/20 3:55 AM, Kim Phillips wrote:
>> On 3/2/20 2:21 PM, Stephane Eranian wrote:
>>> On Mon, Mar 2, 2020 at 2:13 AM Peter Zijlstra <peterz@...radead.org> wrote:
>>>>
>>>> On Mon, Mar 02, 2020 at 10:53:44AM +0530, Ravi Bangoria wrote:
>>>>> Modern processors export such hazard data in Performance
>>>>> Monitoring Unit (PMU) registers. For example, the 'Sampled
>>>>> Instruction Event Register' on IBM PowerPC[1][2] and
>>>>> 'Instruction-Based Sampling' on AMD[3] provide similar information.
>>>>>
>>>>> Implementation detail:
>>>>>
>>>>> A new sample_type called PERF_SAMPLE_PIPELINE_HAZ is introduced.
>>>>> If it's set, the kernel converts arch-specific hazard information
>>>>> into a generic format:
>>>>>
>>>>>    struct perf_pipeline_haz_data {
>>>>>           /* Instruction/Opcode type: Load, Store, Branch .... */
>>>>>           __u8    itype;
>>>>>           /* Instruction Cache source */
>>>>>           __u8    icache;
>>>>>           /* Instruction suffered hazard in pipeline stage */
>>>>>           __u8    hazard_stage;
>>>>>           /* Hazard reason */
>>>>>           __u8    hazard_reason;
>>>>>           /* Instruction suffered stall in pipeline stage */
>>>>>           __u8    stall_stage;
>>>>>           /* Stall reason */
>>>>>           __u8    stall_reason;
>>>>>           __u16   pad;
>>>>>    };
>>>>
>>>> Kim, does this format indeed work for AMD IBS?
>>
>> It's not really 1:1: we don't have these separations of stages
>> and reasons; we have, for example, 'missed in L2 cache'.
>> So IBS output is flatter, with more cycle latency figures than
>> IBM's, AFAICT.
> 
> AMD IBS captures pipeline latency data in the case of Fetch sampling,
> like the fetch latency, tag-to-retire latency, completion-to-retire
> latency and so on. Yes, Ops sampling does provide more load/store-centric
> information, but it also captures more detailed data for branch
> instructions. And we also looked at ARM SPE, which likewise captures
> more detailed pipeline and latency information.
> 
>>> Personally, I don't like the term hazard. This is too IBM Power
>>> specific. We need to find a better term, maybe stall or penalty.
>>
>> Right, IBS doesn't have a filter to only count stalled or otherwise
>> bad events.  IBS' PPR description has one occurrence of the
>> word 'stall', and none of 'penalty'.  The way I read IBS is it's just
>> reporting more sample data than just the precise IP: things like
>> hits, misses, cycle latencies, addresses, types, etc., so words
>> like 'extended', or the 'auxiliary' already used today,
>> are more appropriate for IBS, although I'm the last person to
>> bikeshed.
> 
> We are thinking of using the word "pipeline" instead of "hazard".

Hm, the word 'pipeline' occurs 0 times in IBS documentation.

I realize there are a couple of core pipeline-specific pieces
of information coming out of it, but the vast majority
are addresses, latencies of various components in the memory
hierarchy, and various component hit/miss bits.

What's needed here is vendor-specific extended sample
information that all these technologies gather, of which
things like, e.g., 'L1 TLB cycle latency' we should all
have in common.
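
To illustrate that, here is a purely hypothetical sketch of such a
common subset; the struct and field names are invented for this mail
and are not taken from any patch or spec:

    /* Hypothetical illustration only: the handful of fields that IBS,
     * SPE and the Power SIER could all plausibly fill in, with
     * everything else left to a vendor-specific raw blob that is
     * decoded by that vendor's tooling.
     */
    #include <linux/types.h>

    struct vendor_extended_sample {
            __u64   data_addr;        /* data address, if any */
            __u32   l1_tlb_latency;   /* e.g. 'L1 TLB cycle latency' */
            __u32   load_latency;     /* load-to-use latency, cycles */
            __u8    l1d_hit;          /* component hit/miss bits */
            __u8    l2_hit;
            __u16   raw_size;         /* size of the vendor blob below */
            __u8    raw[];            /* opaque vendor-specific data */
    };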

I'm not sure why a new PERF_SAMPLE_PIPELINE_HAZ is needed
either.  Can we use PERF_SAMPLE_AUX instead?  Take a look at
commit 98dcf14d7f9c "perf tools: Add kernel AUX area sampling
definitions".  The sample identifier can be used to determine
which vendor's sampling IP's data is in it, and events can
be recorded just by copying the content of the SIER, etc.
registers, and then events get synthesized from the aux
sample at report/inject/annotate etc. time.  This allows
for less sample recording overhead, and leaves all the vendor-specific
decoding and common event conversions for userspace to
figure out.
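
For reference, a minimal userspace sketch of what that route could
look like, assuming a kernel and headers new enough to provide
PERF_SAMPLE_AUX and perf_event_attr.aux_sample_size; the type/config
values are placeholders, and a real user would point them at the
vendor's sampling PMU and group the event with an AUX-area leader as
required:

    /* Hedged sketch only: request a chunk of raw AUX data with each
     * sample instead of a new PERF_SAMPLE_PIPELINE_HAZ format.
     */
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                               int cpu, int group_fd, unsigned long flags)
    {
            return syscall(__NR_perf_event_open, attr, pid, cpu,
                           group_fd, flags);
    }

    int main(void)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size            = sizeof(attr);
            attr.type            = PERF_TYPE_RAW;   /* placeholder */
            attr.config          = 0;               /* placeholder */
            attr.sample_period   = 100000;
            attr.sample_type     = PERF_SAMPLE_IP | PERF_SAMPLE_TID |
                                   PERF_SAMPLE_AUX; /* raw vendor bytes */
            attr.aux_sample_size = 128;             /* bytes per sample */
            attr.disabled        = 1;

            int fd = perf_event_open(&attr, 0 /* self */, -1,
                                     -1 /* group leader as needed */, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }
            /* The raw vendor bytes (SIER, IBS registers, ...) would then
             * be decoded from the PERF_SAMPLE_AUX payload at report,
             * inject or annotate time, not in the kernel. */
            close(fd);
            return 0;
    }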

>>> Also worth considering is the support of ARM SPE (Statistical
>>> Profiling Extension) which is their version of IBS.
>>> Whatever gets added need to cover all three with no limitations.
>>
>> I thought Intel's various LBR, PEBS, and PT supported providing
>> similar sample data in perf already, like with perf mem/c2c?
> 
> perf-mem is more data centric in my opinion; it is geared more towards
> memory profiling. So the proposal here is to expose pipeline-related
> details like stalls and latencies.

Like I said, I don't see it that way; I see it as "any particular
vendor's event's extended details", and these pipeline details
overlap with existing infrastructure within perf, e.g., L2
cache misses.

Kim
