Date: Fri, 17 May 2024 12:01:02 +0200
From: James Clark <james.clark@....com>
To: Suzuki K Poulose <suzuki.poulose@....com>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
 Maxime Coquelin <mcoquelin.stm32@...il.com>,
 Alexandre Torgue <alexandre.torgue@...s.st.com>,
 Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
 Arnaldo Carvalho de Melo <acme@...nel.org>,
 Namhyung Kim <namhyung@...nel.org>, Mark Rutland <mark.rutland@....com>,
 Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>,
 Adrian Hunter <adrian.hunter@...el.com>, John Garry
 <john.g.garry@...cle.com>, Will Deacon <will@...nel.org>,
 Leo Yan <leo.yan@...ux.dev>, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, linux-stm32@...md-mailman.stormreply.com,
 linux-perf-users@...r.kernel.org, gankulkarni@...amperecomputing.com,
 scclevenger@...amperecomputing.com, coresight@...ts.linaro.org,
 mike.leach@...aro.org
Subject: Re: [PATCH 16/17] coresight: Re-emit trace IDs when the sink changes
 in per-thread mode



On 07/05/2024 13:05, Suzuki K Poulose wrote:
> On 29/04/2024 16:22, James Clark wrote:
>> In per-cpu mode there are multiple aux buffers and each one has a
>> fixed sink, so the hw ID mappings only need to be emitted once
>> for each buffer, even with the new per-sink trace ID pools.
>>
>> But in per-thread mode there is only a single buffer which can be
>> written to from any sink, now with potentially overlapping trace IDs, so
>> hw ID mappings need to be re-emitted every time the sink changes.
>>
>> This will require a change in Perf to track sink changes so that it
>> knows which decode tree to use for each segment of the buffer. In
>> theory it's also possible to look at the CPU ID on the AUX records,
>> but this approach is more consistent with the existing system and
>> allows for correct decode using either mechanism.
>>
>> Signed-off-by: James Clark <james.clark@....com>
>> ---
>>   drivers/hwtracing/coresight/coresight-etm-perf.c | 14 ++++++++++++++
>>   drivers/hwtracing/coresight/coresight-etm-perf.h |  2 ++
>>   2 files changed, 16 insertions(+)
>>
>> diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
>> index f07173aa4d66..08f3958f9367 100644
>> --- a/drivers/hwtracing/coresight/coresight-etm-perf.c
>> +++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
>> @@ -499,6 +499,20 @@ static void etm_event_start(struct perf_event *event, int flags)
>>                         &sink->perf_id_map))
>>           goto fail_disable_path;
>>
>> +    /*
>> +     * In per-cpu mode there are multiple aux buffers and each one has a
>> +     * fixed sink, so the hw ID mappings only need to be emitted once
>> +     * for each buffer.
>> +     *
>> +     * But in per-thread mode there is only a single buffer which can be
>> +     * written to from any sink with potentially overlapping trace IDs, so
>> +     * hw ID mappings need to be re-emitted every time the sink changes.
>> +     */
>> +    if (event->cpu == -1 && event_data->last_sink_hwid != sink) {
>> +        cpumask_clear(&event_data->aux_hwid_done);
>> +        event_data->last_sink_hwid = sink;
>> +    }
>> +
> 
> With the trace ID caching in the per-cpu event_data, could we avoid
> this step?
> 
> Suzuki
> 
> 

I don't think so; this is to inform the tool that the mappings have
changed if the tool doesn't want to follow switch events.

Unless I'm missing something, moving where the trace IDs are stored
doesn't mean that they will be re-sent when the mappings change?
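
To illustrate the tool side of this, here is a rough sketch of what a
decoder consuming PERF_RECORD_AUX_OUTPUT_HW_ID records would have to do
(hypothetical names throughout, not the actual perf decoder code): a
mapping that contradicts one already cached for the same trace ID means
the sink changed, so anything decoded against the old mappings no
longer applies to the following segment of the buffer.

#include <stdint.h>
#include <string.h>

/* CoreSight trace IDs are 7 bits, so 128 possible values. */
#define MAX_TRACE_IDS 128

/* Hypothetical decoder-side state: which CPU each trace ID maps to. */
struct decoder_state {
	int traceid_to_cpu[MAX_TRACE_IDS];
};

/*
 * Drop all cached mappings, i.e. start a fresh decode tree. This is
 * also how the state would be initialised before decoding starts.
 */
static void reset_mappings(struct decoder_state *s)
{
	memset(s->traceid_to_cpu, -1, sizeof(s->traceid_to_cpu));
}

/*
 * Called for each hw ID record seen while walking the event stream.
 * With the re-emit in this patch, a known trace ID that suddenly maps
 * to a different CPU implies the sink changed, so the cached mappings
 * are stale for the rest of the buffer.
 */
static void on_hw_id_record(struct decoder_state *s, uint8_t trace_id, int cpu)
{
	if (trace_id >= MAX_TRACE_IDS)
		return;

	if (s->traceid_to_cpu[trace_id] >= 0 &&
	    s->traceid_to_cpu[trace_id] != cpu)
		reset_mappings(s);

	s->traceid_to_cpu[trace_id] = cpu;
}

Without the re-emit, a decoder like this would only ever see the first
sink's mappings and would silently decode later segments with the wrong
trace IDs.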
