Message-ID: <6c5760aa-76aa-3a56-9a12-9f331b254173@arm.com>
Date: Wed, 10 Feb 2021 12:16:58 +0200
From: James Clark <james.clark@....com>
To: Leo Yan <leo.yan@...aro.org>
Cc: linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
John Garry <john.garry@...wei.com>,
Will Deacon <will@...nel.org>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
Al Grant <al.grant@....com>,
Andre Przywara <andre.przywara@....com>,
Wei Li <liwei391@...wei.com>,
Tan Xiaojun <tanxiaojun@...wei.com>,
Adrian Hunter <adrian.hunter@...el.com>
Subject: Re: [PATCH 8/8] perf arm-spe: Set thread TID
On 09/02/2021 17:36, James Clark wrote:
>
>
> On 04/02/2021 12:27, Leo Yan wrote:
>> On Mon, Feb 01, 2021 at 07:40:45PM +0200, James Clark wrote:
>>>
>>> On 31/01/2021 14:01, Leo Yan wrote:
>>>> Option 1: by merging patches 07/08 and 08/08, we can first support PID
>>>> tracing for the root namespace, and later extend it to support PID
>>>> tracing in containers (and in VMs).
>>>>
>> Arm SPE has a problem with step 2: because the trace uses a statistical
>> (sampling) approach, it does not record every branch instruction, so it
>> cannot be guaranteed to capture all branches through the symbol
>> "__switch_to".  If we rely only on the events PERF_RECORD_SWITCH /
>> PERF_RECORD_SWITCH_CPU_WIDE, the result for PID tracing will be
>> coarse.
>>
>> For this reason, it seems pragmatic to me to use CONTEXTIDR for PID
>> tracing at the current stage; at least it lets root-namespace tracing
>> work accurately.  But this leaves open the issue of tracing PIDs in
>> non-root namespaces, for which we need to figure out a solution later.
>>
>> Hi Mark.R, Al, do you have any comments for this?
>
> Hi Leo,
>
> I spoke with Al and his suggestion is to clear the PID value if the event
> was opened outside of the root namespace.
>
> I think that's not a bad idea, as it gets us PIDs in most cases but also
> doesn't show any incorrect data. Do you know if it's possible to determine
> that from a perf.data file? Unfortunately it doesn't seem to be possible
> to disable CONTEXTIDR tracing when opening the event, as it's a
> compile-time option and can't be disabled dynamically.
>
> James
>
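For reference, the compile-time behaviour mentioned above comes from the
arm64 context-switch hook; roughly (a sketch from memory of
arch/arm64/include/asm/mmu_context.h, gated on CONFIG_PID_IN_CONTEXTIDR):

```c
static inline void contextidr_thread_switch(struct task_struct *next)
{
	if (!IS_ENABLED(CONFIG_PID_IN_CONTEXTIDR))
		return;

	/* task_pid_nr() is always the init-namespace PID, which is why
	 * CONTEXTIDR values don't match PIDs seen inside a container. */
	write_sysreg(task_pid_nr(next), contextidr_el1);
	isb();
}
```

So with the config option enabled there is no per-event knob to stop the
register being written.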
I've had a think about it, and I believe we should do one of two things:
#1) Remove the PID setting from the data source patchset. This will keep the
existing behaviour of using only the PID of the first traced process, even
if there are forks. Later we can implement #2 or attempt to make it work
even in non-root namespaces.
I'm not sure how this will impact your c2c patchset if you are relying on
the PID data, Leo?
#2) Make a change in the SPE driver to add an option for disabling CONTEXTIDR.
We will disable it from userspace if the event is opened in a non-root
namespace, so we will only show PID data when we know it's valid; otherwise
the existing behaviour of using only the first PID will remain.
Hopefully these solutions will minimise confusing changes in behaviour
between kernel releases.
>>
>> Thanks,
>> Leo
>>
>>>>> Signed-off-by: Leo Yan <leo.yan@...aro.org>
>>>>> Signed-off-by: James Clark <james.clark@....com>
>>>>
>>>> Besides the technical question, you could add your "Co-developed-by"
>>>> tags for patches 06, 07 and 08/08, which you have taken time to refine.
>>>>
>>>> Thank you for your kind efforts.
>>>>
>>>> [1] https://lore.kernel.org/patchwork/patch/1353286/
>>>>
>>>>> Cc: Peter Zijlstra <peterz@...radead.org>
>>>>> Cc: Ingo Molnar <mingo@...hat.com>
>>>>> Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
>>>>> Cc: Mark Rutland <mark.rutland@....com>
>>>>> Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
>>>>> Cc: Jiri Olsa <jolsa@...hat.com>
>>>>> Cc: Namhyung Kim <namhyung@...nel.org>
>>>>> Cc: John Garry <john.garry@...wei.com>
>>>>> Cc: Will Deacon <will@...nel.org>
>>>>> Cc: Mathieu Poirier <mathieu.poirier@...aro.org>
>>>>> Cc: Al Grant <al.grant@....com>
>>>>> Cc: Andre Przywara <andre.przywara@....com>
>>>>> Cc: Wei Li <liwei391@...wei.com>
>>>>> Cc: Tan Xiaojun <tanxiaojun@...wei.com>
>>>>> Cc: Adrian Hunter <adrian.hunter@...el.com>
>>>>> ---
>>>>> tools/perf/util/arm-spe.c | 75 ++++++++++++++++++++++++++-------------
>>>>> 1 file changed, 50 insertions(+), 25 deletions(-)
>>>>>
>>>>> diff --git a/tools/perf/util/arm-spe.c b/tools/perf/util/arm-spe.c
>>>>> index 27a0b9dfe22d..9828fad7e516 100644
>>>>> --- a/tools/perf/util/arm-spe.c
>>>>> +++ b/tools/perf/util/arm-spe.c
>>>>> @@ -223,6 +223,46 @@ static inline u8 arm_spe_cpumode(struct arm_spe *spe, u64 ip)
>>>>> PERF_RECORD_MISC_USER;
>>>>> }
>>>>>
>>>>> +static void arm_spe_set_pid_tid_cpu(struct arm_spe *spe,
>>>>> + struct auxtrace_queue *queue)
>>>>> +{
>>>>> + struct arm_spe_queue *speq = queue->priv;
>>>>> + pid_t tid;
>>>>> +
>>>>> + tid = machine__get_current_tid(spe->machine, speq->cpu);
>>>>> + if (tid != -1) {
>>>>> + speq->tid = tid;
>>>>> + thread__zput(speq->thread);
>>>>> + } else
>>>>> + speq->tid = queue->tid;
>>>>> +
>>>>> + if ((!speq->thread) && (speq->tid != -1)) {
>>>>> + speq->thread = machine__find_thread(spe->machine, -1,
>>>>> + speq->tid);
>>>>> + }
>>>>> +
>>>>> + if (speq->thread) {
>>>>> + speq->pid = speq->thread->pid_;
>>>>> + if (queue->cpu == -1)
>>>>> + speq->cpu = speq->thread->cpu;
>>>>> + }
>>>>> +}
>>>>> +
>>>>> +static int arm_spe_set_tid(struct arm_spe_queue *speq, pid_t tid)
>>>>> +{
>>>>> + int err;
>>>>> + struct arm_spe *spe = speq->spe;
>>>>> + struct auxtrace_queue *queue;
>>>>> +
>>>>> + err = machine__set_current_tid(spe->machine, speq->cpu, tid, tid);
>>>>> + if (err)
>>>>> + return err;
>>>>> +
>>>>> + queue = &speq->spe->queues.queue_array[speq->queue_nr];
>>>>> + arm_spe_set_pid_tid_cpu(speq->spe, queue);
>>>>> + return 0;
>>>>> +}
>>>>> +
>>>>> static void arm_spe_prep_sample(struct arm_spe *spe,
>>>>> struct arm_spe_queue *speq,
>>>>> union perf_event *event,
>>>>> @@ -431,6 +471,7 @@ static int arm_spe_sample(struct arm_spe_queue *speq)
>>>>> static int arm_spe_run_decoder(struct arm_spe_queue *speq, u64 *timestamp)
>>>>> {
>>>>> struct arm_spe *spe = speq->spe;
>>>>> + const struct arm_spe_record *record;
>>>>> int ret;
>>>>>
>>>>> if (!spe->kernel_start)
>>>>> @@ -450,6 +491,11 @@ static int arm_spe_run_decoder(struct arm_spe_queue *speq, u64 *timestamp)
>>>>> if (ret < 0)
>>>>> continue;
>>>>>
>>>>> + record = &speq->decoder->record;
>>>>> + ret = arm_spe_set_tid(speq, record->context_id);
>>>>> + if (ret)
>>>>> + return ret;
>>>>> +
>>>>> ret = arm_spe_sample(speq);
>>>>> if (ret)
>>>>> return ret;
>>>>> @@ -500,6 +546,10 @@ static int arm_spe__setup_queue(struct arm_spe *spe,
>>>>>
>>>>> record = &speq->decoder->record;
>>>>>
>>>>> + ret = arm_spe_set_tid(speq, record->context_id);
>>>>> + if (ret)
>>>>> + return ret;
>>>>> +
>>>>> speq->timestamp = record->timestamp;
>>>>> ret = auxtrace_heap__add(&spe->heap, queue_nr, speq->timestamp);
>>>>> if (ret)
>>>>> @@ -552,31 +602,6 @@ static bool arm_spe__is_timeless_decoding(struct arm_spe *spe)
>>>>> return timeless_decoding;
>>>>> }
>>>>>
>>>>> -static void arm_spe_set_pid_tid_cpu(struct arm_spe *spe,
>>>>> - struct auxtrace_queue *queue)
>>>>> -{
>>>>> - struct arm_spe_queue *speq = queue->priv;
>>>>> - pid_t tid;
>>>>> -
>>>>> - tid = machine__get_current_tid(spe->machine, speq->cpu);
>>>>> - if (tid != -1) {
>>>>> - speq->tid = tid;
>>>>> - thread__zput(speq->thread);
>>>>> - } else
>>>>> - speq->tid = queue->tid;
>>>>> -
>>>>> - if ((!speq->thread) && (speq->tid != -1)) {
>>>>> - speq->thread = machine__find_thread(spe->machine, -1,
>>>>> - speq->tid);
>>>>> - }
>>>>> -
>>>>> - if (speq->thread) {
>>>>> - speq->pid = speq->thread->pid_;
>>>>> - if (queue->cpu == -1)
>>>>> - speq->cpu = speq->thread->cpu;
>>>>> - }
>>>>> -}
>>>>> -
>>>>> static int arm_spe_process_queues(struct arm_spe *spe, u64 timestamp)
>>>>> {
>>>>> unsigned int queue_nr;
>>>>> --
>>>>> 2.28.0
>>>>>