Message-ID: <185f91de-f0cc-4a84-9e41-56370df718fd@linaro.org>
Date: Wed, 21 Jan 2026 09:51:30 +0000
From: James Clark <james.clark@...aro.org>
To: Arnaldo Carvalho de Melo <acme@...nel.org>,
Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Namhyung Kim <namhyung@...nel.org>, Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>, Adrian Hunter <adrian.hunter@...el.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Mark Brown <broonie@...nel.org>
Subject: Re: [PATCH] perf jevents: Handle deleted JSONs in out-of-source
 builds

On 1/20/26 6:54 PM, Arnaldo Carvalho de Melo wrote:
> On Tue, Jan 20, 2026 at 10:01:52AM -0800, Ian Rogers wrote:
>> On Tue, Jan 20, 2026 at 7:39 AM James Clark <james.clark@...aro.org> wrote:
>>>
>>> The cp command here doesn't remove files that have been deleted from
>>> the source tree. That means incremental builds can either succeed
>>> with stale events or fail completely if a stale JSON file contains a
>>> broken reference.
>>>
>>> Fix it by using rsync instead of cp. legacy-cache.json has to be
>>> excluded because it is a generated file that isn't present in the
>>> source tree.
>>>
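>>> As a rough illustration of the shape of the change (the variable
>>> names are placeholders, not the Makefile's real ones), the copy step
>>> becomes something like:
>>>
>>>   # Mirror the source JSONs into the output tree, deleting stale
>>>   # files; --exclude also protects the generated legacy-cache.json
>>>   # in the destination from --delete.
>>>   rsync -a --delete --exclude legacy-cache.json \
>>>           $(JSON_SRC_DIR)/ $(JSON_OUT_DIR)/
>>>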
>>> This only happens when deleting a JSON file, which has only happened
>>> once since the linked commit. The Fixes tag points at the commit that
>>> introduced the problem, rather than the first commit that deleted a
>>> JSON file, in case any future changes that delete JSONs are
>>> backported.
>>>
>>> Reported-by: Mark Brown <broonie@...nel.org>
>>> Closes: https://lore.kernel.org/linux-next/aW5XSAo88_LBPSYI@sirena.org.uk/
>>> Fixes: 4bb55de4ff03 ("perf jevents: Support copying the source json files to OUTPUT")
>>> Signed-off-by: James Clark <james.clark@...aro.org>
>>> ---
>>> This is a bit of a hack; making jevents.py handle multiple input
>>> folders would be a much better solution (sketched below). Then we
>>> could have "gen-pmu-events" for only generated files and "pmu-events"
>>> for only in-tree input files. It would be very clear what's generated
>>> and what's not, and all the copying rules and special clean rules
>>> would just disappear (this isn't the first time these rules have
>>> caused build issues).
>>>
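>>> Hypothetically (the option below is made up for illustration and
>>> doesn't exist today), the build could then invoke something like:
>>>
>>>   # --extra-input is a hypothetical flag for a second input folder
>>>   jevents.py ... pmu-events/arch \
>>>           --extra-input $(OUTPUT)pmu-events/gen-arch
>>>
>>> with no copying into $(OUTPUT) needed at all.
>>>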
>>> Unfortunately, after spending a while trying to modify the script, I
>>> decided the change was too invasive for now. The script emits its
>>> output per file at the very bottom of the logic in
>>> process_one_file(), so adding files from another folder ends up
>>> re-emitting section headers when the next chunk is output. Other
>>> parts of the script do build things up in memory before outputting,
>>> so it was possible to make those parts work with multiple folders
>>> transparently.
>>
>> Thanks James!
>> Acked-by: Ian Rogers <irogers@...gle.com>
>> I see other rsync uses in:
>> tools/testing/selftests/sparc64/Makefile
>> tools/testing/selftests/bpf/Makefile
>> but they aren't the most compelling mainstream uses. I wonder whether
>> we can test for rsync's availability and if not fall back on cp?

That will work if we completely wipe the destination directory every
time. But it would be an untested path, because in reality everyone is
going to have rsync. It might be better to re-implement the relevant
rsync behaviour ourselves, so that at least the same thing runs
everywhere.
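
A minimal sketch of that fallback, assuming placeholder SRC/DST
variables (the generated legacy-cache.json would also need regenerating
after the wipe):

  # Use rsync when available; otherwise wipe the destination so that
  # no stale JSONs survive, then copy everything fresh.
  ifneq ($(shell command -v rsync 2>/dev/null),)
    sync-jsons = rsync -a --delete --exclude legacy-cache.json $(SRC)/ $(DST)/
  else
    sync-jsons = rm -rf $(DST) && mkdir -p $(DST) && cp -a $(SRC)/. $(DST)/
  endif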
>
> It is not mentioned at all in Documentation, so probably its best not to
> add a requirement for it?
>
> - Arnaldo

I can hack together something that does the delete in a few lines of
bash or make then? I did consider that originally, but deleting stale
files is exactly what rsync does, so I just used it.
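
A few-line version of the delete might look something like this (src and
dst stand in for absolute source and output paths; they're illustrative,
not the real Makefile variables):

  # Remove output JSONs whose source counterpart is gone, sparing the
  # generated legacy-cache.json, then copy the tree as before.
  (cd "$dst" && find . -name '*.json' ! -name legacy-cache.json |
          while read -r f; do
                  [ -e "$src/$f" ] || rm -f -- "$f"
          done)
  cp -r "$src/." "$dst/"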