Message-Id: <20240507141210.195939-4-james.clark@arm.com>
Date: Tue, 7 May 2024 15:12:07 +0100
From: James Clark <james.clark@....com>
To: linux-perf-users@...r.kernel.org,
atrajeev@...ux.vnet.ibm.com,
irogers@...gle.com
Cc: James Clark <james.clark@....com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
"Liang, Kan" <kan.liang@...ux.intel.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 3/4] perf symbols: Update kcore map before merging in remaining symbols

When loading kcore, the main vmlinux map is updated in the same loop
that merges in the remaining maps. If an overlapping map is merged in
before the main map is updated, the list can become unsortable once the
main map addresses change. This later triggers the check_invariants()
assert:

  $ perf record
  $ perf report
  util/maps.c:96: check_invariants: Assertion `map__end(prev) <=
  map__start(map) || map__start(prev) == map__start(map)' failed.
  Aborted
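
The invariant being asserted is that, once the maps are sorted by start
address, each map must either end at or before the start of the next
one, or share its start address with it. A minimal, self-contained
sketch of that ordering rule (simplified types and names, not the
actual util/maps.c implementation) looks roughly like this:

  #include <assert.h>
  #include <stddef.h>

  struct map { unsigned long start, end; };

  /* Simplified stand-in for the ordering rule in check_invariants():
   * consecutive maps in the sorted array must either not overlap or
   * share a start address (e.g. a map that has been split).
   */
  static void check_sorted(const struct map *maps, size_t nr)
  {
          for (size_t i = 1; i < nr; i++) {
                  const struct map *prev = &maps[i - 1];
                  const struct map *cur = &maps[i];

                  assert(prev->end <= cur->start || prev->start == cur->start);
          }
  }

  int main(void)
  {
          /* Sorted, non-overlapping maps: the assertion holds. */
          struct map ok[] = { { 0x1000, 0x2000 }, { 0x2000, 0x3000 } };

          check_sorted(ok, sizeof(ok) / sizeof(ok[0]));
          return 0;
  }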

Fix it by moving the main map update prior to the loop so that
maps__merge_in() can split it if necessary.

Fixes: 659ad3492b91 ("perf maps: Switch from rbtree to lazily sorted array for addresses")
Signed-off-by: James Clark <james.clark@....com>
---
 tools/perf/util/symbol.c | 40 +++++++++++++++++++++-------------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 2d95f22d713d..e98dfe766da3 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1289,7 +1289,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 {
 	struct maps *kmaps = map__kmaps(map);
 	struct kcore_mapfn_data md;
-	struct map *replacement_map = NULL;
+	struct map *map_ref, *replacement_map = NULL;
 	struct machine *machine;
 	bool is_64_bit;
 	int err, fd;
@@ -1367,6 +1367,24 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	if (!replacement_map)
 		replacement_map = list_entry(md.maps.next, struct map_list_node, node)->map;
 
+	/*
+	 * Update addresses of vmlinux map. Re-insert it to ensure maps are
+	 * correctly ordered. Do this before using maps__merge_in() for the
+	 * remaining maps so vmlinux gets split if necessary.
+	 */
+	map_ref = map__get(map);
+	maps__remove(kmaps, map_ref);
+
+	map__set_start(map_ref, map__start(replacement_map));
+	map__set_end(map_ref, map__end(replacement_map));
+	map__set_pgoff(map_ref, map__pgoff(replacement_map));
+	map__set_mapping_type(map_ref, map__mapping_type(replacement_map));
+
+	err = maps__insert(kmaps, map_ref);
+	map__put(map_ref);
+	if (err)
+		goto out_err;
+
 	/* Add new maps */
 	while (!list_empty(&md.maps)) {
 		struct map_list_node *new_node = list_entry(md.maps.next, struct map_list_node, node);
@@ -1374,24 +1392,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 		list_del_init(&new_node->node);
 
-		if (RC_CHK_EQUAL(new_map, replacement_map)) {
-			struct map *map_ref;
-
-			/* Ensure maps are correctly ordered */
-			map_ref = map__get(map);
-			maps__remove(kmaps, map_ref);
-
-			map__set_start(map_ref, map__start(new_map));
-			map__set_end(map_ref, map__end(new_map));
-			map__set_pgoff(map_ref, map__pgoff(new_map));
-			map__set_mapping_type(map_ref, map__mapping_type(new_map));
-
-			err = maps__insert(kmaps, map_ref);
-			map__put(map_ref);
-			map__put(new_map);
-			if (err)
-				goto out_err;
-		} else {
+		/* skip if replacement_map, already inserted above */
+		if (!RC_CHK_EQUAL(new_map, replacement_map)) {
 			/*
 			 * Merge kcore map into existing maps,
 			 * and ensure that current maps (eBPF)
--
2.34.1