Message-ID: <CAM9d7cjYvMndUmSuwnE1ETwnu_6WrxQ4UzsNHHvo4SVR250L7A@mail.gmail.com>
Date: Tue, 7 May 2024 21:10:54 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: James Clark <james.clark@....com>
Cc: linux-perf-users@...r.kernel.org, atrajeev@...ux.vnet.ibm.com, 
	irogers@...gle.com, Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Mark Rutland <mark.rutland@....com>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Adrian Hunter <adrian.hunter@...el.com>, "Liang, Kan" <kan.liang@...ux.intel.com>, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] perf symbols: Update kcore map before merging in
 remaining symbols

On Tue, May 7, 2024 at 7:13 AM James Clark <james.clark@....com> wrote:
>
> When loading kcore, the main vmlinux map is updated in the same loop
> that merges the remaining maps. If a map that overlaps is merged in
> before kcore, the list can become unsortable when the main map addresses
> are updated. This will later trigger the check_invariants() assert:
>
>   $ perf record
>   $ perf report
>
>   util/maps.c:96: check_invariants: Assertion `map__end(prev) <=
>     map__start(map) || map__start(prev) == map__start(map)' failed.
>   Aborted
>
> Fix it by moving the main map update prior to the loop so that
> maps__merge_in() can split it if necessary.

Looks like you and Leo are working on the same problem.

https://lore.kernel.org/r/20240505202805.583253-1-leo.yan@arm.com/

>
> Fixes: 659ad3492b91 ("perf maps: Switch from rbtree to lazily sorted array for addresses")
> Signed-off-by: James Clark <james.clark@....com>
> ---
>  tools/perf/util/symbol.c | 40 +++++++++++++++++++++-------------------
>  1 file changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 2d95f22d713d..e98dfe766da3 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -1289,7 +1289,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  {
>         struct maps *kmaps = map__kmaps(map);
>         struct kcore_mapfn_data md;
> -       struct map *replacement_map = NULL;
> +       struct map *map_ref, *replacement_map = NULL;
>         struct machine *machine;
>         bool is_64_bit;
>         int err, fd;
> @@ -1367,6 +1367,24 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>         if (!replacement_map)
>                 replacement_map = list_entry(md.maps.next, struct map_list_node, node)->map;
>
> +       /*
> +        * Update addresses of vmlinux map. Re-insert it to ensure maps are
> +        * correctly ordered. Do this before using maps__merge_in() for the
> +        * remaining maps so vmlinux gets split if necessary.
> +        */
> +       map_ref = map__get(map);
> +       maps__remove(kmaps, map_ref);

A nitpick: it'd be more natural to use 'map' instead of 'map_ref'
(even though they refer to the same object), since IIUC we remove
the old 'map', update it, and then add it back.

> +
> +       map__set_start(map_ref, map__start(replacement_map));
> +       map__set_end(map_ref, map__end(replacement_map));
> +       map__set_pgoff(map_ref, map__pgoff(replacement_map));
> +       map__set_mapping_type(map_ref, map__mapping_type(replacement_map));

So here, replacement_map should not be NULL, right?

Thanks,
Namhyung

> +
> +       err = maps__insert(kmaps, map_ref);
> +       map__put(map_ref);
> +       if (err)
> +               goto out_err;
> +
>         /* Add new maps */
>         while (!list_empty(&md.maps)) {
>                 struct map_list_node *new_node = list_entry(md.maps.next, struct map_list_node, node);
> @@ -1374,24 +1392,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>
>                 list_del_init(&new_node->node);
>
> -               if (RC_CHK_EQUAL(new_map, replacement_map)) {
> -                       struct map *map_ref;
> -
> -                       /* Ensure maps are correctly ordered */
> -                       map_ref = map__get(map);
> -                       maps__remove(kmaps, map_ref);
> -
> -                       map__set_start(map_ref, map__start(new_map));
> -                       map__set_end(map_ref, map__end(new_map));
> -                       map__set_pgoff(map_ref, map__pgoff(new_map));
> -                       map__set_mapping_type(map_ref, map__mapping_type(new_map));
> -
> -                       err = maps__insert(kmaps, map_ref);
> -                       map__put(map_ref);
> -                       map__put(new_map);
> -                       if (err)
> -                               goto out_err;
> -               } else {
> +               /* skip if replacement_map, already inserted above */
> +               if (!RC_CHK_EQUAL(new_map, replacement_map)) {
>                         /*
>                          * Merge kcore map into existing maps,
>                          * and ensure that current maps (eBPF)
> --
> 2.34.1
>
