Date: Fri, 5 Apr 2024 16:41:43 -0700
From: Ian Rogers <irogers@...gle.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>, Kan Liang <kan.liang@...ux.intel.com>, 
	Jiri Olsa <jolsa@...nel.org>, Adrian Hunter <adrian.hunter@...el.com>, 
	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>, 
	LKML <linux-kernel@...r.kernel.org>, linux-perf-users@...r.kernel.org
Subject: Re: [PATCH 9/9] perf annotate: Move start field struct to annotated_source

On Thu, Apr 4, 2024 at 10:57 AM Namhyung Kim <namhyung@...nel.org> wrote:
>
> It's only used in perf annotate output which means functions with actual
> samples.  No need to consume memory for every symbol (annotation).
>
> Signed-off-by: Namhyung Kim <namhyung@...nel.org>
> ---
>  tools/perf/util/annotate.c | 10 +++++-----
>  tools/perf/util/annotate.h |  2 +-
>  2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
> index 5f79ae0bccfd..4db49611c386 100644
> --- a/tools/perf/util/annotate.c
> +++ b/tools/perf/util/annotate.c
> @@ -909,9 +909,9 @@ int symbol__annotate(struct map_symbol *ms, struct evsel *evsel,
>         args.arch = arch;
>         args.ms = *ms;
>         if (annotate_opts.full_addr)
> -               notes->start = map__objdump_2mem(ms->map, ms->sym->start);
> +               notes->src->start = map__objdump_2mem(ms->map, ms->sym->start);
>         else
> -               notes->start = map__rip_2objdump(ms->map, ms->sym->start);
> +               notes->src->start = map__rip_2objdump(ms->map, ms->sym->start);
>
>         return symbol__disassemble(sym, &args);
>  }
> @@ -1456,9 +1456,9 @@ void annotation__toggle_full_addr(struct annotation *notes, struct map_symbol *m
>         annotate_opts.full_addr = !annotate_opts.full_addr;
>
>         if (annotate_opts.full_addr)
> -               notes->start = map__objdump_2mem(ms->map, ms->sym->start);
> +               notes->src->start = map__objdump_2mem(ms->map, ms->sym->start);
>         else
> -               notes->start = map__rip_2objdump(ms->map, ms->sym->start);
> +               notes->src->start = map__rip_2objdump(ms->map, ms->sym->start);
>
>         annotation__update_column_widths(notes);
>  }
> @@ -1766,7 +1766,7 @@ static void __annotation_line__write(struct annotation_line *al, struct annotati
>                 int color = -1;
>
>                 if (!annotate_opts.use_offset)
> -                       addr += notes->start;
> +                       addr += notes->src->start;
>
>                 if (!annotate_opts.use_offset) {
>                         printed = scnprintf(bf, sizeof(bf), "%" PRIx64 ": ", addr);
> diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
> index d22b9e9a2fad..d5c821c22f79 100644
> --- a/tools/perf/util/annotate.h
> +++ b/tools/perf/util/annotate.h
> @@ -270,6 +270,7 @@ struct annotated_source {
>         int                     nr_entries;
>         int                     nr_asm_entries;
>         int                     max_jump_sources;
> +       u64                     start;

This introduces a 4-byte hole:

```
struct annotated_source {
       struct list_head           source;               /*     0    16 */
       struct sym_hist *          histograms;           /*    16     8 */
       struct hashmap *           samples;              /*    24     8 */
       int                        nr_histograms;        /*    32     4 */
       int                        nr_events;            /*    36     4 */
       int                        nr_entries;           /*    40     4 */
       int                        nr_asm_entries;       /*    44     4 */
       int                        max_jump_sources;     /*    48     4 */

       /* XXX 4 bytes hole, try to pack */

       u64                        start;                /*    56     8 */
       /* --- cacheline 1 boundary (64 bytes) --- */
       struct {
               u8                 addr;                 /*    64     1 */
               u8                 jumps;                /*    65     1 */
               u8                 target;               /*    66     1 */
               u8                 min_addr;             /*    67     1 */
               u8                 max_addr;             /*    68     1 */
               u8                 max_ins_name;         /*    69     1 */
               u16                max_line_len;         /*    70     2 */
       } widths;                                        /*    64     8 */

       /* size: 72, cachelines: 2, members: 10 */
       /* sum members: 68, holes: 1, sum holes: 4 */
       /* last cacheline: 8 bytes */
};
```

If you sort members from largest to smallest, you generally avoid
holes (max_line_len violates this ordering, but it doesn't introduce
padding). Anyway, a simple fix is to just shuffle things a little and
move start up next to the pointers.

Thanks,
Ian

>         struct {
>                 u8              addr;
>                 u8              jumps;
> @@ -312,7 +313,6 @@ struct annotated_branch {
>  };
>
>  struct LOCKABLE annotation {
> -       u64                     start;
>         struct annotated_source *src;
>         struct annotated_branch *branch;
>  };
> --
> 2.44.0.478.gd926399ef9-goog
>
