Message-ID: <CAM9d7ci5zL8NWMrJVq4FQ242LNx0cQoY3Z32B+yuO2HFu6R1gA@mail.gmail.com>
Date: Wed, 14 Jun 2023 17:34:29 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Yuan Can <yuancan@...wei.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Huacai Chen <chenhuacai@...nel.org>,
Andres Freund <andres@...razel.de>,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v1 2/2] perf annotation: Switch lock from a mutex to a sharded_mutex
Hi Ian,
On Sun, Jun 11, 2023 at 12:28 AM Ian Rogers <irogers@...gle.com> wrote:
>
> Remove the "struct mutex lock" variable from annotation that is
> allocated per symbol. This removes in the region of 40 bytes per
> symbol allocation. Use a sharded mutex where the number of shards is
> set to the number of CPUs. Assuming good hashing of the annotation
> (done based on the pointer), this means in order to contend there
> needs to be more threads than CPUs, which is not currently true in any
> perf command. Were contention an issue it is straightforward to
> increase the number of shards in the mutex.
>
> On my Debian/glibc based machine, this reduces the size of struct
> annotation from 136 bytes to 96 bytes, or nearly 30%.
That's quite a good improvement given the number of symbols
we can have in a report session!
>
> Signed-off-by: Ian Rogers <irogers@...gle.com>
> ---
[SNIP]
> @@ -1291,17 +1292,64 @@ int disasm_line__scnprintf(struct disasm_line *dl, char *bf, size_t size, bool r
> return ins__scnprintf(&dl->ins, bf, size, &dl->ops, max_ins_name);
> }
>
> -void annotation__init(struct annotation *notes)
> +void annotation__exit(struct annotation *notes)
> {
> - mutex_init(¬es->lock);
> + annotated_source__delete(notes->src);
> }
>
> -void annotation__exit(struct annotation *notes)
> +static struct sharded_mutex *sharded_mutex;
> +
> +static void annotation__init_sharded_mutex(void)
> {
> - annotated_source__delete(notes->src);
> - mutex_destroy(¬es->lock);
> + /* As many mutexes as there are CPUs. */
> + sharded_mutex = sharded_mutex__new(cpu__max_present_cpu().cpu);
> +}
> +
> +static size_t annotation__hash(const struct annotation *notes)
> +{
> + return ((size_t)notes) >> 4;
But I'm afraid it might create more contention depending on the
malloc implementation. If it always returns 128-byte (or 256-byte)
aligned memory for this struct, then every annotation could collide
in shard 0 when the number of CPUs is 8 or less, right?
Thanks,
Namhyung
> }
>
> +static struct mutex *annotation__get_mutex(const struct annotation *notes)
> +{
> + static pthread_once_t once = PTHREAD_ONCE_INIT;
> +
> + pthread_once(&once, annotation__init_sharded_mutex);
> + if (!sharded_mutex)
> + return NULL;
> +
> + return sharded_mutex__get_mutex(sharded_mutex, annotation__hash(notes));
> +}
> +
> +void annotation__lock(struct annotation *notes)
> + NO_THREAD_SAFETY_ANALYSIS
> +{
> + struct mutex *mutex = annotation__get_mutex(notes);
> +
> + if (mutex)
> + mutex_lock(mutex);
> +}
> +
> +void annotation__unlock(struct annotation *notes)
> + NO_THREAD_SAFETY_ANALYSIS
> +{
> + struct mutex *mutex = annotation__get_mutex(notes);
> +
> + if (mutex)
> + mutex_unlock(mutex);
> +}
> +
> +bool annotation__trylock(struct annotation *notes)
> +{
> + struct mutex *mutex = annotation__get_mutex(notes);
> +
> + if (!mutex)
> + return false;
> +
> + return mutex_trylock(mutex);
> +}
> +
> +
> static void annotation_line__add(struct annotation_line *al, struct list_head *head)
> {
> list_add_tail(&al->node, head);