Message-ID: <CAP-5=fXnqZxWVbgMcainpTPc6mSHjUu3y1tELjdRk2zfLM4bOw@mail.gmail.com>
Date: Wed, 14 Jun 2023 18:49:40 -0700
From: Ian Rogers <irogers@...gle.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Yuan Can <yuancan@...wei.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Huacai Chen <chenhuacai@...nel.org>,
Andres Freund <andres@...razel.de>,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org
Subject: Re: [PATCH v1 2/2] perf annotation: Switch lock from a mutex to a sharded_mutex
On Wed, Jun 14, 2023 at 5:34 PM Namhyung Kim <namhyung@...nel.org> wrote:
>
> Hi Ian,
>
> On Sun, Jun 11, 2023 at 12:28 AM Ian Rogers <irogers@...gle.com> wrote:
> >
> > Remove the "struct mutex lock" variable from annotation that is
> > allocated per symbol. This saves in the region of 40 bytes per
> > symbol allocation. Use a sharded mutex where the number of shards is
> > set to the number of CPUs. Assuming good hashing of the annotation
> > (done based on the pointer), this means that to contend there need
> > to be more threads than CPUs, which is not currently true in any
> > perf command. Were contention an issue, it would be straightforward
> > to increase the number of shards in the mutex.
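
For reference, the sharded mutex added in patch 1/2 is conceptually
just an array of mutexes with one picked per hash. A minimal sketch,
where the struct layout and the modulo indexing are illustrative
rather than the actual sharded_mutex.c code:

  /* Illustrative sketch; struct mutex comes from tools/perf/util/mutex.h. */
  struct sharded_mutex {
          size_t num_shards;       /* how many mutexes the hashes spread over */
          struct mutex mutexes[];  /* one mutex per shard, allocated together */
  };

  struct mutex *sharded_mutex__get_mutex(struct sharded_mutex *sm, size_t hash)
  {
          /*
           * Map the hash to a shard; two annotations only contend when
           * they are locked concurrently and hash to the same shard.
           */
          return &sm->mutexes[hash % sm->num_shards];
  }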
> >
> > On my Debian/glibc based machine, this reduces the size of struct
> > annotation from 136 bytes to 96 bytes, or nearly 30%.
>
> That's quite a good improvement given the number of symbols
> we can have in a report session!
>
> >
> > Signed-off-by: Ian Rogers <irogers@...gle.com>
> > ---
>
> [SNIP]
> > @@ -1291,17 +1292,64 @@ int disasm_line__scnprintf(struct disasm_line *dl, char *bf, size_t size, bool r
> > return ins__scnprintf(&dl->ins, bf, size, &dl->ops, max_ins_name);
> > }
> >
> > -void annotation__init(struct annotation *notes)
> > +void annotation__exit(struct annotation *notes)
> > {
> > - mutex_init(¬es->lock);
> > + annotated_source__delete(notes->src);
> > }
> >
> > -void annotation__exit(struct annotation *notes)
> > +static struct sharded_mutex *sharded_mutex;
> > +
> > +static void annotation__init_sharded_mutex(void)
> > {
> > - annotated_source__delete(notes->src);
> > - mutex_destroy(¬es->lock);
> > + /* As many mutexes as there are CPUs. */
> > + sharded_mutex = sharded_mutex__new(cpu__max_present_cpu().cpu);
> > +}
> > +
> > +static size_t annotation__hash(const struct annotation *notes)
> > +{
> > + return ((size_t)notes) >> 4;
>
> But I'm afraid it might create more contention depending on the
> malloc implementation. If it always returns 128-byte (or 256-byte)
> aligned memory for this struct, then everything could collide in
> slot 0 when the number of CPUs is 8 or fewer, right?
Right. With 128-byte-aligned allocations "(size_t)notes >> 4" is
always a multiple of 8, so with 8 shards everything would land in
shard 0. I think we can fix that with a secondary hash, and hashmap.h
has one lying around for us:
https://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git/tree/tools/perf/util/hashmap.h?h=tmp.perf-tools-next#n15
It will mean that the sharded locks need a power-of-2 capacity. I'll
work on a v2. Fwiw, the hash of a pointer in collections like those
in Abseil is just the pointer value, so I'll drop the shift that
strips the low bits once I'm using a more expensive hash function.
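Roughly what I have in mind for v2 (just a sketch; annotation__shard()
is an illustrative name, and it assumes the shard count is rounded up
to a power of two with shard_bits being its log2):

  #include "hashmap.h"  /* hash_bits(): multiplicative hash, keeps the top 'bits' bits */

  /*
   * With a power-of-2 shard count, hash_bits() multiplies by a fixed
   * 64-bit golden-ratio constant and keeps the top bits, so the zero
   * low bits from allocator alignment (e.g. 128-byte aligned chunks)
   * no longer funnel every annotation into shard 0.
   */
  static size_t annotation__shard(const struct annotation *notes, int shard_bits)
  {
          return hash_bits((size_t)notes, shard_bits);
  }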
Thanks,
Ian
> Thanks,
> Namhyung
>
>
> > }
> >
> > +static struct mutex *annotation__get_mutex(const struct annotation *notes)
> > +{
> > + static pthread_once_t once = PTHREAD_ONCE_INIT;
> > +
> > + pthread_once(&once, annotation__init_sharded_mutex);
> > + if (!sharded_mutex)
> > + return NULL;
> > +
> > + return sharded_mutex__get_mutex(sharded_mutex, annotation__hash(notes));
> > +}
> > +
> > +void annotation__lock(struct annotation *notes)
> > + NO_THREAD_SAFETY_ANALYSIS
> > +{
> > + struct mutex *mutex = annotation__get_mutex(notes);
> > +
> > + if (mutex)
> > + mutex_lock(mutex);
> > +}
> > +
> > +void annotation__unlock(struct annotation *notes)
> > + NO_THREAD_SAFETY_ANALYSIS
> > +{
> > + struct mutex *mutex = annotation__get_mutex(notes);
> > +
> > + if (mutex)
> > + mutex_unlock(mutex);
> > +}
> > +
> > +bool annotation__trylock(struct annotation *notes)
> > +{
> > + struct mutex *mutex = annotation__get_mutex(notes);
> > +
> > + if (!mutex)
> > + return false;
> > +
> > + return mutex_trylock(mutex);
> > +}
> > +
> > +
> > static void annotation_line__add(struct annotation_line *al, struct list_head *head)
> > {
> > list_add_tail(&al->node, head);