Date: Mon, 5 Feb 2024 16:37:47 -0800
From: Namhyung Kim <namhyung@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Mark Rutland <mark.rutland@....com>, 
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, 
	Adrian Hunter <adrian.hunter@...el.com>, Nick Terrell <terrelln@...com>, 
	Kan Liang <kan.liang@...ux.intel.com>, Andi Kleen <ak@...ux.intel.com>, 
	Kajol Jain <kjain@...ux.ibm.com>, Athira Rajeev <atrajeev@...ux.vnet.ibm.com>, 
	Huacai Chen <chenhuacai@...nel.org>, Masami Hiramatsu <mhiramat@...nel.org>, 
	Vincent Whitchurch <vincent.whitchurch@...s.com>, "Steinar H. Gunderson" <sesse@...gle.com>, 
	Liam Howlett <liam.howlett@...cle.com>, Miguel Ojeda <ojeda@...nel.org>, 
	Colin Ian King <colin.i.king@...il.com>, Dmitrii Dolgov <9erthalion6@...il.com>, 
	Yang Jihong <yangjihong1@...wei.com>, Ming Wang <wangming01@...ngson.cn>, 
	James Clark <james.clark@....com>, K Prateek Nayak <kprateek.nayak@....com>, 
	Sean Christopherson <seanjc@...gle.com>, Leo Yan <leo.yan@...aro.org>, 
	Ravi Bangoria <ravi.bangoria@....com>, German Gomez <german.gomez@....com>, 
	Changbin Du <changbin.du@...wei.com>, Paolo Bonzini <pbonzini@...hat.com>, Li Dong <lidong@...o.com>, 
	Sandipan Das <sandipan.das@....com>, liuwenyu <liuwenyu7@...wei.com>, 
	linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org, 
	Guilherme Amadio <amadio@...too.org>
Subject: Re: [PATCH v7 01/25] perf maps: Switch from rbtree to lazily sorted
 array for addresses

Hi Ian,

Sorry for the late reply.

On Thu, Feb 1, 2024 at 8:21 PM Ian Rogers <irogers@...gle.com> wrote:
>
> On Thu, Feb 1, 2024 at 6:48 PM Namhyung Kim <namhyung@...nel.org> wrote:
[SNIP]
> > > +int maps__copy_from(struct maps *dest, struct maps *parent)
> > > +{
> > > +       /* Note, if struct map were immutable then cloning could use ref counts. */
> > > +       struct map **parent_maps_by_address;
> > > +       int err = 0;
> > > +       unsigned int n;
> > > +
> > > +       down_write(maps__lock(dest));
> > >         down_read(maps__lock(parent));
> > >
> > > -       maps__for_each_entry(parent, rb_node) {
> > > -               struct map *new = map__clone(rb_node->map);
> > > +       parent_maps_by_address = maps__maps_by_address(parent);
> > > +       n = maps__nr_maps(parent);
> > > +       if (maps__empty(dest)) {
> > > +               /* No existing mappings so just copy from parent to avoid reallocs in insert. */
> > > +               unsigned int nr_maps_allocated = RC_CHK_ACCESS(parent)->nr_maps_allocated;
> > > +               struct map **dest_maps_by_address =
> > > +                       malloc(nr_maps_allocated * sizeof(struct map *));
> > > +               struct map **dest_maps_by_name = NULL;
> > >
> > > -               if (new == NULL) {
> > > +               if (!dest_maps_by_address)
> > >                         err = -ENOMEM;
> > > -                       goto out_unlock;
> > > +               else {
> > > +                       if (maps__maps_by_name(parent)) {
> > > +                               dest_maps_by_name =
> > > +                                       malloc(nr_maps_allocated * sizeof(struct map *));
> > > +                       }
> > > +
> > > +                       RC_CHK_ACCESS(dest)->maps_by_address = dest_maps_by_address;
> > > +                       RC_CHK_ACCESS(dest)->maps_by_name = dest_maps_by_name;
> > > +                       RC_CHK_ACCESS(dest)->nr_maps_allocated = nr_maps_allocated;
> > >                 }
> > >
> > > -               err = unwind__prepare_access(maps, new, NULL);
> > > -               if (err)
> > > -                       goto out_unlock;
> > > +               for (unsigned int i = 0; !err && i < n; i++) {
> > > +                       struct map *pos = parent_maps_by_address[i];
> > > +                       struct map *new = map__clone(pos);
> > >
> > > -               err = maps__insert(maps, new);
> > > -               if (err)
> > > -                       goto out_unlock;
> > > +                       if (!new)
> > > +                               err = -ENOMEM;
> > > +                       else {
> > > +                               err = unwind__prepare_access(dest, new, NULL);
> > > +                               if (!err) {
> > > +                                       dest_maps_by_address[i] = new;
> > > +                                       if (dest_maps_by_name)
> > > +                                               dest_maps_by_name[i] = map__get(new);
> > > +                                       RC_CHK_ACCESS(dest)->nr_maps = i + 1;
> > > +                               }
> > > +                       }
> > > +                       if (err)
> > > +                               map__put(new);
> > > +               }
> > > +               maps__set_maps_by_address_sorted(dest, maps__maps_by_address_sorted(parent));
> > > +               if (!err) {
> > > +                       RC_CHK_ACCESS(dest)->last_search_by_name_idx =
> > > +                               RC_CHK_ACCESS(parent)->last_search_by_name_idx;
> > > +                       maps__set_maps_by_name_sorted(dest,
> > > +                                               dest_maps_by_name &&
> > > +                                               maps__maps_by_name_sorted(parent));
> > > +               } else {
> > > +                       RC_CHK_ACCESS(dest)->last_search_by_name_idx = 0;
> > > +                       maps__set_maps_by_name_sorted(dest, false);
> > > +               }
> > > +       } else {
> > > +               /* Unexpected copying to a maps containing entries. */
> > > +               for (unsigned int i = 0; !err && i < n; i++) {
> > > +                       struct map *pos = parent_maps_by_address[i];
> > > +                       struct map *new = map__clone(pos);
> > >
> > > -               map__put(new);
> > > +                       if (!new)
> > > +                               err = -ENOMEM;
> > > +                       else {
> > > +                               err = unwind__prepare_access(dest, new, NULL);
> > > +                               if (!err)
> > > +                                       err = maps__insert(dest, new);
> >
> > Shouldn't it be __maps__insert()?
>
> On entry the read lock is taken on parent, but no lock is taken on
> dest, so the locked version is used.

I think you added the write lock on dest (the down_write() at the top
of maps__copy_from()), so the unlocked __maps__insert() should be safe
here.

Thanks,
Namhyung

>
> > > +                       }
> > > +                       map__put(new);
> > > +               }
> > >         }
> > > -
> > > -       err = 0;
> > > -out_unlock:
> > >         up_read(maps__lock(parent));
> > > +       up_write(maps__lock(dest));
> > >         return err;
> > >  }
