Message-ID: <20141023153932.GB25215@krava.brq.redhat.com>
Date: Thu, 23 Oct 2014 17:39:32 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>,
Anton Blanchard <anton@....ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/powerpc: Cache the DWARF debug info
On Thu, Oct 23, 2014 at 12:33:22PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Thu, Oct 23, 2014 at 05:13:06PM +0200, Jiri Olsa escreveu:
> > On Thu, Oct 23, 2014 at 11:26:34AM -0300, Arnaldo Carvalho de Melo wrote:
> > > That is why I thought it would be a compromise to put what he did, it
> > > would not make the existing situation that much worse, work needs to be
> > > done in this area :-\
>
> > I think we just need to put that libdw handle into dso object
> > as I suggested above
>
> Isn't it there already?
>
> The patch does:
>
> +++ b/tools/perf/util/dso.h
> @@ -127,6 +127,7 @@ struct dso {
> const char *long_name;
> u16 long_name_len;
> u16 short_name_len;
> + void *dwfl; /* DWARF debug info */
>
>
> ----------
>
> What you want, on top of that, at a minimum, we somehow limit the number
> of simultaneously dwfl_begin()'ed DSOs, right?
no, the patch is doing that.. as I wrote:
---
I think this can stay in 'struct dso' which is limited
by that code you show below (dso__data_open code)
I think we need just add generic functions that allocates/destroys
the dwfl handle and lazy allocate&store this handle whenever it's
needed and destroy it in dso__delete
---
so just:
1) generic function to do the lazy allocation of the dwfl handle
2) store the handle within dso object
3) destroy it in dso__delete
the limit on the number of opened dso objects should also
cover the dwfl handles
thanks,
jirka
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/