Date:	Thu, 23 Oct 2014 10:37:24 -0300
From:	Arnaldo Carvalho de Melo <acme@...nel.org>
To:	Sukadev Bhattiprolu <sukadev@...ux.vnet.ibm.com>
Cc:	Jiri Olsa <jolsa@...hat.com>, Anton Blanchard <anton@....ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/powerpc: Cache the DWARF debug info

On Wed, Oct 22, 2014 at 10:46:59AM -0700, Sukadev Bhattiprolu wrote:
> Jiri Olsa [jolsa@...hat.com] wrote:
> | > +			goto out;
> | > +		}
> | > +		dso->dwfl = dwfl;
> | 
> | so by this we get the powerpc arch code sharing the dw handle via the dso object,
> | but we have a lot of generic code too ;-)
> 
> Well, this applies to powerpc...
> 
> | 
> | could you make this happen for unwind__get_entries.. probably
> | both could share the same generic code, I guess
> 
> and unwind__get_entries() applies only to x86 and arm, right? ;-)
> Or at least that's what the config/Makefile says.
>
> I can take a look at unwind__get_entries(), but can you please merge
> this fix for now, since the current performance is bad?

Right, I think the way it is now is a good compromise: you seem to be
using the right place to cache this, and it is restricted to powerpc,
so if leaks or excessive memory usage show up in workloads with lots
of DSOs keeping dwfl handles open at the same time, users on other
arches are not affected.

Jiri: do you agree?
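
For reference, here is a minimal sketch of the caching pattern under
discussion, written against elfutils' libdwfl with a stripped-down
struct dso; the 'dwfl' member and the dso__dwfl() helper are
illustrative names, not the exact code from the patch:

	#include <elfutils/libdwfl.h>
	#include <stddef.h>

	struct dso {
		const char	*long_name;	/* path of the object on disk */
		Dwfl		*dwfl;		/* cached DWARF handle, NULL until first use */
		/* ... the rest of perf's dso state ... */
	};

	static char *debuginfo_path;	/* stay with libdwfl's default search path */

	static const Dwfl_Callbacks offline_callbacks = {
		.debuginfo_path	 = &debuginfo_path,
		.find_debuginfo	 = dwfl_standard_find_debuginfo,
		.section_address = dwfl_offline_section_address,
	};

	/* Return the (possibly cached) DWARF handle for @dso, or NULL on failure. */
	static Dwfl *dso__dwfl(struct dso *dso)
	{
		Dwfl *dwfl;

		if (dso->dwfl)			/* cheap path: reuse the cached handle */
			return dso->dwfl;

		dwfl = dwfl_begin(&offline_callbacks);
		if (dwfl == NULL)
			return NULL;

		if (dwfl_report_offline(dwfl, "", dso->long_name, -1) == NULL) {
			dwfl_end(dwfl);
			return NULL;
		}
		dwfl_report_end(dwfl, NULL, NULL);

		dso->dwfl = dwfl;		/* cache it for subsequent samples */
		return dwfl;
	}

The point of caching is simply that dwfl_begin() and the DWARF loading
behind dwfl_report_offline() run once per DSO instead of once per
sample; the handle would then be released with dwfl_end() when the dso
itself is deleted, which is not shown here.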

I'm applying it to my local tree, as I need to change some of those
functions: places where both a machine and a thread are being passed
are being changed to receive just a thread pointer, since, after a
patch I added to my local tree, the machine a thread belongs to can be
obtained from thread->mg->machine.
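
To make that signature change concrete, a small sketch with
simplified, placeholder types and names (not the exact perf
declarations): functions that used to take both a machine and a
thread take only the thread, and the machine is recovered through the
back-pointer.

	struct machine;				/* opaque here */

	struct map_groups {
		struct machine	*machine;	/* back-pointer to the owning machine */
		/* ... map lookup state elided ... */
	};

	struct thread {
		struct map_groups *mg;		/* address space this thread runs in */
		/* ... */
	};

	/* Before: callers have to carry both pointers around. */
	int resolve_callchain_old(struct machine *machine, struct thread *thread);

	/* After: the machine is derived from the thread itself. */
	static inline struct machine *thread__machine(struct thread *thread)
	{
		return thread->mg->machine;
	}

	int resolve_callchain_new(struct thread *thread)
	{
		struct machine *machine = thread__machine(thread);

		/* ... same body as before, using 'machine' where needed ... */
		(void)machine;
		return 0;
	}

The (machine, thread) parameter pairs in those prototypes then
collapse to a single thread argument.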

- Arnaldo
