Date:	Fri, 31 May 2013 17:56:29 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Andrew Hunter <ahh@...gle.com>
Cc:	linux-kernel@...r.kernel.org, eranian@...gle.com, mingo@...hat.com
Subject: Re: [PATCH] perf: reduce stack usage of schedule_events

On Thu, May 23, 2013 at 11:07:03AM -0700, Andrew Hunter wrote:
> schedule_events caches event constraints on the stack during
> scheduling.  Given the number of possible events, this is 512 bytes of
> stack; since it can be invoked under schedule() under god-knows-what,
> this is causing stack blowouts.
> 
> Trade some space usage for stack safety: add a place to cache the
> constraint pointer in struct perf_event.  For 8 bytes per event (1% of
> its size) we can save the giant stack frame.
> 
> This shouldn't change any aspect of scheduling whatsoever, and while
> in theory the locality is a tiny bit worse, I doubt we'll see any
> performance impact either.
> 
> Tested: `perf stat whatever` does not blow up and produces results
> that aren't obviously wrong.  I'm not sure how to run particularly
> good tests of perf code, but this should not produce any functional
> change whatsoever.
> 
> Signed-off-by: Andrew Hunter <ahh@...gle.com>
> Reviewed-by: Stephane Eranian <eranian@...gle.com>
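
For context, the trade described above reads in code roughly as
follows. This is a minimal sketch, not the actual patch: the types and
names (struct perf_event_stub, MAX_EVENTS, get_event_constraints())
are simplified stand-ins for the kernel's real definitions.

    #define MAX_EVENTS 64   /* assumption: bound on events per pass */

    struct event_constraint { unsigned long idxmsk; int weight; };

    struct perf_event_stub {
            /* ... existing perf_event fields ... */
            struct event_constraint *constraint; /* new 8-byte cache */
    };

    /* Stand-in for the real constraint lookup. */
    static struct event_constraint dummy_constraint;
    static struct event_constraint *
    get_event_constraints(struct perf_event_stub *event)
    {
            (void)event;
            return &dummy_constraint;
    }

    /* Before: a local array of constraint pointers. With 64 events
     * this is 64 * 8 = 512 bytes of stack, on top of whatever
     * schedule() and its callers have already consumed. */
    static int schedule_events_old(struct perf_event_stub **events, int n)
    {
            struct event_constraint *constraints[MAX_EVENTS];
            for (int i = 0; i < n; i++)
                    constraints[i] = get_event_constraints(events[i]);
            /* ... assign hardware counters using constraints[i] ... */
            return 0;
    }

    /* After: cache the pointer in each event instead. Costs 8 bytes
     * per perf_event but eliminates the 512-byte stack frame. */
    static int schedule_events_new(struct perf_event_stub **events, int n)
    {
            for (int i = 0; i < n; i++)
                    events[i]->constraint = get_event_constraints(events[i]);
            /* ... assign hardware counters via events[i]->constraint ... */
            return 0;
    }

The locality caveat in the description presumably refers to the cached
pointers now being scattered across perf_event structures rather than
packed together in a single hot stack array.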

OK, nothing really strange popped out during a quick read.

Thanks!
