Message-ID: <20081104090405.GA507@elte.hu>
Date:	Tue, 4 Nov 2008 10:04:05 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Steven Rostedt <srostedt@...hat.com>
Subject: Re: [PATCH 3/3] ftrace: function tracer with irqs disabled


* Ingo Molnar <mingo@...e.hu> wrote:

> * Steven Rostedt <rostedt@...dmis.org> wrote:
> 
> > Running hackbench 3 times with the irqs disabled and 3 times with 
> > the preempt disabled function tracer yielded:
> > 
> > tracing type       times            entries recorded
> > ------------      --------          ----------------
> > irq disabled      43.393            166433066
> >                   43.282            166172618
> >                   43.298            166256704
> > 
> > preempt disabled  38.969            159871710
> >                   38.943            159972935
> >                   39.325            161056510
> 
> your numbers might be correct, but i found that hackbench is not 
> reliable boot-to-boot - it can easily produce 10% systematic noise 
> or more. (perhaps depending on how the various socket data 
> structures happen to be allocated)
> 
> the really conclusive way to test this would be to add a hack that 
> either does preempt disable or irqs disable, depending on a runtime 
> flag - and then observe how hackbench performance reacts to the 
> value of that flag.

... which is exactly what your patch implements :-)
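(For readers following along: the runtime toggle described in the quoted paragraph could be sketched roughly as below. This is a hypothetical userspace illustration, not Steven's actual patch; the fake_* macros stand in for the kernel's local_irq_save()/restore() and preempt_disable()/enable() primitives, and trace_use_irq_disable is an invented name for the runtime flag.)

```c
#include <stdio.h>

/* Userspace stand-ins for the kernel primitives. In the real tracer
 * these would be local_irq_save()/local_irq_restore() and
 * preempt_disable()/preempt_enable(). */
static int irqs_off_depth, preempt_off_depth;
#define fake_local_irq_save()    (irqs_off_depth++)
#define fake_local_irq_restore() (irqs_off_depth--)
#define fake_preempt_disable()   (preempt_off_depth++)
#define fake_preempt_enable()    (preempt_off_depth--)

/* Hypothetical runtime flag: 1 = disable irqs around the trace hook,
 * 0 = only disable preemption. Flipping this at runtime lets the same
 * kernel be benchmarked both ways, boot-to-boot noise excluded. */
static int trace_use_irq_disable = 1;

static void trace_function_entry(unsigned long ip)
{
	if (trace_use_irq_disable)
		fake_local_irq_save();
	else
		fake_preempt_disable();

	/* ... record the trace entry for 'ip' into the ring buffer ... */
	printf("traced %#lx (irqs-off mode: %d)\n", ip, trace_use_irq_disable);

	if (trace_use_irq_disable)
		fake_local_irq_restore();
	else
		fake_preempt_enable();
}
```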

> note that preempt-disable will also produce less trace entries, 
> especially in very irq-rich workloads. Hence it will be "faster".

this point still holds. Do we have any good guess about the 'captured 
trace events per second' rate in the two cases? Are they the same?

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/