[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20131128174001.GH10022@twins.programming.kicks-ass.net>
Date: Thu, 28 Nov 2013 18:40:01 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Eliezer Tamir <eliezer.tamir@...ux.intel.com>
Cc: Arjan van de Ven <arjan@...ux.intel.com>, lenb@...nel.org,
rjw@...ysocki.net, David Miller <davem@...emloft.net>,
rui.zhang@...el.com, jacob.jun.pan@...ux.intel.com,
Mike Galbraith <bitbucket@...ine.de>,
Ingo Molnar <mingo@...nel.org>, hpa@...or.com,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH 7/8] sched, net: Fixup busy_loop_us_clock()
On Thu, Nov 28, 2013 at 06:49:00PM +0200, Eliezer Tamir wrote:
> I have tested this patch and I see a performance regression of about
> 1.5%.
Cute, can you qualify your metric? Since this is a poll loop the only
metric that would be interesting is the response latency. Is that what's
increased by 1.5%? Also, what's the standard deviation of your result?
Also, can you provide relevant perf results for this? Is it really the
sti;cli pair that's degrading your latency?
Better yet, can you provide us with a simple test-case that we can run
locally (preferably single machine setup, using localnet or somesuch).
> Maybe it would be better, rather then testing in the fast path, to
> simply disallow busy polling altogether when sched_clock_stable is
> not true?
Sadly that doesn't work; sched_clock_stable can become false at any time
after boot (and does, even on recent machines).
That said; let me see if I can come up with a few patches to optimize
the entire thing; that'd be something we all benefit from.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists