Message-ID: <4BBD9C5A.90307@cn.fujitsu.com>
Date: Thu, 08 Apr 2010 17:05:30 +0800
From: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To: Michel Lespinasse <walken@...gle.com>
CC: Hitoshi Mitake <mitake@....info.waseda.ac.jp>,
Ingo Molnar <mingo@...e.hu>,
Steven Rostedt <rostedt@...dmis.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Ming Lei <tom.leiming@...il.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: lock's trace events can improve mutex's performance in userspace?
Hi Michel,
Michel Lespinasse wrote:
> Sorry for the late reply...
>
> One thing to consider in locking micro-benchmarks is that often, code
> changes that slow down parts of the contended code path where the lock
> is not held, will result in an increase of the reported
> micro-benchmark metric. This effect is particularly marked for
> micro-benchmarks that consist of multiple threads doing empty
> acquire/release loops.
>
> As a thought experiment, imagine what would happen if you added a
> one-millisecond sleep in the contended code path for mutex
> acquisition. Soon all but one of your benchmark threads would be
> sleeping, and the only non-sleeping thread would be able to spin on
> that lock/unlock loop with no contention, resulting in very nice
> results for the micro-benchmark. Remove the sleep and the lock/unlock
> threads will have to contend, resulting in lower reported performance
> metrics.
Thanks a lot for your valuable reply; it helps me see the
issue much more clearly.
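
For reference, the empty acquire/release micro-benchmark you describe
above is roughly the following (a minimal pthread sketch only; NTHREADS
and ITERATIONS are placeholders, not the actual benchmark parameters):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS   4
#define ITERATIONS 1000000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *loop_thread(void *arg)
{
	int i;

	/* Empty critical section: nearly all time is spent in
	 * acquire/release, so anything that puts the other threads to
	 * sleep lets the remaining thread spin uncontended and inflates
	 * the reported metric. */
	for (i = 0; i < ITERATIONS; i++) {
		pthread_mutex_lock(&lock);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, loop_thread, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("%d threads x %d lock/unlock pairs done\n",
	       NTHREADS, ITERATIONS);
	return 0;
}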
I've done a test to address your conjecture, adding usleep(1) in the
mutex acquisition path, and the result shows that contention is indeed
reduced. I also ran a test that does more work while holding the mutex,
and the result shows that the optimization ratio decreases.
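
Roughly, the two variations were along the following lines (a sketch
only, reusing the lock and ITERATIONS from the loop above; the
trylock()+usleep() stand-in for the contended acquire path and the
extra-work loop count are placeholders, not the exact code I tested):

#include <unistd.h>

/* Variation 1: sleep in the contended acquire path (your thought
 * experiment). pthread_mutex_trylock() stands in for the contended
 * slow path; the 1us interval is just a placeholder. */
static void *loop_thread_sleep(void *arg)
{
	int i;

	for (i = 0; i < ITERATIONS; i++) {
		while (pthread_mutex_trylock(&lock) != 0)
			usleep(1);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

/* Variation 2: do more work while holding the mutex; the inner loop
 * count is a placeholder for the extra work. */
static void *loop_thread_work(void *arg)
{
	int i, j;
	volatile int dummy = 0;

	for (i = 0; i < ITERATIONS; i++) {
		pthread_mutex_lock(&lock);
		for (j = 0; j < 100; j++)
			dummy++;
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}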
Thanks,
Xiao