Message-Id: <6629E963-BF86-4344-9FB4-331C12684A86@oracle.com>
Date: Fri, 1 Dec 2006 11:21:31 -0800
From: Zach Brown <zach.brown@...cle.com>
To: "Chen, Kenneth W" <kenneth.w.chen@...el.com>
Cc: "Andrew Morton" <akpm@...l.org>,
"linux-kernel" <linux-kernel@...r.kernel.org>,
"Chris Mason" <chris.mason@...cle.com>
Subject: Re: [rfc patch] optimize o_direct on block device
On Nov 30, 2006, at 10:16 PM, Chen, Kenneth W wrote:
> Zach Brown wrote on Thursday, November 30, 2006 1:45 PM
>>> At that time, a patch was written for the raw device to demonstrate
>>> that large performance headroom is achievable (~20% speedup on a
>>> micro-benchmark and ~2% on a db transaction processing benchmark)
>>> with a tight I/O submission processing loop.
>>
>> Where exactly does the benefit come from? icache misses? "atomic"
>> ops leading to pipeline flushes?
>
> The benefit comes from a shorter path length: it takes much less time
> to process one I/O request, in both the submit and completion paths.
> I always think in terms of how many instructions, or clock ticks, it
> takes to convert a user request into a bio and submit it, and then, on
> the return path, to run the bio callback and do the appropriate I/O
> completion (sync or async).
Sure.
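To make the comparison concrete, the shape of the path being described
is roughly the following. This is a minimal sketch using 2.6-era block
layer interfaces; the names sketch_submit() and sketch_end_io() are made
up for illustration and are not from the actual patch:

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/completion.h>

    /* Illustrative only, not the actual patch: one pinned user page
     * becomes one bio, submitted straight at the block device, with
     * completion handled in the bio end_io callback. */
    static int sketch_end_io(struct bio *bio, unsigned int bytes_done,
                             int error)
    {
            if (bio->bi_size)
                    return 1;               /* still partially in flight */
            complete(bio->bi_private);      /* sync caller waits on this */
            bio_put(bio);
            return 0;
    }

    static void sketch_submit(struct block_device *bdev, struct page *page,
                              sector_t sector, struct completion *done)
    {
            struct bio *bio = bio_alloc(GFP_KERNEL, 1);

            bio->bi_bdev    = bdev;
            bio->bi_sector  = sector;
            bio->bi_end_io  = sketch_end_io;
            bio->bi_private = done;
            bio_add_page(bio, page, PAGE_SIZE, 0);

            submit_bio(READ, bio);          /* ends up in sketch_end_io */
    }

The point of the tight loop is that almost nothing sits between the
user's request and submit_bio(), so every instruction removed from this
path is counted on every I/O.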
What I'm hoping for is an understanding of what exactly the path is
doing with those cycles. Do we have any more detailed measurements
than, say, get_cycles() before and after the call?
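Something as blunt as this would be a start. A rough sketch, assuming a
kernel context; the wrapper name timed_submit() is made up:

    #include <linux/bio.h>
    #include <linux/kernel.h>
    #include <linux/timex.h>        /* get_cycles(), cycles_t */

    /* Hypothetical timing wrapper: this only covers the submit side,
     * since completion runs asynchronously later. */
    static void timed_submit(int rw, struct bio *bio)
    {
            cycles_t t0 = get_cycles();

            submit_bio(rw, bio);

            printk(KERN_INFO "submit_bio: %llu cycles\n",
                   (unsigned long long)(get_cycles() - t0));
    }

But that only says how long the call took, not where inside it the
cycles went.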
Maybe it's time for me to have a good sit down with systemtap :)
- z