Message-ID: <20130624071544.GR9422@kernel.dk>
Date: Mon, 24 Jun 2013 09:15:45 +0200
From: Jens Axboe <axboe@...nel.dk>
To: Ingo Molnar <mingo@...nel.org>
Cc: Matthew Wilcox <willy@...ux.intel.com>,
Al Viro <viro@...iv.linux.org.uk>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org, linux-scsi@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: RFC: Allow block drivers to poll for I/O instead of sleeping
On Sun, Jun 23 2013, Ingo Molnar wrote:
> I'm wondering why this makes such a performance difference.
The key ingredient here is simply not going to sleep, only to get an
IRQ and get woken up very shortly again. NAPI and similar approaches
work great for high IOPS cases, where you maintain a certain depth of
IO. For lower queue depth or sync IO (like Willy is running here),
nothing beats the pure app driven poll from a latency perspective. I've
seen plenty of systems where the power management is so aggressive that you
enter lower C states very quickly, and that of course makes things even
worse. Intelligent polling would make that less of a problem.
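
To make that concrete, here is a minimal sketch (not from Willy's patches) of
what a submitter-driven poll could look like at the driver level. The names
blkdev_poll_completion() and hw_queue_peek_done() are made up for the
example, and the jiffies based budget is deliberately crude:

#include <linux/blkdev.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

/*
 * Hypothetical helper: spin on the hardware completion queue for a bounded
 * amount of time instead of sleeping and waiting for the IRQ.
 * hw_queue_peek_done() stands in for whatever the driver uses to check its
 * completion ring without taking an interrupt.
 */
static int blkdev_poll_completion(struct request *rq, unsigned int budget_us)
{
	unsigned long timeout = jiffies + usecs_to_jiffies(budget_us);

	do {
		if (hw_queue_peek_done(rq))
			return 0;	/* completed, never slept, no IRQ/wakeup */
		cpu_relax();
	} while (time_before(jiffies, timeout));

	return -ETIME;			/* budget spent, fall back to irq driven IO */
}

For a single outstanding sync request the CPU has nothing better to do
anyway, so burning cycles here is cheaper than an IRQ, a wakeup, and
potentially a C-state exit.
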
Willy, I think the general design is fine; hooking in via the bdi is the
only way to get back to the right place from where you need to sleep.
Some thoughts:
- This should be hooked in via blk-iopoll; both of them should call into
the same driver hook for polling completions (see the sketch after this
list).
- It needs to be more intelligent about when you want to poll and when you
want regular irq driven IO.
- Following on from that, the app either needs to opt in (and hence
willingly sacrifice CPU cycles of its scheduling slice), or it needs to be
smarter about when it gives up and goes back to irq driven IO.
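
To illustrate the first point, here is a rough sketch of what sharing one
driver hook between blk-iopoll and the sync polling path might look like.
The blk_iopoll calls are the current interface as far as I recall it;
struct nvme_queue's iopoll member, nvme_poll_cq() and nvme_iopoll() are
invented for the example:

#include <linux/blk-iopoll.h>
#include <linux/kernel.h>

/*
 * One driver hook that reaps up to 'budget' completions from a hardware
 * queue. Both the softirq (blk-iopoll) path and the sync polling path call
 * this, so neither needs its own CQ walker. nvme_poll_cq() and
 * struct nvme_queue are stand-ins for the real thing.
 */
static int nvme_poll_cq(struct nvme_queue *nvmeq, int budget);

/* blk-iopoll callback, run from softirq context after the irq scheduled it */
static int nvme_iopoll(struct blk_iopoll *iop, int budget)
{
	struct nvme_queue *nvmeq = container_of(iop, struct nvme_queue, iopoll);
	int done = nvme_poll_cq(nvmeq, budget);

	if (done < budget)
		blk_iopoll_complete(iop);	/* queue drained, irq side takes over */

	return done;
}

/* setup; the weight of 32 is picked arbitrarily for the example */
static void nvme_init_iopoll(struct nvme_queue *nvmeq)
{
	blk_iopoll_init(&nvmeq->iopoll, 32, nvme_iopoll);
	blk_iopoll_enable(&nvmeq->iopoll);
}

The bdi hook from the RFC would then end up calling nvme_poll_cq() directly
from the submitting task's context, instead of waiting for the irq to
schedule the softirq.
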
--
Jens Axboe