Message-ID: <CACVXFVNUzpfUBwhXOdETVr=jOY8Qwex68+cFG3_ypXj9TFc48g@mail.gmail.com>
Date: Mon, 11 May 2015 21:12:56 +0800
From: Ming Lei <ming.lei@...onical.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Dave Kleikamp <dave.kleikamp@...cle.com>,
Jens Axboe <axboe@...nel.dk>, Zach Brown <zab@...bo.net>,
Maxim Patlasov <mpatlasov@...allels.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v3 3/4] block: loop: use kthread_work
On Mon, May 11, 2015 at 3:20 PM, Christoph Hellwig <hch@...radead.org> wrote:
> On Thu, May 07, 2015 at 06:32:03PM +0800, Ming Lei wrote:
>> > I can't really parse this; what's the specific advantage here?
>>
>> Patch 4's commit log provides the test data.
>>
>> From the data, it can be seen that a single thread is enough to reach
>> throughput similar to the previous approach, which submits IO from
>> work items concurrently.
>>
>> A single thread also cuts down on context switches a lot, and one
>> thread is what is typically used to submit AIO in practice.
>
> But we still need to support the non-AIO case: for one, due to
> bisectability, and second, even with AIO support we'll still have
> people using it.
For the non-AIO case, a single thread had been used for a very long
time; it was only converted to workqueue in v4.0, and that has already
caused a performance regression for Fedora live booting. As the
discussion[1] shows, even though submitting I/O concurrently via work
items can improve random IO throughput, it may hurt sequential IO
performance at the same time, so it is probably better to restore the
single-thread behaviour.
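Concretely, the single-thread model is just the standard kthread_worker
pattern. A rough sketch (API names as of v4.0; simplified, not the exact
patch code, and the loop_* field and function names are illustrative):

#include <linux/kthread.h>

/* one work item is embedded in each loop command (request pdu) */
static void loop_queue_work(struct kthread_work *work)
{
        struct loop_cmd *cmd = container_of(work, struct loop_cmd, work);

        /* do the actual page-cache or AIO submission for this command */
}

static int loop_init_worker(struct loop_device *lo)
{
        /* one worker plus exactly one kthread per loop device */
        init_kthread_worker(&lo->worker);
        lo->worker_task = kthread_run(kthread_worker_fn, &lo->worker,
                                      "loop%d", lo->lo_number);
        return IS_ERR(lo->worker_task) ? -ENOMEM : 0;
}

static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
                         const struct blk_mq_queue_data *bd)
{
        struct loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
        struct loop_device *lo = bd->rq->q->queuedata;

        blk_mq_start_request(bd->rq);

        /* all requests funnel into the single per-device kthread */
        init_kthread_work(&cmd->work, loop_queue_work);
        queue_kthread_work(&lo->worker, &cmd->work);
        return BLK_MQ_RQ_QUEUE_OK;
}

This way every request for a given loop device is submitted from the
same task, which is what avoids the extra context switches mentioned
above.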
For AIO support, if loop really has such a high performance
requirement, I think multiple hw queues with a per-hwq kthread would be
a better fit than the current work approach; a sketch follows.
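To make that concrete (purely hypothetical, not part of this series;
struct loop_hctx_data and submit_queues are made up for illustration,
the latter would be, say, a module parameter):

#include <linux/kthread.h>
#include <linux/slab.h>

struct loop_hctx_data {                 /* hypothetical per-hwq context */
        struct kthread_worker   worker;
        struct task_struct      *worker_task;
};

/* start one kthread_worker per hw queue from ->init_hctx() */
static int loop_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
                          unsigned int index)
{
        struct loop_device *lo = data;
        struct loop_hctx_data *hd;

        hd = kzalloc(sizeof(*hd), GFP_KERNEL);
        if (!hd)
                return -ENOMEM;

        init_kthread_worker(&hd->worker);
        hd->worker_task = kthread_run(kthread_worker_fn, &hd->worker,
                                      "loop%d/%u", lo->lo_number, index);
        if (IS_ERR(hd->worker_task)) {
                kfree(hd);
                return -ENOMEM;
        }
        hctx->driver_data = hd;
        return 0;
}

/* and in the tag_set setup: */
        lo->tag_set.nr_hw_queues = submit_queues;

Then ->queue_rq() would queue each command to the worker of its own
hctx, so AIO submission could scale across several threads without
going back to the fully unbound workqueue model.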
[1] http://marc.info/?t=143082678400002&r=1&w=2
Thanks,
Ming Lei