Message-ID: <566E9A7E.3030203@redhat.com>
Date: Mon, 14 Dec 2015 11:31:26 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Ming Lei <ming.lei@...onical.com>
Cc: Stefan Hajnoczi <stefanha@...il.com>, Jens Axboe <axboe@...nel.dk>,
linux-kernel <linux-kernel@...r.kernel.org>,
"Michael S. Tsirkin" <mst@...hat.com>, linux-api@...r.kernel.org,
Linux Virtualization <virtualization@...ts.linux-foundation.org>,
Stefan Hajnoczi <stefanha@...hat.com>
Subject: Re: [RFC PATCH 2/2] block: virtio-blk: support multi virt queues per
virtio-blk device
On 18/06/2014 06:04, Ming Lei wrote:
> For virtio-blk, I don't think more queues are always better; we need
> to take the following host-side factors into account:
>
> - the host storage's peak performance: it is generally reached with
> more than one job using libaio (suppose that number is N; then we can
> basically use N iothreads per device in qemu to approach peak
> performance)
>
> - the iothreads' load (if the iothreads are already fully loaded,
> increasing the number of queues doesn't help at all)
>
> In my test, I only use the current per-device iothread (x-dataplane)
> in qemu to handle the 2 vqs' notifications and process all I/O from
> the 2 vqs, and it looks like this can improve IOPS by ~30%.
>
> For virtio-scsi, the current usage doesn't make full use of blk-mq's
> advantage either, because only one vq is active at a time, so I
> guess the benefit of multiple vqs won't be very large. I'd like to
> post patches to support that first, then provide test data with
> more queues (8, 16).
Hi Ming Lei,
would you like to repost these patches now that MQ support is in the kernel?
Also, I changed my mind about moving linux-aio to AioContext. I now
think it's a good idea, because it limits the number of io_getevents
syscalls. O:-) So I would be happy to review your patches for that as well.
Paolo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/