Message-ID: <20190327081619.GG20525@lst.de>
Date: Wed, 27 Mar 2019 09:16:19 +0100
From: Christoph Hellwig <hch@....de>
To: luferry <luferry@....com>
Cc: Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
Dongli Zhang <dongli.zhang@...cle.com>,
Ming Lei <ming.lei@...hat.com>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] make blk_mq_map_queues more friendly for cpu topology
On Tue, Mar 26, 2019 at 03:55:10PM +0800, luferry wrote:
>
> At 2019-03-26 15:39:54, "Christoph Hellwig" <hch@....de> wrote:
> >Why isn't this using the automatic PCI-level affinity assignment to
> >start with?
>
> When virtio-blk is enabled with multiple queues but only 2 MSI-X vectors,
> vp_dev->per_vq_vectors will be false and vp_get_vq_affinity will return NULL directly,
> so blk_mq_virtio_map_queues will fall back to blk_mq_map_queues.
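
For reference, the fallback path described above looks roughly like this -- a
simplified sketch based on drivers/virtio/virtio_pci_common.c and
block/blk-mq-virtio.c around v5.0; the exact code may differ between kernel
versions:

	/* drivers/virtio/virtio_pci_common.c (simplified) */
	const struct cpumask *vp_get_vq_affinity(struct virtio_device *vdev,
						 int index)
	{
		struct virtio_pci_device *vp_dev = to_vp_device(vdev);

		/*
		 * With only 2 vectors (config + one shared for all VQs),
		 * per_vq_vectors is false, so no per-queue affinity exists.
		 */
		if (!vp_dev->per_vq_vectors ||
		    vp_dev->vqs[index]->msix_vector == VIRTIO_MSI_NO_VECTOR)
			return NULL;

		return pci_irq_get_affinity(vp_dev->pci_dev,
					    vp_dev->vqs[index]->msix_vector);
	}

	/* block/blk-mq-virtio.c (simplified) */
	int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap,
			struct virtio_device *vdev, int first_vec)
	{
		const struct cpumask *mask;
		unsigned int queue, cpu;

		if (!vdev->config->get_vq_affinity)
			goto fallback;

		for (queue = 0; queue < qmap->nr_queues; queue++) {
			mask = vdev->config->get_vq_affinity(vdev,
							     first_vec + queue);
			/* NULL here is what triggers the generic fallback */
			if (!mask)
				goto fallback;

			for_each_cpu(cpu, mask)
				qmap->mq_map[cpu] = qmap->queue_offset + queue;
		}

		return 0;
	fallback:
		return blk_mq_map_queues(qmap);
	}
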
What is the point of the multiqueue mode if you don't have enough
(virtual) MSI-X vectors?