Message-ID: <764f3c.12445.169be70834b.Coremail.luferry@163.com>
Date: Wed, 27 Mar 2019 17:17:18 +0800 (CST)
From: luferry <luferry@....com>
To: "Christoph Hellwig" <hch@....de>
Cc: "Jens Axboe" <axboe@...nel.dk>,
"Dongli Zhang" <dongli.zhang@...cle.com>,
"Ming Lei" <ming.lei@...hat.com>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re:Re: [PATCH] make blk_mq_map_queues more friendly for cpu
topology

Actually, I just bought a VM from a public cloud provider and ran into this problem.
After reading the code and comparing the PCI device info, I reproduced this scenario.
Since common users cannot change the number of MSI vectors, I suggest making
blk_mq_map_queues more friendly. blk_mq_map_queues may be the last resort.
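
To show what I mean by "friendly for cpu topology", here is a rough user-space
model of the idea (the cpu_to_core[] table, the CPU/queue counts and the
round-robin baseline are my own assumptions for illustration, not the actual
blk_mq_map_queues code):

#include <stdio.h>

#define NR_CPUS    8
#define NR_QUEUES  4

/* Assumed topology: CPUs with the same core id are hyperthread siblings. */
static const int cpu_to_core[NR_CPUS] = { 0, 0, 1, 1, 2, 2, 3, 3 };

int main(void)
{
	int naive[NR_CPUS], topo[NR_CPUS];

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		/* Plain round-robin: ignores which CPUs share a core. */
		naive[cpu] = cpu % NR_QUEUES;
		/* Topology-aware: siblings of one core share a queue. */
		topo[cpu] = cpu_to_core[cpu] % NR_QUEUES;
	}

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: round-robin -> q%d, topology-aware -> q%d\n",
		       cpu, naive[cpu], topo[cpu]);
	return 0;
}

With this assumed layout, plain round-robin puts the two siblings of each core
on different queues, while the topology-aware variant keeps them together.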
At 2019-03-27 16:16:19, "Christoph Hellwig" <hch@....de> wrote:
>On Tue, Mar 26, 2019 at 03:55:10PM +0800, luferry wrote:
>>
>>
>>
>> At 2019-03-26 15:39:54, "Christoph Hellwig" <hch@....de> wrote:
>> >Why isn't this using the automatic PCI-level affinity assignment to
>> >start with?
>>
>> When virtio-blk is enabled with multiple queues but only 2 MSI-X vectors,
>> vp_dev->per_vq_vectors will be false and vp_get_vq_affinity will return NULL directly,
>> so blk_mq_virtio_map_queues will fall back to blk_mq_map_queues.
>
>What is the point of the multiqueue mode if you don't have enough
>(virtual) MSI-X vectors?
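
For completeness, here is a small self-contained model of the fallback path
described in the quoted reply above (fake_get_vq_affinity() and generic_map()
are made-up stand-ins for illustration, not the kernel API): with no per-VQ
vectors the affinity lookup yields NULL, so the generic mapper ends up
deciding the whole CPU-to-queue layout.

#include <stdio.h>
#include <stddef.h>

#define NR_CPUS   4
#define NR_QUEUES 2

/* Pretend the device has no per-VQ vectors: every affinity lookup fails. */
static const unsigned long *fake_get_vq_affinity(int queue)
{
	(void)queue;
	return NULL;
}

/* Stand-in for the generic round-robin fallback mapping. */
static void generic_map(int *mq_map)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		mq_map[cpu] = cpu % NR_QUEUES;
}

int main(void)
{
	int mq_map[NR_CPUS];

	for (int queue = 0; queue < NR_QUEUES; queue++) {
		if (!fake_get_vq_affinity(queue)) {
			/* No affinity info: fall back to the generic mapping. */
			generic_map(mq_map);
			break;
		}
		/* (with a real affinity mask we would map its CPUs to @queue) */
	}

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d -> queue %d\n", cpu, mq_map[cpu]);
	return 0;
}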