Message-ID: <TY2PR06MB3424E673A1374CAC37E647CA85079@TY2PR06MB3424.apcprd06.prod.outlook.com>
Date: Wed, 16 Nov 2022 11:24:23 +0000
From: Angus Chen <angus.chen@...uarmicro.com>
To: Thomas Gleixner <tglx@...utronix.de>,
"Michael S. Tsirkin" <mst@...hat.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Ming Lei <ming.lei@...hat.com>,
Jason Wang <jasowang@...hat.com>
Subject: RE: IRQ affinity problem from virtio_blk
> -----Original Message-----
> From: Thomas Gleixner <tglx@...utronix.de>
> Sent: Wednesday, November 16, 2022 6:56 PM
> To: Angus Chen <angus.chen@...uarmicro.com>; Michael S. Tsirkin
> <mst@...hat.com>
> Cc: linux-kernel@...r.kernel.org; Ming Lei <ming.lei@...hat.com>; Jason
> Wang <jasowang@...hat.com>
> Subject: RE: IRQ affinity problem from virtio_blk
>
> On Wed, Nov 16 2022 at 01:02, Angus Chen wrote:
> >> On Wed, Nov 16, 2022 at 12:24:24AM +0100, Thomas Gleixner wrote:
> > Any other information I need to provide, please tell me.
>
> A sensible use case for 180+ virtio block devices in a single guest.
>
Our card can provide more than 512 virtio_blk devices. One virtio_blk
device is passed through to one container (e.g. docker), so we need
that many devices.

In the first patch I dropped IRQD_AFFINITY_MANAGED in virtio_blk.
As you know, even if we only use a small number of queues, like 1 or 2,
we still occupy 80 vectors. That is rather wasteful and makes it easy
to exhaust the IRQ resources.

IRQD_AFFINITY_MANAGED by itself is not the problem; many devices all
using IRQD_AFFINITY_MANAGED is the problem.
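
To illustrate what I mean (this is not the actual patch, just a sketch,
and the exact line contents are from memory): virtio_blk's init_vq()
passes an irq_affinity descriptor to virtio_find_vqs(), which is what
makes the virtio_pci core allocate managed (IRQD_AFFINITY_MANAGED)
MSI-X vectors. Passing NULL instead falls back to plain, non-managed
vectors, roughly:

--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ static int init_vq(struct virtio_blk *vblk)
-	err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, &desc);
+	/* NULL desc: virtio_pci allocates non-managed vectors */
+	err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, NULL);

(The now-unused local "struct irq_affinity desc" would be removed as
well.) With a NULL descriptor the per-VQ interrupts are ordinary,
migratable IRQs instead of reserved managed ones, at the cost of losing
the automatic spreading across CPUs.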
Thanks.
> Thanks,
>
> tglx