Message-ID: <20190318062150.GC6654@xz-x1>
Date: Mon, 18 Mar 2019 14:21:50 +0800
From: Peter Xu <peterx@...hat.com>
To: Christoph Hellwig <hch@....de>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Jason Wang <jasowang@...hat.com>,
Luiz Capitulino <lcapitulino@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Michael S. Tsirkin" <mst@...hat.com>
Subject: Virtio-scsi multiqueue irq affinity

Hi, Christoph & all,

I noticed that starting from commit 0d9f0a52c8b9 ("virtio_scsi: use
virtio IRQ affinity", 2017-02-27) the virtio-scsi driver uses a new
way (via irq_create_affinity_masks()) to automatically initialize IRQ
affinities for its multi-queues, which differs from all the other
virtio devices (like virtio-net, which still uses
virtqueue_set_affinity(), i.e. essentially irq_set_affinity_hint()).

Firstly, it will definitely break some userspace programs: scripts
that used to set the bindings explicitly will now simply fail with
-EIO every time they echo to /proc/irq/N/smp_affinity of any of the
multi-queue IRQs (see write_irq_affinity()).
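
For reference, below is a minimal userspace sketch of what such a
binding script effectively does; the IRQ number (45) and the CPU mask
are just placeholders, not taken from any real setup.  For an IRQ
whose affinity is kernel-managed, the write is rejected and errno is
set to EIO; before the change, a write like this would simply have
overridden the hint set via irq_set_affinity_hint().

/* set_irq_affinity.c - minimal illustration of setting an IRQ's
 * affinity from userspace, the way binding scripts typically do it.
 * The IRQ number and mask below are placeholders.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/proc/irq/45/smp_affinity"; /* placeholder IRQ */
	const char *mask = "f\n";                       /* CPUs 0-3 */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* With a managed affinity (as virtio-scsi queue IRQs have after
	 * the cited commit), this write fails and errno is EIO.
	 */
	if (write(fd, mask, strlen(mask)) < 0)
		fprintf(stderr, "write: %s\n", strerror(errno));

	close(fd);
	return 0;
}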

Is there any specific reason to do it the new way?  AFAIU we should
still allow system admins to decide what to do for such
configurations, e.g., what if we only want to provision half of the
CPU resources to handle IRQs for a specific virtio-scsi controller?
We won't be able to achieve that with the current policy.  Or, could
this be a question for the IRQ subsystem (irq_create_affinity_masks())
in general?  Any special considerations behind the big picture?

I believe I must have missed some context here and there... but I'd
like to raise the question anyway.  Say, if the new way is preferred,
maybe it would be worth spreading it to the rest of the virtio
drivers that support multi-queues as well.

Thanks,
--
Peter Xu