Message-ID: <CACGkMEtL6a7vDKjbRdJnkiwtZMMh5vUaJ=tH7mf=omZrFy7AFQ@mail.gmail.com>
Date: Fri, 10 Mar 2023 17:41:55 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Xie Yongji <xieyongji@...edance.com>, tglx@...utronix.de,
hch@....de, virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 00/11] VDUSE: Improve performance
On Fri, Mar 10, 2023 at 4:50 PM Michael S. Tsirkin <mst@...hat.com> wrote:
>
> On Tue, Feb 28, 2023 at 05:40:59PM +0800, Xie Yongji wrote:
> > Hi all,
> >
> > This series introduces some ways to improve VDUSE performance.
>
>
> Pls fix warnings reported by 0-day infra, dropping this for now.
Note that I plan to review this next week.
Thanks
>
>
> > Patches 1 ~ 6 bring the current interrupt affinity spreading mechanism
> > to the vduse device and make it possible for the virtio-blk driver
> > to build its blk-mq queues based on it. This helps mitigate virtqueue
> > lock contention in the virtio-blk driver. In our tests, these patches
> > gave a ~50% improvement (600k IOPS -> 900k IOPS) when using per-CPU
> > virtqueues.
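> >
> > Roughly, the spreading could look like the sketch below (the
> > set_vq_affinity signature is an assumption based on this series,
> > not the exact patch code):
> >
> >   #include <linux/group_cpus.h>
> >   #include <linux/slab.h>
> >   #include <linux/vdpa.h>
> >
> >   /* Sketch: spread nvqs virtqueues evenly across CPUs with the
> >    * exported group_cpus_evenly(), then hand each mask to the device
> >    * via the new set_vq_affinity config op.
> >    */
> >   static int spread_vq_affinity(struct vdpa_device *vdev, u16 nvqs)
> >   {
> >           struct cpumask *masks = group_cpus_evenly(nvqs);
> >           u16 i;
> >
> >           if (!masks)
> >                   return -ENOMEM;
> >
> >           for (i = 0; i < nvqs; i++)
> >                   vdev->config->set_vq_affinity(vdev, i, &masks[i]);
> >
> >           kfree(masks);
> >           return 0;
> >   }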
> >
> > Patch 7 adds a per-virtqueue sysfs interface to vduse for changing
> > the affinity of the IRQ callback. This is helpful for performance
> > tuning when the affinity mask contains more than one CPU.
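> >
> > A minimal sketch of what the store side could look like (the
> > attribute and struct member names here are assumptions, not the
> > actual patch):
> >
> >   /* Parse a cpumask written by userspace and apply it as the IRQ
> >    * callback affinity of one virtqueue.
> >    */
> >   static ssize_t irq_cb_affinity_store(struct vduse_virtqueue *vq,
> >                                        const char *buf, size_t count)
> >   {
> >           cpumask_var_t mask;
> >           int ret;
> >
> >           if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
> >                   return -ENOMEM;
> >
> >           ret = cpumask_parse(buf, mask);
> >           if (!ret)
> >                   cpumask_copy(&vq->irq_affinity, mask);  /* assumed field */
> >
> >           free_cpumask_var(mask);
> >           return ret ? ret : count;
> >   }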
> >
> > Patches 8 ~ 9 associate an eventfd with the vdpa callback so that
> > we can signal it directly during IRQ injection instead of scheduling
> > an additional workqueue thread to do so.
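> >
> > In other words, something along these lines (the trigger member name
> > is an assumption):
> >
> >   #include <linux/eventfd.h>
> >   #include <linux/irqreturn.h>
> >
> >   /* The callback carries an optional eventfd; the VDUSE injection
> >    * path can then kick it directly instead of bouncing through a
> >    * workqueue.
> >    */
> >   struct vdpa_callback {
> >           irqreturn_t (*callback)(void *data);
> >           void *private;
> >           struct eventfd_ctx *trigger;    /* new, assumed name */
> >   };
> >
> >   static void vq_irq_inject(struct vdpa_callback *cb)
> >   {
> >           if (cb->trigger)
> >                   eventfd_signal(cb->trigger, 1);  /* fast path */
> >           else if (cb->callback)
> >                   cb->callback(cb->private);
> >   }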
> >
> > Patches 10 and 11 add a sysfs interface for specifying the bounce
> > buffer size in the virtio-vdpa case. High-throughput workloads can
> > benefit from a larger buffer, and the same knob can reduce the memory
> > overhead for low-throughput workloads.
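> >
> > A rough idea of the store side (attribute and field names are
> > assumptions):
> >
> >   /* Accept a new bounce buffer size, but only before the iova domain
> >    * has been created (which patch 10 delays for exactly this reason).
> >    */
> >   static ssize_t bounce_size_store(struct vduse_dev *dev,
> >                                    const char *buf, size_t count)
> >   {
> >           u64 size;
> >
> >           if (kstrtou64(buf, 10, &size))
> >                   return -EINVAL;
> >           if (dev->domain)            /* too late, domain already exists */
> >                   return -EPERM;
> >
> >           dev->bounce_size = size;
> >           return count;
> >   }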
> >
> > Please review, thanks!
> >
> > V2 to V3:
> > - Rebased to the latest kernel tree
> > - Export group_cpus_evenly() instead of irq_create_affinity_masks() [MST]
> > - Remove the sysfs for workqueue control [Jason]
> > - Associate an eventfd to the vdpa callback [Jason]
> > - Signal the eventfd directly in vhost-vdpa case [Jason]
> > - Use round-robin to spread IRQs between CPUs in the affinity mask [Jason]
> > - Handle the CPU hotplug case on IRQ injection [Jason] (see the
> >   sketch after this list)
> > - Remove effective IRQ affinity and balance mechanism for IRQ allocation
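> >
> > For the round-robin and hotplug items above, the CPU selection could
> > look roughly like this (helper name and fallback are assumptions):
> >
> >   #include <linux/cpumask.h>
> >   #include <linux/workqueue.h>
> >
> >   /* Walk the callback affinity mask round-robin, skipping offline
> >    * CPUs, and fall back to an unbound work item if every CPU in the
> >    * mask has been hot-unplugged.
> >    */
> >   static int pick_irq_cpu(const struct cpumask *mask, int last_cpu)
> >   {
> >           int cpu = last_cpu;
> >           unsigned int i;
> >
> >           for (i = 0; i < cpumask_weight(mask); i++) {
> >                   cpu = cpumask_next(cpu, mask);
> >                   if (cpu >= nr_cpu_ids)
> >                           cpu = cpumask_first(mask);
> >                   if (cpu_online(cpu))
> >                           return cpu;
> >           }
> >           return WORK_CPU_UNBOUND;
> >   }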
> >
> > V1 to V2:
> > - Export irq_create_affinity_masks()
> > - Add set/get_vq_affinity and set_irq_affinity callbacks in vDPA
> > framework
> > - Add automatic irq callback affinity support in VDUSE driver [Jason]
> > - Add more background information in the commit log [Jason]
> > - Only support changing effective affinity when the value is a subset
> > of the IRQ callback affinity mask
> >
> > Xie Yongji (11):
> > lib/group_cpus: Export group_cpus_evenly()
> > vdpa: Add set/get_vq_affinity callbacks in vdpa_config_ops
> > vdpa: Add set_irq_affinity callback in vdpa_config_ops
> > vduse: Refactor allocation for vduse virtqueues
> > vduse: Support automatic irq callback affinity
> > vduse: Support set/get_vq_affinity callbacks
> > vduse: Add sysfs interface for irq callback affinity
> > vdpa: Add eventfd for the vdpa callback
> > vduse: Signal interrupt's eventfd directly in vhost-vdpa case
> > vduse: Delay iova domain creation
> > vduse: Support specifying bounce buffer size via sysfs
> >
> > drivers/vdpa/vdpa_user/vduse_dev.c | 490 +++++++++++++++++++++++++----
> > drivers/vhost/vdpa.c | 2 +
> > drivers/virtio/virtio_vdpa.c | 33 ++
> > include/linux/vdpa.h | 25 ++
> > lib/group_cpus.c | 1 +
> > 5 files changed, 488 insertions(+), 63 deletions(-)
> >
> > --
> > 2.20.1
>