Message-Id: <20210713161906.457857-1-stefanha@redhat.com>
Date: Tue, 13 Jul 2021 17:19:03 +0100
From: Stefan Hajnoczi <stefanha@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: Daniel Lezcano <daniel.lezcano@...aro.org>,
Stefano Garzarella <sgarzare@...hat.com>,
Ming Lei <ming.lei@...hat.com>,
"Michael S . Tsirkin" <mst@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Jens Axboe <axboe@...nel.dk>, Jason Wang <jasowang@...hat.com>,
linux-block@...r.kernel.org,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
virtualization@...ts.linux-foundation.org,
linux-pm@...r.kernel.org, Christoph Hellwig <hch@...radead.org>,
Stefan Hajnoczi <stefanha@...hat.com>
Subject: [RFC 0/3] cpuidle: add poll_source API and virtio vq polling

These patches are not polished yet, but I would like to request feedback on this
approach and share performance results with you.

Idle CPUs tentatively enter a busy wait loop before halting when the cpuidle
haltpoll driver is enabled inside a virtual machine. This reduces wakeup
latency for events that occur soon after the vCPU becomes idle.

This patch series extends the cpuidle busy wait loop with the new poll_source
API so drivers can participate in polling. Such polling-aware drivers disable
their device's irq during the busy wait loop to avoid the cost of interrupts.
This reduces latency further than regular cpuidle haltpoll, which still relies
on irqs.

Virtio drivers are modified to use the poll_source API so all virtio device
types get this feature. The following virtio-blk fio benchmark results show the
improvement:
            IOPS (numjobs=4, iodepth=1, 4 virtqueues)
                before         poll_source        io_poll
4k randread     167102         186049 (+11%)      186654 (+11%)
4k randwrite    162204         181214 (+11%)      181850 (+12%)
4k randrw       159520         177071 (+11%)      177928 (+11%)

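For reference, the fio parameters from the table correspond to a job file along
these lines (the device path and runtime are illustrative; the guest setup is
not part of this cover letter):

```
; 4k random I/O, numjobs=4, iodepth=1, as in the table above
[global]
filename=/dev/vdb     ; illustrative guest block device
direct=1
ioengine=libaio
bs=4k
iodepth=1
numjobs=4
group_reporting=1

[randread]
rw=randread
```

The randwrite and randrw rows use rw=randwrite and rw=randrw respectively.
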
The comparison against io_poll shows that cpuidle poll_source achieves
equivalent performance to the block layer's io_poll feature (which I
implemented in a separate patch series [1]).

The advantage of poll_source is that applications do not need to explicitly set
the RWF_HIPRI I/O request flag. The poll_source approach is attractive because
few applications actually use RWF_HIPRI and it takes advantage of CPU cycles we
would have spent in cpuidle haltpoll anyway.

The current series does not improve virtio-net. I haven't investigated deeply,
but it is possible that NAPI and poll_source do not combine well. See the final
patch for a starting point on making the two work together.

I have not tried this on bare metal but it might help there too. The cost of
disabling a device's irq must be less than the savings from avoiding irq
handling for this optimization to make sense.

[1] https://lore.kernel.org/linux-block/20210520141305.355961-1-stefanha@redhat.com/

Stefan Hajnoczi (3):
  cpuidle: add poll_source API
  virtio: add poll_source virtqueue polling
  softirq: participate in cpuidle polling

 drivers/cpuidle/Makefile           |   1 +
 drivers/virtio/virtio_pci_common.h |   7 ++
 include/linux/interrupt.h          |   2 +
 include/linux/poll_source.h        |  53 +++++++++++++++
 include/linux/virtio.h             |   2 +
 include/linux/virtio_config.h      |   2 +
 drivers/cpuidle/poll_source.c      | 102 +++++++++++++++++++++++++++++
 drivers/cpuidle/poll_state.c       |   6 ++
 drivers/virtio/virtio.c            |  34 ++++++++++
 drivers/virtio/virtio_pci_common.c |  86 ++++++++++++++++++++++++
 drivers/virtio/virtio_pci_modern.c |   2 +
 kernel/softirq.c                   |  14 ++++
 12 files changed, 311 insertions(+)
 create mode 100644 include/linux/poll_source.h
 create mode 100644 drivers/cpuidle/poll_source.c

--
2.31.1