Date:	Sun, 4 Jan 2015 13:36:13 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Jason Wang <jasowang@...hat.com>
Cc:	rusty@...tcorp.com.au, virtualization@...ts.linux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/3] Sharing MSIX irq for tx/rx queue pairs

On Sun, Jan 04, 2015 at 04:38:17PM +0800, Jason Wang wrote:
> 
> On 12/28/2014 03:52 PM, Michael S. Tsirkin wrote:
> > On Fri, Dec 26, 2014 at 10:53:42AM +0800, Jason Wang wrote:
> >> Hi all:
> >>
> >> This series tries to share one MSIX irq between each tx/rx queue
> >> pair. This is done through:
> >>
> >> - introducing the virtio pci channel, a group of virtqueues that
> >>   share a single MSIX irq (Patch 1)
> >> - exposing the channel setting through the virtio core api (Patch 2)
> >> - using the channel setting in virtio-net (Patch 3)
> >>
> >> For transports that do not support channels, the channel parameters
> >> are simply ignored. Devices that do not use channels can simply pass
> >> NULL or zero to the virtio core.
> >>
> >> With these patches, one MSIX irq is saved for each TX/RX queue pair.
> >>
> >> Please review.
> > How does this sharing affect performance?
> >
> 
> Patch 3 only checks more_used() for the tx ring, which in fact reduces
> the effect of the event index and may introduce more tx interrupts.
> After fixing this issue, I tested with 1 vcpu and 1 queue. No obvious
> changes in performance were noticed.
> 
> Thanks
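
A rough sketch of the pairing idea quoted above (the names vq_pair and
vq_pair_interrupt are invented for illustration; this is not the API
the patches add): with one MSI-X vector per tx/rx pair, the shared
handler simply lets the core scan both rings.

#include <linux/interrupt.h>
#include <linux/virtio.h>
#include <linux/virtio_ring.h>

/* Invented for the example: the two virtqueues that share one vector. */
struct vq_pair {
	struct virtqueue *rx;
	struct virtqueue *tx;
};

static irqreturn_t vq_pair_interrupt(int irq, void *opaque)
{
	struct vq_pair *pair = opaque;
	irqreturn_t ret = IRQ_NONE;

	/* vring_interrupt() runs the vq's callback if it has used work. */
	if (vring_interrupt(irq, pair->rx) == IRQ_HANDLED)
		ret = IRQ_HANDLED;
	if (vring_interrupt(irq, pair->tx) == IRQ_HANDLED)
		ret = IRQ_HANDLED;

	return ret;
}

static int vq_pair_request_irq(unsigned int irq, struct vq_pair *pair)
{
	/* One request_irq() per pair instead of one per virtqueue. */
	return request_irq(irq, vq_pair_interrupt, 0, "virtio-net-pair",
			   pair);
}

As Jason notes above, if such a handler checks more_used() only for the
tx ring, the benefit of the event index shrinks and extra tx interrupts
can result.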

Is this with or without MQ?
With MQ it seems easy to believe, since the interrupts are
distributed between CPUs.

Without MQ, it should be possible to create UDP workloads where
processing incoming and outgoing interrupts
on separate CPUs is a win.
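
As an illustration of why (rx_irq and tx_irq stand for the two
per-queue MSI-X vectors; this is not code from the patches): with
separate vectors the two handlers can be steered to different CPUs,
which is no longer possible once the pair shares one vector.

#include <linux/interrupt.h>
#include <linux/cpumask.h>

/* Purely illustrative: pin rx completions to CPU 0, tx to CPU 1. */
static void steer_rx_tx(unsigned int rx_irq, unsigned int tx_irq)
{
	irq_set_affinity_hint(rx_irq, cpumask_of(0));
	irq_set_affinity_hint(tx_irq, cpumask_of(1));
}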

-- 
MST
