Message-ID: <1342114713.10815.25.camel@ul30vt>
Date:	Thu, 12 Jul 2012 11:38:33 -0600
From:	Alex Williamson <alex.williamson@...hat.com>
To:	Avi Kivity <avi@...hat.com>
Cc:	Jan Kiszka <jan.kiszka@...mens.com>,
	"mst@...hat.com" <mst@...hat.com>,
	"gleb@...hat.com" <gleb@...hat.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 0/2] kvm: level irqfd and new eoifd

On Thu, 2012-07-12 at 10:19 -0600, Alex Williamson wrote:
> On Thu, 2012-07-12 at 12:35 +0300, Avi Kivity wrote:
> > On 07/11/2012 10:57 PM, Alex Williamson wrote:
> > >> 
> > >> > We still have classic KVM device assignment to provide fast-path INTx.
> > >> > But if we want to replace it midterm, I think it's necessary for VFIO to
> > >> > be able to provide such a path as well.
> > >> 
> > >> I would like VFIO to have no regressions vs. kvm device assignment,
> > >> except perhaps in uncommon corner cases.  So I agree.
> > > 
> > > I ran a few TCP_RR netperf tests forcing a 1Gb tg3 nic to use INTx.
> > > Without irqchip support vfio gets a bit more than 60% of KVM device
> > > assignment.  That's a little bit of an unfair comparison since it's more
> > > than just the I/O path.  With the proposed interfaces here, enabling
> > > irqchip, vfio is within 10% of KVM device assignment for INTx.  For MSI,
> > > I can actually make vfio come out more than 30% better than KVM device
> > > assignment if I send the eventfd from the hard irq handler.  Using a
> > > threaded handler as the code currently does, vfio is still behind KVM.
> > > It's hard to beat a direct call chain.
> > 
> > We can have a direct call chain with vfio too, using a custom eventfd
> > poll function, no?  Assuming we set up a fast path for unicast msi.
> 
> You'll have to help me out a little: eventfd_signal walks the wait_queue
> and calls each function.  On the injection path that includes
> irqfd_wakeup.  For an MSI that seems to already provide direct
> injection.  For level we'll schedule_work, so that explains the overhead
> in that path, but it's not too dissimilar to a threaded irq.  vfio
> does something very similar, so there's a schedule_work both on inject
> and on eoi.  I'll have to check whether anything prevents the unmask
> from the wait_queue function in vfio, that could be a significant chunk
> of the gap.
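
For context, the wakeup path in question looks roughly like this (heavily
simplified sketch; struct and field names are illustrative, not the exact
kvm code):

#include <linux/wait.h>
#include <linux/poll.h>
#include <linux/workqueue.h>

struct _irqfd {
	struct kvm *kvm;
	int source_id;
	u32 gsi;
	bool is_level;
	struct work_struct inject;	/* level injection deferred here */
	wait_queue_t wait;		/* sits on the eventfd's waitqueue */
};

/* Called by eventfd_signal() as it walks the eventfd wait_queue */
static int irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync,
			void *key)
{
	struct _irqfd *irqfd = container_of(wait, struct _irqfd, wait);

	if ((unsigned long)key & POLLIN) {
		if (irqfd->is_level)
			/* level: bounce through a workqueue */
			schedule_work(&irqfd->inject);
		else
			/* MSI: inject directly from the callback */
			kvm_set_irq(irqfd->kvm, irqfd->source_id,
				    irqfd->gsi, 1);
	}

	return 0;
}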

Yep, the schedule_work in the eoi is the culprit.  A direct unmask from
the wait queue function gives me better results than kvm for INTx.
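
In case it's useful, the change amounts to something like this in the eoifd
wakeup (again a simplified sketch; vfio_pci_intx_unmask() here just stands
in for whatever the real unmask path ends up being):

struct virqfd {
	struct vfio_pci_device *vdev;
	wait_queue_t wait;		/* sits on the eoifd's waitqueue */
	struct work_struct work;	/* old deferred unmask */
};

static int virqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync,
			 void *key)
{
	struct virqfd *virqfd = container_of(wait, struct virqfd, wait);

	if ((unsigned long)key & POLLIN) {
		/*
		 * was: schedule_work(&virqfd->work); -- deferring the
		 * unmask to a workqueue is where the gap came from
		 */
		vfio_pci_intx_unmask(virqfd->vdev);	/* direct unmask */
	}

	return 0;
}
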
We'll have to see how the leapfrogging goes once KVM switches to
injection from the hard handler.  I'm still curious what this custom
poll function would give us though.  Thanks,

Alex

