Message-ID: <500426B6.2010200@redhat.com>
Date:	Mon, 16 Jul 2012 17:35:34 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Alex Williamson <alex.williamson@...hat.com>
CC:	Jan Kiszka <jan.kiszka@...mens.com>,
	"mst@...hat.com" <mst@...hat.com>,
	"gleb@...hat.com" <gleb@...hat.com>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 0/2] kvm: level irqfd and new eoifd

On 07/16/2012 05:03 PM, Alex Williamson wrote:
>> 
>> This is what I meant, except I forgot that we already do a direct path
>> for MSI.
> 
> Ok, vfio now does it for the unmask irqfd-line interface too.  Except
> that when we re-inject from the eoifd, we have to do the eventfd_signal
> from a work queue, as we can't have nested eventfd_signals.  We probably
> need to do some benchmarks to see if that re-injection path saves us
> anything vs. letting hardware fire again.
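
(For reference, a minimal sketch of the work-queue deferral described
above, using hypothetical names rather than the actual vfio code: the
eoifd wakeup path queues work instead of calling eventfd_signal()
inline, and the work function does the signal in process context.)

/*
 * Sketch only: defer the unmask eventfd_signal() to a work queue so it
 * is not issued from inside another eventfd's wakeup path.
 */
#include <linux/eventfd.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct unmask_work {
	struct work_struct work;
	struct eventfd_ctx *unmask_ctx;	/* eventfd to signal on EOI */
};

static void unmask_work_fn(struct work_struct *work)
{
	struct unmask_work *uw = container_of(work, struct unmask_work, work);

	/* Process context now, outside the eoifd wakeup. */
	eventfd_signal(uw->unmask_ctx, 1);
}

static void unmask_init(struct unmask_work *uw, struct eventfd_ctx *ctx)
{
	uw->unmask_ctx = ctx;
	INIT_WORK(&uw->work, unmask_work_fn);
}

/* Called from the eoifd wakeup callback instead of eventfd_signal(). */
static void unmask_queue(struct unmask_work *uw)
{
	schedule_work(&uw->work);
}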

If you do that, you might as well proxy it in userspace.  Yes, the big
qemu lock will get in the way, but we shouldn't code the kernel with the
expectation that userspace will be forever broken.

> 
>> > For an MSI that seems to already provide direct
>> > injection.  
>> 
>> Ugh, even for a broadcast MSI into 254-vcpu guests.  That's going to be
>> one slow interrupt.
>> 
>> > For level we'll schedule_work, so that explains the overhead
>> > in that path, but it's not too dissimilar to a threaded irq.  vfio
>> > does something very similar, so there's a schedule_work both on inject
>> > and on eoi.  I'll have to check whether anything prevents the unmask
>> > from the wait_queue function in vfio; that could be a significant chunk
>> > of the gap.  Where does the custom poll function come into play?  Thanks,
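
(To make the question above concrete, a rough sketch, with invented
names, of how an irqfd-style consumer typically hooks the eventfd: a
custom poll_table queue function registers a wait-queue entry on the
eventfd, so the producer's eventfd_signal() lands directly in the wakeup
callback; edge/MSI can be injected right there, while the level path
bounces through a work item.)

/*
 * Sketch only: irqfd-style eventfd hookup via a custom poll function.
 */
#include <linux/file.h>
#include <linux/kernel.h>
#include <linux/poll.h>
#include <linux/wait.h>
#include <linux/workqueue.h>

struct my_irqfd {
	wait_queue_t wait;
	poll_table pt;
	struct work_struct inject;	/* level path: deferred injection */
};

static void my_irqfd_inject_fn(struct work_struct *work)
{
	/* Level injection would happen here, in process context. */
}

static int my_irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync,
			   void *key)
{
	struct my_irqfd *irqfd = container_of(wait, struct my_irqfd, wait);

	if ((unsigned long)key & POLLIN)
		/*
		 * Edge/MSI could be injected directly from this callback;
		 * the level path schedules work instead.
		 */
		schedule_work(&irqfd->inject);
	return 0;
}

static void my_irqfd_ptable_queue_proc(struct file *file,
				       wait_queue_head_t *wqh, poll_table *pt)
{
	struct my_irqfd *irqfd = container_of(pt, struct my_irqfd, pt);

	add_wait_queue(wqh, &irqfd->wait);
}

static void my_irqfd_attach(struct my_irqfd *irqfd, struct file *eventfd_file)
{
	INIT_WORK(&irqfd->inject, my_irqfd_inject_fn);
	init_waitqueue_func_entry(&irqfd->wait, my_irqfd_wakeup);
	init_poll_funcptr(&irqfd->pt, my_irqfd_ptable_queue_proc);
	eventfd_file->f_op->poll(eventfd_file, &irqfd->pt);
}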
>> 
>> So I don't understand where the gap comes from.  The number of context
>> switches for kvm and vfio is the same, as long as both use MSI (and
>> either both use threaded irq or both don't).
> 
> Right, we're not exactly apples to apples yet.  Using threaded
> interrupts and work queue injection, vfio is a little slower.  There's
> an extra work queue in that path vs kvm though.  Using non-threaded
> interrupts and direct injection, vfio is faster.  Once kvm moves to
> non-threaded interrupt handling, I expect we'll be pretty similar.  My
> benchmarks are just rough estimates at this point, as I'm trying both to
> work out lockdep and to get a ballpark performance comparison.  Thanks,
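
(Again just to illustrate the two hookups being compared, with invented
names and assuming the MSI case: a non-threaded handler can signal the
trigger eventfd straight from hard-irq context, while the threaded
variant takes an extra hop through the irq thread.)

/*
 * Sketch only: threaded vs. non-threaded hookup of a device interrupt
 * to a trigger eventfd (MSI case, so no mask/unmask handling shown).
 */
#include <linux/eventfd.h>
#include <linux/interrupt.h>
#include <linux/types.h>

static irqreturn_t my_hardirq_handler(int irq, void *dev_id)
{
	/* Non-threaded path: eventfd_signal() is safe in hard-irq context. */
	eventfd_signal((struct eventfd_ctx *)dev_id, 1);
	return IRQ_HANDLED;
}

static irqreturn_t my_irq_thread(int irq, void *dev_id)
{
	/* Threaded path: same signal, but after a switch to the irq thread. */
	eventfd_signal((struct eventfd_ctx *)dev_id, 1);
	return IRQ_HANDLED;
}

static int my_request_irq(int irq, struct eventfd_ctx *trigger, bool threaded)
{
	if (threaded)
		return request_threaded_irq(irq, NULL, my_irq_thread,
					    IRQF_ONESHOT, "my-dev", trigger);
	return request_irq(irq, my_hardirq_handler, 0, "my-dev", trigger);
}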

Okay.  I'm not really interested in how the code compares today, but
whether there is something in vfio that prevents it from achieving kvm
performance once it's completely optimized.  Given the above, I don't
think there is.


-- 
error compiling committee.c: too many arguments to function


