Message-ID: <50276B11.8020708@redhat.com>
Date:	Sun, 12 Aug 2012 11:36:33 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Alex Williamson <alex.williamson@...hat.com>
CC:	mst@...hat.com, gleb@...hat.com, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, jan.kiszka@...mens.com
Subject: Re: [PATCH v7 2/2] kvm: KVM_EOIFD, an eventfd for EOIs

On 08/09/2012 10:26 PM, Alex Williamson wrote:
> On Mon, 2012-08-06 at 13:40 +0300, Avi Kivity wrote:
>> On 08/06/2012 01:38 PM, Avi Kivity wrote:
>> 
>> > Regarding the implementation, instead of a linked list, would an array
>> > of counters parallel to the bitmap make it simpler?
>> 
>> Or even, replace the bitmap with an array of counters.
> 
> I'm not sure a counter array is what we're really after.  That gives us
> reference counting for the irq source IDs, but not the key->gsi lookup.

You can look up the gsi while registering the eoifd, so it's accessible
as eoifd->gsi instead of eoifd->source->gsi.  The irqfd can go away
while the eoifd is still active, but is this a problem?
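To illustrate, a minimal sketch of that registration-time copy (the
struct layouts and names below are hypothetical, not the real KVM
ones):

struct irqfd {
	int gsi;
	int source_id;
};

struct eoifd {
	int gsi;	/* cached at registration time */
	int source_id;
};

/* Resolve the gsi once when the eoifd is registered; after this the
 * eoifd never chases eoifd->source->gsi through an irqfd that may
 * already be gone. */
static int eoifd_register(struct eoifd *e, const struct irqfd *src)
{
	if (!src)
		return -1;	/* the irqfd must exist at registration */
	e->gsi = src->gsi;	/* copy, don't keep a reference */
	e->source_id = src->source_id;
	return 0;		/* src may be destroyed afterwards */
}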


> It also highlights another issue, that we have a limited set of source
> IDs.  Looks like we have BITS_PER_LONG IDs, with two already used, one
> for the shared userspace ID and another for the PIT.  How happy are we
> going to be with a limit of 62 level interrupts in use at one time?

When we start being unhappy we can increase that number.  On the other
hand, more locks and lists make me unhappy now.
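To make the counter idea concrete, a standalone sketch (only the
arithmetic is from this thread: BITS_PER_LONG source IDs, two of them
reserved, leaving 62; everything else, including the reserved-ID
handling, is invented):

#include <limits.h>

/* One refcount per irq source id instead of a used-IDs bitmap. */
#define NR_IRQ_SOURCE_IDS (sizeof(unsigned long) * CHAR_BIT)

static unsigned int source_refcount[NR_IRQ_SOURCE_IDS];

/* Slots 0 and 1 stand in for the reserved userspace and PIT IDs. */
static int alloc_irq_source_id(void)
{
	size_t i;

	for (i = 2; i < NR_IRQ_SOURCE_IDS; i++) {
		if (source_refcount[i] == 0) {
			source_refcount[i] = 1;
			return (int)i;
		}
	}
	return -1;	/* all 62 usable IDs are taken */
}

static void put_irq_source_id(int id)
{
	if (id >= 2 && (size_t)id < NR_IRQ_SOURCE_IDS &&
	    source_refcount[id] > 0)
		source_refcount[id]--;
}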

> 
> It's arguably a reasonable number, since the most virtualization-friendly
> devices (SR-IOV VFs) don't even support this kind of interrupt.  It's
> also very wasteful to allocate an entire source ID for a single GSI
> within that source ID.  PCI supports interrupts A, B, C, and D, which,
> in the most optimal config, each go to different GSIs.  So we could
> theoretically be more efficient in our use and allocation of irq source
> IDs if we tracked use by (source ID, GSI) pair.

In at least one userspace, there are just three GSIs available for PCI
links, so you're only compressing the source id space by a factor of three.

> That probably makes it less practical to replace anything at the top
> level with a counter array.  The key that we pass back is currently the
> actual source ID, but we don't specify what it is, so we could split it
> and have it encode a 16-bit source ID plus a 16-bit GSI.  It could also be
> an idr entry.

We can fix those kinds of problems by adding another layer of
indirection.  But I doubt they will be needed.  I don't see people
assigning 60 legacy devices to one guest.
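For reference, the split key suggested above is trivial to encode and
decode; this packing is purely illustrative, since the interface
deliberately leaves the key opaque:

#include <stdint.h>

/* 16-bit source ID in the high half, 16-bit GSI in the low half. */
static inline uint32_t make_key(uint16_t source_id, uint16_t gsi)
{
	return ((uint32_t)source_id << 16) | gsi;
}

static inline uint16_t key_source_id(uint32_t key)
{
	return key >> 16;
}

static inline uint16_t key_gsi(uint32_t key)
{
	return key & 0xffff;
}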

> Michael, would the interface be more acceptable to you if we added
> separate ioctls to allocate and free some representation of an irq
> source ID, gsi pair?  For instance, an ioctl might return an idr entry
> for an irq source ID/gsi object which would then be passed as a
> parameter in struct kvm_irqfd and struct kvm_eoifd so that the object
> representing the source id/gsi isn't magically freed on its own.  This
> would also allow us to deassign/close one end and reconfigure it later.
> Thanks,

Another option is to push the responsibility for allocating IDs for the
association to userspace.  Let userspace create both the irqfd and the
eoifd with the same ID; the kernel matches them at registration time and
copies the gsi/source id from the first to the second eventfd.
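Roughly, that matching could look like the toy model below (userspace
C, not kernel code; the struct and field names are invented and the
fixed-size table is just for brevity):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace picks an opaque id; whichever of the irqfd/eoifd pair
 * registers first records the gsi/source id, the second inherits it. */
struct pairing {
	uint32_t id;	/* allocated by userspace */
	int gsi;
	int source_id;
	bool in_use;
};

static struct pairing pairings[64];

static struct pairing *pair_register(uint32_t id, int gsi, int source_id)
{
	size_t i;

	for (i = 0; i < 64; i++)	/* second registrant: match by id */
		if (pairings[i].in_use && pairings[i].id == id)
			return &pairings[i];

	for (i = 0; i < 64; i++) {	/* first registrant: record */
		if (!pairings[i].in_use) {
			pairings[i] = (struct pairing){
				.id = id,
				.gsi = gsi,
				.source_id = source_id,
				.in_use = true,
			};
			return &pairings[i];
		}
	}
	return NULL;			/* table full */
}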

-- 
error compiling committee.c: too many arguments to function