Message-ID: <4CFD1208.4070600@redhat.com>
Date: Mon, 06 Dec 2010 18:40:40 +0200
From: Avi Kivity <avi@...hat.com>
To: Jan Kiszka <jan.kiszka@...mens.com>
CC: Jan Kiszka <jan.kiszka@....de>,
Thomas Gleixner <tglx@...utronix.de>,
Marcelo Tosatti <mtosatti@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kvm <kvm@...r.kernel.org>, Tom Lyon <pugs@...co.com>,
Alex Williamson <alex.williamson@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH 5/5] KVM: Allow host IRQ sharing for passed-through PCI 2.3 devices
On 12/06/2010 06:34 PM, Jan Kiszka wrote:
> >
> > What's the protocol for doing this? I suppose userspace has to disable
> > interrupts, ioctl(SET_INTX_MASK, masked), ..., ioctl(SET_INTX_MASK,
> > unmasked), enable interrupts?
>
> Userspace just has to synchronize against itself - which it already
> does via qemu_mutex - and masking/unmasking is synchronous w.r.t. the
> executing VCPU. Otherwise, masking/unmasking is naturally racy, also
> in Real Life. The guest resolves the remaining races.
I meant the case where qemu sets INTX_MASK and the kernel clears it
again immediately afterwards, because the two are not synchronized. I
guess that won't happen in practice, since playing with INTX_MASK is
very rare.
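
For concreteness, the protocol I asked about would look roughly like
this from userspace (a sketch only - the SET_INTX_MASK ioctl name, the
MASK_INTX flag and the kvm_assigned_pci_dev usage are assumptions
modelled on this series, not a finished ABI):

    /*
     * Sketch only: KVM_ASSIGN_SET_INTX_MASK, KVM_DEV_ASSIGN_MASK_INTX
     * and the kvm_assigned_pci_dev usage are assumptions modelled on
     * this patch series, not a finished ABI.
     */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <pthread.h>

    extern pthread_mutex_t qemu_mutex;   /* qemu's global lock */

    static void masked_config_access(int vm_fd,
                                     struct kvm_assigned_pci_dev *dev)
    {
            /* userspace only synchronizes against itself */
            pthread_mutex_lock(&qemu_mutex);

            dev->flags |= KVM_DEV_ASSIGN_MASK_INTX;
            ioctl(vm_fd, KVM_ASSIGN_SET_INTX_MASK, dev);   /* mask */

            /* ... touch the device's config space ... */

            dev->flags &= ~KVM_DEV_ASSIGN_MASK_INTX;
            ioctl(vm_fd, KVM_ASSIGN_SET_INTX_MASK, dev);   /* unmask */

            pthread_mutex_unlock(&qemu_mutex);
    }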
> >
> > Isn't there a race window between the two operations?
> >
> > Maybe we should give the kernel full ownership of that bit.
>
> I think this is what VFIO does and is surely cleaner than this approach.
> But it's not possible with the existing interface (sysfs + KVM ioctls) -
> or can you restrict sysfs access to the config space in such detail?
I'm sure you can, but I'm not sure it's worth it. Can the situation
be exploited? What if userspace lies?
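
By "full ownership" I mean something like the following on the kernel
side (a sketch: only PCI_COMMAND and PCI_COMMAND_INTX_DISABLE come from
linux/pci_regs.h, the filter hook itself is made up for illustration):

    /*
     * Sketch of kernel-side ownership of the INTx disable bit; only
     * PCI_COMMAND and PCI_COMMAND_INTX_DISABLE are real definitions
     * (linux/pci_regs.h), the hook itself is hypothetical.
     */
    #include <linux/types.h>
    #include <linux/pci_regs.h>

    /* applied to every userspace write to the command register */
    static u16 filter_command_write(u16 cur, u16 new)
    {
            /* take the new value, but keep the kernel-owned INTx bit */
            return (new & ~PCI_COMMAND_INTX_DISABLE) |
                   (cur &  PCI_COMMAND_INTX_DISABLE);
    }

Userspace would then see the bit read back unchanged no matter what it
writes, so it cannot fight the kernel over it.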
--
error compiling committee.c: too many arguments to function