Message-ID: <20120130160155.GC6875@mgebm.net>
Date: Mon, 30 Jan 2012 11:01:55 -0500
From: Eric B Munson <emunson@...bm.net>
To: Avi Kivity <avi@...hat.com>
Cc: mingo@...hat.com, hpa@...or.com, ryanh@...ux.vnet.ibm.com,
aliguori@...ibm.com, mtosatti@...hat.com,
jeremy.fitzhardinge@...rix.com, kvm@...r.kernel.org,
linux-arch@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org, Jan Kiszka <jan.kiszka@....de>
Subject: Re: [PATCH 3/4 V10] Add ioctl for KVMCLOCK_GUEST_STOPPED
On Mon, 30 Jan 2012, Avi Kivity wrote:
> On 01/30/2012 05:32 PM, Eric B Munson wrote:
> > >
> > > Can you point me to the discussion that moved this to be a vm ioctl? In
> > > general vm ioctls that do things for all vcpus are racy, like here.
> > > You're accessing variables that are protected by the vcpu mutex, and not
> > > taking the mutex (nor can you, since it is held while the guest is
> > > running, unlike most kernel mutexes).
> > >
> >
> > Jan Kiszka suggested that because there isn't a use case for notifying
> > individual vcpus (can vcpus be paused individually?
>
> They can, though the guest will grind to a halt very soon.
>
> > ) that it makes more sense
> > to have a vm ioctl.
> >
> > http://thread.gmane.org/gmane.comp.emulators.qemu/131624
> >
> > If the per vcpu ioctl is the right choice I can resend those patches.
>
> The races are solvable but I think it's easier in userspace. It's also
> more flexible, though I don't really see a use for this flexibility.
>
Okay, I will rebase the per vcpu patches and resend those.
Eric