Message-ID: <20220419124245-mutt-send-email-mst@kernel.org>
Date: Tue, 19 Apr 2022 12:43:03 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: Alexander Graf <graf@...zon.com>,
LKML <linux-kernel@...r.kernel.org>,
KVM list <kvm@...r.kernel.org>,
QEMU Developers <qemu-devel@...gnu.org>,
linux-hyperv@...r.kernel.org,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
"Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
adrian@...ity.io, Laszlo Ersek <lersek@...hat.com>,
Daniel P. Berrangé <berrange@...hat.com>,
Dominik Brodowski <linux@...inikbrodowski.net>,
Jann Horn <jannh@...gle.com>,
"Rafael J. Wysocki" <rafael@...nel.org>,
"Brown, Len" <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
Linux PM <linux-pm@...r.kernel.org>,
Colm MacCarthaigh <colmmacc@...zon.com>,
Theodore Ts'o <tytso@....edu>, Arnd Bergmann <arnd@...db.de>
Subject: Re: propagating vmgenid outward and upward
On Tue, Apr 19, 2022 at 05:12:36PM +0200, Jason A. Donenfeld wrote:
> Hey Alex,
>
> On Thu, Mar 10, 2022 at 12:18 PM Alexander Graf <graf@...zon.com> wrote:
> > I agree on the slightly racy compromise and that it's a step in the
> > right direction. Doing this is a no-brainer IMHO and I like the
> > proc-based poll approach.
>
> Alright. I'm going to email a more serious patch for that in the next
> few hours and you can have a look. Let's do that for 5.19.
>
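[A minimal sketch of what the userspace side of such a poll-based
notification might look like. The file path and poll flags below are
invented for illustration; the actual interface is whatever the patch
Jason mentions ends up exposing.]

/* Hedged sketch: consumer of a hypothetical poll-based clone
 * notification file. Path and revents are assumptions, not the
 * real interface. */
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main(void)
{
	char buf[16];
	/* Hypothetical path, for illustration only. */
	int fd = open("/proc/sys/kernel/random/vmfork", O_RDONLY);

	if (fd < 0)
		return 1;
	/* Consume the current state so poll() blocks until a change. */
	(void)read(fd, buf, sizeof(buf));
	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLPRI };

		if (poll(&pfd, 1, -1) < 0)
			break;
		/* VM was cloned: rekey session state, reset nonces, etc. */
		lseek(fd, 0, SEEK_SET);
		(void)read(fd, buf, sizeof(buf));
	}
	return 0;
}

[The lseek()-and-re-read after poll() returns follows the usual
procfs/sysfs notification idiom, where the file has to be re-read to
rearm the notification.]
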
> > I have an additional problem you might have an idea for with the poll
> > based path. In addition to the clone notification, I'd need to know at
> > which point everyone who was listening to a clone notification is
> > finished acting on it. If I spawn a tiny VM to do "work", I want to know
> > when it's safe to hand requests into it. How do I find out when that
> > point in time is?
>
> Seems tricky to solve. Even a count of current waiters and a
> generation number won't be sufficient, since it wouldn't take into
> account users who haven't _yet_ gotten to waiting. But maybe it's not
> the right problem to solve? Or somehow not necessary? For example, if
> the problem is a bit more constrained, a solution becomes easier: you
> have a fixed, known set of readers, and you
> guarantee that they're all waiting before the fork. Then after the
> fork, they all do something to alert you in their poll()er, and you
> count up how many alerts you get until it matches the number of
> expected waiters. Would that work? It seems like anything more general
> than that is just butting heads with the racy compromise we're already
> making.
>
> Jason
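
[For concreteness, the constrained counting scheme Jason sketches
could be as simple as a shared pipe. Everything below -- the names,
the worker count, the pipe transport -- is illustrative, not anything
proposed in this thread.]

/* Hedged sketch: NWORKERS is the fixed, known set of readers; each
 * one writes a byte to a shared pipe once it has finished reacting
 * to the clone notification, and the coordinator blocks until all
 * of them have checked in. */
#include <unistd.h>

#define NWORKERS 4

/* Each worker calls this at the end of its poll() loop iteration,
 * after it has finished its post-fork work. */
static void worker_signal_done(int pipe_wfd)
{
	char c = 1;

	(void)write(pipe_wfd, &c, 1);
}

/* The coordinator calls this after the clone event; when it
 * returns, it is safe to hand requests to the new VM. */
static void wait_for_all_workers(int pipe_rfd)
{
	char c;
	int done = 0;

	while (done < NWORKERS)
		if (read(pipe_rfd, &c, 1) == 1)
			done++;
}

[Note this only works under the stated constraints: the set of
readers is fixed and all of them are guaranteed to be waiting before
the fork. A reader that joins late is exactly the race the earlier
compromise already accepts.]
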
I have some ideas here ... but can you explain the use-case a bit more?
--
MST