Message-ID: <YnIrU/oTP1h2aawQ@google.com>
Date: Wed, 4 May 2022 07:29:23 +0000
From: Sebastian Ene <sebastianene@...gle.com>
To: Rob Herring <robh@...nel.org>
Cc: linux-kernel@...r.kernel.org,
Derek Kiernan <derek.kiernan@...inx.com>,
Dragan Cvetic <dragan.cvetic@...inx.com>,
Arnd Bergmann <arnd@...db.de>, devicetree@...r.kernel.org,
qperret@...gle.com, will@...nel.org, maz@...nel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH v4 2/2] misc: Add a mechanism to detect stalls on guest
vCPUs
On Fri, Apr 29, 2022 at 04:03:45PM -0500, Rob Herring wrote:
> On Fri, Apr 29, 2022 at 11:38:52AM +0200, Greg Kroah-Hartman wrote:
> > On Fri, Apr 29, 2022 at 09:26:26AM +0000, Sebastian Ene wrote:
> > > On Fri, Apr 29, 2022 at 10:51:14AM +0200, Greg Kroah-Hartman wrote:
> > > > On Fri, Apr 29, 2022 at 08:30:33AM +0000, Sebastian Ene wrote:
> > > > > This driver creates per-cpu hrtimers which are required to do the
> > > > > periodic 'pet' operation. With a conventional watchdog-core driver,
> > > > > userspace is responsible for delivering the 'pet' events by writing to
> > > > > the particular /dev/watchdogN node. In this case we require a strong
> > > > > thread affinity to be able to account for lost time on a per-vCPU basis.
> > > > >
> > > > > This part of the driver is the 'frontend', which is responsible for
> > > > > delivering the periodic 'pet' events, configuring the virtual peripheral
> > > > > and listening for CPU hotplug events. The other part of the driver
> > > > > handles the peripheral emulation; it accounts for lost time by
> > > > > looking at the /proc/{}/task/{}/stat entries and is located here:
> > > > > https://chromium-review.googlesource.com/c/chromiumos/platform/crosvm/+/3548817
> > > > >
> > > > > Signed-off-by: Sebastian Ene <sebastianene@...gle.com>
> > > > > ---
> > > > > drivers/misc/Kconfig | 12 +++
> > > > > drivers/misc/Makefile | 1 +
> > > > > drivers/misc/vm-watchdog.c | 206 +++++++++++++++++++++++++++++++++++++
> > > > > 3 files changed, 219 insertions(+)
> > > > > create mode 100644 drivers/misc/vm-watchdog.c
> > > > >
> > > > > diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
> > > > > index 2b9572a6d114..26c3a99e269c 100644
> > > > > --- a/drivers/misc/Kconfig
> > > > > +++ b/drivers/misc/Kconfig
> > > > > @@ -493,6 +493,18 @@ config OPEN_DICE
> > > > >
> > > > > If unsure, say N.
> > > > >
> > > > > +config VM_WATCHDOG
> > > > > + tristate "Virtual Machine Watchdog"
> > > > > + select LOCKUP_DETECTOR
> > > > > + help
> > > > > +	  Detect CPU lockups on the virtual machine. This driver relies on
> > > > > +	  hrtimers which are pinned to each CPU to do the 'pet' operation.
> > > > > +	  When a vCPU has to do a 'pet', it exits the guest through an MMIO
> > > > > +	  write and the backend driver takes the lost ticks for this
> > > > > +	  particular CPU into account.
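
For readers following the mechanism in the help text above, here is a
minimal sketch of the per-CPU 'pet' path. It is not the actual patch: the
register offset VMWDT_REG_PET_HERE, the 2-second period and the helper
names are made up for illustration; only the pinned-hrtimer-plus-MMIO-write
shape is implied by the description.

/*
 * Minimal sketch of the per-CPU 'pet' path -- not the actual patch.
 * VMWDT_REG_PET_HERE and VMWDT_PET_PERIOD_SEC are hypothetical names.
 */
#include <linux/hrtimer.h>
#include <linux/io.h>
#include <linux/ktime.h>
#include <linux/percpu.h>
#include <linux/smp.h>

#define VMWDT_REG_PET_HERE	0x0	/* hypothetical MMIO register offset */
#define VMWDT_PET_PERIOD_SEC	2	/* hypothetical petting period */

struct vmwdt_cpu {
	struct hrtimer	timer;
	void __iomem	*base;	/* MMIO window of the virtual peripheral */
};

static DEFINE_PER_CPU(struct vmwdt_cpu, vmwdt_cpus);

static enum hrtimer_restart vmwdt_pet(struct hrtimer *t)
{
	struct vmwdt_cpu *cpu_wdt = container_of(t, struct vmwdt_cpu, timer);

	/*
	 * The write traps to the backend (the VMM), which knows which vCPU
	 * exited and can subtract lost/stolen time before deciding whether
	 * this guest CPU is actually stalled.
	 */
	writel(smp_processor_id(), cpu_wdt->base + VMWDT_REG_PET_HERE);

	hrtimer_forward_now(t, ktime_set(VMWDT_PET_PERIOD_SEC, 0));
	return HRTIMER_RESTART;
}

/* Called on each CPU, e.g. from a CPU hotplug online callback. */
static void vmwdt_start_on_this_cpu(void *base)
{
	struct vmwdt_cpu *cpu_wdt = this_cpu_ptr(&vmwdt_cpus);

	cpu_wdt->base = base;
	/* REL_PINNED keeps the timer (and therefore the pet) on this CPU. */
	hrtimer_init(&cpu_wdt->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
	cpu_wdt->timer.function = vmwdt_pet;
	hrtimer_start(&cpu_wdt->timer, ktime_set(VMWDT_PET_PERIOD_SEC, 0),
		      HRTIMER_MODE_REL_PINNED);
}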
> > >
> > > Hi,
> > >
> > > >
> > > > There's nothing to keep this tied to a virtual machine at all, right?
> > > > You are just relying on some iomem address to be updated, so it should
> > > > be a "generic_iomem_watchdog" driver as there's nothing specific to vms
> > > > at all from what I can tell.
> > > >
> > > > thanks,
> > > >
> > > > greg k-h
> > >
> > > That's right, although I might use the term "generic lockup detector"
> > > instead of "watchdog". The only reason I would keep the words "virtual
> > > machine" is that there is no actual hardware for this.
> >
> > That doesn't really matter; it's just a memory location in the device tree
> > that you need, and odds are some hardware device could use it just
> > like this.
Hi,
>
> Such as a shared on-chip memory that both a system control processor and
> the main processors can access. Of course, those also typically already
> have a communication channel.
>
> But for a VM-hypervisor interface, why isn't one of the existing
> communications interfaces being used? One that is discoverable would be
> better than using DT.
>
In a protected VM we don't trust the host to present and control the loaded
peripherals. We rely on another entity to generate a trusted device tree
for us. I hope this clarifies the need for DT; I think this information
should also be added to the changelog.
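
To make the DT angle concrete, the frontend can be probed like any other
platform device from that trusted device tree. The following is only a
sketch under that assumption; the compatible string "example,vm-watchdog"
and the node contents are placeholders, not the binding proposed in this
series.

/*
 * Sketch only: the compatible string below is a placeholder, not the
 * binding proposed in this series.
 */
#include <linux/err.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int vmwdt_probe(struct platform_device *pdev)
{
	void __iomem *base;

	/* Map the MMIO window described by the trusted device tree node. */
	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return PTR_ERR(base);

	/* ... arm the per-CPU hrtimers and register hotplug callbacks ... */
	return 0;
}

static const struct of_device_id vmwdt_of_match[] = {
	{ .compatible = "example,vm-watchdog" },	/* placeholder */
	{ }
};
MODULE_DEVICE_TABLE(of, vmwdt_of_match);

static struct platform_driver vmwdt_driver = {
	.probe = vmwdt_probe,
	.driver = {
		.name = "vm-watchdog",
		.of_match_table = vmwdt_of_match,
	},
};
module_platform_driver(vmwdt_driver);

MODULE_LICENSE("GPL");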
> Rob
Thanks,
Seb