Message-ID: <1312889194.26263.107.camel@zakaz.uk.xensource.com>
Date: Tue, 9 Aug 2011 12:26:34 +0100
From: Ian Campbell <Ian.Campbell@...citrix.com>
To: Olaf Hering <olaf@...fle.de>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Jeremy Fitzhardinge" <jeremy@...p.org>,
Konrad <konrad.wilk@...cle.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/pv-on-hvm kexec+kdump: reset PV
devices in kexec or crash kernel
On Tue, 2011-08-09 at 10:44 +0100, Olaf Hering wrote:
> On Tue, Aug 09, Ian Campbell wrote:
>
> > >  static int frontend_probe_and_watch(struct notifier_block *notifier,
> > >                                      unsigned long event,
> > >                                      void *data)
> > >  {
> > > +       /* reset devices in Connected or Closed state */
> > > +       if (xen_hvm_domain())
> > && reset_devices ??
>
> No, reset_devices is passed as a kernel cmdline option to a kdump boot,
> but it's not part of a kexec boot.
>
> > How long should we wait for the backend to respond? Should we add a
> > timeout and countdown similar to wait_for_devices?
>
> Adding a timeout to catch a confused backend is a good idea. That would
> give one at least a chance to poke around in a rescue shell.
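
Something like the below is roughly what I had in mind (completely
untested sketch; the helper name, the use of msleep() and the 10 second
limit are placeholders, not a proposal for the real values):

#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/printk.h>
#include <xen/xenbus.h>

/*
 * Poll the backend state with a bounded wait instead of blocking
 * forever, loosely modelled on wait_for_devices().
 */
static void xenbus_reset_wait_for_state(const char *path,
					enum xenbus_state expected)
{
	unsigned long deadline = jiffies + 10 * HZ;	/* arbitrary */

	while (xenbus_read_driver_state(path) != expected) {
		if (time_after(jiffies, deadline)) {
			pr_warn("xenbus: %s did not reach state %d\n",
				path, expected);
			return;
		}
		msleep(100);
	}
}
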
>
> > It's unfortunate that this code is effectively serialising on each
> > device. It would be much preferable to kick off all the resets and then
> > wait for them to occur. You could probably do this by incrementing a
> > counter for each device you reset, decrementing it each time a watch
> > triggers, and then waiting for the counter to hit zero.
>
> That feature needs more thought. Since xenbus_reset_state() is only
> executed in the kexec/kdump case, the average use case is not slowed
> down.
I was thinking more of avoiding slowing down the kexec/kdump case
unnecessarily.
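
Roughly along these lines (untested sketch only; all the names, the
single 10 second timeout and the lack of error handling are just for
illustration):

#include <linux/atomic.h>
#include <linux/jiffies.h>
#include <linux/wait.h>

static atomic_t xenbus_reset_outstanding = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(xenbus_reset_waitq);

/* Called once per device as its reset is kicked off. */
static void xenbus_reset_kick(void)
{
	atomic_inc(&xenbus_reset_outstanding);
	/* write the new frontend state / register the watch here */
}

/* Called from the watch callback once the backend has reacted. */
static void xenbus_reset_done(void)
{
	if (atomic_dec_and_test(&xenbus_reset_outstanding))
		wake_up(&xenbus_reset_waitq);
}

/* After every device has been kicked, wait once for all of them. */
static void xenbus_reset_wait_all(void)
{
	wait_event_timeout(xenbus_reset_waitq,
			   atomic_read(&xenbus_reset_outstanding) == 0,
			   10 * HZ);	/* arbitrary */
}

That way the per-device waits overlap instead of adding up.
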
Ian.
>
> Olaf