Message-ID: <20110809094443.GD7283@aepfle.de>
Date: Tue, 9 Aug 2011 11:44:43 +0200
From: Olaf Hering <olaf@...fle.de>
To: Ian Campbell <Ian.Campbell@...rix.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Konrad <konrad.wilk@...cle.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/pv-on-hvm kexec+kdump: reset PV
devices in kexec or crash kernel
On Tue, Aug 09, Ian Campbell wrote:
> > static int frontend_probe_and_watch(struct notifier_block *notifier,
> >                                     unsigned long event,
> >                                     void *data)
> > {
> > +	/* reset devices in Connected or Closed state */
> > +	if (xen_hvm_domain())
> && reset_devices ??
No, reset_devices is passed as a kernel command line option to a kdump
boot, but it's not part of a kexec boot.
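
(Illustration, not from the patch: the gate Ian suggests would look
like the snippet below. reset_devices is the existing global from
include/linux/reboot.h, set only when "reset_devices" appears on the
command line, which kdump adds but a plain kexec boot does not, so the
reset would silently be skipped for kexec.)

    #include <linux/reboot.h>	/* reset_devices */
    #include <xen/xen.h>	/* xen_hvm_domain() */

    	/* Hypothetical gate: fine for kdump, which passes
    	 * reset_devices on the cmdline, but a plain kexec boot
    	 * does not, so the PV devices would never be reset. */
    	if (xen_hvm_domain() && reset_devices)
    		xenbus_reset_state();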
> How long should we wait for the backend to respond? Should we add a
> timeout and countdown similar to wait_for_devices?
Adding a timeout to catch a confused backend is a good idea. That would
at least give one a chance to poke around in a rescue shell.
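
(A rough sketch of what such a wait could look like, polling the
backend state via xenbus_read_driver_state(); the function name and
the 10 second limit are made up for illustration.)

    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/jiffies.h>
    #include <xen/xenbus.h>

    /* Poll the backend's state node until it reports Closed,
     * for at most 10 seconds (arbitrary limit). */
    static int wait_for_backend_closed(struct xenbus_device *dev)
    {
    	unsigned long deadline = jiffies + 10 * HZ;

    	while (time_before(jiffies, deadline)) {
    		if (xenbus_read_driver_state(dev->otherend) ==
    		    XenbusStateClosed)
    			return 0;
    		msleep(100);
    	}
    	return -ETIMEDOUT;	/* confused backend: give up */
    }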
> It's unfortunate that this code is effectively serialising on each
> device. It would be much preferable to kick off all the resets and then
> wait for them to occur. You could probably do this by incrementing a
> counter for each device you reset and decrementing it each time a watch
> triggers then wait for the counter to hit zero.
That optimization needs more thought. Since xenbus_reset_state() runs
only in the kexec/kdump case, the average use case is not slowed down.
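
(For the record, a rough sketch of the counter scheme Ian describes,
with made-up names: each kicked-off reset bumps a counter, the watch
callback for a device drops it once the backend reaches the expected
state, and the caller waits for it to hit zero.)

    #include <linux/atomic.h>
    #include <linux/errno.h>
    #include <linux/wait.h>

    static atomic_t reset_pending = ATOMIC_INIT(0);
    static DECLARE_WAIT_QUEUE_HEAD(reset_wq);

    /* Kick off the reset of one device without waiting for it. */
    static void xenbus_reset_kick(struct xenbus_device *dev)
    {
    	atomic_inc(&reset_pending);
    	/* ... write the new state to the frontend node and
    	 * register a watch on the backend's state node ... */
    }

    /* Called from the watch callback once the backend of one
     * device has reached the expected state. */
    static void xenbus_reset_done(void)
    {
    	if (atomic_dec_and_test(&reset_pending))
    		wake_up(&reset_wq);
    }

    /* Wait for all kicked-off resets; 10s timeout is arbitrary. */
    static int xenbus_reset_wait_all(void)
    {
    	if (!wait_event_timeout(reset_wq,
    				atomic_read(&reset_pending) == 0,
    				10 * HZ))
    		return -ETIMEDOUT;
    	return 0;
    }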
Olaf