Message-Id: <200707171346.52542.rjw@sisk.pl>
Date:	Tue, 17 Jul 2007 13:46:51 +0200
From:	"Rafael J. Wysocki" <rjw@...k.pl>
To:	david@...g.hm
Cc:	"Huang, Ying" <ying.huang@...el.com>,
	Jeremy Maitin-Shepard <jbms@....edu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Pavel Machek <pavel@....cz>, nigel@...el.suspend2.net,
	linux-kernel@...r.kernel.org, linux-pm@...ts.linux-foundation.org
Subject: Re: [PATCH 0/2] Kexec jump: The first step to kexec base hibernation

On Tuesday, 17 July 2007 06:18, david@...g.hm wrote:
> On Mon, 16 Jul 2007, Rafael J. Wysocki wrote:
> 
> > On Monday, 16 July 2007 16:42, Huang, Ying wrote:
> >> On Mon, 2007-07-16 at 14:17 +0200, Rafael J. Wysocki wrote:
> >>>> is this a matter of running some test to find out, or is this a question
> >>>> for the kexec implementors?
> >>>
> >>> Actually, I'd like someone to tell me. ;-)
> >>>
> >>> I've browsed the kexec code, but haven't found anything related to the devices
> >>> in it.  Perhaps I didn't know where to look ...
> >>
> >> There are two stages for kexec. For "normal" kexec, first
> >> sys_kexec_load is called to load the kernel image, then
> >> sys_reboot(LINUX_REBOOT_CMD_KEXEC) is called to boot the new kernel.
> >
> > OK, thanks.  This is the information that I was missing.
> >
> >> The call chain is as follows:
> >>
> >> sys_reboot(LINUX_REBOOT_CMD_KEXEC)
> >>     kernel_kexec
> >>         kernel_restart_prepare
> >>             device_shutdown
> >>         machine_shutdown
> >>         machine_kexec
> >>
> >> In device_shutdown, the dev->bus->shutdown or dev->driver->shutdown of
> >> every device is called to put each device in a quiescent state. In
> >> machine_kexec, the new kernel is booted.
> >
> > Yes.
> >
> >> So, for normal kexec, there is no code path for device state saving and
> >> restoring.
> >
> > Exactly.
> >
> >> Can the state of a device be restored after shutdown? I don't think so.
> >
> > No, it can't, but we need something like this for hibernation and
> > device_shutdown() is not appropriate for this purpose IMO.
> 
> is the only reason that device_shutdown() is not appropriate the amount of 
> time it takes to shut down some devices and then start them up again? (I'm 
> specifically thinking of drive spin down/up as an example)

Not only that.  You also need to save some driver data so that it can restore
the devices' state from before the hibernation.  [Say you have a task blocked
on the driver's mutex in .read().  In that case you'd want the .read() to be
carried out after the restore in the same way in which it would have been
carried out if the hibernation hadn't occurred.]

> if so, it is probably worth implementing a demo with the long times 
> involved to hash out any other problems, and then implement shortcuts to 
> avoid device_shutdown() only where the time involved is excessive.

I think that device_shutdown() is just inappropriate, because in principle
it doesn't allow you to save any information related to the device state
before hibernation that may be needed after the restore.

> so, exactly where in the process above does the memory map need to be 
> created? is this in the machine_shutdown step or would it need to be in 
> the machine_kexec step?

I would do it in the machine_kexec step.

Greetings,
Rafael


-- 
"Premature optimization is the root of all evil." - Donald Knuth
