Date:	Fri, 09 Feb 2007 19:21:27 -0800
From:	Kevin Fox <Kevin.Fox@....gov>
To:	Lee Revell <rlrevell@...-job.com>
Cc:	nigel@...el.suspend2.net, Robert Hancock <hancockr@...w.ca>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Jeff Garzik <jeff@...zik.org>
Subject: Re: NAK new drivers without proper power management?

On Fri, 2007-02-09 at 18:22 -0800, Lee Revell wrote:
> On 2/9/07, Nigel Cunningham <nigel@...el.suspend2.net> wrote:
> > On Fri, 2007-02-09 at 20:59 -0500, Lee Revell wrote:
> > > On 2/9/07, Robert Hancock <hancockr@...w.ca> wrote:
> > > > I would disagree that it's a peripheral issue, it's pretty core
> > > > these days, at least for any hardware that you can stuff in a
> > > > laptop (though a fair number of desktops get suspended and
> > > > resumed these days too).
> > >
> > > Servers are still the most important Linux market, and don't care
> > > about suspend/resume.  I would consider implementing suspend/resume
> > > for a driver that will only be used in server or HPC class hardware
> > > a waste of valuable development resources.
> >
> > Not necessarily. Imagine suspending to disk in order to replace a
> > faulty card. That could be way faster and less disruptive than
> > shutting down normally and losing caches and so on.
> >
> 
> Hmm.  If uptime is critical I would make sure to have redundant
> systems anyway and I would just reboot the thing.  I would not expect
> the suspend/resume paths on server class hardware like 10gig Ethernet,
> InfiniBand adapters, or high-end SCSI to be particularly well tested.

Speaking from the HPC standpoint, we are gaining more and more nodes in
our clusters as time goes on, so the potential for single failures
affecting performance is growing. A lot of server class nodes have
redundancy such that a node slows down, rather than dies, on a failure.
Unfortunately, a slowdown in a single node of a tightly coupled job can
greatly affect overall performance. A good example is ECC memory: if a
chip is going bad, the machine can detect and correct the errors, but
it runs slower until the memory is replaced, and that one node can hold
up thousands of other nodes in the same job. Having a mechanism to
migrate the operating system running on a failing node to another node
would be quite beneficial to performance. If all drivers properly
supported suspend/resume, that support could possibly be extended to
support migration to another node as well. At least for the HPC world,
we'd like to see, and encourage, the hardware you describe getting full
support for suspend/resume.
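
To give a concrete idea of what "proper suspend/resume support" means
at the driver level, here is a minimal sketch using the legacy PCI
driver callbacks. The mydrv names are hypothetical, and a real driver
would also have to quiesce its DMA engines and interrupts before
saving state:

#include <linux/pci.h>

static int mydrv_suspend(struct pci_dev *pdev, pm_message_t state)
{
	/* Quiesce the hardware first: stop DMA, mask interrupts (omitted). */

	pci_save_state(pdev);			/* save PCI config space */
	pci_disable_device(pdev);
	pci_set_power_state(pdev, pci_choose_state(pdev, state));
	return 0;
}

static int mydrv_resume(struct pci_dev *pdev)
{
	pci_set_power_state(pdev, PCI_D0);	/* back to full power */
	pci_restore_state(pdev);		/* restore config space */
	if (pci_enable_device(pdev))
		return -EIO;
	pci_set_master(pdev);			/* re-enable bus mastering */

	/* Reprogram device registers, restart DMA and interrupts (omitted). */
	return 0;
}

static struct pci_driver mydrv_pci_driver = {
	.name		= "mydrv",
	.id_table	= mydrv_pci_ids,
	.probe		= mydrv_probe,
	.remove		= mydrv_remove,
	.suspend	= mydrv_suspend,
	.resume		= mydrv_resume,
};

The boilerplate above is the easy part; for hardware with a lot of
on-card state the real work is in the quiesce and reprogram steps,
which is exactly why we'd like driver authors thinking about them from
day one rather than bolting them on later.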

Kevin

> > Irrespective of the above, servers tend not to have too much in the
> > way of hardware unique to them anyway, and even if you don't find
> > it useful, that's not to say others won't want it.
> 
> Yes but for such hardware, suspend/resume is likely to be a lot of
> work to implement, and I'd rather the developers devote those
> resources to making the driver as stable and performant as possible.
> 
> I agree 100% that drivers for desktop and laptop hardware should be
> rejected if missing suspend/resume.
> 
> Lee
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
