Date:	Wed, 01 Jul 2015 09:21:46 -0300
From:	Henrique de Moraes Holschuh <hmh@....eng.br>
To:	Len Brown <lenb@...nel.org>
Cc:	Alan Stern <stern@...land.harvard.edu>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>,
	Linux PM list <linux-pm@...r.kernel.org>,
	linux-kernel@...r.kernel.org, Len Brown <len.brown@...el.com>
Subject: Re: [PATCH 1/1] suspend: delete sys_sync()

On Tue, Jun 30, 2015, at 17:04, Len Brown wrote:
> > Entering "mem" suspend mode through sysfs currently has the implied meaning
> > of "prepare the *entire* system to stay in a powered-down state for
> > potentially a _long_ time", where long means "certainly more than 10
> > seconds" ;-) This is unlikely to be written anywhere, of course; that's just
> > how it was used by the vast majority for years, at least on traditional
> > server/desktop/laptop platforms such as x86.
> 
> The _vast_ majority of systems using Linux suspend today are under
> an Android user-space.  Android has no assumption that suspend to
> mem will necessarily stay suspended for a long time.

Indeed; however, your change was not Android-specific, and it is not
"comfortable" with x86-style hardware and usage patterns.

> > IMO, we would actually benefit from *adding* new system-wide sleep/suspend
> > modes that are optimized for opportunistic, short-lived system-wide sleep
> > cycles (aka "catnaps") that are fast to enter and exit, and which will be
> > triggered very frequently, instead of trying to change the assumptions and
> > expected behavior of the current "deep-sleep" mode...
> 
> Thank you for sharing your opinion.
> 
> I am going to give up trying to change your mind, and the minds of
> those who share your view.  I plan to revive my patch from 2014, which
> makes sys_sync() optional.  That will not change the historic behavior,
> and will still allow everybody to do what they want.
> Rafael has said that he can live with the resulting kernel clutter.
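
For concreteness, a hypothetical sketch of the shape such an opt-out
could take (this is not the actual 2014 patch, and the knob name is
invented): a boolean with the historic behavior as its default,
guarding the sync on the suspend path.

/* sketch only -- invented knob, compile-time default, runtime-writable */
static bool pm_sync_on_suspend = true;

static void pm_suspend_sync(void)
{
	if (!pm_sync_on_suspend)
		return;		/* opted out: skip the possibly slow sync */
	sys_sync();		/* historic behavior: flush dirty data first */
}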

...

> BTW, the answer does not appear to be creating a new system sleep state.
> Android invokes "mem", and they don't seem excited about teaching
> user-space that runs on multiple platforms that what used to be "mem"
> with no "freeze" could become "mem" plus "freeze", or "freeze" with no
> "mem".

Hmm, maybe we could:

1. Make the behavior of "mem" configurable (select the default at
compile time, allow it to be changed at runtime).

2. Add a way to always enter the "heavyweight" (x86-style) mem sleep on
platforms where it exists.

3. Add a way to always enter the "light" (android-style) mem sleep on
platforms where it exists.

And make (2) and (3) optional (as in: you can compile out the clutter).
That at least provides a way forward for userspace, at the price of more
gunk on the kernel side (a rough sketch of such a selector follows
below).
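
To make (1)-(3) concrete, a hypothetical kernel-side sketch (all names
invented; nothing like this exists today): a sysfs attribute next to
/sys/power/state that selects which flavor a write of "mem" maps to,
with a compile-time default, and with the forcing variants from (2)
and (3) compilable out.

enum mem_variant { MEM_HEAVY, MEM_LIGHT };

/* invented Kconfig symbol picks the built-in default */
static enum mem_variant mem_variant = CONFIG_DEFAULT_MEM_VARIANT;

static ssize_t mem_variant_store(struct kobject *kobj,
				 struct kobj_attribute *attr,
				 const char *buf, size_t n)
{
	if (sysfs_streq(buf, "heavy"))
		mem_variant = MEM_HEAVY;	/* x86-style deep sleep */
	else if (sysfs_streq(buf, "light"))
		mem_variant = MEM_LIGHT;	/* android-style catnap */
	else
		return -EINVAL;
	return n;
}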

We already have a lot of infrastructure that works that way (cpuidle
and cpufreq governors, I/O elevators, etc.).

That said, as long as x86 will still try to safeguard my data during
"mem" sleep/resume as it does today, I have no strong feelings about
lightweight/heavyweight "mem" sleep being strictly a compile-time
selectable thing, or a more flexible runtime-selectable behavior.

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh
