Date:	Sun, 1 Mar 2009 19:37:36 +0900 (JST)
From:	KOSAKI Motohiro <>
To:	Pavel Machek <>
Cc:	Andrew Morton <>,
	Johannes Weiner <>, ...
Subject: Re: [PATCH 3/3][RFC] swsusp: shrink file cache first

> > hm?  And if this approach leads to less-than-optimum performance after
> > resume then the fault lies with core page reclaim - it reclaimed the
> > wrong pages!
> > 
> > That actually was my thinking when I first worked on
> > shrink_all_memory() and it did turn out to be surprisingly hard to
> > simply reuse the existing reclaim code for this application.  Things
> > kept on going wrong.  IIRC this was because we were freeing pages as we
> > were reclaiming, so the page reclaim logic kept on seeing all these
> > free pages and kept on wanting to bale out.
> > 
> > Now, the simple and obvious fix to this is not to free the pages - just
> > keep on allocating pages and storing them locally until we have
> > "enough" memory.  Then when we're all done, dump them all straight onto
> > to the freelists.
> > 
> > But for some reason which I do not recall, we couldn't do that.
> We used to do that. I remember having a loop doing get_free_page and
> building a linked list of them. I believe it was considered quite a hack.
> The reason is that we don't want to OOM-kill anything if memory is
> low; we want to abort the hibernation...
> Sorry for being late...

Not at all.
Your information is really helpful.

Maybe we can make this simplification without triggering the OOM killer...

