Message-ID: <20090227132726.GE1482@ucw.cz>
Date:	Fri, 27 Feb 2009 14:27:27 +0100
From:	Pavel Machek <pavel@....cz>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Johannes Weiner <hannes@...xchg.org>,
	kosaki.motohiro@...fujitsu.com, rjw@...k.pl, riel@...hat.com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 3/3][RFC] swsusp: shrink file cache first

On Fri 2009-02-06 13:00:09, Andrew Morton wrote:
> On Fri, 6 Feb 2009 05:49:07 +0100
> Johannes Weiner <hannes@...xchg.org> wrote:
> 
> > > and, I think you should measure the performance result.
> > 
> > Yes, I'm still thinking about ideas how to quantify it properly.  I
> > have not yet found a reliable way to check for whether the working set
> > is intact besides seeing whether the resumed applications are
> > responsive right away or if they first have to swap in their pages
> > again.
> 
> Describing your subjective non-quantitative impressions would be better
> than nothing...
> 
> The patch bugs me.
> 
> The whole darn point behind the whole darn page reclaim is "reclaim the
> pages which we aren't likely to need soon".  There's nothing special
> about the swsusp code at all!  We want it to do exactly what page
> reclaim normally does, only faster.
> 
> So why do we need to write special hand-rolled code to implement
> something which we've already spent ten years writing?
> 
> hm?  And if this approach leads to less-than-optimum performance after
> resume then the fault lies with core page reclaim - it reclaimed the
> wrong pages!
> 
> That actually was my thinking when I first worked on
> shrink_all_memory() and it did turn out to be surprisingly hard to
> simply reuse the existing reclaim code for this application.  Things
> kept on going wrong.  IIRC this was because we were freeing pages as we
> were reclaiming, so the page reclaim logic kept on seeing all these
> free pages and kept on wanting to bale out.
> 
> Now, the simple and obvious fix to this is not to free the pages - just
> keep on allocating pages and storing them locally until we have
> "enough" memory.  Then when we're all done, dump them all straight onto
> to the freelists.
> 
> But for some reason which I do not recall, we couldn't do that.
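For context, the "just let regular reclaim do it" approach described above
is roughly what shrink_all_memory() in mm/vmscan.c (built for
CONFIG_HIBERNATION) gives the hibernation code. Here is a minimal sketch of
driving it from the suspend path; the caller name and the loop are
illustrative assumptions, not the actual kernel/power code:

static int free_enough_memory(unsigned long pages_needed)
{
	unsigned long freed = 0;

	/* Reuse the normal reclaim path instead of hand-rolled shrinking. */
	while (freed < pages_needed) {
		unsigned long got = shrink_all_memory(pages_needed - freed);

		if (!got)		/* no progress: let the caller abort */
			return -ENOMEM;
		freed += got;
	}
	return 0;
}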

We used to do that. I remember having a loop doing get_free_page() and
building a linked list of the pages. I believe it was considered quite
a hack.

.....one reason is that we don't want to OOM-kill anything if memory is
low; we want to abort the hibernation instead...

Sorry for being late.....


-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
