Date:	Sat, 9 May 2009 02:08:43 +0200
From:	"Rafael J. Wysocki" <rjw@...k.pl>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	David Rientjes <rientjes@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	"linux-pm@...ts.linux-foundation.org" 
	<linux-pm@...ts.linux-foundation.org>,
	"pavel@....cz" <pavel@....cz>,
	"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
	"alan-jenkins@...fmail.co.uk" <alan-jenkins@...fmail.co.uk>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"kernel-testers@...r.kernel.org" <kernel-testers@...r.kernel.org>
Subject: Re: [PATCH 1/5] mm: Add __GFP_NO_OOM_KILL flag

On Friday 08 May 2009, Rafael J. Wysocki wrote:
> On Friday 08 May 2009, Wu Fengguang wrote:
[--snip--]
> > But hey, that 'count' counts "savable+free" memory.
> > We don't have a counter that estimates "free+freeable" memory,
> > i.e. the threshold above which we can be sure preallocation cannot go.
> > 
> > One applicable situation is when there is 800M of anonymous memory,
> > but only a 500M image_size and no swap space.
> > 
> > In that case we would otherwise go down the OOM code path. Sure, OOM is
> > (and shall be) reliably disabled during hibernation, but we should still be
> > cautious enough not to create a low-memory situation, which will hurt:
> > - hibernation speed
> >   (vmscan goes mad trying to squeeze out the last free page)
> > - the user experience after resume
> >   (all *active* file data and metadata have to be reloaded)
> 
> Strangely enough, my recent testing with this patch doesn't confirm the
> theory. :-)  Namely, I set image_size too low on purpose and it only caused
> preallocate_image_memory() to return NULL at one point and that was it.
> 
> It didn't even take too much time.
> 
> I'll carry out more testing to verify this observation.
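
As a rough way to visualize the gap Wu describes above between a
"savable+free" count and a "free+freeable" estimate, the small userspace
program below compares the two using /proc/meminfo.  The choice of fields is
an assumption made purely for illustration; it is not the accounting done in
kernel/power/snapshot.c, and anonymous pages are treated as non-freeable
because the example assumes no swap.

/*
 * Rough illustration only: compare a "savable+free"-style count with a
 * crude "free+freeable" estimate.  The field selection is an assumption,
 * not the kernel's internal hibernation accounting.
 */
#include <stdio.h>
#include <string.h>

static unsigned long meminfo_kb(const char *key)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	unsigned long val = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, key, strlen(key))) {
			sscanf(line + strlen(key), "%lu", &val);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long free_kb   = meminfo_kb("MemFree:");
	unsigned long cached_kb = meminfo_kb("Cached:");
	unsigned long anon_kb   = meminfo_kb("AnonPages:");
	unsigned long swap_kb   = meminfo_kb("SwapFree:");

	/* Something like "savable+free": anonymous memory is savable,
	 * so it inflates this number. */
	unsigned long savable_free = free_kb + cached_kb + anon_kb;

	/* A crude "free+freeable" estimate: without swap, anonymous pages
	 * cannot be freed; only (roughly) the page cache can. */
	unsigned long free_freeable = free_kb + cached_kb + swap_kb;

	printf("savable+free  ~ %lu MB\n", savable_free >> 10);
	printf("free+freeable ~ %lu MB\n", free_freeable >> 10);
	return 0;
}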

I can confirm that even if image_size is below the minimum we can get,
the second preallocate_image_memory() just returns after allocating fewer pages
than it was asked for (that's with the original __GFP_NO_OOM_KILL-based
approach, as I wrote in the previous message in this thread) and nothing bad
happens.
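
For reference, this is what a preallocation loop naturally does once the
allocator is told not to invoke the OOM killer: it stops at the first failed
allocation instead of retrying until image_size is met.  The sketch below is
not the code from kernel/power/snapshot.c; the function name is made up, and
__GFP_NO_OOM_KILL is the flag this patch series proposes.

/*
 * Illustrative sketch only -- not the actual preallocate_image_memory().
 * With __GFP_NO_OOM_KILL (the flag added by this patch) the allocator
 * fails instead of invoking the OOM killer, so the loop simply returns
 * however many pages it managed to get.
 */
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

static unsigned long preallocate_pages_sketch(unsigned long nr_pages,
					      struct list_head *pages)
{
	unsigned long allocated = 0;

	while (allocated < nr_pages) {
		struct page *page;

		page = alloc_page(GFP_KERNEL | __GFP_NOWARN |
				  __GFP_NO_OOM_KILL);
		if (!page)
			break;	/* fewer pages than asked for -- that's fine */

		list_add(&page->lru, pages);
		allocated++;
	}
	return allocated;
}

Freeing the preallocated pages afterwards is just a walk over the list
calling __free_page() on each entry; that part is left out for brevity.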

That may be because we freeze the mm kernel threads, but I've also tested
without freezing them and it still worked the same way.

> > The current code simply tries *too hard* to meet image_size.
> > I'd rather take that as mild advice, and only free
> > "free+freeable-margin" pages when image_size cannot be reached.
> > 
> > The safety margin can be totalreserve_pages, plus enough pages for
> > retaining the "hard core working set".
> 
> How to compute the size of the "hard core working set", then?

Well, I'm still interested in the answer here. ;-)
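
If image_size really is to be treated as advice, the clamping Wu suggests
reduces to a couple of lines of arithmetic.  The sketch below is purely
hypothetical: free_plus_freeable and working_set are exactly the estimates
that don't exist yet (the latter being the open question above), and the
numbers in main() are loosely modeled on the 800M-anon / 500M-image_size
example with no swap.

/*
 * Sketch of the suggested clamping: never try to free more than
 * "free+freeable - margin" pages, where margin = totalreserve_pages plus
 * a working-set estimate.  All names and numbers are illustrative only.
 */
#include <stdio.h>

static unsigned long pages_to_free(unsigned long needed,
				   unsigned long free_plus_freeable,
				   unsigned long totalreserve_pages,
				   unsigned long working_set)
{
	unsigned long margin = totalreserve_pages + working_set;

	if (free_plus_freeable <= margin)
		return 0;
	if (needed > free_plus_freeable - margin)
		needed = free_plus_freeable - margin;
	return needed;
}

int main(void)
{
	/* Roughly the 800M-anon / 500M-image_size case with 4K pages:
	 * ~300M would be "needed" to meet image_size, but with no swap
	 * only ~100M is actually freeable above the margin. */
	unsigned long needed             = 300UL << 8;	/* 300M in 4K pages */
	unsigned long free_plus_freeable = 150UL << 8;	/* 150M */
	unsigned long totalreserve       =  20UL << 8;	/*  20M */
	unsigned long working_set        =  30UL << 8;	/*  30M */

	printf("will free %lu MB instead of %lu MB\n",
	       pages_to_free(needed, free_plus_freeable,
			     totalreserve, working_set) >> 8,
	       needed >> 8);
	return 0;
}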

Best,
Rafael
