Date:	Thu, 04 Nov 2010 11:30:20 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Mandeep Singh Baines <msb@...omium.org>
CC:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mel@....ul.ie>,
	Minchan Kim <minchan.kim@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, wad@...omium.org,
	olofj@...omium.org, hughd@...omium.org
Subject: Re: [PATCH] RFC: vmscan: add min_filelist_kbytes sysctl for protecting
 the working set

On 11/03/2010 06:40 PM, Mandeep Singh Baines wrote:

> I've created a patch which takes a slightly different approach.
> Instead of limiting how fast pages get reclaimed, the patch limits
> how fast the active list gets scanned. This should result in the
> active list being a better measure of the working set. I've seen
> fairly good results with this patch and a scan interval of 1
> centisecond. I see no thrashing when the scan interval is non-zero.
>
> I've made it a tunable because I don't know what to set the scan
> interval to. The final patch could set the value based on HZ and
> some other system parameters. Maybe relate it to sched_period?

I like your approach. For file pages it looks like it
could work fine, since new pages always start on the
inactive file list.
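
Just to make sure I am reading the idea right, here is a minimal
user-space sketch of the kind of throttle I have in mind.  The
names (scan_active_interval_cs, may_scan_active_list) are made up
for illustration and are not from your patch:

#include <stdbool.h>
#include <time.h>

static long scan_active_interval_cs = 1;    /* tunable, centiseconds */
static struct timespec last_active_scan;

/* True if enough time has passed to scan the active list again. */
static bool may_scan_active_list(void)
{
        struct timespec now;
        long elapsed_cs;

        clock_gettime(CLOCK_MONOTONIC, &now);
        elapsed_cs = (now.tv_sec  - last_active_scan.tv_sec) * 100 +
                     (now.tv_nsec - last_active_scan.tv_nsec) / 10000000L;

        if (elapsed_cs < scan_active_interval_cs)
                return false;           /* skip this scan pass */

        last_active_scan = now;
        return true;
}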

However, for anonymous pages I could see your patch
leading to problems, because all anonymous pages start
on the active list.  With a scan interval of 1
centisecond, that means there would be a limit of 3200
pages, or 12MB of anonymous memory, that can be moved
to the inactive list per second.

I have seen systems with single SATA disks push out
several times that amount to swap per second, which
matters when someone starts up a program that is just
too big to fit in memory and requires that something
be pushed out.

That would reduce the size of the inactive list to
zero, degrading our page replacement to a slow FIFO
at best and causing false OOM kills at worst.

Staying with a default of 0 would of course not do
anything, which would make merging the code not too
useful.

I believe we absolutely need to preserve the ability
to evict pages quickly when new pages are brought
into memory or allocated at a high rate.

However, speed limits are probably a very good idea
once a cache has been reduced to a smaller size, or
when most IO bypasses the reclaim-speed-limited cache.
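
Something along these lines, using the min_filelist_kbytes name
from your subject line; the threshold value and the rest of the
names here are made up for illustration:

#include <stdbool.h>

static unsigned long min_filelist_kbytes = 50000;  /* example floor, ~50MB */

/* Only apply the active list scan throttle once the cache is small. */
static bool scan_throttle_applies(unsigned long file_lru_kbytes)
{
        /*
         * Plenty of cache left: let reclaim deactivate pages as fast
         * as it needs to, so a big new workload can push old pages
         * out quickly.
         */
        if (file_lru_kbytes > min_filelist_kbytes)
                return false;

        /*
         * The cache has already been squeezed down toward the working
         * set; from here on, rate-limit active list scanning.
         */
        return true;
}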

-- 
All rights reversed