Date:	Tue, 23 Mar 2010 18:29:59 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Mel Gorman <mel@....ul.ie>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Christian Ehrhardt <ehrhardt@...ux.vnet.ibm.com>,
	linux-mm@...ck.org, Nick Piggin <npiggin@...e.de>,
	Chris Mason <chris.mason@...cle.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-kernel@...r.kernel.org, gregkh@...ell.com,
	Corrado Zoccolo <czoccolo@...il.com>,
	Johannes Weiner <hannes@...xchg.org>
Subject: Re: [RFC PATCH 0/3] Avoid the use of congestion_wait under zone pressure

On 03/22/2010 07:50 PM, Mel Gorman wrote:

> Test scenario
> =============
> X86-64 machine, 1 socket, 4 cores
> 4 consumer-grade disks connected as RAID-0 (software RAID). The on-board
> 	RAID controller is a piece of crap, and a decent RAID card could
> 	blow the budget.
> Booted with mem=256 to ensure it is fully IO-bound and to match more
> 	closely what Christian was doing

With that many disks, you can easily have dozens of megabytes
of data in flight to the disk at once.  That is a major
fraction of memory.

In fact, you might have all of the inactive file pages under
IO...
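
Back of the envelope, with assumed per-disk defaults rather than
numbers from your report (just a rough sketch):

	#include <stdio.h>

	int main(void)
	{
		/* Assumptions, not measurements: 4 disks, the default
		 * nr_requests of 128, and ~128KB per request on average. */
		unsigned long disks = 4;
		unsigned long nr_requests = 128;
		unsigned long request_kb = 128;
		unsigned long in_flight_mb = disks * nr_requests * request_kb / 1024;

		/* ~64MB -- a major fraction of a mem=256M boot. */
		printf("up to ~%lu MB queued to the disks\n", in_flight_mb);
		return 0;
	}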

> 3. Page reclaim evict-once logic from 56e49d21 hurts really badly
> 	fix title: revertevict
> 	fixed in mainline? no
> 	affects: 2.6.31 to now
>
> 	For reasons that are not immediately obvious, the evict-once patches
> 	*really* hurt the time spent on congestion and the number of pages
> 	reclaimed. Rik, I'm afraid I'm punting this to you for explanation
> 	because clearly you tested this for AIM7 and might have some
> 	theories. For the purposes of testing, I just reverted the changes.

The patch helped IO tests with reasonable amounts of memory
available, because the VM can cache frequently used data
much more effectively.

This comes at the cost of caching less of the recently accessed
use-once data, which should not be an issue since that data
is only used once anyway...
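
For reference, the heuristic those patches added boils down to
something like this (a simplified sketch from memory of 56e49d21,
not the verbatim kernel code; the name here is illustrative):

	/*
	 * Only scan the active file list once the inactive file list has
	 * become smaller than it, so use-once (streaming) pages get
	 * reclaimed first and the working set stays cached.
	 */
	int inactive_file_is_low_sketch(unsigned long nr_active_file,
					unsigned long nr_inactive_file)
	{
		return nr_active_file > nr_inactive_file;
	}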

> Rik, any theory on evict-once?

No real theories yet, just the observation that your revert
appears to be buggy (see below) and the possibility that your
test may have all of the inactive file pages under IO...

Can you reproduce the stall if you lower the dirty limits
(e.g. vm.dirty_ratio and vm.dirty_background_ratio)?

>   static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
>   	struct zone *zone, struct scan_control *sc, int priority)
>   {
>   	int file = is_file_lru(lru);
>
> -	if (is_active_lru(lru)) {
> -		if (inactive_list_is_low(zone, sc, file))
> -		    shrink_active_list(nr_to_scan, zone, sc, priority, file);
> +	if (lru == LRU_ACTIVE_FILE) {
> +		shrink_active_list(nr_to_scan, zone, sc, priority, file);
>   		return 0;
>   	}

Your revert is buggy.  With this change, LRU_ACTIVE_ANON no longer
hits the active-list branch, so anonymous pages will never get
deactivated via shrink_list.
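
For comparison, a full revert would also need to keep an anon branch;
from memory of the pre-56e49d21 code (a sketch, not a tested patch),
something like:

	if (lru == LRU_ACTIVE_FILE) {
		shrink_active_list(nr_to_scan, zone, sc, priority, file);
		return 0;
	}

	/* Without a branch like this, active anon pages never get deactivated. */
	if (lru == LRU_ACTIVE_ANON && inactive_anon_is_low(zone, sc)) {
		shrink_active_list(nr_to_scan, zone, sc, priority, file);
		return 0;
	}

	return shrink_inactive_list(nr_to_scan, zone, sc, priority, file);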