Date:	Wed, 23 Nov 2011 02:30:18 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Linux-MM <linux-mm@...ck.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Jan Kara <jack@...e.cz>, Andy Isaacson <adi@...apodia.org>,
	Johannes Weiner <jweiner@...hat.com>,
	Rik van Riel <riel@...hat.com>, Nai Xia <nai.xia@...il.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/7] mm: compaction: make isolate_lru_page() filter-aware
 again

On Mon, Nov 21, 2011 at 06:36:46PM +0000, Mel Gorman wrote:
> Commit [39deaf85: mm: compaction: make isolate_lru_page() filter-aware]
> noted that compaction does not migrate dirty or writeback pages and
> that it was meaningless to pick the page and re-add it to the LRU list.
> This had to be partially reverted because some dirty pages can be
> migrated by compaction without blocking.
> 
> This patch updates "mm: compaction: make isolate_lru_page" by skipping
> over pages that migration cannot possibly migrate, to minimise LRU
> disruption.
> 
> Signed-off-by: Mel Gorman <mgorman@...e.de>
> ---
>  include/linux/mmzone.h |    2 ++
>  mm/compaction.c        |    3 +++
>  mm/vmscan.c            |   36 ++++++++++++++++++++++++++++++++++--
>  3 files changed, 39 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 188cb2f..ac5b522 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -173,6 +173,8 @@ static inline int is_unevictable_lru(enum lru_list l)
>  #define ISOLATE_CLEAN		((__force isolate_mode_t)0x4)
>  /* Isolate unmapped file */
>  #define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x8)
> +/* Isolate for asynchronous migration */
> +#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x10)
>  
>  /* LRU Isolation modes. */
>  typedef unsigned __bitwise__ isolate_mode_t;
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 615502b..0379263 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -349,6 +349,9 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
>  			continue;
>  		}
>  
> +		if (!cc->sync)
> +			mode |= ISOLATE_ASYNC_MIGRATE;
> +
>  		/* Try isolate the page */
>  		if (__isolate_lru_page(page, mode, 0) != 0)
>  			continue;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 3421746..28df0ed 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1061,8 +1061,40 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
>  
>  	ret = -EBUSY;
>  
> -	if ((mode & ISOLATE_CLEAN) && (PageDirty(page) || PageWriteback(page)))
> -		return ret;
> +	/*
> +	 * To minimise LRU disruption, the caller can indicate that it only
> +	 * wants to isolate pages it will be able to operate on without
> +	 * blocking - clean pages for the most part.
> +	 *
> +	 * ISOLATE_CLEAN means that only clean pages should be isolated. This
> +	 * is used by reclaim when it cannot write to backing storage
> +	 *
> +	 * ISOLATE_ASYNC_MIGRATE is used to indicate that it only wants pages
> +	 * that it is possible to migrate without blocking with a ->migratepage
> +	 * handler
> +	 */
> +	if (mode & (ISOLATE_CLEAN|ISOLATE_ASYNC_MIGRATE)) {
> +		/* All the caller can do on PageWriteback is block */
> +		if (PageWriteback(page))
> +			return ret;
> +
> +		if (PageDirty(page)) {
> +			struct address_space *mapping;
> +
> +			/* ISOLATE_CLEAN means only clean pages */
> +			if (mode & ISOLATE_CLEAN)
> +				return ret;
> +
> +			/*
> +			 * Only the ->migratepage callback knows if a dirty
> +			 * page can be migrated without blocking. Skip the
> +			 * page unless there is a ->migratepage callback.
> +			 */
> +			mapping = page_mapping(page);
> +			if (!mapping || !mapping->a_ops->migratepage)

I haven't reviewed 4/7 carefully yet.
When page_mapping() is NULL, move_to_new_page() calls migrate_page(),
which is a non-blocking function. So I guess such a page could be
migrated without blocking.
 
-- 
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
