Message-Id: <20070730134903.d7bd67b6.akpm@linux-foundation.org>
Date:	Mon, 30 Jul 2007 13:49:03 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Andy Whitcroft <apw@...dowen.org>
Cc:	Mel Gorman <mel@....ul.ie>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] Wait for page writeback when directly reclaiming
 contiguous areas

On Sat, 28 Jul 2007 23:52:30 +0100
Andy Whitcroft <apw@...dowen.org> wrote:

> 
> From: Mel Gorman <mel@....ul.ie>
> 
> Lumpy reclaim works by selecting a lead page from the LRU list and then
> selecting pages for reclaim from the order-aligned area of pages. In the
> situation where all pages in that region are inactive and not referenced by
> any process over time, it works well.
> 
> In the situation where there is even light load on the system, the pages
> may not be freed quickly. Out of an area of 1024 pages, maybe only 950 of
> them are freed when the allocation attempt occurs, because lumpy reclaim
> returned early.
> This patch alters the behaviour of direct reclaim for large contiguous blocks.
> 
> The first attempt to call shrink_page_list() is asynchronous but if it
> fails, the pages are submitted a second time and the calling process waits
> for the IO to complete. It will retry up to 5 times, waiting for the pages
> to be fully freed. This may stall allocators waiting for contiguous memory,
> but that should be expected behaviour for high-order users; it is preferable
> to potentially queueing unnecessary areas for IO. Note that kswapd will not
> stall in this fashion.

I agree with the intent.
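
For reference, the two-pass flow the description implies looks roughly
like this sketch of the caller (the congestion_wait() call and the
current_is_kswapd()/PAGE_ALLOC_COSTLY_ORDER checks are my reading of
the patch, not quotes from it):

	/*
	 * Sketch: direct reclaim of a high-order block. Pass 1 starts
	 * writeback asynchronously; if too few pages came free, pass 2
	 * resubmits the same list and waits for the IO to complete.
	 */
	nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);

	if (nr_freed < nr_taken && !current_is_kswapd() &&
			sc->order > PAGE_ALLOC_COSTLY_ORDER) {
		/* Let in-flight IO make progress before blocking on it. */
		congestion_wait(WRITE, HZ / 10);

		/* Second pass: wait on writeback so the area comes free. */
		nr_freed += shrink_page_list(&page_list, sc,
						PAGEOUT_IO_SYNC);
	}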

> +/* Request for sync pageout. */
> +typedef enum {
> +	PAGEOUT_IO_ASYNC,
> +	PAGEOUT_IO_SYNC,
> +} pageout_io_t;

no typedefs.

(checkpatch.pl knew that ;))
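
i.e. Documentation/CodingStyle wants a bare enum here, something like:

	/* Request for sync pageout. */
	enum pageout_io {
		PAGEOUT_IO_ASYNC,
		PAGEOUT_IO_SYNC,
	};

with the helpers taking an "enum pageout_io sync_writeback" argument
rather than a pageout_io_t.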

>  /* possible outcome of pageout() */
>  typedef enum {
>  	/* failed to write page out, page is locked */
> @@ -287,7 +293,8 @@ typedef enum {
>   * pageout is called by shrink_page_list() for each dirty page.
>   * Calls ->writepage().
>   */
> -static pageout_t pageout(struct page *page, struct address_space *mapping)
> +static pageout_t pageout(struct page *page, struct address_space *mapping,
> +						pageout_io_t sync_writeback)
>  {
>  	/*
>  	 * If the page is dirty, only perform writeback if that write
> @@ -346,6 +353,15 @@ static pageout_t pageout(struct page *page, struct address_space *mapping)
>  			ClearPageReclaim(page);
>  			return PAGE_ACTIVATE;
>  		}
> +
> +		/*
> +		 * Wait on writeback if requested to. This happens when
> +		 * direct reclaiming a large contiguous area and the
> +		 * first attempt to free a ranage of pages fails

cnat tpye.  (s/ranage/range/)

> +		 */
> +		if (PageWriteback(page) && sync_writeback == PAGEOUT_IO_SYNC)
> +			wait_on_page_writeback(page);
> +
>
>  		if (!PageWriteback(page)) {
>  			/* synchronous write or broken a_ops? */
>  			ClearPageReclaim(page);
> @@ -423,7 +439,8 @@ cannot_free:
>   * shrink_page_list() returns the number of reclaimed pages
>   */
>  static unsigned long shrink_page_list(struct list_head *page_list,
> -					struct scan_control *sc)
> +					struct scan_control *sc,
> +					pageout_io_t sync_writeback)
>  {
>  	LIST_HEAD(ret_pages);
>  	struct pagevec freed_pvec;
> @@ -458,8 +475,12 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  		if (page_mapped(page) || PageSwapCache(page))
>  			sc->nr_scanned++;
>  
> -		if (PageWriteback(page))
> -			goto keep_locked;
> +		if (PageWriteback(page)) {
> +			if (sync_writeback == PAGEOUT_IO_SYNC)
> +				wait_on_page_writeback(page);
> +			else
> +				goto keep_locked;
> +		}

This is unneeded and conceivably deadlocky for !__GFP_FS allocations.
We should probably avoid doing all of this whenever the test that
computes may_enter_fs evaluates false.

It's unlikely that any very-high-order allocators are using GFP_NOIO or
whatever, but still...
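
Something along these lines, reusing the may_enter_fs test that
shrink_page_list() already computes, would cover it (a sketch of a
possible fix, not the posted patch):

	may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
		(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));

	if (PageWriteback(page)) {
		/* Waiting on FS writeback without __GFP_FS can deadlock. */
		if (sync_writeback == PAGEOUT_IO_SYNC && may_enter_fs)
			wait_on_page_writeback(page);
		else
			goto keep_locked;
	}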

