Date:	Thu, 14 Mar 2013 20:39:27 +0800
From:	Hillf Danton <dhillf@...il.com>
To:	Will Huck <will.huckk@...il.com>
Cc:	Lenky Gao <lenky.gao@...il.com>,
	Zlatko Calusic <zlatko.calusic@...on.hr>,
	Greg KH <gregkh@...uxfoundation.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"olaf@...fle.de" <olaf@...fle.de>, Linux-MM <linux-mm@...ck.org>,
	Hugh Dickins <hughd@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Rik van Riel <riel@...hat.com>
Subject: Re: Inactive memory keeps growing and how to release it?

On Sat, Mar 9, 2013 at 10:14 AM, Will Huck <will.huckk@...il.com> wrote:
> Cc experts. Hugh, Johannes,
>
> On 03/04/2013 08:21 PM, Lenky Gao wrote:
>>
>> 2013/3/4 Zlatko Calusic <zlatko.calusic@...on.hr>:
>>>
>>> The drop_caches mechanism doesn't free dirty page cache pages. And your
>>> bash
>>> script is creating a lot of dirty pages. Run it like this and see if it
>>> helps your case:
>>>
>>> sync; echo 3 > /proc/sys/vm/drop_caches
>>
>> Thanks for your advice.
>>
>> The inactive memory still cannot be reclaimed after I execute the sync
>> command:
>>
>> # cat /proc/meminfo | grep Inactive\(file\);
>> Inactive(file):   882824 kB
>> # sync;
>> # echo 3 > /proc/sys/vm/drop_caches
>> # cat /proc/meminfo | grep Inactive\(file\);
>> Inactive(file):   777664 kB
>>
>> I find these pages become orphaned in this function, but do not understand
>> why:
>>
>> /*
>>  * If truncate cannot remove the fs-private metadata from the page, the
>>  * page becomes orphaned.  It will be left on the LRU and may even be
>>  * mapped into user pagetables if we're racing with filemap_fault().
>>  *
>>  * We need to bale out if page->mapping is no longer equal to the original
>>  * mapping.  This happens a) when the VM reclaimed the page while we
>>  * waited on its lock, b) when a concurrent invalidate_mapping_pages got
>>  * there first and c) when tmpfs swizzles a page between a tmpfs inode
>>  * and swapper_space.
>>  */
>> static int
>> truncate_complete_page(struct address_space *mapping, struct page *page)
>> {
>> ...
>>
>> My file system type is ext3, mounted with the option data=journal, and
>> the problem is easy to reproduce.
>>
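
For completeness, here is the reported sequence as a small C program (just a
sketch of what the shell commands above do; assumes root and a Linux /proc):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char line[128];
	FILE *meminfo;
	int fd;

	sync();	/* flush dirty pages first; drop_caches skips dirty pagecache */

	fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
	if (fd < 0)
		return 1;
	if (write(fd, "3", 1) != 1)	/* 3 = drop pagecache + slab objects */
		perror("drop_caches");
	close(fd);

	/* read back Inactive(file), as in the meminfo check above */
	meminfo = fopen("/proc/meminfo", "r");
	if (!meminfo)
		return 1;
	while (fgets(line, sizeof(line), meminfo))
		if (strncmp(line, "Inactive(file)", 14) == 0)
			fputs(line, stdout);
	fclose(meminfo);
	return 0;
}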

Perhaps we have to take the page count of an orphaned page into account,
assuming this can be reproduced with a mainline kernel.
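
Concretely (my reading of the accounting, for illustration): for a page
still in the page cache, page_count() is 1 for the isolating caller, plus 1
for the page cache radix tree, plus 1 if buffer heads sit at page->private,
so page_count(page) - page_has_private(page) == 2 and the current test
holds.  For an orphaned page the mapping, and with it the radix tree
reference, is gone, so the count is only 1 + page_has_private(page); the
current test can never succeed and the page stays on the LRU.  Hence the
has_mapping + 1 form below.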

Hillf
---
--- a/mm/vmscan.c	Sun Mar 10 13:36:26 2013
+++ b/mm/vmscan.c	Thu Mar 14 20:29:40 2013
@@ -315,14 +315,14 @@ out:
 	return ret;
 }

-static inline int is_page_cache_freeable(struct page *page)
+static inline int is_page_cache_freeable(struct page *page, int has_mapping)
 {
 	/*
 	 * A freeable page cache page is referenced only by the caller
 	 * that isolated the page, the page cache radix tree and
 	 * optional buffer heads at page->private.
 	 */
-	return page_count(page) - page_has_private(page) == 2;
+	return page_count(page) - page_has_private(page) == has_mapping + 1;
 }

 static int may_write_to_queue(struct backing_dev_info *bdi,
@@ -393,7 +393,7 @@ static pageout_t pageout(struct page *pa
 	 * swap_backing_dev_info is bust: it doesn't reflect the
 	 * congestion state of the swapdevs.  Easy to fix, if needed.
 	 */
-	if (!is_page_cache_freeable(page))
+	if (!is_page_cache_freeable(page, mapping ? 1 : 0))
 		return PAGE_KEEP;
 	if (!mapping) {
 		/*
--
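
For illustration only, the same arithmetic in a tiny userspace harness,
with hypothetical plain ints standing in for page_count() and
page_has_private() (not kernel code):

#include <assert.h>

/* mirrors the patched is_page_cache_freeable() check */
static int is_freeable(int page_count, int has_private, int has_mapping)
{
	/* expected references: the isolating caller, plus the radix tree
	 * if the page still has a mapping, plus optional buffer heads */
	return page_count - has_private == has_mapping + 1;
}

int main(void)
{
	assert(is_freeable(2, 0, 1));	/* mapped clean page: isolator + radix tree */
	assert(is_freeable(3, 1, 1));	/* mapped page with buffer heads */
	assert(is_freeable(1, 0, 0));	/* orphaned page: the old "== 2" test kept it */
	return 0;
}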