Message-ID: <20090210033652.GA28435@Krystal>
Date:	Mon, 9 Feb 2009 22:36:53 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Jens Axboe <jens.axboe@...cle.com>, akpm@...ux-foundation.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>, thomas.pi@...or.dea,
	Yuriy Lalym <ylalym@...il.com>
Cc:	ltt-dev@...ts.casi.polymtl.ca, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: [PATCH] mm: fix page writeback accounting to avoid OOM condition
	under heavy I/O

Related to:
http://bugzilla.kernel.org/show_bug.cgi?id=12309

Very annoying I/O latencies (20-30 seconds) have been occurring under heavy I/O
since ~2.6.18.

Yuriy Lalym noticed that the OOM killer was eventually called. So I took a look
at /proc/meminfo and noticed that under my test case (a fio job created from an
LTTng block I/O trace, reproducing dd writing to a 20GB file while ssh sessions
are being opened), the Inactive(file) value kept increasing, and total memory
consumption grew until only 80kB (out of 16GB) were left.

As a first step, I used cgroups to limit the memory usable by fio (or dd). This
seems to fix the problem; a sketch of the setup follows.
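
For reference, the cgroup setup amounts to something like the sketch below
(the /cgroups mount point, the "fio" group name and the 1G limit are
illustrative assumptions, and the group must have been created beforehand):

/* Hypothetical helper: cap a task's memory via the cgroup memory
 * controller.  Mount point, group name and limit are assumptions. */
#include <stdio.h>
#include <stdlib.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fprintf(f, "%s\n", val) < 0) {
		perror(path);
		exit(EXIT_FAILURE);
	}
	fclose(f);
}

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return EXIT_FAILURE;
	}
	write_str("/cgroups/fio/memory.limit_in_bytes", "1G");
	write_str("/cgroups/fio/tasks", argv[1]);	/* move the task in */
	return 0;
}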

Thomas noted that there seems to be a problem with pages being passed to the
block I/O elevator without being counted as dirty. I looked at
clear_page_dirty_for_io() and noticed that page_mkclean() clears the dirty bit,
after which set_page_dirty(page) is called on the page. This calls
mm/page-writeback.c:set_page_dirty(). I assume that
mapping->a_ops->set_page_dirty is NULL, so it falls back to
fs/buffer.c:__set_page_dirty_buffers(), which calls set_buffer_dirty(bh).

We then come back to clear_page_dirty_for_io(), where we decrement the dirty
accounting. This is a problem: we assume that the block layer will re-increment
it when it gets the page, but because the buffer is already marked dirty, this
never happens.
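
Condensed, the pre-patch code path looks like this (mm/page-writeback.c from
2.6.29-rc3, trimmed for discussion; the comments mark where I believe the
accounting goes out of sync):

int clear_page_dirty_for_io(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (mapping && mapping_cap_account_dirty(mapping)) {
		/*
		 * Write-protect the ptes; if any were writable, redirty
		 * the page so the data cannot be lost.  set_page_dirty()
		 * falls back to __set_page_dirty_buffers() when
		 * a_ops->set_page_dirty is NULL, which also does
		 * set_buffer_dirty(bh).
		 */
		if (page_mkclean(page))
			set_page_dirty(page);
		if (TestClearPageDirty(page)) {
			/*
			 * The accounting is decremented here, *before*
			 * ->writepage() runs.  With the buffers already
			 * dirty, the block layer never re-increments it.
			 */
			dec_zone_page_state(page, NR_FILE_DIRTY);
			dec_bdi_stat(mapping->backing_dev_info,
					BDI_RECLAIMABLE);
			return 1;
		}
		return 0;
	}
	return TestClearPageDirty(page);
}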

This patch fixes the behavior by decrementing the page accounting only _after_
the block I/O ->writepage() has been performed.
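
A caller that submits the I/O itself then follows this pattern (hypothetical
caller for illustration only; the actual conversions of write_cache_pages()
and write_one_page() are in the patch below):

/* Hypothetical caller showing the new return-value contract of
 * clear_page_dirty_for_io(). */
static int flush_one_page(struct address_space *mapping, struct page *page,
		struct writeback_control *wbc)
{
	int lazyaccounting = clear_page_dirty_for_io(page);
	int ret = 0;

	if (lazyaccounting) {
		ret = mapping->a_ops->writepage(page, wbc);
		if (lazyaccounting == 2) {
			/* Drop the dirty accounting only now that the page
			 * has been handed to the block layer. */
			dec_zone_page_state(page, NR_FILE_DIRTY);
			dec_bdi_stat(mapping->backing_dev_info,
					BDI_RECLAIMABLE);
		}
	}
	return ret;
}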

The effect on my workload is that memory stops being completely filled by the
page cache under heavy I/O, and the vfs_cache_pressure value seems to work
again.

However, this does not fully solve the high-latency issue: once there are
enough VFS pages in the cache that pages are written directly to disk rather
than left in the page cache, the CFQ I/O scheduler does not seem able to
prioritize I/O requests correctly. I think this may be because, once this
high-pressure point is reached, all tasks block in the same way when they try
to add pages to the page cache, independently of their I/O priority. Any idea
on how to fix this is welcome.

Related commits:
commit 7658cc289288b8ae7dd2c2224549a048431222b3
Author: Linus Torvalds <torvalds@...mini.osdl.org>
Date:   Fri Dec 29 10:00:58 2006 -0800
    VM: Fix nasty and subtle race in shared mmap'ed page writeback

commit 8c08540f8755c451d8b96ea14cfe796bc3cd712d
Author: Andrew Morton <akpm@...l.org>
Date:   Sun Dec 10 02:19:24 2006 -0800
    [PATCH] clean up __set_page_dirty_nobuffers()

Both were merged in December 2006, between kernel v2.6.19 and v2.6.20-rc3.

This patch applies to 2.6.29-rc3.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
CC: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
CC: Jens Axboe <jens.axboe@...cle.com>
CC: akpm@...ux-foundation.org
CC: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Ingo Molnar <mingo@...e.hu>
CC: thomas.pi@...or.dea
CC: Yuriy Lalym <ylalym@...il.com>
---
 mm/page-writeback.c |   33 +++++++++++++++++++++++++--------
 1 file changed, 25 insertions(+), 8 deletions(-)

Index: linux-2.6-lttng/mm/page-writeback.c
===================================================================
--- linux-2.6-lttng.orig/mm/page-writeback.c	2009-02-09 20:18:41.000000000 -0500
+++ linux-2.6-lttng/mm/page-writeback.c	2009-02-09 20:42:39.000000000 -0500
@@ -945,6 +945,7 @@ int write_cache_pages(struct address_spa
 	int cycled;
 	int range_whole = 0;
 	long nr_to_write = wbc->nr_to_write;
+	int lazyaccounting;
 
 	if (wbc->nonblocking && bdi_write_congested(bdi)) {
 		wbc->encountered_congestion = 1;
@@ -1028,10 +1029,18 @@ continue_unlock:
 			}
 
 			BUG_ON(PageWriteback(page));
-			if (!clear_page_dirty_for_io(page))
+			lazyaccounting = clear_page_dirty_for_io(page);
+			if (!lazyaccounting)
 				goto continue_unlock;
 
 			ret = (*writepage)(page, wbc, data);
+
+			if (lazyaccounting == 2) {
+				dec_zone_page_state(page, NR_FILE_DIRTY);
+				dec_bdi_stat(mapping->backing_dev_info,
+						BDI_RECLAIMABLE);
+			}
+
 			if (unlikely(ret)) {
 				if (ret == AOP_WRITEPAGE_ACTIVATE) {
 					unlock_page(page);
@@ -1149,6 +1158,7 @@ int write_one_page(struct page *page, in
 {
 	struct address_space *mapping = page->mapping;
 	int ret = 0;
+	int lazyaccounting;
 	struct writeback_control wbc = {
 		.sync_mode = WB_SYNC_ALL,
 		.nr_to_write = 1,
@@ -1159,7 +1169,8 @@ int write_one_page(struct page *page, in
 	if (wait)
 		wait_on_page_writeback(page);
 
-	if (clear_page_dirty_for_io(page)) {
+	lazyaccounting = clear_page_dirty_for_io(page);
+	if (lazyaccounting) {
 		page_cache_get(page);
 		ret = mapping->a_ops->writepage(page, &wbc);
 		if (ret == 0 && wait) {
@@ -1167,6 +1178,11 @@ int write_one_page(struct page *page, in
 			if (PageError(page))
 				ret = -EIO;
 		}
+		if (lazyaccounting == 2) {
+			dec_zone_page_state(page, NR_FILE_DIRTY);
+			dec_bdi_stat(mapping->backing_dev_info,
+					BDI_RECLAIMABLE);
+		}
 		page_cache_release(page);
 	} else {
 		unlock_page(page);
@@ -1312,6 +1328,11 @@ EXPORT_SYMBOL(set_page_dirty_lock);
  *
  * This incoherency between the page's dirty flag and radix-tree tag is
  * unfortunate, but it only exists while the page is locked.
+ *
+ * Return values:
+ * 0: page is not dirty
+ * 1: page is dirty; no lazy accounting update needs to be performed
+ * 2: page is dirty *and* a lazy accounting update must still be performed
  */
 int clear_page_dirty_for_io(struct page *page)
 {
@@ -1358,12 +1379,8 @@ int clear_page_dirty_for_io(struct page 
 		 * the desired exclusion. See mm/memory.c:do_wp_page()
 		 * for more comments.
 		 */
-		if (TestClearPageDirty(page)) {
-			dec_zone_page_state(page, NR_FILE_DIRTY);
-			dec_bdi_stat(mapping->backing_dev_info,
-					BDI_RECLAIMABLE);
-			return 1;
-		}
+		if (TestClearPageDirty(page))
+			return 2;
 		return 0;
 	}
 	return TestClearPageDirty(page);

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
