Message-Id: <E1JKur1-0001Dh-Ol@localhost.localdomain>
Date:	Fri, 1 Feb 2008 20:18:47 +0800
From:	Fengguang Wu <wfg@...l.ustc.edu.cn>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Michael Rubin <mrubin@...gle.com>, linux-ext4@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH] clear PAGECACHE_TAG_DIRTY for truncated pages

The `truncated' page in block_write_full_page()/nobh_writepage() may
stick around for a long time.  For example, ext2_rmdir() will set
i_size to 0, and then the dir inode and its pages may hang around
because they are still referenced by some process.

To produce this situation:

In terminal 1:
                $ mkdir hi; cd hi
                $ sleep 1h; cd ..
In terminal 2:
                $ rmdir hi

The dir 'hi' is deleted in terminal 2 while still being referenced by
the shell in terminal 1 (as its working dir).  The deleted inode 'hi'
with its dirty-and-truncated page will stay around for one hour, during
which pdflush will retry writing the page _at least_ once every 5s.

So clear PAGECACHE_TAG_DIRTY for the truncated page to stop pdflush
from retrying it.
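
To see why the stale tag matters: the periodic writeback picks its work
by the radix-tree dirty tag, not by PG_dirty.  Below is a simplified
paraphrase of the scan loop (roughly the mpage_writepages() path of this
kernel era); sync-mode, range checks and error handling are omitted, so
take it as a sketch rather than the exact code:

	struct pagevec pvec;
	pgoff_t index = 0;
	int i;

	pagevec_init(&pvec, 0);
	while (pagevec_lookup_tag(&pvec, mapping, &index,
				  PAGECACHE_TAG_DIRTY, PAGEVEC_SIZE)) {
		for (i = 0; i < pagevec_count(&pvec); i++) {
			struct page *page = pvec.pages[i];

			lock_page(page);
			if (PageWriteback(page) ||
			    !clear_page_dirty_for_io(page)) {
				unlock_page(page);
				continue;
			}
			/*
			 * A truncated page takes the early-return branch
			 * patched below: PG_dirty is already clear, but
			 * nothing clears PAGECACHE_TAG_DIRTY, so the next
			 * scan (every 5s by default) finds it again.
			 */
			mapping->a_ops->writepage(page, wbc);
		}
		pagevec_release(&pvec);
		cond_resched();
	}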

Tested-by: Joerg Platte <jplatte@...sa.net>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Michael Rubin <mrubin@...gle.com>
Signed-off-by: Fengguang Wu <wfg@...l.ustc.edu.cn>
---
 fs/buffer.c |   16 ++++++++++++++++
 1 files changed, 16 insertions(+)

--- linux-mm.orig/fs/buffer.c
+++ linux-mm/fs/buffer.c
@@ -2631,7 +2631,13 @@ int nobh_writepage(struct page *page, ge
 		if (page->mapping->a_ops->invalidatepage)
 			page->mapping->a_ops->invalidatepage(page, offset);
 #endif
+		/*
+		 * Clear PAGECACHE_TAG_DIRTY to stop pdflush from retrying.
+		 * See block_write_full_page() for more details.
+		 */
+		set_page_writeback(page);
 		unlock_page(page);
+		end_page_writeback(page);
 		return 0; /* don't care */
 	}
 
@@ -2826,7 +2832,17 @@ int block_write_full_page(struct page *p
 		 * freeable here, so the page does not leak.
 		 */
 		do_invalidatepage(page, 0);
+		/*
+		 * Clear PAGECACHE_TAG_DIRTY to stop pdflush from retrying.
+		 *
+		 * Some truncated pages may hang around for a long time.
+		 * For example, ext2_rmdir() will set i_size to 0, and then
+		 * keep the pages as long as the dir is still referenced (as
+		 * the working dir of some process).
+		 */
+		set_page_writeback(page);
 		unlock_page(page);
+		end_page_writeback(page);
 		return 0; /* don't care */
 	}
 

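For reference, the tag clearing itself happens inside set_page_writeback().
This is a paraphrase of test_set_page_writeback() in mm/page-writeback.c,
written from memory of roughly this kernel version (locking details changed
in later releases), with a comment added for the case this patch relies on:

int test_set_page_writeback(struct page *page)
{
	struct address_space *mapping = page_mapping(page);
	int ret;

	if (mapping) {
		unsigned long flags;

		write_lock_irqsave(&mapping->tree_lock, flags);
		ret = TestSetPageWriteback(page);
		if (!ret)
			radix_tree_tag_set(&mapping->page_tree,
					   page_index(page),
					   PAGECACHE_TAG_WRITEBACK);
		/*
		 * On the writepage path, clear_page_dirty_for_io() has
		 * already cleared PG_dirty, so this drops the stale
		 * PAGECACHE_TAG_DIRTY that pdflush keys on.
		 */
		if (!PageDirty(page))
			radix_tree_tag_clear(&mapping->page_tree,
					     page_index(page),
					     PAGECACHE_TAG_DIRTY);
		write_unlock_irqrestore(&mapping->tree_lock, flags);
	} else {
		ret = TestSetPageWriteback(page);
	}
	return ret;
}

end_page_writeback() then goes through test_clear_page_writeback(), which
drops PAGECACHE_TAG_WRITEBACK again, so the truncated page ends up with
neither tag set and the periodic writeback no longer picks it up.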