Message-ID: <46F2E103.8000907@redhat.com>
Date:	Thu, 20 Sep 2007 17:07:15 -0400
From:	Chuck Ebbert <cebbert@...hat.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Matthias Hensler <matthias@...se.de>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	richard kennedy <richard@....demon.co.uk>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: Processes spinning forever, apparently in lock_timer_base()?

On 08/09/2007 12:55 PM, Andrew Morton wrote:
> On Thu, 9 Aug 2007 11:59:43 +0200 Matthias Hensler <matthias@...se.de> wrote:
> 
>> On Sat, Aug 04, 2007 at 10:44:26AM +0200, Matthias Hensler wrote:
>>> On Fri, Aug 03, 2007 at 11:34:07AM -0700, Andrew Morton wrote:
>>> [...]
>>> I am also willing to try the patch posted by Richard.
>> I want to give some update here:
>>
>> 1. We finally hit the problem on a third system, with a totally different
>>    setup and hardware. However, again high I/O load caused the problem
>>    and the affected filesystems were mounted with noatime.
>>
>> 2. I installed a recompiled kernel with just the two line patch from
>>    Richard Kennedy (http://lkml.org/lkml/2007/8/2/89). That system has 5
>>    days uptime now and counting. I believe the patch fixed the problem.
>>    However, I will continue running "vmstat 1" and the endless loop of
>>    "cat /proc/meminfo", just in case I am wrong.
>>
> 
> Did we ever see the /proc/meminfo and /proc/vmstat output during the stall?
> 
> If Richard's patch has indeed fixed it then this confirms that we're seeing
> contention over the dirty-memory limits.  Richard's patch isn't really the
> right one because it allows unlimited dirty-memory windup in some situations
> (large number of disks with small writes, or when we perform queue congestion
> avoidance).
> 
> As you're seeing this happening when multiple disks are being written to, it is
> possible that the per-device-dirty-threshold patches which recently went into
> -mm (and which appear to have a bug) will fix it.
> 
> But I worry that the stall appears to persist *forever*.  That would indicate
> that we have a dirty-memory accounting leak, or that for some reason the
> system has decided to stop doing writeback to one or more queues (might be
> caused by an error in a lower-level driver's queue congestion state management).
> 
> If it is the latter, then it could be that running "sync" will clear the
> problem.  Temporarily, at least.  Because sync will ignore the queue congestion
> state.
> 

This is still a problem for people, and no fix is in sight until 2.6.24.
Can we get some kind of band-aid, like making the endless 'for' loop in
balance_dirty_pages() terminate after some number of iterations? Clearly
if we haven't written "write_chunk" pages after a few tries, *and* we
haven't encountered congestion, there's no point in trying forever...
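In the meantime, the monitoring Matthias describes (the "vmstat 1" plus the /proc/meminfo polling Andrew asked to see) could be captured with something like the sketch below, so the dirty/writeback counters around a stall survive for later inspection. The interval, field selection, and output location are arbitrary choices, not anything mandated by the thread; in practice the loop would run unbounded rather than for a fixed count.

```shell
# Snapshot the dirty/writeback counters periodically so the numbers
# around a stall can be inspected afterwards. Bounded to 3 iterations
# here for illustration; run it unbounded on a machine being watched.
outdir=$(mktemp -d)
for i in 1 2 3; do
    ts=$(date +%s)
    grep -E '^(Dirty|Writeback|NFS_Unstable):' /proc/meminfo \
        > "$outdir/meminfo.$ts.$i"
    cat /proc/vmstat > "$outdir/vmstat.$ts.$i"
    sleep 1
done
echo "snapshots written to $outdir"
```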

[not even compile tested patch follows]

---
 mm/page-writeback.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- linux-2.6.22.noarch.orig/mm/page-writeback.c
+++ linux-2.6.22.noarch/mm/page-writeback.c
@@ -208,11 +208,12 @@ static void balance_dirty_pages(struct a
 	long background_thresh;
 	long dirty_thresh;
 	unsigned long pages_written = 0;
+	unsigned long i;
 	unsigned long write_chunk = sync_writeback_pages();
 
 	struct backing_dev_info *bdi = mapping->backing_dev_info;
 
-	for (;;) {
+	for (i = 0; ; i++) {
 		struct writeback_control wbc = {
 			.bdi		= bdi,
 			.sync_mode	= WB_SYNC_NONE,
@@ -250,6 +251,8 @@ static void balance_dirty_pages(struct a
 			pages_written += write_chunk - wbc.nr_to_write;
 			if (pages_written >= write_chunk)
 				break;		/* We've done our duty */
+			if (i >= write_chunk && !wbc.encountered_congestion)
+				break;		/* nothing to write? */
 		}
 		congestion_wait(WRITE, HZ/10);
 	}
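For anyone trying to reproduce the trigger described above (sustained buffered writes to more than one device, then the manual sync Andrew suggests as a temporary unblock), a rough approximation follows. The file targets and sizes are purely illustrative; a realistic test would point them at files on separate physical disks.

```shell
# Approximate the reported trigger: parallel buffered writers on
# several targets, then a manual sync, which (per Andrew's note)
# ignores queue congestion state and may temporarily clear a stall.
# Targets here are temp files; use separate disks for a real test.
t1=$(mktemp) ; t2=$(mktemp)
for target in "$t1" "$t2"; do
    dd if=/dev/zero of="$target" bs=1M count=16 2>/dev/null &
done
wait
sync    # flushes dirty pages regardless of congestion state
rm -f "$t1" "$t2"
```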
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
