Date:	Tue, 20 Oct 2009 11:33:58 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Justin Piszcz <jpiszcz@...idpixels.com>
Cc:	linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
	xfs@....sgi.com, Alan Piszcz <ap@...arrain.com>
Subject: Re: 2.6.31+2.6.31.4: XFS - All I/O locks up to D-state after 24-48
	hours (sysrq-t+w available)

On Mon, Oct 19, 2009 at 06:18:58AM -0400, Justin Piszcz wrote:
> On Mon, 19 Oct 2009, Dave Chinner wrote:
>> On Sun, Oct 18, 2009 at 04:17:42PM -0400, Justin Piszcz wrote:
>>> It has happened again, all sysrq-X output was saved this time.
>> .....
>>
>> All pointing to log IO not completing.
>>
....
> So far I do not have a reproducible test case,

Ok. What sort of load is being placed on the machine?

> the only other thing not posted was the output of ps auxww during
> the time of the lockup, not sure if it will help, but here it is:
>
> USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
> root         1  0.0  0.0  10320   684 ?        Ss   Oct16   0:00 init [2] 
....
> root       371  0.0  0.0      0     0 ?        R<   Oct16   0:01 [xfslogd/0]
> root       372  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfslogd/1]
> root       373  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfslogd/2]
> root       374  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfslogd/3]
> root       375  0.0  0.0      0     0 ?        R<   Oct16   0:00 [xfsdatad/0]
> root       376  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfsdatad/1]
> root       377  0.0  0.0      0     0 ?        S<   Oct16   0:03 [xfsdatad/2]
> root       378  0.0  0.0      0     0 ?        S<   Oct16   0:01 [xfsdatad/3]
> root       379  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfsconvertd/0]
> root       380  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfsconvertd/1]
> root       381  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfsconvertd/2]
> root       382  0.0  0.0      0     0 ?        S<   Oct16   0:00 [xfsconvertd/3]
.....

Both the xfslogd and the xfsdatad on CPU 0 are in the running state
(R<) yet don't appear to be consuming any significant CPU time. If
they remain like this, I think that means they are stuck waiting on
the run queue.  Do these XFS threads always look like this when the
hang occurs? If so, is something else hogging CPU 0 and preventing
these threads from getting onto it?
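
If it happens again, it would be worth grabbing a snapshot of what is
runnable on CPU 0 at the same time. Something like this (untested
sketch, assumes a procps ps) should do it:

  # tasks last scheduled on CPU 0, with state and kernel wait channel
  ps -eo pid,psr,pcpu,stat,wchan:32,comm --sort=-pcpu | awk 'NR==1 || $2 == 0'

Comparing a couple of those with the sysrq-t output should show
whether the xfslogd/xfsdatad threads are stuck behind something
hogging CPU 0 or are simply never being woken.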

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
