Open Source and information security mailing list archives
Message-ID: <alpine.DEB.2.00.0910210618210.10288@p34.internal.lan>
Date: Wed, 21 Oct 2009 06:19:54 -0400 (EDT)
From: Justin Piszcz <jpiszcz@...idpixels.com>
To: Dave Chinner <david@...morbit.com>
cc: linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org, xfs@....sgi.com, Alan Piszcz <ap@...arrain.com>
Subject: Re: 2.6.31+2.6.31.4: XFS - All I/O locks up to D-state after 24-48 hours (sysrq-t+w available)

On Tue, 20 Oct 2009, Justin Piszcz wrote:

> On Tue, 20 Oct 2009, Dave Chinner wrote:
>
>> On Mon, Oct 19, 2009 at 06:18:58AM -0400, Justin Piszcz wrote:
>>> On Mon, 19 Oct 2009, Dave Chinner wrote:
>>>> On Sun, Oct 18, 2009 at 04:17:42PM -0400, Justin Piszcz wrote:
>>>>> It has happened again, all sysrq-X output was saved this time.
>>>> .....
>>>>
>>>> All pointing to log IO not completing.
>>>>
>> ....
>>> So far I do not have a reproducible test case,
>>
>> Ok. What sort of load is being placed on the machine?
>
> Hello, generally the load is low; it mainly serves out some samba shares.
>
>> It appears that both the xfslogd and the xfsdatad on CPU 0 are in
>> the running state but don't appear to be consuming any significant
>> CPU time. If they remain like this then I think that means they are
>> stuck waiting on the run queue. Do these XFS threads always appear
>> like this when the hang occurs? If so, is there something else that
>> is hogging CPU 0 preventing these threads from getting the CPU?
>
> Yes, the XFS threads show up like this each time the kernel crashed. So far,
> with 2.6.30.9 it has not crashed after ~48hrs+, so it appears to be an issue
> introduced between 2.6.30.9 and 2.6.31.x. Any recommendations on how to catch
> this bug with certain options enabled, etc.?
>
>> Cheers,
>>
>> Dave.
>> --
>> Dave Chinner
>> david@...morbit.com

Uptime with 2.6.30.9:

06:18:41 up 2 days, 14:10, 14 users, load average: 0.41, 0.21, 0.07

No issues yet, so it first started happening in 2.6.31.x.
Any further recommendations on how to debug this issue? BTW: do you view this
as an XFS bug or an MD/VFS-layer issue, based on the logs/output thus far?

Justin.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
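[Archive note: the thread above revolves around tasks stuck in uninterruptible (D) sleep. As an illustration only (not part of the original thread), here is a minimal sketch that lists D-state tasks by parsing /proc/[pid]/stat, whose fields are documented in proc(5); the function name is invented for this example:]

```python
import os
import re

def d_state_tasks():
    """Return (pid, comm) pairs for tasks in uninterruptible (D) sleep.

    Reads /proc/[pid]/stat, whose format is "pid (comm) state ...";
    the state field is the single character after the closing paren.
    """
    tasks = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/stat") as f:
                data = f.read()
        except OSError:
            continue  # task exited while we were scanning
        # comm may itself contain parentheses, so match greedily up to
        # the last ')'; the following non-space character is the state.
        m = re.match(r"\d+ \((.*)\) (\S)", data)
        if m and m.group(2) == "D":
            tasks.append((int(pid), m.group(1)))
    return tasks

if __name__ == "__main__":
    for pid, comm in d_state_tasks():
        print(f"{pid}\t{comm}")
```

[On a machine exhibiting the hang described above, one would expect the xfslogd/xfsdatad or blocked writer threads to show up here; kernel stacks for those PIDs can then be inspected via /proc/PID/stack or the sysrq-w output already mentioned in the thread.]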