Message-ID: <20080612123031.21594jyk6rjc1lfk@imp.ku-gbr.de>
Date:	Thu, 12 Jun 2008 12:30:31 +0200
From:	lists@...gbr.de
To:	linux-kernel@...r.kernel.org
Subject: XFS internal error xfs_trans_cancel at line 1163 of file
	fs/xfs/xfs_trans.c

Hi!

This morning my server at home bailed out twice (with a reboot in between):

Jun 12 07:23:40 zappa Filesystem "sda7": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c.  Caller 0xffffffff802fa8f5
Jun 12 07:23:40 zappa Pid: 2379, comm: procmail Not tainted 2.6.25-gentoo-r4 #3
Jun 12 07:23:40 zappa
Jun 12 07:23:40 zappa Call Trace:
Jun 12 07:23:40 zappa [<ffffffff802fa8f5>]
Jun 12 07:23:40 zappa [<ffffffff802f4d71>]
Jun 12 07:23:40 zappa [<ffffffff802fa8f5>]
Jun 12 07:23:40 zappa [<ffffffff803038bb>]
Jun 12 07:23:40 zappa [<ffffffff8025f1e1>]
Jun 12 07:23:40 zappa [<ffffffff80261899>]
Jun 12 07:23:40 zappa [<ffffffff8025ae00>]
Jun 12 07:23:40 zappa [<ffffffff80257010>]
Jun 12 07:23:40 zappa [<ffffffff80256d92>]
Jun 12 07:23:40 zappa [<ffffffff80257077>]
Jun 12 07:23:40 zappa [<ffffffff8020ad3b>]
Jun 12 07:23:40 zappa
Jun 12 07:23:40 zappa xfs_force_shutdown(sda7,0x8) called from line 1164 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff802f4d8a
Jun 12 07:23:40 zappa Filesystem "sda7": Corruption of in-memory data detected.  Shutting down filesystem: sda7
Jun 12 07:23:40 zappa Please umount the filesystem, and rectify the problem(s)

Jun 12 08:15:58 zappa Filesystem "sda7": XFS internal error xfs_trans_cancel at line 1163 of file fs/xfs/xfs_trans.c.  Caller 0xffffffff802fa8f5
Jun 12 08:15:58 zappa Pid: 2161, comm: procmail Not tainted 2.6.25-gentoo-r4 #3
Jun 12 08:15:58 zappa
Jun 12 08:15:58 zappa Call Trace:
Jun 12 08:15:58 zappa [<ffffffff802fa8f5>]
Jun 12 08:15:58 zappa [<ffffffff802f4d71>]
Jun 12 08:15:58 zappa [<ffffffff802fa8f5>]
Jun 12 08:15:58 zappa [<ffffffff803038bb>]
Jun 12 08:15:58 zappa [<ffffffff8025f1e1>]
Jun 12 08:15:58 zappa [<ffffffff80261899>]
Jun 12 08:15:58 zappa [<ffffffff8025ae00>]
Jun 12 08:15:58 zappa [<ffffffff80257010>]
Jun 12 08:15:58 zappa [<ffffffff80256d92>]
Jun 12 08:15:58 zappa [<ffffffff80257077>]
Jun 12 08:15:58 zappa [<ffffffff8020ad3b>]
Jun 12 08:15:58 zappa
Jun 12 08:15:58 zappa xfs_force_shutdown(sda7,0x8) called from line 1164 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff802f4d8a
Jun 12 08:15:58 zappa Filesystem "sda7": Corruption of in-memory data detected.  Shutting down filesystem: sda7
Jun 12 08:15:58 zappa Please umount the filesystem, and rectify the problem(s)


The partition sda7 holds my /home directory and lives on a 750 GB SATA
hard disk; the kernel is vanilla 2.6.25. The filesystem is 100 GB in
size and 12% full, holding many small files (IMAP mailspool).

I investigated the system; while everything else behaves normally, I
found no errors in syslog or from smartctl that would point to a
sector read/write problem or anything else.
Sadly I have lost my ssh connection until this evening, but xfs_check
printed a line like "xxx-count is 1 but counted 0 in ag17" after I was
instructed to mount the filesystem once to replay the log (which
worked). Backups are finished; I wanted to run xfs_repair next...
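
For reference, the sequence I am following is roughly this (a sketch
of the usual xfsprogs workflow; device and mountpoint are from my
setup, adjust to yours):

  # mount once and unmount cleanly so the dirty journal is replayed
  mount /dev/sda7 /home
  umount /dev/sda7

  # read-only consistency check (filesystem must be unmounted)
  xfs_check /dev/sda7

  # dry run: -n reports what xfs_repair would change without writing
  xfs_repair -n /dev/sda7

  # real repair, only after backups are confirmed
  xfs_repair /dev/sda7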

Is this something I should worry about, or something for the XFS folks?

Kind Regards, Konsti


----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
