Message-ID: <24dd01cadc85$b1d9ea10$0400a8c0@dcccs>
Date:	Thu, 15 Apr 2010 12:23:26 +0200
From:	"Janos Haar" <janos.haar@...center.hu>
To:	"Dave Chinner" <david@...morbit.com>
Cc:	<xiyou.wangcong@...il.com>, <linux-kernel@...r.kernel.org>,
	<kamezawa.hiroyu@...fujitsu.com>, <linux-mm@...ck.org>,
	<xfs@....sgi.com>, <axboe@...nel.dk>
Subject: Re: Kernel crash in xfs_iflush_cluster (was Somebody take a look please!...)


----- Original Message ----- 
From: "Dave Chinner" <david@...morbit.com>
To: "Janos Haar" <janos.haar@...center.hu>
Cc: <xiyou.wangcong@...il.com>; <linux-kernel@...r.kernel.org>; 
<kamezawa.hiroyu@...fujitsu.com>; <linux-mm@...ck.org>; <xfs@....sgi.com>; 
<axboe@...nel.dk>
Sent: Thursday, April 15, 2010 11:23 AM
Subject: Re: Kernel crash in xfs_iflush_cluster (was Somebody take a look 
please!...)


> On Thu, Apr 15, 2010 at 09:00:49AM +0200, Janos Haar wrote:
>> Dave,
>>
>> The corruption + crash reproduced. (unfortunately)
>>
>> http://download.netcenter.hu/bughunt/20100413/messages-15
>>
>> Apr 14 01:06:33 alfa kernel: XFS mounting filesystem sdb2
>>
>> This is the point where I ran xfs_repair multiple times.
>
> OK, the inodes that are corrupted are different, so there's still
> something funky going on here. I still would suggest replacing the
> RAID controller to rule that out as the cause.

This was not a cheap card, and I can't replace it because I have only one,
and the owner has already decided that I need to replace the entire server
on Saturday.
I have only 2 days to get useful debug information while the server is
online.
This is bad for testing too, because the workload will disappear, and we
need to figure out some way to reproduce the problem offline...

>
> FWIW, do you have any other servers with similar h/w, s/w and
> workloads? If so, are they seeing problems?

This is a web-based game, which generates a huge number of small files on
the corrupted filesystem, and as far as I can see, the corruption happens
only on writes, not on reads.
I can copy big gz files across the partitions multiple times, compare them
and check their CRCs, and there is a cron job which checks 12GB of gz files
hourly and never finds a problem. This tells me the corruption only happens
when writing, and it hits the FS itself, not the file contents.
This makes the RAID card a less likely suspect, am I right? :-)
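
For reference, the hourly check amounts to roughly this (a made-up Python
sketch; the real test is a plain cron job, and the file names and reference
CRCs here are hypothetical):

import zlib

# Hypothetical list of big gz files and their known-good CRC32 values.
REFERENCE = {
    "/data/backup/dump1.gz": 0x1c291ca3,
    "/data/backup/dump2.gz": 0x7d4a2b90,
}

def crc32_of(path, chunk=1024 * 1024):
    # Stream the file so 12GB of data doesn't have to fit in memory.
    crc = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            crc = zlib.crc32(block, crc)
    return crc & 0xffffffff

for path, expected in REFERENCE.items():
    actual = crc32_of(path)
    status = "OK" if actual == expected else "MISMATCH"
    print("%s: %s (expected %08x, got %08x)" % (path, status, expected, actual))

It has never reported a mismatch, which is why I think the file data itself
stays intact.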

Additionally, in the last 3 days I have twice tried to cp -aR the entire
partition to another one, and both times the corruption appeared ON THE
SOURCE, and finally the kernel crashed.

step 1: repair the filesystem
step 2: run the game (files get generated...)
step 3: start copying the partition's data in the background
step 4: corruption reported by the kernel
step 5: kernel crash during a write

Can this be a race between read and write?
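
In case it helps with reproducing it offline, here is a rough sketch
(Python, the paths are invented) of the kind of load that seems to trigger
it for me: one thread keeps creating lots of small files, like the game
does, while another thread repeatedly copies the same tree, like the
background cp -aR:

import os
import shutil
import threading
import time

SRC = "/mnt/xfs/game_data"    # hypothetical source tree on the XFS partition
DST = "/mnt/other/game_copy"  # hypothetical destination on another partition

def writer(stop):
    # Simulates the game: keeps creating lots of small files.
    i = 0
    while not stop.is_set():
        d = os.path.join(SRC, "dir%03d" % (i % 100))
        if not os.path.isdir(d):
            os.makedirs(d)
        with open(os.path.join(d, "file%06d" % i), "w") as f:
            f.write("x" * 512)
        i += 1

def copier(stop):
    # Simulates the background cp -aR of the same tree while it changes.
    while not stop.is_set():
        try:
            if os.path.isdir(DST):
                shutil.rmtree(DST)
            shutil.copytree(SRC, DST)
        except (shutil.Error, OSError, IOError):
            pass  # files may vanish mid-copy; just try again
        time.sleep(1)

if not os.path.isdir(SRC):
    os.makedirs(SRC)
stop = threading.Event()
threads = [threading.Thread(target=writer, args=(stop,)),
           threading.Thread(target=copier, args=(stop,))]
for t in threads:
    t.start()
time.sleep(600)  # hammer it for 10 minutes
stop.set()
for t in threads:
    t.join()

Of course the real workload is much bigger, but the pattern (heavy
small-file writes plus a full-tree copy of the same data) is the same.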

Btw, I have 2 servers with this game; the differences are these:

- The game's language
- The HW structure is similar, but all the parts are from totally different
brands, except the Intel CPU. :-)
- The workload is lower on the stable server
- The stable server is not selected for replacement. :-)

The important things in common:
- The base OS is FC6 on both
- The current kernel on the stable server is 2.6.28.10
(This is the kernel that started to crash at the beginning of March on the
server we are working on.)
- The FS and the internal structure are the same

>
> Can you recompile the kernel with CONFIG_XFS_DEBUG enabled and
> reboot into it before you repair and remount the filesystem again?

Yes, of course!
I will do it now; we have 2 days left to get useful info...
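
Before I repair and remount, I will double-check that the new kernel really
has the debug code in. A quick sketch of the check I have in mind (assuming
the config is visible via /proc/config.gz or /boot/config-<release>; adjust
for what FC6 actually installs):

import gzip
import os

def read_kernel_config():
    # Prefer the running kernel's own config if it is exported.
    if os.path.exists("/proc/config.gz"):
        data = gzip.open("/proc/config.gz").read()
        return data if isinstance(data, str) else data.decode("utf-8")
    # Otherwise fall back to the config file installed in /boot.
    return open("/boot/config-" + os.uname()[2]).read()

if "CONFIG_XFS_DEBUG=y" in read_kernel_config():
    print("CONFIG_XFS_DEBUG is enabled")
else:
    print("CONFIG_XFS_DEBUG is NOT enabled - wrong kernel booted?")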

> (i.e. so that we know that we have started with a clean filesystem
> and the debug kernel) I'm hoping that this will catch the corruption
> much sooner, perhaps before it gets to disk. Note that this will
> cause the machine to panic when corruption is detected, and it is
> much,much more careful about checking in memory structures so there
> is a CPU overhead involved as well.

Not a problem.


>
> Cheers,
>
> Dave.
> -- 
> Dave Chinner
> david@...morbit.com 

