Message-ID: <n2z87f94c371004151225q6aed9b2eud2d6bbd01419a189@mail.gmail.com>
Date: Thu, 15 Apr 2010 15:25:33 -0400
From: Greg Freemyer <greg.freemyer@...il.com>
To: djwong@...ibm.com
Cc: Akira Fujita <a-fujita@...jp.nec.com>,
linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: EXT4_IOC_MOVE_EXT file corruption!
On Thu, Apr 15, 2010 at 3:17 PM, Darrick J. Wong <djwong@...ibm.com> wrote:
> On Thu, Apr 15, 2010 at 05:27:50PM +0900, Akira Fujita wrote:
>> Hi Darrick,
>>
>> (2010/04/06 7:02), Darrick J. Wong wrote:
>>> Hi all,
>>>
>>> I wrote a program called e4frag that deliberately tries to fragment an ext4
>>> filesystem via EXT4_IOC_MOVE_EXT so that I could run e4defrag through its
>>> paces. While running e4frag and e4defrag concurrently on a kernel source tree,
>>> I discovered ongoing file corruption. It appears that if e4frag and e4defrag
>>> hit the same file at same time, the file ends up with a 4K data block from
>>> somewhere else. "Somewhere else" seems to be a small chunk of binary gibberish
>>> followed by contents from other files(!) Obviously this isn't a good thing to
>>> see, since today it's header files but tomorrow it could be the credit card/SSN
>>> database. :)
>>>
>>> Ted asked me to send out a copy of the program ASAP, so the test program source
>>> code is at the end of this message. To build it, run:
>>>
>>> $ gcc -o e4frag -O2 -Wall e4frag.c
>>>
>>> and then to run it:
>>>
>>> (unpack something in /path/to/files)
>>> $ cp -pRdu /path/to/files /path/to/intact_files
>>> $ while true; do e4defrag /path/to/files& done
>>> $ while true; do ./e4frag -m 500 -s random /path/to/files& done
>>> $ while true; do diff -Naurp /path/to/intact_files /path/to/files; done
>>>
>>> ...and wait for diff to cough up differences. This seems to happen on
>>> 2.6.34-rc3, and only if e4frag and e4defrag are running concurrently. Running
>>> e4frag or e4defrag in a serial loop doesn't produce this corruption, so I think
>>> it's purely a concurrent access problem.
>>
>> I couldn't reproduce this problem, somehow.
>>
>> My environment is:
>> Arch: i386
>> Kernel: 2.6.34-rc3
>> e2fsprogs: 1.41.11
>> Mount options: delalloc, data=ordered, async
>> Block size: 4KB
>> Partition size: 100GB
>>
>> Is there any difference in your case?
>> And how long does this file corruption take to be detected?
>>
>> I ran the program below all day long, but the problem did not occur.
>
> Hmm. I was running with 2.6.34-rc3 on x86-64, same block size, though with a
> 2TB mdraid0. It usually took a few hours to reproduce, though I've noticed
> that if I kick off at least as many e4defrags as e4frags, it will show up much
> sooner. Thank you for trying this out!
If it's not reproducible on a simple disk, it could be a bug in the
barrier code for mdraid.
But I think raid0 support for barriers has been around for a long time.
Greg