Message-Id: <201003222141.o2MLf6Xd025669@demeter.kernel.org>
Date: Mon, 22 Mar 2010 21:41:06 GMT
From: bugzilla-daemon@...zilla.kernel.org
To: linux-ext4@...r.kernel.org
Subject: [Bug 15579] ext4 -o discard produces incorrect blocks of zeroes in
newly created files under heavy read+truncate+append-new-file load
https://bugzilla.kernel.org/show_bug.cgi?id=15579
Eric Sandeen <sandeen@...hat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |sandeen@...hat.com
--- Comment #4 from Eric Sandeen <sandeen@...hat.com> 2010-03-22 21:40:52 ---
Just for what it's worth, I've had trouble reproducing this on another brand of
SSD... using something like the script below (don't let the xfs_io throw you;
it's just a convenient way to generate the I/O). I ran this on a 512M
filesystem.
#!/bin/bash
SCRATCH_MNT=/mnt/scratch

rm -f $SCRATCH_MNT/*
touch $SCRATCH_MNT/outputfile

# Create several large-ish files
for I in `seq 1 240`; do
        xfs_io -F -f -c "pwrite 0 2m" $SCRATCH_MNT/file$I &>/dev/null
done

# reread the last bit of each, just for kicks, and truncate off 1m
for I in `seq 1 240`; do
        xfs_io -F -c "pread 1m 2m" $SCRATCH_MNT/file$I &>/dev/null
        xfs_io -F -c "truncate 1m" $SCRATCH_MNT/file$I
done

# Append the outputfile
xfs_io -F -c "pwrite 0 250m" $SCRATCH_MNT/outputfile &>/dev/null
In the end I don't get any corruption. I was hoping to write a testcase for
this (one that didn't take 250G) :)
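If a testcase does come out of this, one way to scan for the corruption:
xfs_io's pwrite fills its buffer with a repeating 0xcd pattern by default, so
after a umount/mount cycle any all-zero stretch in the outputfile would point
at the bug. A quick (untested) check along those lines:

# od prints hex words; repeated lines collapse to "*", but the first
# all-zero line still shows up, so any grep hit here is suspect
od -A d -x $SCRATCH_MNT/outputfile | grep -m 1 ' 0000 0000 0000 0000'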
Does the script above reflect your use case? Does it corrupt the outputfile on
your filesystem? (Note the "rm -f" at the top of the script; be careful where
you point it.) You could substitute dd for xfs_io without much trouble if
desired; a rough sketch follows.
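A dd version might look something like this (untested sketch: it assumes
coreutils truncate(1) is available for the truncate step, and it reads from
/dev/urandom so that any spuriously zeroed ranges stay detectable):

#!/bin/bash
SCRATCH_MNT=/mnt/scratch

rm -f $SCRATCH_MNT/*
touch $SCRATCH_MNT/outputfile

# Create several large-ish files (nonzero data, so zeroed blocks stand out)
for I in `seq 1 240`; do
        dd if=/dev/urandom of=$SCRATCH_MNT/file$I bs=1M count=2 &>/dev/null
done

# reread the last bit of each, then truncate off 1m
for I in `seq 1 240`; do
        dd if=$SCRATCH_MNT/file$I of=/dev/null bs=1M skip=1 &>/dev/null
        truncate -s 1M $SCRATCH_MNT/file$I
done

# Append the outputfile
dd if=/dev/urandom of=$SCRATCH_MNT/outputfile bs=1M count=250 &>/dev/null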