Message-ID: <CAFk8rvYtyraezVEP1e3LFEE+9t5YoRH2YwjKxex8hgSY74Lw7A@mail.gmail.com>
Date:   Thu, 5 Oct 2017 19:34:10 -0500
From:   Ashlie Martinez <ashmrtn@...xas.edu>
To:     Amir Goldstein <amir73il@...il.com>
Cc:     Xiao Yang <yangx.jy@...fujitsu.com>,
        "Theodore Ts'o" <tytso@....edu>, Eryu Guan <eguan@...hat.com>,
        Josef Bacik <jbacik@...com>, fstests <fstests@...r.kernel.org>,
        Ext4 <linux-ext4@...r.kernel.org>,
        Vijay Chidambaram <vvijay03@...il.com>
Subject: Re: [RFC][PATCH] fstest: regression test for ext4 crash consistency bug

On Thu, Oct 5, 2017 at 2:10 PM, Amir Goldstein <amir73il@...il.com> wrote:
> On Thu, Oct 5, 2017 at 6:04 PM, Ashlie Martinez <ashmrtn@...xas.edu> wrote:
>> On Thu, Oct 5, 2017 at 2:27 AM, Xiao Yang <yangx.jy@...fujitsu.com> wrote:
>>> On 2017/09/30 22:15, Ashlie Martinez wrote:
>>>>
>>>> Hi Xiao,
>>>>
>>>> I am a student at the University of Texas at Austin. Some researchers
>>>> in the computer science department at UT, myself included, have
>>>> recently been working to develop a file system crash consistency test
>>>> harness called CrashMonkey [1][2]. I have been working on the
>>>> CrashMonkey project since it was started late last year. With
>>>> CrashMonkey we have also been able to reproduce the incorrect i_size
>>>> error you noted but we have not been able to reproduce the other
>>>> output that Amir found. CrashMonkey works by logging and replaying
>>>> operations for a workload, so it should not be sensitive to
>>>> differences in timing that could be caused by things like KVM+virtio.
>>>> I also did a few experiments with Amir's new xfstests test 456 (both
>>>> with and without KVM and virtio) and I was unable to reproduce the
>>>> output noted in the xfstest. I have not spent a lot of time looking
>>>> into the cause of the bug that Amir found and it is rather unfortunate
>>>> that I was unable to reproduce it with either xfstests or CrashMonkey.
>>>
>>> Hi Ashlie,
>>>
>>> Thanks for your detailed comments.
>>>
>>> 1) Do you think the output that Amir noted in xfstests is a false positive?
>>
>> It almost seems that way, though to be honest, I don't think I know
>> enough about 1. the setup used by Amir and 2. all the internal workings
>> of KVM+virtio to say for sure.
>
> I believe you misread my email.
> The problem was NOT reproduced on KVM+virtio.
> The problem *is* reproduced on a 10G LVM volume over SSD on
> Ubuntu 16.04 with latest kernel and latest e2fsprogs.

I have also tried running generic/456 on non-KVM, non-virtio machines
and I was unable to reproduce it. Without information on the test setups
individuals are using, it is very hard to tell where I, or other
people, are going wrong in testing, other than perhaps the use of a
virtual machine.

As an aside, it does not appear to be simply a timing issue due to
KVM+virtio. I would hope that CrashMonkey would still be able to find
the extent error you saw, even on a KVM VM, since CrashMonkey is not
dependent on timing.
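
(To make that concrete: a record/replay tester derives crash states from
the logged order of completed writes, not from wall-clock timing, so
virtualization should not change what it finds. The sketch below is an
illustration of the idea only, not CrashMonkey's actual code.)

/*
 * Illustration only (not CrashMonkey's actual code): crash states are
 * generated by replaying prefixes of the logged write sequence, so
 * timing differences from KVM, virtio, etc. do not matter.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct logged_write {
	uint64_t sector;     /* destination sector, 512-byte units */
	uint32_t len;        /* payload length in bytes */
	const uint8_t *data; /* payload captured when the write completed */
};

/* Simulate a crash after the first n logged writes by replaying them
 * onto a snapshot of the device taken before the workload ran. */
static void replay_prefix(uint8_t *disk, const struct logged_write *log,
                          size_t n)
{
	for (size_t i = 0; i < n; i++)
		memcpy(disk + log[i].sector * 512, log[i].data, log[i].len);
}

int main(void)
{
	static uint8_t disk[4096];
	const uint8_t a[] = "data", b[] = "size";
	const struct logged_write log[] = {
		{ 0, sizeof(a), a }, { 2, sizeof(b), b },
	};

	/* Each prefix of the log is one possible post-crash disk state. */
	replay_prefix(disk, log, 1);
	printf("after 1 write: %s\n", (char *)disk);
	return 0;
}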

>
> There is no use in speculating why the problem doesn't happen on your
> systems. If the issue reproduces on A system (2 systems, actually, that
> I tested) then it is a problem.
>
> Attached is an e2image dump of the inconsistent file system following test
> generic/456 on my system.  I would have attached the image sooner,
> but since on my system the problem reproduces 100% of the time, I did not
> think there was a need. Without an ext4 developer looking into this image,
> I don't really see the point in further discussion.
>
> It would be interesting to get more test samples from people running the test
> on other systems. If only some people see the error
> "end of extent exceeds allowed value"
> it would be interesting to find the commonality between those setups.
>

I agree it would be interesting to see why this error appears only for
some people.


>> One thing I have identified as being a
>> potential source of false positives is code in the kernel that remaps the
>> block number sent to device drivers to something relative to the start
>> of a block device, not the start of a partition. I'm thinking this
>> could cause trouble if 1. a partition higher than the first partition
>> is monitored by dm-log-writes, 2. the block numbers are recorded
>> verbatim in dm-log-writes, and 3. the dm-log-writes log is replayed on
>> a different device with different partitions (or possibly the same
>> device, but a different partition).
>
> You do realize that the test generic/456 is not using dm-log-writes at all.
> I intentionally took it out of the equation and used the even dumber dm-flakey
> target to demonstrate the crash inconsistency.

Oh, yes, I recall that now. The past few weeks have been very busy for
me with all of my other school work, so getting all of my i's dotted
and t's crossed has been a challenge. At any rate, I don't think it is
a real minus to mention the potential remapping of blocks in the cited
function. If people are going to use any sort of block device module
(be it dm-log-writes or otherwise), then I think it is something
developers should be aware of.
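
For context, the remap in question boils down to roughly the following;
this is a paraphrase of blk_partition_remap() from the blk-core.c links
quoted below, not verbatim kernel code, and field names vary across
kernel versions:

/* Paraphrase of blk_partition_remap() in block/blk-core.c (4.x era). */
static void blk_partition_remap(struct bio *bio)
{
	struct block_device *bdev = bio->bi_bdev;

	if (bio_sectors(bio) && bdev != bdev->bd_contains) {
		struct hd_struct *p = bdev->bd_part;

		/* Rewrite the sector so it is relative to the start of
		 * the whole disk rather than the partition it was
		 * submitted against. */
		bio->bi_iter.bi_sector += p->start_sect;
		bio->bi_bdev = bdev->bd_contains;
	}
}

A tool that records bi_sector after this point and replays the log
against a bare partition, or a device with a different partition table,
will land its writes at shifted offsets.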

>
>> I know some other undergrad
>> students at UT on the CrashMonkey team are looking into this right
>> now, but I have not had time to test this myself. The offending code
>> in the kernel is in the completely misnamed
>> `generic_make_request_checks` function, which has given the CrashMonkey
>> team trouble in the past. Links to that function and the remapping
>> function it calls are below.
>>
>> http://elixir.free-electrons.com/linux/latest/source/block/blk-core.c#L2041
>> http://elixir.free-electrons.com/linux/latest/source/block/blk-core.c#L1902
>>
>>>
>>> 2) About the output that both you and I reproduced, did you look into it
>>> and find its root cause?
>>
>> I have not yet looked into the root cause of the issue that we both
>> reliably reproduce.
>>
>
> Now you have a broken file system image and the exact set of operations
> that led to it. That should be a pretty big lead for investigation.
>
> Amir.
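
For what it's worth, once the e2image dump is restored to a raw image
(e2image -r), the inode that e2fsck complains about can also be inspected
programmatically with libext2fs. A minimal sketch; the image path and
inode number are placeholders, and link flags may vary by system:

/*
 * Read one inode's size fields from a restored ext4 image.
 * Build (typically): cc inspect.c -lext2fs
 */
#include <stdio.h>
#include <ext2fs/ext2fs.h>

int main(int argc, char **argv)
{
	ext2_filsys fs;
	struct ext2_inode inode;
	ext2_ino_t ino = 12;	/* placeholder: inode reported by e2fsck */
	errcode_t err;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <image>\n", argv[0]);
		return 1;
	}

	err = ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);
	if (err) {
		fprintf(stderr, "ext2fs_open failed: %ld\n", (long)err);
		return 1;
	}

	err = ext2fs_read_inode(fs, ino, &inode);
	if (!err)
		printf("inode %u: i_size=%u i_size_high=%u\n",
		       (unsigned)ino, inode.i_size, inode.i_size_high);

	ext2fs_close(fs);
	return 0;
}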
