Date:   Wed, 28 Mar 2018 19:30:06 +0000
From:   Sasha Levin <Alexander.Levin@...rosoft.com>
To:     Dave Chinner <david@...morbit.com>
CC:     Sasha Levin <levinsasha928@...il.com>,
        "Luis R. Rodriguez" <mcgrof@...nel.org>,
        "Darrick J. Wong" <darrick.wong@...cle.com>,
        Christoph Hellwig <hch@....de>,
        xfs <linux-xfs@...r.kernel.org>,
        "linux-kernel@...r.kernel.org List" <linux-kernel@...r.kernel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Julia Lawall <julia.lawall@...6.fr>,
        Josh Triplett <josh@...htriplett.org>,
        Takashi Iwai <tiwai@...e.de>, Michal Hocko <mhocko@...nel.org>,
        Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH] xfs: always free inline data before resetting inode fork
 during ifree

On Wed, Mar 28, 2018 at 02:32:28PM +1100, Dave Chinner wrote:
>How much time are your test rigs going to be able to spend running
>xfstests? A single pass on a single filesystem config on spinning
>disks will take 3-4 hours of run time. And we have at least 4 common
>configs that need validation (v4, v4 w/ 512b block size, v5
>(defaults), and v5 w/ reflink+rmap) and so you're looking at a
>minimum 12-24 hours of machine test time per kernel you'd need to
>test.

No reason they can't run in parallel, right?
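
(As a rough sketch of what "in parallel" could look like -- the section
names, device paths, and mkfs option spellings below are placeholders,
one xfstests local.config section per worker VM, each running its own
./check instance:)

    # local.config (sketch; devices are placeholders)
    TEST_DEV=/dev/vdb1
    TEST_DIR=/mnt/test
    SCRATCH_DEV=/dev/vdb2
    SCRATCH_MNT=/mnt/scratch
    FSTYP=xfs

    [xfs_v4]
    MKFS_OPTIONS="-m crc=0"

    [xfs_v4_512b]
    MKFS_OPTIONS="-m crc=0 -b size=512"

    [xfs_v5]
    # mkfs defaults

    [xfs_v5_rmap_reflink]
    MKFS_OPTIONS="-m rmapbt=1,reflink=1"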

>> > From: Sasha Levin <alexander.levin@...rosoft.com>
>> > To: Sasha Levin <alexander.levin@...rosoft.com>
>> > To: linux-xfs@...r.kernel.org, "Darrick J . Wong" <darrick.wong@...cle.com>
>> > Cc: Brian Foster <bfoster@...hat.com>, linux-kernel@...r.kernel.org
>> > Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic
>> > In-Reply-To: <20180306102638.25322-1-vbendel@...hat.com>
>> > References: <20180306102638.25322-1-vbendel@...hat.com>
>> >
>> > Hi Vratislav Bendel,
>> >
>> > [This is an automated email]
>> >
>> > This commit has been processed by the -stable helper bot and determined
>> > to be a high probability candidate for -stable trees. (score: 6.4845)
>> >
>> > The bot has tested the following trees: v4.15.12, v4.14.29, v4.9.89, v4.4.123, v4.1.50, v3.18.101.
>> >
>> > v4.15.12: OK!
>> > v4.14.29: OK!
>> > v4.9.89: OK!
>> > v4.4.123: OK!
>> > v4.1.50: OK!
>> > v3.18.101: OK!
>> >
>> > Please reply with "ack" to have this patch included in the appropriate stable trees.
>
>That might help, but the testing and validation is completely
>opaque. If I wanted to know what that "OK!" actually meant, where
>do I go to find that out?

This is actually something I want maintainers to dictate. What sort of
testing would make the XFS folks happy here? Right now I'm doing
"./check 'xfs/*'" with xfstests. Is it sufficient? Anything else you'd like to see?

--
Thanks,
Sasha
