Date:   Thu, 5 Aug 2021 15:48:36 +0000
From:   Konstantin Komarov <>
To:     "Darrick J. Wong" <>,
        Theodore Ts'o <>
CC:     Linus Torvalds <>,
        Matthew Wilcox <>,
        "Leonidas P. Papadakos" <>,
        "" <>,
        Greg Kroah-Hartman <>,
        Hans de Goede <>,
        linux-fsdevel <>,
        Linux Kernel Mailing List <>,
        Al Viro <>
Subject: RE: [GIT PULL] vboxsf fixes for 5.14-1

> From: Darrick J. Wong <>
> Sent: Wednesday, August 4, 2021 4:04 AM
> To: Theodore Ts'o <>
> Cc: Linus Torvalds <>; Matthew Wilcox <>; Leonidas P. Papadakos
> <>; Konstantin Komarov <>;; Greg Kroah-
> Hartman <>; Hans de Goede <>; linux-fsdevel <>;
> Linux Kernel Mailing List <>; Al Viro <>
> Subject: Re: [GIT PULL] vboxsf fixes for 5.14-1
> On Tue, Aug 03, 2021 at 08:49:28PM -0400, Theodore Ts'o wrote:
> > On Tue, Aug 03, 2021 at 05:10:22PM -0700, Linus Torvalds wrote:
> > > The user-space FUSE thing does indeed work reasonably well.
> > >
> > > It performs horribly badly if you care about things like that, though.
> > >
> > > In fact, your own numbers kind of show that:
> > >
> > >   ntfs/default: 670 tests, 55 failures, 211 skipped, 34783 seconds
> > >   ntfs3/default: 664 tests, 67 failures, 206 skipped, 8106 seconds
> > >
> > > and that's kind of the point of ntfs3.
> >
> > Sure, although if you run fsstress in parallel ntfs3 will lock up the
> > system hard, and it has at least one lockdep deadlock complaint.
> > It's not up to me, but personally, I'd feel better if *someone* at
> > Paragon Software responded to Darrick's and my queries about their
> > quality assurance, and/or made commitments that they would at least
> > *try* to fix the problems that about 5 minutes of testing using
> > fstests turned up trivially.
> <cough> Yes, my aim was to gauge their interest in actively QAing the
> driver's current problems so that it doesn't become one of the shabby
> Linux filesystem drivers, like <cough>ntfs.
> Note I didn't even ask for a particular percentage of passing tests,
> because I already know that non-Unix filesystems fail the tests that
> look for the more Unix-specific behaviors.
> I really only wanted them to tell /us/ what the baseline is.  IMHO the
> silence from them is a lot more telling.  Both generic/013 and
> generic/475 are basic "try to create files and read and write data to
> them" exercisers; failing those is a red flag.

Hi Darrick and Theodore! First of all, apologies for the silence on your questions.
Please let me clarify and summarize our QA process for you.

The main point is that we run a number of autotests against the ntfs3 code.
More specifically, we use TeamCity as our CI tool to run these autotests
against every commit to the ntfs3 codebase.

The autotests are divided into fairly standard "promotion" levels: L0, L1, L2,
ranging from the shortest "smoke" set (L0) to the longest set (L2). We need this
to cover the ntfs3 functionality with tests within a given amount of time (the
feedback loop for L0 is minutes, while for L2 it is up to 24 hours).
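As a rough sketch, the level split could be expressed as a small wrapper that maps a promotion level to an fstests invocation. The "quick" and "auto" group names are real fstests groups, but the level-to-group mapping and the exclude-file name here are illustrative assumptions, not our actual CI configuration:

```shell
#!/bin/sh
# Hypothetical sketch: map a promotion level to an fstests invocation.
# "quick" and "auto" are real fstests test groups; the level->group
# mapping and the exclude-file name are assumptions for illustration.
level_cmd() {
  case "$1" in
    L0) echo "./check -g quick -E exclude.ntfs3" ;;  # smoke set, minutes
    L1) echo "./check -g auto -E exclude.ntfs3"  ;;  # medium set
    L2) echo "./check -E exclude.ntfs3"          ;;  # full run, up to 24h
    *)  echo "unknown level: $1" >&2; return 1   ;;
  esac
}

level_cmd L0   # -> ./check -g quick -E exclude.ntfs3
```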

As for the suites, we use a mix of open, well-known suites:
- xfstests, ltp, the pjd suite, fsx, dirstress, fstorture
plus a number of internal autotests: tests developed to cover various parts of the
filesystem specs, regression autotests added to the infrastructure after bugfixes,
and autotests written to exercise the driver on various data sets.
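For concreteness, an fstests run against ntfs3 needs a local.config describing the test devices; the device paths below are placeholders for illustration, not our actual test rig:

```shell
# Hypothetical local.config for an fstests run against ntfs3.
# Device paths are placeholders, not the actual CI setup.
export TEST_DEV=/dev/vdb        # pre-made ntfs3 filesystem kept across tests
export TEST_DIR=/mnt/test       # its mount point
export SCRATCH_DEV=/dev/vdc     # device that tests may re-mkfs at will
export SCRATCH_MNT=/mnt/scratch # scratch mount point
export FSTYP=ntfs3              # filesystem type under test
```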

This approach has been settled practice at Paragon for years, and ntfs3 has been
developed this way from the first line of code. You may refer to the artifacts
linked below, where the autotest results show the progress/coverage over the last year:

the 27th patch-series code (July 2021):
the 25th (March 2021):
the 2nd (August 2020):

Those are results for ntfs3 run within 'linux-next' (the most recent tree as of each
test run's start date). As you can see, we never skipped "tests day" :)

A specific note should be made about xfstests. We have been using this suite as part
of our autotests for several years already. However, the suite originated for Linux-native
file systems, and a number of its cases are not applicable to NTFS. This is one of the
reasons some of the "red-flag" failures are there (e.g. generic/475): they were excluded
at some point, and we missed re-enabling them when the time came :)
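To make the exclusion mechanism concrete: xfstests' check script accepts an exclude file via its -E option, one test name per line, so a stale entry there silently skips a test until someone removes it. A minimal hypothetical example (the file name and contents are illustrative):

```shell
# Hypothetical: build an exclude list and hand it to xfstests' check.
# check's -E option (skip the tests listed in a file) is real; the
# file name and its contents here are illustrative only.
cat > exclude.ntfs3 <<'EOF'
generic/475
EOF
# ./check -E exclude.ntfs3    (run from the xfstests tree)
```

A forgotten line in such a file is exactly how a test stays disabled long after the reason for excluding it has gone away.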

Thank you all for the effort of running and looking closer at our code. In the next
patchset, tests 91, 317 and 475 should be resolved, and we are now reviewing the other
excluded tests to find more such cases.

I hope this resolves some of your concerns.

> --D
> > I can even give them patches and configs to make it trivially easy for
> > them to run fstests using KVM or GCE....
> >
> > 				- Ted
