Date:	Sat, 18 May 2013 16:13:25 +0400
From:	Dmitry Monakhov <dmonakhov@...nvz.org>
To:	Dave Chinner <david@...morbit.com>
Cc:	xfs@....sgi.com, linux-ext4@...r.kernel.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] xfstests: test data integrity under disk failure

On Fri, 17 May 2013 09:31:53 +1000, Dave Chinner <david@...morbit.com> wrote:
> On Thu, May 16, 2013 at 04:07:32PM +0400, Dmitry Monakhov wrote:
> > The Parallels team has a good old tool called hwflush-check, a server/client
> > application for testing data integrity under system/disk failure conditions.
> > Usually we run hwflush-check on two different hosts and use a PMU to trigger a real
> > power failure of the client as a whole unit. These tests may be used for
> > SSD checking (some SSDs are known to have problems with hwflush).
> > I hope it will be good to share it with the community.
> > 
> > This test simulates just one disk failure, while the client system should
> > survive it. The test extends the idea of shared/305:
> > 1) Run the hwflush-check server and client on the same host as usual
> > 2) Simulate disk failure via the blkdev fault injection API, aka 'make-it-fail'
> > 3) Unmount the failed device
> > 4) Make the disk operational again
> > 5) Mount the filesystem
> > 6) Check data integrity
> 
> So, for local disk failure, why do we need a client/server network
> architecture? That just complicates the code, and AFAICT
> all the client does is send report packets to the server which
> contain an id number that is kept in memory. If on restart of the
> client after failure the ID in the report packet doesn't match what
> the server wants, then it fails the test.
> 
> So, why is the server needed here? Just dump the IDs the client
> writes to the file on a device not being tested, and either diff
> them against a golden image or run a check to see all the IDs are
> monotonically increasing. That removes all the networking code from
> the test, the need for a client/server architecture, etc, and makes
> the test far easier to review.
In fact the reason is quite simple. Initially this tool was designed
for real disk cache testing under power failure conditions, and we want to
share it with the community. Of course it is possible to simplify things
for the 'one host' case, but the saving is not that big. Let's review it as is
and keep it simple but useful not just for local failures but also for real
power failure tests.
To be fair, the initial idea was to add persistent state to FIO,
but the logic started getting too complex, so we wrote hwflush-check.
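As for the local-only check you suggest, it could look something like the sketch below. `check_ids` is a hypothetical helper name, assuming the client appends one numeric ID per line to a log file kept on a device that is not under test:

```shell
# Hypothetical local integrity check: IDs written by the client, one per
# line, must be strictly increasing across the simulated failure.
check_ids() {
    # Exit non-zero at the first ID that is not greater than its predecessor.
    awk 'NR > 1 && $1 + 0 <= prev { exit 1 } { prev = $1 + 0 }' "$1"
}
```

After remount, something like `check_ids /var/tmp/hwflush.ids || _fail "IDs not monotonic"` would replace the server-side comparison entirely.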

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@...morbit.com
> --
> To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
