Message-ID: <4CC45318.3080002@ddn.com>
Date: Sun, 24 Oct 2010 17:39:04 +0200
From: Bernd Schubert <bschubert@....com>
To: Ric Wheeler <rwheeler@...hat.com>
CC: Ted Ts'o <tytso@....edu>, Amir Goldstein <amir73il@...il.com>,
Bernd Schubert <bs_lists@...ef.fastmail.fm>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
Andreas Dilger <adilger@....com>
Subject: Re: ext4_clear_journal_err: Filesystem error recorded from previous
mount: IO failure
On 10/24/2010 05:20 PM, Ric Wheeler wrote:
>
> This still sounds more like a Lustre issue than an ext4 one, Andreas can fill in
> the technical details.
The underlying device handling is unrelated to Lustre. In that sense it
is just a local filesystem.
>
> Whatever shared storage sits under ext4 is irrelevant to the failover case.
>
> Unless Lustre does other magic, they still need to obey the basic cluster rule
> - one mount per cluster.
Yes, one mount per cluster.
>
> If Lustre is doing the same trick you would do with active/passive failover
> clusters that export ext4 via NFS, you would still need to clean up the file
> system before being able to re-export it from a failover node.
What exactly is your question here? We use Pacemaker/STONITH to do the
fencing job.
What exactly do you want to clean up? The device is recovered by journal
replay, Lustre goes into recovery mode, clients reconnect, locks are
re-established, and incomplete transactions are resent.
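For the record, a rough sketch of what such a setup can look like with the
crm shell (the node name, device path, IPMI address and credentials below
are made up for illustration, and the exact resource agents depend on what
is installed):

    # IPMI-based STONITH resource: a failed server is power-fenced
    # before any of its resources are taken over elsewhere
    # (hypothetical address/credentials).
    crm configure primitive st-server1 stonith:external/ipmi \
        params hostname=server1 ipaddr=192.168.1.10 userid=admin passwd=secret \
        op monitor interval=60s

    # The ext4-backed target as a cluster-managed mount; Pacemaker
    # guarantees it is active on at most one node at a time.
    crm configure primitive fs-target ocf:heartbeat:Filesystem \
        params device=/dev/mapper/target0 directory=/mnt/target fstype=ext4 \
        op monitor interval=120s

    # Fencing must be enabled, otherwise resources are not moved
    # after a node failure.
    crm configure property stonith-enabled=true

The point being: the failed node is fenced first, and only then is the
device mounted (and its journal replayed) on the failover node, so the
one-mount-per-cluster rule is never violated.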
Cheers,
Bernd