Message-ID: <93e74f9f-6694-a3e9-4fac-981389522d25@dupond.be>
Date:   Mon, 9 Mar 2020 16:33:52 +0100
From:   Jean-Louis Dupond <jean-louis@...ond.be>
To:     "Theodore Y. Ts'o" <tytso@....edu>
Cc:     linux-ext4@...r.kernel.org
Subject: Re: Filesystem corruption after unreachable storage

On 9/03/2020 16:18, Theodore Y. Ts'o wrote:
> Did the panic happen immediately, or did things hang until the storage
> recovered, and *then* it rebooted?  Or did the hard reset and reboot
> happen before the storage network connection was restored?

The panic (well, it was just frozen, with no stack trace or automatic
reboot) happened *after* the storage came back online.
Nothing happens while the storage is offline, even if we wait until the
SCSI timeout is exceeded (180s * 6, so roughly 18 minutes).
It's only when the storage returns that the filesystem goes read-only or
panics (depending on the errors= mount option).
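
As an illustration of the remount-ro side of this (my own sketch, not
anything from the kernel code being discussed), a small userspace check
against an assumed mount point can show when errors=remount-ro has
kicked in:

/* Hypothetical check: report whether a mount point has been remounted
 * read-only, e.g. after ext4's errors=remount-ro triggered.
 * Build: cc -o check_ro check_ro.c
 */
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
        const char *mnt = argc > 1 ? argv[1] : "/mnt/data";  /* assumed mount point */
        struct statvfs st;

        if (statvfs(mnt, &st) != 0) {
                perror("statvfs");
                return 1;
        }

        /* ST_RDONLY is set once the kernel has remounted the fs read-only. */
        printf("%s is %s\n", mnt,
               (st.f_flag & ST_RDONLY) ? "read-only" : "read-write");
        return 0;
}
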
>
> Fundamentally I think what's going on is that even though there is an
> I/O error reported back to the OS, in some cases the outstanding
> I/O actually happens.  So in the errors=panic case, we do update the
> superblock saying that the file system contains inconsistencies.  And
> then we reboot.  But it appears that even though the host rebooted, the
> storage area network *did* manage to send the I/O to the device.
It seems that updating the superblock to state that the filesystem
contains errors actually makes things worse.
By the time this happens, the storage is already accessible again, so it
seems logical that the I/O is written out.
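
For reference, the "contains errors" state lives in the superblock's
s_state field (the EXT2_ERROR_FS bit, shown by dumpe2fs -h as
"Filesystem state").  A minimal read-only sketch with libext2fs,
assuming the device path and that the e2fsprogs headers are installed
(link with -lext2fs):

/* Print whether the on-disk superblock carries the error flag. */
#include <stdio.h>
#include <ext2fs/ext2fs.h>

int main(int argc, char **argv)
{
        const char *dev = argc > 1 ? argv[1] : "/dev/sdb1";  /* assumed device */
        ext2_filsys fs;
        errcode_t err;

        err = ext2fs_open(dev, 0, 0, 0, unix_io_manager, &fs);
        if (err) {
                fprintf(stderr, "ext2fs_open(%s) failed: %ld\n", dev, (long) err);
                return 1;
        }

        printf("s_state = 0x%04x:%s%s\n", (unsigned) fs->super->s_state,
               (fs->super->s_state & EXT2_VALID_FS) ? " clean" : " not cleanly unmounted",
               (fs->super->s_state & EXT2_ERROR_FS) ? ", errors detected" : "");

        ext2fs_close(fs);
        return 0;
}
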
>
> I'm not sure what we can really do here, other than simply making the
> SCSI timeout infinite.  The problem is that storage area networks are
> flaky.  Sometimes I/O's make it through, and even though we get an
> error, it's an error from the local SCSI layer --- and it's possible
> that the I/O will make it through.  In other cases, even though the
> storage area network was disconnected at the time we sent the I/O
> saying the file system has problems, and then rebooted, the I/O
> actually makes it through.  Given that, if we're not sure, forcing a
> full file system check is the better part of valor.
If we reset the VM before the storage is back, the filesystem check
completes fine in automatic mode.
So I think we should (in some cases) not try to update the superblock
on I/O errors at all, but just go read-only or panic, because it seems
like updating the superblock is what makes things worse.

Or could changes be made to e2fsck to allow automatic repair of this
kind of error, for example?
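
(For what it's worth, the state e2fsck would have to clear here is the
same EXT2_ERROR_FS bit mentioned above.  The following is a rough sketch
of what clearing it amounts to at the superblock level; e2fsck itself
only does this at the end of a successful run, after a full consistency
check, so treat this as an illustration rather than a suggested tool.)

/* Sketch only: clear the "has errors" bit and mark the fs clean, the
 * way e2fsck does after a successful repair.  Do not run this against
 * a filesystem that has not actually been checked.  Link with -lext2fs.
 */
#include <stdio.h>
#include <ext2fs/ext2fs.h>

int main(int argc, char **argv)
{
        const char *dev = argc > 1 ? argv[1] : "/dev/sdb1";  /* assumed device */
        ext2_filsys fs;

        if (ext2fs_open(dev, EXT2_FLAG_RW, 0, 0, unix_io_manager, &fs)) {
                fprintf(stderr, "cannot open %s read-write\n", dev);
                return 1;
        }

        fs->super->s_state &= ~EXT2_ERROR_FS;   /* drop the error flag */
        fs->super->s_state |= EXT2_VALID_FS;    /* mark the fs clean */
        ext2fs_mark_super_dirty(fs);            /* superblock written on close */

        ext2fs_close(fs);
        return 0;
}
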
>
> And if it hangs forever and we do a hard reset and reboot, I don't know
> *what* to trust from the storage area network.  Ideally, there would
> be some way to do a hard reset of the storage area network so that all
> outstanding I/O's from the host that we are about to reset will get
> forgotten before we actually do the hard reset.
>
> 						- Ted
