Date:	Sun, 1 Apr 2007 22:54:02 -0400 (EDT)
From:	Alan Stern <stern@...land.harvard.edu>
To:	"Rafael J. Wysocki" <rjw@...k.pl>
cc:	Pavel Machek <pavel@....cz>, David Zeuthen <davidz@...hat.com>,
	Maxim <maximlevitsky@...il.com>,
	Pete Zaitcev <zaitcev@...hat.com>, <gregkh@...e.de>,
	Kernel development list <linux-kernel@...r.kernel.org>,
	USB development list <linux-usb-devel@...ts.sourceforge.net>
Subject: Re: USB: on suspend to ram/disk all usb devices are replugged

On Sun, 1 Apr 2007, Rafael J. Wysocki wrote:

> Hi,
> 
> On Sunday, 1 April 2007 20:34, Pavel Machek wrote:
> > Hi!
> > 
> > > > The problem is that suspending _with_ removable mass storage devices
> > > > attached just will not work. Users will unplug them, then complain
> > > > about corruption. An advanced user will unplug them, work with them
> > > > somewhere else, replug them, then lose the filesystem.
> > > > 
> > > > Feel free to send a patch to teach filesystems to handle this.
> > > 
> > > Actually what's needed is a Persistent Logical Volume Manager.  With it,
> > > you could even mount a filesystem on a USB device, unplug the device, plug
> > > it back into a different port, and still be able to use the filesystem.
> > > 
> > > But you're still likely to run into trouble if you unplug a storage
> > > device, move it to another system and write on it, then plug it back into
> > > the original system.  The PLVM would somehow have to recognize that the
> > > data had been changed.  I don't know a foolproof way of doing that.
> > 
> > Such detection should be possible when done at the filesystem level.
> > 
> > For example, ext3 would notice that someone had replayed the journal.
> > 
> > Or we could create an ext5 where each r/w mount would update the mount
> > time... actually we probably already have a last-mount time in ext3,
> > so...
> 
> I'm thinking we'll need to introduce something like freezing notifiers, i.e.
> the ability for an interested subsystem to register a notifier that will be
> called right after user space processes have been frozen and right before
> we start to thaw them (that may allow us to handle the microcode issue in
> a clean way, for example).
> 
> Now if a filesystem registers a freezing notifier, it may be unmounted during
> the suspend and remounted during the resume in a more or less transparent
> way.  I think an additional mount flag would be needed for filesystems that
> should install such notifiers, like "removable".
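
As a side note, a freezing-notifier interface along those lines might look
something like the sketch below, built on the kernel's existing
notifier-chain machinery (include/linux/notifier.h).  The event names and
the registration helper are made up for illustration; no such API exists
in the tree today:

#include <linux/notifier.h>

/* Hypothetical events: after tasks are frozen, before they are thawed. */
#define FREEZE_TASKS_DONE	1
#define THAW_TASKS_BEGIN	2

static BLOCKING_NOTIFIER_HEAD(freeze_chain);

int register_freeze_notifier(struct notifier_block *nb)
{
	return blocking_notifier_chain_register(&freeze_chain, nb);
}

/* A filesystem (or the microcode driver) would hook both events: */
static int myfs_freeze_event(struct notifier_block *nb,
			     unsigned long event, void *unused)
{
	switch (event) {
	case FREEZE_TASKS_DONE:
		/* quiesce: flush the journal, snapshot superblock state */
		break;
	case THAW_TASKS_BEGIN:
		/* revalidate: same device, same superblock as at suspend? */
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block myfs_nb = {
	.notifier_call = myfs_freeze_event,
};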

"Unmounted" and "remounted" aren't quite the right words.  All you really
need to do is check (during the resume) that the superblock is still in
the same state as it was when the suspend occurred.  After all, if someone
else had mounted the dirty filesystem during the interim, they surely
would have altered the superblock.  Note that even a read-only mount of a
dirty ext3 filesystem will recover the journal; presumably that will alter
the superblock too.
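
To make that check concrete, here is a userspace sketch: snapshot the
relevant superblock fields before suspend and compare them on resume.  The
offsets follow the standard ext2/ext3 on-disk layout (superblock at byte
1024, little-endian fields; the sketch assumes a little-endian host).  A
kernel-side check would of course consult the in-memory
struct ext3_super_block instead:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct sb_snapshot {
	uint32_t mtime;     /* s_mtime: last mount time (offset 44) */
	uint32_t wtime;     /* s_wtime: last write time (offset 48) */
	uint16_t mnt_count; /* s_mnt_count: mounts since fsck (offset 52) */
};

static int read_snapshot(const char *dev, struct sb_snapshot *s)
{
	unsigned char sb[1024];
	uint16_t magic;
	FILE *f = fopen(dev, "rb");

	if (!f)
		return -1;
	if (fseek(f, 1024, SEEK_SET) || fread(sb, sizeof(sb), 1, f) != 1) {
		fclose(f);
		return -1;
	}
	fclose(f);
	memcpy(&magic, sb + 56, 2);
	if (magic != 0xEF53)	/* s_magic: not an ext2/ext3 filesystem */
		return -1;
	memcpy(&s->mtime, sb + 44, 4);
	memcpy(&s->wtime, sb + 48, 4);
	memcpy(&s->mnt_count, sb + 52, 2);
	return 0;
}

/*
 * On resume: any difference means someone touched the filesystem in the
 * interim -- a mount, or presumably even the journal recovery done by a
 * read-only mount.
 */
static int sb_unchanged(const struct sb_snapshot *a,
			const struct sb_snapshot *b)
{
	return a->mtime == b->mtime && a->wtime == b->wtime &&
	       a->mnt_count == b->mnt_count;
}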

I don't think a "removable" flag is needed.  There's no reason not to do 
this for every mounted filesystem.

(Also "removable" isn't the right word either, since hot-pluggable devices 
with non-removable media would need the same treatment.  In fact, while 
the system is hibernating someone could even remove an internal IDE drive 
and then put it back!)

One final nit:  It's possible for someone to alter the data sectors 
directly, without mounting the filesystem.  This is sufficiently perverse 
that we probably shouldn't worry about it.

Alan Stern
