Message-ID: <1275650416.3403.5.camel@maxim-laptop>
Date: Fri, 04 Jun 2010 14:20:16 +0300
From: Maxim Levitsky <maximlevitsky@...il.com>
To: Pavel Machek <pavel@....cz>
Cc: Alan Stern <stern@...land.harvard.edu>,
Jens Axboe <jens.axboe@...cle.com>,
"Rafael J. Wysocki" <rjw@...k.pl>,
linux-pm <linux-pm@...ts.linux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [linux-pm] Is it supposed to be ok to call del_gendisk while
userspace is frozen?
On Thu, 2010-03-04 at 14:53 +0100, Pavel Machek wrote:
> Hi!
>
> > > journalling assumptions broken: commit block is there, but previous
> > > blocks are not intact. Data loss.
> > >
> > > ...and that was the first thing I could think of. Let's not do
> > > this. Barriers were invented for a reason.
> >
> > Very well. Then we still need a solution to the original problem:
> > Devices sometimes need to be unregistered during resume, but
> > del_gendisk() blocks on the writeback thread, which is frozen until
> > after the resume finishes. How do you suggest this be fixed?
>
> Avoid unregistering device during resume. Instead, return errors until
> resume is done and you can call del_gendisk?
This won't help either. The same driver also needs to unregister a
perfectly working device on suspend, because the user might replace the
card while the system is suspended and fool the OS.
There is a setting, CONFIG_MMC_UNSAFE_RESUME, and I use it, but it isn't
enabled by default.
Anyway, to revive that old thread: how about introducing a new
del_gendisk_no_sync()?
It would be a less safe version of del_gendisk() that doesn't sync the
filesystem. Since the driver already knows the card is gone, there is no
point in syncing it.
(The sync is done by invalidate_partition(), so some flag would have to
be propagated down to it.)
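Roughly what I have in mind, as a sketch only (del_gendisk_no_sync(),
__del_gendisk() and the nosync flag are made-up names for illustration,
and the real del_gendisk() teardown is elided):

/*
 * Sketch only, not real kernel code: both entry points share one
 * helper, and the nosync flag tells the partition-invalidation step to
 * skip the filesystem sync because the medium is already gone.
 */
static void __del_gendisk(struct gendisk *disk, bool nosync)
{
	int p;

	for (p = disk->minors - 1; p >= 0; p--)
		/* hypothetical variant of invalidate_partition() that
		 * still drops cached data but only syncs when !nosync */
		__invalidate_partition(disk, p, nosync);

	/* ... the rest of the existing del_gendisk() teardown ... */
}

void del_gendisk(struct gendisk *disk)
{
	__del_gendisk(disk, false);
}

void del_gendisk_no_sync(struct gendisk *disk)
{
	__del_gendisk(disk, true);
}

A driver that detects the card is gone (or was swapped during suspend)
would then call del_gendisk_no_sync() from its resume path instead of
del_gendisk(), so it no longer blocks on the frozen writeback thread.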
Best regards,
Maxim Levitsky
--