Date:	Wed, 3 Aug 2011 11:57:54 +0200
From:	Jan Kara <jack@...e.cz>
To:	Toshiyuki Okajima <toshi.okajima@...fujitsu.com>
Cc:	Jan Kara <jack@...e.cz>, akpm@...ux-foundation.org,
	adilger.kernel@...ger.ca, linux-ext4@...r.kernel.org
Subject: Re: [PATCH] ext3: fix message in ext3_remount for rw-remount case

  Hello,

On Wed 03-08-11 11:42:03, Toshiyuki Okajima wrote:
> >(2011/08/01 18:57), Jan Kara wrote:
> >>On Mon 01-08-11 18:45:58, Toshiyuki Okajima wrote:
> >>>(2011/08/01 17:45), Jan Kara wrote:
> >>>>On Mon 01-08-11 13:54:51, Toshiyuki Okajima wrote:
> >>>>>If there are some inodes on the orphan list while a filesystem is
> >>>>>mounted read-only, we should recommend that people umount and then
> >>>>>mount it when they try to remount it read-write. But the current
> >>>>>message/comment recommends that they umount and then remount it.
> <SNIP>
> >>>>the most... BTW, I guess you didn't really see this message in practice, did
> >>>>you?
> >>>Actually, I have seen this message in practice while the quotacheck
> >>>command was repeatedly executed, once an hour.
> >>Interesting. Are you able to reproduce this? Quotacheck does remount
> >>read-only + remount read-write but you cannot really remount the filesystem
> >>read-only when it has orphan inodes, and so you should not see those when
> >>you remount read-write again. Possibly there's a race between remounting
> >>and unlinking...
> >Yes, I can reproduce it. However, it is not reproduced frequently with
> >the original procedure (quotacheck once an hour), so I made a reproducer.
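> A minimal userspace sketch of the reproducer's idea (illustrative only;
> my actual reproducer differs, and the device and mount point below are
> placeholders):
> -----------------------------------------------------------------------
> /* Loop unlink() in one process against remount ro/rw in another. */
> #include <fcntl.h>
> #include <sys/mount.h>
> #include <unistd.h>
> 
> #define DEV "/dev/sdb1"		/* scratch ext3 device (placeholder) */
> #define MNT "/mnt/test"		/* its mount point (placeholder) */
> 
> int main(void)
> {
> 	if (fork() == 0) {
> 		/* Child: create, grow, and unlink a file, forever. */
> 		char buf[4096] = { 0 };
> 		for (;;) {
> 			int fd = open(MNT "/victim", O_CREAT | O_WRONLY, 0644);
> 			if (fd >= 0) {
> 				write(fd, buf, sizeof(buf));
> 				close(fd);
> 			}
> 			unlink(MNT "/victim");	/* races with the remounts */
> 		}
> 	}
> 	/* Parent: remount read-only, then read-write, forever. */
> 	for (;;) {
> 		mount(DEV, MNT, NULL, MS_REMOUNT | MS_RDONLY, NULL);
> 		mount(DEV, MNT, NULL, MS_REMOUNT, NULL);
> 	}
> 	return 0;
> }
> -----------------------------------------------------------------------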
> To tell the truth, I think the race is what produces this message:
> -----------------------------------------------------------------------
>  EXT3-fs: <dev>: couldn't remount RDWR because of
>       unprocessed orphan inode list.  Please umount/remount instead.
> -----------------------------------------------------------------------
> which hides a serious problem.
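> For reference, this message comes from the orphan-list check on the
> read-write remount path in ext3_remount() (fs/ext3/super.c); roughly,
> paraphrased rather than quoted verbatim:
> -----------------------------------------------------------------------
> /* in ext3_remount(), when going from read-only to read-write */
> if (es->s_last_orphan) {
> 	/*
> 	 * An unprocessed orphan list is left over from a previous
> 	 * read-only mount; require a full umount/mount for now.
> 	 */
> 	printk(KERN_WARNING "EXT3-fs: %s: couldn't remount RDWR "
> 	       "because of unprocessed orphan inode list.  "
> 	       "Please umount/remount instead.\n", sb->s_id);
> 	err = -EINVAL;
> 	goto restore_opts;
> }
> -----------------------------------------------------------------------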
  I've inquired about this on linux-fsdevel (I think you were on CC unless
I forgot). It's a race in the VFS remount code, as you correctly analyzed
below. People are working on fixing it, but it's not trivial, and the
filesystem is really the wrong place to fix such a problem. If there is a
trivial fix for ext3 to work around the issue, I can take it, but I'm not
willing to push anything complex; the effort is better spent on a generic
fix.

								Honza

> Using my reproducer, I also found that it can produce another message,
> different from the one shown above:
> -----------------------------------------------------------------------
> EXT3-fs error (device <dev>) in start_transaction: Readonly filesystem
> -----------------------------------------------------------------------
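> This message is printed via ext3_std_error() when the truncate path
> cannot start a journal handle; roughly, paraphrased from fs/ext3/inode.c:
> -----------------------------------------------------------------------
> /* helper used by ext3_evict_inode() and ext3_truncate() */
> static handle_t *start_transaction(struct inode *inode)
> {
> 	handle_t *result;
> 
> 	result = ext3_journal_start(inode, blocks_for_truncate(inode));
> 	if (!IS_ERR(result))
> 		return result;
> 
> 	/* logs "EXT3-fs error (device ...) in start_transaction: ..." */
> 	ext3_std_error(inode->i_sb, PTR_ERR(result));
> 	return result;
> }
> -----------------------------------------------------------------------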
> After examining the code path that can emit this message, I found that it
> appears when the following steps occur:
> 
> [[CASE 1]]
> ( 1)  [process A] do_unlinkat
> ( 2)  [process B] do_remount_sb(, RDONLY, )
> ( 3)  [process A]  vfs_unlink
> ( 4)  [process A]   ext3_unlink
> ( 5)  [process A]    ext3_journal_start
> ( 6)  [process B]  fs_may_remount_ro   (=> return 0)
> ( 7)  [process A]    inode->i_nlink-- (i_nlink=0)
> ( 8)  [process A]    ext3_orphan_add
> ( 9)  [process A]    ext3_journal_stop
> (10)  [process A]  dput
> (11)  [process A]   iput
> (12)  [process A]    ext3_evict_inode
> (13)  [process B]  ext3_remount
> (14)  [process A]     start_transaction
> (15)  [process B]   sb->s_flags |= MS_RDONLY
> (16)  [process B]   ext3_mark_recovery_complete
> (17)  [process A]      start_this_handle (new transaction is created)
> (18)  [process A]     ext3_truncate
> (19)  [process A]      start_transaction (failed => this message is displayed)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> (20)  [process A]     ext3_orphan_del
> (21)  [process A]     ext3_journal_stop
> 
> * "Process A" deletes the file successfully at (21). However, the file's
>    data blocks are left behind because (18) fails. **Furthermore, a new
>    transaction can be created after ext3_mark_recovery_complete finishes.**
>    (See the sketch below for why step (19) fails.)
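> Step (19) fails because ext3_journal_start() refuses to start a handle
> once MS_RDONLY has been set at step (15); roughly, paraphrased from
> fs/ext3/super.c:
> -----------------------------------------------------------------------
> handle_t *ext3_journal_start_sb(struct super_block *sb, int nblocks)
> {
> 	journal_t *journal;
> 
> 	if (sb->s_flags & MS_RDONLY)	/* set by process B at (15) */
> 		return ERR_PTR(-EROFS);	/* => "Readonly filesystem" */
> 
> 	journal = EXT3_SB(sb)->s_journal;
> 	if (is_journal_aborted(journal)) {
> 		ext3_abort(sb, __func__, "Detected aborted journal");
> 		return ERR_PTR(-EROFS);
> 	}
> 	return journal_start(journal, nblocks);
> }
> -----------------------------------------------------------------------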
> 
> [[CASE 2]] (step numbers parallel CASE 1; here ext3_mark_recovery_complete
>             happens last, as step (22))
> ( 1)  [process A] do_unlinkat
> ( 2)  [process B] do_remount_sb(, RDONLY, )
> ( 3)  [process A]  vfs_unlink
> ( 4)  [process A]   ext3_unlink
> ( 5)  [process A]    ext3_journal_start
> ( 6)  [process B]  fs_may_remount_ro   (=> return 0)
> ( 7)  [process A]    inode->i_nlink-- (i_nlink=0)
> ( 8)  [process A]    ext3_orphan_add
> ( 9)  [process A]    ext3_journal_stop
> (10)  [process A]  dput
> (11)  [process A]   iput
> (12)  [process A]    ext3_evict_inode
> (13)  [process B]  ext3_remount
> (14)  [process A]     start_transaction
> (15)  [process B]   sb->s_flags |= MS_RDONLY
> (17)  [process A]      start_this_handle (new transaction is created)
> (18)  [process A]     ext3_truncate
> (19)  [process A]      start_transaction (failed => this message is displayed)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> (20)  [process A]     ext3_orphan_del
> (21)  [process A]     ext3_journal_stop
> (22)  [process B]   ext3_mark_recovery_complete
> 
> * "Process A" deletes the file successfully at (21). However, the file's
>    data blocks are left behind because (18) fails. Here, the transaction
>    can finish before ext3_mark_recovery_complete does.
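> In both cases the window opens at step (6): fs_may_remount_ro() inspects
> only currently-open files (writable ones, or ones whose i_nlink is already
> 0), and the file being unlinked need not be open at all, so the check
> passes; roughly, paraphrased from fs/file_table.c (locking omitted):
> -----------------------------------------------------------------------
> int fs_may_remount_ro(struct super_block *sb)
> {
> 	struct file *file;
> 
> 	/* Check that no files are currently opened for writing. */
> 	list_for_each_entry(file, &sb->s_files, f_u.fu_list) {
> 		struct inode *inode = file->f_path.dentry->d_inode;
> 
> 		/* File with pending delete? */
> 		if (inode->i_nlink == 0)
> 			return 0;
> 
> 		/* Writable file? */
> 		if (S_ISREG(inode->i_mode) && (file->f_mode & FMODE_WRITE))
> 			return 0;
> 	}
> 	return 1;	/* nothing prevents the read-only remount */
> }
> -----------------------------------------------------------------------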
> 
> I will try to fix this problem so that it does not trigger a filesystem
> error. Please comment on the fix once I have written it.
> 
> Thanks,
> Toshiyuki Okajima
> 
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
