Date:	Sun, 23 Mar 2014 14:14:16 +0100
From:	Jan Kara <jack@...e.cz>
To:	Benjamin LaHaise <bcrl@...ck.org>
Cc:	Alexander Viro <viro@...iv.linux.org.uk>,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org
Subject: Re: bdi has dirty inode after umount of ext4 fs in 3.4.83

On Fri 21-03-14 11:25:41, Benjamin LaHaise wrote:
  Hello,

> After adding some debugging code in an application to check for dirty 
> buffers on a bdi after umount, I'm seeing instances where b_dirty has 
> exactly 1 dirty inode listed on a 3.4.83 kernel after umount() of a 
> filesystem.  Roughly what the application does is to umount an ext3 
> filesystem (using the ext4 codebase), perform an fsync() of the block 
> device, then check the bdi stats in /sys/kernel/debug/bdi/252:4/stats (this
> is a dm partition on top of a dm multipath device for an FC LUN).  I've 
> found that if I add a sync() call instead of the fsync(), the b_dirty 
> count usually drops to 0, but not always.  I've added some debugging 
> code to the bdi stats dump, and the inode on the b_dirty list shows up as:
> 
> 	inode=ffff88081beaada0, i_ino=0, i_nlink=1 i_sb=ffff88083c03e400
> 	i_state=0x00000004 i_data.nrpages=4 i_count=3
> 	i_sb->s_dev=0x00000002
> 
> The fact that the inode number is 0 looks very odd.
  So the dirty inode is almost certainly a block device inode. Another clue
is that fsync(2) doesn't actually clear inode dirty state (especially not
for block device inodes, since the bdev inode is a special one that the
filesystem usually never gets to inspect). sync(2) does in general clear
inode dirty state, because that is handled by the flusher thread. However,
if ->sync_fs() dirties the block device inode, the subsequent
sync_blockdev() call only writes the data and doesn't clear the inode
state. So even with sync(2) the block device inode can remain dirty.
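
Roughly, the sequence you describe would look like the following (an
untested userspace sketch; the mount point, the device node, and the
252:4 major:minor numbers are just placeholders for your setup):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mount.h>

int main(void)
{
	/* Unmount the ext3 filesystem (mounted via the ext4 codebase). */
	if (umount("/mnt/test"))
		perror("umount");

	/* fsync(2) on the block device writes the data, but as noted
	 * above it does not clear the bdev inode's dirty state. */
	int fd = open("/dev/mapper/testdev", O_RDONLY);
	if (fd >= 0) {
		if (fsync(fd))
			perror("fsync");
		close(fd);
	}

	/* sync(2) lets the flusher thread clear inode dirty state, but
	 * ->sync_fs() may re-dirty the bdev inode afterwards. */
	sync();

	/* Inspect the bdi stats for the device. */
	system("cat /sys/kernel/debug/bdi/252:4/stats");
	return 0;
}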

In general, inode dirty state isn't reliable: I_DIRTY_PAGES can be set even
when the inode is in fact clean. You have to use
mapping_tagged(inode->i_mapping, PAGECACHE_TAG_DIRTY) to determine whether
the inode actually has any dirty data.
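
E.g. something like this in your bdi stats debugging code (a kernel-side
sketch, untested; the helper name is made up):

#include <linux/fs.h>
#include <linux/pagemap.h>

/* Trust the page cache dirty tag rather than i_state: report an inode
 * as having dirty data only if its mapping really carries dirty pages. */
static bool inode_has_dirty_pages(struct inode *inode)
{
	return mapping_tagged(inode->i_mapping, PAGECACHE_TAG_DIRTY);
}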

> Testing the application on top of a newer kernel is a bit of a challenge 
> as other parts of the system have yet to be forward ported from the 3.4 
> kernel, but I'll try to come up with a test case that shows the issue.  
> In the meantime, is anyone aware of any umount()/sync-related issues that
> might be affecting ext4 in 3.4.83?  Thanks in advance for any ideas on 
> how to track this down.  Cheers,
  Newer kernels don't bring anything substantially new to the picture...

								Honza

-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR