Date: Tue, 11 Jun 2024 13:05:05 +0200
From: Jan Kara <jack@...e.cz>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: Jan Kara <jack@...e.cz>, brauner@...nel.org, viro@...iv.linux.org.uk,
	david@...morbit.com, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] vfs: partially sanitize i_state zeroing on inode creation

On Tue 11-06-24 12:23:59, Mateusz Guzik wrote:
> On Tue, Jun 11, 2024 at 12:02:22PM +0200, Jan Kara wrote:
> > On Tue 11-06-24 06:15:40, Mateusz Guzik wrote:
> > > new_inode used to have the following:
> > > 	spin_lock(&inode_lock);
> > > 	inodes_stat.nr_inodes++;
> > > 	list_add(&inode->i_list, &inode_in_use);
> > > 	list_add(&inode->i_sb_list, &sb->s_inodes);
> > > 	inode->i_ino = ++last_ino;
> > > 	inode->i_state = 0;
> > > 	spin_unlock(&inode_lock);
> > > 
> > > Over time things disappeared, got moved around or got replaced (e.g. the
> > > global inode lock with a per-inode lock), and eventually this got reduced to:
> > > 	spin_lock(&inode->i_lock);
> > > 	inode->i_state = 0;
> > > 	spin_unlock(&inode->i_lock);
> > > 
> > > But the lock acquire here does not synchronize against anyone.
> > > 
> > > Additionally, iget5_locked performs the i_state = 0 assignment without any
> > > locks to begin with, and the two combined look confusing at best.
> > > 
> > > It looks like the current state is a leftover which was not cleaned up.
> > > 
> > > Ideally it would be an invariant that i_state == 0 to begin with, but
> > > achieving that would require dealing with all filesystem alloc handlers
> > > one by one.
> > > 
> > > In the meantime drop the misleading locking and move i_state zeroing to
> > > alloc_inode so that others don't need to deal with it by hand.
> > > 
> > > Signed-off-by: Mateusz Guzik <mjguzik@...il.com>
> > 
> > Good point. But the initialization would seem more natural in
> > inode_init_always(), wouldn't it? And that will also address your "FIXME"
> > comment.
> > 
> 
> My point is that by the time the inode is destroyed, some of the fields,
> like i_state, should be set to a well-known value, in this case preferably
> plain 0.

Well, i_state is set to a more or less well-defined value, but it is not
zero. I don't see a performance difference between setting it to 0 on
freeing and setting it on allocation, and when it is done on allocation it
is actually much easier to find when reading the code.
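
I.e. with the patch as posted there is a single obvious place on the
allocation path where every new in-core inode starts from i_state == 0,
roughly (a sketch of the idea only, not the actual hunk):

	static struct inode *alloc_inode(struct super_block *sb)
	{
		struct inode *inode;

		/* allocation and common initialization as today */
		...
		/* one well-known starting value, set in one obvious place */
		inode->i_state = 0;
		...
		return inode;
	}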

> I did not patch inode_init_always because it is exported and xfs uses it
> in 2 spots, only one of which zeroes i_state immediately afterwards. The
> other one is a little more involved; it probably would not be a problem
> as the value is altered later anyway, but I don't want to mess with the
> semantics of the function if that can be easily avoided.

Well, I'd consider that another good reason to actually clean this up.
Look, inode_init_always() is used in bcachefs and xfs. bcachefs sets
i_state to 0 just before calling inode_init_always(), xfs sets it just
after one of its calls to inode_init_always(), and the other call, in
xfs_reinit_inode(), is reached only from xfs_iget_recycle(), which sets
i_state to I_NEW. So I claim that moving the i_state clearing into
inode_init_always() will not cause any issues and is actually desirable.
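
IOW something like this (only a sketch of where the clearing would live, I
did not check that it compiles):

	int inode_init_always(struct super_block *sb, struct inode *inode)
	{
		...
		/* every inode going through the common init starts with a clean state */
		inode->i_state = 0;
		...
		return 0;
	}

plus dropping the then-redundant clearing in bcachefs and in the xfs caller.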

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
