Date:   Tue, 8 Aug 2023 18:05:33 +0200
From:   Mateusz Guzik <mjguzik@...il.com>
To:     Al Viro <viro@...iv.linux.org.uk>,
        Christian Brauner <brauner@...nel.org>
Cc:     linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: new_inode_pseudo vs locked inode->i_state = 0

Hello,

new_inode_pseudo is:

struct inode *new_inode_pseudo(struct super_block *sb)
{
	struct inode *inode = alloc_inode(sb);

	if (inode) {
		spin_lock(&inode->i_lock);
		inode->i_state = 0;
		spin_unlock(&inode->i_lock);
	}
	return inode;
}

I'm trying to understand:
1. why it is zeroing i_state there (as opposed to having it happen in
   inode_init_always)
2. why the zeroing takes place with i_lock held

The inode is freshly allocated, not yet added to the hash -- I would
expect that nobody else can see it.

Moreover, another consumer of alloc_inode zeroes i_state without
bothering to take the lock -- see iget5_locked:
[snip]
	struct inode *new = alloc_inode(sb);

	if (new) {
		new->i_state = 0;
[/snip]

I tried to find a justification for it in git; the pre-history-wipe
repo (git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git)
says it came in with "Import 2.1.45pre1" in 1997, which is where my
digging stopped.

As it stands, I strongly suspect this is a leftover waiting for
cleanup. Moving i_state = 0 back into inode_init_always would allow a
few simplifications in the area, and I'm happy to make them, provided
this is indeed safe.
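
For concreteness, a minimal sketch of what I mean (untested, and
assuming nothing relies on i_state only getting zeroed at this point):

	/* in inode_init_always(), next to the other field initialization */
	inode->i_state = 0;

after which new_inode_pseudo collapses to:

struct inode *new_inode_pseudo(struct super_block *sb)
{
	return alloc_inode(sb);
}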

If, on the other hand, the lock is required, then shouldn't it also be
added to iget5_locked?

UNRELATED:

While here, new_inode starts with: spin_lock_prefetch(&sb->s_inode_list_lock)
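
For context, the function in its entirety:

struct inode *new_inode(struct super_block *sb)
{
	struct inode *inode;

	spin_lock_prefetch(&sb->s_inode_list_lock);

	inode = new_inode_pseudo(sb);
	if (inode)
		inode_sb_list_add(inode);
	return inode;
}

Note the lock is only taken at the very end, in inode_sb_list_add,
after the entire allocation and initialization done by
new_inode_pseudo.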

This one also came in *way* back, in a huge commit; ever since, the
line has only been patched to keep it compiling.

This is the only remaining spin_lock_prefetch use in the tree.
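
Modulo arch overrides it boils down to a prefetch-for-write of the
lock's cacheline -- include/linux/prefetch.h has:

#ifndef ARCH_HAS_SPINLOCK_PREFETCH
#define spin_lock_prefetch(x) prefetchw(x)
#endif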

I don't know the original justification, nor whether it made sense at
the time, but it is definitely problematic today in the heavily
multicore era -- there is a ton of work happening between the prefetch
and the actual acquisition of s_inode_list_lock, meaning that if there
is any contention, the cacheline will have been invalidated again by
the time spin_lock is called on it. At that point the prefetch only
adds to the cacheline bouncing.
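
That is, the (completely untested) removal would be nothing more than
dropping the line:

diff --git a/fs/inode.c b/fs/inode.c
--- a/fs/inode.c
+++ b/fs/inode.c
@@ ... @@ struct inode *new_inode(struct super_block *sb)
 {
 	struct inode *inode;
 
-	spin_lock_prefetch(&sb->s_inode_list_lock);
-
 	inode = new_inode_pseudo(sb);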

Personally I would just remove this line without even trying to benchmark.
-- 
Mateusz Guzik <mjguzik gmail.com>
