Message-ID: <20220308124432.etmyoi3gmepg2buq@quack3.lan>
Date:   Tue, 8 Mar 2022 13:44:32 +0100
From:   Jan Kara <jack@...e.cz>
To:     Harshad Shirwadkar <harshadshirwadkar@...il.com>
Cc:     linux-ext4@...r.kernel.org, riteshh@...ux.ibm.com, jack@...e.cz,
        tytso@....edu
Subject: Re: [PATCH 4/5] ext4: rework fast commit commit path

On Tue 08-03-22 02:51:11, Harshad Shirwadkar wrote:
> From: Harshad Shirwadkar <harshadshirwadkar@...il.com>
> 
> This patch reworks fast commit's commit path to remove locking the
> journal for the entire duration of a fast commit. Instead, we only lock
> the journal while marking all the eligible inodes as "committing". This
> allows handles to make progress in parallel with the fast commit.
> 
> Signed-off-by: Harshad Shirwadkar <harshadshirwadkar@...il.com>

...

> @@ -1044,6 +1025,18 @@ static int ext4_fc_perform_commit(journal_t *journal)
>  	int ret = 0;
>  	u32 crc = 0;
>  
> +	/* Lock the journal */
> +	jbd2_journal_lock_updates(journal);
> +	spin_lock(&sbi->s_fc_lock);
> +	list_for_each_entry(iter, &sbi->s_fc_q[FC_Q_MAIN], i_fc_list) {
> +		spin_lock(&iter->i_fc_lock);
> +		ext4_set_inode_state(&iter->vfs_inode,
> +				     EXT4_STATE_FC_COMMITTING);
> +		spin_unlock(&iter->i_fc_lock);
> +	}
> +	spin_unlock(&sbi->s_fc_lock);
> +	jbd2_journal_unlock_updates(journal);
> +

Again, i_fc_lock does not seem to be necessary here...
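
I.e. with i_fc_lock dropped, the marking loop could look like (untested
sketch):

	/* Lock the journal and mark all eligible inodes as committing */
	jbd2_journal_lock_updates(journal);
	spin_lock(&sbi->s_fc_lock);
	list_for_each_entry(iter, &sbi->s_fc_q[FC_Q_MAIN], i_fc_list)
		ext4_set_inode_state(&iter->vfs_inode,
				     EXT4_STATE_FC_COMMITTING);
	spin_unlock(&sbi->s_fc_lock);
	jbd2_journal_unlock_updates(journal);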

> @@ -1094,6 +1087,14 @@ static int ext4_fc_perform_commit(journal_t *journal)
>  		ret = ext4_fc_write_inode(inode, &crc);
>  		if (ret)
>  			goto out;
> +		spin_lock(&iter->i_fc_lock);
> +		ext4_clear_inode_state(inode, EXT4_STATE_FC_COMMITTING);
> +		spin_unlock(&iter->i_fc_lock);
> +#if (BITS_PER_LONG < 64)
> +		wake_up_bit(&iter->i_state_flags, EXT4_STATE_FC_COMMITTING);
> +#else
> +		wake_up_bit(&iter->i_flags, EXT4_STATE_FC_COMMITTING);
> +#endif
>  		spin_lock(&sbi->s_fc_lock);

And here we can do without i_fc_lock as well; we just need an smp_mb()
between ext4_clear_inode_state() and wake_up_bit() to pair with the
implicit barrier inside prepare_to_wait().
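
I.e. something along these lines (untested sketch):

		ext4_clear_inode_state(inode, EXT4_STATE_FC_COMMITTING);
		/*
		 * Make sure clearing of EXT4_STATE_FC_COMMITTING is
		 * visible before we send the wakeup. Pairs with the
		 * implicit barrier in prepare_to_wait() in
		 * ext4_fc_track_inode().
		 */
		smp_mb();
#if (BITS_PER_LONG < 64)
		wake_up_bit(&iter->i_state_flags, EXT4_STATE_FC_COMMITTING);
#else
		wake_up_bit(&iter->i_flags, EXT4_STATE_FC_COMMITTING);
#endif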

> @@ -1227,13 +1228,15 @@ static void ext4_fc_cleanup(journal_t *journal, int full, tid_t tid)
>  	spin_lock(&sbi->s_fc_lock);
>  	list_for_each_entry_safe(iter, iter_n, &sbi->s_fc_q[FC_Q_MAIN],
>  				 i_fc_list) {
> -		list_del_init(&iter->i_fc_list);
> +		spin_lock(&iter->i_fc_lock);
>  		ext4_clear_inode_state(&iter->vfs_inode,
>  				       EXT4_STATE_FC_COMMITTING);
> +		spin_unlock(&iter->i_fc_lock);
>  		if (iter->i_sync_tid <= tid)
>  			ext4_fc_reset_inode(&iter->vfs_inode);
>  		/* Make sure EXT4_STATE_FC_COMMITTING bit is clear */
>  		smp_mb();
> +		list_del_init(&iter->i_fc_list);
>  #if (BITS_PER_LONG < 64)
>  		wake_up_bit(&iter->i_state_flags, EXT4_STATE_FC_COMMITTING);
>  #else

Again, i_fc_lock is not needed here. As a note, the comment above about
the barrier is a bit incorrect: the barrier does not make sure any store
happens, it is just an ordering requirement for the CPU. As such,
barriers really only work in pairs, because both cooperating tasks need
to enforce proper ordering. So we usually document barriers like:

		/*
		 * Make sure clearing of EXT4_STATE_FC_COMMITTING is
		 * visible before we send the wakeup. Pairs with implicit
		 * barrier in prepare_to_wait() in ext4_fc_track_inode().
		 */
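
For reference, the waiter side this pairs with would be the usual
wait-bit pattern in ext4_fc_track_inode(), roughly like the following
(just a sketch, the exact code in your tree may differ; the 32-bit case
would use i_state_flags instead of i_flags, matching the wakeup above):

		struct ext4_inode_info *ei = EXT4_I(inode);
		wait_queue_head_t *wq = bit_waitqueue(&ei->i_flags,
						EXT4_STATE_FC_COMMITTING);
		DEFINE_WAIT_BIT(wait, &ei->i_flags, EXT4_STATE_FC_COMMITTING);

		prepare_to_wait(wq, &wait.wq_entry, TASK_UNINTERRUPTIBLE);
		/*
		 * Implicit barrier in prepare_to_wait() orders this test
		 * against the clearing of the flag on the committing side.
		 */
		if (ext4_test_inode_state(inode, EXT4_STATE_FC_COMMITTING))
			schedule();
		finish_wait(wq, &wait.wq_entry);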

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
