Message-ID: <20220907131432.4i3bh3xt6egesv7g@quack3>
Date:   Wed, 7 Sep 2022 15:14:32 +0200
From:   Jan Kara <jack@...e.cz>
To:     Mike Galbraith <efault@....de>
Cc:     Jan Kara <jack@...e.cz>, LKML <linux-kernel@...r.kernel.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        linux-ext4@...r.kernel.org
Subject: Re: 307af6c879377 "mbcache: automatically delete entries from cache
 on freeing" ==> PREEMPT_RT grumble

Hi Mike!

[added ext4 list to CC]

On Tue 06-09-22 18:21:37, Mike Galbraith wrote:
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index d1ebb5df2856..96f1d49d30a5 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -106,21 +106,28 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
>  		}
>  	}
>  	hlist_bl_add_head(&entry->e_hash_list, head);
> -	hlist_bl_unlock(head);
> -
> +	/*
> +	 * Add entry to LRU list before it can be found by
> +	 * mb_cache_entry_delete() to avoid races
> +	 */
>  	spin_lock(&cache->c_list_lock);
>  	list_add_tail(&entry->e_list, &cache->c_list);
> -	/* Grab ref for LRU list */
> -	atomic_inc(&entry->e_refcnt);
>  	cache->c_entry_count++;
>  	spin_unlock(&cache->c_list_lock);
> +	hlist_bl_unlock(head);
> 
>  	return 0;
>  }
>  EXPORT_SYMBOL(mb_cache_entry_create);
> 
> The above movement of hlist_bl_unlock() is a problem for RT wrt both the
> taking and the releasing of ->c_list_lock: it becomes an rtmutex in RT,
> while the bit spinlock released by hlist_bl_unlock() blocks preemption for
> as long as it is held.
> 
> Is that scope increase necessary?  If so, looks like ->c_list_lock
> could probably become a raw_spinlock_t without anyone noticing.
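To spell out the nesting Mike is pointing at, here is a simplified sketch of
the relevant part of mb_cache_entry_create() after 307af6c879377 as it looks
on a PREEMPT_RT kernel (an illustration only, not the exact upstream code):

	hlist_bl_lock(head);		/* bit spinlock - disables preemption on RT */
	/* ... duplicate key check ... */
	hlist_bl_add_head(&entry->e_hash_list, head);

	/*
	 * On RT spin_lock() maps to a sleeping rtmutex, so this can now
	 * sleep while the bit spinlock above is still held.
	 */
	spin_lock(&cache->c_list_lock);
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	spin_unlock(&cache->c_list_lock);

	hlist_bl_unlock(head);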

Well, it was an easy solution, but there's a relatively simple workaround that
should remove the need for the nesting. I'll send a patch.
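
Roughly, the idea would be to add the entry to the LRU list before taking the
hash chain bit lock, so that ->c_list_lock is never acquired with the bit lock
held. A sketch only (races with the shrinker and the duplicate-key error path
need care; this is not the actual patch):

	/* Put the entry on the LRU list first, without the hash bit lock held */
	spin_lock(&cache->c_list_lock);
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	spin_unlock(&cache->c_list_lock);

	hlist_bl_lock(head);
	/*
	 * ... duplicate key check; on failure the entry would have to be
	 * taken back off the LRU list ...
	 */
	hlist_bl_add_head(&entry->e_hash_list, head);
	hlist_bl_unlock(head);

That keeps ->c_list_lock out of the bit-spinlock section, so it can stay an
ordinary (sleeping on RT) spinlock.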

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
