Message-ID: <9f593b8ac1b731cbbf92dc1c7b497b668752b325.camel@gmx.de>
Date:   Tue, 06 Sep 2022 18:21:37 +0200
From:   Mike Galbraith <efault@....de>
To:     Jan Kara <jack@...e.cz>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: 307af6c879377 "mbcache: automatically delete entries from cache on
 freeing" ==> PREEMPT_RT grumble

Hi Jan,

diff --git a/fs/mbcache.c b/fs/mbcache.c
index d1ebb5df2856..96f1d49d30a5 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -106,21 +106,28 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 		}
 	}
 	hlist_bl_add_head(&entry->e_hash_list, head);
-	hlist_bl_unlock(head);
-
+	/*
+	 * Add entry to LRU list before it can be found by
+	 * mb_cache_entry_delete() to avoid races
+	 */
 	spin_lock(&cache->c_list_lock);
 	list_add_tail(&entry->e_list, &cache->c_list);
-	/* Grab ref for LRU list */
-	atomic_inc(&entry->e_refcnt);
 	cache->c_entry_count++;
 	spin_unlock(&cache->c_list_lock);
+	hlist_bl_unlock(head);

 	return 0;
 }
 EXPORT_SYMBOL(mb_cache_entry_create);

The above movement of hlist_bl_unlock() is a problem for RT wrt both
taking and releasing of ->c_list_lock: ->c_list_lock becomes a sleeping
rtmutex in RT, while the bit spinlock taken by hlist_bl_lock() and
released by hlist_bl_unlock() disables preemption, so we now acquire
and release a sleeping lock inside a non-preemptible section.
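
I.e., with the above applied, the locking in mb_cache_entry_create()
nests roughly like this (simplified):

	hlist_bl_lock(head);			/* bit spinlock: preemption disabled */
	hlist_bl_add_head(&entry->e_hash_list, head);
	spin_lock(&cache->c_list_lock);		/* sleeping rtmutex on PREEMPT_RT -> splat */
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	spin_unlock(&cache->c_list_lock);
	hlist_bl_unlock(head);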

Is that scope increase necessary?  If so, looks like ->c_list_lock
could probably become a raw_spinlock_t without anyone noticing.
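
Untested sketch of that conversion, assuming fs/mbcache.c's struct
mb_cache is the only home of ->c_list_lock:

	struct mb_cache {
		/* ... */
		raw_spinlock_t		c_list_lock;	/* was: spinlock_t */
		struct list_head	c_list;
		/* ... */
	};

	/* with spin_lock_init()/spin_lock()/spin_unlock() on it becoming */
	raw_spin_lock_init(&cache->c_list_lock);

	raw_spin_lock(&cache->c_list_lock);
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	raw_spin_unlock(&cache->c_list_lock);

A raw_spinlock_t remains a spinning lock on RT, so it's legal under the
bit spinlock, and the ->c_list_lock critical sections are all short
list manipulations.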

	-Mike
