Message-ID: <20080422011233.GG9153@linux.vnet.ibm.com>
Date:	Mon, 21 Apr 2008 18:12:33 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu,
	akpm@...ux-foundation.org, dwalker@...sta.com,
	sdietrich@...ell.com, dvhltc@...ibm.com, niv@...ibm.com
Subject: Re: [PATCH] SELinux fixups needed for preemptable RCU from -rt

On Mon, Apr 21, 2008 at 08:54:51PM -0400, Steven Rostedt wrote:
> 
> On Mon, 21 Apr 2008, Paul E. McKenney wrote:
> > @@ -807,8 +810,14 @@ int avc_ss_reset(u32 seqno)
> >
> >  	for (i = 0; i < AVC_CACHE_SLOTS; i++) {
> >  		spin_lock_irqsave(&avc_cache.slots_lock[i], flag);
> > +		/*
> > +		 * On -rt the outer spinlock does not prevent RCU
> > +		 * from being performed:
> 
> I would suggest changing this comment to "With preemptible RCU" from
> "On -rt".

Good point...  How about the following?

> -- Steve
> 
> > +		 */
> > +		rcu_read_lock();
> >  		list_for_each_entry(node, &avc_cache.slots[i], list)
> >  			avc_node_delete(node);
> > +		rcu_read_unlock();
> >  		spin_unlock_irqrestore(&avc_cache.slots_lock[i], flag);
> >  	}


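To spell out the reasoning: with classic RCU, spin_lock_irqsave()
implicitly blocks grace periods, because a CPU cannot pass through a
quiescent state while it holds the lock with irqs disabled, so the
list walk was safe under the lock alone.  With preemptible RCU that
implication is gone, and a grace period can end while the lock is
held.  A minimal sketch of the resulting pattern (the example_* names
are made up for illustration, not taken from avc.c):

	struct example_node {
		struct list_head list;
		struct rcu_head rcu;
	};

	static void example_flush(spinlock_t *lock, struct list_head *head)
	{
		struct example_node *node;
		unsigned long flags;

		spin_lock_irqsave(lock, flags);
		rcu_read_lock();	/* lock alone no longer holds off grace periods */
		list_for_each_entry(node, head, list)
			example_process(node);	/* hypothetical per-node work */
		rcu_read_unlock();
		spin_unlock_irqrestore(lock, flags);
	}
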
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com> (comment change)
---
 security/selinux/avc.c   |    9 +++++++++
 security/selinux/netif.c |    2 ++
 2 files changed, 11 insertions(+)

Index: linux-2.6.24.4-rt4/security/selinux/avc.c
===================================================================
--- linux-2.6.24.4-rt4.orig/security/selinux/avc.c	2008-03-24 19:05:09.000000000 -0400
+++ linux-2.6.24.4-rt4/security/selinux/avc.c	2008-03-24 19:06:41.000000000 -0400
@@ -312,6 +312,7 @@ static inline int avc_reclaim_node(void)
 		if (!spin_trylock_irqsave(&avc_cache.slots_lock[hvalue], flags))
 			continue;
 
+		rcu_read_lock();
 		list_for_each_entry(node, &avc_cache.slots[hvalue], list) {
 			if (atomic_dec_and_test(&node->ae.used)) {
 				/* Recently Unused */
@@ -319,11 +320,13 @@ static inline int avc_reclaim_node(void)
 				avc_cache_stats_incr(reclaims);
 				ecx++;
 				if (ecx >= AVC_CACHE_RECLAIM) {
+					rcu_read_unlock();
 					spin_unlock_irqrestore(&avc_cache.slots_lock[hvalue], flags);
 					goto out;
 				}
 			}
 		}
+		rcu_read_unlock();
 		spin_unlock_irqrestore(&avc_cache.slots_lock[hvalue], flags);
 	}
 out:
@@ -807,8 +810,14 @@ int avc_ss_reset(u32 seqno)
 
 	for (i = 0; i < AVC_CACHE_SLOTS; i++) {
 		spin_lock_irqsave(&avc_cache.slots_lock[i], flag);
+		/*
+		 * With preemptable RCU, the outer spinlock does not
+		 * prevent RCU grace periods from ending.
+		 */
+		rcu_read_lock();
 		list_for_each_entry(node, &avc_cache.slots[i], list)
 			avc_node_delete(node);
+		rcu_read_unlock();
 		spin_unlock_irqrestore(&avc_cache.slots_lock[i], flag);
 	}
 
Index: linux-2.6.24.4-rt4/security/selinux/netif.c
===================================================================
--- linux-2.6.24.4-rt4.orig/security/selinux/netif.c	2008-03-24 19:05:09.000000000 -0400
+++ linux-2.6.24.4-rt4/security/selinux/netif.c	2008-03-24 19:06:41.000000000 -0400
@@ -210,6 +210,7 @@ static void sel_netif_flush(void)
 {
 	int idx;
 
+	rcu_read_lock();
 	spin_lock_bh(&sel_netif_lock);
 	for (idx = 0; idx < SEL_NETIF_HASH_SIZE; idx++) {
 		struct sel_netif *netif;
@@ -218,6 +219,7 @@ static void sel_netif_flush(void)
 			sel_netif_destroy(netif);
 	}
 	spin_unlock_bh(&sel_netif_lock);
+	rcu_read_unlock();
 }
 
 static int sel_netif_avc_callback(u32 event, u32 ssid, u32 tsid,
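
For completeness, these read-side markers only matter because removal
from the lists is RCU-based.  Roughly (again with made-up example_*
names rather than the actual avc.c code, reusing struct example_node
from the sketch above):

	static void example_node_free(struct rcu_head *rcu)
	{
		kfree(container_of(rcu, struct example_node, rcu));
	}

	static void example_node_delete(struct example_node *node)
	{
		/* Unlink now; concurrent readers may still see the node. */
		list_del_rcu(&node->list);
		/* Free only after all pre-existing readers have finished. */
		call_rcu(&node->rcu, example_node_free);
	}

Without the rcu_read_lock()/rcu_read_unlock() pairs, a preemptible RCU
grace period could elapse mid-traversal and the node under the iterator
could be freed.  Note also that the nesting order differs between the
two files (netif.c takes rcu_read_lock() outside spin_lock_bh(), while
avc.c takes it inside the spinlock); either order works, since the two
primitives are independent, as long as the list walk is bracketed by
both.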