Message-Id: <1333032100-4159-8-git-send-email-zohar@linux.vnet.ibm.com>
Date:	Thu, 29 Mar 2012 10:41:35 -0400
From:	Mimi Zohar <zohar@...ux.vnet.ibm.com>
To:	linux-security-module@...r.kernel.org
Cc:	Mimi Zohar <zohar@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Al Viro <viro@...IV.linux.org.uk>,
	David Safford <safford@...ux.vnet.ibm.com>,
	Dmitry Kasatkin <dmitry.kasatkin@...el.com>,
	Mimi Zohar <zohar@...ibm.com>
Subject: [PATCH v4 07/12] ima: replace iint spinlock with rwlock/read_lock

For performance, replace the iint spinlock with rwlock/read_lock.

Eric Paris questioned this change, from spinlocks to rwlocks, saying
"rwlocks have been shown to actually be slower on multi processor
systems in a number of cases due to the cache line bouncing required."

Based on performance measurements of kernel compiles on a cold boot
with multiple jobs, with and without this patch, Dmitry Kasatkin and
I found that rwlocks performed only marginally better than spinlocks:
with a total compilation time of around 6 minutes, the rwlock runs
were consistently 1 - 3 seconds shorter.
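
The point of the change is concurrency on the read side: under a
spinlock, two concurrent integrity_iint_find() callers serialize,
while under a rwlock they proceed in parallel, and only tree updates
(integrity_inode_get()/integrity_inode_free()) take the exclusive
write side.  A minimal sketch of that pattern follows; the wrapper
names example_lookup()/example_erase() are hypothetical, the lock and
tree identifiers match the patch:

  #include <linux/spinlock.h>	/* DEFINE_RWLOCK, read_lock, write_lock */
  #include <linux/rbtree.h>

  static DEFINE_RWLOCK(integrity_iint_lock);

  /* Readers may run concurrently: a lookup only walks the rb-tree. */
  static struct integrity_iint_cache *example_lookup(struct inode *inode)
  {
  	struct integrity_iint_cache *iint;

  	read_lock(&integrity_iint_lock);
  	iint = __integrity_iint_find(inode);
  	read_unlock(&integrity_iint_lock);
  	return iint;
  }

  /* Writers are exclusive: erase rebalances the rb-tree. */
  static void example_erase(struct integrity_iint_cache *iint)
  {
  	write_lock(&integrity_iint_lock);
  	rb_erase(&iint->rb_node, &integrity_iint_tree);
  	write_unlock(&integrity_iint_lock);
  }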

Changelog v2:
- new patch taken from the 'allocating iint improvements' patch

Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@...el.com>
Signed-off-by: Mimi Zohar <zohar@...ibm.com>
---
 security/integrity/iint.c |   16 +++++++---------
 1 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/security/integrity/iint.c b/security/integrity/iint.c
index c91a436..d82a5a1 100644
--- a/security/integrity/iint.c
+++ b/security/integrity/iint.c
@@ -22,7 +22,7 @@
 #include "integrity.h"
 
 static struct rb_root integrity_iint_tree = RB_ROOT;
-static DEFINE_SPINLOCK(integrity_iint_lock);
+static DEFINE_RWLOCK(integrity_iint_lock);
 static struct kmem_cache *iint_cache __read_mostly;
 
 int iint_initialized;
@@ -35,8 +35,6 @@ static struct integrity_iint_cache *__integrity_iint_find(struct inode *inode)
 	struct integrity_iint_cache *iint;
 	struct rb_node *n = integrity_iint_tree.rb_node;
 
-	assert_spin_locked(&integrity_iint_lock);
-
 	while (n) {
 		iint = rb_entry(n, struct integrity_iint_cache, rb_node);
 
@@ -63,9 +61,9 @@ struct integrity_iint_cache *integrity_iint_find(struct inode *inode)
 	if (!IS_IMA(inode))
 		return NULL;
 
-	spin_lock(&integrity_iint_lock);
+	read_lock(&integrity_iint_lock);
 	iint = __integrity_iint_find(inode);
-	spin_unlock(&integrity_iint_lock);
+	read_unlock(&integrity_iint_lock);
 
 	return iint;
 }
@@ -100,7 +98,7 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
 	if (!iint)
 		return NULL;
 
-	spin_lock(&integrity_iint_lock);
+	write_lock(&integrity_iint_lock);
 
 	p = &integrity_iint_tree.rb_node;
 	while (*p) {
@@ -119,7 +117,7 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
 	rb_link_node(node, parent, p);
 	rb_insert_color(node, &integrity_iint_tree);
 
-	spin_unlock(&integrity_iint_lock);
+	write_unlock(&integrity_iint_lock);
 	return iint;
 }
 
@@ -136,10 +134,10 @@ void integrity_inode_free(struct inode *inode)
 	if (!IS_IMA(inode))
 		return;
 
-	spin_lock(&integrity_iint_lock);
+	write_lock(&integrity_iint_lock);
 	iint = __integrity_iint_find(inode);
 	rb_erase(&iint->rb_node, &integrity_iint_tree);
-	spin_unlock(&integrity_iint_lock);
+	write_unlock(&integrity_iint_lock);
 
 	iint_free(iint);
 }
-- 
1.7.6.5
