Message-ID: <20141204184937.GA28571@ret.masoncoding.com>
Date:	Thu, 4 Dec 2014 13:49:38 -0500
From:	Chris Mason <clm@...com>
To:	Davide Libenzi <davidel@...ilserver.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	<linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: [PATCH] eventfd: don't take the spinlock in eventfd_poll

The spinlock in eventfd_poll is trying to protect the count of events
so it can decide whether to return POLLIN, POLLERR, or POLLOUT.  But
because we only take the lock after calling poll_wait, and drop it
again before returning, we have the same pile of races with the lock
held as we do with a single unlocked read of ctx->count.
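
(Not part of the patch -- just a user-space analogy with made-up names to
show why the lock doesn't buy us anything here: whether the counter is
sampled under a lock or with one plain read into a local, the event mask
we hand back can already be stale by the time the caller acts on it.)

/*
 * User-space analogy, not kernel code: both variants below can return a
 * stale event mask, because another thread may change the counter right
 * after the unlock (or right after the plain read).  The lock only buys
 * a consistent snapshot across the three checks, which a single read
 * into a local variable provides as well.
 */
#include <limits.h>
#include <poll.h>
#include <pthread.h>

static unsigned long long counter;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static unsigned int poll_locked(void)
{
	unsigned int events = 0;

	pthread_mutex_lock(&counter_lock);
	if (counter > 0)
		events |= POLLIN;
	if (counter == ULLONG_MAX)
		events |= POLLERR;
	if (ULLONG_MAX - 1 > counter)
		events |= POLLOUT;
	pthread_mutex_unlock(&counter_lock);
	/* counter may already have moved on by the time we return */
	return events;
}

static unsigned int poll_lockless(void)
{
	unsigned int events = 0;
	unsigned long long snap = counter;	/* one snapshot, no lock */

	if (snap > 0)
		events |= POLLIN;
	if (snap == ULLONG_MAX)
		events |= POLLERR;
	if (ULLONG_MAX - 1 > snap)
		events |= POLLOUT;
	/* same staleness window as the locked variant */
	return events;
}

int main(void)
{
	counter = 1;
	return !(poll_locked() == poll_lockless());
}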

This replaces the lock with a read barrier and single read.

eventfd_write does a single bump of ctx->count, so this should not add
new races with adding events.  eventfd_read is similar: it does a
single decrement with the lock held, so at worst we are making the
existing race with concurrent readers slightly larger.
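
(Also not from the patch: a small user-space sketch, using the regular
eventfd(2)/poll(2) API, of the semantics we're preserving -- POLLIN once
the counter is nonzero, POLLOUT as long as another write wouldn't
overflow it.  The values are arbitrary.)

/* User-space sketch, not part of the patch. */
#include <sys/eventfd.h>
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int efd = eventfd(0, EFD_NONBLOCK);
	struct pollfd pfd = { .fd = efd, .events = POLLIN | POLLOUT };
	uint64_t val = 3;

	poll(&pfd, 1, 0);			/* counter is 0 */
	printf("empty:   POLLIN=%d POLLOUT=%d\n",
	       !!(pfd.revents & POLLIN), !!(pfd.revents & POLLOUT));

	write(efd, &val, sizeof(val));		/* bump the counter by 3 */
	poll(&pfd, 1, 0);
	printf("bumped:  POLLIN=%d POLLOUT=%d\n",
	       !!(pfd.revents & POLLIN), !!(pfd.revents & POLLOUT));

	read(efd, &val, sizeof(val));		/* drain: returns 3, resets to 0 */
	poll(&pfd, 1, 0);
	printf("drained: POLLIN=%d POLLOUT=%d\n",
	       !!(pfd.revents & POLLIN), !!(pfd.revents & POLLOUT));

	close(efd);
	return 0;
}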

This spinlock is the top CPU user in kernel code during one of our workloads.
Removing it gives us a ~2% boost.

Signed-off-by: Chris Mason <clm@...com>
---
 fs/eventfd.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/eventfd.c b/fs/eventfd.c
index d6a88e7..cdbdb96 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -119,17 +119,17 @@ static unsigned int eventfd_poll(struct file *file, poll_table *wait)
 	struct eventfd_ctx *ctx = file->private_data;
 	unsigned int events = 0;
-	unsigned long flags;
+	u64 count;
 
 	poll_wait(file, &ctx->wqh, wait);
+	smp_rmb();
+	count = ctx->count;
 
-	spin_lock_irqsave(&ctx->wqh.lock, flags);
-	if (ctx->count > 0)
+	if (count > 0)
 		events |= POLLIN;
-	if (ctx->count == ULLONG_MAX)
+	if (count == ULLONG_MAX)
 		events |= POLLERR;
-	if (ULLONG_MAX - 1 > ctx->count)
+	if (ULLONG_MAX - 1 > count)
 		events |= POLLOUT;
-	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
 
 	return events;
 }
-- 
1.8.1

