Message-Id: <20200130201451.253115-1-edumazet@google.com>
Date:   Thu, 30 Jan 2020 12:14:51 -0800
From:   Eric Dumazet <edumazet@google.com>
To:     Christoph Hellwig <hch@lst.de>, Joerg Roedel <jroedel@suse.de>
Cc:     linux-kernel <linux-kernel@vger.kernel.org>,
        iommu@lists.linux-foundation.org,
        Eric Dumazet <edumazet@google.com>,
        Eric Dumazet <eric.dumazet@gmail.com>
Subject: [PATCH] dma-debug: add a per-cpu cache to avoid lock contention

Networking drivers very often have to replace one page with
another for their RX ring buffers.

A multi-queue NIC will severely hit a contention point
in dma-debug while grabbing the free_entries_lock spinlock.

Adding a one-entry per-cpu cache removes the need
to grab this spinlock twice per page replacement.

Tested on a 40Gbit mlx4 NIC, with 16 RX queues and about
1,000,000 replacements per second.
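
For readers outside the kernel tree, here is a minimal userspace
sketch of the same one-entry cache idea (an illustration, not the
patch itself): C11 atomics on a thread-local slot stand in for the
this_cpu_xchg()/this_cpu_cmpxchg() operations on per-cpu data, and
malloc()/free() stand in for the spinlock-protected free list. The
struct and the entry_alloc()/entry_free() names are hypothetical.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct entry { int data; };

/* Thread-local analogue of the per-cpu cache slot. */
static _Thread_local struct entry *_Atomic entry_cache;

static struct entry *entry_alloc(void)
{
	/* Fast path: atomically take the cached entry, leaving NULL. */
	struct entry *e = atomic_exchange(&entry_cache, NULL);

	if (e)
		return e;
	/* Slow path: stands in for the free_entries_lock section. */
	return malloc(sizeof(*e));
}

static void entry_free(struct entry *e)
{
	struct entry *expected = NULL;

	/* Fast path: park the entry if the cache slot is empty. */
	if (atomic_compare_exchange_strong(&entry_cache, &expected, e))
		return;
	/* Slot already occupied: fall back to the slow path. */
	free(e);
}

int main(void)
{
	struct entry *e = entry_alloc();

	e->data = 42;
	entry_free(e);		/* parked in the thread-local slot */
	e = entry_alloc();	/* reused, no slow path taken */
	printf("%d\n", e->data);
	free(e);
	return 0;
}

The atomic operations above only mirror the shape of the patch; in
the kernel they matter because alloc and free can run in interrupt
context on the same CPU, which plain thread-local loads and stores
would not handle safely.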

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/debug.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index a310dbb1515e92c081f8f3f9a7290dd5e53fc889..b7221426ef49cf640db5bcb261b0817d714a3033 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -97,6 +97,8 @@ static LIST_HEAD(free_entries);
 /* Lock for the list above */
 static DEFINE_SPINLOCK(free_entries_lock);
 
+static DEFINE_PER_CPU(struct dma_debug_entry *, dma_debug_entry_cache);
+
 /* Global disable flag - will be set in case of an error */
 static bool global_disable __read_mostly;
 
@@ -676,6 +678,10 @@ static struct dma_debug_entry *dma_entry_alloc(void)
 	struct dma_debug_entry *entry;
 	unsigned long flags;
 
+	entry = this_cpu_xchg(dma_debug_entry_cache, NULL);
+	if (entry)
+		goto end;
+
 	spin_lock_irqsave(&free_entries_lock, flags);
 	if (num_free_entries == 0) {
 		if (dma_debug_create_entries(GFP_ATOMIC)) {
@@ -690,7 +696,7 @@ static struct dma_debug_entry *dma_entry_alloc(void)
 	entry = __dma_entry_alloc();
 
 	spin_unlock_irqrestore(&free_entries_lock, flags);
-
+end:
 #ifdef CONFIG_STACKTRACE
 	entry->stack_len = stack_trace_save(entry->stack_entries,
 					    ARRAY_SIZE(entry->stack_entries),
@@ -705,6 +711,9 @@ static void dma_entry_free(struct dma_debug_entry *entry)
 
 	active_cacheline_remove(entry);
 
+	if (!this_cpu_cmpxchg(dma_debug_entry_cache, NULL, entry))
+		return;
+
 	/*
 	 * add to beginning of the list - this way the entries are
 	 * more likely cache hot when they are reallocated.
-- 
2.25.0.341.g760bfbb309-goog
