Message-ID: <87bde968-0103-c90d-380d-e0793cd39aa5@arm.com>
Date: Fri, 31 Jan 2020 13:14:36 +0000
From: Robin Murphy <robin.murphy@....com>
To: Eric Dumazet <edumazet@...gle.com>, Christoph Hellwig <hch@....de>,
Joerg Roedel <jroedel@...e.de>
Cc: iommu@...ts.linux-foundation.org,
Eric Dumazet <eric.dumazet@...il.com>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] dma-debug: add a per-cpu cache to avoid lock contention
On 2020-01-30 8:14 pm, Eric Dumazet via iommu wrote:
> Networking drivers very often have to replace one page with
> another for their RX ring buffers.
>
> A multi-queue NIC will severely hit a contention point
> in dma-debug while grabbing the free_entries_lock spinlock.
>
> Adding a one-entry per-cpu cache removes the need
> to grab this spinlock twice per page replacement.
>
> Tested on a 40Gbit mlx4 NIC, with 16 RX queues and about
> 1,000,000 replacements per second.
Looks like a decent benefit for minimal collateral impact - from what
I've seen, networking appears to be a particularly easy place to run
into awkward races, so mitigating even a little of the debug-induced
serialisation for the common case can only be good.
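
For anyone skimming the diff, the fast path here is a one-entry
per-CPU cache probed with xchg on alloc and cmpxchg on free. Below is
a minimal userspace sketch of the same idea, assuming _Thread_local
plus C11 atomics as stand-ins for DEFINE_PER_CPU and the this_cpu_*()
ops (which in the kernel are cheap, IRQ-safe per-cpu accessors, not
full atomics) - the names are illustrative, not the kernel API:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
	int data;
};

/* One cached entry per thread (per CPU in the kernel version). */
static _Thread_local _Atomic(struct entry *) entry_cache;

static struct entry *entry_alloc(void)
{
	/* Fast path: take the locally cached entry, leaving NULL behind. */
	struct entry *e = atomic_exchange(&entry_cache, NULL);

	if (e)
		return e;
	/* Slow path: the patch takes free_entries_lock here instead. */
	return malloc(sizeof(*e));
}

static void entry_free(struct entry *e)
{
	struct entry *expected = NULL;

	/*
	 * Fast path: park the entry in the cache slot if it is empty.
	 * cmpxchg returning NULL means the store succeeded, which is
	 * exactly the "!this_cpu_cmpxchg(...)" test in the patch.
	 */
	if (atomic_compare_exchange_strong(&entry_cache, &expected, e))
		return;
	/* Slow path: slot already occupied, fall back to the shared pool. */
	free(e);
}

int main(void)
{
	struct entry *a = entry_alloc();

	entry_free(a);				/* parked in the cache slot */
	struct entry *b = entry_alloc();	/* reused without malloc() */
	printf("reused: %s\n", a == b ? "yes" : "no");
	entry_free(b);
	free(atomic_exchange(&entry_cache, NULL));	/* drain the slot */
	return 0;
}

The nice property for the RX-ring case is that a free followed by an
alloc on the same CPU never touches the shared lock at all, which is
exactly the page-replacement pattern the commit message describes.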
Reviewed-by: Robin Murphy <robin.murphy@....com>
Thanks,
Robin.
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Christoph Hellwig <hch@....de>
> ---
> kernel/dma/debug.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
> index a310dbb1515e92c081f8f3f9a7290dd5e53fc889..b7221426ef49cf640db5bcb261b0817d714a3033 100644
> --- a/kernel/dma/debug.c
> +++ b/kernel/dma/debug.c
> @@ -97,6 +97,8 @@ static LIST_HEAD(free_entries);
> /* Lock for the list above */
> static DEFINE_SPINLOCK(free_entries_lock);
>
> +static DEFINE_PER_CPU(struct dma_debug_entry *, dma_debug_entry_cache);
> +
> /* Global disable flag - will be set in case of an error */
> static bool global_disable __read_mostly;
>
> @@ -676,6 +678,10 @@ static struct dma_debug_entry *dma_entry_alloc(void)
> struct dma_debug_entry *entry;
> unsigned long flags;
>
> + entry = this_cpu_xchg(dma_debug_entry_cache, NULL);
> + if (entry)
> + goto end;
> +
> spin_lock_irqsave(&free_entries_lock, flags);
> if (num_free_entries == 0) {
> if (dma_debug_create_entries(GFP_ATOMIC)) {
> @@ -690,7 +696,7 @@ static struct dma_debug_entry *dma_entry_alloc(void)
> entry = __dma_entry_alloc();
>
> spin_unlock_irqrestore(&free_entries_lock, flags);
> -
> +end:
> #ifdef CONFIG_STACKTRACE
> entry->stack_len = stack_trace_save(entry->stack_entries,
> ARRAY_SIZE(entry->stack_entries),
> @@ -705,6 +711,9 @@ static void dma_entry_free(struct dma_debug_entry *entry)
>
> active_cacheline_remove(entry);
>
> + if (!this_cpu_cmpxchg(dma_debug_entry_cache, NULL, entry))
> + return;
> +
> /*
> * add to beginning of the list - this way the entries are
> * more likely cache hot when they are reallocated.
>