Message-ID: <20230815165827.GG907732@google.com>
Date: Wed, 16 Aug 2023 01:58:27 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Robin Murphy <robin.murphy@....com>
Cc: Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Petr Mladek <pmladek@...e.com>,
Rob Clark <robdclark@...omium.org>,
John Ogness <john.ogness@...utronix.de>,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH] dma-debug: defer __dma_entry_alloc_check_leak() printk
output
On (23/08/16 01:52), Sergey Senozhatsky wrote:
> On (23/08/15 17:42), Robin Murphy wrote:
> > On 15/08/2023 4:26 pm, Sergey Senozhatsky wrote:
> > > __dma_entry_alloc_check_leak() calls printk -> serial console
> > > output (qcom geni) and grabs port->lock under free_entries_lock,
> > > which is a conflicting lock dependency chain, as the qcom_geni
> > > IRQ handler can call into dma-debug code and grab free_entries_lock
> > > under port->lock.
> > >
> > > Use deferred printk in __dma_entry_alloc_check_leak() so that we
> > > don't acquire serial console's port->lock under free_entries_lock.
> >
> > Hmm, the print really doesn't need to be under the lock anyway; it only
> > needs to key off whether the "num_free_entries == 0" condition was hit
> > or not.
>
> I thought about it, briefly. __dma_entry_alloc_check_leak() reads the
> global nr_total_entries / nr_prealloc_entries, which are updated
> (inc/dec) under free_entries_lock, so I didn't want to move
> __dma_entry_alloc_check_leak() outside of the free_entries_lock scope.
Something like this?
---
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 9e11ceadc69d..ca0508de4e78 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -637,15 +637,15 @@ static struct dma_debug_entry *__dma_entry_alloc(void)
 	return entry;
 }
 
-static void __dma_entry_alloc_check_leak(void)
+static void __dma_entry_alloc_check_leak(u32 total_entries)
 {
-	u32 tmp = nr_total_entries % nr_prealloc_entries;
+	u32 tmp = total_entries % nr_prealloc_entries;
 
 	/* Shout each time we tick over some multiple of the initial pool */
 	if (tmp < DMA_DEBUG_DYNAMIC_ENTRIES) {
-		printk_deferred(KERN_INFO "dma_debug_entry pool grown to %u (%u00%%)\n",
-				nr_total_entries,
-				(nr_total_entries / nr_prealloc_entries));
+		pr_info("dma_debug_entry pool grown to %u (%u00%%)\n",
+			total_entries,
+			(total_entries / nr_prealloc_entries));
 	}
 }
 
@@ -658,6 +658,8 @@ static struct dma_debug_entry *dma_entry_alloc(void)
 {
 	struct dma_debug_entry *entry;
 	unsigned long flags;
+	bool alloc_check_leak = false;
+	u32 total_entries;
 
 	spin_lock_irqsave(&free_entries_lock, flags);
 	if (num_free_entries == 0) {
@@ -667,13 +669,17 @@ static struct dma_debug_entry *dma_entry_alloc(void)
 			pr_err("debugging out of memory - disabling\n");
 			return NULL;
 		}
-		__dma_entry_alloc_check_leak();
+		alloc_check_leak = true;
+		total_entries = nr_total_entries;
 	}
 
 	entry = __dma_entry_alloc();
 
 	spin_unlock_irqrestore(&free_entries_lock, flags);
 
+	if (alloc_check_leak)
+		__dma_entry_alloc_check_leak(total_entries);
+
 #ifdef CONFIG_STACKTRACE
 	entry->stack_len = stack_trace_save(entry->stack_entries,
 					    ARRAY_SIZE(entry->stack_entries),
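
FWIW, to make the inversion in the changelog concrete, below is a minimal
user-space sketch (purely illustrative, not part of the patch): lock_a and
lock_b are hypothetical stand-ins for free_entries_lock and the geni serial
port->lock, and alloc_path()/irq_path() mimic the two call chains. Build with
cc -pthread.

/*
 * Hypothetical sketch only: lock_a stands in for dma-debug's
 * free_entries_lock, lock_b for the serial driver's port->lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;	/* "free_entries_lock" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;	/* "port->lock" */

/* dma_entry_alloc() path: free_entries_lock -> printk -> port->lock */
static void *alloc_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&lock_a);
	usleep(1000);			/* widen the race window */
	pthread_mutex_lock(&lock_b);	/* console output under free_entries_lock */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* serial IRQ path: port->lock -> DMA API -> free_entries_lock */
static void *irq_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&lock_b);
	usleep(1000);
	pthread_mutex_lock(&lock_a);	/* dma-debug bookkeeping under port->lock */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, alloc_path, NULL);
	pthread_create(&t2, NULL, irq_path, NULL);
	pthread_join(t1, NULL);		/* with unlucky timing this never returns */
	pthread_join(t2, NULL);
	puts("no deadlock this run");
	return 0;
}

With both threads running, one holds lock_a and waits for lock_b while the
other holds lock_b and waits for lock_a -- the same ABBA pattern as taking
port->lock (via printk) under free_entries_lock. The diff above sidesteps it
by only snapshotting nr_total_entries under the lock and doing the print
after the unlock.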