Message-ID: <1473932399-23224-3-git-send-email-Mark_Craske@mentor.com>
Date: Thu, 15 Sep 2016 10:39:59 +0100
From: Mark Craske <Mark_Craske@...tor.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
CC: <linux-kernel@...r.kernel.org>,
"George G. Davis" <George_Davis@...tor.com>,
Jiada Wang <Jiada_Wang@...tor.com>
Subject: [PATCH v1 2/2] drivers: dma-coherent: Move spinlock in dma_alloc_from_coherent()
From: Bastian Hecht <bhecht@...adit-jv.com>
There is no need to hold the spinlock while zeroing the allocated memory.
With large buffers this becomes a severe issue, as other CPUs contending
for the lock may end up spinning for half a second or longer.
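To illustrate the pattern (this is not the driver code itself), here is a
minimal userspace sketch: a pthread mutex stands in for the kernel spinlock,
and the struct, flag and function names are made up for the example. Only the
bookkeeping is done under the lock; the expensive zeroing runs after the lock
is released.

#include <pthread.h>
#include <string.h>

struct pool {
	pthread_mutex_t lock;	/* stands in for the kernel spinlock */
	void *virt_base;	/* base of the preallocated region */
	int flags;		/* e.g. a MAP-style capability flag */
};

#define POOL_FLAG_MAP 1

/*
 * Hand out a zeroed chunk at 'offset' of 'size' bytes. The address
 * computation and the flag snapshot happen under the lock; the
 * potentially slow memset() runs after the lock has been dropped.
 */
static void *pool_alloc_zeroed(struct pool *p, size_t offset, size_t size)
{
	void *ret;
	int mapped;

	pthread_mutex_lock(&p->lock);
	ret = (char *)p->virt_base + offset;
	mapped = (p->flags & POOL_FLAG_MAP);
	pthread_mutex_unlock(&p->lock);

	if (mapped)
		memset(ret, 0, size);	/* zero outside the critical section */
	return ret;
}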
Signed-off-by: Bastian Hecht <bhecht@...adit-jv.com>
Signed-off-by: George G. Davis <george_davis@...tor.com>
Signed-off-by: Mark Craske <Mark_Craske@...tor.com>
---
drivers/base/dma-coherent.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index abd83b7..d092df4 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -165,6 +165,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 	int order = get_order(size);
 	unsigned long flags;
 	int pageno;
+	int dma_memory_map;
 
 	if (!dev)
 		return 0;
@@ -187,11 +188,12 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 	 */
 	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
 	*ret = mem->virt_base + (pageno << PAGE_SHIFT);
-	if (mem->flags & DMA_MEMORY_MAP)
+	dma_memory_map = (mem->flags & DMA_MEMORY_MAP);
+	spin_unlock_irqrestore(&mem->spinlock, flags);
+	if (dma_memory_map)
 		memset(*ret, 0, size);
 	else
 		memset_io(*ret, 0, size);
-	spin_unlock_irqrestore(&mem->spinlock, flags);
 
 	return 1;
 
--
1.7.9.5