Message-ID: <20240731073752.1225177-1-link@vivo.com>
Date: Wed, 31 Jul 2024 15:37:51 +0800
From: Huan Yang <link@...o.com>
To: Gerd Hoffmann <kraxel@...hat.com>,
Sumit Semwal <sumit.semwal@...aro.org>,
Christian König <christian.koenig@....com>,
dri-devel@...ts.freedesktop.org,
linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org,
linux-kernel@...r.kernel.org
Cc: opensource.kernel@...o.com,
Huan Yang <link@...o.com>
Subject: [PATCH v3] udmabuf: use kmem_cache to alloc udmabuf folio

The udmabuf_folio structure contains a list_head and the corresponding
folio pointer, for a total size of 24 bytes on 64-bit systems. It is
currently allocated with kmalloc().

However, kmalloc() is served from the general-purpose slab caches, whose
size classes start at 8, 16, 32 bytes and so on. A request that does not
match one of these classes is rounded up to the next one. As a result,
each udmabuf_folio allocation consumes a 32-byte object and wastes
8 bytes.
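
For reference, a minimal sketch of the structure behind these numbers
(the layout here is illustrative; the real definition lives in
drivers/dma-buf/udmabuf.c):

	/* 24 bytes on 64-bit: 8 (struct folio *) + 16 (struct list_head) */
	struct udmabuf_folio {
		struct folio *folio;		/* pinned folio */
		struct list_head list;		/* entry on the unpin list */
	};

	/* kmalloc(sizeof(struct udmabuf_folio)) lands in kmalloc-32 */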

Since a udmabuf allocates one udmabuf_folio for every folio it pins, the
wasted memory can become significant when memory is fragmented and many
small folios have to be pinned.

Furthermore, if udmabuf is used frequently, udmabuf_folio objects will
also be allocated and freed frequently.

Therefore, this patch adds a kmem_cache dedicated to allocating and
freeing udmabuf_folio objects. This is expected to improve allocation and
free performance while avoiding the memory waste described above.
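
For context, a minimal sketch of the generic kmem_cache pattern the patch
follows (the cache name matches the patch; surrounding module code and
error handling are trimmed):

	static struct kmem_cache *udmabuf_folio_cachep;

	/* module init: cache sized and aligned exactly for the struct */
	udmabuf_folio_cachep = KMEM_CACHE(udmabuf_folio, 0);
	if (!udmabuf_folio_cachep)
		return -ENOMEM;

	/* hot path: fixed-size objects come from the dedicated cache */
	ubuf_folio = kmem_cache_alloc(udmabuf_folio_cachep, GFP_KERNEL);
	...
	kmem_cache_free(udmabuf_folio_cachep, ubuf_folio);
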
Signed-off-by: Huan Yang <link@...o.com>
---
v2 -> v3: fix error description.
v1 -> v2: fix double unregister, remove unlikely.

drivers/dma-buf/udmabuf.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 047c3cd2ceff..c112c58ef09a 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -24,6 +24,8 @@ static int size_limit_mb = 64;
module_param(size_limit_mb, int, 0644);
MODULE_PARM_DESC(size_limit_mb, "Max size of a dmabuf, in megabytes. Default is 64.");
+static struct kmem_cache *udmabuf_folio_cachep;
+
struct udmabuf {
pgoff_t pagecount;
struct folio **folios;
@@ -169,7 +171,7 @@ static void unpin_all_folios(struct list_head *unpin_list)
unpin_folio(ubuf_folio->folio);
list_del(&ubuf_folio->list);
- kfree(ubuf_folio);
+ kmem_cache_free(udmabuf_folio_cachep, ubuf_folio);
}
}
@@ -178,7 +180,7 @@ static int add_to_unpin_list(struct list_head *unpin_list,
{
struct udmabuf_folio *ubuf_folio;
- ubuf_folio = kzalloc(sizeof(*ubuf_folio), GFP_KERNEL);
+ ubuf_folio = kmem_cache_alloc(udmabuf_folio_cachep, GFP_KERNEL);
if (!ubuf_folio)
return -ENOMEM;
@@ -491,11 +493,20 @@ static int __init udmabuf_dev_init(void)
DMA_BIT_MASK(64));
if (ret < 0) {
pr_err("Could not setup DMA mask for udmabuf device\n");
- misc_deregister(&udmabuf_misc);
- return ret;
+ goto err;
+ }
+
+ udmabuf_folio_cachep = KMEM_CACHE(udmabuf_folio, 0);
+ if (!udmabuf_folio_cachep) {
+ ret = -ENOMEM;
+ goto err;
}
return 0;
+
+err:
+ misc_deregister(&udmabuf_misc);
+ return ret;
}
static void __exit udmabuf_dev_exit(void)
base-commit: cd19ac2f903276b820f5d0d89de0c896c27036ed
--
2.45.2