Message-Id: <20240221085558.166774-1-chentao@kylinos.cn>
Date: Wed, 21 Feb 2024 16:55:58 +0800
From: Kunwu Chan <chentao@...inos.cn>
To: ltuikov89@...il.com,
maarten.lankhorst@...ux.intel.com,
mripard@...nel.org,
tzimmermann@...e.de,
airlied@...il.com,
daniel@...ll.ch
Cc: dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org,
Kunwu Chan <chentao@...inos.cn>
Subject: [PATCH] drm/scheduler: Simplify the allocation of slab caches in drm_sched_fence_slab_init
Use the KMEM_CACHE() macro instead of calling kmem_cache_create()
directly, to simplify the creation of the slab cache.
Signed-off-by: Kunwu Chan <chentao@...inos.cn>
---
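Note for reviewers (not for the commit message): a rough sketch of what
the two variants boil down to, assuming the KMEM_CACHE() definition in
include/linux/slab.h; the main behavioral difference is that the macro
derives the cache name from the struct name and passes
__alignof__(struct drm_sched_fence) as the alignment instead of 0.

    /* before: explicit call, 0 alignment */
    sched_fence_slab = kmem_cache_create("drm_sched_fence",
                                         sizeof(struct drm_sched_fence), 0,
                                         SLAB_HWCACHE_ALIGN, NULL);

    /* after: KMEM_CACHE() expands to roughly the following */
    sched_fence_slab = kmem_cache_create("drm_sched_fence",
                                         sizeof(struct drm_sched_fence),
                                         __alignof__(struct drm_sched_fence),
                                         SLAB_HWCACHE_ALIGN, NULL);

The resulting cache name stays "drm_sched_fence", since the struct is
named drm_sched_fence.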
drivers/gpu/drm/scheduler/sched_fence.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 06cedfe4b486..0f35f009b9d3 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -33,9 +33,7 @@ static struct kmem_cache *sched_fence_slab;
static int __init drm_sched_fence_slab_init(void)
{
- sched_fence_slab = kmem_cache_create(
- "drm_sched_fence", sizeof(struct drm_sched_fence), 0,
- SLAB_HWCACHE_ALIGN, NULL);
+ sched_fence_slab = KMEM_CACHE(drm_sched_fence, SLAB_HWCACHE_ALIGN);
if (!sched_fence_slab)
return -ENOMEM;
--
2.39.2