Message-ID: <9e7e3e55-1f23-4e71-90d3-83b1309b34d6@arm.com>
Date: Tue, 27 Feb 2024 12:28:26 +0000
From: Robin Murphy <robin.murphy@....com>
To: Cong Liu <liucong2@...inos.cn>, Joerg Roedel <joro@...tes.org>,
Will Deacon <will@...nel.org>
Cc: iommu@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] iommu/iova: Simplify IOVA cache allocation with
KMEM_CACHE()
On 27/02/2024 9:08 am, Cong Liu wrote:
> Use the new KMEM_CACHE() macro instead of direct kmem_cache_create
> to simplify the creation of SLAB caches.
Heh, this "new" macro has existed for more than half the lifetime of
Linux itself ;) ...and given that users are still outnumbered at least
5:1 by users of kmem_cache_alloc(), I think it's fair to say that it
hasn't really caught on all that well.
Most critically, though, as I mentioned on the previous thread, this
would change the userspace-visible cache names - where I think the
"iommu_" namespace is helpful in itself - and so impact anyone who's
already using /proc/slabinfo to track IOVA memory consumption. I don't
think that disruption is worthwhile just to save a mere 3 lines of code.
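For reference, KMEM_CACHE() derives the cache name from the struct name
itself - roughly, from include/linux/slab.h (modulo the exact current
formatting):

	/* Names the cache after the struct: KMEM_CACHE(iova, ...) -> "iova" */
	#define KMEM_CACHE(__struct, __flags)					\
		kmem_cache_create(#__struct, sizeof(struct __struct),	\
				  __alignof__(struct __struct), (__flags), NULL)

so with this patch the two caches would show up in slabinfo as "iova"
and "iova_magazine" rather than "iommu_iova" and "iommu_iova_magazine".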
Thanks,
Robin.
> Signed-off-by: Cong Liu <liucong2@...inos.cn>
> ---
> drivers/iommu/iova.c | 7 ++-----
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index d59d0ea2fd21..9134acae76f5 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -950,14 +950,11 @@ int iova_cache_get(void)
>
>  	mutex_lock(&iova_cache_mutex);
>  	if (!iova_cache_users) {
> -		iova_cache = kmem_cache_create("iommu_iova", sizeof(struct iova), 0,
> -					       SLAB_HWCACHE_ALIGN, NULL);
> +		iova_cache = KMEM_CACHE(iova, SLAB_HWCACHE_ALIGN);
>  		if (!iova_cache)
>  			goto out_err;
>  
> -		iova_magazine_cache = kmem_cache_create("iommu_iova_magazine",
> -							sizeof(struct iova_magazine),
> -							0, SLAB_HWCACHE_ALIGN, NULL);
> +		iova_magazine_cache = KMEM_CACHE(iova_magazine, SLAB_HWCACHE_ALIGN);
>  		if (!iova_magazine_cache)
>  			goto out_err;
>