[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <2f20160a-b9da-4fa3-3796-ed90c6175ebe@arm.com>
Date: Fri, 18 Sep 2020 15:41:00 +0100
From: Robin Murphy <robin.murphy@....com>
To: vjitta@...eaurora.org, joro@...tes.org,
iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org
Cc: vinmenon@...eaurora.org, kernel-team@...roid.com
Subject: Re: [PATCH v2 2/2] iommu/iova: Free global iova rcache on iova alloc
failure
On 2020-08-20 13:49, vjitta@...eaurora.org wrote:
> From: Vijayanand Jitta <vjitta@...eaurora.org>
>
> When ever an iova alloc request fails we free the iova
> ranges present in the percpu iova rcaches and then retry
> but the global iova rcache is not freed as a result we could
> still see iova alloc failure even after retry as global
> rcache is holding the iova's which can cause fragmentation.
> So, free the global iova rcache as well and then go for the
> retry.
>
> Signed-off-by: Vijayanand Jitta <vjitta@...eaurora.org>
> ---
> drivers/iommu/iova.c | 23 +++++++++++++++++++++++
> include/linux/iova.h | 6 ++++++
> 2 files changed, 29 insertions(+)
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 4e77116..5836c87 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -442,6 +442,7 @@ struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
> flush_rcache = false;
> for_each_online_cpu(cpu)
> free_cpu_cached_iovas(cpu, iovad);
> + free_global_cached_iovas(iovad);
> goto retry;
> }
>
> @@ -1055,5 +1056,27 @@ void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
> }
> }
>
> +/*
> + * free all the IOVA ranges of global cache
> + */
> +void free_global_cached_iovas(struct iova_domain *iovad)
As John pointed out last time, this should be static and the header
changes dropped.
(TBH we should probably register our own hotplug notifier instance for a
flush queue, so that external code has no need to poke at the per-CPU
caches either)
Robin.
> +{
> + struct iova_rcache *rcache;
> + unsigned long flags;
> + int i, j;
> +
> + for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
> + rcache = &iovad->rcaches[i];
> + spin_lock_irqsave(&rcache->lock, flags);
> + for (j = 0; j < rcache->depot_size; ++j) {
> + iova_magazine_free_pfns(rcache->depot[j], iovad);
> + iova_magazine_free(rcache->depot[j]);
> + rcache->depot[j] = NULL;
> + }
> + rcache->depot_size = 0;
> + spin_unlock_irqrestore(&rcache->lock, flags);
> + }
> +}
> +
> MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>");
> MODULE_LICENSE("GPL");
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index a0637ab..a905726 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -163,6 +163,7 @@ int init_iova_flush_queue(struct iova_domain *iovad,
> struct iova *split_and_remove_iova(struct iova_domain *iovad,
> struct iova *iova, unsigned long pfn_lo, unsigned long pfn_hi);
> void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
> +void free_global_cached_iovas(struct iova_domain *iovad);
> #else
> static inline int iova_cache_get(void)
> {
> @@ -270,6 +271,11 @@ static inline void free_cpu_cached_iovas(unsigned int cpu,
> struct iova_domain *iovad)
> {
> }
> +
> +static inline void free_global_cached_iovas(struct iova_domain *iovad)
> +{
> +}
> +
> #endif
>
> #endif
>
Powered by blists - more mailing lists