Message-ID: <04790bd9-4f20-4d51-b988-ad4b44eeee98@arm.com>
Date: Wed, 17 Aug 2022 13:50:09 +0100
From: Robin Murphy <robin.murphy@....com>
To: Christoph Hellwig <hch@....de>, iommu@...ts.linux.dev,
Marek Szyprowski <m.szyprowski@...sung.com>
Cc: Michal Hocko <mhocko@...e.com>,
David Rientjes <rientjes@...gle.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] dma-pool: limit DMA and DMA32 zone size pools
On 2022-08-17 07:06, Christoph Hellwig wrote:
> Limit the sizing of the atomic pools for allocations from
> ZONE_DMA and ZONE_DMA32 based on the number of pages actually
> present in those zones instead of the total memory.
>
> Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
> Reported-by: Michal Hocko <mhocko@...e.com>
> Signed-off-by: Christoph Hellwig <hch@....de>
> ---
> kernel/dma/pool.c | 40 +++++++++++++++++++++++++++++-----------
> 1 file changed, 29 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> index 5b07b0379a501..f629c6dfd8555 100644
> --- a/kernel/dma/pool.c
> +++ b/kernel/dma/pool.c
> @@ -193,28 +193,46 @@ static unsigned long calculate_pool_size(unsigned long zone_pages)
> return max_t(unsigned long, nr_pages << PAGE_SHIFT, SZ_128K);
> }
>
> +#if defined(CONFIG_ZONE_DMA) || defined(CONFIG_ZONE_DMA32)
This #ifdeffery seems horribly clunky - I think it would be much nicer
to mark this __maybe_unused, and preserve the has_managed_dma/IS_ENABLED
logic below.
> +static unsigned long __init nr_managed_pages(int zone_idx)
> +{
> + struct pglist_data *pgdat;
> + unsigned long nr_pages = 0;
> +
> + for_each_online_pgdat(pgdat)
> + nr_pages += zone_managed_pages(&pgdat->node_zones[zone_idx]);
> + return nr_pages;
> +}
> +#endif /* CONFIG_ZONE_DMA || CONFIG_ZONE_DMA32 */
> +
> static int __init dma_atomic_pool_init(void)
> {
> + unsigned long nr_zone_dma_pages, nr_zone_dma32_pages;
...otherwise, I expect the buildbots will be along shortly with unused
variable warnings for these :)
Cheers,
Robin.
> +
> +#ifdef CONFIG_ZONE_DMA
> + nr_zone_dma_pages = nr_managed_pages(ZONE_DMA);
> + if (nr_zone_dma_pages)
> + atomic_pool_dma = __dma_atomic_pool_init(
> + calculate_pool_size(nr_zone_dma_pages),
> + GFP_DMA);
> +#endif
> +#ifdef CONFIG_ZONE_DMA32
> + nr_zone_dma32_pages = nr_managed_pages(ZONE_DMA32);
> + if (nr_zone_dma32_pages)
> + atomic_pool_dma32 = __dma_atomic_pool_init(
> + calculate_pool_size(nr_zone_dma32_pages),
> + GFP_DMA32);
> +#endif
> /*
> * If coherent_pool was not used on the command line, default the pool
> * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
> */
> if (!atomic_pool_size)
> atomic_pool_size = calculate_pool_size(totalram_pages());
> -
> - INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
> -
> atomic_pool_kernel = __dma_atomic_pool_init(atomic_pool_size,
> GFP_KERNEL);
> - if (has_managed_dma()) {
> - atomic_pool_dma = __dma_atomic_pool_init(atomic_pool_size,
> - GFP_KERNEL | GFP_DMA);
> - }
> - if (IS_ENABLED(CONFIG_ZONE_DMA32)) {
> - atomic_pool_dma32 = __dma_atomic_pool_init(atomic_pool_size,
> - GFP_KERNEL | GFP_DMA32);
> - }
>
> + INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
> dma_atomic_pool_debugfs_init();
> return 0;
> }