Message-ID: <alpine.DEB.2.22.394.2006272124470.591864@chino.kir.corp.google.com>
Date: Sat, 27 Jun 2020 21:25:21 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Guenter Roeck <linux@...ck-us.net>
cc: Christoph Hellwig <hch@....de>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
Linux IOMMU <iommu@...ts.linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [patch] dma-pool: warn when coherent pool is depleted
On Sun, 21 Jun 2020, Guenter Roeck wrote:
> > When a DMA coherent pool is depleted, allocation failures may or may not
> > get reported in the kernel log depending on the allocator.
> >
> > The admin does have a workaround, however, by using coherent_pool= on the
> > kernel command line.
> >
> > Provide some guidance on the failure and a recommended minimum size for
> > the pools (double the size).
> >
> > Signed-off-by: David Rientjes <rientjes@...gle.com>
>
> Tested-by: Guenter Roeck <linux@...ck-us.net>
>
> Also confirmed that coherent_pool=256k works around the crash
> I had observed.
>
Thanks Guenter. Christoph, does it make sense to apply this patch, given
that the caller may not leave any artifact behind in the kernel log when
the allocation fails?
> Guenter
>
> > ---
> > kernel/dma/pool.c | 6 +++++-
> > 1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> > --- a/kernel/dma/pool.c
> > +++ b/kernel/dma/pool.c
> > @@ -239,12 +239,16 @@ void *dma_alloc_from_pool(struct device *dev, size_t size,
> >  	}
> >
> >  	val = gen_pool_alloc(pool, size);
> > -	if (val) {
> > +	if (likely(val)) {
> >  		phys_addr_t phys = gen_pool_virt_to_phys(pool, val);
> >
> >  		*ret_page = pfn_to_page(__phys_to_pfn(phys));
> >  		ptr = (void *)val;
> >  		memset(ptr, 0, size);
> > +	} else {
> > +		WARN_ONCE(1, "DMA coherent pool depleted, increase size "
> > +			  "(recommended min coherent_pool=%zuK)\n",
> > +			  gen_pool_size(pool) >> 9);
> >  	}
> >  	if (gen_pool_avail(pool) < atomic_pool_size)
> >  		schedule_work(&atomic_pool_work);
>
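
For context on the number printed by that warning: gen_pool_size() returns
the pool size in bytes, and shifting it right by 9 both converts to KiB and
doubles the value, so the message recommends twice the current pool size,
matching the "double the size" guidance in the changelog. A minimal
userspace sketch of the same arithmetic (the helper name and the 128 KiB
figure are only illustrative, not taken from the kernel source):

#include <stdio.h>
#include <stddef.h>

/* For KiB-aligned sizes, bytes >> 9 == (bytes / 1024) * 2,
 * i.e. double the size expressed in KiB.
 */
static size_t recommended_pool_kib(size_t pool_size_bytes)
{
	return pool_size_bytes >> 9;
}

int main(void)
{
	size_t pool_size = 128 * 1024;	/* e.g. a 128 KiB atomic pool */

	printf("recommended min coherent_pool=%zuK\n",
	       recommended_pool_kib(pool_size));	/* prints 256 */
	return 0;
}

So a 128 KiB pool that runs dry would suggest coherent_pool=256K, which
lines up with the workaround Guenter confirmed above.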