Message-ID: <20240216043426.1218-1-hdanton@sina.com>
Date: Fri, 16 Feb 2024 12:34:24 +0800
From: Hillf Danton <hdanton@...a.com>
To: Takashi Iwai <tiwai@...e.de>
Cc: Sven van Ashbrook <svenva@...omium.org>,
Karthikeyan Ramasubramanian <kramasub@...omium.org>,
LKML <linux-kernel@...r.kernel.org>,
Brian Geffon <bgeffon@...gle.com>,
linux-sound@...r.kernel.org,
Kai Vehmanen <kai.vehmanen@...ux.intel.com>
Subject: Re: [PATCH v1] ALSA: memalloc: Fix indefinite hang in non-iommu case
On Thu, 15 Feb 2024 18:03:01 +0100 Takashi Iwai <tiwai@...e.de> wrote:
>
> So it sounds like that we should go back for __GFP_NORETRY in general
> for non-zero order allocations, not only the call you changed, as
> __GFP_RETRY_MAYFAIL doesn't guarantee avoiding the hang.
>
> How about the changes like below?
>
> +/* default GFP bits for our allocations */
> +static gfp_t default_gfp(size_t size)
> +{
> + /* don't allocate intensively for high-order pages */
> + if (size > PAGE_SIZE)
> + return GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY;
> + else
> + return GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL;
> +}
Looks like overkill, because both __GFP_NORETRY and __GFP_RETRY_MAYFAIL
are already checked in __alloc_pages_slowpath().
--- x/sound/core/memalloc.c
+++ y/sound/core/memalloc.c
@@ -540,13 +540,20 @@ static void *snd_dma_noncontig_alloc(str
{
struct sg_table *sgt;
void *p;
+ gfp_t gfp = DEFAULT_GFP;
#ifdef CONFIG_SND_DMA_SGBUF
if (cpu_feature_enabled(X86_FEATURE_XENPV))
return snd_dma_sg_fallback_alloc(dmab, size);
+ /*
+ * A fallback is available, so give up on allocations at or above
+ * PAGE_ALLOC_COSTLY_ORDER; lower orders are handled by the page allocator.
+ */
+ if (!get_dma_ops(dmab->dev.dev))
+ gfp &= ~__GFP_RETRY_MAYFAIL;
#endif
- sgt = dma_alloc_noncontiguous(dmab->dev.dev, size, dmab->dev.dir,
- DEFAULT_GFP, 0);
+ sgt = dma_alloc_noncontiguous(dmab->dev.dev, size, dmab->dev.dir, gfp, 0);
+
#ifdef CONFIG_SND_DMA_SGBUF
if (!sgt && !get_dma_ops(dmab->dev.dev))
return snd_dma_sg_fallback_alloc(dmab, size);