Message-ID: <87wmzqv64o.wl-tiwai@suse.de>
Date: Mon, 26 Jun 2023 13:02:15 +0200
From: Takashi Iwai <tiwai@...e.de>
To: Jaroslav Kysela <perex@...ex.cz>
Cc: Tuo Li <islituo@...il.com>, tiwai@...e.com,
alsa-devel@...a-project.org,
Linux Kernel <linux-kernel@...r.kernel.org>,
baijiaju1990@...look.com
Subject: Re: [BUG] ALSA: core: pcm_memory: a possible data race in do_alloc_pages()
On Mon, 26 Jun 2023 09:56:47 +0200,
Jaroslav Kysela wrote:
>
> On 26. 06. 23 9:33, Takashi Iwai wrote:
> > On Mon, 26 Jun 2023 09:31:18 +0200,
> > Tuo Li wrote:
> >>
> >>
> >> Hello,
> >>
> >> Thank you for your reply!
> >
> > FWIW, the simplest fix would be something like below, just extending
> > the mutex coverage.  But it'll serialize all the calls, so it might
> > hurt performance, while it's the safest way.
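
(That simpler patch isn't re-quoted in this message; it was roughly
along the lines below.  A sketch only, not the original patch: the
whole check / allocate / account sequence runs under
card->memory_mutex, so the check and the update can't race, at the
cost of serializing all allocations on the card.)

static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
			  int str, size_t size, struct snd_dma_buffer *dmab)
{
	enum dma_data_direction dir;
	int err = -ENOMEM;

	mutex_lock(&card->memory_mutex);
	if (max_alloc_per_card &&
	    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
		goto out;
	if (str == SNDRV_PCM_STREAM_PLAYBACK)
		dir = DMA_TO_DEVICE;
	else
		dir = DMA_FROM_DEVICE;
	err = snd_dma_alloc_dir_pages(type, dev, dir, size, dmab);
	if (!err)
		/* account the actual size; it may exceed the request */
		card->total_pcm_alloc_bytes += dmab->bytes;
out:
	mutex_unlock(&card->memory_mutex);
	return err;
}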
>
> It may be better to update total_pcm_alloc_bytes before
> snd_dma_alloc_dir_pages() call and decrease this value when allocation
> fails to allow parallel allocations. Then the mutex can be held only
> for the total_pcm_alloc_bytes variable update.
Yes, it'd work.  But a tricky part is that the actual allocation size
can be bigger than requested, and we need to correct
total_pcm_alloc_bytes based on the actual allocation result.  So the
end result would be a patch like below, which is a bit more complex
than the previous simpler approach.  But it might be OK.
> Eventually, total_pcm_alloc_bytes may be atomic.
Possible, but the same problem as above applies: the account still has
to be corrected after the allocation, so I believe the mutex is good
enough.
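
For illustration, a hypothetical sketch of the atomic variant,
assuming total_pcm_alloc_bytes were converted to atomic_long_t
(reserve_pcm_bytes() is a made-up helper name):

static int reserve_pcm_bytes(struct snd_card *card, size_t size)
{
	/* check-and-reserve in a single atomic step, no lock needed */
	if (max_alloc_per_card &&
	    atomic_long_add_return(size, &card->total_pcm_alloc_bytes) >
	    max_alloc_per_card) {
		atomic_long_sub(size, &card->total_pcm_alloc_bytes);
		return -ENOMEM;
	}
	/* the caller still has to correct the account by
	 * dmab->bytes - size after snd_dma_alloc_dir_pages(),
	 * just like the mutex version below
	 */
	return 0;
}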
Another alternative would be to move the size check after the
successful allocation, assuming that exceeding the limit is a very
exceptional scenario.  The code flow would be a bit simpler.
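
That alternative would look roughly like this (a sketch, not the patch
below; snd_dma_free_pages() drops the buffer again when the check
fails after the fact):

	err = snd_dma_alloc_dir_pages(type, dev, dir, size, dmab);
	if (err < 0)
		return err;
	mutex_lock(&card->memory_mutex);
	if (max_alloc_per_card &&
	    card->total_pcm_alloc_bytes + dmab->bytes > max_alloc_per_card) {
		mutex_unlock(&card->memory_mutex);
		snd_dma_free_pages(dmab);
		return -ENOMEM;
	}
	card->total_pcm_alloc_bytes += dmab->bytes;
	mutex_unlock(&card->memory_mutex);
	return 0;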
thanks,
Takashi
--- a/sound/core/pcm_memory.c
+++ b/sound/core/pcm_memory.c
@@ -37,9 +37,14 @@ static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
 	enum dma_data_direction dir;
 	int err;
 
+	mutex_lock(&card->memory_mutex);
 	if (max_alloc_per_card &&
-	    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
+	    card->total_pcm_alloc_bytes + size > max_alloc_per_card) {
+		mutex_unlock(&card->memory_mutex);
 		return -ENOMEM;
+	}
+	card->total_pcm_alloc_bytes += size;
+	mutex_unlock(&card->memory_mutex);
 
 	if (str == SNDRV_PCM_STREAM_PLAYBACK)
 		dir = DMA_TO_DEVICE;
@@ -47,8 +52,18 @@ static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
 		dir = DMA_FROM_DEVICE;
 	err = snd_dma_alloc_dir_pages(type, dev, dir, size, dmab);
 	if (!err) {
+		/* the actual allocation size might be bigger than requested,
+		 * and we need to correct the account
+		 */
+		if (dmab->bytes != size) {
+			mutex_lock(&card->memory_mutex);
+			card->total_pcm_alloc_bytes += dmab->bytes - size;
+			mutex_unlock(&card->memory_mutex);
+		}
+	} else {
+		/* allocation failure, take back */
 		mutex_lock(&card->memory_mutex);
-		card->total_pcm_alloc_bytes += dmab->bytes;
+		card->total_pcm_alloc_bytes -= size;
 		mutex_unlock(&card->memory_mutex);
 	}
 	return err;