Message-Id: <1457391064-6660-162-git-send-email-kamal@canonical.com>
Date: Mon, 7 Mar 2016 14:49:12 -0800
From: Kamal Mostafa <kamal@...onical.com>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
kernel-team@...ts.ubuntu.com
Cc: Takashi Iwai <tiwai@...e.de>, Kamal Mostafa <kamal@...onical.com>
Subject: [PATCH 4.2.y-ckt 161/273] ALSA: seq: Fix leak of pool buffer at concurrent writes
4.2.8-ckt5 -stable review patch.  If anyone has any objections, please let me know.

---8<------------------------------------------------------------

From: Takashi Iwai <tiwai@...e.de>

commit d99a36f4728fcbcc501b78447f625bdcce15b842 upstream.
When multiple concurrent writes happen on the ALSA sequencer device
right after the open, each write may try to allocate a vmalloc buffer
and some of them get leaked.  This is because the presence check and
the assignment of the buffer are done outside the pool's spinlock.

The fix is to move the check and the assignment into the
spinlock-protected section.

(The current implementation is still suboptimal, as there can be
 multiple unnecessary vmallocs because the allocation is done before
 the check inside the spinlock.  But the pool size is already checked
 beforehand, so this isn't a big problem; that is, the only possible
 path is multiple writes before any pool assignment, and in practice
 the current coverage should be "good enough".)

The issue was triggered by the syzkaller fuzzer.
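For illustration only, here is a minimal user-space sketch of the same
"allocate outside the lock, publish under the lock" pattern that the
patch below applies.  It is not the kernel code: demo_pool and
demo_pool_init are invented names for this example, and a pthread mutex
plus malloc() stand in for the pool spinlock and vmalloc().

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct demo_pool {
	pthread_mutex_t lock;	/* stands in for the pool spinlock */
	void *ptr;		/* buffer shared by all writers */
	size_t size;
};

static int demo_pool_init(struct demo_pool *pool)
{
	/* Allocate before taking the lock, like the patch does with vmalloc() */
	void *buf = malloc(pool->size);

	if (!buf)
		return -1;

	pthread_mutex_lock(&pool->lock);
	if (pool->ptr) {
		/* Another writer won the race: keep theirs, free ours */
		pthread_mutex_unlock(&pool->lock);
		free(buf);
		return 0;
	}
	pool->ptr = buf;	/* publish the buffer while still holding the lock */
	pthread_mutex_unlock(&pool->lock);
	return 0;
}

int main(void)
{
	struct demo_pool pool = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.ptr = NULL,
		.size = 4096,
	};

	/* Two back-to-back init calls: the second must not leak a buffer */
	demo_pool_init(&pool);
	demo_pool_init(&pool);

	printf("pool buffer installed at %p\n", pool.ptr);
	free(pool.ptr);
	return 0;
}

The point of this shape is that the possibly slow allocation never
happens while the lock is held, while the check-and-publish step stays
atomic with respect to other writers, so a losing writer simply frees
its own buffer instead of leaking it.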
BugLink: http://lkml.kernel.org/r/CACT4Y+bSzazpXNvtAr=WXaL8hptqjHwqEyFA+VN2AWEx=aurkg@mail.gmail.com
Reported-by: Dmitry Vyukov <dvyukov@...gle.com>
Tested-by: Dmitry Vyukov <dvyukov@...gle.com>
Signed-off-by: Takashi Iwai <tiwai@...e.de>
Signed-off-by: Kamal Mostafa <kamal@...onical.com>
---
sound/core/seq/seq_memory.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
index 8010766..c850345 100644
--- a/sound/core/seq/seq_memory.c
+++ b/sound/core/seq/seq_memory.c
@@ -383,15 +383,20 @@ int snd_seq_pool_init(struct snd_seq_pool *pool)
 
 	if (snd_BUG_ON(!pool))
 		return -EINVAL;
-	if (pool->ptr)		/* should be atomic? */
-		return 0;
 
-	pool->ptr = vmalloc(sizeof(struct snd_seq_event_cell) * pool->size);
-	if (!pool->ptr)
+	cellptr = vmalloc(sizeof(struct snd_seq_event_cell) * pool->size);
+	if (!cellptr)
 		return -ENOMEM;
 
 	/* add new cells to the free cell list */
 	spin_lock_irqsave(&pool->lock, flags);
+	if (pool->ptr) {
+		spin_unlock_irqrestore(&pool->lock, flags);
+		vfree(cellptr);
+		return 0;
+	}
+
+	pool->ptr = cellptr;
 	pool->free = NULL;
 
 	for (cell = 0; cell < pool->size; cell++) {
--
2.7.0