Message-ID: <s5hbmtmm2ah.wl-tiwai@suse.de>
Date: Tue, 28 Feb 2017 22:30:14 +0100
From: Takashi Iwai <tiwai@...e.de>
To: "Dmitry Vyukov" <dvyukov@...gle.com>
Cc: <alsa-devel@...a-project.org>, "Jaroslav Kysela" <perex@...ex.cz>,
"LKML" <linux-kernel@...r.kernel.org>,
"syzkaller" <syzkaller@...glegroups.com>
Subject: Re: sound: deadlocked processes in snd_seq_pool_done
On Tue, 28 Feb 2017 17:26:27 +0100,
Dmitry Vyukov wrote:
>
> Hello,
>
> The following program creates deadlocked processes in snd_seq_pool_done:
>
> https://gist.githubusercontent.com/dvyukov/1c9b1e4641f46c16a42cdb4458bbe8fd/raw/a09912f3cff4d86f412cedf5f5b4a5c98a9f0040/gistfile1.txt
>
> # cat /proc/2964/status
> Name: a.out
> State: D (disk sleep)
> Tgid: 2964
> Ngid: 0
> Pid: 2964
> PPid: 2956
> TracerPid: 0
> Uid: 0 0 0 0
> Gid: 0 0 0 0
> FDSize: 0
> Groups: 0
> NStgid: 2964
> NSpid: 2964
> NSpgid: 2964
> NSsid: 2956
> Threads: 1
> SigQ: 1/3304
> SigPnd: 0000000000000000
> ShdPnd: 0000000000000100
> SigBlk: 0000000000000000
> SigIgn: 0000000000000000
> SigCgt: 0000000180000000
> CapInh: 0000000000000000
> CapPrm: 0000003fffffffff
> CapEff: 0000003fffffffff
> CapBnd: 0000003fffffffff
> CapAmb: 0000000000000000
> NoNewPrivs: 0
> Seccomp: 0
> Cpus_allowed: f
> Cpus_allowed_list: 0-3
> Mems_allowed: 00000000,00000001
> Mems_allowed_list: 0
> voluntary_ctxt_switches: 103997
> nonvoluntary_ctxt_switches: 0
>
> # cat /proc/2964/task/*/stack
> [<ffffffff8356c96b>] snd_seq_pool_done+0x31b/0x620
> sound/core/seq/seq_memory.c:435
> [<ffffffff83571961>] snd_seq_fifo_delete+0x161/0x1f0
> sound/core/seq/seq_fifo.c:83
> [<ffffffff83562b4f>] snd_seq_release+0x7f/0xe0
> sound/core/seq/seq_clientmgr.c:368
> [<ffffffff81a8aae2>] __fput+0x332/0x7f0 fs/file_table.c:208
> [<ffffffff81a8b025>] ____fput+0x15/0x20 fs/file_table.c:244
> [<ffffffff814c782a>] task_work_run+0x18a/0x260 kernel/task_work.c:116
> [<ffffffff81451a86>] exit_task_work include/linux/task_work.h:21 [inline]
> [<ffffffff81451a86>] do_exit+0x1956/0x2900 kernel/exit.c:873
> [<ffffffff81457579>] do_group_exit+0x149/0x420 kernel/exit.c:977
> [<ffffffff8145786d>] SYSC_exit_group kernel/exit.c:988 [inline]
> [<ffffffff8145786d>] SyS_exit_group+0x1d/0x20 kernel/exit.c:986
> [<ffffffff8457c781>] entry_SYSCALL_64_fastpath+0x1f/0xc2
> [<ffffffffffffffff>] 0xffffffffffffffff
Thanks for the report.
It's indeed a bug in the sequencer FIFO management. The patch below
fixes it.
Takashi
-- 8< --
From: Takashi Iwai <tiwai@...e.de>
Subject: [PATCH] ALSA: seq: Fix link corruption by event error handling
The sequencer FIFO management has a bug that may lead to a corruption
(shortage) of the cell linked list. When a sequencer client faces an
error at the event delivery, it tries to put back the dequeued cell.
When the first cell is put back to an empty FIFO, the tail pointer
tracking is forgotten, and the linked list gets corrupted.
Although there is no memory corruption, the sequencer client may stall
forever at exit while flushing the pending FIFO cells in
snd_seq_pool_done(), as spotted by syzkaller.
This patch addresses the missing tail pointer tracking at
snd_seq_fifo_cell_putback(). Also the patch makes sure to clear the
cell->next pointer at snd_seq_fifo_event_in() for avoiding a similar
mess-up of the FIFO linked list.
Reported-by: Dmitry Vyukov <dvyukov@...gle.com>
Cc: <stable@...r.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@...e.de>
---
sound/core/seq/seq_fifo.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
index 1d5acbe0c08b..86240d02b530 100644
--- a/sound/core/seq/seq_fifo.c
+++ b/sound/core/seq/seq_fifo.c
@@ -135,6 +135,7 @@ int snd_seq_fifo_event_in(struct snd_seq_fifo *f,
f->tail = cell;
if (f->head == NULL)
f->head = cell;
+ cell->next = NULL;
f->cells++;
spin_unlock_irqrestore(&f->lock, flags);
@@ -214,6 +215,8 @@ void snd_seq_fifo_cell_putback(struct snd_seq_fifo *f,
spin_lock_irqsave(&f->lock, flags);
cell->next = f->head;
f->head = cell;
+ if (!f->tail)
+ f->tail = cell;
f->cells++;
spin_unlock_irqrestore(&f->lock, flags);
}
--
2.11.1