Message-ID: <87h646r3zn.wl-tiwai@suse.de>
Date: Thu, 06 Mar 2025 08:51:40 +0100
From: Takashi Iwai <tiwai@...e.de>
To: Lizhi Xu <lizhi.xu@...driver.com>
Cc: <syzbot+2d373c9936c00d7e120c@...kaller.appspotmail.com>,
<linux-kernel@...r.kernel.org>,
<linux-sound@...r.kernel.org>,
<perex@...ex.cz>,
<syzkaller-bugs@...glegroups.com>,
<tiwai@...e.com>
Subject: Re: [PATCH] ALSA: seq: Use atomic to prevent data races in total_elements
On Thu, 06 Mar 2025 02:17:45 +0100,
Lizhi Xu wrote:
>
> syzbot reported a data-race in snd_seq_poll / snd_seq_pool_init. [1]
>
> Just use atomic_set/atomic_read for handling this case.
>
> [1]
> BUG: KCSAN: data-race in snd_seq_poll / snd_seq_pool_init
>
> write to 0xffff888114535610 of 4 bytes by task 7006 on cpu 1:
> snd_seq_pool_init+0x1c1/0x200 sound/core/seq/seq_memory.c:469
> snd_seq_write+0x17f/0x500 sound/core/seq/seq_clientmgr.c:1022
> vfs_write+0x27d/0x920 fs/read_write.c:677
> ksys_write+0xe8/0x1b0 fs/read_write.c:731
> __do_sys_write fs/read_write.c:742 [inline]
> __se_sys_write fs/read_write.c:739 [inline]
> __x64_sys_write+0x42/0x50 fs/read_write.c:739
> x64_sys_call+0x287e/0x2dc0 arch/x86/include/generated/asm/syscalls_64.h:2
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0xc9/0x1c0 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> read to 0xffff888114535610 of 4 bytes by task 7005 on cpu 0:
> snd_seq_total_cells sound/core/seq/seq_memory.h:83 [inline]
> snd_seq_write_pool_allocated sound/core/seq/seq_clientmgr.c:95 [inline]
> snd_seq_poll+0x103/0x170 sound/core/seq/seq_clientmgr.c:1139
> vfs_poll include/linux/poll.h:82 [inline]
> __io_arm_poll_handler+0x1e5/0xd50 io_uring/poll.c:582
> io_arm_poll_handler+0x464/0x5b0 io_uring/poll.c:707
> io_queue_async+0x89/0x320 io_uring/io_uring.c:1925
> io_queue_sqe io_uring/io_uring.c:1954 [inline]
> io_req_task_submit+0xb9/0xc0 io_uring/io_uring.c:1373
> io_handle_tw_list+0x1b9/0x200 io_uring/io_uring.c:1059
> tctx_task_work_run+0x6e/0x1c0 io_uring/io_uring.c:1123
> tctx_task_work+0x40/0x80 io_uring/io_uring.c:1141
> task_work_run+0x13a/0x1a0 kernel/task_work.c:227
> get_signal+0xe78/0x1000 kernel/signal.c:2809
> arch_do_signal_or_restart+0x95/0x4b0 arch/x86/kernel/signal.c:337
> exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
> exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
> __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
> syscall_exit_to_user_mode+0x62/0x120 kernel/entry/common.c:218
> do_syscall_64+0xd6/0x1c0 arch/x86/entry/common.c:89
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>
> value changed: 0x00000000 -> 0x000001f4
This is harmless as it's only a reference in poll(), and that's rather
volatile anyway.  So converting the whole thing to atomic_t is
overkill just for that.

OTOH, the pool-size check on the caller side is fragile; the caller
can rely purely on snd_seq_pool_poll_wait().  And there, it should
take pool->lock for data consistency.

So, if anything, an alternative fix would be something like below.

thanks,

Takashi
-- 8< --
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -1150,8 +1150,7 @@ static __poll_t snd_seq_poll(struct file *file, poll_table * wait)
 	if (snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT) {
 		/* check if data is available in the pool */
-		if (!snd_seq_write_pool_allocated(client) ||
-		    snd_seq_pool_poll_wait(client->pool, file, wait))
+		if (snd_seq_pool_poll_wait(client->pool, file, wait))
 			mask |= EPOLLOUT | EPOLLWRNORM;
 	}
--- a/sound/core/seq/seq_memory.c
+++ b/sound/core/seq/seq_memory.c
@@ -427,6 +427,7 @@ int snd_seq_pool_poll_wait(struct snd_seq_pool *pool, struct file *file,
 			   poll_table *wait)
 {
 	poll_wait(file, &pool->output_sleep, wait);
+	guard(spinlock_irq)(&pool->lock);
 	return snd_seq_output_ok(pool);
 }