Message-ID: <87a67vsb0w.wl-tiwai@suse.de>
Date:   Tue, 23 Aug 2022 08:56:47 +0200
From:   Takashi Iwai <tiwai@...e.de>
To:     Gabriel Ryan <gabe@...columbia.edu>
Cc:     abhishek.shah@...umbia.edu, alsa-devel@...a-project.org,
        perex@...ex.cz, tiwai@...e.com, linux-kernel@...r.kernel.org
Subject: Re: data-race in snd_seq_oss_midi_check_exit_port / snd_seq_oss_midi_setup

On Mon, 22 Aug 2022 22:00:42 +0200,
Gabriel Ryan wrote:
> 
> Hi Takashi,
> 
> Makes sense, we'll note this race as benign for our future reference.
> 
> Thanks for taking the time to look at this!

Although it's more or less harmless, the data race should still be
addressed.  I'm going to submit fixes for both issues you've
reported.


thanks,

Takashi

> 
> Best,
> 
> Gabe
> 
> On Fri, Aug 19, 2022 at 3:41 AM Takashi Iwai <tiwai@...e.de> wrote:
> >
> > On Fri, 19 Aug 2022 03:00:00 +0200,
> > Abhishek Shah wrote:
> > >
> > >
> > > Hi all,
> > >
> > > We found a race involving the max_midi_devs variable. We see an
> > > interleaving where the check here passes before
> > > snd_seq_oss_midi_check_exit_port() finishes, even though the check
> > > should not pass if max_midi_devs is about to become zero. We are not
> > > sure of its implications in terms of security impact. Please let us
> > > know what you think.
> >
> > From a quick glance, I'd say it's rather harmless (although a bit
> > fragile from a code-sanity POV).
> >
> > A MIDI port can be closed at any time, and dp->max_mididevs locally
> > holds the upper bound of the ports that may currently be accessible.
> > The actual access to each port is done via get_mdev() in
> > seq_oss_midi.c, which is refcount-managed, so it is fine if a port
> > disappears in the meantime.
> >
> > That said, it would even be feasible to drop the dp->max_mididevs
> > field and scan all MIDI ports each time, but that wouldn't bring much
> > benefit either.
> >
> >
> > thanks,
> >
> > Takashi
> >
> > >
> > > Thanks!
> > >
> > > -------------------Report---------------------
> > >
> > > write to 0xffffffff88382f80 of 4 bytes by task 6541 on cpu 0:
> > >  snd_seq_oss_midi_check_exit_port+0x1a6/0x270 sound/core/seq/oss/seq_oss_midi.c:237
> > >  receive_announce+0x193/0x1b0 sound/core/seq/oss/seq_oss_init.c:143
> > >  snd_seq_deliver_single_event+0x30d/0x4e0 sound/core/seq/seq_clientmgr.c:640
> > >  deliver_to_subscribers sound/core/seq/seq_clientmgr.c:695 [inline]
> > >  snd_seq_deliver_event+0x38c/0x490 sound/core/seq/seq_clientmgr.c:830
> > >  snd_seq_kernel_client_dispatch+0x189/0x1a0 sound/core/seq/seq_clientmgr.c:2339
> > >  snd_seq_system_broadcast+0x98/0xd0 sound/core/seq/seq_system.c:86
> > >  snd_seq_ioctl_delete_port+0x9a/0xc0 sound/core/seq/seq_clientmgr.c:1356
> > >  snd_seq_ioctl+0x198/0x2d0 sound/core/seq/seq_clientmgr.c:2173
> > >  vfs_ioctl fs/ioctl.c:51 [inline]
> > >  __do_sys_ioctl fs/ioctl.c:870 [inline]
> > >  __se_sys_ioctl+0xe1/0x150 fs/ioctl.c:856
> > >  __x64_sys_ioctl+0x43/0x50 fs/ioctl.c:856
> > >  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > >  do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
> > >  entry_SYSCALL_64_after_hwframe+0x44/0xae
> > >
> > > read to 0xffffffff88382f80 of 4 bytes by task 6542 on cpu 1:
> > >  snd_seq_oss_midi_setup+0x1b/0x40 sound/core/seq/oss/seq_oss_midi.c:273
> > >  snd_seq_oss_open+0x364/0x900 sound/core/seq/oss/seq_oss_init.c:198
> > >  odev_open+0x55/0x70 sound/core/seq/oss/seq_oss.c:128
> > >  soundcore_open+0x315/0x3a0 sound/sound_core.c:593
> > >  chrdev_open+0x373/0x3f0 fs/char_dev.c:414
> > >  do_dentry_open+0x543/0x8f0 fs/open.c:824
> > >  vfs_open+0x47/0x50 fs/open.c:958
> > >  do_open fs/namei.c:3476 [inline]
> > >  path_openat+0x1906/0x1dc0 fs/namei.c:3609
> > >  do_filp_open+0xef/0x200 fs/namei.c:3636
> > >  do_sys_openat2+0xa5/0x2a0 fs/open.c:1213
> > >  do_sys_open fs/open.c:1229 [inline]
> > >  __do_sys_openat fs/open.c:1245 [inline]
> > >  __se_sys_openat fs/open.c:1240 [inline]
> > >  __x64_sys_openat+0xf0/0x120 fs/open.c:1240
> > >  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > >  do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
> > >  entry_SYSCALL_64_after_hwframe+0x44/0xae
> > >
> > > Reported by Kernel Concurrency Sanitizer on:
> > > CPU: 1 PID: 6542 Comm: syz-executor2-n Not tainted 5.18.0-rc5+ #107
> > > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
> > >
> > > Reproducing Inputs
> > >
> > > Input CPU 0:
> > > r0 = openat$sndseq(0xffffffffffffff9c, &(0x7f0000000040)='/dev/snd/seq\x00', 0x0)
> > > ioctl$SNDRV_SEQ_IOCTL_CREATE_PORT(r0, 0xc0a85320, &(0x7f0000000240)={{0x80}, 'port1\x00', 0x10})
> > > ioctl$SNDRV_SEQ_IOCTL_SET_CLIENT_POOL(r0, 0x40a85321, &(0x7f0000000100)={0x80})
> > >
> > > Input CPU 1:
> > > r0 = openat$sequencer2(0xffffff9c, &(0x7f0000000000)='/dev/sequencer2\x00', 0x0, 0x0)
> > > ioctl$SNDCTL_SYNTH_INFO(r0, 0xc08c5102, &(0x7f0000000200)={"02961a3ce6d4828f8b5559726313251b55fa11d8d65406f1f33c9af8e3f8", 0xffffffff})
> > >
> > >
> 
> -- 
> Gabriel Ryan
> PhD Candidate at Columbia University
> 
