Message-ID: <20180724052929.GI30522@ZenIV.linux.org.uk>
Date:   Tue, 24 Jul 2018 06:29:29 +0100
From:   Al Viro <viro@...IV.linux.org.uk>
To:     "Dae R. Jeong" <threeearcat@...il.com>
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        byoungyoung@...due.edu, kt0755@...il.com, bammanag@...due.edu
Subject: Re: KASAN: use-after-free Read in link_path_walk

On Tue, Jul 24, 2018 at 06:17:26AM +0100, Al Viro wrote:
> On Tue, Jul 24, 2018 at 12:45:42PM +0900, Dae R. Jeong wrote:
> > Diagnosis:
> > We think that it is possible that link_path_walk() dereferences a
> > freed pointer when cleanup_mnt() is executed between path_init() and
> > link_path_walk().
> > 
> > Since I'm not an expert on file systems and don't fully understand
> > the crash, please see the executed program and the crash log below in
> > case my understanding is wrong.
> > 
> > 
> > Executed Program:
> > Thread0                     Thread1
> > mkdir("./file0")
> >      |--------------------------|
> >      |                      mount("./file0", "./file0", "devpts", 0x0, "")
> >      |                          |
> > openat(AT_FDCWD,            chroot("./file0")
> > "/dev/vcs", 0x200, 0x0)     umount("./file0", 0x2)
> > 
> > The openat(), chroot(), and umount() syscalls are executed after the mount() syscall.
> > We think a race occurs between openat() and chroot() because RaceFuzzer
> > executed openat() and chroot() concurrently.
> > 
> > 
> > (Possible) Thread interleaving:
> > CPU0 (path_openat)                      CPU1 (cleanup_mnt)

Wait a bloody minute.  Where does cleanup_mnt() come from in that thing?
You are doing lazy-umount of the thing you've chrooted into; if it ends
up with zero refcount on that mount, we are already in deep, deep trouble,
races with open() or not.  Simply following that with stat / (in thread 1,
without thread0 at all) would end up accessing the same vfsmount.  And
if it's been freed, we are well and truly fucked, race or no race.
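
A minimal sketch of that single-threaded sequence (paths and the devpts
mount are taken from the quoted report, the MNT_DETACH spelling of the 0x2
flag is an assumption, error handling omitted):

#define _GNU_SOURCE
#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;

	mkdir("./file0", 0777);
	mount("./file0", "./file0", "devpts", 0, "");
	chroot("./file0");
	umount2("./file0", MNT_DETACH);	/* the report's 0x2, i.e. lazy umount */
	stat("/", &st);			/* "/" is now the root of the detached
					   mount, so the walk goes through that
					   same vfsmount */
	return 0;
}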

I really want details.  *Is* cleanup_mnt() called by thread 1 in your
reproducer before the use-after-free hits?  And what's the root of
thread 0 at that point?
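
For reference, a minimal C reconstruction of the two-thread program quoted
above (a sketch, not RaceFuzzer's actual reproducer: the pthread_barrier and
the MNT_DETACH spelling of the 0x2 umount flag are assumptions, and plain
threads are unlikely to hit the narrow window the fuzzer forces):

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>

static pthread_barrier_t start;		/* lines the racing calls up */

/* Thread 0: the openat() that races with chroot()/umount() in thread 1. */
static void *thread0(void *arg)
{
	(void)arg;
	pthread_barrier_wait(&start);
	openat(AT_FDCWD, "/dev/vcs", 0x200, 0x0);	/* flags as in the report */
	return NULL;
}

/* Thread 1: chroot into the devpts mount, then lazily unmount it. */
static void *thread1(void *arg)
{
	(void)arg;
	pthread_barrier_wait(&start);
	chroot("./file0");
	umount2("./file0", MNT_DETACH);		/* the report's 0x2 */
	return NULL;
}

int main(void)		/* build with -pthread; mounting needs privileges */
{
	pthread_t t0, t1;

	mkdir("./file0", 0777);
	mount("./file0", "./file0", "devpts", 0, "");

	pthread_barrier_init(&start, NULL, 2);
	pthread_create(&t0, NULL, thread0, NULL);
	pthread_create(&t1, NULL, thread1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}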
