Message-ID: <202106041112.D89F8B21@keescook>
Date:   Fri, 4 Jun 2021 11:19:54 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Jarmo Tiitto <jarmo.tiitto@...il.com>
Cc:     Sami Tolvanen <samitolvanen@...gle.com>,
        Bill Wendling <wcw@...gle.com>,
        Nathan Chancellor <nathan@...nel.org>,
        Nick Desaulniers <ndesaulniers@...gle.com>,
        clang-built-linux@...glegroups.com, linux-kernel@...r.kernel.org,
        morbo@...gle.com
Subject: Re: [PATCH v2 1/1] pgo: Fix sleep in atomic section in prf_open()

On Fri, Jun 04, 2021 at 01:15:43PM +0300, Jarmo Tiitto wrote:
> Kees Cook wrote on Friday, 4 June 2021 at 00:47:23 EEST:
> > On Thu, Jun 03, 2021 at 06:53:17PM +0300, Jarmo Tiitto wrote:
> > > In prf_open() the required buffer size can be so large that
> > > vzalloc() may sleep thus triggering bug:
> > > 
> > > ======
> > >  BUG: sleeping function called from invalid context at include/linux/sched/mm.h:201
> > >  in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 337, name: cat
> > >  CPU: 1 PID: 337 Comm: cat Not tainted 5.13.0-rc2-24-hack+ #154
> > >  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
> > >  Call Trace:
> > >   dump_stack+0xc7/0x134
> > >   ___might_sleep+0x177/0x190
> > >   __might_sleep+0x5a/0x90
> > >   kmem_cache_alloc_node_trace+0x6b/0x3a0
> > >   ? __get_vm_area_node+0xcd/0x1b0
> > >   ? dput+0x283/0x300
> > >   __get_vm_area_node+0xcd/0x1b0
> > >   __vmalloc_node_range+0x7b/0x420
> > >   ? prf_open+0x1da/0x580
> > >   ? prf_open+0x32/0x580
> > >   ? __llvm_profile_instrument_memop+0x36/0x50
> > >   vzalloc+0x54/0x60
> > >   ? prf_open+0x1da/0x580
> > >   prf_open+0x1da/0x580
> > >   full_proxy_open+0x211/0x370
> > >   ....
> > > ======
> > > 
> > > Since we can't vzalloc while holding pgo_lock,
> > > split the code into steps:
> > > * First get the buffer size via prf_buffer_size()
> > >   and release the lock.
> > > * Round up to the page size and allocate the buffer.
> > > * Finally re-acquire the pgo_lock and call prf_serialize().
> > >   prf_serialize() will now check whether the buffer is large enough
> > >   and return -EAGAIN if it is not.
> > > 
> > > New in this v2 patch:
> > > The -EAGAIN case was determined to be such a rare event that
> > > running the following in a loop:
> > > 
> > > $cat /sys/kernel/debug/pgo/vmlinux.profraw > vmlinux.profdata;
> > > 
> > > didn't trigger it, and I don't know whether it can ever occur at all.
> > 
> > Hm, I remain nervous that it'll pop up when we least expect it. But, I
> > went to go look at this, and I don't understand why we need a lock at
> > all for prf_buffer_size(). These appear to be entirely static in size.
> > 
> 
> I would think the reason for taking the pgo_lock around prf_buffer_size() is that
> __prf_get_value_size() walks linked lists that are modified by
> __llvm_profile_instrument_target() in instrument.c.

Ooooh. Thanks; I missed this!

Let's go with the loop I proposed so we don't have to involve userspace
in the EAGAIN failure, etc.
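
Something along these lines (untested sketch only -- the
prf_buffer_size()/prf_serialize() signatures, the assumption that pgo_lock
is an irq-disabling spinlock, and the private_data handling are for
illustration, not copied from the tree):

/*
 * Sketch of the allocate-and-retry loop: query the size and serialize
 * under pgo_lock, but do the (possibly sleeping) vzalloc() outside it.
 */
static int prf_open(struct inode *inode, struct file *file)
{
	unsigned long flags;
	size_t size;
	void *buf = NULL;
	int err;

	do {
		/* Query the required size under the lock... */
		spin_lock_irqsave(&pgo_lock, flags);
		size = prf_buffer_size();
		spin_unlock_irqrestore(&pgo_lock, flags);

		/* ...allocate outside of it, since vzalloc() may sleep... */
		vfree(buf);
		size = PAGE_ALIGN(size);
		buf = vzalloc(size);
		if (!buf)
			return -ENOMEM;

		/* ...then serialize, retrying if the data grew meanwhile. */
		spin_lock_irqsave(&pgo_lock, flags);
		err = prf_serialize(buf, size);
		spin_unlock_irqrestore(&pgo_lock, flags);
	} while (err == -EAGAIN);

	if (err) {
		vfree(buf);
		return err;
	}

	file->private_data = buf;
	return 0;
}

That way only the size query and the serialize step run with pgo_lock
held, and the -EAGAIN case just loops inside the kernel instead of
leaking out to the reader.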

-Kees

-- 
Kees Cook
