Open Source and information security mailing list archives
 
Date:   Tue, 7 Feb 2017 09:03:43 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     "Luis R. Rodriguez" <mcgrof@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Ingo Molnar <mingo@...nel.org>,
        Andy Lutomirski <luto@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Mateusz Guzik <mguzik@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: kmemleak splat on copy_process()

On Tue 07-02-17 02:37:02, Luis R. Rodriguez wrote:
> On Mon, Feb 06, 2017 at 10:47:41AM +0100, Michal Hocko wrote:
> > On Fri 03-02-17 13:06:04, Luis R. Rodriguez wrote:
> > > On next-20170125 running some kselftest not yet upstream I eventually
> > > get a kmemleak splat:
> > > 
> > > unreferenced object 0xffffa7b1034b4000 (size 16384):
> > >   comm "driver_data.sh", pid 6506, jiffies 4295068366 (age 1697.272s)
> > >   hex dump (first 32 bytes):
> > >     9d 6e ac 57 00 00 00 00 74 2d 64 72 69 76 65 72  .n.W....t-driver
> > >     5f 64 61 74 61 2e 62 69 6e 0a 00 00 00 00 00 00  _data.bin.......
> > >   backtrace:
> > >     [<ffffffff9005f7fa>] kmemleak_alloc+0x4a/0xa0
> > >     [<ffffffff8fbe7006>] __vmalloc_node_range+0x206/0x2a0
> > >     [<ffffffff8fa7f3e9>] copy_process.part.36+0x609/0x1cc0
> > >     [<ffffffff8fa80c77>] _do_fork+0xd7/0x390
> > >     [<ffffffff8fa80fd9>] SyS_clone+0x19/0x20
> > >     [<ffffffff8fa03b4b>] do_syscall_64+0x5b/0xc0
> > >     [<ffffffff9006b3af>] return_from_SYSCALL_64+0x0/0x6a
> > >     [<ffffffffffffffff>] 0xffffffffffffffff
> > > 
> > > As per gdb:
> > > 
> > > (gdb) l *(copy_process+0x609)
> > > 0xffffffff8107f3e9 is in copy_process (kernel/fork.c:204).
> > > warning: Source file is more recent than executable.
> > > 199             /*
> > > 200              * We can't call find_vm_area() in interrupt context, and
> > > 201              * free_thread_stack() can be called in interrupt context,
> > > 202              * so cache the vm_struct.
> > > 203              */
> > > 204             if (stack) {
> > > 205                     tsk->stack_vm_area = find_vm_area(stack);
> > > 206             }
> > > 207             return stack;
> > > 208     #else
> > 
> > Could you check the state of the above process (pid 6506)? Does it still
> > own its stack? 
> 
> Although I can reproduce the splat on kmemleak, getting it to trigger
> at a point I can stop the kernel and inspect the process seems rather hard.

Can you make the kernel BUG_ON in this case and check the vmcore?

> > From a quick check I do not see any leak there either.
> 
> Then in that case what about:

This just disables kmemleak for the stack object altogether, which
doesn't sound like a good idea to me.
 
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 937ba59709c9..3c96aafa1f82 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -196,6 +196,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
>  				     PAGE_KERNEL,
>  				     0, node, __builtin_return_address(0));
>  
> +	kmemleak_ignore(stack);
>  	/*
>  	 * We can't call find_vm_area() in interrupt context, and
>  	 * free_thread_stack() can be called in interrupt context,
> 
> I no longer get the spurious splats from kmemleak after this.
> 
>   Luis
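
If we end up annotating rather than fixing a real reference-tracking
problem, a less heavy-handed option than kmemleak_ignore() might be
kmemleak_not_leak(), which only suppresses the report for the object but
still lets kmemleak scan the stack for pointers to other allocations.
Something like (untested):

	stack = __vmalloc_node_range(THREAD_SIZE, THREAD_SIZE,
				     VMALLOC_START, VMALLOC_END,
				     THREADINFO_GFP,
				     PAGE_KERNEL,
				     0, node, __builtin_return_address(0));

	/* false positive: the stack is reachable via tsk->stack */
	kmemleak_not_leak(stack);

That way we keep scanning the stack's contents instead of hiding it from
kmemleak entirely.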

-- 
Michal Hocko
SUSE Labs
