Message-ID: <855d58ca-c458-9749-2086-eea7d38d4dea@gmail.com>
Date:   Wed, 20 Mar 2019 21:49:34 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...com>,
        Networking <netdev@...r.kernel.org>,
        Eric Dumazet <edumazet@...gle.com>
Subject: Re: [BUG] BPF splat on latest kernels



On 03/08/2019 04:29 PM, Alexei Starovoitov wrote:
> On Fri, Mar 8, 2019 at 12:33 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>>
>> Running test_progs on a LOCKDEP-enabled kernel (latest David Miller net tree),
>> I got the following splat.
>>
>> It is not immediately obvious to me. Any ideas?
> 
> I think I saw something similar running test_maps,
> but I had other changes in the tree, and later I was
> never able to reproduce it, so I figured it must have
> been my local change. Looks like it was for real.
> Do you have a reproducer?
> I'll take a look on Monday.

Hi Alexei,

Has this problem been root-caused?

Thanks!
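
For reference, "LOCKDEP enabled" above boils down to building with lockdep's
dependency proving turned on and running the BPF selftests. A rough sketch of
that setup (standard kernel-tree paths assumed):

  # .config fragment: CONFIG_PROVE_LOCKING is what produces the
  # "possible circular locking dependency" reports (it selects LOCKDEP)
  CONFIG_DEBUG_KERNEL=y
  CONFIG_PROVE_LOCKING=y

  # then build and run the BPF selftests
  cd tools/testing/selftests/bpf
  make
  sudo ./test_progs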

> 
>> [ 4169.908826] ======================================================
>> [ 4169.914996] WARNING: possible circular locking dependency detected
>> [ 4169.921172] 5.0.0-dbg-DEV #597 Not tainted
>> [ 4169.925294] ------------------------------------------------------
>> [ 4169.931482] swapper/29/0 is trying to acquire lock:
>> [ 4169.936379] 00000000541e3f72 (&htab->buckets[i].lock){....}, at: htab_map_update_elem+0x1b6/0x3a0
>> [ 4169.945262]
>>                but task is already holding lock:
>> [ 4169.951084] 00000000e213eabd (&rq->lock){-.-.}, at: __schedule+0xd6/0xb80
>> [ 4169.957904]
>>                which lock already depends on the new lock.
>>
>> [ 4169.966101]
>>                the existing dependency chain (in reverse order) is:
>> [ 4169.973605]
>>                -> #4 (&rq->lock){-.-.}:
>> [ 4169.978651]        _raw_spin_lock+0x2f/0x40
>> [ 4169.982866]        task_fork_fair+0x37/0x160
>> [ 4169.987150]        sched_fork+0x206/0x310
>> [ 4169.991167]        copy_process.part.41+0x7eb/0x2370
>> [ 4169.996128]        _do_fork+0xda/0x6a0
>> [ 4169.999872]        kernel_thread+0x29/0x30
>> [ 4170.003962]        rest_init+0x22/0x260
>> [ 4170.007820]        arch_call_rest_init+0xe/0x10
>> [ 4170.012382]        start_kernel+0x4d8/0x4f9
>> [ 4170.016557]        x86_64_start_reservations+0x24/0x26
>> [ 4170.021697]        x86_64_start_kernel+0x6f/0x72
>> [ 4170.026312]        secondary_startup_64+0xa4/0xb0
>> [ 4170.031039]
>>                -> #3 (&p->pi_lock){-.-.}:
>> [ 4170.036265]        _raw_spin_lock_irqsave+0x3a/0x50
>> [ 4170.041172]        try_to_wake_up+0x41/0x610
>> [ 4170.045466]        wake_up_process+0x15/0x20
>> [ 4170.049745]        create_worker+0x16b/0x1e0
>> [ 4170.054039]        workqueue_init+0x1ff/0x31d
>> [ 4170.058397]        kernel_init_freeable+0x116/0x2a5
>> [ 4170.063291]        kernel_init+0xf/0x180
>> [ 4170.067212]        ret_from_fork+0x24/0x30
>> [ 4170.071307]
>>                -> #2 (&(&pool->lock)->rlock){-.-.}:
>> [ 4170.077425]        _raw_spin_lock+0x2f/0x40
>> [ 4170.081602]        __queue_work+0xb2/0x540
>> [ 4170.085709]        queue_work_on+0x38/0x80
>> [ 4170.089827]        free_percpu+0x231/0x270
>> [ 4170.093933]        array_map_free+0x47/0x80
>> [ 4170.098131]        bpf_map_free_deferred+0x3c/0x50
>> [ 4170.102926]        process_one_work+0x1f4/0x590
>> [ 4170.107466]        worker_thread+0x6f/0x430
>> [ 4170.111654]        kthread+0x132/0x170
>> [ 4170.115431]        ret_from_fork+0x24/0x30
>> [ 4170.119520]
>>                -> #1 (pcpu_lock){..-.}:
>> [ 4170.124562]        _raw_spin_lock_irqsave+0x3a/0x50
>> [ 4170.129448]        pcpu_alloc+0xfa/0x790
>> [ 4170.133383]        __alloc_percpu_gfp+0x12/0x20
>> [ 4170.137905]        alloc_htab_elem+0x182/0x2b0
>> [ 4170.142351]        __htab_percpu_map_update_elem+0x270/0x2c0
>> [ 4170.148036]        bpf_percpu_hash_update+0x7c/0x130
>> [ 4170.153035]        __do_sys_bpf+0x194a/0x1c10
>> [ 4170.157385]        __x64_sys_bpf+0x1a/0x20
>> [ 4170.161485]        do_syscall_64+0x5a/0x460
>> [ 4170.165674]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
>> [ 4170.171255]
>>                -> #0 (&htab->buckets[i].lock){....}:
>> [ 4170.177433]        lock_acquire+0xa7/0x190
>> [ 4170.181549]        _raw_spin_lock_irqsave+0x3a/0x50
>> [ 4170.186448]        htab_map_update_elem+0x1b6/0x3a0
>> [ 4170.191347]
>>                other info that might help us debug this:
>>
>> [ 4170.199345] Chain exists of:
>>                  &htab->buckets[i].lock --> &p->pi_lock --> &rq->lock
>>
>> [ 4170.209867]  Possible unsafe locking scenario:
>>
>> [ 4170.215776]        CPU0                    CPU1
>> [ 4170.220301]        ----                    ----
>> [ 4170.224823]   lock(&rq->lock);
>> [ 4170.227882]                                lock(&p->pi_lock);
>> [ 4170.233646]                                lock(&rq->lock);
>> [ 4170.239208]   lock(&htab->buckets[i].lock);
>> [ 4170.243384]
>>                 *** DEADLOCK ***
>>
>> [ 4170.249297] 2 locks held by swapper/29/0:
>> [ 4170.253307]  #0: 00000000e213eabd (&rq->lock){-.-.}, at: __schedule+0xd6/0xb80
>> [ 4170.260558]  #1: 000000006891c367 (rcu_read_lock){....}, at: trace_call_bpf+0x40/0x260
>> [ 4170.268476]
>>                stack backtrace:
>> [ 4170.272838] CPU: 29 PID: 0 Comm: swapper/29 Not tainted 5.0.0-dbg-DEV #597
>> [ 4170.279708] Hardware name: Intel RML,PCH/Iota_QC_19, BIOS 2.54.0 06/07/2018
>> [ 4170.286658] Call Trace:
>> [ 4170.289108]  dump_stack+0x67/0x95
>> [ 4170.292424]  print_circular_bug.isra.36+0x1fd/0x20b
>> [ 4170.297302]  __lock_acquire+0x15b3/0x1850
>> [ 4170.301308]  ? __pcpu_freelist_pop+0x87/0xb0
>> [ 4170.305579]  lock_acquire+0xa7/0x190
>> [ 4170.309157]  ? lock_acquire+0xa7/0x190
>> [ 4170.312900]  ? htab_map_update_elem+0x1b6/0x3a0
>> [ 4170.317434]  _raw_spin_lock_irqsave+0x3a/0x50
>> [ 4170.321791]  ? htab_map_update_elem+0x1b6/0x3a0
>> [ 4170.326324]  htab_map_update_elem+0x1b6/0x3a0
>> [ 4170.330684]  ? trace_call_bpf+0x100/0x260
>> [ 4170.334695]  ? perf_trace_run_bpf_submit+0x42/0xb0
>> [ 4170.339487]  ? perf_trace_sched_switch+0x129/0x1a0
>> [ 4170.344279]  ? __schedule+0x44d/0xb80
>> [ 4170.347943]  ? lockdep_hardirqs_on+0x14d/0x1d0
>> [ 4170.352388]  ? trace_hardirqs_on+0x3f/0xf0
>> [ 4170.356487]  ? schedule_idle+0x2c/0x40
>> [ 4170.360237]  ? do_idle+0x184/0x280
>> [ 4170.363634]  ? cpu_startup_entry+0x1d/0x20
>> [ 4170.367735]  ? start_secondary+0x184/0x1d0
>> [ 4170.371830]  ? secondary_startup_64+0xa4/0xb0
>>
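
The acquisition at the top of the chain (#0 above) is a BPF tracepoint program
running while __schedule() holds rq->lock: htab_map_update_elem() takes
htab->buckets[i].lock with rq->lock already held. The opposite ordering,
bucket lock -> pcpu_lock -> workqueue pool lock -> pi_lock -> rq->lock, was
recorded earlier via the percpu map update/free paths and the workqueue/fork
paths (chains #1-#4), which closes the cycle. A minimal sketch of the kind of
program that hits the #0 path (not the actual test_progs program; map and
section names are only illustrative):

/* sched_switch_count.bpf.c -- illustrative only.
 *
 * A tracepoint program on sched_switch that updates a hash map: the
 * bpf_map_update_elem() below lands in htab_map_update_elem() and takes
 * the bucket spinlock while the scheduler still holds rq->lock.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u32);
	__type(value, __u64);
} switch_counts SEC(".maps");

SEC("tracepoint/sched/sched_switch")
int count_sched_switch(void *ctx)
{
	__u32 key = bpf_get_smp_processor_id();
	__u64 init = 1, *val;

	val = bpf_map_lookup_elem(&switch_counts, &key);
	if (val)
		__sync_fetch_and_add(val, 1);
	else
		/* New entry: this is the htab_map_update_elem() call seen in
		 * the splat, executed with rq->lock held. */
		bpf_map_update_elem(&switch_counts, &key, &init, BPF_ANY);

	return 0;
}

char _license[] SEC("license") = "GPL";

The percpu half of the cycle is not in the sketch: chain #1 is a percpu hash
map update on the syscall path allocating per-cpu memory under the bucket
lock, and chain #2 is free_percpu(), holding pcpu_lock, queueing balance work
and thereby reaching the workqueue and scheduler locks. Those paths only
establish the pre-existing ordering that lockdep then sees inverted here.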
