Date:   Tue, 26 Nov 2019 10:33:11 +0100
From:   Christian Borntraeger <borntraeger@...ibm.com>
To:     Michal Hocko <mhocko@...nel.org>, Roman Gushchin <guro@...com>
Cc:     linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        linux-kernel@...r.kernel.org, kernel-team@...com,
        stable@...r.kernel.org
Subject: Re: [PATCH] mm: memcg/slab: wait for !root kmem_cache refcnt killing
 on root kmem_cache destruction



On 26.11.19 10:29, Michal Hocko wrote:
> On Mon 25-11-19 10:54:53, Roman Gushchin wrote:
>> Christian reported a warning like the following, obtained while running some
>> KVM-related tests on s390:
>>
>> WARNING: CPU: 8 PID: 208 at lib/percpu-refcount.c:108 percpu_ref_exit+0x50/0x58
>> Modules linked in: kvm(-) xt_CHECKSUM xt_MASQUERADE bonding xt_tcpudp ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_na>
>> CPU: 8 PID: 208 Comm: kworker/8:1 Not tainted 5.2.0+ #66
>> Hardware name: IBM 2964 NC9 712 (LPAR)
>> Workqueue: events sysfs_slab_remove_workfn
>> Krnl PSW : 0704e00180000000 0000001529746850 (percpu_ref_exit+0x50/0x58)
>>            R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
>> Krnl GPRS: 00000000ffff8808 0000001529746740 000003f4e30e8e18 0036008100000000
>>            0000001f00000000 0035008100000000 0000001fb3573ab8 0000000000000000
>>            0000001fbdb6de00 0000000000000000 0000001529f01328 0000001fb3573b00
>>            0000001fbb27e000 0000001fbdb69300 000003e009263d00 000003e009263cd0
>> Krnl Code: 0000001529746842: f0a0000407fe        srp        4(11,%r0),2046,0
>>            0000001529746848: 47000700            bc         0,1792
>>           #000000152974684c: a7f40001            brc        15,152974684e
>>           >0000001529746850: a7f4fff2            brc        15,1529746834
>>            0000001529746854: 0707                bcr        0,%r7
>>            0000001529746856: 0707                bcr        0,%r7
>>            0000001529746858: eb8ff0580024        stmg       %r8,%r15,88(%r15)
>>            000000152974685e: a738ffff            lhi        %r3,-1
>> Call Trace:
>> ([<000003e009263d00>] 0x3e009263d00)
>>  [<00000015293252ea>] slab_kmem_cache_release+0x3a/0x70
>>  [<0000001529b04882>] kobject_put+0xaa/0xe8
>>  [<000000152918cf28>] process_one_work+0x1e8/0x428
>>  [<000000152918d1b0>] worker_thread+0x48/0x460
>>  [<00000015291942c6>] kthread+0x126/0x160
>>  [<0000001529b22344>] ret_from_fork+0x28/0x30
>>  [<0000001529b2234c>] kernel_thread_starter+0x0/0x10
>> Last Breaking-Event-Address:
>>  [<000000152974684c>] percpu_ref_exit+0x4c/0x58
>> ---[ end trace b035e7da5788eb09 ]---
>>
>> The problem occurs because kmem_cache_destroy() is called immediately
>> after the deletion of a memcg, so it races with the memcg kmem_cache
>> deactivation.
>>
>> flush_memcg_workqueue() at the beginning of kmem_cache_destroy()
>> is supposed to guarantee that all deactivation processes have finished,
>> but it fails to do so. It waits for an rcu grace period, after which all
>> child kmem_caches should be deactivated. During deactivation,
>> percpu_ref_kill() is called for the non-root kmem_cache refcounters,
>> but it takes yet another rcu grace period to finish the transition
>> to the atomic (dead) state.
>>
>> So in the rare case when not all child kmem_caches have been destroyed
>> by the time the root kmem_cache is about to be gone, we need
>> to wait for another rcu grace period before destroying the root
>> kmem_cache.
> 
> Could you explain how rare this really is, please? I still have to wrap
> my head around the overall logic here. It looks quite fragile to me, TBH.
> I am worried that it relies on implementation details of the PCP ref
> counters too much.

I can actually reproduce this very reliably by running:

# virsh destroy <lastguest>; rmmod kvm
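
The extra grace period described in the quoted patch description would sit
at the end of flush_memcg_workqueue(). The sketch below is a minimal
illustration of that idea, not the literal patch body (which is not quoted
in this message); it assumes the 5.4-era mm/slab_common.c context, i.e. the
memcg_kmem_wq_lock spinlock, the memcg_kmem_cache_wq workqueue, and the
memcg_params fields of struct kmem_cache.

/* Sketch only -- assumes 5.4-era mm/slab_common.c context. */
static void flush_memcg_workqueue(struct kmem_cache *s)
{
	/* Refuse new deactivation requests for this root cache. */
	spin_lock_irq(&memcg_kmem_wq_lock);
	s->memcg_params.dying = true;
	spin_unlock_irq(&memcg_kmem_wq_lock);

	/*
	 * First grace period: SLAB/SLUB deactivate child caches via
	 * call_rcu(), so wait until all registered callbacks have run.
	 */
	rcu_barrier();

	/* Wait for queued creation/deactivation work items to finish. */
	flush_workqueue(memcg_kmem_cache_wq);

	/*
	 * By now percpu_ref_kill() has been called on the child
	 * refcounters, but switching them to the atomic (dead) state
	 * takes one more rcu grace period. If any children are still
	 * around, wait again so the root cache is not freed under them.
	 */
	if (!list_empty(&s->memcg_params.children))
		rcu_barrier();
}

Checking the children list without slab_mutex should be safe at that point,
because the list can no longer grow once memcg_params.dying is set.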
