Date:   Fri, 24 May 2019 14:03:23 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Roman Gushchin <guro@...com>
Cc:     Alexei Starovoitov <ast@...nel.org>, bpf@...r.kernel.org,
        Daniel Borkmann <daniel@...earbox.net>, netdev@...r.kernel.org,
        Tejun Heo <tj@...nel.org>, kernel-team@...com,
        cgroups@...r.kernel.org, Stanislav Fomichev <sdf@...ichev.me>,
        Yonghong Song <yhs@...com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 bpf-next 0/4] cgroup bpf auto-detachment

On Thu, May 23, 2019 at 12:45:28PM -0700, Roman Gushchin wrote:
> This patchset implements a cgroup bpf auto-detachment functionality:
> bpf programs are detached as soon as possible after removal of the
> cgroup, without waiting for the release of all associated resources.

The idea looks great, but doesn't quite work:

$ ./test_cgroup_attach
#override:PASS
[   66.475219] BUG: sleeping function called from invalid context at ../include/linux/percpu-rwsem.h:34
[   66.476095] in_atomic(): 1, irqs_disabled(): 0, pid: 21, name: ksoftirqd/2
[   66.476706] CPU: 2 PID: 21 Comm: ksoftirqd/2 Not tainted 5.2.0-rc1-00211-g1861420d0162 #1564
[   66.477595] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.el7 04/01/2014
[   66.478360] Call Trace:
[   66.478591]  dump_stack+0x5b/0x8b
[   66.478892]  ___might_sleep+0x22f/0x290
[   66.479230]  cpus_read_lock+0x18/0x50
[   66.479550]  static_key_slow_dec+0x41/0x70
[   66.479914]  cgroup_bpf_release+0x1a6/0x400
[   66.480285]  percpu_ref_switch_to_atomic_rcu+0x203/0x330
[   66.480754]  rcu_core+0x475/0xcc0
[   66.481047]  ? switch_mm_irqs_off+0x684/0xa40
[   66.481422]  ? rcu_note_context_switch+0x260/0x260
[   66.481842]  __do_softirq+0x1cf/0x5ff
[   66.482174]  ? takeover_tasklets+0x5f0/0x5f0
[   66.482542]  ? smpboot_thread_fn+0xab/0x780
[   66.482911]  run_ksoftirqd+0x1a/0x40
[   66.483225]  smpboot_thread_fn+0x3ad/0x780
[   66.483583]  ? sort_range+0x20/0x20
[   66.483894]  ? __kthread_parkme+0xb0/0x190
[   66.484253]  ? sort_range+0x20/0x20
[   66.484562]  ? sort_range+0x20/0x20
[   66.484878]  kthread+0x2e2/0x3e0
[   66.485166]  ? kthread_create_worker_on_cpu+0xb0/0xb0
[   66.485620]  ret_from_fork+0x1f/0x30

Same test runs fine before the patches.
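The trace explains the failure mode: percpu_ref_switch_to_atomic_rcu() invokes the ref's release callback (here cgroup_bpf_release) from RCU/softirq context, but static_key_slow_dec() takes cpus_read_lock() and may sleep. One conventional way out, sketched below purely for illustration (the release_work field and the cgroup_bpf_release_fn name are assumptions, not something this thread confirms), is to have the percpu_ref release callback only schedule a work item and do all sleepable teardown from process context:

```c
/* Sketch only: assumes a new release_work member in struct cgroup_bpf.
 * Names are illustrative; this is not code from the patchset under review.
 */
static void cgroup_bpf_release(struct work_struct *work)
{
	struct cgroup *cgrp = container_of(work, struct cgroup,
					   bpf.release_work);

	/* Process context here: safe to take cpus_read_lock(), call
	 * static_key_slow_dec(), and free the effective prog arrays. */

	/* ... detach programs and release resources (sleepable) ... */

	percpu_ref_exit(&cgrp->bpf.refcnt);
	cgroup_put(cgrp);
}

static void cgroup_bpf_release_fn(struct percpu_ref *ref)
{
	struct cgroup *cgrp = container_of(ref, struct cgroup, bpf.refcnt);

	/* Runs from RCU/softirq context: must not sleep, so just punt
	 * the sleepable teardown to a workqueue. */
	INIT_WORK(&cgrp->bpf.release_work, cgroup_bpf_release);
	queue_work(system_wq, &cgrp->bpf.release_work);
}
```

With a split like this, the percpu_ref callback itself never sleeps, so the "sleeping function called from invalid context" splat from ksoftirqd goes away while detachment still happens promptly after cgroup removal.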
