Date:   Thu, 14 Apr 2022 10:51:18 -0700
From:   Tadeusz Struk <tadeusz.struk@...aro.org>
To:     Michal Koutný <mkoutny@...e.com>
Cc:     cgroups@...r.kernel.org, Tejun Heo <tj@...nel.org>,
        Zefan Li <lizefan.x@...edance.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Christian Brauner <brauner@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>, netdev@...r.kernel.org,
        bpf@...r.kernel.org, stable@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        syzbot+e42ae441c3b10acf9e9d@...kaller.appspotmail.com
Subject: Re: [PATCH] cgroup: don't queue css_release_work if one already
 pending

Hi Michal,
Thanks for your analysis.

On 4/14/22 09:44, Michal Koutný wrote:
> Hello Tadeusz.
> 
> Thanks for analyzing this syzbot report. Let me provide my understanding
> of the test case and an explanation of why I think your patch fixes it but
> is not fully correct.
> 
> On Tue, Apr 12, 2022 at 12:24:59PM -0700, Tadeusz Struk <tadeusz.struk@...aro.org> wrote:
>> Syzbot found a corrupted list bug scenario that can be triggered from
>> cgroup css_create(). The reproducer writes to the cgroup.subtree_control
>> file, which invokes cgroup_apply_control_enable(), css_create(), and
>> css_populate_dir(), which then randomly fails with a fault-injected -ENOMEM.
> 
> The reproducer code makes it hard for me to understand which function
> fails with ENOMEM.
> But I can see your patch fixes the reproducer and your additional debug
> patch which proves that css->destroy_work is re-queued.

Yes, it is hard to see the actual failing point because, I think, it is randomly
failing in different places. The failure that actually causes the list corruption
is, I think, the one in css_create().
It is the css_create() error path that does the first rcu enqueue in:

https://elixir.bootlin.com/linux/v5.10.109/source/kernel/cgroup/cgroup.c#L5228

and the second enqueue is triggered by the last css->refcnt put, which calls css_release().

The reason we don't see it actually failing in css_create() in the trace
dump is that fail_dump() is rate-limited, see:
https://elixir.bootlin.com/linux/v5.18-rc2/source/lib/fault-inject.c#L44
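(Roughly, quoting from memory from the linked file, so the details may differ
slightly: the dump is only emitted when verbose is set and the ratelimit allows it,

	static void fail_dump(struct fault_attr *attr)
	{
		if (attr->verbose > 0 && __ratelimit(&attr->ratelimit_state)) {
			/* "FAULT_INJECTION: forcing a failure" printk, stack dump, ... */
		}
	}

so later injected failures simply never show up in the log.)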

I was confused as well, so I put additional debug prints in every place
where css_create() can fail, and in my case it was actually the
cgroup_idr_alloc() call in css_create() that failed.
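The prints themselves were nothing fancier than something like this right after
the cgroup_idr_alloc() call in css_create() (an illustration of the kind of
print I added, not the exact debug patch):

	if (err < 0)
		pr_info("%s: cgroup_idr_alloc() failed: %d\n", __func__, err);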

What happened was, the write triggered:
cgroup_subtree_control_write()->cgroup_apply_control()->cgroup_apply_control_enable()->css_create()

which allocates and initializes the css, then fails in cgroup_idr_alloc(),
bails out, and calls queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);

then cgroup_subtree_control_write() bails out to its out_unlock: label, which then goes:

cgroup_kn_unlock()->cgroup_put()->css_put()->percpu_ref_put(&css->refcnt)->percpu_ref_put_many(ref)

which then calls ref->data->release(ref), i.e. css_release(), and enqueues the same
&css->destroy_rwork on cgroup_destroy_wq, causing the list corruption in insert_work().
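To spell out the two enqueue sites of the same &css->destroy_rwork
(paraphrasing the v5.10 code linked above, so the exact context may differ
slightly):

1) the css_create() error path, which open codes the free without going
   through css_release():

	INIT_RCU_WORK(&css->destroy_rwork, css_free_rwork_fn);
	queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);
	return ERR_PTR(err);

2) the refcnt side, reached later from cgroup_kn_unlock()->css_put():

	css_release()
	  -> queue_work(cgroup_destroy_wq, &css->destroy_work)   /* css_release_work_fn */
	    -> css_release_work_fn()
	      -> queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork)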

>> In such a scenario the css_create() error path rcu enqueues the css_free_rwork_fn
>> work for a css->refcnt initialized with the css_release() destructor,
> 
> Note that css_free_rwork_fn() utilizes css->destroy_*r*work.
> The error path in css_create() open codes relevant parts of
> css_release_work_fn() so that css_release() can be skipped and the
> refcnt is eventually just percpu_ref_exit()'d.
> 
>> and there is a chance that the css_release() function will be invoked
>> for a cgroup_subsys_state, for which a destroy_work has already been
>> queued via css_create() error path.
> 
> But I think the problem is css_populate_dir() failing in
> cgroup_apply_control_enable(). (Is this what you actually meant?
> css_create() error path is then irrelevant, no?)

I thought so too at first, as the crash dump shows that it is failing
in css_populate_dir(), but this is not the failure that causes the list corruption.
The code can recover from the failure in css_populate_dir().
The failure that causes trouble is the one in css_create(), which makes it go to its error path.
I can dig out the patch with my debug prints and request syzbot to run it
if you want.

> 
> The already created csses should then be rolled back via
> 	cgroup_restore_control(cgrp);
> 	cgroup_apply_control_disable(cgrp);
> 	   ...
> 	   kill_css(css)
> 
> I suspect the double-queuing is a result of the fact that there exists
> only the single reference to the css->refcnt. I.e. it's
> percpu_ref_kill_and_confirm()'d and released both at the same time.
> 
> (Normally (when not killing the last reference), css->destroy_work reuse
> is not a problem because of the sequenced chain
> css_killed_work_fn()->css_put()->css_release().)
> 
>> This can be avoided by adding a check to css_release() that checks
>> if it has already been enqueued.
> 
> If that's what's happening, then your patch omits the final
> css_release_work_fn() in favor of css_killed_work_fn() but both should
> be run during the rollback upon css_populate_dir() failure.

This change only prevents the double enqueue of:

queue_[rcu]_work(cgroup_destroy_wq, &css->destroy_rwork);

I don't see how it affects the css_killed_work_fn() cleanup path.
I didn't look at it, since I thought it was irrelevant in this case.
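The idea is just to guard the enqueue in css_release(), roughly (a simplified
sketch of the idea, not the actual diff):

	/* bail out if the destroy work was already queued, e.g. by the
	 * css_create() error path */
	if (work_pending(&css->destroy_rwork.work))
		return;

	INIT_WORK(&css->destroy_work, css_release_work_fn);
	queue_work(cgroup_destroy_wq, &css->destroy_work);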

-- 
Thanks,
Tadeusz
