Message-ID: <20211104184939.GA23576@blackbody.suse.cz>
Date:   Thu, 4 Nov 2021 19:49:39 +0100
From:   Michal Koutný <mkoutny@...e.com>
To:     Mathias Krause <minipli@...ecurity.net>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <valentin.schneider@....com>,
        linux-kernel@...r.kernel.org, Odin Ugedal <odin@...d.al>,
        Kevin Tanguy <kevin.tanguy@...p.ovh.com>,
        Brad Spengler <spender@...ecurity.net>
Subject: Re: [PATCH] sched/fair: Prevent dead task groups from regaining
 cfs_rq's

Hi.

On Wed, Nov 03, 2021 at 08:06:13PM +0100, Mathias Krause <minipli@...ecurity.net> wrote:
> When unregister_fair_sched_group() unlinks all cfs_rq's from the dying
> task group, it doesn't protect itself from getting interrupted. If the
> timer interrupt triggers while we iterate over all CPUs or after
> unregister_fair_sched_group() has finished but prior to unlinking the
> task group, sched_cfs_period_timer() will execute and walk the list of
> task groups, trying to unthrottle cfs_rq's, i.e. re-add them to the
> dying task group. These will later -- in free_fair_sched_group() -- be
> kfree()'ed while still being linked, leading to the fireworks Kevin and
> Michal are seeing.

[...]
 
>     CPU1:                                      CPU2:
>       :                                        timer IRQ:
>       :                                          do_sched_cfs_period_timer():
>       :                                            :
>       :                                            distribute_cfs_runtime():
>       :                                              rcu_read_lock();
>       :                                              :
>       :                                              unthrottle_cfs_rq():
>     sched_offline_group():                             :
>       :                                                walk_tg_tree_from(…,tg_unthrottle_up,…):
>       list_del_rcu(&tg->list);                           :
>  (1)  :                                                  list_for_each_entry_rcu(child, &parent->children, siblings)
>       :                                                    :
>  (2)  list_del_rcu(&tg->siblings);                         :
>       :                                                    tg_unthrottle_up():
>       unregister_fair_sched_group():                         struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>         :                                                    :
>         list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);               :
>         :                                                    :
>         :                                                    if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
>  (3)    :                                                        list_add_leaf_cfs_rq(cfs_rq);
>       :                                                      :
>       :                                                    :
>       :                                                  :
>       :                                                :
>       :                                              :
>  (4)  :                                              rcu_read_unlock();

The list traversal (1) may happen in some scenarios (quota set on a
non-leaf task_group), but in the presented reproducer the quota is set
on a leaf task_group. That group has no children, so this list
iteration is irrelevant.
The actual cause is that walk_tg_tree_from() includes the `from`
task_group itself and calls tg_unthrottle_up() on it too.
That is, the unlinking of tg->list and tg->siblings is irrelevant in
this case.
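
For illustration, the relevant path looks roughly like this (simplified
from my reading of kernel/sched/fair.c, not a verbatim copy):

/* Simplified sketch, not verbatim kernel code. */
void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
{
	struct rq *rq = rq_of(cfs_rq);
	/* ... */
	/* The up callback also runs on cfs_rq->tg itself, not just on
	 * its descendants. */
	walk_tg_tree_from(cfs_rq->tg, tg_nop, tg_unthrottle_up, (void *)rq);
	/* ... */
}

static int tg_unthrottle_up(struct task_group *tg, void *data)
{
	struct rq *rq = data;
	struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

	/* Step (3) in the diagram above: re-link a cfs_rq that still
	 * carries load, racing with the unlinking on CPU1. */
	if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
		list_add_leaf_cfs_rq(cfs_rq);

	return 0;
}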

The timer can still fire after
sched_offline_group()/unregister_fair_sched_group() have finished (i.e.
after synchronize_rcu()).
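
For context, my understanding is that the period timer is only
cancelled in the free path, roughly as below (again simplified, so take
the exact structure as an assumption on my side):

/* Simplified sketch of the free path; not verbatim kernel code. */
void free_fair_sched_group(struct task_group *tg)
{
	int i;

	/* Only here are the period/slack hrtimers cancelled ... */
	destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));

	/* ... and only here are the cfs_rq's freed. A cfs_rq re-added
	 * by tg_unthrottle_up() is thus kfree()'d while still linked. */
	for_each_possible_cpu(i) {
		if (tg->cfs_rq)
			kfree(tg->cfs_rq[i]);
	}
	/* ... */
}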


> This patch survives Michal's reproducer[2] for 8h+ now, which used to
> trigger within minutes before.

Note that the reproducer is sensitive to the sleep between the last
task's exit and the cgroup rmdir. I assume that the added
synchronize_rcu() before list_del_leaf_cfs_rq() shifted the list
removal to after the last timer callback and prevented the re-adding of
the offlined task_group's cfs_rq in unthrottle_cfs_rq().
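
Schematically, the effect I have in mind (my paraphrase of the patch,
not the actual diff, and the exact placement of the grace period is my
assumption):

/* Paraphrase of the patched ordering, not the actual diff. */
void unregister_fair_sched_group(struct task_group *tg)
{
	int cpu;

	/* Wait for the last sched_cfs_period_timer() RCU reader to
	 * finish, so it cannot re-add a cfs_rq below. */
	synchronize_rcu();

	for_each_possible_cpu(cpu) {
		list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
		/* ... */
	}
}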

(Of course, it'd be more convincing if I backed this theory with
results from the reproducer with the interval increased so that it
crashes again. I may get to that later.)

Does your patch also fix the crashes in your real workload?

Michal
