Message-ID: <b48a4e5f-a9b7-1aff-7f27-6b8fddc34da0@grsecurity.net>
Date:   Fri, 5 Nov 2021 15:55:35 +0100
From:   Mathias Krause <minipli@...ecurity.net>
To:     Michal Koutný <mkoutny@...e.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <valentin.schneider@....com>,
        linux-kernel@...r.kernel.org, Odin Ugedal <odin@...d.al>,
        Kevin Tanguy <kevin.tanguy@...p.ovh.com>,
        Brad Spengler <spender@...ecurity.net>
Subject: Re: [PATCH] sched/fair: Prevent dead task groups from regaining
 cfs_rq's

On 04.11.21 at 19:49, Michal Koutný wrote:
> On Wed, Nov 03, 2021 at 08:06:13PM +0100, Mathias Krause <minipli@...ecurity.net> wrote:
>> When unregister_fair_sched_group() unlinks all cfs_rq's from the dying
>> task group, it doesn't protect itself from getting interrupted. If the
>> timer interrupt triggers while we iterate over all CPUs or after
>> unregister_fair_sched_group() has finished but prior to unlinking the
>> task group, sched_cfs_period_timer() will execute and walk the list of
>> task groups, trying to unthrottle cfs_rq's, i.e. re-add them to the
>> dying task group. These will later -- in free_fair_sched_group() -- be
>> kfree()'ed while still being linked, leading to the fireworks Kevin and
>> Michal are seeing.
> 
> [...]
>  
>>     CPU1:                                      CPU2:
>>       :                                        timer IRQ:
>>       :                                          do_sched_cfs_period_timer():
>>       :                                            :
>>       :                                            distribute_cfs_runtime():
>>       :                                              rcu_read_lock();
>>       :                                              :
>>       :                                              unthrottle_cfs_rq():
>>     sched_offline_group():                             :
>>       :                                                walk_tg_tree_from(…,tg_unthrottle_up,…):
>>       list_del_rcu(&tg->list);                           :
>>  (1)  :                                                  list_for_each_entry_rcu(child, &parent->children, siblings)
>>       :                                                    :
>>  (2)  list_del_rcu(&tg->siblings);                         :
>>       :                                                    tg_unthrottle_up():
>>       unregister_fair_sched_group():                         struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>>         :                                                    :
>>         list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);               :
>>         :                                                    :
>>         :                                                    if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
>>  (3)    :                                                        list_add_leaf_cfs_rq(cfs_rq);
>>       :                                                      :
>>       :                                                    :
>>       :                                                  :
>>       :                                                :
>>       :                                              :
>>  (4)  :                                              rcu_read_unlock();
> 
> The list traversal (1) may happen in some scenarios (quota set on a
> non-leaf task_group), but in the presented reproducer the quota is set
> on a leaf task_group. That means it has no children and this list
> iteration is irrelevant.
> The cause is that walk_tg_tree_from() includes the `from` task_group
> and calls tg_unthrottle_up() on it too.
> What I mean is that the unlinking of tg->list and tg->siblings is
> irrelevant in this case.

Interesting.
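
Right -- if I read walk_tg_tree_from() correctly, the up-callback runs
on the starting group itself once its (possibly empty) children list
has been walked. Roughly (paraphrased from memory from
kernel/sched/core.c, so details may be off):

  int walk_tg_tree_from(struct task_group *from,
                        tg_visitor down, tg_visitor up, void *data)
  {
          struct task_group *parent = from, *child;
          int ret;

  down:
          ret = (*down)(parent, data);
          if (ret)
                  goto out;
          list_for_each_entry_rcu(child, &parent->children, siblings) {
                  parent = child;
                  goto down;
  up:
                  continue;
          }
          /* The up-callback is invoked for 'parent' itself, i.e. also
           * for 'from' when it has no children -- this is where
           * tg_unthrottle_up() runs on the dying task group. */
          ret = (*up)(parent, data);
          if (ret || parent == from)
                  goto out;

          child = parent;
          parent = parent->parent;
          goto up;
  out:
          return ret;
  }

So even with an empty children list the dying group itself gets
unthrottled, and the tg->list / tg->siblings unlinking indeed doesn't
help here.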

> The timer can still fire after
> sched_offline_group()/unregister_fair_sched_group() finished (i.e. after
> synchronize_rcu())

Yeah, I also noticed the timer gets disabled rather late, in
free_fair_sched_group() via destroy_cfs_bandwidth(). But as I saw no
more warnings from my debug patch, I was under the impression that
do_sched_cfs_period_timer() wouldn't see this task group any more.
Apparently, that's not true?
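
For reference, destroy_cfs_bandwidth() is essentially just cancelling
the two bandwidth hrtimers -- something like the following (sketch
from memory, so the details may not match the exact code):

  static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
  {
          /* Bandwidth was never used for this group, nothing to do. */
          if (!cfs_b->throttled_cfs_rq.next)
                  return;

          /* Once these return, neither do_sched_cfs_period_timer()
           * nor the slack timer can fire for this group any more. */
          hrtimer_cancel(&cfs_b->period_timer);
          hrtimer_cancel(&cfs_b->slack_timer);
  }

So only after this point is it truly safe to assume no new timer
callbacks.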

Anyhow, see below.

>> This patch survives Michal's reproducer[2] for 8h+ now, which used to
>> trigger within minutes before.
> 
> Note that the reproducer is sensitive to the sleep between the last
> task's exit and the cgroup rmdir. I assume that the added
> synchronize_rcu() before list_del_leaf_cfs_rq() shifted the list
> removal to after the last timer callback and prevented re-adding of
> the offlined task_group in unthrottle_cfs_rq().

As Vincent reported in the other thread, synchronize_rcu() is actually
problematic, as we're not allowed to block here. :( So I'd go for the
kfree_rcu() route and move unregister_fair_sched_group() to
free_fair_sched_group(), after disabling the timers.
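
Roughly what I have in mind -- just a sketch of the ordering, not even
compile-tested, and the final kfree()s would be deferred by a grace
period (the kfree_rcu() route) rather than issued directly as shown:

  void free_fair_sched_group(struct task_group *tg)
  {
          int i;

          /* 1) Cancel the bandwidth timers first, so that
           *    do_sched_cfs_period_timer() can no longer run for tg. */
          destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));

          /* 2) Only now unlink the per-CPU cfs_rq's; nothing can
           *    re-add them via tg_unthrottle_up() any more. */
          unregister_fair_sched_group(tg);

          /* 3) Free the per-CPU structures. In the real patch this
           *    would happen after an RCU grace period so concurrent
           *    readers like distribute_cfs_runtime() are done. */
          for_each_possible_cpu(i) {
                  if (tg->cfs_rq)
                          kfree(tg->cfs_rq[i]);
                  if (tg->se)
                          kfree(tg->se[i]);
          }
          kfree(tg->cfs_rq);
          kfree(tg->se);
  }

The important bit is the ordering: once destroy_cfs_bandwidth() has
returned, the later list_del_leaf_cfs_rq() calls can no longer race
with tg_unthrottle_up() re-adding the cfs_rq's.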

> (Of course, it'd be more convincing if I backed this theory with
> results from the reproducer with an increased interval, showing it
> crash again. I may get down to that later.)
> 
> Does your patch also fix the crashes in your real workload?

I haven't heard back from Kevin since. But he might just be busy.

Thanks,
Mathias
