Message-ID: <20190706061631.GV26519@linux.ibm.com>
Date:   Fri, 5 Jul 2019 23:16:31 -0700
From:   "Paul E. McKenney" <paulmck@...ux.ibm.com>
To:     "Theodore Ts'o" <tytso@....edu>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        syzbot <syzbot+4bfbbf28a2e50ab07368@...kaller.appspotmail.com>,
        Andreas Dilger <adilger.kernel@...ger.ca>,
        David Miller <davem@...emloft.net>, eladr@...lanox.com,
        Ido Schimmel <idosch@...lanox.com>,
        Jiri Pirko <jiri@...lanox.com>,
        John Stultz <john.stultz@...aro.org>,
        linux-ext4@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
        netdev <netdev@...r.kernel.org>,
        syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: INFO: rcu detected stall in ext4_write_checks

On Sat, Jul 06, 2019 at 12:28:01AM -0400, Theodore Ts'o wrote:
> On Fri, Jul 05, 2019 at 12:10:55PM -0700, Paul E. McKenney wrote:
> > 
> > Exactly, so although my patch might help for CONFIG_PREEMPT=n, it won't
> > help in your scenario.  But looking at the dmesg from your URL above,
> > I see the following:
> 
> I just tested with CONFIG_PREEMPT=n
> 
> % grep CONFIG_PREEMPT /build/ext4-64/.config
> CONFIG_PREEMPT_NONE=y
> # CONFIG_PREEMPT_VOLUNTARY is not set
> # CONFIG_PREEMPT is not set
> CONFIG_PREEMPT_COUNT=y
> CONFIG_PREEMPTIRQ_TRACEPOINTS=y
> # CONFIG_PREEMPTIRQ_EVENTS is not set
> 
> And with your patch, it's still not helping.
> 
> I think that's because SCHED_DEADLINE is a real-time style scheduler:
> 
>        In order to fulfill the guarantees that are made when a thread
>        is admitted to the SCHED_DEADLINE policy, SCHED_DEADLINE
>        threads are the highest priority (user controllable) threads
>        in the system; if any SCHED_DEADLINE thread is runnable, it
>        will preempt any thread scheduled under one of the other
>        policies.
> 
> So a SCHED_DEADLINE process is not going to yield control of the CPU,
> even if it calls cond_resched(), until the thread has run for more
> than the sched_runtime parameter --- which for the syzkaller repro
> was set at 26 days.
> 
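For concreteness, here is a minimal sketch of how a task can request
such parameters, following the interface documented in
sched_setattr(2).  This is not the actual syzkaller repro; the
deadline and period values below are picked purely for illustration.
There is no glibc wrapper for sched_setattr(), so the raw syscall is
used, and setting the policy requires CAP_SYS_NICE:

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/sched.h>            /* SCHED_DEADLINE */

    /* Layout per sched_setattr(2); glibc provides no wrapper. */
    struct sched_attr {
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;
            uint32_t sched_priority;
            uint64_t sched_runtime;     /* all three in nanoseconds */
            uint64_t sched_deadline;
            uint64_t sched_period;
    };

    int main(void)
    {
            const uint64_t day_ns = 24ULL * 3600 * 1000000000ULL;
            struct sched_attr attr = {
                    .size           = sizeof(attr),
                    .sched_policy   = SCHED_DEADLINE,
                    .sched_runtime  = 26 * day_ns,  /* the 26 days */
                    .sched_deadline = 52 * day_ns,  /* illustrative */
                    .sched_period   = 52 * day_ns,  /* illustrative */
            };

            if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
                    perror("sched_setattr");
                    return 1;
            }

            for (;;)
                    ;   /* CPU hog: highest priority for up to
                           26 days of runtime per period */
    }

With runtime equal to half the period, the request presents a CPU
bandwidth of only 50% and passes the feasibility check quoted below,
even though any single runtime slice lasts 26 days.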
> There are some safety checks when using SCHED_DEADLINE:
> 
>        The kernel requires that:
> 
>            sched_runtime <= sched_deadline <= sched_period
> 
>        In addition, under the current implementation, all of the
>        parameter values must be at least 1024 (i.e., just over one
>        microsecond, which is the resolution of the implementation),
>        and less than 2^63.  If any of these checks fails,
>        sched_setattr(2) fails with the error EINVAL.
> 
>        The CBS guarantees non-interference between tasks, by
>        throttling threads that attempt to over-run their specified
>        Runtime.
> 
>        To ensure deadline scheduling guarantees, the kernel must
>        prevent situations where the set of SCHED_DEADLINE threads is
>        not feasible (schedulable) within the given constraints.  The
>        kernel thus performs an admittance test when setting or
>        changing SCHED_DEADLINE policy and attributes.  This admission
>        test calculates whether the change is feasible; if it is not,
>        sched_setattr(2) fails with the error EBUSY.
> 
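The admission test quoted above is bandwidth-based: the summed
runtime/period utilization of all SCHED_DEADLINE tasks is compared
against a bound derived from sched_rt_runtime_us / sched_rt_period_us
(95% of each CPU by default).  A rough sketch of the idea follows;
this is illustrative C, not the kernel's actual code, and the names
are invented:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct dl_params { uint64_t runtime_ns, period_ns; };

    /*
     * The sum of runtime/period over all SCHED_DEADLINE tasks must
     * stay below a configurable fraction of the available CPUs
     * (95% per CPU by default).  The absolute length of a runtime
     * slice is never examined.
     */
    static bool dl_admissible(const struct dl_params *t, size_t n,
                              unsigned int ncpus)
    {
            double total_bw = 0.0;

            for (size_t i = 0; i < n; i++)
                    total_bw += (double)t[i].runtime_ns
                                / t[i].period_ns;

            return total_bw <= 0.95 * ncpus;
    }

Nothing in such a test bounds the length of any single runtime slice,
which is why a 26-day runtime paired with a proportionally longer
period looks perfectly "feasible" to admission control.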
> The problem is that SCHED_DEADLINE is designed for sporadic tasks:
> 
>        A sporadic task is one that has a sequence of jobs, where each
>        job is activated at most once per period.  Each job also has a
>        relative deadline, before which it should finish execution,
>        and a computation time, which is the CPU time necessary for
>        executing the job.  The moment when a task wakes up because a
>        new job has to be executed is called the arrival time (also
>        referred to as the request time or release time).  The start
>        time is the time at which a task starts its execution.  The
>        absolute deadline is thus obtained by adding the relative
>        deadline to the arrival time.
> 
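In that notation, each job's absolute deadline is simply the arrival
time plus the relative deadline: a job arriving at t = 100 ms with a
10 ms relative deadline must finish by t = 110 ms.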
> It appears that the kernel's admission control before allowing
> SCHED_DEADLINE to be set on a thread was designed for sane
> applications, and not abusive ones.  Given that the process started
> doing abusive things *after* the SCHED_DEADLINE policy was set, for
> the kernel to figure out that SCHED_DEADLINE should in fact be denied
> for any arbitrary thread would require either (a) solving the halting
> problem, or (b) being able to anticipate the future (in which case,
> we should be using that kernel algorithm to play the stock market :-)

26 days will definitely get you a large collection of RCU CPU stall
warnings!  Thank you for digging into this, Ted.

I suppose RCU could take the dueling-banjos approach and use increasingly
aggressive scheduler policies itself, up to and including SCHED_DEADLINE,
until it started getting decent forward progress.  However, that
sounds like something that just might have unintended consequences,
particularly if other kernel subsystems were to also play similar
games of dueling banjos.

Alternatively, is it possible to provide stricter admission control?
For example, what sorts of policies do SCHED_DEADLINE users actually use?
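
One possible shape for a stricter check, purely as a hypothetical
sketch (the cap value and the function name are invented here, not an
existing kernel knob): beyond the bandwidth test, the kernel could
also refuse absurd absolute runtimes.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical: cap any single SCHED_DEADLINE runtime slice. */
    #define DL_RUNTIME_CAP_NS       (1000ULL * 1000 * 1000)  /* 1 s */

    static bool dl_request_sane(uint64_t runtime_ns,
                                uint64_t deadline_ns,
                                uint64_t period_ns)
    {
            /* The ordering/range checks quoted from sched(7) above. */
            if (runtime_ns < 1024 || period_ns >= (1ULL << 63))
                    return false;
            if (runtime_ns > deadline_ns || deadline_ns > period_ns)
                    return false;
            /* Hypothetical extra rule: no multi-day runtime slices. */
            return runtime_ns <= DL_RUNTIME_CAP_NS;
    }

Of course, choosing the cap is policy, which is exactly why the
question of what real SCHED_DEADLINE users actually request matters.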

							Thanx, Paul
