Message-ID: <20200906041403.GS29330@paulmck-ThinkPad-P72>
Date:   Sat, 5 Sep 2020 21:14:03 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Davidlohr Bueso <dave@...olabs.net>, peterz@...radead.org,
        mingo@...hat.com, will@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: Question on task_blocks_on_rt_mutex()

On Sat, Sep 05, 2020 at 05:45:02PM -0400, Joel Fernandes wrote:
> On Sat, Sep 05, 2020 at 05:24:06PM -0400, Joel Fernandes wrote:
> > Hi Paul,
> > 
> > On Thu, Sep 03, 2020 at 01:06:39PM -0700, Paul E. McKenney wrote:
> > > On Wed, Sep 02, 2020 at 08:54:10AM -0700, Paul E. McKenney wrote:
> > > > On Tue, Sep 01, 2020 at 06:51:28PM -0700, Davidlohr Bueso wrote:
> > > > > On Tue, 01 Sep 2020, Paul E. McKenney wrote:
> > > > > 
> > > > > > And it appears that a default-niced CPU-bound SCHED_OTHER process is
> > > > > > not preempted by a newly awakened MAX_NICE SCHED_OTHER process.  OK,
> > > > > > OK, I never waited for more than 10 minutes, but on my 2.2GHz system that is
> > > > > > close enough to a hang for most people.
> > > > > > 
> > > > > > Which means that the patch below prevents the hangs.  And maybe does
> > > > > > other things as well, firing rcutorture up on it to check.
> > > > > > 
> > > > > > But is this indefinite delay expected behavior?
> > > > > > 
> > > > > > This reproduces for me on current mainline as follows:
> > > > > > 
> > > > > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --torture lock --duration 3 --configs LOCK05
> > > > > > 
> > > > > > This hangs within a minute of boot on my setup.  Here "hangs" is defined
> > > > > > as stopping the per-15-second console output of:
> > > > > > 	Writes:  Total: 569906696 Max/Min: 81495031/63736508   Fail: 0
> [...]
> > > ------------------------------------------------------------------------
> > > 
> > > commit d93a64389f4d544ded241d0ba30b2586497f5dc0
> > > Author: Paul E. McKenney <paulmck@...nel.org>
> > > Date:   Tue Sep 1 16:58:41 2020 -0700
> > > 
> > >     torture: Periodically pause in stutter_wait()
> > >     
> > >     Running locktorture scenario LOCK05 results in hangs:
> > >     
> > >     tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --torture lock --duration 3 --configs LOCK05
> > >     
> > >     The lock_torture_writer() kthreads set themselves to MAX_NICE while
> > >     running SCHED_OTHER.  Other locktorture kthreads run at default niceness,
> > >     also SCHED_OTHER.  This results in these other locktorture kthreads
> > >     indefinitely preempting the lock_torture_writer() kthreads.  Note that
> > 
> > In the past I have seen issues with niceness and CFS. Those issues were
> > related to tick granularity: if the scheduler tick is too coarse, the
> > scheduler may allow a low-priority task to run for a bit longer. But this
> > also means that higher-priority tasks will take even longer to catch up to
> > the vruntime of the lower-priority ones. IIRC, this can run into several
> > seconds.
> > 
> > Not fully sure if that's what you're seeing. If you drop the niceness by some
> > amount, does the issue go away or get better?
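
As a concrete illustration of that experiment, a minimal sketch, assuming the
writer kthreads currently call set_user_nice(current, MAX_NICE) as the commit
message implies (the exact locktorture call site is assumed, not quoted):

	#include <linux/sched.h>
	#include <linux/sched/prio.h>	/* MAX_NICE */

	/* Sketch of the suggested experiment: narrow the niceness gap instead
	 * of running the writer kthreads at MAX_NICE (19). */
	static void writer_drop_niceness_sketch(void)
	{
		set_user_nice(current, MAX_NICE / 2);	/* nice 9 instead of 19 */
	}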
> > 
> > >     the cond_resched() in the stutter_wait() function's loop is ineffective
> > >     because this scenario is built with CONFIG_PREEMPT=y.
> > >     
> > >     It is not clear that such indefinite preemption is supposed to happen, but
> > >     in the meantime this commit prevents kthreads running in stutter_wait()
> > >     from being completely CPU-bound, thus allowing the other threads to get
> > >     some CPU in a timely fashion.  This commit also uses hrtimers to provide
> > >     very short sleeps to avoid degrading the sudden-on testing that stutter
> > >     is supposed to provide.
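
To make that concrete, a minimal sketch of the approach the commit message
describes, assuming a caller-supplied pause check (still_paused(), not from the
original code) and the stock schedule_hrtimeout() API; this illustrates the
idea and is not the actual patch:

	#include <linux/hrtimer.h>
	#include <linux/ktime.h>
	#include <linux/sched.h>

	/* Sketch only: occasionally take a very short hrtimer sleep while
	 * waiting out the stutter interval, so the kthread is never fully
	 * CPU-bound even though cond_resched() is ineffective here with
	 * CONFIG_PREEMPT=y. */
	static void stutter_pause_sketch(bool (*still_paused)(void))
	{
		ktime_t delay;
		int spins = 0;

		while (still_paused()) {
			if (++spins % 16 == 0) {
				/* Short sleep: yields the CPU to other tasks
				 * without noticeably degrading the sudden-on
				 * behavior that stutter is meant to provide. */
				set_current_state(TASK_UNINTERRUPTIBLE);
				delay = ktime_set(0, 10 * NSEC_PER_USEC);
				schedule_hrtimeout(&delay, HRTIMER_MODE_REL);
			} else {
				cond_resched();
			}
		}
	}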
> > 
> > There is a CFS tracepoint called sched:sched_stat_runtime. That could be
> > enabled to see what happens to the vruntime values on the wakeup of the lower
> > prio task.
> > 
> > I'm also seeing the LOCK05 failure. The hung-task detector shows that some
> > writer threads are in TASK_UNINTERRUPTIBLE state, so these writers didn't
> > wake up for over 2 minutes to begin with:
> > 
> > [  246.797326] task:lock_torture_wr state:D stack:14696 pid:   72 ppid:     2 flags:0x00004000
> > [  246.798826] Call Trace:
> > [  246.799282]  __schedule+0x414/0x6a0
> > [  246.799917]  schedule+0x41/0xe0
> > [  246.800510]  __rt_mutex_slowlock+0x49/0xd0
> > [  246.801259]  rt_mutex_slowlock+0xca/0x1e0
> > [  246.801994]  ? lock_torture_reader+0x110/0x110
> > [  246.802799]  torture_rtmutex_lock+0xc/0x10
> > [  246.803545]  lock_torture_writer+0x72/0x150
> > [  246.804322]  kthread+0x120/0x160
> > [  246.804911]  ? kthread_park+0x80/0x80
> > [  246.805581]  ret_from_fork+0x22/0x30
> > [  246.806237] INFO: task lock_torture_wr:73 blocked for more than 122 seconds.
> > [  246.807505]       Not tainted 5.9.0-rc1+ #26
> > [  246.808287] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > [  246.809690] task:lock_torture_wr state:D stack:14696 pid:   73 ppid:     2 flags:0x00004000
> > [  246.811208] Call Trace:
> > [  246.811657]  __schedule+0x414/0x6a0
> > [  246.812306]  schedule+0x41/0xe0
> > [  246.812881]  __rt_mutex_slowlock+0x49/0xd0
> > [  246.813636]  rt_mutex_slowlock+0xca/0x1e0
> > [  246.814371]  ? lock_torture_reader+0x110/0x110
> > [  246.815182]  torture_rtmutex_lock+0xc/0x10
> > [  246.815923]  lock_torture_writer+0x72/0x150
> > [  246.816692]  kthread+0x120/0x160
> > [  246.817287]  ? kthread_park+0x80/0x80
> > [  246.817952]  ret_from_fork+0x22/0x30
> > 
> > Could this just be a side effect of the issue you are seeing?  (A writer
> > acquired a lock but never got the CPU to release it, which in turn caused lock
> > acquirers to block in D-state indefinitely).
> 
> It appears to me that the reason could be that the higher-priority task is RT:
> 
> sched_switch: prev_comm=lock_torture_wr prev_pid=74 prev_prio=139 prev_state=R+ ==> next_comm=lock_torture_wr next_pid=70 next_prio=49
> 
> After this, only pid=70 runs until the hung-task detector screams.
> 
> Could this be because the writer calls cur_ops->task_boost(), which sets pid=70
> to RT?  As long as the RT task runs, it will block the CFS task without giving
> it any CPU.

Thank you for looking into this!  Sounds like something I would do...
And unlike rcutorture, the timeframes are too short for throttling
to kick in.  Seems like the fix is instead to set everything to
SCHED_OTHER while it is in torture_stutter().  I will give that a try.
Though perhaps the hrtimer sleep is better?
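
For what it's worth, a rough sketch of that set-everything-to-SCHED_OTHER-while-
stuttering idea, with assumed helper names and the long-standing
sched_setscheduler_nocheck() API rather than whatever an eventual patch would
actually use:

	#include <linux/sched.h>
	#include <uapi/linux/sched/types.h>	/* struct sched_param */

	/* Sketch only: demote the current kthread to SCHED_OTHER for the
	 * duration of the stutter pause, so a FIFO-boosted writer cannot
	 * starve the default-niced SCHED_OTHER kthreads. */
	static void stutter_demote_to_normal(void)
	{
		struct sched_param param = { .sched_priority = 0 };

		sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
	}

	/* Re-boost to SCHED_FIFO once the stutter pause ends. */
	static void stutter_reboost_to_fifo(int rt_prio)
	{
		struct sched_param param = { .sched_priority = rt_prio };

		sched_setscheduler_nocheck(current, SCHED_FIFO, &param);
	}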

							Thanx, Paul
