Open Source and information security mailing list archives
 
Message-ID: <20190820152025.GU2349@hirez.programming.kicks-ass.net>
Date:   Tue, 20 Aug 2019 17:20:25 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
        tglx@...utronix.de
Subject: Re: [PATCH] sched/core: Schedule new worker even if PI-blocked

On Tue, Aug 20, 2019 at 04:59:26PM +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-20 15:50:14 [+0200], Peter Zijlstra wrote:
> > On Fri, Aug 16, 2019 at 06:06:26PM +0200, Sebastian Andrzej Siewior wrote:
> > > If a task is PI-blocked (blocking on sleeping spinlock) then we don't want to
> > > schedule a new kworker if we schedule out due to lock contention because !RT
> > > does not do that as well.
> > 
> >  s/as well/either/
> > 
> > > A spinning spinlock disables preemption and a worker
> > > does not schedule out on lock contention (but spin).
> > 
> > I'm not much liking this; it means that rt_mutex and mutex have
> > different behaviour, and there are 'normal' rt_mutex users in the tree.
> 
> There is RCU (boosting) and futex. I'm sceptical about the i2c users…

Well, yes, I too was/am sceptical, but it was tglx who twisted my arm
and said the i2c people were right and rt_mutex is/should be a
generically usable interface.

This then resulted in the futex specific interface and lockdep support
for rt_mutex:

  5293c2efda37 ("futex,rt_mutex: Provide futex specific rt_mutex API")
  f5694788ad8d ("rt_mutex: Add lockdep annotations")

> > > On RT the RW-semaphore implementation uses an rtmutex so
> > > tsk_is_pi_blocked() will return true if a task blocks on it. In this case we
> > > will now start a new worker
> > 
> > I'm confused, by bailing out early it does _NOT_ start a new worker; or
> > am I reading it wrong?
> 
> s@now@not@. Your eyes work well, sorry for that.

All good, just trying to make sense of things :-)

> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -3945,7 +3945,7 @@ void __noreturn do_task_dead(void)
> > >  
> > >  static inline void sched_submit_work(struct task_struct *tsk)
> > >  {
> > > -	if (!tsk->state || tsk_is_pi_blocked(tsk))
> > > +	if (!tsk->state)
> > >  		return;
> > >  
> > >  	/*

So this part actually makes rt_mutex less special and is good.

> > > @@ -3961,6 +3961,9 @@ static inline void sched_submit_work(str
> > >  		preempt_enable_no_resched();
> > >  	}
> > >  
> > > +	if (tsk_is_pi_blocked(tsk))
> > > +		return;
> > > +
> > >  	/*
> > >  	 * If we are going to sleep and we have plugged IO queued,
> > >  	 * make sure to submit it to avoid deadlocks.
> > 
> > What do we need that clause for? Why is pi_blocked special _at_all_?
> 
> So on !RT the scheduler does nothing special if a task blocks on a
> sleeping lock. 
> If I remember correctly then blk_schedule_flush_plug() is the problem.
> It may require a lock which is held by the task. 
> It may hold A and wait for B while another task has B and waits for A. 
> If my memory does not betray me then ext+jbd can lock up without this.

And am I right in thinking that that, again, is specific to the
sleeping-spinlocks from PREEMPT_RT? Is there really nothing else that
identifies those more specifically? It's been a while since I looked at
them.

Also, I suppose it would be really good to put that in a comment.
