Date:	Fri, 31 Oct 2014 17:22:10 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	laijs@...fujitsu.com, dipankar@...ibm.com,
	akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
	josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
	dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
	fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com,
	Clark Williams <clark.williams@...il.com>
Subject: Re: [PATCH tip/core/rcu 4/7] rcu: Unify boost and kthread priorities

On Wed, Oct 29, 2014 at 09:16:02AM -0700, Paul E. McKenney wrote:

> > Also, should we look at running this stuff as deadline in order to
> > provide interference guarantees etc.. ?
> 
> Excellent question!  I have absolutely no idea what the answer might be.
> 
> Taking the two sets of kthreads separately...
> 
> rcub/N:	This is for RCU priority boosting.  In the preferred common case,
> 	these never wake up ever.  When they do wake up, all they do is
> 	cause blocked RCU readers to get priority boosted.   I vaguely
> 	recall something about inheritance of deadlines, which might
> 	work here.  One concern is what happens if the deadline is
> 	violated, as this isn't really necessarily an error condition
> 	in this case -- we don't know how long the RCU read-side critical
> 	section will run once awakened.

Yeah, this one is 'hard'. How is this used today? From the previous email
we've learnt that the default is FIFO-1, IOW it will preempt
SCHED_OTHER but not much more. How is this used in RT systems, and what
are the criteria for actually changing it?

Increase it until RCU stops spewing stall warnings, but not so far that
your workload fails?
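
Mechanically, bumping a task to a higher FIFO priority is just
sched_setscheduler(); a minimal userspace sketch (standard POSIX API,
nothing RCU-specific), the priority value being the knob that heuristic
tunes:

	#include <sched.h>
	#include <sys/types.h>

	/* Raise a task to SCHED_FIFO at the given priority. */
	static int set_fifo_prio(pid_t pid, int prio)
	{
		struct sched_param sp = { .sched_priority = prio };

		return sched_setscheduler(pid, SCHED_FIFO, &sp);
	}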

Not quite sure how to translate that heuristic into DL speak :-). The
problem of course is that if a DL task starts to trigger the stalls, we
need to do something.
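
For concreteness, assigning DL parameters to a thread would look
something like the sketch below; sched_setattr() and the struct layout
are real (the syscall has no glibc wrapper, hence syscall()), but the
10%-bandwidth numbers are placeholders, not tuned values:

	#include <stdint.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/syscall.h>

	#ifndef SCHED_DEADLINE
	#define SCHED_DEADLINE	6
	#endif

	/* Mirrors the kernel's uapi struct sched_attr. */
	struct sched_attr {
		uint32_t size;
		uint32_t sched_policy;
		uint64_t sched_flags;
		int32_t  sched_nice;		/* SCHED_OTHER/BATCH */
		uint32_t sched_priority;	/* SCHED_FIFO/RR */
		uint64_t sched_runtime;		/* SCHED_DEADLINE, ns */
		uint64_t sched_deadline;	/* ns */
		uint64_t sched_period;		/* ns */
	};

	/* Give 'pid' a 1ms budget every 10ms (placeholder numbers). */
	static int set_deadline(pid_t pid)
	{
		struct sched_attr attr = {
			.size		= sizeof(attr),
			.sched_policy	= SCHED_DEADLINE,
			.sched_runtime	=  1 * 1000 * 1000,
			.sched_deadline	= 10 * 1000 * 1000,
			.sched_period	= 10 * 1000 * 1000,
		};

		return syscall(SYS_sched_setattr, pid, &attr, 0);
	}

SYS_sched_setattr needs kernel headers >= 3.14; failing that,
__NR_sched_setattr can be used directly.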

> rcuc/N: This is the softirq replacement in -rt, but in mainline all it
> 	does is invoke RCU callbacks.	It might make sense to give it a
> 	deadline of something like a few milliseconds, but we would need
> 	to temper that if there were huge numbers of callbacks pending.
> 	Or perhaps have it claim that its "unit of work" was some fixed
> 	number of callbacks or emptying the list, whichever came first.
> 	Or maybe have its "unit of work" also depend on the number of
> 	callbacks pending.

Right, so the problem is that if we give it insufficient time it will
never catch up on running the callbacks, i.e. more will come in than we
can process and get out.

So if it works by splicing the callback list to a local list, then runs
to completion and then either immediately starts again if there's new
work, or goes to sleep waiting for more (roughly the loop sketched
below), _then_ we can already assign it DL parameters, with the only
caveat being the above issue.
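
The shape of that loop, with hypothetical list and callback types --
this is not the actual rcuc code, just the structure being described:

	#include <stddef.h>

	struct cb {
		struct cb *next;
		void (*func)(struct cb *);
	};

	static struct cb *pending;		/* fed by the enqueue side */

	extern void sleep_until_work(void);	/* hypothetical */

	static void run_callbacks(void)
	{
		for (;;) {
			/* Splice the shared list onto a local one. */
			struct cb *local =
				__atomic_exchange_n(&pending, NULL,
						    __ATOMIC_ACQ_REL);

			if (!local) {
				sleep_until_work();
				continue;
			}

			/*
			 * Run this batch to completion; new arrivals
			 * land on 'pending' and get picked up on the
			 * next iteration.
			 */
			while (local) {
				struct cb *next = local->next;

				local->func(local);
				local = next;
			}
		}
	}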

The advantage being indeed that if there are 'many' callbacks pending,
we'd only run a few, sleep, run a few more, etc., due to the CBS
(constant bandwidth server) throttling, until we're done. This smooths
out peak interference at the 'cost' of additional delay in actually
running the callbacks.
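
To put (made-up) numbers on it: with the placeholder parameters from
the sketch above, a 1ms runtime every 10ms period is 10% bandwidth, so
a backlog needing 5ms of CPU runs as five 1ms slices spread over
roughly 50ms instead of one 5ms burst.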

We should be able to detect the case where more and more work piles on
and the actual running does not appear to catch up (see the sketch
below), but I'm not sure what to do about it, seeing how system
stability is at risk.
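
Detection at least seems tractable; a hypothetical sketch (invented
names, arbitrary threshold), sampling the pending-callback count each
period and flagging sustained growth:

	#include <stdbool.h>

	#define GROWTH_PERIODS	8	/* arbitrary threshold */

	static unsigned long prev_qlen;
	static unsigned int growth_streak;

	/*
	 * If the pending count has grown for GROWTH_PERIODS consecutive
	 * samples, the DL budget is evidently too small -- what to *do*
	 * about that is the open question above.
	 */
	static bool dl_budget_too_small(unsigned long qlen)
	{
		if (qlen > prev_qlen)
			growth_streak++;
		else
			growth_streak = 0;

		prev_qlen = qlen;
		return growth_streak >= GROWTH_PERIODS;
	}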

Certainly something to think about..