Message-ID: <20241008144829.GG14587@noisy.programming.kicks-ass.net>
Date: Tue, 8 Oct 2024 16:48:29 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: bigeasy@...utronix.de, tglx@...utronix.de, mingo@...nel.org,
	linux-kernel@...r.kernel.org, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	vschneid@...hat.com, efault@....de
Subject: Re: [PATCH 2/5] sched: Add Lazy preemption model

On Mon, Oct 07, 2024 at 10:43:58PM -0700, Ankur Arora wrote:

> > @@ -519,7 +525,7 @@ static inline bool preempt_model_rt(void
> >   */
> >  static inline bool preempt_model_preemptible(void)
> >  {
> > -	return preempt_model_full() || preempt_model_rt();
> > +	return preempt_model_full() || preempt_model_lazy() || preempt_model_rt();
> >  }
> 
> In addition to preempt_model_preemptible() we probably also need
> 
>   static inline bool preempt_model_minimize_latency(void)
>   {
>   	return preempt_model_full() || preempt_model_rt();
>   }
> 
> for spin_needbreak()/rwlock_needbreak().
> 
> That would make the behaviour of spin_needbreak() under the lazy model
> similar to none/voluntary.

That whole thing needs rethinking; for one, the preempt_model_rt() case
doesn't really make sense anymore by the end of this series.
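
For context, the helper suggested above would slot into spin_needbreak()
roughly like so (a sketch of the idea only, not code from the patch;
preempt_model_minimize_latency() is the hypothetical helper quoted above,
where the current code tests preempt_model_preemptible() instead):

	static inline int spin_needbreak(spinlock_t *lock)
	{
		/*
		 * Under lazy -- like none/voluntary -- don't break the
		 * critical section on contention; only full and rt try
		 * to minimize latency here.
		 */
		if (!preempt_model_minimize_latency())
			return 0;

		return spin_is_contended(lock);
	}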

> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1078,6 +1078,9 @@ static void __resched_curr(struct rq *rq
> >
> >  	lockdep_assert_rq_held(rq);
> >
> > +	if (is_idle_task(curr) && tif == TIF_NEED_RESCHED_LAZY)
> > +		tif = TIF_NEED_RESCHED;
> > +
> 
> Tasks with idle policy get handled at the usual user space boundary.
> Maybe a comment reflecting that?

is_idle_task() != SCHED_IDLE. This is about the idle task itself, which
you always want to force-preempt. But I can stick a comment on.
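
Something like this on the hunk above, perhaps (a sketch of the comment,
not final wording):

	/*
	 * Always immediately preempt the idle task; no point in
	 * delaying doing actual work.
	 */
	if (is_idle_task(curr) && tif == TIF_NEED_RESCHED_LAZY)
		tif = TIF_NEED_RESCHED;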

> > @@ -5598,6 +5627,10 @@ void sched_tick(void)
> >  	update_rq_clock(rq);
> >  	hw_pressure = arch_scale_hw_pressure(cpu_of(rq));
> >  	update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure);
> > +
> > +	if (dynamic_preempt_lazy() && tif_test_bit(TIF_NEED_RESCHED_LAZY))
> > +		resched_curr(rq);
> > +
> 
> So this works for SCHED_NORMAL. But does this do the right thing for
> deadline and the other scheduling classes?

Yeah, only fair.c uses resched_curr_lazy(); the others still use
resched_curr() and will behave as if Full.

So that is: SCHED_IDLE, SCHED_BATCH and SCHED_NORMAL/OTHER get the lazy
thing; FIFO, RR and DEADLINE get the traditional Full behaviour.
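
For reference, that split boils down to roughly the following in
kernel/sched/core.c (paraphrased from the patch; treat as a sketch):

	void resched_curr(struct rq *rq)
	{
		__resched_curr(rq, TIF_NEED_RESCHED);
	}

	static __always_inline int get_lazy_tif_bit(void)
	{
		if (dynamic_preempt_lazy())
			return TIF_NEED_RESCHED_LAZY;

		return TIF_NEED_RESCHED;
	}

	void resched_curr_lazy(struct rq *rq)
	{
		__resched_curr(rq, get_lazy_tif_bit());
	}

So RT and DL keep setting TIF_NEED_RESCHED and preempt immediately, while
fair.c only sets TIF_NEED_RESCHED_LAZY, which gets upgraded at the tick
(see the sched_tick() hunk above) or handled at the user space boundary.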
