Date:	Mon, 13 Sep 2010 19:36:34 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Tony Lindgren <tony@...mide.com>,
	Mike Galbraith <efault@....de>
Subject: Re: [RFC PATCH] check_preempt_tick should not compare vruntime with
 wall time


* Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:

> * Peter Zijlstra (peterz@...radead.org) wrote:
> > On Mon, 2010-09-13 at 09:56 -0400, Mathieu Desnoyers wrote:
> [...]
> > > >  static void
> > > >  check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> > > >  {
> > > > -     unsigned long ideal_runtime, delta_exec;
> > > > +     unsigned long slice = sched_slice(cfs_rq, curr);
> > > 
> > > So you still compute the sched_slice(), based on sched_period(), based on
> > > sysctl_sched_min_granularity *= nr_running when there are more than nr_latency
> > > running threads.
> > 
> > What's wrong with that? I keep asking you, you keep not giving an
> > answer. Stop focussing on nr_latency, it's a by-product, not a
> > fundamental entity.
> > 
> >  period := max(latency, min_gran * nr_running)
> > 
> > See, no nr_latency -- the one and only purpose of nr_latency is avoiding
> > that multiplication when possible.
> 
> OK, the long IRC discussions we just had convinced me that the current 
> scheme takes things into account by adapting the granularity 
> dynamically, but also got me to notice that check_preempt seems to 
> compare vruntime with wall time, which is utterly incorrect. So maybe 
> all my patch was doing was to expose this bug:
> 
> ---
>  kernel/sched_fair.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> Index: linux-2.6-lttng.git/kernel/sched_fair.c
> ===================================================================
> --- linux-2.6-lttng.git.orig/kernel/sched_fair.c
> +++ linux-2.6-lttng.git/kernel/sched_fair.c
> @@ -869,7 +869,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq
>  		struct sched_entity *se = __pick_next_entity(cfs_rq);
>  		s64 delta = curr->vruntime - se->vruntime;
>  
> -		if (delta > ideal_runtime)
> +		if (delta > calc_delta_fair(ideal_runtime, curr))
>  			resched_task(rq_of(cfs_rq)->curr);
>  	}
>  }

It should have no effect at all on your latency measurements, as 
calc_delta_fair() is a NOP for nice-0 tasks:

 static inline unsigned long
 calc_delta_fair(unsigned long delta, struct sched_entity *se)
 {
         if (unlikely(se->load.weight != NICE_0_LOAD))
                 delta = calc_delta_mine(delta, NICE_0_LOAD, &se->load);

         return delta;
 }

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/