Message-ID: <20150720003405.GG3956@byungchulpark-X58A-UD3R>
Date:	Mon, 20 Jul 2015 09:34:05 +0900
From:	Byungchul Park <byungchul.park@....com>
To:	Mike Galbraith <umgwanakikbuti@...il.com>
Cc:	mingo@...nel.org, peterz@...radead.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sched: modify how to compute a slice and check a
 preemptability

On Sun, Jul 19, 2015 at 01:57:14PM +0200, Mike Galbraith wrote:
> On Sun, 2015-07-19 at 18:11 +0900, byungchul.park@....com wrote:
> 
> > @@ -3226,6 +3226,12 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> >  	struct sched_entity *se;
> >  	s64 delta;
> >  
> > +	/*
> > +	 * Ensure that a task executes at least for sysctl_sched_min_granularity
> > +	 */
> > +	if (delta_exec < sysctl_sched_min_granularity)
> > +		return;
> > +
> 
> Think about what this does to a low weight task, or any task in a low
> weight group.  The scheduler equalizes runtimes for a living, there is
> no free lunch.  Any runtime larger than fair share that you graciously
> grant to random task foo doesn't magically appear out of the vacuum, it
> comes out of task foo's wallet. If you drag that hard coded minimum down
> into the depths of group scheduling, yeah, every task will get a nice
> juicy slice of CPU.. eventually, though you may not live to see it.

Hello Mike,

Then I will not raise the question about ensuring a minimum slice quantity
any more; case 2 must be taken.
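
To make the fair-share point above concrete, here is a minimal userspace
sketch (not kernel code; the weights are the kernel's prio_to_weight[]
values for nice 0 and nice 19, and the 6 ms / 0.75 ms figures are the
default sysctl values of that era, used purely as illustration):

#include <stdio.h>

int main(void)
{
        const double sched_latency_ms = 6.0;    /* sysctl_sched_latency default */
        const double min_granularity_ms = 0.75; /* sysctl_sched_min_granularity default */
        const double w_nice0 = 1024.0;          /* prio_to_weight[] for nice 0 */
        const double w_nice19 = 15.0;           /* prio_to_weight[] for nice 19 */

        /* two runnable tasks on one CPU: one nice 0, one nice 19 */
        double slice_nice19 = sched_latency_ms * w_nice19 / (w_nice0 + w_nice19);

        printf("fair slice of the nice-19 task: %.3f ms\n", slice_nice19);
        printf("hard-coded minimum:             %.3f ms\n", min_granularity_ms);
        /*
         * ~0.087 ms vs 0.75 ms: an early "run at least min_granularity"
         * return would let the nice-19 task run roughly 8-9x its fair
         * slice before the tick can preempt it.
         */
        return 0;
}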

> 
> (yeah, overrun can and will happen at all depths due to tick
> granularity, but you guaranteed it, so I inflated severity a bit;)

Yes, I also think that a preemption granularity has little meaning, actually,
because of tick granularity. So, to be honest with you, my attempt is a
rather trivial thing, just meant to fix wrong code.
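
To put numbers on the tick point, a back-of-the-envelope sketch (userspace
only; the HZ values are the usual CONFIG_HZ choices and 0.75 ms is that
era's default minimum granularity, so treat the figures as illustrative):

#include <stdio.h>

int main(void)
{
        const int hz_choices[] = { 100, 250, 300, 1000 };
        const double min_granularity_ms = 0.75;
        unsigned int i;

        for (i = 0; i < sizeof(hz_choices) / sizeof(hz_choices[0]); i++) {
                double tick_ms = 1000.0 / hz_choices[i];
                printf("HZ=%4d: tick period %5.2f ms vs min granularity %.2f ms\n",
                       hz_choices[i], tick_ms, min_granularity_ms);
        }
        /*
         * At every standard HZ value the tick period already exceeds the
         * 0.75 ms default, so a check driven from the tick can only
         * overshoot that minimum, never enforce anything finer.
         */
        return 0;
}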

> 
> >  	ideal_runtime = sched_slice(cfs_rq, curr);
> >  	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
> >  	if (delta_exec > ideal_runtime) {
> > @@ -3243,9 +3249,6 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> >  	 * narrow margin doesn't have to wait for a full slice.
> >  	 * This also mitigates buddy induced latencies under load.
> >  	 */
> > -	if (delta_exec < sysctl_sched_min_granularity)
> > -		return;
> > -
> 
> That was about something entirely different.  Feel free to remove it
> after verifying that it has outlived its original purpose, but please
> don't just move it about at random.

Yes, I will not try to ensure a minimum preemption granularity any more.
If I have to choose case 2, I would like to remove it.
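
For reference, the unpatched mainline function around that check looked
roughly like this at the time (reconstructed from the quoted hunks, so
minor details may differ); the check guards the wakeup-preemption
comparison against the leftmost entity below it, not the overrun branch
above it:

static void
check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
{
        unsigned long ideal_runtime, delta_exec;
        struct sched_entity *se;
        s64 delta;

        /* preempt if the task has overrun its fair slice */
        ideal_runtime = sched_slice(cfs_rq, curr);
        delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
        if (delta_exec > ideal_runtime) {
                resched_curr(rq_of(cfs_rq));
                clear_buddies(cfs_rq, curr);
                return;
        }

        /*
         * Ensure that a task that missed wakeup preemption by a
         * narrow margin doesn't have to wait for a full slice.
         * This also mitigates buddy induced latencies under load.
         */
        if (delta_exec < sysctl_sched_min_granularity)
                return;

        /* otherwise preempt in favour of the leftmost entity if it is far behind */
        se = __pick_first_entity(cfs_rq);
        delta = curr->vruntime - se->vruntime;
        if (delta < 0)
                return;
        if (delta > ideal_runtime)
                resched_curr(rq_of(cfs_rq));
}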

Thank you,
Byungchul

> 
> 	-Mike
> 