Message-ID: <1437307034.3520.108.camel@gmail.com>
Date:	Sun, 19 Jul 2015 13:57:14 +0200
From:	Mike Galbraith <umgwanakikbuti@...il.com>
To:	byungchul.park@....com
Cc:	mingo@...nel.org, peterz@...radead.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sched: modify how to compute a slice and check a
 preemptability

On Sun, 2015-07-19 at 18:11 +0900, byungchul.park@....com wrote:

> @@ -3226,6 +3226,12 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
>  	struct sched_entity *se;
>  	s64 delta;
>  
> +	/*
> +	 * Ensure that a task executes at least for sysctl_sched_min_granularity
> +	 */
> +	if (delta_exec < sysctl_sched_min_granularity)
> +		return;
> +

Think about what this does to a low weight task, or any task in a low
weight group.  The scheduler equalizes runtimes for a living; there is
no free lunch.  Any runtime larger than fair share that you graciously
grant to random task foo doesn't magically appear out of the vacuum; it
comes out of task foo's wallet.  If you drag that hard coded minimum
down into the depths of group scheduling, yeah, every task will get a
nice juicy slice of CPU.. eventually, though you may not live to see it.

(yeah, overrun can and will happen at all depths due to tick
granularity, but you guaranteed it, so I inflated severity a bit;)
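
To put a number on "fair share", here's a rough userspace sketch of the
slice arithmetic (a simplified stand-in for sched_slice(), not the
kernel code; the base default tunables before CPU-count scaling and the
prio_to_weight values for nice 0 and nice +19 are assumed):

#include <stdio.h>

#define NSEC_PER_MSEC	1000000ULL

/* base defaults, before boot-time scaling by CPU count */
static unsigned long long sysctl_sched_latency         = 6 * NSEC_PER_MSEC;
static unsigned long long sysctl_sched_min_granularity = 750000ULL; /* 0.75ms */

/* load weights from the kernel's prio_to_weight[] table */
#define WEIGHT_NICE_0	1024ULL
#define WEIGHT_NICE_19	15ULL

/* simplified slice: latency period * my weight / total runqueue weight */
static unsigned long long fair_slice(unsigned long long weight,
				     unsigned long long total_weight)
{
	return sysctl_sched_latency * weight / total_weight;
}

int main(void)
{
	/* one nice +19 task sharing a CPU with three nice 0 tasks */
	unsigned long long total = 3 * WEIGHT_NICE_0 + WEIGHT_NICE_19;
	unsigned long long slice = fair_slice(WEIGHT_NICE_19, total);

	printf("fair slice: %lluus, enforced minimum: %lluus\n",
	       slice / 1000, sysctl_sched_min_granularity / 1000);
	/*
	 * fair slice ~= 6ms * 15/3087 ~= 29us, but the proposed check
	 * guarantees 750us: roughly a 25x overrun of fair share that
	 * has to be repaid out of that task's future runtime.
	 */
	return 0;
}

A fair slice of ~29us versus a guaranteed 750us is already a ~25x
overrun at a single level; nest the check down a group hierarchy and
the debt only grows.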

>  	ideal_runtime = sched_slice(cfs_rq, curr);
>  	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
>  	if (delta_exec > ideal_runtime) {
> @@ -3243,9 +3249,6 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
>  	 * narrow margin doesn't have to wait for a full slice.
>  	 * This also mitigates buddy induced latencies under load.
>  	 */
> -	if (delta_exec < sysctl_sched_min_granularity)
> -		return;
> -

That was about something entirely different.  Feel free to remove it
after verifying that it has outlived its original purpose, but please
don't just move it about at random.

	-Mike

