Message-ID: <20130402093535.GF16699@lge.com>
Date:	Tue, 2 Apr 2013 18:35:35 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	Mike Galbraith <efault@....de>
Cc:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, Paul Turner <pjt@...gle.com>,
	Alex Shi <alex.shi@...el.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Namhyung Kim <namhyung@...nel.org>
Subject: Re: [PATCH 4/5] sched: don't consider upper se in sched_slice()

Hello, Mike.

On Tue, Apr 02, 2013 at 04:35:26AM +0200, Mike Galbraith wrote:
> On Tue, 2013-04-02 at 11:25 +0900, Joonsoo Kim wrote: 
> > Hello, Preeti.
> > 
> > On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote:
> > > Hi Joonsoo,
> > > 
> > > On 04/01/2013 09:38 AM, Joonsoo Kim wrote:
> > > > Hello, Preeti.
> > > > 
> > > 
> > > >>
> > > >> Ideally the children's cpu share must add up to the parent's share.
> > > >>
> > > > 
> > > > I don't think so.
> > > > 
> > > > We should schedule out the parent tg once its 5ms are over. That way we
> > > > can fairly distribute time slices to every tg within a short term. If we
> > > > add the children's cpu shares up to the parent's, the parent tg may get a
> > > > large time slice, so it cannot be preempted easily. There may be a latency
> > > > problem if there are many tgs.
> > > 
> > > In the case where the #running < sched_nr_latency, the children's
> > > sched_slices add up to the parent's.
> > > 
> > > A rq with two tgs, each with 3 tasks.
> > > 
> > > Each of these tasks has a sched_slice of
> > > [(sysctl_sched_latency / 3) / 2] as of the present implementation.
> > > 
> > > The sum of the above sched_slices of all tasks of a tg adds up to the
> > > sched_slice of its parent: sysctl_sched_latency / 2
> > > 
> > > This breaks when the nr_running on each tg > sched_nr_latency. However, I
> > > don't know if this is a good thing or a bad thing.
> > 
> > Ah, now I get your point. Yes, you are right, and it may be a good thing.
> > With that property, all tasks in the system can be scheduled at least once
> > within sysctl_sched_latency. sysctl_sched_latency is a system-wide
> > configuration, so my patch may be wrong: with it, there is no longer a
> > guarantee that every task in the system is scheduled at least once within
> > sysctl_sched_latency. Instead, it schedules all tasks in a cfs_rq at least
> > once within sysctl_sched_latency only if there are no other tgs.
> > 
> > I think it is a real problem that sysctl_sched_min_granularity is not
> > guaranteed for each task.
> > Instead of this patch, how about enforcing a lower bound?
> > 
> > if (slice < sysctl_sched_min_granularity)
> > 	slice = sysctl_sched_min_granularity;
> 
> How many SCHED_IDLE or +nice tasks will fit in that?

It is more a matter of how many tasks are running in the cfs_rq and how many
tgs are in the system. If we have two tgs, each with more than
sched_nr_latency tasks, all of these tasks hit this condition in the current
implementation.
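
For example, with the stock defaults (sysctl_sched_latency = 6ms,
sysctl_sched_min_granularity = 0.75ms, sched_nr_latency = 8), equal nice-0
weights, and ignoring the CPU-count scaling the kernel applies to these
tunables, a quick user-space sketch of the slice arithmetic (mirroring the
spirit of __sched_period()/sched_slice() for the equal-weight case, not the
kernel code itself) looks like this:

#include <stdio.h>

#define NSEC_PER_MSEC	1000000ULL

/* Stock defaults; the real kernel scales these by CPU count (assume 1 CPU). */
static const unsigned long long sysctl_sched_latency         = 6 * NSEC_PER_MSEC;
static const unsigned long long sysctl_sched_min_granularity = 750000ULL; /* 0.75ms */
static const unsigned int sched_nr_latency = 8;

/* Same shape as __sched_period(): stretch the period once nr_running
 * exceeds sched_nr_latency. */
static unsigned long long sched_period(unsigned int nr_running)
{
	if (nr_running > sched_nr_latency)
		return nr_running * sysctl_sched_min_granularity;
	return sysctl_sched_latency;
}

/*
 * Per-task slice for a task in one of nr_tg equally weighted groups, each
 * running tasks_per_tg equally weighted tasks: the period of the task's
 * cfs_rq, scaled by the task's share at each level of the hierarchy.
 */
static unsigned long long task_slice(unsigned int nr_tg, unsigned int tasks_per_tg)
{
	unsigned long long slice = sched_period(tasks_per_tg);

	slice /= tasks_per_tg;	/* task's share within its tg's cfs_rq */
	slice /= nr_tg;		/* tg's share of the root cfs_rq       */
	return slice;
}

int main(void)
{
	/* { nr_tg, tasks_per_tg }: Preeti's 2x3 example and the 2x10 case above. */
	unsigned int cases[2][2] = { { 2, 3 }, { 2, 10 } };

	for (int i = 0; i < 2; i++) {
		unsigned long long slice = task_slice(cases[i][0], cases[i][1]);

		printf("%u tgs x %2u tasks each: slice = %llu ns%s\n",
		       cases[i][0], cases[i][1], slice,
		       slice < sysctl_sched_min_granularity ?
		       " (below sysctl_sched_min_granularity)" : "");
	}
	return 0;
}

For 2 tgs x 3 tasks this prints 1ms per task (3ms per tg, as Preeti noted
above); for 2 tgs x 10 tasks it prints 0.375ms, which is below the 0.75ms
minimum, so every such task would be clamped by the proposed check.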

Thanks.

> 
> -Mike
> 