Message-ID: <20110217025426.GA2775@in.ibm.com>
Date:	Thu, 17 Feb 2011 08:24:26 +0530
From:	Bharata B Rao <bharata@...ux.vnet.ibm.com>
To:	Balbir Singh <balbir@...ux.vnet.ibm.com>
Cc:	Paul Turner <pjt@...gle.com>, linux-kernel@...r.kernel.org,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Gautham R Shenoy <ego@...ibm.com>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Pavel Emelyanov <xemul@...nvz.org>,
	Herbert Poetzl <herbert@...hfloor.at>,
	Avi Kivity <avi@...hat.com>,
	Chris Friesen <cfriesen@...tel.com>,
	Nikhil Rao <ncrao@...gle.com>
Subject: Re: [CFS Bandwidth Control v4 1/7] sched: introduce primitives to
 account for CFS bandwidth tracking

On Wed, Feb 16, 2011 at 10:22:16PM +0530, Balbir Singh wrote:
> * Paul Turner <pjt@...gle.com> [2011-02-15 19:18:32]:
> 
> > In this patch we introduce the notion of CFS bandwidth; to account for the
> > realities of SMP, this is partitioned into globally unassigned bandwidth and
> > locally claimed bandwidth:
> > - The global bandwidth is per task_group, it represents a pool of unclaimed
> >   bandwidth that cfs_rq's can allocate from.  It uses the new cfs_bandwidth
> >   structure.
> > - The local bandwidth is tracked per-cfs_rq; this represents allotments
> >   claimed from the global pool, i.e. the portion of the task_group's
> >   bandwidth that this cfs_rq may consume locally (sketched below).
> > 
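Piecing together the fields visible in the hunks quoted further down, the
global pool looks roughly like this (a sketch with inferred types, not the
complete patch):

	struct cfs_bandwidth {
		raw_spinlock_t	lock;		/* guards period/quota/runtime */
		ktime_t		period;		/* enforcement period */
		u64		quota;		/* runtime granted per period */
		u64		runtime;	/* unclaimed runtime remaining */
		struct hrtimer	period_timer;	/* fires at period expiration */
	};
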
> > Bandwidth is managed via cgroupfs, using two new files in the cpu subsystem:
> > - cpu.cfs_period_us : the bandwidth period in usecs
> > - cpu.cfs_quota_us : the cpu bandwidth (in usecs) that this tg is allowed
> >   to consume over the period above.
> > 
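For concreteness, a minimal userspace sketch of driving these two files;
the mount point /sys/fs/cgroup/cpu and the group name "g1" here are
assumptions, not part of the patch:

	#include <stdio.h>

	/* Cap group "g1" at half a CPU: 50ms of runtime per 100ms period. */
	static int write_val(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fputs(val, f);
		return fclose(f);
	}

	int main(void)
	{
		write_val("/sys/fs/cgroup/cpu/g1/cpu.cfs_period_us", "100000");
		write_val("/sys/fs/cgroup/cpu/g1/cpu.cfs_quota_us", "50000");
		return 0;
	}

With these values the group as a whole may consume at most 50ms of CPU time
in any 100ms window, i.e. half a CPU, no matter how many tasks it runs.
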
> > A per-cfs_bandwidth timer is also introduced to handle future refresh at
> > period expiration.  There's some minor refactoring here so that the
> > start_bandwidth_timer() functionality can be shared.
> > 
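The shared helper is the usual "arm an absolute hrtimer one period out
unless it is already queued" idiom; roughly like this, from memory of the
mainline code, so details may differ:

	static void start_bandwidth_timer(struct hrtimer *period_timer, ktime_t period)
	{
		ktime_t now;

		if (hrtimer_active(period_timer))
			return;

		/* Advance the expiry by whole periods past "now", then queue
		 * the timer in absolute mode.
		 */
		now = hrtimer_cb_get_time(period_timer);
		hrtimer_forward(period_timer, now, period);
		hrtimer_start_expires(period_timer, HRTIMER_MODE_ABS_PINNED);
	}
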
> > Signed-off-by: Paul Turner <pjt@...gle.com>
> > Signed-off-by: Nikhil Rao <ncrao@...gle.com>
> > Signed-off-by: Bharata B Rao <bharata@...ux.vnet.ibm.com>
> > ---
> 
> Looks good, minor nits below
> 
> 
> Acked-by: Balbir Singh <balbir@...ux.vnet.ibm.com>

Thanks Balbir.

> > +
> > +static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
> > +{
> > +	struct cfs_bandwidth *cfs_b =
> > +		container_of(timer, struct cfs_bandwidth, period_timer);
> > +	ktime_t now;
> > +	int overrun;
> > +	int idle = 0;
> > +
> > +	for (;;) {
> > +		now = hrtimer_cb_get_time(timer);
> > +		overrun = hrtimer_forward(timer, now, cfs_b->period);
> > +
> > +		if (!overrun)
> > +			break;
> > +
> > +		idle = do_sched_cfs_period_timer(cfs_b, overrun);
> 
> This patch just sets up do_sched_cfs_period_timer() to return 1. I am
> afraid I don't understand why this function is introduced here.

Answered this during the last post: http://lkml.org/lkml/2010/10/14/31
(short version: patch 1/7 only introduces the primitives; later patches in
the series flesh out do_sched_cfs_period_timer() with the real refresh logic).

> > +
> > +	mutex_lock(&mutex);
> > +	raw_spin_lock_irq(&tg->cfs_bandwidth.lock);
> > +	tg->cfs_bandwidth.period = ns_to_ktime(period);
> > +	tg->cfs_bandwidth.runtime = tg->cfs_bandwidth.quota = quota;
> > +	raw_spin_unlock_irq(&tg->cfs_bandwidth.lock);
> > +
> > +	for_each_possible_cpu(i) {
> 
> Why for each possible cpu - to avoid hotplug handling?

Touched upon this during the last post: https://lkml.org/lkml/2010/12/6/49
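
For the archive, the short version: tg->cfs_rq[] is allocated for every
possible CPU, so writing the new quota to all of them means a CPU that is
onlined later already carries the updated value, and no hotplug notifier is
needed on this path. Illustrative shape only, with a hypothetical helper name:

	for_each_possible_cpu(i) {
		/* tg->cfs_rq[i] exists even while CPU i is offline, so it is
		 * safe to update here; the CPU picks up the new quota as soon
		 * as it comes online.
		 */
		reset_local_quota(tg->cfs_rq[i]);	/* hypothetical helper */
	}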

Regards,
Bharata.
