Message-ID: <1399587244.2030.59.camel@j-VirtualBox>
Date:	Thu, 08 May 2014 15:14:04 -0700
From:	Jason Low <jason.low2@...com>
To:	Ingo Molnar <mingo@...nel.org>, jason.low2@...com
Cc:	peterz@...radead.org, linux-kernel@...r.kernel.org,
	daniel.lezcano@...aro.org, alex.shi@...aro.org,
	preeti@...ux.vnet.ibm.com, efault@....de,
	vincent.guittot@...aro.org, morten.rasmussen@....com, aswin@...com
Subject: Re: [PATCH 2/2] sched: Fix next_balance logic in rebalance_domains() and idle_balance()

On Thu, 2014-05-08 at 19:38 +0200, Ingo Molnar wrote:
> * Jason Low <jason.low2@...com> wrote:
> 
> > On Mon, 2014-04-28 at 15:45 -0700, Jason Low wrote:
> > > Currently, in idle_balance(), we update rq->next_balance only when we pull
> > > tasks. However, it is also important to update it in the !pulled_task case.
> > > 
> > > When the CPU is "busy" (the CPU isn't idle), rq->next_balance gets computed
> > > using sd->busy_factor (so we increase the balance interval when the CPU is
> > > busy). However, when the CPU goes idle, rq->next_balance could still be set
> > > to a large value that was computed with the sd->busy_factor.
> > > 
> > > Thus, we also need to update rq->next_balance in idle_balance() when
> > > !pulled_task, so that rq->next_balance is recomputed without the busy_factor
> > > when the CPU is about to go idle.
> > > 
> > > This patch makes rq->next_balance get updated independently of whether or
> > > not we pulled a task. We also add logic to ensure that we always traverse
> > > at least one of the sched domains, so that we obtain a proper next_balance
> > > value for updating rq->next_balance.
> > > 
> > > Additionally, since load_balance() modifies sd->balance_interval, we
> > > need to re-obtain the sched domain's interval after the call to
> > > load_balance() in rebalance_domains() before we update rq->next_balance.
> > > This patch adds and uses two new helper functions, update_next_balance() and
> > > get_sd_balance_interval(), to update next_balance and to obtain the sched
> > > domain's balance_interval.
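
(For context, the two helpers look roughly like this; a sketch based on the
changelog above, not the patch verbatim. max_load_balance_interval is the
existing clamp limit in kernel/sched/fair.c.)

	/*
	 * Sketch of the helpers described above. get_sd_balance_interval()
	 * folds in busy_factor only when the CPU is busy;
	 * update_next_balance() pulls *next_balance earlier if this domain
	 * is due to be balanced sooner.
	 */
	static inline unsigned long
	get_sd_balance_interval(struct sched_domain *sd, int cpu_busy)
	{
		unsigned long interval = sd->balance_interval;

		if (cpu_busy)
			interval *= sd->busy_factor;

		/* Scale ms to jiffies and clamp to a sane range. */
		interval = msecs_to_jiffies(interval);
		interval = clamp(interval, 1UL, max_load_balance_interval);

		return interval;
	}

	static inline void
	update_next_balance(struct sched_domain *sd, int cpu_busy,
			    unsigned long *next_balance)
	{
		unsigned long interval, next;

		interval = get_sd_balance_interval(sd, cpu_busy);
		next = sd->last_balance + interval;

		if (time_after(*next_balance, next))
			*next_balance = next;
	}

Note that both helpers dereference sd, which is relevant to the NULL-domain
question below.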
> > 
> > 
> > Hi Peter,
> > 
> > I noticed that patch 1 is in tip, but not this patch 2. I was wondering
> > what the status of this [PATCH 2/2] is at the moment.
> 
> It was crashing during bootup with the attached config; it gave the splat
> attached below. (Ignore the line duplication; it's a serial logging
> artifact.)

Hi Ingo, Peter,

Were there NULL domains on the test system? If so, I think we can
address the problem by calling update_next_balance() only if the
rcu_dereference_check_sched_domain() call below returns a non-NULL domain.

@@ -6665,8 +6692,14 @@ static int idle_balance(struct rq *this_rq)
 	 */
 	this_rq->idle_stamp = rq_clock(this_rq);
 
-	if (this_rq->avg_idle < sysctl_sched_migration_cost)
+	if (this_rq->avg_idle < sysctl_sched_migration_cost) {
+		rcu_read_lock();
+		sd = rcu_dereference_check_sched_domain(this_rq->sd);
+		update_next_balance(sd, 0, &next_balance);
+		rcu_read_unlock();
+
 		goto out;
+	}
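
Concretely, the guarded version would look something like this (a sketch of
the proposed check, assuming update_next_balance() dereferences sd
unconditionally, which would explain the NULL-domain crash):

	if (this_rq->avg_idle < sysctl_sched_migration_cost) {
		rcu_read_lock();
		sd = rcu_dereference_check_sched_domain(this_rq->sd);
		/* Only update next_balance when a domain is attached. */
		if (sd)
			update_next_balance(sd, 0, &next_balance);
		rcu_read_unlock();

		goto out;
	}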

