Date:	Fri, 17 Feb 2012 03:53:18 -0800
From:	Paul Turner <pjt@...gle.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	linux-kernel@...r.kernel.org, Venki Pallipadi <venki@...gle.com>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Mike Galbraith <efault@....de>,
	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
	Ben Segall <bsegall@...gle.com>, Ingo Molnar <mingo@...e.hu>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
Subject: Re: [RFC PATCH 04/14] sched: maintain the load contribution of
 blocked entities

On Thu, Feb 16, 2012 at 4:25 AM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> On Wed, 2012-02-01 at 17:38 -0800, Paul Turner wrote:
>> +static inline void subtract_blocked_load_contrib(struct cfs_rq *cfs_rq,
>> +                                                long load_contrib)
>> +{
>> +       if (likely(load_contrib < cfs_rq->blocked_load_avg))
>> +               cfs_rq->blocked_load_avg -= load_contrib;
>> +       else
>> +               cfs_rq->blocked_load_avg = 0;
>> +}
>> +
>>  /* Update a sched_entity's runnable average */
>> -static inline void update_entity_load_avg(struct sched_entity *se)
>> +static inline void update_entity_load_avg(struct sched_entity *se,
>> +                                         int update_cfs_rq)
>>  {
>>         struct cfs_rq *cfs_rq = cfs_rq_of(se);
>>         long contrib_delta;
>> @@ -1106,8 +1130,34 @@ static inline void update_entity_load_avg(struct sched_entity *se)
>>                 return;
>>
>>         contrib_delta = __update_entity_load_avg_contrib(se);
>> +
>> +       if (!update_cfs_rq)
>> +               return;
>> +
>>         if (se->on_rq)
>>                 cfs_rq->runnable_load_avg += contrib_delta;
>> +       else
>> +               subtract_blocked_load_contrib(cfs_rq, -contrib_delta);
>> +}
>
> So that last bit is add_blocked_load_contrib(), right?

Yes, although contrib_delta is signed.

I suppose this looks a little funny but there's a good explanation:
When adding a contribution to, or removing one from, blocked_load_avg
we have to be careful: rounding errors mean the delta to our
contribution and the delta to our portion of blocked_load_avg can
differ by a few bits.  Being low is fine since it's constantly decaying
to zero, so any error term is not long for this world -- but we do want
to make sure we don't underflow in the other direction.

This means any time we remove a contribution we have to do the whole
	"if (likely(load_contrib < cfs_rq->blocked_load_avg))
		cfs_rq->blocked_load_avg -= load_contrib;
	else
		cfs_rq->blocked_load_avg = 0"
thing.
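
To make the underflow concern concrete, here is a minimal userspace
sketch (not the kernel code; the aggregate is modelled as a bare u64
and the values are made up) showing what the clamp buys us when the
tracked contribution has drifted a few bits high:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t blocked_load_avg = 1000;	/* rq-wide aggregate */
		uint64_t load_contrib = 1003;		/* tracked a few bits high */

		/* naive removal wraps the unsigned aggregate around... */
		uint64_t naive = blocked_load_avg - load_contrib;

		/* ...while the clamped form bottoms out at zero */
		uint64_t clamped = load_contrib < blocked_load_avg ?
				   blocked_load_avg - load_contrib : 0;

		printf("naive: %llu  clamped: %llu\n",
		       (unsigned long long)naive, (unsigned long long)clamped);
		return 0;
	}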

Typically we only care about doing this when removing load (e.g. on a
local wake-up), since there is no underflow risk in the other
direction; so we end up with subtract_blocked_load_contrib() to handle
that common case.

Coming back to its use here:
Since contrib_delta is based on a freshly computed load contribution we
have to take the same care as when removing; but luckily we already
have subtract_blocked_load_contrib() from dealing with that everywhere
else.  So we just re-use it and flip the sign.

We could instead call it add everywhere else (and again flip the sign),
but that's less intuitive since in those cases we really are only
looking to (safely) remove.
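
For illustration, here is a simplified, self-contained version of that
call site (the struct and the numbers are made up; only the shape of
the helper follows the patch).  A blocked entity's contribution
typically only decays, so contrib_delta is negative and passing
-contrib_delta removes exactly the decayed amount, clamped against
underflow:

	#include <stdio.h>

	struct cfs_rq { unsigned long blocked_load_avg; };	/* simplified */

	static void subtract_blocked_load_contrib(struct cfs_rq *cfs_rq,
						  long load_contrib)
	{
		if (load_contrib < cfs_rq->blocked_load_avg)
			cfs_rq->blocked_load_avg -= load_contrib;
		else
			cfs_rq->blocked_load_avg = 0;
	}

	int main(void)
	{
		struct cfs_rq rq = { .blocked_load_avg = 5000 };

		long old_contrib = 2048;
		long new_contrib = 1985;			/* decayed while blocked */
		long contrib_delta = new_contrib - old_contrib;	/* -63 */

		/* the !se->on_rq branch: "subtract the negated delta",
		 * i.e. fold the decay back out of the rq-wide sum */
		subtract_blocked_load_contrib(&rq, -contrib_delta);

		printf("blocked_load_avg: %lu\n", rq.blocked_load_avg);	/* 4937 */
		return 0;
	}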