Message-ID: <4FEA30B2.9010902@att.net>
Date: Tue, 26 Jun 2012 16:59:14 -0500
From: Daniel Santos <danielfsantos@....net>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Christopher Li <sparse@...isli.org>,
David Daney <david.daney@...ium.com>,
David Howells <dhowells@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
Ingo Molnar <mingo@...nel.org>, Joe Perches <joe@...ches.com>,
Konstantin Khlebnikov <khlebnikov@...nvz.org>,
linux-doc@...r.kernel.org, linux-sparse@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Paul Turner <pjt@...gle.com>,
Pavel Pisa <pisa@....felk.cvut.cz>,
Richard Weinberger <richard@....at>,
Rob Landley <rob@...dley.net>,
Steven Rostedt <rostedt@...dmis.org>,
Suresh Siddha <suresh.b.siddha@...el.com>
Subject: Re: [PATCH v4 12/13] fair.c: Use generic rbtree impl in fair scheduler
On 06/26/2012 07:15 AM, Peter Zijlstra wrote:
> On Fri, 2012-06-22 at 23:00 -0500, Daniel Santos wrote:
>> +static inline long compare_vruntime(u64 *a, u64 *b)
>> +{
>> +#if __BITS_PER_LONG >= 64
>> + return (long)((s64)*a - (s64)*b);
>> +#else
>> +/* This is hacky, but is done to reduce instructions -- we wont use this for
>> + * rbtree lookups, only inserts, and since our relationship is defined as
>> + * non-unique, we only need to return positive if a > b and any other value
>> + * means less than.
>> + */
>> + return (long)(*a > *b);
>> +#endif
>> +}
> That's wrong.. suppose: a = 10, b = ULLONG_MAX - 10
>
> In that case (s64)(a - b) = 20, however a > b is false.
>
> And yes, vruntime wrap does happen.
Oh, I see now! (looking at entity_before):

static inline int entity_before(struct sched_entity *a,
                                struct sched_entity *b)
{
        return (s64)(a->vruntime - b->vruntime) < 0;
}
Do the subtraction unsigned, then evaluate the result as signed. Thank
you very much; I'll fix that.
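Something like this should work instead (a rough sketch, untested),
using the same wrap-safe idiom as entity_before():

static inline long compare_vruntime(u64 *a, u64 *b)
{
#if __BITS_PER_LONG >= 64
        /* long is 64 bits here, so the signed difference fits. */
        return (long)(s64)(*a - *b);
#else
        /*
         * Keys are non-unique and this is only used for inserts, so
         * returning positive exactly when a sorts after b is enough.
         */
        return (long)((s64)(*a - *b) > 0);
#endif
}

Taking Peter's example (a = 10, b = ULLONG_MAX - 10), *a - *b wraps
around to 21, so the signed difference is positive and a correctly
sorts after b.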
Also, to address why we're not using entity_before (or a less()
function) directly, there are two main reasons (one of which doesn't
even affect CFS). The first is that an "is equal" evaluation would
also be required for insertions in trees with unique keys, as well as
for all lookups. This doesn't affect CFS because it isn't doing
lookups (it only cares about the leftmost node) and duplicate keys are
allowed.
The second is that the compare function is evaluated only once, by
just returning a diff. This *would* have a greater performance benefit
on x86 if only gcc were willing to do the cmp or sub operation and
then use the CPU's zero & negative flags to branch. Instead, it
prefers to do a sub (to subtract the values) and then cmp the result
with zero. That's only one extra instruction in this case, but when
you need to use the (a > b ? 1 : (a < b ? -1 : 0)) construct, it's
worse. Off topic, but something I wanted to mention in light of my
"this is hacky" comment above.
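To illustrate (a contrived sketch with made-up names, not the actual
generic rbtree code), a diff-style compare is evaluated once per node
visited during an insert:

        long diff = compare_vruntime(&se->vruntime, &entry->vruntime);

        if (diff > 0)
                link = &parent->rb_right;
        else
                link = &parent->rb_left;

whereas a tree with unique keys (or one that supports lookups) built
on only a less()-style function has to call it twice per node to
distinguish the three outcomes:

        if (less(&key, &entry->key))
                link = &parent->rb_left;
        else if (less(&entry->key, &key))
                link = &parent->rb_right;
        else
                return entry;   /* keys are equal: found / duplicate */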
I guess I just need to "get off of my duff", put together a succinct
test case, and file a gcc bug report for this.
Daniel