Message-ID: <475708A7.4030708@jlab.org>
Date: Wed, 05 Dec 2007 15:23:03 -0500
From: Jie Chen <chen@...b.org>
To: Ingo Molnar <mingo@...e.hu>
CC: Simon Holm Thøgersen <odie@...aau.dk>,
Eric Dumazet <dada1@...mosbay.com>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: Possible bug from kernel 2.6.22 and above, 2.6.24-rc4
Ingo Molnar wrote:
> * Jie Chen <chen@...b.org> wrote:
>
>> Since I am using the affinity flag to bind each thread to a different
>> core, the synchronization overhead should increase as the number of
>> cores/threads increases. But what we observed in the new kernel is the
>> opposite: the barrier overhead for two threads is 8.93 microseconds vs
>> 1.86 microseconds for 8 threads (in the old kernel it is 0.49 vs 1.86).
>> This will confuse most people who study synchronization/communication
>> scalability. I know my test code is not a real-world computation, which
>> would usually use up all the cores. I hope I have explained myself
>> clearly. Thank you very much.
>
> btw., could you try not using the affinity mask and let the scheduler
> manage the spreading of tasks? It generally has better knowledge of how
> tasks interrelate.
>
> Ingo
Hi, Ingo:
I just disabled the affinity mask and reran the test. There was no
significant change for two threads (the barrier overhead is still around
9 microseconds). For 8 threads, the barrier overhead actually drops a
little, which is good. Let me know whether I can be of any help. Thank
you very much.
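
In case it helps to see what I mean by "disabling the affinity mask",
below is a minimal sketch of the kind of test involved: N threads timing
repeated barrier waits, with the per-core pinning optional. This is not
the actual test code behind the numbers quoted in this thread; it is only
an illustration under my own assumptions (pthread_barrier_wait for the
barrier, pthread_setaffinity_np for the pinning, and the iteration count
and "--pin" flag are placeholders).

/*
 * Sketch of a barrier-overhead measurement (illustration only).
 * Build (assumed): gcc -O2 -D_GNU_SOURCE -o barrier_bench barrier_bench.c -lpthread
 * Usage (assumed): ./barrier_bench <nthreads> [--pin]
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define ITERS 100000          /* barrier crossings per thread (placeholder) */

static pthread_barrier_t barrier;
static int pin_threads;       /* 0: let the scheduler place the threads */

struct targ {
    int id;                   /* thread index, also the CPU to pin to */
    double usec;              /* measured average barrier cost */
};

static void *worker(void *p)
{
    struct targ *a = p;
    struct timespec t0, t1;
    int i;

    if (pin_threads) {
        /* Bind this thread to one core, as in the affinity-mask runs. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(a->id, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    pthread_barrier_wait(&barrier);           /* warm-up / start line */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERS; i++)
        pthread_barrier_wait(&barrier);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    a->usec = ((t1.tv_sec - t0.tv_sec) * 1e6 +
               (t1.tv_nsec - t0.tv_nsec) / 1e3) / ITERS;
    return NULL;
}

int main(int argc, char **argv)
{
    int n = (argc > 1) ? atoi(argv[1]) : 2;
    int i;
    pthread_t *tid;
    struct targ *arg;
    double worst = 0.0;

    pin_threads = (argc > 2 && !strcmp(argv[2], "--pin"));
    tid = malloc(n * sizeof(*tid));
    arg = malloc(n * sizeof(*arg));

    pthread_barrier_init(&barrier, NULL, n);

    for (i = 0; i < n; i++) {
        arg[i].id = i;
        pthread_create(&tid[i], NULL, worker, &arg[i]);
    }
    for (i = 0; i < n; i++)
        pthread_join(tid[i], NULL);

    /* Report the slowest thread; that is what the barrier really costs. */
    for (i = 0; i < n; i++)
        if (arg[i].usec > worst)
            worst = arg[i].usec;
    printf("%d threads, %s: %.2f usec per barrier\n",
           n, pin_threads ? "pinned" : "unpinned", worst);

    pthread_barrier_destroy(&barrier);
    free(tid);
    free(arg);
    return 0;
}

The numbers above come from comparing runs with and without the pinning
step, everything else kept the same.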
--
###############################################
Jie Chen
Scientific Computing Group
Thomas Jefferson National Accelerator Facility
12000, Jefferson Ave.
Newport News, VA 23606
(757)269-5046 (office) (757)269-6248 (fax)
chen@...b.org
###############################################