Message-ID: <47B0C1E8.2050809@tmr.com>
Date: Mon, 11 Feb 2008 16:45:12 -0500
From: Bill Davidsen <davidsen@....com>
To: Olof Johansson <olof@...om.net>
CC: Willy Tarreau <w@....eu>, Mike Galbraith <efault@....de>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: Scheduler(?) regression from 2.6.22 to 2.6.24 for short-lived
threads
Olof Johansson wrote:
>> However, I fail to understand the goal of the reproducer. Granted it shows
>> irregularities in the scheduler under such conditions, but what *real*
>> workload would spend its time sequentially creating then immediately killing
>> threads, never using more than 2 at a time ?
>>
>> If this could be turned into a DoS, I could understand, but here it looks
>> a bit pointless :-/
>
> It seems generally unfortunate that it takes longer for a new thread to
> move over to the second cpu even when the first is busy with the original
> thread. I can certainly see cases where this causes suboptimal overall
> system behaviour.
>
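(For reference, the workload pattern the reproducer description above implies is
roughly the following -- this is only a minimal sketch assuming pthread
create/join pairs with at most two threads alive at once, not the actual test
program posted in the thread; iteration counts and the work loop are made up
for illustration:)

/*
 * Sketch of the pattern: the parent stays busy while it sequentially
 * creates and joins short-lived threads, so at most two threads are
 * ever runnable at the same time.
 */
#include <pthread.h>
#include <stdio.h>

static volatile unsigned long sink;

static void burn(unsigned long loops)
{
	unsigned long i;

	for (i = 0; i < loops; i++)
		sink += i;
}

static void *worker(void *arg)
{
	burn(1000000);		/* short-lived: do a little work and exit */
	return NULL;
}

int main(void)
{
	int i;

	for (i = 0; i < 1000; i++) {
		pthread_t t;

		if (pthread_create(&t, NULL, worker, NULL))
			return 1;
		burn(1000000);	/* parent keeps its CPU busy meanwhile */
		pthread_join(t, NULL);
	}
	printf("done\n");
	return 0;
}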
I think the cost of moving to another CPU is really dependent on the CPU
type. On a P4+HT the caches are shared, and moving costs almost nothing
for cache hits, while on CPUs with other cache layouts the migration
cost is higher. Obviously multi-core should be cheaper than
multi-socket, since it avoids the system memory bus, but it can still
get ugly.
I have an IPC test around which showed that: it ran like hell on HT, and
got progressively worse as the cache became less shared. I wonder why the
latest git works so much better?
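(Not the original test, just a rough sketch of that kind of measurement:
two threads pinned to chosen CPUs bounce a byte over a pair of pipes, and
the round-trip time shows how much the cache layout between those two CPUs
matters. The CPU numbers and iteration count below are placeholders.)

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>

static int ping[2], pong[2];

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *echo_thread(void *arg)
{
	char c;

	pin_to_cpu((long)arg);
	/* echo every byte straight back to the other thread */
	while (read(ping[0], &c, 1) == 1)
		if (write(pong[1], &c, 1) != 1)
			break;
	return NULL;
}

int main(int argc, char **argv)
{
	long cpu_a = argc > 1 ? atol(argv[1]) : 0;
	long cpu_b = argc > 2 ? atol(argv[2]) : 1;
	int iters = 100000, i;
	struct timespec t0, t1;
	pthread_t t;
	char c = 'x';

	if (pipe(ping) || pipe(pong))
		return 1;
	pin_to_cpu(cpu_a);
	pthread_create(&t, NULL, echo_thread, (void *)cpu_b);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++) {
		if (write(ping[1], &c, 1) != 1 || read(pong[0], &c, 1) != 1)
			return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	close(ping[1]);		/* let the echo thread's read() return 0 */
	pthread_join(t, NULL);

	printf("cpu %ld <-> cpu %ld: %.0f ns per round trip\n", cpu_a, cpu_b,
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
	        (t1.tv_nsec - t0.tv_nsec)) / iters);
	return 0;
}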
--
Bill Davidsen <davidsen@....com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/