Message-ID: <20111103081835.GA9330@elte.hu>
Date: Thu, 3 Nov 2011 09:18:35 +0100
From: Ingo Molnar <mingo@...e.hu>
To: "Artem S. Tashkinov" <t.artem@...os.com>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>
Subject: Re: HT (Hyper Threading) aware process scheduling doesn't work as it
should
( Sorry about the delay in the reply - folks are returning from and
recovering from the Kernel Summit ;-) I've extended the Cc: list.
Please Cc: scheduler folks when reporting bugs, next time around. )
* Artem S. Tashkinov <t.artem@...os.com> wrote:
> Hello,
>
> It's known that if you want to reach maximum performance on
> HT-enabled Intel CPUs you should distribute the load evenly between
> physical cores first, and only when you have loaded all of them
> should you load the remaining virtual cores.
>
> For example, if you have 4 physical cores and 8 virtual CPUs, and
> just four tasks consuming 100% of CPU time, you should load each of
> the four CPU pairs:
>
> VCPUs: {1,2} - one task running
>
> VCPUs: {3,4} - one task running
>
> VCPUs: {5,6} - one task running
>
> VCPUs: {7,8} - one task running
>
> It's absolutely detrimental to performance to bind two tasks to
> e.g. two physical cores {1,2} and {3,4}, and then the remaining two
> tasks both to e.g. the third core {5,6}:
>
> VCPUs: {1,2} - one task running
>
> VCPUs: {3,4} - one task running
>
> VCPUs: {5,6} - *two* tasks running
>
> VCPUs: {7,8} - no tasks running
>
> I've found out that even on Linux 3.0.8 the process scheduler
> doesn't correctly distribute the load amongst virtual CPUs. E.g. on
> a 4-core system (8 virtual CPUs total) the scheduler often runs two
> of the four tasks on the same physical CPU.
>
> Maybe I shouldn't trust top/htop output on this matter, but the
> same test carried out on Microsoft Windows XP shows that it indeed
> distributes the load correctly, running tasks on different physical
> cores whenever possible.
>
> Any thoughts or comments? I think this is quite a serious problem.
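( To double-check which virtual CPUs are really HT siblings - rather
than trusting top/htop numbering - the kernel exports its own topology
via sysfs. A minimal sketch, illustrative and not from the original
report; note that Linux numbers CPUs from 0, not 1: )

#include <stdio.h>

int main(void)
{
	char path[128], buf[64];
	int cpu;

	/* Assumes at most 8 virtual CPUs, as in the report above. */
	for (cpu = 0; cpu < 8; cpu++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			break;	/* fewer CPUs present than assumed */
		if (fgets(buf, sizeof(buf), f))
			printf("cpu%d siblings: %s", cpu, buf);
		fclose(f);
	}
	return 0;
}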
If sched_mc is set to zero then this looks like a serious load
balancing bug - you are perfectly right that we should balance
between physical packages first, and ending up with the kind of
asymmetry you describe for any observable length of time is a bug.
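( The current setting can be read from sysfs; a minimal sketch,
assuming the knob location on ~v3.0 kernels built with
CONFIG_SCHED_MC: )

#include <stdio.h>

int main(void)
{
	/* Path is an assumption based on ~v3.0 kernels. */
	FILE *f = fopen("/sys/devices/system/cpu/sched_mc_power_savings", "r");
	int val;

	if (f && fscanf(f, "%d", &val) == 1)
		printf("sched_mc_power_savings = %d\n", val);
	else
		perror("sched_mc_power_savings");
	if (f)
		fclose(f);
	return 0;
}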
You have not outlined your exact workload - do you run a simple
CPU-consuming loop with no sleeping done whatsoever, or something
more complex?
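( For reference, a minimal "pure CPU hog" of the kind meant above -
an illustrative sketch, not the reporter's actual test: it forks N
tasks that spin without ever sleeping, so their placement can be
watched in top/htop: )

#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int i, ntasks = (argc > 1) ? atoi(argv[1]) : 4;

	for (i = 0; i < ntasks; i++) {
		if (fork() == 0) {
			volatile unsigned long x = 0;

			for (;;)	/* burn CPU, never sleep */
				x++;
		}
	}
	pause();	/* parent idles; kill the process group to stop */
	return 0;
}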
Peter, Paul, Mike, any ideas?
Thanks,
Ingo