Message-ID: <eab6890e2976525235b93e171607d17d.squirrel@webmail.greenhost.nl>
Date: Mon, 20 Feb 2012 03:09:28 +0100
From: "Indan Zupancic" <indan@....nu>
To: "Linus Torvalds" <torvalds@...ux-foundation.org>
Cc: "Michael Neuling" <mikey@...ling.org>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Ingo Molnar" <mingo@...hat.com>, "H. Peter Anvin" <hpa@...or.com>,
x86@...nel.org,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
benh@...nel.crashing.org, anton@...ba.org
Subject: Re: [PATCH 0/2] More i387 state save/restore work
On Mon, February 20, 2012 02:03, Linus Torvalds wrote:
> On Sun, Feb 19, 2012 at 4:53 PM, Michael Neuling <mikey@...ling.org> wrote:
>>
>> Does "2476844 loops in 2 seconds" imply 2476844 context switches in 2
>> sec? With Anton's context_switch [1] benchmark, we don't even hit 100K
>> context switches per sec.
No, it implies 2476844 context switches per second: only the loops in one
process are counted, and each loop takes two context switches, one to switch
away and one to switch back again.
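(For illustration, a minimal sketch of such a pipe ping-pong, not Anton's
actual context_switch.c: only the parent counts round trips, and each counted
round trip costs two context switches when both tasks are pinned to one CPU.)

/*
 * Minimal pipe ping-pong sketch.  Parent and child bounce one byte back
 * and forth over two pipes; only the parent counts iterations, so each
 * counted iteration costs two context switches (away to the child and
 * back) when both tasks are pinned to the same CPU.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");
}

int main(void)
{
	int ping[2], pong[2];
	char c = 0;
	unsigned long loops = 0;
	time_t end;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	pin_to_cpu0();

	if (fork() == 0) {
		/* Child: drop the ends it doesn't use, then echo every byte back. */
		close(ping[1]);
		close(pong[0]);
		for (;;) {
			if (read(ping[0], &c, 1) != 1)
				_exit(0);	/* parent exited, pipe closed */
			if (write(pong[1], &c, 1) != 1)
				_exit(0);
		}
	}

	/* Parent: count round trips for 2 seconds. */
	end = time(NULL) + 2;
	while (time(NULL) < end) {
		if (write(ping[1], &c, 1) != 1 || read(pong[0], &c, 1) != 1)
			break;
		loops++;
	}
	printf("%lu loops in 2 seconds\n", loops);
	return 0;
}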
My numbers for context_switch.c are 418K for no VDSO/FPU and 413K with FPU.
Linus' test program gets:
1050525 loops in 2 seconds with FPU
1150258 loops in 2 seconds with use_math() commented out.
So the overhead of the pipe-based ping-pong seems quite high compared to
sched_yield().
These numbers are for an old Pentium M pinned at 1.4GHz, so getting only
100K context switches per second seems very bad.
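(A sketch in the spirit of Linus' sched_yield() test, not his actual program:
two processes pinned to the same CPU yield to each other, one of them counts
loops, and use_math() is a stand-in that touches the FPU so every switch has
i387 state to save and restore; comment it out for the no-FPU number.)

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static volatile double sink;

static void use_math(void)
{
	/* Any FP use marks the task's FPU state as live. */
	sink = sink * 1.0000001 + 1.0;
}

static void pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
	unsigned long loops = 0;
	time_t end;
	pid_t child;

	pin_to_cpu0();		/* affinity is inherited across fork() */

	child = fork();
	if (child == 0) {
		/* Child: keep yielding so the parent always has someone to switch to. */
		for (;;)
			sched_yield();
	}

	end = time(NULL) + 2;
	while (time(NULL) < end) {
		use_math();	/* comment this out for the no-FPU case */
		sched_yield();
		loops++;
	}
	printf("%lu loops in 2 seconds\n", loops);
	kill(child, SIGKILL);
	return 0;
}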
>>
>> Do you have this test program anywhere?
>
> Here. No guarantees that this is at all sane, it's special-cased code
> literally for testing only this one issue. The only indication I have
> that this works at all is that the numbers did change roughly as
> expected, and the kernel profile changes made sense.
I tested both programs, and your loops-per-second number is half the
context-switch rate reported by vmstat, so it works as expected. I haven't
tested your FPU patches though.
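(As an alternative to eyeballing vmstat, the same number can be sampled from
the "ctxt" line of /proc/stat; read_ctxt() below is just an illustration. The
delta over the run should be roughly twice the reported loop count, plus
background noise.)

#include <stdio.h>
#include <unistd.h>

/* Read the system-wide context switch count from /proc/stat. */
static unsigned long long read_ctxt(void)
{
	char line[256];
	unsigned long long ctxt = 0;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "ctxt %llu", &ctxt) == 1)
			break;
	fclose(f);
	return ctxt;
}

int main(void)
{
	unsigned long long before = read_ctxt();

	sleep(2);	/* run the benchmark in another terminal while this measures */
	printf("context switches in 2s: %llu\n", read_ctxt() - before);
	return 0;
}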
Greetings,
Indan