Message-ID: <cf3edd8d0801160838o3edf8885uf0a8a9d2872ee36e@mail.gmail.com>
Date: Wed, 16 Jan 2008 16:38:33 +0000
From: "Colin Fowler" <elethiomel@...il.com>
To: "Ingo Molnar" <mingo@...e.hu>
Cc: linux-kernel@...r.kernel.org,
"Peter Zijlstra" <a.p.zijlstra@...llo.nl>
Subject: Re: Performance loss 2.6.22->2.6.23->2.6.24-rc7 on CPU intensive benchmark on 8 Core Xeon
Hi Ingo,
I have permission for a binary-only release (I mailed the supervisor
immediately after your earlier mail). I'm sure the abstract code
simulating the workload will be alright too, but I need time to put it
together as I'm a bit swamped at the moment. I hope to have it in the
next few days.
regards,
Colin
On Jan 16, 2008 4:19 PM, Ingo Molnar <mingo@...e.hu> wrote:
>
> * Colin Fowler <elethiomel@...il.com> wrote:
>
> > Hi Ingo, I'll need to convince my supervisor first if I can release a
> > binary. Technically anything like this needs to go through our
> > University's "innovations department" and requires lengthy paperwork
> > and NDAs :(.
>
> a binary wouldn't work for me anyway. But you could try to write a
> "workload simulator": just pick out the pthread ops and replace the
> worker functions with some dummy stuff that just touches an array that
> has a similar size to the tiles (in a tight loop). Make sure it has a
> similar context-switch rate and idle percentage to your real workload -
> then send us the .c file. As long as it's a single .c file that runs for
> a few seconds and outputs a precise enough "run time" result, kernel
> developers would pick it up and use it for optimizations. To get the #
> of cpus automatically you can do:
>
> cpus = system("exit `grep processor /proc/cpuinfo | wc -l`");
> cpus = WEXITSTATUS(cpus);
>
> and start as many threads as there are CPUs in the system.
>
> Ingo
>
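
For reference, a minimal sketch along the lines suggested above. The
64 KB tile size, the iteration count and the tile_worker name are
placeholder assumptions, not values from the real renderer, and would
need tuning until the context-switch rate and idle percentage match
the real workload:

/*
 * Minimal workload simulator sketch.
 * TILE_BYTES and ITERATIONS are placeholders and would need tuning
 * so the context-switch rate and idle percentage resemble the real
 * renderer.
 *
 * Build: gcc -O2 -pthread sim.c -o sim
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/wait.h>

#define TILE_BYTES (64 * 1024)   /* placeholder tile size        */
#define ITERATIONS 20000         /* placeholder amount of "work" */

static void *tile_worker(void *arg)
{
	unsigned char *tile = malloc(TILE_BYTES);
	volatile unsigned long sum = 0;  /* keeps the loop from being optimised away */
	int i;

	(void)arg;
	if (!tile)
		return NULL;

	/* dummy work: touch the whole tile in a tight loop */
	for (i = 0; i < ITERATIONS; i++) {
		memset(tile, i & 0xff, TILE_BYTES);
		sum += tile[i % TILE_BYTES];
	}

	free(tile);
	return NULL;
}

int main(void)
{
	/* CPU detection exactly as suggested above */
	int cpus = system("exit `grep processor /proc/cpuinfo | wc -l`");
	struct timeval start, end;
	pthread_t *threads;
	double secs;
	int i;

	cpus = WEXITSTATUS(cpus);
	if (cpus < 1)
		cpus = 1;

	threads = malloc(cpus * sizeof(*threads));

	gettimeofday(&start, NULL);

	/* one worker thread per CPU */
	for (i = 0; i < cpus; i++)
		pthread_create(&threads[i], NULL, tile_worker, NULL);
	for (i = 0; i < cpus; i++)
		pthread_join(threads[i], NULL);

	gettimeofday(&end, NULL);

	/* precise "run time" output so different kernels can be compared */
	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_usec - start.tv_usec) / 1e6;
	printf("run time: %.6f seconds (%d threads)\n", secs, cpus);

	free(threads);
	return 0;
}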