Message-ID: <51D633DB.5010508@linux.vnet.ibm.com>
Date: Fri, 05 Jul 2013 10:47:55 +0800
From: Michael Wang <wangyun@...ux.vnet.ibm.com>
To: Mike Galbraith <efault@....de>
CC: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>, Alex Shi <alex.shi@...el.com>,
Namhyung Kim <namhyung@...nel.org>,
Paul Turner <pjt@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
Ram Pai <linuxram@...ibm.com>
Subject: Re: [PATCH] sched: smart wake-affine
On 07/04/2013 06:33 PM, Mike Galbraith wrote:
[snip]
>> Well, it seems we still have plenty of follow-up research work to do
>> after fixing the issue ;-)
>
> Yeah. Like how to exterminate the plus signs; they munch cache lines
> and have a general tendency to negatively impact benchmarks.
>
> Q6600 box, hackbench -l 1000
>                     run 1  run 2  run 3  run 4  run 5    avg  ratio
> 3.10.0-regress      2.293  2.297  2.313  2.291  2.295  2.297  1.000
> 3.10.0-regressx     2.560  2.524  2.427  2.599  2.602  2.542  1.106
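(Reading the table above: each row looks like five hackbench runs, their
mean, and the mean's ratio against the baseline row, so 2.542 / 2.297 is
about 1.106, i.e. roughly a 10% slowdown, assuming I am reading the
columns right.)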
Wow, I used to think such an issue would be very hard to track with
benchmarks. Is this regression stable?

My tests could not get a stable difference: a little loss this time, a
little win the next. It keeps floating, perhaps because of different
chip cache behaviour, I suppose...
>
> pahole said...
>
> marge:/usr/local/src/kernel/linux-3.x.git # tail virgin
> long unsigned int timer_slack_ns; /* 1512 8 */
> long unsigned int default_timer_slack_ns; /* 1520 8 */
> atomic_t ptrace_bp_refcnt; /* 1528 4 */
>
> /* size: 1536, cachelines: 24, members: 125 */
> /* sum members: 1509, holes: 6, sum holes: 23 */
> /* bit holes: 1, sum bit holes: 26 bits */
> /* padding: 4 */
> /* paddings: 1, sum paddings: 4 */
> };
>
> marge:/usr/local/src/kernel/linux-3.x.git # tail michael
> long unsigned int default_timer_slack_ns; /* 1552 8 */
> atomic_t ptrace_bp_refcnt; /* 1560 4 */
>
> /* size: 1568, cachelines: 25, members: 128 */
> /* sum members: 1533, holes: 8, sum holes: 31 */
> /* bit holes: 1, sum bit holes: 26 bits */
> /* padding: 4 */
> /* paddings: 1, sum paddings: 4 */
> /* last cacheline: 32 bytes */
> };
>
> ...but plugging holes didn't help, moving this/that around didn't
> either, nor did letting pahole go wild to get the line back. It's the
> plus signs, I tell ya, the evil things must die ;-)
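(Side note, for anyone reproducing the dumps above: they look like the
tail of "pahole -C task_struct vmlinux" output saved to a file, which
needs a vmlinux built with debug info. And the hole-plugging Mike tried
is the usual reordering trick; a minimal, stand-alone C illustration of
what pahole reports, with hypothetical member names:

/* On LP64, mixing 4- and 8-byte members leaves padding holes. */
struct padded {
	int  a;		/*  4 bytes, then a 4-byte hole */
	long b;		/*  8 bytes */
	int  c;		/*  4 bytes, then 4 bytes tail padding: 24 total */
};

/* Sorting members by size removes the holes. */
struct reordered {
	long b;		/*  8 bytes */
	int  a;		/*  4 bytes */
	int  c;		/*  4 bytes: 16 bytes total, no holes */
};

As Mike notes, that alone did not win the cache line back here.)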
Hmm... so the new members kicked some tail members onto a new cache
line... or the layout may turn out totally different once the compiler
takes part in it...

It's really hard to estimate the influence, especially while task_struct
keeps changing...

But task_struct really is a little big now; maybe we could move the
'cold' members into a separate structure and just keep a pointer to it,
as sketched below. That might raise the chance of cache hits on the hot
members, but it's platform-dependent and not so easy to verify...
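A minimal sketch of the idea, not a real patch; which fields count as
'cold' is hypothetical here and would need profiling:

/* Rarely-touched fields move out of line behind a pointer. */
struct task_cold {
	unsigned long timer_slack_ns;		/* examples only */
	unsigned long default_timer_slack_ns;
};

struct task_struct {
	/* ... hot scheduler fields stay inline ... */
	struct task_cold *cold;	/* one pointer inline; cold paths pay
				 * one extra dereference */
};

The trade-off is the extra dereference (and a separate allocation at
fork time) on the cold paths, in exchange for the hot fields packing
into fewer cache lines.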
Regards,
Michael Wang
>
> -Mike
>