Message-ID: <495FD171.4030502@sgi.com>
Date: Sat, 03 Jan 2009 12:58:25 -0800
From: Mike Travis <travis@....com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Ingo Molnar <mingo@...e.hu>, Rusty Russell <rusty@...tcorp.com.au>,
linux-kernel@...r.kernel.org, Jack Steiner <steiner@....com>
Subject: Re: [git pull] cpus4096 tree, part 3
Linus Torvalds wrote:
>
> On Sat, 3 Jan 2009, Ingo Molnar wrote:
>> ok. The pending regressions are all fixed now, and i've just finished my
>> standard tests on the latest tree and all the tests passed fine.
>
> Ok, pulled and pushed out.
>
> Has anybody looked at what the stack size is with MAXSMP set with an
> allyesconfig? And what areas are still problematic, if any? Are we going
> to have some code-paths that still essentially have 1kB+ of stack space
> just because they haven't been converted and still have the cpu mask on
> stack?
>
> Linus
Hi Linus,
Yes, I do periodically collect stats for memory and stack usage. Here is
a recent stack summary (not "allyes", but with nearly all trace/debug
options enabled). It shows the stack growth from a 128 NR_CPUS config to
a MAXSMP (4096 NR_CPUS) config. Most of the changes to correct these
"stack hogs" have been sitting in a queue until the changes affecting
non-x86 architectures were accepted (which you just did), though some
are due to new code from the recent merge activity.
Rusty has introduced a config option that disables the old cpumask_t,
which really highlights where the remaining offenders are. Ultimately,
that should prevent any new stack hogs from being introduced, but it
won't be settable until the 2.6.30 time frame.
====== Stack (-l 500)
1 - 128-defconfig
2 - 4k-defconfig
.1. .2. ..final..
0 +1640 1640 . acpi_cpufreq_target
0 +1368 1368 . cpufreq_add_dev
0 +1344 1344 . store_scaling_governor
0 +1328 1328 . store_scaling_min_freq
0 +1328 1328 . store_scaling_max_freq
0 +1328 1328 . cpufreq_update_policy
0 +1328 1328 . cpu_has_cpufreq
0 +1048 1048 . get_cur_val
0 +1032 1032 . local_cpus_show
0 +1032 1032 . local_cpulist_show
0 +1024 1024 . pci_bus_show_cpuaffinity
0 +808 808 . cpuset_write_resmask
0 +736 736 . update_flag
0 +648 648 . init_intel_cacheinfo
0 +640 640 . cpuset_attach
0 +584 584 . shmem_getpage
0 +584 584 . __percpu_alloc_mask
0 +552 552 . smp_call_function_many
0 +536 536 . pci_device_probe
0 +536 536 . native_flush_tlb_others
0 +536 536 . cpuset_common_file_read
0 +520 520 . show_related_cpus
0 +520 520 . show_affected_cpus
0 +520 520 . get_measured_perf
0 +520 520 . flush_tlb_page
0 +520 520 . cpuset_can_attach
0 +512 512 . flush_tlb_mm
0 +512 512 . flush_tlb_current_task
0 +512 512 . find_lowest_rq
0 +512 512 . acpi_processor_ffh_cstate_probe
====== Text/Data ()
Overall memory reservation looks like this:
.1. .2. ..final..
5799936 +4096 5804032 +0.07% TextSize
3772416 +139264 3911680 +3.69% DataSize
8822784 +1234944 10057728 +13% BssSize
2445312 +794624 3239936 +32% InitSize
1884160 +4096 1888256 +0.22% PerCPU
143360 +708608 851968 +494% OtherSize
22867968 +2885632 25753600 +12% Totals
I will update these with the latest changes (and use an allyesconfig
build) and post them again soon.
Thanks,
Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/