Message-ID: <1433696107.29864.51.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Sun, 07 Jun 2015 09:55:07 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Tejun Heo <tj@...nel.org>
Cc: "H. Peter Anvin" <hpa@...or.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [RFC] percpu section full of holes
Hi Tejun
In commit bdf977b37418cdf8a2252504779a7e12a09b7575
("x86, percpu: Collect hot percpu variables into one cacheline")
you wrote that forcing ____cacheline_aligned on current_task
would put all the hot per-cpu variables together.
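For reference, the intent was roughly the following. This is only an
illustrative plain-C sketch, not the actual percpu machinery:

/*
 * Sketch only: cacheline-align the first hot variable and define the
 * other hot ones right after it, so that they all share one 64-byte
 * line, provided the linker keeps the source order.
 */
#define CACHE_LINE 64

struct task_struct;

static struct task_struct *current_task
	__attribute__((aligned(CACHE_LINE)));	/* starts the cache line */
static void *kernel_stack;			/* +8 bytes  */
static char *irq_stack_ptr;			/* +16 bytes */
static unsigned int irq_count = -1;		/* +24 bytes */
static int __preempt_count;			/* +28 bytes */
/* 32 of the 64 bytes used: everything fits in one line, but only if
 * the linker does not reorder the objects. */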
However, this does not seem to be generally true:
nm -v vmlinux
...
000000000000a140 d cpu_loops_per_jiffy
<56 bytes hole :( >
000000000000a180 d current_vcpu
...
000000000000a980 D debug_idt_ctr
000000000000a984 D debug_stack_usage
<hole>
000000000000a9a0 D orig_ist
000000000000a9d8 D fpu_owner_task
000000000000a9e0 D __preempt_count
000000000000a9e4 D irq_count
000000000000a9e8 D irq_stack_ptr
<hole>
000000000000aa00 D current_task
000000000000aa08 D kernel_stack
000000000000aa10 d debug_stack_addr
000000000000aa20 D cpu_hw_events
The compiler/linker do not seem to care about the order of the
definitions in the source file.
$ grep DEFINE_PER_CPU arch/x86/kernel/cpu/common.c
DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
DEFINE_PER_CPU(unsigned long, kernel_stack) =
DEFINE_PER_CPU_FIRST(union irq_stack_union,
DEFINE_PER_CPU(struct task_struct *, current_task) ____cacheline_aligned =
DEFINE_PER_CPU(char *, irq_stack_ptr) =
DEFINE_PER_CPU(unsigned int, irq_count) __visible = -1;
DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);
static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
DEFINE_PER_CPU(struct orig_ist, orig_ist);
static DEFINE_PER_CPU(unsigned long, debug_stack_addr);
DEFINE_PER_CPU(int, debug_stack_usage);
DEFINE_PER_CPU(u32, debug_idt_ctr);
DEFINE_PER_CPU(struct task_struct *, current_task) = &init_task;
DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
DEFINE_PER_CPU(struct task_struct *, fpu_owner_task);
DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) =
DEFINE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);
I wish we had a way to remove the automatic alignment of ELF sections
based on the size of objects/structures.
Why __alignof__ is not respected, I don't know.
The linker propagates the biggest alignment to the various built-in.o
files, and we end up with all these holes.
objdump -h kernel/built-in.o | grep data..percpu
28 .data..percpu 00003ce0 0000000000000000 0000000000000000 0011c940 2**6
283 .data..percpu..shared_aligned 00001720 0000000000000000 0000000000000000 00146e80 2**6
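A tiny userland test seems to show the same effect (demo.c below is a
made-up example, not kernel code): a single 64-byte-aligned object is
enough to give the whole .data..percpu section of that object file a
2**6 alignment.

/* demo.c: one cacheline-aligned object bumps the section alignment */
__attribute__((section(".data..percpu"), aligned(64)))
long aligned_var = 1;

__attribute__((section(".data..percpu")))
int small_var = 2;

$ gcc -c demo.c
$ objdump -h demo.o | grep percpu	# alignment column should read 2**6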
We might add a DEFINE_PER_CPU_SMALL() for objects smaller than
sizeof(long), but this would mean a lot of changes in the tree
(a rough sketch follows below).
And we would still have holes when per-cpu objects are 40 bytes long,
because the 24 bytes after them would go unused.
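Something along these lines, reusing the existing DEFINE_PER_CPU_SECTION
helper (the "..small" subsection name is made up, and the linker script
would need a matching entry in the percpu output section):

/*
 * Hypothetical, not in the tree: keep sub-word-sized per-cpu variables
 * in their own subsection so their small natural alignment is not mixed
 * with cacheline-aligned neighbours.
 */
#define PER_CPU_SMALL_SECTION	"..small"

#define DECLARE_PER_CPU_SMALL(type, name)				\
	DECLARE_PER_CPU_SECTION(type, name, PER_CPU_SMALL_SECTION)

#define DEFINE_PER_CPU_SMALL(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, PER_CPU_SMALL_SECTION)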
Oh well.