Message-ID: <20190421211007.GA30444@avx2>
Date: Mon, 22 Apr 2019 00:10:07 +0300
From: Alexey Dobriyan <adobriyan@...il.com>
To: hpa@...or.com
Cc: Ingo Molnar <mingo@...nel.org>, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, linux-kernel@...r.kernel.org,
x86@...nel.org
Subject: Re: [PATCH] x86_64: uninline TASK_SIZE
On Sun, Apr 21, 2019 at 01:07:08PM -0700, hpa@...or.com wrote:
> On April 21, 2019 11:28:42 AM PDT, Ingo Molnar <mingo@...nel.org> wrote:
> >
> >* Alexey Dobriyan <adobriyan@...il.com> wrote:
> >
> >> TASK_SIZE macro is quite deceptive: it looks like a constant but in
> >> fact compiles to 50+ bytes.
> >>
> >> Space savings on x86_64 defconfig:
> >>
> >> add/remove: 1/0 grow/shrink: 3/24 up/down: 77/-2247 (-2170)
> >> Function old new delta
> >> _task_size - 52 +52
> >> mpol_shared_policy_init 344 363 +19
> >> shmem_get_unmapped_area 92 97 +5
> >> __rseq_handle_notify_resume.cold 34 35 +1
> >> copy_from_user_nmi 123 113 -10
> >> mmap_address_hint_valid 92 56 -36
> >> arch_get_unmapped_area_topdown 471 435 -36
> >> tlb_gather_mmu 164 126 -38
> >> hugetlb_get_unmapped_area 774 736 -38
> >> __create_xol_area 497 458 -39
> >> arch_tlb_gather_mmu 160 120 -40
> >> setup_new_exec 380 336 -44
> >> __x64_sys_mlockall 378 333 -45
> >> __ia32_sys_mlockall 378 333 -45
> >> tlb_flush_mmu 235 189 -46
> >> unmap_page_range 2098 2048 -50
> >> copy_mount_options 518 465 -53
> >> __get_user_pages 1737 1675 -62
> >> get_unmapped_area 270 204 -66
> >> perf_prepare_sample 1176 1098 -78
> >> perf_callchain_user 549 469 -80
> >> mremap_to.isra 545 457 -88
> >> arch_tlb_finish_mmu 394 305 -89
> >> __do_munmap 1039 927 -112
> >> elf_map 527 409 -118
> >> prctl_set_mm 1509 1335 -174
> >> __rseq_handle_notify_resume 1116 906 -210
> >> load_elf_binary 11761 11111 -650
> >> Total: Before=14121337, After=14119167, chg -0.02%
> >>
> >> Signed-off-by: Alexey Dobriyan <adobriyan@...il.com>
> >> ---
> >>
> >> arch/x86/include/asm/processor.h | 4 ++--
> >> arch/x86/kernel/Makefile | 1 +
> >> arch/x86/kernel/task_size_64.c | 9 +++++++++
> >> 3 files changed, 12 insertions(+), 2 deletions(-)
> >>
> >> --- a/arch/x86/include/asm/processor.h
> >> +++ b/arch/x86/include/asm/processor.h
> >> @@ -887,8 +887,8 @@ static inline void spin_lock_prefetch(const void *x)
> >>
> >> #define TASK_SIZE_LOW (test_thread_flag(TIF_ADDR32) ? \
> >> IA32_PAGE_OFFSET : DEFAULT_MAP_WINDOW)
> >> -#define TASK_SIZE (test_thread_flag(TIF_ADDR32) ? \
> >> - IA32_PAGE_OFFSET : TASK_SIZE_MAX)
> >> +unsigned long _task_size(void);
> >> +#define TASK_SIZE _task_size()
> >> #define TASK_SIZE_OF(child) ((test_tsk_thread_flag(child, TIF_ADDR32)) ? \
> >> IA32_PAGE_OFFSET : TASK_SIZE_MAX)
> >>
> >> --- a/arch/x86/kernel/Makefile
> >> +++ b/arch/x86/kernel/Makefile
> >> @@ -46,6 +46,7 @@ CFLAGS_irq.o := -I$(src)/../include/asm/trace
> >>
> >> obj-y := process_$(BITS).o signal.o
> >> obj-$(CONFIG_COMPAT) += signal_compat.o
> >> +obj-$(CONFIG_X86_64) += task_size_64.o
> >> obj-y += traps.o idt.o irq.o irq_$(BITS).o dumpstack_$(BITS).o
> >> obj-y += time.o ioport.o dumpstack.o nmi.o
> >> obj-$(CONFIG_MODIFY_LDT_SYSCALL) += ldt.o
> >> new file mode 100644
> >> --- /dev/null
> >> +++ b/arch/x86/kernel/task_size_64.c
> >> @@ -0,0 +1,9 @@
> >> +#include <linux/export.h>
> >> +#include <linux/sched.h>
> >> +#include <linux/thread_info.h>
> >> +
> >> +unsigned long _task_size(void)
> >> +{
> >> + return test_thread_flag(TIF_ADDR32) ? IA32_PAGE_OFFSET : TASK_SIZE_MAX;
> >> +}
> >> +EXPORT_SYMBOL(_task_size);
> >
> >Good idea - but instead of adding yet another compilation unit, why not
> >stick _task_size() into arch/x86/kernel/process_64.c, which is the
> >canonical place for process management related arch functions?
> >
> >Thanks,
> >
> > Ingo
>
> Better yet... since TIF_ADDR32 isn't something that changes randomly, perhaps this should be a separate variable?
Maybe. My only thought was to put every 32-bit-related flag under
CONFIG_COMPAT to further eradicate bloat (and to force everyone else
to keep an eye on it, ha-ha).