Message-ID: <028ffb37-cbea-47fc-804b-a296e456682d.ydzhang@linux.alibaba.com>
Date: Wed, 12 Jul 2023 22:48:07 +0800
From: "wardenjohn" <ydzhang@...ux.alibaba.com>
To: "Josh Poimboeuf" <jpoimboe@...nel.org>
Cc: "Bagas Sanjaya" <bagasdotme@...il.com>, "jikos" <jikos@...nel.org>,
"mbenes" <mbenes@...e.cz>, "pmladek" <pmladek@...e.com>,
"joe.lawrence" <joe.lawrence@...hat.com>,
"Kernel Live Patching" <live-patching@...r.kernel.org>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: Re: Fix MAX_STACK_ENTRIES from 100 to 32
That is a powerful and convincing explanation regarding my patch.
Thanks for patiently answering my suggestion. :)
Wardenjohn
----------------------------------------------------------------
From:Josh Poimboeuf <jpoimboe@...nel.org>
Send Time:Tue, Jul 11, 2023 01:13
To:wardenjohn <ydzhang@...ux.alibaba.com>
Cc:Bagas Sanjaya <bagasdotme@...il.com>; jikos <jikos@...nel.org>; mbenes <mbenes@...e.cz>; pmladek <pmladek@...e.com>; joe.lawrence <joe.lawrence@...hat.com>; Kernel Live Patching <live-patching@...r.kernel.org>; Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject:Re: Fix MAX_STACK_ENTRIES from 100 to 32
On Sun, Jul 09, 2023 at 09:09:14PM +0800, wardenjohn wrote:
> OK, I will resubmit the patch by git-send-email(1) instead. :)
>
> But I want to ask: how can I provide the link to the discussion?
> And what is a v2 patch?
> I am sorry; this is my first time joining a kernel discussion.
>
> I am looking forward to your guidance. Thanks!
>
> The reason for reducing MAX_STACK_ENTRIES from 100 to 32 is as follows:
> In my daily work, I have found that function call stacks never reach a depth of 32.
> Therefore, sizing the array at 100 may waste kernel memory, so I suggest
> reducing the number of stack entries from 100 to 32 (see the sketch after
> the quoted trace below).
>
> Here is an example of the call trace:
> [20409.505602] [<ffffffff81168861>] group_sched_out+0x61/0xb0
> [20409.514791] [<ffffffff81168bfd>] ctx_sched_out+0xad/0xf0
> [20409.520307] [<ffffffff8116a03d>] __perf_install_in_context+0xbd/0x1b0
> [20409.526952] [<ffffffff811649b0>] remote_function+0x40/0x50
> [20409.532644] [<ffffffff810f1666>] generic_exec_single+0x156/0x1a0
> [20409.538864] [<ffffffff81164970>] ? perf_event_set_output+0x190/0x190
> [20409.545425] [<ffffffff810f170f>] smp_call_function_single+0x5f/0xa0
> [20409.551895] [<ffffffff811f5e70>] ? alloc_file+0xa0/0xf0
> [20409.557326] [<ffffffff81163523>] task_function_call+0x53/0x80
> [20409.563274] [<ffffffff81169f80>] ? perf_cpu_hrtimer_handler+0x1b0/0x1b0
> [20409.570089] [<ffffffff81166688>] perf_install_in_context+0x78/0x120
> [20409.576558] [<ffffffff8116da54>] SYSC_perf_event_open+0x794/0xa40
> [20409.582852] [<ffffffff8116e169>] SyS_perf_event_open+0x9/0x10
> [20409.588803] [<ffffffff8166bf3d>] system_call_fastpath+0x16/0x1b
> [20409.594926] [<ffffffff8166bddd>] ? system_call_after_swapgs+0xca/0x214
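
For context, MAX_STACK_ENTRIES sizes the buffer that the livepatch
transition code fills when checking whether a to-be-patched function is
on a task's stack. A minimal sketch, simplified from
kernel/livepatch/transition.c (error paths and the per-function address
checks are omitted, so this is illustrative rather than the exact code):

#define MAX_STACK_ENTRIES  100	/* the constant under discussion */

/*
 * Simplified sketch: save the task's stack trace into entries[] and
 * (in the omitted part) check it against the patched functions. Note
 * that entries[] is a single static array, so shrinking it saves
 * memory once for the whole system, not once per task.
 */
static int klp_check_stack(struct task_struct *task, const char **oldname)
{
	static unsigned long entries[MAX_STACK_ENTRIES];
	int ret, nr_entries;

	/* Fails if a reliable trace cannot be obtained. */
	ret = stack_trace_save_tsk_reliable(task, entries, ARRAY_SIZE(entries));
	if (ret < 0)
		return -EINVAL;
	nr_entries = ret;

	/* ... walk the patch's functions and check entries[0..nr_entries) ... */

	return 0;
}
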
Actually, when I booted with CONFIG_PREEMPT+CONFIG_LOCKDEP, I saw the
number of stack entries go higher than 60. I didn't do extensive
testing, so it might go even higher than that.
I'd rather leave it at 100 for now, as we currently have no way of
reporting if the limit is getting hit across a variety of configs and
usage scenarios. And any memory savings would be negligible anyway.
--
Josh