Message-ID: <8c826c96-62ec-2f72-c4cb-30139d5639d1@redhat.com>
Date: Fri, 18 Nov 2022 18:32:54 +0100
From: David Hildenbrand <david@...hat.com>
To: Luis Chamberlain <mcgrof@...nel.org>
Cc: Prarit Bhargava <prarit@...hat.com>, pmladek@...e.com,
Petr Pavlu <petr.pavlu@...e.com>,
linux-modules@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] module: Merge same-name module load requests
On 15.11.22 20:29, Luis Chamberlain wrote:
> On Mon, Nov 14, 2022 at 04:45:05PM +0100, David Hildenbrand wrote:
>> Note that I don't think the issue I raised is due to 6e6de3dee51a.
>> I don't have the machine at hand right now. But, again, I doubt this will
>> fix it.
>
> There are *more* modules processed after that commit. That's all. So
> testing would be appreciated.
I just tested that change on top of 6.1.0-rc5+ on that large system
with CONFIG_KASAN_INLINE=y. No change.
[ 207.955184] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.955891] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.956253] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.956461] systemd-udevd: vmalloc error: size 2486272, vm_struct allocation failed, mode:0xcc0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=1-7
[ 207.956573] CPU: 88 PID: 4925 Comm: systemd-udevd Not tainted 6.1.0-rc5+ #4
[ 207.956580] Hardware name: Lenovo ThinkSystem SR950 -[7X12ABC1WW]-/-[7X12ABC1WW]-, BIOS -[PSE130O-1.81]- 05/20/2020
[ 207.956584] Call Trace:
[ 207.956588] <TASK>
[ 207.956593] vmap allocation for size 2490368 failed: use vmalloc=<size> to increase size
[ 207.956593] dump_stack_lvl+0x5b/0x77
[ 207.956613] warn_alloc.cold+0x86/0x195
[ 207.956632] ? zone_watermark_ok_safe+0x2b0/0x2b0
[ 207.956641] ? slab_free_freelist_hook+0x11e/0x1d0
[ 207.956672] ? __get_vm_area_node+0x2a4/0x340
[ 207.956694] __vmalloc_node_range+0xad6/0x11b0
[ 207.956699] ? trace_contention_end+0xda/0x140
[ 207.956715] ? __mutex_lock+0x254/0x1360
[ 207.956740] ? __mutex_unlock_slowpath+0x154/0x600
[ 207.956752] ? bit_wait_io_timeout+0x170/0x170
[ 207.956761] ? vfree_atomic+0xa0/0xa0
[ 207.956775] ? load_module+0x1d8f/0x7ff0
[ 207.956786] module_alloc+0xe7/0x170
[ 207.956802] ? load_module+0x1d8f/0x7ff0
[ 207.956822] load_module+0x1d8f/0x7ff0
[ 207.956876] ? module_frob_arch_sections+0x20/0x20
[ 207.956888] ? ima_post_read_file+0x15a/0x180
[ 207.956904] ? ima_read_file+0x140/0x140
[ 207.956918] ? kernel_read+0x5c/0x140
[ 207.956931] ? security_kernel_post_read_file+0x6d/0xb0
[ 207.956950] ? kernel_read_file+0x21d/0x7d0
[ 207.956971] ? __x64_sys_fspick+0x270/0x270
[ 207.956999] ? __do_sys_finit_module+0xfc/0x180
[ 207.957005] __do_sys_finit_module+0xfc/0x180
[ 207.957012] ? __ia32_sys_init_module+0xa0/0xa0
[ 207.957023] ? __seccomp_filter+0x15e/0xc20
[ 207.957066] ? syscall_trace_enter.constprop.0+0x98/0x230
[ 207.957078] do_syscall_64+0x58/0x80
[ 207.957085] ? asm_exc_page_fault+0x22/0x30
[ 207.957095] ? lockdep_hardirqs_on+0x7d/0x100
[ 207.957103] entry_SYSCALL_64_after_hwframe+0x63/0xcd
I have access to the system for a couple more days, in case there
is anything else I should test.
--
Thanks,
David / dhildenb