Message-ID: <6967cd38.050a0220.58bed.0001.GAE@google.com>
Date: Wed, 14 Jan 2026 09:07:04 -0800
From: syzbot ci <syzbot+ci7d2110b831be06f6@...kaller.appspotmail.com>
To: akpm@...ux-foundation.org, apais@...ux.microsoft.com,
axelrasmussen@...gle.com, cgroups@...r.kernel.org, chengming.zhou@...ux.dev,
chenridong@...wei.com, chenridong@...weicloud.com, david@...nel.org,
hamzamahfooz@...ux.microsoft.com, hannes@...xchg.org, harry.yoo@...cle.com,
hughd@...gle.com, imran.f.khan@...cle.com, kamalesh.babulal@...cle.com,
lance.yang@...ux.dev, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
lorenzo.stoakes@...cle.com, mhocko@...e.com, mkoutny@...e.com,
muchun.song@...ux.dev, nphamcs@...il.com, qi.zheng@...ux.dev,
roman.gushchin@...ux.dev, shakeel.butt@...ux.dev, songmuchun@...edance.com,
weixugc@...gle.com, yosry.ahmed@...ux.dev, yuanchu@...gle.com,
zhengqi.arch@...edance.com, ziy@...dia.com
Cc: syzbot@...ts.linux.dev, syzkaller-bugs@...glegroups.com
Subject: [syzbot ci] Re: Eliminate Dying Memory Cgroup
syzbot ci has tested the following series
[v3] Eliminate Dying Memory Cgroup
https://lore.kernel.org/all/cover.1768389889.git.zhengqi.arch@bytedance.com
* [PATCH v3 01/30] mm: memcontrol: remove dead code of checking parent memory cgroup
* [PATCH v3 02/30] mm: workingset: use folio_lruvec() in workingset_refault()
* [PATCH v3 03/30] mm: rename unlock_page_lruvec_irq and its variants
* [PATCH v3 04/30] mm: vmscan: prepare for the refactoring the move_folios_to_lru()
* [PATCH v3 05/30] mm: vmscan: refactor move_folios_to_lru()
* [PATCH v3 06/30] mm: memcontrol: allocate object cgroup for non-kmem case
* [PATCH v3 07/30] mm: memcontrol: return root object cgroup for root memory cgroup
* [PATCH v3 08/30] mm: memcontrol: prevent memory cgroup release in get_mem_cgroup_from_folio()
* [PATCH v3 09/30] buffer: prevent memory cgroup release in folio_alloc_buffers()
* [PATCH v3 10/30] writeback: prevent memory cgroup release in writeback module
* [PATCH v3 11/30] mm: memcontrol: prevent memory cgroup release in count_memcg_folio_events()
* [PATCH v3 12/30] mm: page_io: prevent memory cgroup release in page_io module
* [PATCH v3 13/30] mm: migrate: prevent memory cgroup release in folio_migrate_mapping()
* [PATCH v3 14/30] mm: mglru: prevent memory cgroup release in mglru
* [PATCH v3 15/30] mm: memcontrol: prevent memory cgroup release in mem_cgroup_swap_full()
* [PATCH v3 16/30] mm: workingset: prevent memory cgroup release in lru_gen_eviction()
* [PATCH v3 17/30] mm: thp: prevent memory cgroup release in folio_split_queue_lock{_irqsave}()
* [PATCH v3 18/30] mm: zswap: prevent memory cgroup release in zswap_compress()
* [PATCH v3 19/30] mm: workingset: prevent lruvec release in workingset_refault()
* [PATCH v3 20/30] mm: zswap: prevent lruvec release in zswap_folio_swapin()
* [PATCH v3 21/30] mm: swap: prevent lruvec release in lru_gen_clear_refs()
* [PATCH v3 22/30] mm: workingset: prevent lruvec release in workingset_activation()
* [PATCH v3 23/30] mm: do not open-code lruvec lock
* [PATCH v3 24/30] mm: memcontrol: prepare for reparenting LRU pages for lruvec lock
* [PATCH v3 25/30] mm: vmscan: prepare for reparenting traditional LRU folios
* [PATCH v3 26/30] mm: vmscan: prepare for reparenting MGLRU folios
* [PATCH v3 27/30] mm: memcontrol: refactor memcg_reparent_objcgs()
* [PATCH v3 28/30] mm: memcontrol: prepare for reparenting state_local
* [PATCH v3 29/30] mm: memcontrol: eliminate the problem of dying memory cgroup for LRU folios
* [PATCH v3 30/30] mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance helpers
and found the following issue:
UBSAN: array-index-out-of-bounds in reparent_memcg_lruvec_state_local
Full report is available here:
https://ci.syzbot.org/series/45c0b58d-255a-4579-9880-497bdbd4fb99
***
UBSAN: array-index-out-of-bounds in reparent_memcg_lruvec_state_local
tree: linux-next
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base: b775e489bec70895b7ef6b66927886bbac79598f
arch: amd64
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config: https://ci.syzbot.org/builds/4d8819ab-0f94-42e8-bd70-87c7e83c37d2/config
syz repro: https://ci.syzbot.org/findings/7850f5dd-4ac7-4b74-85ff-a75ddddebbee/syz_repro
------------[ cut here ]------------
UBSAN: array-index-out-of-bounds in mm/memcontrol.c:530:3
index 33 is out of range for type 'long[33]'
CPU: 1 UID: 0 PID: 31 Comm: kworker/1:1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: cgroup_offline css_killed_work_fn
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
ubsan_epilogue+0xa/0x30 lib/ubsan.c:233
__ubsan_handle_out_of_bounds+0xe8/0xf0 lib/ubsan.c:455
reparent_memcg_lruvec_state_local+0x34f/0x460 mm/memcontrol.c:530
reparent_memcg1_lruvec_state_local+0xa7/0xc0 mm/memcontrol-v1.c:1917
reparent_state_local mm/memcontrol.c:242 [inline]
memcg_reparent_objcgs mm/memcontrol.c:299 [inline]
mem_cgroup_css_offline+0xc7c/0xc90 mm/memcontrol.c:4054
offline_css kernel/cgroup/cgroup.c:5760 [inline]
css_killed_work_fn+0x12f/0x570 kernel/cgroup/cgroup.c:6055
process_one_work+0x949/0x15a0 kernel/workqueue.c:3279
process_scheduled_works kernel/workqueue.c:3362 [inline]
worker_thread+0x9af/0xee0 kernel/workqueue.c:3443
kthread+0x388/0x470 kernel/kthread.c:467
ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
---[ end trace ]---
Kernel panic - not syncing: UBSAN: panic_on_warn set ...
CPU: 1 UID: 0 PID: 31 Comm: kworker/1:1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: cgroup_offline css_killed_work_fn
Call Trace:
<TASK>
vpanic+0x1e0/0x670 kernel/panic.c:490
panic+0xc5/0xd0 kernel/panic.c:627
check_panic_on_warn+0x89/0xb0 kernel/panic.c:377
__ubsan_handle_out_of_bounds+0xe8/0xf0 lib/ubsan.c:455
reparent_memcg_lruvec_state_local+0x34f/0x460 mm/memcontrol.c:530
reparent_memcg1_lruvec_state_local+0xa7/0xc0 mm/memcontrol-v1.c:1917
reparent_state_local mm/memcontrol.c:242 [inline]
memcg_reparent_objcgs mm/memcontrol.c:299 [inline]
mem_cgroup_css_offline+0xc7c/0xc90 mm/memcontrol.c:4054
offline_css kernel/cgroup/cgroup.c:5760 [inline]
css_killed_work_fn+0x12f/0x570 kernel/cgroup/cgroup.c:6055
process_one_work+0x949/0x15a0 kernel/workqueue.c:3279
process_scheduled_works kernel/workqueue.c:3362 [inline]
worker_thread+0x9af/0xee0 kernel/workqueue.c:3443
kthread+0x388/0x470 kernel/kthread.c:467
ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
</TASK>
Kernel Offset: disabled
Rebooting in 86400 seconds..
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@...kaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@...glegroups.com.