Message-ID: <f480049b-cf65-4a39-a314-b6deecfa3883@oracle.com>
Date: Tue, 30 Jul 2024 20:28:03 +0530
From: Kamalesh Babulal <kamalesh.babulal@...cle.com>
To: Chen Ridong <chenridong@...wei.com>, tj@...nel.org,
        lizefan.x@...edance.com, hannes@...xchg.org, longman@...hat.com,
        adityakali@...gle.com, sergeh@...nel.org
Cc: bpf@...r.kernel.org, cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] cgroup/cpuset: fix panic caused by partcmd_update



On 7/30/24 3:21 PM, Chen Ridong wrote:
> We find a bug as below:
> BUG: unable to handle page fault for address: 00000003
> PGD 0 P4D 0
> Oops: 0000 [#1] PREEMPT SMP NOPTI
> CPU: 3 PID: 358 Comm: bash Tainted: G        W I        6.6.0-10893-g60d6
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/4
> RIP: 0010:partition_sched_domains_locked+0x483/0x600
> Code: 01 48 85 d2 74 0d 48 83 05 29 3f f8 03 01 f3 48 0f bc c2 89 c0 48 9
> RSP: 0018:ffffc90000fdbc58 EFLAGS: 00000202
> RAX: 0000000100000003 RBX: ffff888100b3dfa0 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000002fe80
> RBP: ffff888100b3dfb0 R08: 0000000000000001 R09: 0000000000000000
> R10: ffffc90000fdbcb0 R11: 0000000000000004 R12: 0000000000000002
> R13: ffff888100a92b48 R14: 0000000000000000 R15: 0000000000000000
> FS:  00007f44a5425740(0000) GS:ffff888237d80000(0000) knlGS:0000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000100030973 CR3: 000000010722c000 CR4: 00000000000006e0
> Call Trace:
>  <TASK>
>  ? show_regs+0x8c/0xa0
>  ? __die_body+0x23/0xa0
>  ? __die+0x3a/0x50
>  ? page_fault_oops+0x1d2/0x5c0
>  ? partition_sched_domains_locked+0x483/0x600
>  ? search_module_extables+0x2a/0xb0
>  ? search_exception_tables+0x67/0x90
>  ? kernelmode_fixup_or_oops+0x144/0x1b0
>  ? __bad_area_nosemaphore+0x211/0x360
>  ? up_read+0x3b/0x50
>  ? bad_area_nosemaphore+0x1a/0x30
>  ? exc_page_fault+0x890/0xd90
>  ? __lock_acquire.constprop.0+0x24f/0x8d0
>  ? __lock_acquire.constprop.0+0x24f/0x8d0
>  ? asm_exc_page_fault+0x26/0x30
>  ? partition_sched_domains_locked+0x483/0x600
>  ? partition_sched_domains_locked+0xf0/0x600
>  rebuild_sched_domains_locked+0x806/0xdc0
>  update_partition_sd_lb+0x118/0x130
>  cpuset_write_resmask+0xffc/0x1420
>  cgroup_file_write+0xb2/0x290
>  kernfs_fop_write_iter+0x194/0x290
>  new_sync_write+0xeb/0x160
>  vfs_write+0x16f/0x1d0
>  ksys_write+0x81/0x180
>  __x64_sys_write+0x21/0x30
>  x64_sys_call+0x2f25/0x4630
>  do_syscall_64+0x44/0xb0
>  entry_SYSCALL_64_after_hwframe+0x78/0xe2
> RIP: 0033:0x7f44a553c887
> 
> It can be reproduced with the following commands:
> cd /sys/fs/cgroup/
> mkdir test
> cd test/
> echo +cpuset > ../cgroup.subtree_control
> echo root > cpuset.cpus.partition
> cat /sys/fs/cgroup/cpuset.cpus.effective
> 0-3
> echo 0-3 > cpuset.cpus   # taking away all cpus from root
> 
> This issue is caused by an incorrect rebuild of the scheduling domains.
> In this scenario, test/cpuset.cpus.partition should become an invalid
> root and should not trigger a rebuild of the scheduling domains. When
> update_parent_effective_cpumask() is called with partcmd_update and
> newmask is not NULL, it should recheck newmask to make sure cpus are
> still available for the parent/cs that has tasks.
> 
> Fixes: 0c7f293efc87 ("cgroup/cpuset: Add cpuset.cpus.exclusive.effective for v2")
> Signed-off-by: Chen Ridong <chenridong@...wei.com>

I tested the patch using the reproducer in the commit message and it
fixes the issue.
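
For reference, this is roughly the reproducer above wrapped into a small
script (the paths and the 0-3 cpu list are taken from the commit message;
the cpu list needs adjusting on a guest with a different topology):

#!/bin/sh
# Reproducer from the commit message, wrapped into a script.
# Assumes cgroup v2 is mounted at /sys/fs/cgroup and cpus 0-3 are online.
cd /sys/fs/cgroup/
mkdir test
cd test/
echo +cpuset > ../cgroup.subtree_control
echo root > cpuset.cpus.partition
cat /sys/fs/cgroup/cpuset.cpus.effective   # 0-3 on a 4-cpu guest
echo 0-3 > cpuset.cpus                     # take away all cpus from root
# Without the fix, the write above oopses in partition_sched_domains_locked();
# with the fix, cpuset.cpus.partition is expected to report an invalid root:
cat cpuset.cpus.partition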

I think we should also Cc: stable, since this fixes a panic.

Tested-by: Kamalesh Babulal <kamalesh.babulal@...cle.com>

-- 
Thanks,
Kamalesh
