Message-ID: <CACT4Y+ZW8SD3zvf7HTKYch25hev-Egp8fO+nmQ5V+RBBhQ9+DQ@mail.gmail.com>
Date:   Mon, 27 Mar 2017 16:46:58 +0200
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        KVM list <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Steve Rutherford <srutherford@...gle.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Haozhong Zhang <haozhong.zhang@...el.com>,
        James Mattson <jmattson@...gle.com>,
        David Hildenbrand <david@...hat.com>,
        Cornelia Huck <cornelia.huck@...ibm.com>,
        xiaoguangrong.eric@...il.com,
        Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc:     syzkaller <syzkaller@...glegroups.com>
Subject: kvm: use-after-free in srcu_reschedule

Hello,

I've got the following use-after-free report on
linux-next/65b2dc38291f9f27e5ec3b804d6eb3b5f79a3ce4.

==================================================================
BUG: KASAN: use-after-free in debug_spin_unlock
kernel/locking/spinlock_debug.c:97 [inline]
BUG: KASAN: use-after-free in do_raw_spin_unlock+0x2ea/0x320
kernel/locking/spinlock_debug.c:134
Read of size 4 at addr ffff88014158a564 by task kworker/1:1/5712

CPU: 1 PID: 5712 Comm: kworker/1:1 Not tainted 4.11.0-rc3-next-20170324+ #1
Hardware name: Google Google Compute Engine/Google Compute Engine,
BIOS Google 01/01/2011
Workqueue: events_power_efficient process_srcu
Call Trace:
 __dump_stack lib/dump_stack.c:16 [inline]
 dump_stack+0x2fb/0x40f lib/dump_stack.c:52
 print_address_description+0x7f/0x260 mm/kasan/report.c:250
 kasan_report_error mm/kasan/report.c:349 [inline]
 kasan_report.part.3+0x21f/0x310 mm/kasan/report.c:372
 kasan_report mm/kasan/report.c:392 [inline]
 __asan_report_load4_noabort+0x29/0x30 mm/kasan/report.c:392
 debug_spin_unlock kernel/locking/spinlock_debug.c:97 [inline]
 do_raw_spin_unlock+0x2ea/0x320 kernel/locking/spinlock_debug.c:134
 __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:167 [inline]
 _raw_spin_unlock_irq+0x22/0x70 kernel/locking/spinlock.c:199
 spin_unlock_irq include/linux/spinlock.h:349 [inline]
 srcu_reschedule+0x1a1/0x260 kernel/rcu/srcu.c:582
 process_srcu+0x63c/0x11c0 kernel/rcu/srcu.c:600
 process_one_work+0xac0/0x1b00 kernel/workqueue.c:2097
 worker_thread+0x1b4/0x1300 kernel/workqueue.c:2231
 kthread+0x36c/0x440 kernel/kthread.c:231
 ret_from_fork+0x31/0x40 arch/x86/entry/entry_64.S:430

Allocated by task 20961:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
 save_stack+0x43/0xd0 mm/kasan/kasan.c:515
 set_track mm/kasan/kasan.c:527 [inline]
 kasan_kmalloc+0xaa/0xd0 mm/kasan/kasan.c:619
 kmem_cache_alloc_trace+0x10b/0x670 mm/slab.c:3635
 kmalloc include/linux/slab.h:492 [inline]
 kzalloc include/linux/slab.h:665 [inline]
 kvm_arch_alloc_vm include/linux/kvm_host.h:773 [inline]
 kvm_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:610 [inline]
 kvm_dev_ioctl_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:3161 [inline]
 kvm_dev_ioctl+0x1bf/0x1460 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3205
 vfs_ioctl fs/ioctl.c:45 [inline]
 do_vfs_ioctl+0x1bf/0x1780 fs/ioctl.c:685
 SYSC_ioctl fs/ioctl.c:700 [inline]
 SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
 entry_SYSCALL_64_fastpath+0x1f/0xbe

Freed by task 20960:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
 save_stack+0x43/0xd0 mm/kasan/kasan.c:515
 set_track mm/kasan/kasan.c:527 [inline]
 kasan_slab_free+0x6e/0xc0 mm/kasan/kasan.c:592
 __cache_free mm/slab.c:3511 [inline]
 kfree+0xd3/0x250 mm/slab.c:3828
 kvm_arch_free_vm include/linux/kvm_host.h:778 [inline]
 kvm_destroy_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:732 [inline]
 kvm_put_kvm+0x709/0x9a0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:747
 kvm_vm_release+0x42/0x50 arch/x86/kvm/../../../virt/kvm/kvm_main.c:758
 __fput+0x332/0x800 fs/file_table.c:209
 ____fput+0x15/0x20 fs/file_table.c:245
 task_work_run+0x197/0x260 kernel/task_work.c:116
 exit_task_work include/linux/task_work.h:21 [inline]
 do_exit+0x1a53/0x27c0 kernel/exit.c:878
 do_group_exit+0x149/0x420 kernel/exit.c:982
 get_signal+0x7d8/0x1820 kernel/signal.c:2318
 do_signal+0xd2/0x2190 arch/x86/kernel/signal.c:808
 exit_to_usermode_loop+0x21c/0x2d0 arch/x86/entry/common.c:157
 prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
 syscall_return_slowpath+0x4d3/0x570 arch/x86/entry/common.c:263
 entry_SYSCALL_64_fastpath+0xbc/0xbe

The buggy address belongs to the object at ffff880141581640
 which belongs to the cache kmalloc-65536 of size 65536
The buggy address is located 36644 bytes inside of
 65536-byte region [ffff880141581640, ffff880141591640)
The buggy address belongs to the page:
page:ffffea000464b400 count:1 mapcount:0 mapping:ffff880141581640
index:0x0 compound_mapcount: 0
flags: 0x200000000008100(slab|head)
raw: 0200000000008100 ffff880141581640 0000000000000000 0000000100000001
raw: ffffea00064b1f20 ffffea000640fa20 ffff8801db800d00
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff88014158a400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88014158a480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88014158a500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                       ^
 ffff88014158a580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88014158a600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


Paul McKenney writes:

===
Hmmm...  I am not seeing a call to cleanup_srcu_struct() for the
->track_srcu field of the kvm_page_track_notifier_head structure.
Or is this structure immortal, so that it is never cleaned up?
Or am I just blind this morning?

In any case, freeing the kvm_page_track_notifier_head structure
without first invoking cleanup_srcu_struct() on its ->track_srcu
srcu_struct field could easily result in a use-after-free bug.
===

I also don't see any cleanup of the page-track SRCU anywhere.
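
For illustration, the fix Paul describes would pair the existing
init_srcu_struct() on ->track_srcu with a cleanup_srcu_struct() call
during VM teardown. A hypothetical sketch only -- the helper name, file,
and call site below are assumptions, not a tested patch:

--- a/arch/x86/kvm/page_track.c
+++ b/arch/x86/kvm/page_track.c
@@
+/* Tear down the page-track SRCU before the kvm struct is freed. */
+void kvm_page_track_cleanup(struct kvm *kvm)
+{
+	struct kvm_page_track_notifier_head *head;
+
+	head = &kvm->arch.track_notifier_head;
+	cleanup_srcu_struct(&head->track_srcu);
+}

with kvm_page_track_cleanup() invoked from kvm_arch_destroy_vm(), before
kvm_arch_free_vm() runs; that would stop process_srcu() from touching the
srcu_struct after the containing allocation has been kfree()d. Whether the
cleanup belongs there or elsewhere in the destroy path is for the KVM
maintainers to decide.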
