Message-ID: <20171213143413.e3efqns53333uf5g@lakrids.cambridge.arm.com>
Date:   Wed, 13 Dec 2017 14:34:14 +0000
From:   Mark Rutland <mark.rutland@....com>
To:     Cornelia Huck <cohuck@...hat.com>,
        "Michael S . Tsirkin" <mst@...hat.com>
Cc:     linux-kernel@...r.kernel.org,
        weiping zhang <zhangweiping@...ichuxing.com>,
        virtualization@...ts.linux-foundation.org
Subject: Re: [PATCHv2] virtio_mmio: fix devm cleanup

On Tue, Dec 12, 2017 at 06:02:23PM +0100, Cornelia Huck wrote:
> On Tue, 12 Dec 2017 13:45:50 +0000
> Mark Rutland <mark.rutland@....com> wrote:
> 
> > Recent rework of the virtio_mmio probe/remove paths balanced a
> > devm_ioremap() with an iounmap() rather than its devm variant. This ends
> > up corrupting the devm data structures, and results in the following
> > boot-time splat on arm64 under QEMU 2.9.0:
> > 
> > [    3.450397] ------------[ cut here ]------------
> > [    3.453822] Trying to vfree() nonexistent vm area (00000000c05b4844)
> > [    3.460534] WARNING: CPU: 1 PID: 1 at mm/vmalloc.c:1525 __vunmap+0x1b8/0x220
> > [    3.475898] Kernel panic - not syncing: panic_on_warn set ...
> > [    3.475898]
> > [    3.493933] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.15.0-rc3 #1
> > [    3.513109] Hardware name: linux,dummy-virt (DT)
> > [    3.525382] Call trace:
> > [    3.531683]  dump_backtrace+0x0/0x368
> > [    3.543921]  show_stack+0x20/0x30
> > [    3.547767]  dump_stack+0x108/0x164
> > [    3.559584]  panic+0x25c/0x51c
> > [    3.569184]  __warn+0x29c/0x31c
> > [    3.576023]  report_bug+0x1d4/0x290
> > [    3.586069]  bug_handler.part.2+0x40/0x100
> > [    3.597820]  bug_handler+0x4c/0x88
> > [    3.608400]  brk_handler+0x11c/0x218
> > [    3.613430]  do_debug_exception+0xe8/0x318
> > [    3.627370]  el1_dbg+0x18/0x78
> > [    3.634037]  __vunmap+0x1b8/0x220
> > [    3.648747]  vunmap+0x6c/0xc0
> > [    3.653864]  __iounmap+0x44/0x58
> > [    3.659771]  devm_ioremap_release+0x34/0x68
> > [    3.672983]  release_nodes+0x404/0x880
> > [    3.683543]  devres_release_all+0x6c/0xe8
> > [    3.695692]  driver_probe_device+0x250/0x828
> > [    3.706187]  __driver_attach+0x190/0x210
> > [    3.717645]  bus_for_each_dev+0x14c/0x1f0
> > [    3.728633]  driver_attach+0x48/0x78
> > [    3.740249]  bus_add_driver+0x26c/0x5b8
> > [    3.752248]  driver_register+0x16c/0x398
> > [    3.757211]  __platform_driver_register+0xd8/0x128
> > [    3.770860]  virtio_mmio_init+0x1c/0x24
> > [    3.782671]  do_one_initcall+0xe0/0x398
> > [    3.791890]  kernel_init_freeable+0x594/0x660
> > [    3.798514]  kernel_init+0x18/0x190
> > [    3.810220]  ret_from_fork+0x10/0x18
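
(For anyone following along: the problematic pattern is roughly the sketch
below. This is an illustrative example only, not the actual virtio_mmio
code; foo_probe() and some_setup_fails() are made-up names.)

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

/*
 * Illustrative sketch only, not the real driver.  A mapping created
 * with devm_ioremap() is tracked by devres; tearing it down with a
 * plain iounmap() leaves devres holding a stale entry, which it then
 * tries to unmap again (devm_ioremap_release() -> __iounmap(), as in
 * the splat above) when probe fails or the device is unbound.
 */
static bool some_setup_fails(void)
{
	/* hypothetical placeholder for a transport-specific check */
	return true;
}

static int foo_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -EINVAL;

	base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
	if (!base)
		return -ENOMEM;

	if (some_setup_fails()) {
		iounmap(base);	/* BUG: should be left to devres */
		return -ENODEV;
	}

	return 0;
}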
> > 
> > To fix this, we can simply rip out the explicit cleanup that the devm
> > infrastructure will do for us when our probe function returns an error
> > code, or when our remove function returns.
> > 
> > We only need to ensure that we call put_device() if a call to
> > register_virtio_device() fails in the probe path.
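
(The fix then takes roughly the shape sketched below, again as a simplified
illustration rather than the patch itself: drop every explicit unmap/free
and keep only the put_device() on the register_virtio_device() failure
path. foo_device and the foo_*() names are made up.)

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/virtio.h>

struct foo_device {
	struct virtio_device vdev;
	void __iomem *base;
};

static int foo_probe(struct platform_device *pdev)
{
	struct foo_device *fd;
	struct resource *res;
	int rc;

	fd = devm_kzalloc(&pdev->dev, sizeof(*fd), GFP_KERNEL);
	if (!fd)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -EINVAL;

	fd->base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
	if (!fd->base)
		return -ENOMEM;

	fd->vdev.dev.parent = &pdev->dev;
	/* ... transport-specific setup elided ... */

	rc = register_virtio_device(&fd->vdev);
	if (rc)
		/*
		 * The only manual cleanup left: drop the reference on
		 * the embedded struct device if registration fails.
		 */
		put_device(&fd->vdev.dev);

	/*
	 * No iounmap()/kfree() on any path: devres releases the
	 * devm_*() allocations when probe returns an error or when
	 * the device is later unbound.
	 */
	return rc;
}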
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@....com>
> > Fixes: 7eb781b1bbb7136f ("virtio_mmio: add cleanup for virtio_mmio_probe")
> > Fixes: 25f32223bce5c580 ("virtio_mmio: add cleanup for virtio_mmio_remove")
> > Cc: Cornelia Huck <cohuck@...hat.com>
> > Cc: Michael S. Tsirkin <mst@...hat.com>
> > Cc: weiping zhang <zhangweiping@...ichuxing.com>
> > Cc: virtualization@...ts.linux-foundation.org
> > ---
> >  drivers/virtio/virtio_mmio.c | 43 +++++++++----------------------------------
> >  1 file changed, 9 insertions(+), 34 deletions(-)
> 
> In the hope that I have grokked the devm_* interface by now,
> 
> Reviewed-by: Cornelia Huck <cohuck@...hat.com>

Thanks!

Michael, could you please queue this as a fix for v4.15?

This regressed booting of arm64 VMs between v4.15-rc1 and v4.15-rc2,
impacting our automated regression testing, and I'd very much like to
get back to testing pure mainline rather than mainline + local fixes.

Thanks,
Mark.
