Message-ID: <20201029100906.GA137578@mtl-vdi-166.wap.labs.mlnx>
Date: Thu, 29 Oct 2020 12:09:06 +0200
From: Eli Cohen <elic@...dia.com>
To: Jason Wang <jasowang@...hat.com>
CC: "Michael S. Tsirkin" <mst@...hat.com>,
<virtualization@...ts.linux-foundation.org>,
netdev <netdev@...r.kernel.org>, <lingshan.zhu@...el.com>
Subject: Re: [PATCH] vhost: Use mutex to protect vq_irq setup
On Thu, Oct 29, 2020 at 04:08:24PM +0800, Jason Wang wrote:
>
> > On 2020/10/29 3:50 PM, Eli Cohen wrote:
> > On Thu, Oct 29, 2020 at 03:39:24PM +0800, Jason Wang wrote:
> > > > On 2020/10/29 3:37 PM, Eli Cohen wrote:
> > > > On Thu, Oct 29, 2020 at 03:03:24PM +0800, Jason Wang wrote:
> > > > > > On 2020/10/28 10:20 PM, Eli Cohen wrote:
> > > > > > Both irq_bypass_register_producer() and irq_bypass_unregister_producer()
> > > > > > require process context to run. Change the call context lock from a
> > > > > > spinlock to a mutex to protect the setup process and avoid deadlocks.
> > > > > >
> > > > > > Fixes: 265a0ad8731d ("vhost: introduce vhost_vring_call")
> > > > > > Signed-off-by: Eli Cohen <elic@...dia.com>
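For context, the change being discussed boils down to something like the
sketch below. The struct layout and helper name are simplified for
illustration and are not the exact upstream code; the point is only that
producer (un)registration may sleep, so the setup has to be serialized by
a mutex rather than a spinlock:

/*
 * Simplified sketch, not the exact upstream code: the irq bypass
 * producer (un)registration can sleep, so a sleeping lock must
 * serialize the vq call_ctx setup.
 */
struct vhost_vring_call {
	struct eventfd_ctx *ctx;
	struct irq_bypass_producer producer;
	struct mutex ctx_lock;		/* was: spinlock_t ctx_lock */
};

static void vq_setup_irq(struct vhost_vring_call *call_ctx, int irq)
{
	mutex_lock(&call_ctx->ctx_lock);	/* was: a spinlock */
	call_ctx->producer.irq = irq;
	irq_bypass_register_producer(&call_ctx->producer);	/* may sleep */
	mutex_unlock(&call_ctx->ctx_lock);
}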
> > > > > Hi Eli:
> > > > >
> > > > > During review we spotted that the spinlock is not necessary; it was already
> > > > > protected by the vq mutex, so it was removed in this commit:
> > > > >
> > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=86e182fe12ee5869022614457037097c70fe2ed1
> > > > >
> > > > > Thanks
> > > > >
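Put differently, the paths that touch call_ctx already run under the
per-virtqueue mutex, so the inner lock adds nothing. A rough sketch of
that calling pattern, with a made-up helper name rather than the code
from the commit above:

/*
 * Rough sketch only (the helper name is made up): the irq update is
 * driven from paths that already take vq->mutex, so no extra
 * ctx_lock around call_ctx is needed.
 */
static void vdpa_update_vq_irq(struct vhost_virtqueue *vq, int irq)
{
	mutex_lock(&vq->mutex);
	vq->call_ctx.producer.irq = irq;
	irq_bypass_register_producer(&vq->call_ctx.producer);
	mutex_unlock(&vq->mutex);
}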
> > > > I see, thanks.
> > > >
> > > > BTW, while testing irq bypassing, I noticed that qemu started crashing
> > > > and I failed to boot the VM. Is that a known issue? I checked using an
> > > > updated master branch of qemu, pulled yesterday.
> > > Not known yet.
> > >
> > >
> > > > Any ideas how to check this further?
> > > It would be helpful if you could paste the call trace here.
> > >
> > I am not too familiar with qemu. Given that I use virsh start to boot
> > the VM, how can I get the call trace?
>
>
> You probably need to configure qemu with --enable-debug. Then, after the VM
> is launched, you can attach gdb to the qemu process; gdb may report a call
> trace if qemu crashes.
>
I run qemu from the console (no virsh) and I get this message:
*** stack smashing detected ***: terminated
Aborted (core dumped)
When I run coredumpctl debug on the core file, I see this backtrace:
#0 __GI_raise (sig=sig@...ry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f0ca5b95895 in __GI_abort () at abort.c:79
#2 0x00007f0ca5bf0857 in __libc_message (action=action@...ry=do_abort, fmt=fmt@...ry=0x7f0ca5d01c14 "*** %s ***: terminated\n") at ../sysdeps/posix/libc_fatal.c:155
#3 0x00007f0ca5c8177a in __GI___fortify_fail (msg=msg@...ry=0x7f0ca5d01bfc "stack smashing detected") at fortify_fail.c:26
#4 0x00007f0ca5c81746 in __stack_chk_fail () at stack_chk_fail.c:24
#5 0x000055ce01cd4d4e in vhost_vdpa_set_backend_cap (dev=0x55ce03800370) at ../hw/virtio/vhost-vdpa.c:256
#6 0x000055ce01cbc42c in vhost_dev_set_features (dev=dev@...ry=0x55ce03800370, enable_log=<optimized out>) at ../hw/virtio/vhost.c:820
#7 0x000055ce01cbf5b8 in vhost_dev_start (hdev=hdev@...ry=0x55ce03800370, vdev=vdev@...ry=0x55ce045edc70) at ../hw/virtio/vhost.c:1701
#8 0x000055ce01a57eab in vhost_net_start_one (dev=0x55ce045edc70, net=0x55ce03800370) at ../hw/net/vhost_net.c:246
#9 vhost_net_start (dev=dev@...ry=0x55ce045edc70, ncs=0x55ce04601510, total_queues=total_queues@...ry=1) at ../hw/net/vhost_net.c:351
#10 0x000055ce01cdafbc in virtio_net_vhost_status (status=<optimized out>, n=0x55ce045edc70) at ../hw/net/virtio-net.c:281
#11 virtio_net_set_status (vdev=0x55ce045edc70, status=<optimized out>) at ../hw/net/virtio-net.c:362
#12 0x000055ce01c7015b in virtio_set_status (vdev=vdev@...ry=0x55ce045edc70, val=val@...ry=15 '\017') at ../hw/virtio/virtio.c:1957
#13 0x000055ce01bdf4e8 in virtio_pci_common_write (opaque=0x55ce045e5ae0, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1258
#14 0x000055ce01ce05fc in memory_region_write_accessor
(mr=mr@...ry=0x55ce045e64c0, addr=20, value=value@...ry=0x7f0c9ec6f7b8, size=size@...ry=1, shift=<optimized out>, mask=mask@...ry=255, attrs=...) at ../softmmu/memory.c:484
#15 0x000055ce01cdf11e in access_with_adjusted_size
(addr=addr@...ry=20, value=value@...ry=0x7f0c9ec6f7b8, size=size@...ry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=
0x55ce01ce0570 <memory_region_write_accessor>, mr=0x55ce045e64c0, attrs=...) at ../softmmu/memory.c:545
#16 0x000055ce01ce2933 in memory_region_dispatch_write (mr=mr@...ry=0x55ce045e64c0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@...ry=...)
at ../softmmu/memory.c:1494
#17 0x000055ce01c81380 in flatview_write_continue
(fv=fv@...ry=0x7f0980000b90, addr=addr@...ry=4261412884, attrs=attrs@...ry=..., ptr=ptr@...ry=0x7f0ca674f028, len=len@...ry=1, addr1=<optimized out>, l=<optimized out>, mr=0x55ce045e64c0) at
/images/eli/src/newgits/qemu/include/qemu/host-utils.h:164
#18 0x000055ce01c842c5 in flatview_write (len=1, buf=0x7f0ca674f028, attrs=..., addr=4261412884, fv=0x7f0980000b90) at ../softmmu/physmem.c:2807
#19 address_space_write (as=0x55ce02740800 <address_space_memory>, addr=4261412884, attrs=..., buf=buf@...ry=0x7f0ca674f028, len=1) at ../softmmu/physmem.c:2899
#20 0x000055ce01c8435a in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
attrs@...ry=..., buf=buf@...ry=0x7f0ca674f028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2909
#21 0x000055ce01cb0d76 in kvm_cpu_exec (cpu=cpu@...ry=0x55ce03827620) at ../accel/kvm/kvm-all.c:2539
#22 0x000055ce01d2ea75 in kvm_vcpu_thread_fn (arg=arg@...ry=0x55ce03827620) at ../accel/kvm/kvm-cpus.c:49
#23 0x000055ce01f05559 in qemu_thread_start (args=0x7f0c9ec6f9b0) at ../util/qemu-thread-posix.c:521
#24 0x00007f0ca5d43432 in start_thread (arg=<optimized out>) at pthread_create.c:477
#25 0x00007f0ca5c71913 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
The stack check that fires at frame 5 looks to me like a false positive.
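For what it is worth, that message comes from the compiler's stack
protector: __stack_chk_fail() runs when the canary placed next to a
function's locals has been overwritten by the time the function returns,
typically because something wrote past a local buffer. A contrived
user-space example of the pattern (not the actual qemu code, all names
made up):

#include <stdint.h>
#include <string.h>

/* Stand-in for an ioctl()-style call that copies data back to the caller. */
static void fake_get_features(void *dst, size_t copy_size)
{
	static const uint8_t blob[16] = { 0 };

	memcpy(dst, blob, copy_size);
}

void overrun_local(void)
{
	uint64_t features;	/* 8-byte local, sits next to the canary */

	/*
	 * Copying 16 bytes into the 8-byte local overruns it; with
	 * -fstack-protector the clobbered canary is detected on return,
	 * __stack_chk_fail() is called and the process aborts with
	 * "*** stack smashing detected ***".
	 */
	fake_get_features(&features, 2 * sizeof(features));
}

So it is worth double checking whether frame 5 really writes past one of
its locals, or whether the check misfires.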
> Thanks
>
>