Open Source and information security mailing list archives
 
Date:	Sat, 10 Mar 2012 15:34:19 +0800
From:	Hu Tao <hutao@...fujitsu.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	linux-kernel@...r.kernel.org,
	"Michael S. Tsirkin" <mst@...hat.com>,
	linux-scsi <linux-scsi@...r.kernel.org>,
	Rusty Russell <rusty@...tcorp.com.au>,
	Stefan Hajnoczi <stefanha@...ux.vnet.ibm.com>,
	Mike Christie <michaelc@...wisc.edu>
Subject: Re: [PATCH v3 0/2] virtio-scsi driver

On Mon, Dec 19, 2011 at 01:03:06PM +0100, Paolo Bonzini wrote:
> This is the first implementation of the virtio-scsi driver, a virtual
> HBA that will be supported by KVM.  It implements a subset of the spec,
> in particular it does not implement asynchronous notifications for either
> LUN reset/removal/addition or CD-ROM media events, but it is already
> functional and usable.

Hi Paolo,

In my tests, two BUG_ONs are triggered: one at blk-core.c:2292, in
blk_finish_request(): BUG_ON(blk_queued_rq(req)); the other at
blk-softirq.c:110, in __blk_complete_request(): BUG_ON(!q->softirq_done_fn).

env:

In the guest there are 300 disks for testing, all on one virtio-scsi
controller, plus one virtio-blk disk on which the guest OS lives.

How to reproduce the first one:

Launch 300 background dd processes, one per virtio-scsi disk:

  # for d in `ls /dev/sd*[1-9]`; do dd if=/dev/zero of=/mnt/$d/delme bs=1M count=200 & done

The first BUG_ON shows up after some SCSI commands abort.

To reproduce the second one, the dd command is slightly different:

  dd if=/root/testfile ...

where /root/testfile is a file in the virtio-blk disk.

BUG_ON(!q->softirq_done_fn) seems impossible, but I examined the vmcore
file with the crash tool and found that the request had been sent to the
virtio-blk disk (but was wrongly grabbed by virtio-scsi):


------------[ cut here ]------------
kernel BUG at /home/hutao/linux-2.6/block/blk-softirq.c:110!
invalid opcode: 0000 [#1] SMP 
CPU 0 
Modules linked in: tun bridge stp llc autofs4 pcspkr sg i2c_piix4 i2c_core sr_mod cdrom [last unloaded: speedstep_lib]

Pid: 0, comm: swapper/0 Not tainted 3.3.0-rc3-ht-virtio-scsi-1+ #13 Bochs Bochs
RIP: 0010:[<ffffffff81267567>]  [<ffffffff81267567>] __blk_complete_request+0x177/0x180
RSP: 0018:ffff88003fc03bd0  EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff880037c90d40 RCX: 0000000000000000
RDX: ffff88003acf57e0 RSI: ffff880037c90d40 RDI: ffff88003780f118
RBP: ffff88003fc03bf0 R08: 0000000000000000 R09: ffff88003fc03ed0
R10: 00000000000e147a R11: 00000000008e147a R12: ffff880013b1e8c0
R13: ffff88003780f118 R14: ffff88003ad1a6a0 R15: 0000000000000086
FS:  0000000000000000(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f52da498000 CR3: 000000003506f000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper/0 (pid: 0, threadinfo ffffffff81a00000, task ffffffff81a0d020)
Stack:
 ffff880037c90d40 ffff880013b1e8c0 ffffffff81349d00 ffff88003ad1a6a0
 ffff88003fc03c00 ffffffff81267595 ffff88003fc03c20 ffffffff81339c06
 ffff880037c90d40 ffff880013b1e8c0 ffff88003fc03c50 ffffffff81349e49
Call Trace:
 <IRQ> 
 [<ffffffff81349d00>] ? virtscsi_ctrl_done+0x20/0x20
 [<ffffffff81267595>] blk_complete_request+0x25/0x30
 [<ffffffff81339c06>] scsi_done+0x26/0x60
 [<ffffffff81349e49>] virtscsi_complete_cmd+0x149/0x2a0
 [<ffffffff812ed98b>] ? virtqueue_get_buf+0x6b/0x120
 [<ffffffff81349c66>] virtscsi_vq_done+0x56/0x90
 [<ffffffff81349cb5>] virtscsi_req_done+0x15/0x20
 [<ffffffff812ed8ac>] vring_interrupt+0x3c/0xb0
 [<ffffffff812ee7e3>] vp_vring_interrupt+0x63/0xa0
 [<ffffffff810b746d>] handle_irq_event_percpu+0x5d/0x210
 [<ffffffff810b7662>] handle_irq_event+0x42/0x70
 [<ffffffff810bace9>] handle_edge_irq+0x69/0x120
 [<ffffffff8100436c>] handle_irq+0x5c/0x150
 [<ffffffff81041532>] ? irq_enter+0x22/0x80
 [<ffffffff814a3cbd>] do_IRQ+0x5d/0xe0
 [<ffffffff8149a4ee>] common_interrupt+0x6e/0x6e
 [<ffffffff812ee804>] ? vp_vring_interrupt+0x84/0xa0
 [<ffffffff810415f0>] ? __do_softirq+0x60/0x200
 [<ffffffff810b766d>] ? handle_irq_event+0x4d/0x70
 [<ffffffff814a35dc>] call_softirq+0x1c/0x30
 [<ffffffff810042d5>] do_softirq+0x65/0xa0
 [<ffffffff8104144d>] irq_exit+0xbd/0xe0
 [<ffffffff814a3cc6>] do_IRQ+0x66/0xe0
 [<ffffffff8149a4ee>] common_interrupt+0x6e/0x6e
 <EOI> 
 [<ffffffff81029f76>] ? native_safe_halt+0x6/0x10
 [<ffffffff8100b37d>] default_idle+0x5d/0x190
 [<ffffffff81002099>] cpu_idle+0xd9/0x120
 [<ffffffff8148113d>] rest_init+0x6d/0x80
 [<ffffffff81ad3cc7>] start_kernel+0x3d6/0x3e1
 [<ffffffff81ad332a>] x86_64_start_reservations+0x131/0x136
 [<ffffffff81ad3432>] x86_64_start_kernel+0x103/0x112
Code: 4d 89 6d 28 66 41 c7 45 30 00 00 31 d2 89 df e8 60 b7 e2 ff e9 5b ff ff ff 0f 1f 00 bf 04 00 00 00 e8 9e a3 dd ff e9 49 ff ff ff <0f> 0b eb fe 0f 1f 44 00 00 55 48 89 e5 66 66 66 66 90 3e 0f ba 
RIP  [<ffffffff81267567>] __blk_complete_request+0x177/0x180
 RSP <ffff88003fc03bd0>
crash-6> p /d &(((struct request*)0)->q)
$8 = 56
crash-6> rd ffff88003780f118 8
ffff88003780f118:  ffff88003780f118 ffff88003780f118   ...7.......7....
ffff88003780f128:  0000000106386166 0000000000000000   fa8.............
ffff88003780f138:  0000000000000000 0000000000000000   ................
ffff88003780f148:  0000000000000000 ffff88003acf57e0   .........W.:....
crash-6> p /d &(((struct request_queue*)0)->softirq_done_fn)
$9 = 152
crash-6> rd ffff88003acf57e0 20
ffff88003acf57e0:  ffff88003acf57e0 ffff88003acf57e0   .W.:.....W.:....
ffff88003acf57f0:  ffff88003aea1438 ffff88003acc75c0   8..:.....u.:....
ffff88003acf5800:  0000001500000080 0000000000000000   ................
ffff88003acf5810:  0000000000000095 ffff88003acc7640   ........@v.:....
ffff88003acf5820:  00000000a19aa19a ffff8800308c9370   ........p..0....
ffff88003acf5830:  ffff880032297370 0000000049864986   ps)2.....I.I....
ffff88003acf5840:  ffff88003acf5840 ffff88003acf5840   @X.:....@X.:....
ffff88003acf5850:  ffffffff81335c00 ffffffff81261760   .\3.....`.&.....
ffff88003acf5860:  0000000000000000 0000000000000000   ................
ffff88003acf5870:  0000000000000000 0000000000000000   ................
crash-6> sym ffffffff81261760
ffffffff81261760 (T) blk_queue_bio /home/hutao/linux-2.6/block/blk-core.c: 1315
crash-6> sym ffffffff81335c00
ffffffff81335c00 (t) do_virtblk_request /home/hutao/linux-2.6/drivers/block/virtio_blk.c: 192

                     ^^^ this is request_queue->request_fn
                     
crash-6> 


-- 
Thanks,
Hu Tao

View attachment "dmesg-1" of type "text/plain" (247003 bytes)

View attachment "dmesg-2" of type "text/plain" (171984 bytes)
