Date:	Tue, 4 Nov 2014 15:44:47 +0100
From:	Alexey Lapitsky <lex.public@...il.com>
To:	Rusty Russell <rusty@...tcorp.com.au>
Cc:	Dave Airlie <airlied@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	virtualization@...ts.linux-foundation.org
Subject: Re: BUG_ON in virtio-ring.c

Hi,

Sorry for the long delay. It prints exactly the same:

[    3.792033] virtqueue elements = 128, max_segments = 126 (1 queues)
[    3.802191]  vda: vda1 vda2 < vda5 >

A little bit more about my setup (if it helps):

It's a qemu-system-x86_64 kvm instance with 16 cores and 10G of RAM.
I can reproduce the bug every time with mkfs.btrfs on a 10GB LVM
volume (right after the reboot).

I have almost no knowledge of vring / virtio.
Is it correct that we need just one sg_elem entry in vq->vring if
the vq->indirect flag is set?
That's what I thought when applying the "BUG_ON(total_sg >
vq->vring.num && !vq->indirect)" patch.
--
