Open Source and information security mailing list archives
Date: Thu, 26 Nov 2015 10:28:10 +0800
From: Bob Liu <bob.liu@...cle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC: xen-devel@...ts.xen.org, linux-kernel@...r.kernel.org,
    roger.pau@...rix.com, felipe.franciosi@...rix.com, axboe@...com,
    avanzini.arianna@...il.com, rafal.mielniczuk@...rix.com,
    jonathan.davies@...rix.com, david.vrabel@...rix.com
Subject: Re: [PATCH v5 00/10] xen-block: multi hardware-queues/rings support

On 11/26/2015 06:12 AM, Konrad Rzeszutek Wilk wrote:
> On Wed, Nov 25, 2015 at 03:56:03PM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Nov 25, 2015 at 02:25:07PM -0500, Konrad Rzeszutek Wilk wrote:
>>>> xen/blkback: separate ring information out of struct xen_blkif
>>>> xen/blkback: pseudo support for multi hardware queues/rings
>>>> xen/blkback: get the number of hardware queues/rings from blkfront
>>>> xen/blkback: make pool of persistent grants and free pages per-queue
>>>
>>> OK, got to those as well. I have put them in 'devel/for-jens-4.5' and
>>> am going to test them overnight before pushing them out.
>>>
>>> I see two bugs in the code that we MUST deal with:
>>>
>>> - print_stats() is going to show zero values.
>>> - the sysfs code (VBD_SHOW) isn't converted over to fetch data
>>>   from all the rings.
>>
>> - kthread_run can't handle the two "name, i" arguments. I see:
>>
>> root      5101     2  0 20:47 ?        00:00:00 [blkback.3.xvda-]
>> root      5102     2  0 20:47 ?        00:00:00 [blkback.3.xvda-]
>
> And doing save/restore:
>
> xl save <id> /tmp/A;
> xl restore /tmp/A;
>
> ends up losing the proper state and not getting the ring setup back.
> I see this in the backend:
>
> [ 2719.448600] vbd vbd-22-51712: -1 guest requested 0 queues, exceeding the maximum of 3.
>
> And XenStore agrees:
>
> tool = ""
> xenstored = ""
> local = ""
>  domain = ""
>   0 = ""
>    domid = "0"
>    name = "Domain-0"
>    device-model = ""
>     0 = ""
>      state = "running"
>    error = ""
>     backend = ""
>      vbd = ""
>       2 = ""
>        51712 = ""
>         error = "-1 guest requested 0 queues, exceeding the maximum of 3."
>
> .. which also leads to a memory leak as xen_blkbk_remove never gets
> called.

I think that was already fixed by your patch:
[PATCH RFC 2/2] xen/blkback: Free resources if connect_ring failed.

P.S. I didn't see your git tree updated with these patches.

--
Regards,
-Bob