Message-Id: <1242361761.21442.276.camel@haakon2.linux-iscsi.org>
Date: Thu, 14 May 2009 21:29:21 -0700
From: "Nicholas A. Bellinger" <nab@...ux-iscsi.org>
To: kvm-devel <kvm@...r.kernel.org>,
linux-scsi <linux-scsi@...r.kernel.org>,
Linux-netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>, Hannes Reinecke <hare@...e.de>,
FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
Mike Christie <michaelc@...wisc.edu>,
Christoph Hellwig <hch@....de>,
James Bottomley <James.Bottomley@...senPartnership.com>,
Sheng Yang <sheng@...ux.intel.com>,
Leonid Grossman <leonid.grossman@...erion.com>,
Ramkrishna Vepa <Ramkrishna.Vepa@...erion.com>,
Rastapur Santosh <santosh.rastapur@...erion.com>
Subject: KVM 10/Gb Ethernet PCIe passthrough with Linux/iSCSI and large
block sizes
Greetings all,
The first test results for Linux/iSCSI Initiators and targets for large
block sizes using 10 Gb/sec Ethernet + PCIe device-passthrough into
Linux/KVM guests have been posted at:
http://linux-iscsi.org/index.php/KVM-LIO-Target
So far, the results have been quite impressive using the Neterion X3100
series hardware with recent KVM-85 stable code (with Marcelo's patches,
see the above link) on v2.6.29.2 KVM guests and v2.6.30-rc3 KVM hosts.
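For reference, the device-passthrough step on a KVM-85-era host looked
roughly like the sketch below. The PCI address, vendor/device IDs,
driver name, and guest image path are all hypothetical stand-ins, and
-pcidevice is assumed to be the assignment flag of qemu-kvm from that
period; adjust everything for the actual adapter and host.

```shell
# Hypothetical sketch of KVM-85-era PCIe passthrough of a 10 Gb/sec NIC
# function into a guest. All addresses/IDs below are illustrative.

# Detach the function from its host driver (driver name is a stand-in):
echo "0000:07:00.0" > /sys/bus/pci/drivers/vxge/unbind

# Hand the function to pci-stub so the host kernel leaves it alone
# (vendor/device IDs are placeholders for the actual adapter):
modprobe pci-stub
echo "17d5 5833" > /sys/bus/pci/drivers/pci-stub/new_id

# Boot the guest with the function assigned (KVM-85-era flag syntax):
qemu-system-x86_64 -m 2048 -smp 2 \
    -drive file=guest.img,if=virtio \
    -pcidevice host=07:00.0
```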
Using iSCSI RFC-defined MC/S to scale a *single* KVM-accessible
Linux/iSCSI Logical Unit to 10 Gb/sec line-rate speeds has been
successful with Core-iSCSI WRITE/READ (bi-directional) traffic, driven
by the Linux-Test-Project disktest pthreaded benchmark with O_DIRECT
enabled.
Using Core-iSCSI MC/S with iSCSI READ (uni-directional) traffic the
average is about 6-7 Gb/sec, and with MC/S iSCSI WRITE
(uni-directional) about 5 Gb/sec, to the RAMDISK_DR and FILEIO storage
objects for these same streaming tests. Please see the link above for
more information on the tests and the hardware/software setup.
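An invocation along the lines below approximates the streaming READ
workload described above. The flags are recalled from LTP disktest
usage and the device path is illustrative, not the actual test
command; check disktest's own help output before reusing it.

```shell
# Hypothetical disktest invocation for the large-block streaming test:
#   -B 256k  large sequential block size
#   -I BD    block-device I/O with O_DIRECT
#   -K 8     eight worker pthreads
#   -p l     linear (sequential) access pattern
#   -P T     report throughput statistics
#   -T 300   run for 300 seconds
#   -r       READ traffic (add -w for bi-directional WRITE/READ)
disktest -B 256k -I BD -K 8 -p l -P T -T 300 -r /dev/sdb
```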
The tests have been run with both upstream Open-iSCSI and Core-iSCSI
Initiators against Target_Core_Mod/LIO-Target v3.0 in KVM guests. It is
important to note that these tests have been run with tcp_sendpage()
disabled in the 10 Gb/sec KVM guests (tcp_sendpage() is enabled by
default in LIO-Target and Open-iSCSI); it was disabled in order to get
up and running with the 10 Gb/sec hardware. 1 Gb/sec e1000e ports are
stable with sendpage() in LIO-Target KVM guests, and it will be
re-enabled on the 10 Gb/sec hardware in subsequent tests. Also note
that Open-iSCSI WRITEs using tcp_sendpage() have been omitted from this
first run of tests.
It is also important to note that both iSCSI MC/S and dm-multipath are
methods that allow a single Linux/SCSI Logical Unit to scale across
multiple TCP connections using the iSCSI protocol. Both methods (iSCSI
RFC fabric-level multiplexing and OS-level SCSI multipath) provide a
means of scaling across multiple X3110 Vpaths (MSI-X TX/RX pairs), and
MC/S in particular carries a low amount of overhead.
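The dm-multipath variant of that scaling can be sketched as follows
with the standard open-iscsi and multipath-tools userspace: log in to
the same target through two portals (one per Vpath/NIC) and let
dm-multipath coalesce the resulting SCSI paths. The target IQN and
portal addresses are made up for illustration.

```shell
# Sketch: scale one Logical Unit across two TCP connections with
# dm-multipath instead of MC/S. IQN and portal IPs are hypothetical.
TARGET=iqn.2003-01.org.linux-iscsi.target:lun0

# Discover the target, then log in through two different portals:
iscsiadm -m discovery -t sendtargets -p 192.168.1.1:3260
iscsiadm -m node -T "$TARGET" -p 192.168.1.1:3260 --login
iscsiadm -m node -T "$TARGET" -p 192.168.2.1:3260 --login

# dm-multipath coalesces the two SCSI paths into one /dev/mapper node:
multipath -ll
```

With path_grouping_policy set to multibus in /etc/multipath.conf, I/O
is spread across both active paths, which is the OS-level analogue of
MC/S connection-level multiplexing.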
Some of the future setups for KVM + 10 Gb/sec will use dm-multipath
block devices and 10 Gb/sec Ethernet PCIe multi-function mode into KVM
guests, as well as PCIe SR-IOV on recent IOMMU-capable hardware
platforms.
Many thanks to the Neterion folks and Sheng Yang for answering my
questions!
--nab