Message-ID: <20100903105233.GA32193@redhat.com>
Date:	Fri, 3 Sep 2010 13:52:33 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Shirley Ma <mashirle@...ibm.com>
Cc:	xiaohui.xin@...el.com, netdev@...r.kernel.org, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, mingo@...e.hu, davem@...emloft.net,
	herbert@...dor.hengli.com.au, jdike@...ux.intel.com
Subject: Re: [RFC PATCH v9 00/16] Provide a zero-copy method on KVM
 virtio-net.

On Tue, Aug 10, 2010 at 11:55:04PM -0700, Shirley Ma wrote:
> On Tue, 2010-08-10 at 23:01 -0700, Shirley Ma wrote:
> > On Tue, 2010-08-10 at 18:43 -0700, Shirley Ma wrote:
> > > > Also I found some vhost performance regression on the new
> > > > kernel with tuning. I used to get 9.4Gb/s; now I can't reach it.
> > > 
> > > I forgot to mention that the kernel I used was a 2.6.36 one. And I
> > > found that the native host BW is limited to 8.0Gb/s, so the
> > > regression might come from the device driver, not vhost.
> > 
> > Something very interesting: when binding the ixgbe interrupts to cpu1
> > and running netperf/netserver on cpu0, the native host-to-host
> > performance is still around 8.0Gb/s; however, the macvtap zero-copy
> > result is 9.0Gb/s.
> > 
> > [root@...alhost ~]# netperf -H 192.168.10.74 -c -C -l60 -T0,0 -- -m 64K
> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.74 (192.168.10.74) port 0 AF_INET : cpu bind
> > Recv   Send    Send                          Utilization       Service Demand
> > Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> > Size   Size    Size     Time     Throughput  local    remote   local   remote
> > bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
> > 
> >  87380  16384  65536    60.00      9013.59   53.01    8.21     0.963   0.597
> > 
> > Below is perf top output:
> > 
> >               578.00  6.5% copy_user_generic_string   
> >               381.00  4.3% vmx_vcpu_run                
> >               250.00  2.8% schedule                    
> >               207.00  2.3% vhost_get_vq_desc           
> >               204.00  2.3% _raw_spin_lock_irqsave      
> >               197.00  2.2% translate_desc              
> >               193.00  2.2% memcpy_fromiovec            
> >               162.00  1.8% gup_pte_range   
> > 
> > We can compare your results with mine to see any difference.
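For reference, the binding described above can be reproduced along these
lines (the IRQ number is illustrative -- check /proc/interrupts for the
actual ixgbe vectors on your box):

    # find the ixgbe interrupt vector(s)
    grep ixgbe /proc/interrupts
    # route each vector to cpu1 (affinity mask 0x2)
    echo 2 > /proc/irq/42/smp_affinity
    # run netperf bound to cpu0 on both ends (-T0,0)
    netperf -H 192.168.10.74 -c -C -l60 -T0,0 -- -m 64K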

Could you look at the guest as well?
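(E.g. perf top run inside the guest; or, assuming the guest's kallsyms
and module list have been copied out to the host, something like the
following -- the paths are illustrative:

    perf kvm --guestkallsyms=/tmp/guest-kallsyms \
             --guestmodules=/tmp/guest-modules top
)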

> When binding the vhost thread to cpu3 and the qemu I/O thread to cpu2,
> the macvtap zero-copy patch can get 9.4Gb/s.
> 
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.74 (192.168.10.74) port 0 AF_INET : cpu bind
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
> 
>  87380  16384  65536    60.00      9408.19   55.69    8.45     0.970   0.589
> 
> Shirley

OTOH, CPU utilization is up too (send-side ~53.0% -> ~55.7%).
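For reference, that pinning can presumably be done with taskset (the
TIDs below are illustrative):

    # locate the vhost kernel thread and the qemu I/O thread
    ps -eLo tid,comm | grep -E 'vhost|qemu'
    # pin vhost to cpu3 and the qemu I/O thread to cpu2
    taskset -pc 3 4321
    taskset -pc 2 4322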

-- 
MST