Date:	Thu, 02 Apr 2009 10:24:39 -0400
From:	Gregory Haskins <ghaskins@...ell.com>
To:	Avi Kivity <avi@...hat.com>
CC:	Anthony Liguori <anthony@...emonkey.ws>,
	Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
	agraf@...e.de, pmullaney@...ell.com, pmorreale@...ell.com,
	rusty@...tcorp.com.au, netdev@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC PATCH 00/17] virtual-bus

Avi Kivity wrote:
> Gregory Haskins wrote:
>> Avi Kivity wrote:
>>  
>>> Gregory Haskins wrote:
>>>    
>>>> Avi Kivity wrote:
>>>>> My 'prohibitively expensive' is true only if you exit every packet.
>>>> Understood, but you still need to do this if you want something like
>>>> iSCSI READ transactions to have the lowest possible latency.
>>>>         
>>> Dunno, two microseconds is too much?  The wire imposes much more.
>>
>> No, but that's not what we are talking about.  You said signaling on
>> every packet is prohibitively expensive.  I am saying signaling on every
>> packet is required for decent latency.  So is it prohibitively expensive
>> or not?
>>   
>
> We're heading dangerously into the word-game area.  Let's not do that.
>
> If you have a high throughput workload with many packets per second
> then an exit per packet (whether to userspace or to the kernel) is
> expensive.  So you do exit mitigation.  Latency is not important since
> the packets are going to sit in the output queue anyway.

Agreed.  virtio-net currently does this with batching.  I do it with the
bidir NAPI thing (which effectively crosses the producer::consumer > 1
threshold to mitigate the signal path).


>
> If you have a request-response workload with the wire idle and latency
> critical, then there's no problem having an exit per packet because
> (a) there aren't that many packets and (b) the guest isn't doing any
> batching, so guest overhead will swamp the hypervisor overhead.
Right, so the trick is to use an algorithm that adapts here.  Batching
solves the first case, but not the second.  The bidir NAPI thing solves
both, but it does assume you have ample host processing power to run the
algorithm concurrently.  This may or may not be suitable for all
applications, I admit.
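To make that concrete, here is a toy sketch of the kick-suppression idea
(illustrative only; the names and structures are made up and are not the
actual virtio or venet code): the producer only takes the expensive exit
when the ring transitions from idle to busy, so a streaming workload
batches naturally while a request-response workload still gets a signal
per packet.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of exit mitigation via kick suppression (hypothetical,
 * for illustration).  The producer (guest) only signals the consumer
 * (host) when the ring goes from idle to busy; while the consumer is
 * still draining, additional packets queue without an exit.
 */
struct ring {
	int pending;		/* packets queued but not yet consumed */
	bool consumer_busy;	/* consumer is still draining the ring */
	int kicks;		/* number of signals (i.e. exits) taken */
};

/* Producer side: enqueue one packet, signaling only on idle->busy. */
static void ring_produce(struct ring *r)
{
	r->pending++;
	if (!r->consumer_busy) {
		r->kicks++;		/* the expensive exit happens here */
		r->consumer_busy = true;
	}
}

/* Consumer side: drain everything queued so far, then go idle. */
static int ring_consume(struct ring *r)
{
	int n = r->pending;

	r->pending = 0;
	r->consumer_busy = false;
	return n;
}
```

Under back-to-back production this issues one kick for many packets,
while with an idle consumer every request kicks immediately, which is
the adaptive behavior being argued for here.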

>
> If you have a low latency request-response workload mixed with a high
> throughput workload, then you aren't going to get low latency since
> your low latency packets will sit on the queue behind the high
> throughput packets.  You can fix that with multiqueue and then you're
> back to one of the scenarios above.
Agreed, and that's ok.  Now we are getting more into 802.1p-type MQ
issues anyway, if the application cares about it that much.

>
>> I think most would agree that adding 2us is not bad, but so far that is
>> an unproven theory that the IO path in question only adds 2us.   And we
>> are not just looking at the rate at which we can enter and exit the
>> guest...we need the whole path...from the PIO kick to the dev_xmit() on
>> the egress hardware, to the ingress and rx-injection.  This includes any
>> and all penalties associated with the path, even if they are imposed by
>> something like the design of tun-tap.
>>   
>
> Correct, we need to look at the whole path.  That's why the wishing
> well is clogged with my 'give me a better userspace interface' emails.
>
>> Right now it's way way way worse than 2us.  In fact, at my last reading
>> this was more like 3060us (3125-65).  So shorten that 3125 to 67 (while
>> maintaining line-rate) and I will be impressed.  Heck, shorten it to
>> 80us and I will be impressed.
>>   
>
> The 3060us thing is a timer, not cpu time.
Agreed, but it's still "state of the art" from an observer's perspective.
The reason "why", though easily explainable, is inconsequential to most
people.  FWIW, I have seen virtio-net do a much more respectable 350us
on an older version, so I know there is plenty of room for improvement.

>   We aren't starting a JVM for each packet.
Heh...it kind of feels like that right now, so hopefully some
improvement here will at least be one thing that comes out of all this.

-Greg

