Message-ID: <49D4C191.2070502@redhat.com>
Date:	Thu, 02 Apr 2009 16:45:53 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Gregory Haskins <ghaskins@...ell.com>
CC:	Anthony Liguori <anthony@...emonkey.ws>,
	Andi Kleen <andi@...stfloor.org>, linux-kernel@...r.kernel.org,
	agraf@...e.de, pmullaney@...ell.com, pmorreale@...ell.com,
	rusty@...tcorp.com.au, netdev@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC PATCH 00/17] virtual-bus

Gregory Haskins wrote:
> Avi Kivity wrote:
>> Gregory Haskins wrote:
>>> Avi Kivity wrote:
>>>> My 'prohibitively expensive' is true only if you exit every packet.
>>> Understood, but you still need to do this if you want something like
>>> iSCSI READ transactions to have latency as low as possible.
>> Dunno, two microseconds is too much?  The wire imposes much more.
>
> No, but that's not what we are talking about.  You said signaling on
> every packet is prohibitively expensive.  I am saying signaling on every
> packet is required for decent latency.  So is it prohibitively expensive
> or not?

We're heading dangerously into the word-game area.  Let's not do that.

If you have a high-throughput workload with many packets per second,
then an exit per packet (whether to userspace or to the kernel) is
expensive.  So you do exit mitigation.  Latency is not important, since
the packets are going to sit in the output queue anyway.
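
To illustrate what exit mitigation looks like at the ring level, here is
a minimal sketch in C.  It assumes a virtio-style TX ring; all the names
(tx_ring, ring_push, pio_kick) are illustrative, not the actual vring
layout or API:

/* Sketch: suppress the guest->host notification (the exit) while the
 * host is already draining the ring.  Names are illustrative. */
struct tx_ring {
        unsigned int head;              /* next slot the guest fills */
        unsigned int tail;              /* next slot the host consumes */
        volatile int host_running;      /* host sets this while draining */
};

static void guest_xmit(struct tx_ring *ring, void *pkt)
{
        ring_push(ring, pkt);           /* hypothetical: queue the packet */
        wmb();                          /* publish the descriptor first */
        if (!ring->host_running)
                pio_kick();             /* host idle: one exit to wake it */
        /* otherwise the host picks the packet up on its next pass,
         * with no exit at all */
}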

If you have a request-response workload with the wire idle and latency 
critical, then there's no problem having an exit per packet because (a) 
there aren't that many packets and (b) the guest isn't doing any 
batching, so guest overhead will swamp the hypervisor overhead.
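
To put rough numbers on (a) and (b): the 2us exit cost is the figure
discussed above; the RTT and request rate below are illustrative
assumptions, not measurements.

    exit cost per packet    ~2us        (figure discussed above)
    assumed wire RTT        ~100us      (a plausible LAN number)
    latency added by exits  2 / 100     = ~2%
    cpu spent on exits      10,000 req/s * 2us = 20ms/s = ~2% of a core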

If you have a low-latency request-response workload mixed with a
high-throughput workload, then you aren't going to get low latency,
since your low-latency packets will sit in the queue behind the
high-throughput packets.  You can fix that with multiqueue, and then
you're back to one of the scenarios above.
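
A sketch of the multiqueue fix, with a hypothetical two-queue device and
classifier (none of these names correspond to an existing API):

enum { QUEUE_BULK, QUEUE_LOWLAT, NUM_QUEUES };

/* Hypothetical classifier: key on DSCP, port, or an application hint */
static int select_queue(const struct packet *pkt)
{
        return pkt->latency_sensitive ? QUEUE_LOWLAT : QUEUE_BULK;
}

static void xmit(struct netdev *dev, struct packet *pkt)
{
        struct tx_ring *ring = &dev->ring[select_queue(pkt)];

        ring_push(ring, pkt);
        /* low-latency ring: kick (exit) on every packet, scenario two;
         * bulk ring: exit mitigation, scenario one */
        maybe_kick(ring);
}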

> I think most would agree that adding 2us is not bad, but so far it is
> an unproven theory that the IO path in question adds only 2us.  And we
> are not just looking at the rate at which we can enter and exit the
> guest...we need the whole path...from the PIO kick to the dev_xmit() on
> the egress hardware, to the ingress and rx-injection.  This includes any
> and all penalties associated with the path, even if they are imposed by
> something like the design of tun-tap.

Correct, we need to look at the whole path.  That's why the wishing well 
is clogged with my 'give me a better userspace interface' emails.

> Right now it's way, way, way worse than 2us.  In fact, at my last reading
> this was more like 3060us (3125-65).  So shorten that 3125 to 67 (while
> maintaining line rate) and I will be impressed.  Heck, shorten it to
> 80us and I will be impressed.

The 3060us thing is a timer, not cpu time.  We aren't starting a JVM for
each packet.  We could remove it given a notification API, or by
duplicating the sched-and-forget thing, as Rusty did with lguest or
Mark with qemu.
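
For readers without the context: that timer is TX mitigation in the
backend.  Instead of flushing on every kick, it arms a timer and drains
the queue when it fires, so the 3060us is waiting, not work.  A rough
sketch of the pattern (the names and the notification-API alternative
are illustrative, not the actual qemu code):

static void tx_kick(struct vnet *dev)
{
        if (dev->mitigate) {
                /* batch: drain later, when the timer fires; cheap in
                 * exits, but adds up to TX_DELAY_US of latency */
                timer_arm(dev->tx_timer, now_us() + TX_DELAY_US);
        } else {
                /* with a notification/completion API we could flush
                 * immediately and still avoid an exit per packet */
                tx_flush(dev);
        }
}

static void tx_timer_fired(struct vnet *dev)
{
        tx_flush(dev);          /* drain everything queued meanwhile */
}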

-- 
error compiling committee.c: too many arguments to function

