Date:	Mon, 13 Sep 2010 12:40:11 -0500
From:	Anthony Liguori <anthony@...emonkey.ws>
To:	"Michael S. Tsirkin" <mst@...hat.com>
CC:	Krishna Kumar2 <krkumar2@...ibm.com>, davem@...emloft.net,
	kvm@...r.kernel.org, netdev@...r.kernel.org,
	Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: [RFC PATCH 1/4] Add a new API to virtio-pci

On 09/13/2010 11:30 AM, Michael S. Tsirkin wrote:
> On Mon, Sep 13, 2010 at 10:59:34AM -0500, Anthony Liguori wrote:
>    
>> On 09/13/2010 04:04 AM, Michael S. Tsirkin wrote:
>>      
>>> On Mon, Sep 13, 2010 at 09:50:42AM +0530, Krishna Kumar2 wrote:
>>>        
>>>> "Michael S. Tsirkin"<mst@...hat.com>   wrote on 09/12/2010 05:16:37 PM:
>>>>
>>>>          
>>>>> On Thu, Sep 09, 2010 at 07:19:33PM +0530, Krishna Kumar2 wrote:
>>>>>            
>>>>>> Unfortunately I need a
>>>>>> constant in vhost for now.
>>>>>>              
>>>>> Maybe not even that: you create multiple vhost-net
>>>>> devices so vhost-net in kernel does not care about these
>>>>> either, right? So this can be just part of vhost_net.h
>>>>> in qemu.
>>>>>            
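
A minimal sketch of what Michael is suggesting, assuming a hypothetical
name and value for the constant; since the kernel only ever sees the
individual vhost-net devices qemu opens, the queue count can live
entirely on the qemu side:

/* In qemu's vhost_net.h -- name and value are illustrative only;
 * no matching constant is needed on the kernel side. */
#define VHOST_NET_MAX_QUEUES 4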
>>>> Sorry, I didn't understand what you meant.
>>>>
>>>> I can remove all socks[] arrays/constants by pre-allocating
>>>> sockets in vhost_setup_vqs. Then I can remove all "socks"
>>>> parameters in vhost_net_stop, vhost_net_release and
>>>> vhost_net_reset_owner.
>>>>
>>>> Does this make sense?
>>>>
>>>> Thanks,
>>>>
>>>> - KK
>>>>          
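
A hedged sketch of the refactoring KK describes; the signatures are
assumptions based on the description above, not taken from the actual
patch:

/* Once vhost_setup_vqs() pre-allocates the sockets and stores them in
 * the device, the teardown paths no longer need a socks[] parameter. */
struct vhost_net;

long vhost_setup_vqs(struct vhost_net *n);       /* pre-allocates sockets */
void vhost_net_stop(struct vhost_net *n);        /* was (n, socks) */
void vhost_net_release(struct vhost_net *n);     /* was (n, socks) */
long vhost_net_reset_owner(struct vhost_net *n); /* was (n, socks) */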
>>> Here's what I mean: each vhost device includes 1 TX
>>> and 1 RX VQ. Instead of teaching vhost about multiqueue,
>>> we could simply open /dev/vhost-net multiple times.
>>> How many times would be up to qemu.
>>>        
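
A minimal userspace sketch of the approach Michael describes, using the
standard ioctls from linux/vhost.h; the queue count and the (absent)
error cleanup are illustrative only:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Open one vhost-net instance per TX/RX queue pair.  Each fd is an
 * independent vhost device with its own kernel worker, so the kernel
 * never has to learn about multiqueue. */
static int open_vhost_instances(int *fds, int nqueues)
{
    for (int i = 0; i < nqueues; i++) {
        fds[i] = open("/dev/vhost-net", O_RDWR);
        if (fds[i] < 0)
            return -1;
        if (ioctl(fds[i], VHOST_SET_OWNER, NULL) < 0)
            return -1;
    }
    return 0;
}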
>> Trouble is, each vhost-net device is associated with one tun/tap
>> device, which means that each vhost-net device is tied to exactly
>> one transmit and one receive queue.
>>
>> I don't know if you'll always have an equal number of transmit and
>> receive queues, but there's certainly a challenge in terms of
>> flexibility with this model.
>>
>> Regards,
>>
>> Anthony Liguori
>>      
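
For reference, this is roughly how that 1:1 association is established
from userspace today via VHOST_NET_SET_BACKEND; the helper name is an
assumption:

#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Bind one tap fd as the backend of both virtqueues of a vhost-net
 * device: index 0 is the RX ring, index 1 the TX ring.  This is what
 * ties one vhost-net instance to one tun/tap device and hence to
 * exactly one queue pair. */
static int bind_tap_backend(int vhost_fd, int tap_fd)
{
    struct vhost_vring_file file = { .fd = tap_fd };

    file.index = 0;                       /* RX virtqueue */
    if (ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &file) < 0)
        return -1;

    file.index = 1;                       /* TX virtqueue */
    if (ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &file) < 0)
        return -1;
    return 0;
}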
> Not really, TX and RX can be mapped to different devices,
>    

It's just a little odd.  Would you bond multiple tun/tap devices to 
achieve multi-queue TX?  For RX, do you somehow limit it to only one of 
those devices?

If we were doing this in QEMU (and btw, there need to be userspace 
patches before we implement this on the kernel side), I think it would 
make more sense to just rely on doing multithreaded writes to a single 
tun/tap device and then hope that it can be made smarter at the 
macvtap layer.
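
A minimal sketch of that alternative, assuming the tap fd is already
open and attached; the frame contents, context struct and thread count
are illustrative only:

#include <pthread.h>
#include <unistd.h>

struct tx_ctx {
    int tap_fd;   /* one shared tap device fd */
    int nframes;  /* frames this worker should send */
};

/* Several of these workers can run against the same tap fd: each
 * write() injects one complete Ethernet frame, so packets are
 * serialized per write by the tun/tap driver instead of by a
 * multiqueue-aware vhost. */
static void *tx_worker(void *arg)
{
    struct tx_ctx *ctx = arg;
    unsigned char frame[60] = {
        0xff, 0xff, 0xff, 0xff, 0xff, 0xff,  /* dst: broadcast */
        0x02, 0x00, 0x00, 0x00, 0x00, 0x01,  /* src: locally administered */
        0x08, 0x00                           /* EtherType: IPv4 (dummy) */
    };

    for (int i = 0; i < ctx->nframes; i++)
        if (write(ctx->tap_fd, frame, sizeof(frame)) < 0)
            break;
    return NULL;
}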

Regards,

Anthony Liguori

> or you can only map one of these. What is the trouble?
> What other features would you desire in terms of flexibility?
>
>    

