Message-ID: <20090818114914.GA17721@redhat.com>
Date:	Tue, 18 Aug 2009 14:49:14 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Avi Kivity <avi@...hat.com>
Cc:	Gregory Haskins <gregory.haskins@...il.com>,
	Ingo Molnar <mingo@...e.hu>,
	Gregory Haskins <ghaskins@...ell.com>, kvm@...r.kernel.org,
	alacrityvm-devel@...ts.sourceforge.net,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH v3 3/6] vbus: add a "vbus-proxy" bus model for
	vbus_driver objects

On Tue, Aug 18, 2009 at 02:15:57PM +0300, Avi Kivity wrote:
> On 08/18/2009 02:07 PM, Michael S. Tsirkin wrote:
>> On Tue, Aug 18, 2009 at 01:45:05PM +0300, Avi Kivity wrote:
>>    
>>> On 08/18/2009 01:28 PM, Michael S. Tsirkin wrote:
>>>      
>>>>        
>>>>> Suppose a nested guest has two devices: one a virtual device backed by
>>>>> its host (our guest), and one a virtual device backed by us (the real
>>>>> host) and assigned by the guest to the nested guest.  If both devices
>>>>> use hypercalls, there is no way to distinguish between them.
>>>>>
>>>>>          
>>>> Not sure I understand. What I had in mind is that devices would have to
>>>> either use different hypercalls and map the hypercall to an address during
>>>> setup, or pass the address with each hypercall.  We get the hypercall,
>>>> translate the address as if it were pio access, and know the destination?
>>>>
>>>>        
>>> There are no different hypercalls.  There's just one hypercall
>>> instruction, and there's no standard on how it's used.  If a nested guest
>>> issues a hypercall instruction, you have no idea if it's calling a
>>> Hyper-V hypercall or a vbus/virtio kick.
>>>      
>> userspace will know which it is, because the hypercall capability
>> in the device has been activated, and can tell the kernel, using
>> something similar to iosignalfd. No?
>>    
>
> The host kernel sees a hypercall vmexit.  How does it know if it's a  
> nested-guest-to-guest hypercall or a nested-guest-to-host hypercall?   
> The two are equally valid at the same time.

Here is how this can work - it is similar to MSI if you like (a toy
sketch follows the list):
- by default, the device uses pio kicks
- the nested guest driver can enable the hypercall capability in the device,
  probably with a pci config cycle
- guest userspace (the hypervisor running in the guest) will see this request
  and perform a pci config cycle on the "real" device, telling it which
  nested guest this device is assigned to
- host userspace (the hypervisor running in the host) will see this.
  It now knows both which guest the hypercalls will be for
  and that the device in question is an emulated one,
  and can set up kvm appropriately
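
To make this concrete, here is a toy user-space sketch of the four steps.
Everything in it - the capability layout, the field names, the kick_token
idea - is invented for illustration; it is not an existing kvm, pci or
vbus interface:

/*
 * Hypothetical sketch of the MSI-like negotiation above.  All names
 * and layouts are made up for illustration.
 */
#include <stdint.h>
#include <stdio.h>

/* vendor-specific pci capability the emulated device would expose */
struct hypercall_cap {
	uint8_t  cap_id;	/* PCI_CAP_ID_VNDR */
	uint8_t  cap_next;	/* next capability pointer */
	uint8_t  enable;	/* nested guest driver sets this to 1 */
	uint8_t  pad;
	uint32_t kick_token;	/* token passed with each hypercall kick */
};

/* step 2: nested guest driver enables hypercall kicks
 * (a pci config write, trapped by the guest hypervisor) */
static void nested_guest_enable(struct hypercall_cap *cap)
{
	cap->enable = 1;
}

/* step 3: guest hypervisor sees the config write and tells the "real"
 * device which nested guest the token belongs to */
static void guest_hv_forward(const struct hypercall_cap *cap, int nested_id)
{
	printf("guest hv: token %u assigned to nested guest %d\n",
	       cap->kick_token, nested_id);
}

/* step 4: host hypervisor sees this, so it knows both which guest the
 * hypercalls will be for and that the device is emulated, and can set
 * up kvm accordingly */
static void host_hv_setup(uint32_t token, int owner)
{
	printf("host hv: route kicks with token %u to guest %d\n",
	       token, owner);
}

int main(void)
{
	struct hypercall_cap cap = { .cap_id = 0x09, .kick_token = 42 };

	nested_guest_enable(&cap);
	guest_hv_forward(&cap, 1);
	host_hv_setup(cap.kick_token, 1);
	return 0;
}

The point is only that, as with MSI, the kick mechanism is negotiated
through config space, so each hypervisor level learns about it in turn.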


>>> You could have a protocol where you register the hypercall instruction's
>>> address with its recipient, but it quickly becomes a tangled mess.
>>>      
>> I really thought we could pass the io address in a register as an input
>> parameter. Is there a way to do this in a secure manner?
>>
>> Hmm. Doesn't kvm use hypercalls now? How does this work with nesting?
>> For example, in this code in arch/x86/kvm/x86.c:
>>
>>          switch (nr) {
>>          case KVM_HC_VAPIC_POLL_IRQ:
>>                  ret = 0;
>>                  break;
>>          case KVM_HC_MMU_OP:
>>                  r = kvm_pv_mmu_op(vcpu, a0, hc_gpa(vcpu, a1, a2), &ret);
>>                  break;
>>          default:
>>                  ret = -KVM_ENOSYS;
>>                  break;
>>          }
>>
>> how do we know that it's the guest and not the nested guest performing
>> the hypercall?
>>    
>
> The host knows whether the guest or nested guest are running.  If the  
> guest is running, it's a guest-to-host hypercall.  If the nested guest  
> is running, it's a nested-guest-to-guest hypercall.  We don't have  
> nested-guest-to-host hypercalls (and couldn't unless we get agreement on  
> a protocol from all hypervisor vendors).

Not necessarily. What I am saying is that we could make this protocol part
of the guest paravirt driver. The guest that loads the driver and enables
the capability has to agree to the protocol. If it doesn't want to, it does
not have to use that driver.
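
To illustrate the resulting dispatch rule, here is a minimal sketch,
assuming the negotiation described earlier. The predicate names are
invented, and real nested kvm would reflect the exit to the guest
hypervisor through the virtual vmcs controls rather than test a flag
like this:

#include <stdbool.h>

enum hc_dest { HC_TO_HOST, HC_TO_GUEST };

/* decide who consumes a hypercall vmexit */
static enum hc_dest route_hypercall(bool nested_guest_running,
				    bool host_owns_token)
{
	if (!nested_guest_running)
		return HC_TO_HOST;	/* ordinary guest-to-host hypercall */

	/*
	 * A nested guest issued the hypercall.  By default it is a
	 * nested-guest-to-guest call and is reflected to the guest
	 * hypervisor; only when the driver enabled the capability (so
	 * the guest told the host which tokens it delegated) does the
	 * host consume it directly.
	 */
	return host_owns_token ? HC_TO_HOST : HC_TO_GUEST;
}

The default keeps today's behaviour; only kicks the guest explicitly
delegated are consumed by the host.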

> -- 
> error compiling committee.c: too many arguments to function