Date: Wed, 26 Jun 2024 10:37:36 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: Dorjoy Chowdhury <dorjoychy111@...il.com>
Cc: Alexander Graf <graf@...zon.com>, Paolo Bonzini <pbonzini@...hat.com>, 
	Alexander Graf <agraf@...raf.de>, virtualization@...ts.linux.dev, kvm@...r.kernel.org, 
	netdev@...r.kernel.org, stefanha@...hat.com
Subject: Re: How to implement message forwarding from one CID to another in
 vhost driver

Hi Dorjoy,

On Tue, Jun 25, 2024 at 11:44:30PM GMT, Dorjoy Chowdhury wrote:
>Hey Stefano,

[...]

>> >
>> >So the immediate plan would be to:
>> >
>> >  1) Build a new vhost-vsock-forward object model that connects to
>> >vhost as CID 3 and then forwards every packet from CID 1 to the
>> >Enclave-CID and every packet that arrives on to CID 3 to CID 2.
>>
>> This though requires writing completely from scratch the virtio-vsock
>> emulation in QEMU. If you have time that would be great, otherwise if
>> you want to do a PoC, my advice is to start with vhost-user-vsock which
>> is already there.
>>
>
>Can you give me some more details about how I can implement the
>daemon? 

We already have a daemon written in Rust, so I don't recommend 
rewriting one from scratch; just start with that. You can find the 
daemon and instructions on how to use it with QEMU here [1].

>I would appreciate some pointers to code too.

I sent the pointer to it in my first reply [2].

>
>Right now, the "nitro-enclave" machine type (wip) in QEMU
>automatically spawns a VHOST_VSOCK device with the CID equal to the
>"guest-cid" machine option. I think this is equivalent to using the
>"-device vhost-vsock-device,guest-cid=N" option explicitly. Does that
>need any change? I guess instead of "vhost-vsock-device", the
>vhost-vsock device needs to be equivalent to "-device
>vhost-user-vsock-device,guest-cid=N"?

Nope, the vhost-user-vsock device requires just a `chardev` option.
The chardev points to the Unix socket used by QEMU to talk with the 
daemon. The daemon has a parameter to set the CID. See [1] for the 
examples.

>
>The applications inside the nitro-enclave VM will still connect and
>talk to CID 3. So on the daemon side, do we need to spawn a device
>that has CID 3 and then forward everything this device receives to CID
>1 (VMADDR_CID_LOCAL) same port and everything it receives from CID 1
>to the "guest-cid"? 

Yep, I think this is right.
Note: to use VMADDR_CID_LOCAL, the host needs to load the 
`vsock_loopback` kernel module.

Before modifying the code, if you want to do some testing, perhaps you 
can use socat (which supports both UNIX-* and VSOCK-*). For now the 
daemon exposes two unix sockets: one is used to communicate with QEMU 
via the vhost-user protocol, and the other is used by host applications 
to communicate with vsock sockets in the guest using the hybrid 
protocol defined by Firecracker. So you could run socat between the 
latter and VMADDR_CID_LOCAL; the only problem I see is that you have to 
send the first string required by the hybrid protocol (CONNECT 1234), 
but for a PoC it should be okay.

I just tried the following and it works without touching any code:

shell1$ ./target/debug/vhost-device-vsock \
     --vm guest-cid=3,socket=/tmp/vhost3.socket,uds-path=/tmp/vm3.vsock

shell2$ sudo modprobe vsock_loopback
shell2$ socat VSOCK-LISTEN:1234 UNIX-CONNECT:/tmp/vm3.vsock

shell3$ qemu-system-x86_64 -smp 2 -M q35,accel=kvm,memory-backend=mem \
     -drive file=fedora40.qcow2,format=qcow2,if=virtio \
     -chardev socket,id=char0,path=/tmp/vhost3.socket \
     -device vhost-user-vsock-pci,chardev=char0 \
     -object memory-backend-memfd,id=mem,size=512M \
     -nographic

     guest$ nc --vsock -l 1234

shell4$ nc --vsock 1 1234
CONNECT 1234

     Note: the `CONNECT 1234` is required by the hybrid vsock protocol 
     defined by Firecracker, so if we extend the vhost-device-vsock 
     daemon to forward packets to VMADDR_CID_LOCAL, that would not be 
     needed (nor would running socat).


This is just an example of how to use loopback; if from the VM you 
want to connect to a CID other than 2, then we have to modify the 
daemon to support that.
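
If it helps to make that concrete, here is roughly what the socat + 
manual `CONNECT 1234` above is doing, collapsed into one small program. 
Just an untested sketch with error handling mostly omitted; the 
uds-path and port match the example above:

/* Untested sketch: listen on vsock port 1234 (reachable from
 * VMADDR_CID_LOCAL once vsock_loopback is loaded) and bridge each
 * connection to the daemon's hybrid unix socket (uds-path), doing
 * the Firecracker hybrid handshake ourselves. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <linux/vm_sockets.h>

int main(void)
{
    struct sockaddr_vm vsa = { .svm_family = AF_VSOCK,
                               .svm_cid = VMADDR_CID_ANY,
                               .svm_port = 1234 };
    struct sockaddr_un usa = { .sun_family = AF_UNIX,
                               .sun_path = "/tmp/vm3.vsock" };
    char buf[4096];
    int ls = socket(AF_VSOCK, SOCK_STREAM, 0);

    bind(ls, (struct sockaddr *)&vsa, sizeof(vsa));
    listen(ls, 1);

    for (;;) {
        int vs = accept(ls, NULL, NULL);
        int us = socket(AF_UNIX, SOCK_STREAM, 0);

        connect(us, (struct sockaddr *)&usa, sizeof(usa));
        /* Hybrid vsock handshake: ask for guest port 1234 and
         * swallow the daemon's "OK <port>\n" reply. */
        dprintf(us, "CONNECT 1234\n");
        read(us, buf, sizeof(buf));

        /* Shuttle bytes both ways until one side closes. */
        struct pollfd pfd[2] = { { .fd = vs, .events = POLLIN },
                                 { .fd = us, .events = POLLIN } };
        for (;;) {
            ssize_t n;

            if (poll(pfd, 2, -1) < 0)
                break;
            if (pfd[0].revents &&
                ((n = read(vs, buf, sizeof(buf))) <= 0 ||
                 write(us, buf, n) != n))
                break;
            if (pfd[1].revents &&
                ((n = read(us, buf, sizeof(buf))) <= 0 ||
                 write(vs, buf, n) != n))
                break;
        }
        close(vs);
        close(us);
    }
}

With something like that running in place of the socat in shell2, the 
`nc --vsock 1 1234` step no longer needs the manual CONNECT line.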

>The applications that will be running in the host
>need to be changed so that instead of connecting to the "guest-cid" of
>the nitro-enclave VM, they will instead connect to VMADDR_CID_LOCAL.
>Is my understanding correct?

Yep.
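On the host side it's just a different CID in `sockaddr_vm`, something 
like this (a minimal sketch, using port 1234 as in the example above):

#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Host application: connect to VMADDR_CID_LOCAL (CID 1) instead of
 * the enclave's guest-cid; requires vsock_loopback to be loaded. */
int connect_local(unsigned int port)
{
    struct sockaddr_vm sa = {
        .svm_family = AF_VSOCK,
        .svm_cid    = VMADDR_CID_LOCAL, /* CID 1 */
        .svm_port   = port,
    };
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

    if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        return -1;
    return fd;
}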

>
>BTW is there anything related to the "VMADDR_FLAG_TO_HOST" flag that
>needs to be checked? I remember some discussion about it.

No, that flag is handled by the driver. If that flag is set, the 
driver forwards the packet to the host, regardless of the destination 
CID. So it has to be set by the application in the guest, but it 
should already do that, since that flag was introduced just for Nitro 
Enclaves.
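
For completeness, this is what the guest side looks like; a minimal 
sketch (`svm_flags` and VMADDR_FLAG_TO_HOST come from 
<linux/vm_sockets.h>):

#include <string.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Guest application: connect to some CID, telling the guest driver
 * to forward the packet to the host regardless of the destination
 * CID (VMADDR_FLAG_TO_HOST). */
int connect_via_host(unsigned int cid, unsigned int port)
{
    struct sockaddr_vm sa;
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

    memset(&sa, 0, sizeof(sa));
    sa.svm_family = AF_VSOCK;
    sa.svm_cid = cid;
    sa.svm_port = port;
    sa.svm_flags = VMADDR_FLAG_TO_HOST;

    if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        return -1;
    return fd;
}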

>
>It would be great if you could give me some details about how I can
>achieve the CID 3 <-> CID 2 communication using the vhost-user-vsock.

CID 3 <-> CID 2 is the standard use case, right?
The readme in [1] contains several examples, let me know if you need 
more details ;-)

>Is this https://github.com/stefano-garzarella/vhost-user-vsock where I
>would need to add support for forwarding everything to
>VMADDR_CID_LOCAL via an option maybe?

Nope, that one was a PoC and the repo is archived; the daemon is [1].
BTW, I agree on adding an option for the forwarding.

Thanks,
Stefano

[1] https://github.com/rust-vmm/vhost-device/tree/main/vhost-device-vsock
[2] https://lore.kernel.org/virtualization/CAFfO_h5_uAwdNJB=fjrxb_pPiwRDQxaZn=OvR3yrYd+c18tUdQ@mail.gmail.com/T/#m4a50f94a5329cd262412437ac80a4f406404bf20

