Message-ID: <4i525r6irzjgibqqtrs3qzofqfifws2k3fmzotg37pyurs5wkd@js54ugamyyin>
Date: Mon, 20 May 2024 10:55:26 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: Dorjoy Chowdhury <dorjoychy111@...il.com>
Cc: virtualization@...ts.linux.dev, kvm@...r.kernel.org, 
	netdev@...r.kernel.org, Alexander Graf <graf@...zon.com>, agraf@...raf.de, 
	stefanha@...hat.com
Subject: Re: How to implement message forwarding from one CID to another in
 vhost driver

Hi Dorjoy,

On Sat, May 18, 2024 at 04:17:38PM GMT, Dorjoy Chowdhury wrote:
>Hi,
>
>Hope you are doing well. I am working on adding AWS Nitro Enclave[1]
>emulation support in QEMU. Alexander Graf is mentoring me on this work. A v1
>patch series has already been posted to the qemu-devel mailing list[2].
>
>AWS Nitro Enclaves is an Amazon EC2[3] feature that allows creating isolated
>execution environments, called enclaves, from Amazon EC2 instances, which are
>used for processing highly sensitive data. Enclaves have no persistent storage
>and no external networking. The enclave VMs are based on the Firecracker
>microVM and have a vhost-vsock device for communication with the parent EC2
>instance that spawned them and a Nitro Secure Module (NSM) device for
>cryptographic attestation. The parent instance VM always has CID 3 while the
>enclave VM gets a dynamic CID. The enclave VMs can communicate with the parent
>instance over various ports to CID 3; for example, the init process inside an
>enclave sends a heartbeat to port 9000 upon boot, expecting a heartbeat reply,
>letting the parent instance know that the enclave VM has successfully booted.
>
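
A side note on the host side: such a heartbeat peer is just an AF_VSOCK
listener. A minimal sketch (port 9000 is from your description; echoing
the payload back as the reply is my assumption):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	int lsock = socket(AF_VSOCK, SOCK_STREAM, 0);
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_ANY,	/* accept from any guest CID */
		.svm_port = 9000,		/* heartbeat port described above */
	};

	if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(lsock, 1) < 0) {
		perror("bind/listen");
		return 1;
	}

	for (;;) {
		int conn = accept(lsock, NULL, NULL);
		char buf[64];
		ssize_t n = read(conn, buf, sizeof(buf));

		/* echo the heartbeat back to the enclave's init */
		if (n > 0)
			write(conn, buf, n);
		close(conn);
	}
}
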
>The plan is to eventually make the nitro enclave emulation in QEMU standalone,
>i.e., without needing to run another VM with CID 3 with proper vsock

If you don't have to launch another VM, maybe we can avoid vhost-vsock 
and emulate virtio-vsock in user-space, giving us complete control over 
its behavior.

So we could use this opportunity to implement virtio-vsock in QEMU [4] 
or use vhost-user-vsock [5] and customize it somehow.
(Note: vhost-user-vsock already supports sibling communication, so maybe 
with a few modifications it fits your case perfectly)

[4] https://gitlab.com/qemu-project/qemu/-/issues/2095
[5] https://github.com/rust-vmm/vhost-device/tree/main/vhost-device-vsock

>communication support. For this to work, one approach could be to teach the
>vhost driver in kernel to forward CID 3 messages to another CID N

So in this case both CID 3 and N would be assigned to the same QEMU
process?

Do you have to allocate two separate virtio-vsock devices, one for the 
parent and one for the enclave?

>(set to CID 2 for host) i.e., it patches CID from 3 to N on incoming messages
>and from N to 3 on responses. This will enable users of the
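
If I understand the proposal correctly, the rewrite itself would
conceptually be something like this (purely illustrative, written over
struct virtio_vsock_hdr from <linux/virtio_vsock.h>; the fwd_rewrite_*
helpers are made-up names, not existing vhost-vsock code):

#include <linux/virtio_vsock.h>

#define PARENT_CID	3

/* guest to host: redirect packets the enclave sent to CID 3 */
static void fwd_rewrite_tx(struct virtio_vsock_hdr *hdr, u64 parent_cid)
{
	if (le64_to_cpu(hdr->dst_cid) == PARENT_CID)
		hdr->dst_cid = cpu_to_le64(parent_cid);
}

/* host to guest: make replies from CID N look like they came from CID 3 */
static void fwd_rewrite_rx(struct virtio_vsock_hdr *hdr, u64 parent_cid)
{
	if (le64_to_cpu(hdr->src_cid) == parent_cid)
		hdr->src_cid = cpu_to_le64(PARENT_CID);
}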

Will these messages have the VMADDR_FLAG_TO_HOST flag set?

We don't support this in vhost-vsock yet. If supporting it would help, we 
could, but we need to better understand how to avoid security issues; maybe 
each device should have to explicitly enable the feature and specify which 
CIDs it accepts packets from.
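
For context, VMADDR_FLAG_TO_HOST is the flag a guest sets in struct
sockaddr_vm to ask that a packet addressed to a sibling CID be routed to
the host instead of being dropped. A minimal sketch of the guest side,
assuming a kernel and headers with vsock flag support (the CID and port
values are placeholders):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = 42,				/* placeholder sibling CID */
		.svm_port = 9000,			/* placeholder port */
		.svm_flags = VMADDR_FLAG_TO_HOST,	/* route via the host */
	};

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}
	/* ... then use it like any other stream socket ... */
	return 0;
}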

>nitro-enclave machine
>type in QEMU to run the necessary vsock servers/clients on the host machine
>(some defaults can be implemented in QEMU as well, for example, sending a reply
>to the heartbeat), which will rid them of the cumbersome requirement of running
>another whole VM with CID 3. This way, users of the nitro-enclave machine in
>QEMU could potentially also run multiple enclaves, with their messages for CID 3
>forwarded to different CIDs which, on the QEMU side, could then be specified
>using a new machine type option (parent-cid) if implemented. I guess on the QEMU
>side, this will be an ioctl call (or some other way) to indicate to the host
>kernel that the CID 3 messages need to be forwarded. Does this approach of

What if there is already a VM with CID = 3 in the system?

>forwarding CID 3 messages to another CID sound good?

It seems like too specific a case. If we can generalize it, maybe we could 
make this change, but we would like to avoid complicating vhost-vsock and 
keep it as simple as possible, so we don't end up having to implement 
firewalls, etc.

So first I would see if vhost-user-vsock or the QEMU built-in device is 
right for this use-case.

Thanks,
Stefano

>
>If this approach sounds good, I need some guidance on where the code
>should be written in order to achieve this. I would greatly appreciate
>any suggestions.
>
>Thanks.
>
>Regards,
>Dorjoy
>
>[1] https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave.html
>[2] https://mail.gnu.org/archive/html/qemu-devel/2024-05/msg03524.html
>[3] https://aws.amazon.com/ec2/
>

