Date:   Sat, 25 Apr 2020 18:25:21 +0300
From:   Liran Alon <liran.alon@...cle.com>
To:     "Paraschiv, Andra-Irina" <andraprs@...zon.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        linux-kernel@...r.kernel.org
Cc:     Anthony Liguori <aliguori@...zon.com>,
        Benjamin Herrenschmidt <benh@...zon.com>,
        Colm MacCarthaigh <colmmacc@...zon.com>,
        Bjoern Doebel <doebel@...zon.de>,
        David Woodhouse <dwmw@...zon.co.uk>,
        Frank van der Linden <fllinden@...zon.com>,
        Alexander Graf <graf@...zon.de>,
        Martin Pohlack <mpohlack@...zon.de>,
        Matt Wilson <msw@...zon.com>, Balbir Singh <sblbir@...zon.com>,
        Stewart Smith <trawets@...zon.com>,
        Uwe Dannowski <uwed@...zon.de>, kvm@...r.kernel.org,
        ne-devel-upstream@...zon.com
Subject: Re: [PATCH v1 00/15] Add support for Nitro Enclaves


On 23/04/2020 16:19, Paraschiv, Andra-Irina wrote:
>
> The memory and CPUs are carved out of the primary VM; they are
> dedicated for the enclave. The Nitro hypervisor running on the host
> ensures memory and CPU isolation between the primary VM and the
> enclave VM.
I hope you properly take Hyper-Threading speculative side-channel
vulnerabilities into consideration here.
That is, cloud providers usually designate each CPU core to run only
vCPUs of a specific guest, to avoid sharing a single CPU core between
multiple guests.
To handle this properly, you need some kind of core-scheduling
mechanism, such that at any given point in time each CPU core runs
either only vCPUs of the enclave or only vCPUs of the primary VM.
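
For illustration, here is a minimal sketch of how a VMM could tag its
vCPU threads with a shared core-scheduling cookie. Note this uses the
prctl(PR_SCHED_CORE) interface that only landed upstream later (in
Linux 5.14), and the vCPU thread ids here are placeholders:

#include <sys/types.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

/* Sketch: give all vCPU threads of one VM the same core-scheduling
 * cookie, so the scheduler never runs them on a sibling SMT thread
 * together with tasks holding a different cookie.
 * Requires CONFIG_SCHED_CORE (Linux 5.14+). */
static int tag_vm_threads(const pid_t *vcpu_tids, int nr_vcpus)
{
	int i;

	/* Create one cookie for the calling (VMM) thread... */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
		  PR_SCHED_CORE_SCOPE_THREAD, 0))
		return -1;

	/* ...then push that cookie to every vCPU thread of this VM. */
	for (i = 0; i < nr_vcpus; i++)
		if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO,
			  vcpu_tids[i], PR_SCHED_CORE_SCOPE_THREAD, 0))
			return -1;

	return 0;
}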

In addition, can you elaborate more on how the enclave memory is carved
out of the primary VM?
Does this involve performing a memory hot-unplug operation from the
primary VM, or just unmapping the enclave-assigned guest physical pages
from the primary VM's SLAT (EPT/NPT) and mapping them only in the
enclave's SLAT?
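
For reference, the latter would roughly correspond to the VMM deleting
a KVM memslot; per the KVM API, KVM_SET_USER_MEMORY_REGION with
memory_size == 0 removes the slot and its SLAT mappings. A hedged
sketch, with vm_fd, the slot id and the addresses as placeholders:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: drop a guest-physical range from the primary VM. Setting
 * memory_size to 0 deletes the memslot, which tears down the matching
 * EPT/NPT mappings; the backing pages could then be mapped only into
 * the enclave VM. Values here are illustrative, not from this series. */
static int delete_memslot(int vm_fd, unsigned int slot, __u64 gpa)
{
	struct kvm_userspace_memory_region region;

	memset(&region, 0, sizeof(region));
	region.slot = slot;
	region.guest_phys_addr = gpa;
	region.memory_size = 0;	/* size 0 == delete this slot */

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}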

>
> Let me know if further clarifications are needed.
>
I don't quite understand why the Enclave VM needs to be provisioned and
torn down during the primary VM's runtime.

For example, an alternative could have been to just provision both the
primary VM and the Enclave VM at primary VM startup.
Then, wait for the primary VM to set up a communication channel with
the Enclave VM (e.g. via virtio-vsock).
Then, the primary VM is free to request the Enclave VM to perform
various tasks in the isolated environment whenever required.
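
To make the channel concrete, here is a minimal AF_VSOCK connect as
the primary VM side might do it; the enclave CID and port below are
made-up placeholders, not values defined by this patch series:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Sketch: primary VM opens a stream socket to the enclave over
 * virtio-vsock. ENCLAVE_CID/ENCLAVE_PORT are illustrative only. */
#define ENCLAVE_CID	16
#define ENCLAVE_PORT	9000

static int connect_to_enclave(void)
{
	struct sockaddr_vm addr;
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.svm_family = AF_VSOCK;
	addr.svm_cid = ENCLAVE_CID;
	addr.svm_port = ENCLAVE_PORT;

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr))) {
		close(fd);
		return -1;
	}

	return fd;	/* ready for request/reply traffic */
}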

Such a setup would mimic a common enclave design, such as Microsoft
Windows VBS EPT-based enclaves (which all run in VTL1). It is also
similar to TEEs running on ARM TrustZone.
That is, in my proposed alternative solution, the Enclave VM plays a
role similar to VTL1/TrustZone.
It would also avoid having to introduce a new PCI device and driver.

-Liran

