Date:   Mon, 27 Apr 2020 10:56:31 +0300
From:   "Paraschiv, Andra-Irina" <andraprs@...zon.com>
To:     Liran Alon <liran.alon@...cle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        <linux-kernel@...r.kernel.org>
CC:     Anthony Liguori <aliguori@...zon.com>,
        Benjamin Herrenschmidt <benh@...zon.com>,
        Colm MacCarthaigh <colmmacc@...zon.com>,
        Bjoern Doebel <doebel@...zon.de>,
        David Woodhouse <dwmw@...zon.co.uk>,
        Frank van der Linden <fllinden@...zon.com>,
        Alexander Graf <graf@...zon.de>,
        Martin Pohlack <mpohlack@...zon.de>,
        Matt Wilson <msw@...zon.com>, Balbir Singh <sblbir@...zon.com>,
        Stewart Smith <trawets@...zon.com>,
        Uwe Dannowski <uwed@...zon.de>, <kvm@...r.kernel.org>,
        <ne-devel-upstream@...zon.com>
Subject: Re: [PATCH v1 00/15] Add support for Nitro Enclaves



On 25/04/2020 18:25, Liran Alon wrote:
>
> On 23/04/2020 16:19, Paraschiv, Andra-Irina wrote:
>>
>> The memory and CPUs are carved out of the primary VM; they are 
>> dedicated to the enclave. The Nitro hypervisor running on the host 
>> ensures memory and CPU isolation between the primary VM and the 
>> enclave VM.
> I hope you properly take into consideration Hyper-Threading 
> speculative side-channel vulnerabilities here.
> i.e. Cloud providers usually designate each CPU core to run only 
> vCPUs of a specific guest, to avoid sharing a single CPU core 
> between multiple guests.
> To handle this properly, you need some kind of core-scheduling 
> mechanism (such that each CPU core runs either only vCPUs of the 
> enclave or only vCPUs of the primary VM at any given point in time).
>
> In addition, can you elaborate more on how the enclave memory is 
> carved out of the primary VM?
> Does this involve performing a memory hot-unplug operation from the 
> primary VM, or just unmapping enclave-assigned guest physical pages 
> from the primary VM's SLAT (EPT/NPT) and mapping them only in the 
> enclave's SLAT?


Correct, we take the HT setup into consideration. The enclave gets 
dedicated physical cores. The primary VM and the enclave VM never run 
on sibling hyperthreads of the same physical core.
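
For reference, the sibling layout can be checked from user space via 
the standard sysfs CPU topology files; a minimal sketch (not part of 
this patch series) that prints the hyper-threaded siblings of a given 
CPU:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	char path[128], line[64];
	int cpu = (argc > 1) ? atoi(argv[1]) : 0;
	FILE *f;

	/* Standard Linux sysfs topology interface. */
	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
		 cpu);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("cpu%d siblings: %s", cpu, line);
	fclose(f);
	return 0;
}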

Regarding the memory carve-out, the logic includes page table entry 
handling.

IIRC, memory hot-unplug can be used for the memory blocks that were 
previously hot-plugged.

https://www.kernel.org/doc/html/latest/admin-guide/mm/memory-hotplug.html
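
For illustration, offlining a previously hot-plugged memory block goes 
through the generic sysfs interface covered in the doc above; a minimal 
sketch of that generic admin path (this is not the carve-out logic from 
this patch series):

#include <stdio.h>

int main(int argc, char **argv)
{
	char path[128];
	FILE *f;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <memory-block-id>\n", argv[0]);
		return 1;
	}
	/* e.g. /sys/devices/system/memory/memory32/state; needs root
	 * and the block has to be removable. */
	snprintf(path, sizeof(path),
		 "/sys/devices/system/memory/memory%s/state", argv[1]);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fputs("offline", f);
	fclose(f);
	return 0;
}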

>
>>
>> Let me know if further clarifications are needed.
>>
> I don't quite understand why the Enclave VM needs to be 
> provisioned/torn down during the primary VM's runtime.
>
> For example, an alternative could have been to just provision both 
> the primary VM and the Enclave VM on primary VM startup.
> Then, wait for the primary VM to set up a communication channel with 
> the Enclave VM (e.g. via virtio-vsock).
> Then, the primary VM is free to request the Enclave VM to perform 
> various tasks when required in the isolated environment.
>
> Such a setup would mimic a common enclave setup, such as Microsoft 
> Windows VBS EPT-based Enclaves (which all run on VTL1). It is also 
> similar to TEEs running on ARM TrustZone.
> i.e. In my alternative proposed solution, the Enclave VM is similar 
> to VTL1/TrustZone.
> It would also avoid introducing a new PCI device and driver.

True, this can be another option: to provision the primary VM and the 
enclave VM at launch time.

In the proposed setup, the primary VM starts with its initially 
allocated resources (memory, CPUs). The enclave VM, as it's spawned on 
the same host, is launched via the ioctl interface - PCI device - host 
hypervisor path. Short-running or long-running enclaves can be 
bootstrapped during the primary VM's lifetime. Depending on the use 
case, a custom set of resources (memory and CPUs) is assigned to an 
enclave and then given back when the enclave is terminated; these 
resources can be reused for another enclave spawned later on or for 
the primary VM's tasks.
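
Roughly, the user space side of that launch path looks like the sketch 
below. Note that the device path, request codes and call flow here are 
placeholders for illustration only; the actual UAPI is the one defined 
in the patches.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <unistd.h>

/* Hypothetical request codes, for illustration only. */
#define SKETCH_CREATE_ENCLAVE	_IO('N', 1)
#define SKETCH_ADD_MEM		_IO('N', 2)
#define SKETCH_ADD_VCPU		_IO('N', 3)
#define SKETCH_START		_IO('N', 4)

int main(void)
{
	int dev_fd, enclave_fd;

	/* Talk to the host hypervisor via the device exposed in the
	 * primary VM (placeholder path). */
	dev_fd = open("/dev/nitro_enclaves", O_RDWR | O_CLOEXEC);
	if (dev_fd < 0) {
		perror("open");
		return 1;
	}

	/* Create the enclave VM; an fd tracks the enclave lifetime. */
	enclave_fd = ioctl(dev_fd, SKETCH_CREATE_ENCLAVE, 0);
	if (enclave_fd < 0) {
		perror("create enclave");
		close(dev_fd);
		return 1;
	}

	/* Donate memory and CPUs carved out of the primary VM, then
	 * start the enclave (placeholder requests). */
	ioctl(enclave_fd, SKETCH_ADD_MEM, 0);
	ioctl(enclave_fd, SKETCH_ADD_VCPU, 0);
	ioctl(enclave_fd, SKETCH_START, 0);

	/* Closing the enclave fd terminates it; the resources go back
	 * to the primary VM and can be reused later on. */
	close(enclave_fd);
	close(dev_fd);
	return 0;
}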

Thanks,
Andra

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
