Message-ID: <20200907085818.GB1101646@kroah.com>
Date:   Mon, 7 Sep 2020 10:58:18 +0200
From:   Greg KH <gregkh@...uxfoundation.org>
To:     Andra Paraschiv <andraprs@...zon.com>
Cc:     linux-kernel <linux-kernel@...r.kernel.org>,
        Anthony Liguori <aliguori@...zon.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Colm MacCarthaigh <colmmacc@...zon.com>,
        David Duncan <davdunc@...zon.com>,
        Bjoern Doebel <doebel@...zon.de>,
        David Woodhouse <dwmw@...zon.co.uk>,
        Frank van der Linden <fllinden@...zon.com>,
        Alexander Graf <graf@...zon.de>, Karen Noel <knoel@...hat.com>,
        Martin Pohlack <mpohlack@...zon.de>,
        Matt Wilson <msw@...zon.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Balbir Singh <sblbir@...zon.com>,
        Stefano Garzarella <sgarzare@...hat.com>,
        Stefan Hajnoczi <stefanha@...hat.com>,
        Stewart Smith <trawets@...zon.com>,
        Uwe Dannowski <uwed@...zon.de>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        kvm <kvm@...r.kernel.org>,
        ne-devel-upstream <ne-devel-upstream@...zon.com>
Subject: Re: [PATCH v8 09/18] nitro_enclaves: Add logic for setting an
 enclave vCPU

On Fri, Sep 04, 2020 at 08:37:09PM +0300, Andra Paraschiv wrote:
> An enclave, before being started, has its resources set. One of its
> resources is CPU.
> 
> A NE CPU pool is set and enclave CPUs are chosen from it. The CPUs in
> the NE CPU pool are offlined during the pool setup and onlined back
> during the pool teardown. Offlining them is necessary so that there are
> never more vCPUs than physical CPUs available to the primary / parent
> VM; otherwise the CPUs would be overcommitted, which would break the
> primary / parent VM's initial configuration of dedicating a physical
> CPU to each vCPU.
> 
> The enclave CPUs need to be full cores and from the same NUMA node. CPU
> 0 and its siblings have to remain available to the primary / parent VM.
> 
> Add ioctl command logic for setting an enclave vCPU.
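
As a rough illustration of the constraints described above (a minimal
sketch, not the code from this patch; the helper name, includes, and
error choices are assumptions), the full-core and same-NUMA-node checks
could look like:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/numa.h>
#include <linux/topology.h>

/*
 * Hypothetical validation of a candidate NE CPU pool: reject CPU 0 and
 * its siblings, require a single NUMA node and full cores only. The
 * actual pool setup in the series additionally offlines each pool CPU
 * (e.g. via remove_cpu()) and onlines it back at teardown (add_cpu()).
 */
static int ne_validate_cpu_pool_sketch(const struct cpumask *pool)
{
	int numa_node = NUMA_NO_NODE;
	unsigned int cpu;

	for_each_cpu(cpu, pool) {
		/* CPU 0 and its siblings stay with the primary / parent VM. */
		if (cpumask_test_cpu(cpu, topology_sibling_cpumask(0)))
			return -EINVAL;

		/* All pool CPUs must come from the same NUMA node. */
		if (numa_node == NUMA_NO_NODE)
			numa_node = cpu_to_node(cpu);
		else if (cpu_to_node(cpu) != numa_node)
			return -EINVAL;

		/* Full cores only: every sibling of a pool CPU is pooled too. */
		if (!cpumask_subset(topology_sibling_cpumask(cpu), pool))
			return -EINVAL;
	}

	return 0;
}
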
> 
> Signed-off-by: Alexandru Vasile <lexnv@...zon.com>
> Signed-off-by: Andra Paraschiv <andraprs@...zon.com>
> Reviewed-by: Alexander Graf <graf@...zon.com>
> ---
> Changelog
> 
> v7 -> v8
> 
> * No changes.
> 
> v6 -> v7
> 
> * Check for error return value when setting the kernel parameter string.
> * Use the NE misc device parent field to get the NE PCI device.
> * Update the naming and add more comments to clarify the logic of
>   handling full CPU cores and dedicating them to the enclave.
> * Calculate the number of threads per core instead of using
>   smp_num_siblings, which is x86 specific (see the sketch after this
>   changelog).
> 
> v5 -> v6
> 
> * Check CPUs are from the same NUMA node before going through CPU
>   siblings during the NE CPU pool setup.
> * Update documentation to kernel-doc format.
> 
> v4 -> v5
> 
> * Set empty string in case of invalid NE CPU pool.
> * Clear NE CPU pool mask on pool setup failure.
> * Set up NE CPU cores out of the NE CPU pool.
> * Early exit on NE CPU pool setup if enclave(s) already running.
> * Remove sanity checks for situations that shouldn't happen unless the
>   system is buggy or the logic is broken.
> * Add check for maximum vCPU id possible before looking into the CPU
>   pool.
> * Remove log on copy_from_user() / copy_to_user() failure and on admin
>   capability check for setting the NE CPU pool.
> * Update the ioctl call to not create a file descriptor for the vCPU.
> * Split the CPU pool usage logic in 2 separate functions - one to get a
>   CPU from the pool and the other to check the given CPU is available in
>   the pool.
> 
> v3 -> v4
> 
> * Set up the NE CPU pool at runtime via a sysfs file for the kernel
>   parameter.
> * Check that enclave CPUs are from the same NUMA node.
> * Use dev_err instead of custom NE log pattern.
> * Update the NE ioctl call to match the decoupling from the KVM API.
> 
> v2 -> v3
> 
> * Remove the WARN_ON calls.
> * Update the sanity checks for static calls.
> * Update kzfree() calls to kfree().
> * Remove file ops that do nothing for now - open, ioctl and release.
> 
> v1 -> v2
> 
> * Add log pattern for NE.
> * Update goto labels to match their purpose.
> * Remove the BUG_ON calls.
> * Check that the enclave state is init when setting an enclave vCPU.
> ---
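
The v6 -> v7 threads-per-core change in the changelog above can be
sketched portably as follows (the helper name is illustrative, not from
the series; it assumes the CPU is online so its sibling mask is
populated):

#include <linux/cpumask.h>
#include <linux/topology.h>

/*
 * Number of hardware threads sharing the core of @cpu, derived from
 * the topology masks rather than the x86-only smp_num_siblings.
 */
static unsigned int ne_threads_per_core_sketch(unsigned int cpu)
{
	return cpumask_weight(topology_sibling_cpumask(cpu));
}
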
>  drivers/virt/nitro_enclaves/ne_misc_dev.c | 702 ++++++++++++++++++++++
>  1 file changed, 702 insertions(+)
> 
> diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c b/drivers/virt/nitro_enclaves/ne_misc_dev.c
> index 7ad3f1eb75d4..0477b11bf15d 100644
> --- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
> +++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
> @@ -64,8 +64,11 @@
>   * TODO: Update logic to create new sysfs entries instead of using
>   * a kernel parameter e.g. if multiple sysfs files needed.
>   */
> +static int ne_set_kernel_param(const char *val, const struct kernel_param *kp);
> +
>  static const struct kernel_param_ops ne_cpu_pool_ops = {
>  	.get	= param_get_string,
> +	.set	= ne_set_kernel_param,
>  };
>  
>  static char ne_cpus[NE_CPUS_SIZE];
> @@ -103,6 +106,702 @@ struct ne_cpu_pool {
>  
>  static struct ne_cpu_pool ne_cpu_pool;
>  
> +/**
> + * ne_check_enclaves_created() - Verify if at least one enclave has been created.
> + * @void:	No parameters provided.
> + *
> + * Context: Process context.
> + * Return:
> + * * True if at least one enclave is created.
> + * * False otherwise.
> + */
> +static bool ne_check_enclaves_created(void)
> +{
> +	struct ne_pci_dev *ne_pci_dev = NULL;
> +	struct pci_dev *pdev = NULL;
> +	bool ret = false;
> +
> +	if (!ne_misc_dev.parent)

How can that be the case?

I wouldn't rely on the misc device's internals for the proper operation
of your code, right?

thanks,

greg k-h
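
One possible direction for addressing this review comment, sketched
under the assumption that struct ne_pci_dev keeps its enclaves in a
mutex-protected list (the field names below are assumptions based on
the rest of the series, and the helper and variable names are
hypothetical): have the driver hold its own reference to the NE PCI
device, filled in at probe time, instead of reaching through
ne_misc_dev.parent.

#include <linux/list.h>
#include <linux/mutex.h>

/* struct ne_pci_dev as declared by the series (assumed header name). */
#include "ne_pci_dev.h"

/* Set once in the PCI probe path, cleared on remove. */
static struct ne_pci_dev *ne_pci_dev_ref;

static bool ne_check_enclaves_created_sketch(void)
{
	bool created;

	if (!ne_pci_dev_ref)
		return false;

	mutex_lock(&ne_pci_dev_ref->enclaves_list_mutex);
	created = !list_empty(&ne_pci_dev_ref->enclaves_list);
	mutex_unlock(&ne_pci_dev_ref->enclaves_list_mutex);

	return created;
}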
