Date:   Sat, 22 Apr 2023 01:05:44 +0000
From:   Dexuan Cui <decui@...rosoft.com>
To:     "Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
        "ak@...ux.intel.com" <ak@...ux.intel.com>,
        "arnd@...db.de" <arnd@...db.de>, "bp@...en8.de" <bp@...en8.de>,
        "brijesh.singh@....com" <brijesh.singh@....com>,
        "dan.j.williams@...el.com" <dan.j.williams@...el.com>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        "hpa@...or.com" <hpa@...or.com>,
        "jane.chu@...cle.com" <jane.chu@...cle.com>,
        "kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
        KY Srinivasan <kys@...rosoft.com>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        "luto@...nel.org" <luto@...nel.org>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "rostedt@...dmis.org" <rostedt@...dmis.org>,
        "sathyanarayanan.kuppuswamy@...ux.intel.com" 
        <sathyanarayanan.kuppuswamy@...ux.intel.com>,
        "seanjc@...gle.com" <seanjc@...gle.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "tony.luck@...el.com" <tony.luck@...el.com>,
        "wei.liu@...nel.org" <wei.liu@...nel.org>,
        "x86@...nel.org" <x86@...nel.org>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Tianyu Lan <Tianyu.Lan@...rosoft.com>
Subject: RE: [PATCH v4 5/6] Drivers: hv: vmbus: Support TDX guests

> From: Michael Kelley (LINUX) <mikelley@...rosoft.com>
> Sent: Tuesday, April 11, 2023 9:53 AM
> ...
> > @@ -168,6 +170,30 @@ int hv_synic_alloc(void)
> >  			pr_err("Unable to allocate post msg page\n");
> >  			goto err;
> > ...
> The error path here doesn't always work correctly.  If one or more of the
> three pages is decrypted, and then one of the decryptions fails, we're left
> in a state where all three pages are allocated, but some are decrypted
> and some are not.  hv_synic_free() won't know which pages are allocated
> but still encrypted, and will try to re-encrypt a page that wasn't
> successfully decrypted.
> 
> The code to clean up from an error is messy because there are three pages
> involved. You've posted a separate patch to eliminate the need for the
> post_msg_page.  If that patch came before this patch series (or was
> incorporated into this series), the code here would only have to deal with
> two pages instead of three, making the cleanup of decryption errors easier.
> 
> >  	}
> > ...
> >  void hv_synic_free(void)
> >  {
> >  	int cpu;
> > +	int ret;
> > ...
> If any of the three re-encryptions fails, we'll leak all three pages. That's
> probably OK.  Eliminating the post_msg_page will help.
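
To make the hazard Michael describes concrete, consider a simplified
sketch (hypothetical code, not the actual v4 patch): if the second
set_memory_decrypted() fails, the message page is already decrypted but
the event page is not, and a blanket cleanup cannot tell the two apart:

	/* Sketch only: decryptions grouped, with no per-page bookkeeping. */
	ret = set_memory_decrypted((unsigned long)hv_cpu->synic_message_page, 1);
	if (ret)
		goto err;	/* message page still encrypted: safe to free */

	ret = set_memory_decrypted((unsigned long)hv_cpu->synic_event_page, 1);
	if (ret)
		goto err;	/* message page decrypted, event page not;
				 * hv_synic_free() would re-encrypt and free
				 * both unconditionally.
				 */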

The post_msg_page has been removed in the hyperv-next branch.
I'm rebasing this patch and am going to post v5.
I think I can use the changes below:

@@ -159,6 +161,28 @@ int hv_synic_alloc(void)
                                goto err;
                        }
                }
+
+               /* It's better to leak the page if the decryption fails. */
+               if (hv_isolation_type_tdx()) {
+                       ret = set_memory_decrypted(
+                               (unsigned long)hv_cpu->synic_message_page, 1);
+                       if (ret) {
+                               pr_err("Failed to decrypt SYNIC msg page\n");
+                               hv_cpu->synic_message_page = NULL;
+                               goto err;
+                       }
+
+                       ret = set_memory_decrypted(
+                               (unsigned long)hv_cpu->synic_event_page, 1);
+                       if (ret) {
+                               pr_err("Failed to decrypt SYNIC event page\n");
+                               hv_cpu->synic_event_page = NULL;
+                               goto err;
+                       }
+
+                       memset(hv_cpu->synic_message_page, 0, PAGE_SIZE);
+                       memset(hv_cpu->synic_event_page, 0, PAGE_SIZE);
+               }
        }
... 
 void hv_synic_free(void)
 {
        int cpu;
+       int ret;
 
        for_each_present_cpu(cpu) {
                struct hv_per_cpu_context *hv_cpu
                        = per_cpu_ptr(hv_context.cpu_context, cpu);
 
+               /* It's better to leak the page if the encryption fails. */
+               if (hv_isolation_type_tdx()) {
+                       if (hv_cpu->synic_message_page) {
+                               ret = set_memory_encrypted((unsigned long)
+                                       hv_cpu->synic_message_page, 1);
+                               if (ret) {
+                                       pr_err("Failed to encrypt SYNIC msg page\n");
+                                       hv_cpu->synic_message_page = NULL;
+                               }
+                       }
+
+                       if (hv_cpu->synic_event_page) {
+                               ret = set_memory_encrypted((unsigned long)
+                                       hv_cpu->synic_event_page, 1);
+                               if (ret) {
+                                       pr_err("Failed to encrypt SYNIC event page\n");
+                                       hv_cpu->synic_event_page = NULL;
+                               }
+                       }
+               }
+
                free_page((unsigned long)hv_cpu->synic_event_page);
                free_page((unsigned long)hv_cpu->synic_message_page);
        }
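
(Note: since the pointer is set to NULL when re-encryption fails, the
subsequent free_page() call becomes a no-op, as free_pages() ignores a
zero address, so the still-shared page is intentionally leaked rather
than returned to the allocator.)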

BTW, at the end of hv_synic_alloc() there is a comment saying:
        /*
         * Any memory allocations that succeeded will be freed when
         * the caller cleans up by calling hv_synic_free()
         */

If hv_synic_alloc() -> get_zeroed_page() fails, hv_context.hv_numa_map is
not freed, and hv_synic_alloc() returns -ENOMEM to vmbus_bus_init(); next,
we do "goto err_alloc;", i.e. hv_synic_free() is not called. We can post a
separate patch for this, e.g. something like the sketch below.
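
A sketch of such a fix (hypothetical; the exact context lines may differ):

@@ int hv_synic_alloc(void)
 	return 0;
 err:
 	/*
 	 * Any memory allocations that succeeded will be freed when
 	 * the caller cleans up by calling hv_synic_free()
 	 */
+	/*
+	 * ... except hv_numa_map: vmbus_bus_init() does "goto err_alloc;"
+	 * without calling hv_synic_free(), so free the NUMA map here.
+	 */
+	kfree(hv_context.hv_numa_map);
+	hv_context.hv_numa_map = NULL;
 	return -ENOMEM;
 }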

Thanks,
Dexuan
