Date:   Tue, 4 Oct 2022 18:51:54 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Carlos Bilbao <carlos.bilbao@....com>
Cc:     tglx@...utronix.de, bp@...en8.de, mingo@...hat.com,
        dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
        venu.busireddy@...cle.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        "Lendacky, Thomas" <Thomas.Lendacky@....com>, bilbao@...edu
Subject: Re: [PATCH] KVM: SVM: Fix reserved fields of struct sev_es_save_area

On Tue, Oct 04, 2022, Carlos Bilbao wrote:
> 
> On 10/4/22 11:29, Sean Christopherson wrote:
> > On Tue, Oct 04, 2022, Carlos Bilbao wrote:
> > > On 10/4/22 09:05, Carlos Bilbao wrote:
> > > 
> > > > Reserved fields of struct sev_es_save_area are named by their order of
> > > > appearance, but right now they jump from reserved_5 to reserved_7. Rename
> > > > them to follow the correct order.
> > > > 
> > > > Fixes: 6d3b3d34e39eb ("KVM: SVM: Update the SEV-ES save area mapping")
> > > Actually, there is no bug, so this Fixes tag could go. Thanks!!
> > Fixes: is appropriate, if we think it's worth fixing.  Personally, I don't think
> > it's worth the churn/effort to keep the reserved numbers accurate, e.g. if the
> > two bytes in reserved_1 are used, then every other field will need to be updated
> > just to accommodate a tiny change.  We'll find ourselves in a similar situation
> > if a field is added in the middle of reserved_3.
> > 
> > If we really want the numbers to have any kind of meaning without needing a pile
> > of churn for every update, the best idea I can think of is to name them reserved_<offset>.
> > That way only the affected reserved field needs to be modified when adding new
> > legal fields.  But that has its own flavor of maintenance burden, as calculating
> > and verifying the offsets is a waste of everyone's time.
> > 
> > TL;DR: I vote to sweep this under the rug and live with arbitrary/bad numbers.
> Well, the discussion on what is the most appropriate way to name reserved
> fields is orthogonal to this patch, IMO.

It's not orthogonal, because how this "bug" is fixed determines the ongoing
maintenance cost.  If we're going to deal with code churn to clean things up, then
we want to churn the code exactly once.  If this was a one-line change then I
wouldn't care as much, but since it requires modifying half the reserved fields,
I'd rather we take an all-or-nothing approach.

> This change just follows the prior approach (reserved_x), but correctly.
> Keep in mind that the existence of reserved_{1..5} and reserved_{7..11}
> implies there's a reserved_6 (there isn't). Why knowingly keep something
> that's wrong, even if small?

Because the cost of maintaining the ordering far outweighs the benefits.  It's
not just about this patch, it's about all the future patches and reviews that will
be needed to keep the ordering correct.  On the benefits side, the fields are never
referenced and the names are effectively arbitrary, i.e. there's no real value in
keeping a sequence.

A simple "fix" would be to add a comment that the names are arbitrary and have
no meaning.  If we really want to avoid ordering/skipping issues, then we could
assign truly arbitrary names, e.g. something silly like this:

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 0361626841bc..e55299a733b3 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -332,7 +332,10 @@ struct vmcb_save_area {
        u32 spec_ctrl;          /* Guest version of SPEC_CTRL at 0x2E0 */
 } __packed;
 
-/* Save area definition for SEV-ES and SEV-SNP guests */
+/*
+ * Save area definition for SEV-ES and SEV-SNP guests.  Note the names of the
+ * reserved fields are arbitrary and have no meaning.
+ */
 struct sev_es_save_area {
        struct vmcb_seg es;
        struct vmcb_seg cs;
@@ -349,12 +352,12 @@ struct sev_es_save_area {
        u64 vmpl2_ssp;
        u64 vmpl3_ssp;
        u64 u_cet;
-       u8 reserved_1[2];
+       u8 reserved_beef[2];
        u8 vmpl;
        u8 cpl;
-       u8 reserved_2[4];
+       u8 reserved_cafe[4];
        u64 efer;
-       u8 reserved_3[104];
+       u8 reserved_feed[104];
        u64 xss;
        u64 cr4;
        u64 cr3;
@@ -371,7 +374,7 @@ struct sev_es_save_area {
        u64 dr1_addr_mask;
        u64 dr2_addr_mask;
        u64 dr3_addr_mask;
-       u8 reserved_4[24];
+       u8 reserved_fee[24];
        u64 rsp;
        u64 s_cet;
        u64 ssp;
@@ -386,21 +389,21 @@ struct sev_es_save_area {
        u64 sysenter_esp;
        u64 sysenter_eip;
        u64 cr2;
-       u8 reserved_5[32];
+       u8 reserved_deaf[32];
        u64 g_pat;
        u64 dbgctl;
        u64 br_from;
        u64 br_to;
        u64 last_excp_from;
        u64 last_excp_to;
-       u8 reserved_7[80];
+       u8 reserved_dead[80];
        u32 pkru;
-       u8 reserved_8[20];
-       u64 reserved_9;         /* rax already available at 0x01f8 */
+       u8 reserved_bed[20];
+       u64 reserved_bead;              /* rax already available at 0x01f8 */
        u64 rcx;
        u64 rdx;
        u64 rbx;
-       u64 reserved_10;        /* rsp already available at 0x01d8 */
+       u64 reserved_cab;       /* rsp already available at 0x01d8 */
        u64 rbp;
        u64 rsi;
        u64 rdi;
@@ -412,7 +415,7 @@ struct sev_es_save_area {
        u64 r13;
        u64 r14;
        u64 r15;
-       u8 reserved_11[16];
+       u8 reserved_face[16];
        u64 guest_exit_info_1;
        u64 guest_exit_info_2;
        u64 guest_exit_int_info;
@@ -425,7 +428,7 @@ struct sev_es_save_area {
        u64 pcpu_id;
        u64 event_inj;
        u64 xcr0;
-       u8 reserved_12[16];
+       u8 reserved_fade[16];
 
        /* Floating point area */
        u64 x87_dp;
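
For comparison, the reserved_<offset> idea mentioned above would read something
like the fragment below.  The offsets are ones I computed by hand (ten 16-byte
segment registers, then five u64s puts the first reserved chunk at 0xc8), so
treat this as a sketch and double-check the values against the APM before using
them:

        struct sev_es_save_area {
                struct vmcb_seg es;
                ...                             /* cs through tr, vmpl0..3_ssp */
                u64 u_cet;
                u8 reserved_0xc8[2];            /* was reserved_1 */
                u8 vmpl;
                u8 cpl;
                u8 reserved_0xcc[4];            /* was reserved_2 */
                u64 efer;
                u8 reserved_0xd8[104];          /* was reserved_3 */
                u64 xss;
                ...
        } __packed;

With that scheme, carving a new field out of reserved_0xd8 touches only that one
entry instead of renumbering everything below it.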

