Message-ID: <f45fb79a-d09a-bdbb-8529-77219171435b@amazon.com>
Date:   Wed, 16 Sep 2020 21:15:53 +0200
From:   Alexander Graf <graf@...zon.com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>
CC:     Aaron Lewis <aaronlewis@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        "Joerg Roedel" <joro@...tes.org>,
        KarimAllah Raslan <karahmed@...zon.de>,
        "Dan Carpenter" <dan.carpenter@...cle.com>,
        kvm list <kvm@...r.kernel.org>, <linux-doc@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 1/7] KVM: x86: Deflect unknown MSR accesses to user
 space



On 16.09.20 19:08, Sean Christopherson wrote:
> 
> On Wed, Sep 16, 2020 at 11:31:30AM +0200, Alexander Graf wrote:
>> On 03.09.20 21:27, Aaron Lewis wrote:
>>>> @@ -412,6 +414,15 @@ struct kvm_run {
>>>>                           __u64 esr_iss;
>>>>                           __u64 fault_ipa;
>>>>                   } arm_nisv;
>>>> +               /* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
>>>> +               struct {
>>>> +                       __u8 error; /* user -> kernel */
>>>> +                       __u8 pad[3];
>>>
>>> __u8 pad[7] to maintain 8 byte alignment? Unless we can get away
>>> with fewer bits for 'reason' and get them from 'pad'.
>>
>> Why would we need an 8 byte alignment here? I always thought natural u64
>> alignment on x86_64 was on 4 bytes?
> 
> u64 will usually (always?) be 8 byte aligned by the compiler.  "Natural"
> alignment means an object is aligned to its size.  E.g. an 8-byte object
> can split a cache line if it's only aligned on a 4-byte boundary.

For some reason I always thought that x86_64 had a special hack that 
allows u64s to be "naturally" aligned on a 32-bit boundary. But I just 
double checked what you said and indeed, gcc does pad it to an actual 
natural (8-byte) boundary.

You never stop learning :).

In that case, it absolutely makes sense to make the padding explicit 
(and pull it earlier)!


Alex




Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879
