Message-ID: <924ee966-7412-f9ff-c2b0-598e4abbb05c@redhat.com>
Date:   Thu, 28 May 2020 16:32:09 +1000
From:   Gavin Shan <gshan@...hat.com>
To:     Marc Zyngier <maz@...nel.org>, pbonzini@...hat.com
Cc:     Mark Rutland <mark.rutland@....com>, kvmarm@...ts.cs.columbia.edu,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        will@...nel.org, catalin.marinas@....com, james.morse@....com,
        suzuki.poulose@....com, drjones@...hat.com, eric.auger@...hat.com,
        aarcange@...hat.com, shan.gavin@...il.com
Subject: Re: [PATCH RFCv2 7/9] kvm/arm64: Support async page fault

Hi Marc,

On 5/27/20 5:37 PM, Marc Zyngier wrote:
> On 2020-05-27 05:05, Gavin Shan wrote:

[...]
  
>>>> +struct kvm_vcpu_pv_apf_data {
>>>> +    __u32    reason;
>>>> +    __u8    pad[60];
>>>> +    __u32    enabled;
>>>> +};
>>>
>>> What's all the padding for?
>>>
>>
>> The padding is there to ensure that @reason and @enabled sit in
>> different cache lines. @reason is shared by host and guest, while
>> @enabled is almost exclusively owned by the guest.
> 
> So you are assuming that a cache line is at most 64 bytes.
> It is actually implementation defined, and you can probe for it
> by looking at the CTR_EL0 register. There are implementations
> ranging from 32 to 256 bytes in the wild, and let's not mention
> broken big-little implementations here.
> 
> [...]
> 

Ok, thanks for your comments and hints.
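
For reference, probing the line size at runtime is straightforward. A minimal
sketch, assuming only the architected CTR_EL0 layout (the helper name is mine,
not from this series):

    /*
     * Illustrative sketch: derive the minimum D-cache line size from
     * CTR_EL0.DminLine (bits [19:16]), much as the arm64 kernel's
     * cache_line_size() ends up doing.
     */
    static inline unsigned int probe_dcache_line_size(void)
    {
            u64 ctr;

            asm volatile("mrs %0, ctr_el0" : "=r" (ctr));

            /* DminLine is log2 of the line size in 4-byte words */
            return 4U << ((ctr >> 16) & 0xf);
    }

So padding based on a compile-time 64-byte guess indeed can't be relied on
across implementations.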

>>>> +bool kvm_arch_can_inject_async_page_not_present(struct kvm_vcpu *vcpu)
>>>> +{
>>>> +    u64 vbar, pc;
>>>> +    u32 val;
>>>> +    int ret;
>>>> +
>>>> +    if (!(vcpu->arch.apf.control_block & KVM_ASYNC_PF_ENABLED))
>>>> +        return false;
>>>> +
>>>> +    if (vcpu->arch.apf.send_user_only && vcpu_mode_priv(vcpu))
>>>> +        return false;
>>>> +
>>>> +    /* A pending page fault which isn't acknowledged by the guest */
>>>> +    ret = kvm_async_pf_read_cache(vcpu, &val);
>>>> +    if (ret || val)
>>>> +        return false;
>>>> +
>>>> +    /*
>>>> +     * Events can't be injected through a data abort because it
>>>> +     * would clobber ELR_EL1, which might not be consumed (or
>>>> +     * saved) by the guest yet.
>>>> +     */
>>>> +    vbar = vcpu_read_sys_reg(vcpu, VBAR_EL1);
>>>> +    pc = *vcpu_pc(vcpu);
>>>> +    if (pc >= vbar && pc < (vbar + vcpu->arch.apf.no_fault_inst_range))
>>>> +        return false;
>>>
>>> Ah, so that's when this `no_fault_inst_range` is for.
>>>
>>> As-is this is not sufficient, and we'll need to be extremely careful
>>> here.
>>>
>>> The vectors themselves typically only have a small amount of stub code,
>>> and the bulk of the non-reentrant exception entry work happens
>>> elsewhere, in a mixture of assembly and C code that isn't even virtually
>>> contiguous with either the vectors or itself.
>>>
>>> In theory there may be code in modules (or perhaps in eBPF JIT'd
>>> code) that isn't safe to take a fault from, so even having a
>>> contiguous range controlled by the kernel isn't ideal.
>>>
>>> How does this work on x86?
>>>
>>
>> Yeah, here we just provide a mechanism to forbid injecting a data abort.
>> The range is fed by the guest through an HVC call, so I think it's a
>> guest-side concern. You had more comments about this in PATCH[9]; I will
>> explain a bit more there.
>>
>> x86 basically relies on the EFLAGS[IF] flag: the async page fault can be
>> injected if it's set, and is forbidden otherwise. That works because, if
>> I'm correct, exceptions on x86 are a special kind of interrupt.
>>
>>            return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
>>                   !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
>>                         (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
> 
> I really wish this was relying on an architected exception delivery
> mechanism that can be blocked by the guest itself (PSTATE.{I,F,A}).
> Trying to guess based on the PC won't fly. But these signals are
> pretty hard to multiplex with anything else. Like any form of
> non-architected exception injection, I don't see a good path forward
> unless we start considering something like SDEI.
> 
>          M.

As Paolo mentioned in another reply, there are two types of notifications
sent from host to guest: page_not_present and page_ready. The page_not_present
notification should be delivered synchronously, while page_ready can be
delivered asynchronously. He also suggested reserving an ESR (or DFSC)
subclass for page_not_present; page_ready can instead be delivered by an
interrupt, like a PPI. x86 is changing its code to deliver page_ready by
interrupt, where it was previously done by exception.

Once we use an interrupt instead of an exception for page_ready, we won't
need to play the game of guessing the PC.
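
On the host side, that delivery could be as simple as raising a reserved PPI
through the vGIC. A minimal sketch, assuming a PPI number negotiated with the
guest (the PPI number and helper name are assumptions, not from this series):

    /*
     * Hypothetical sketch only: notify the guest that a previously
     * faulting page is now ready by raising a reserved PPI. The PPI
     * number is an assumption, not part of this series.
     */
    #define KVM_ASYNC_PF_PPI    31

    static void kvm_async_pf_notify_page_ready(struct kvm_vcpu *vcpu)
    {
            /*
             * Drive the line high; the guest handler acknowledges the
             * notification, after which the line would be lowered.
             */
            kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
                                KVM_ASYNC_PF_PPI, true, NULL);
    }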

I assume you would prefer to use SDEI for page_not_present, correct? In that
case, what's the current status of SDEI? I mean, is it fully or partially
supported, or do we need to develop it from scratch? :)

Thanks,
Gavin
