Message-ID: <ec26a85f-ff1c-89d9-5e6c-ff42e834c48d@oracle.com>
Date:   Mon, 13 May 2019 19:00:31 +0200
From:   Alexandre Chartre <alexandre.chartre@...cle.com>
To:     Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...el.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krcmar <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        kvm list <kvm@...r.kernel.org>, X86 ML <x86@...nel.org>,
        Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        jan.setjeeilers@...cle.com, Liran Alon <liran.alon@...cle.com>,
        Jonathan Adams <jwadams@...gle.com>
Subject: Re: [RFC KVM 19/27] kvm/isolation: initialize the KVM page table with
 core mappings



On 5/13/19 6:00 PM, Andy Lutomirski wrote:
> On Mon, May 13, 2019 at 8:50 AM Dave Hansen <dave.hansen@...el.com> wrote:
>>
>>> +     /*
>>> +      * Copy the mapping for all the kernel text. We copy at the PMD
>>> +      * level since the PUD is shared with the module mapping space.
>>> +      */
>>> +     rv = kvm_copy_mapping((void *)__START_KERNEL_map, KERNEL_IMAGE_SIZE,
>>> +          PGT_LEVEL_PMD);
>>> +     if (rv)
>>> +             goto out_uninit_page_table;
>>
>> Could you double-check this?  We (I) have had some repeated confusion
>> with the PTI code and kernel text vs. kernel data vs. __init.
>> KERNEL_IMAGE_SIZE looks to be 512MB which is quite a bit bigger than
>> kernel text.
>>
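For illustration only, a sketch of restricting the copy to the actual kernel
text range rather than the whole KERNEL_IMAGE_SIZE window (this reuses the
kvm_copy_mapping() helper from this series and the usual _text/_etext section
symbols from <asm/sections.h>, and assumes the boundaries are acceptable for
a PMD-level copy):

	rv = kvm_copy_mapping((void *)_text,
			      (unsigned long)_etext - (unsigned long)_text,
			      PGT_LEVEL_PMD);
	if (rv)
		goto out_uninit_page_table;

Whether rodata/data also need to be visible in the KVM page table is a
separate question.
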
>>> +     /*
>>> +      * Copy the mapping for cpu_entry_area and %esp fixup stacks
>>> +      * (this is based on the PTI userland address space, but probably
>>> +      * not needed because the KVM address space is not directly
>>> +      * entered from userspace). They can both be copied at the P4D
>>> +      * level since they each have a dedicated P4D entry.
>>> +      */
>>> +     rv = kvm_copy_mapping((void *)CPU_ENTRY_AREA_PER_CPU, P4D_SIZE,
>>> +          PGT_LEVEL_P4D);
>>> +     if (rv)
>>> +             goto out_uninit_page_table;
>>
>> cpu_entry_area is used for more than just entry from userspace.  The gdt
>> mapping, for instance, is needed everywhere.  You might want to go look
>> at 'struct cpu_entry_area' in some more detail.
>>
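If it turns out that only parts of cpu_entry_area are needed (the GDT, for
instance), a per-cpu copy at PTE granularity could be an alternative to
copying the whole P4D. A rough sketch, reusing the kvm_copy_ptes() helper
from this series; which members the KVM context really touches is exactly
the open question here:

	for_each_possible_cpu(cpu) {
		struct cpu_entry_area *cea = get_cpu_entry_area(cpu);

		/* the GDT mapping is needed everywhere, as noted above */
		rv = kvm_copy_ptes(cea->gdt, sizeof(cea->gdt));
		if (rv)
			goto out_uninit_page_table;
	}
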
>>> +#ifdef CONFIG_X86_ESPFIX64
>>> +     rv = kvm_copy_mapping((void *)ESPFIX_BASE_ADDR, P4D_SIZE,
>>> +          PGT_LEVEL_P4D);
>>> +     if (rv)
>>> +             goto out_uninit_page_table;
>>> +#endif
>>
>> Why are these mappings *needed*?  I thought we only actually used these
>> fixup stacks for some crazy iret-to-userspace handling.  We're certainly
>> not doing that from KVM context.
>>
>> Am I forgetting something?
>>
>>> +#ifdef CONFIG_VMAP_STACK
>>> +     /*
>>> +      * Interrupt stacks are vmap'ed with guard pages, so we need to
>>> +      * copy mappings.
>>> +      */
>>> +     for_each_possible_cpu(cpu) {
>>> +             stack = per_cpu(hardirq_stack_ptr, cpu);
>>> +             pr_debug("IRQ Stack %px\n", stack);
>>> +             if (!stack)
>>> +                     continue;
>>> +             rv = kvm_copy_ptes(stack - IRQ_STACK_SIZE, IRQ_STACK_SIZE);
>>> +             if (rv)
>>> +                     goto out_uninit_page_table;
>>> +     }
>>> +
>>> +#endif
>>
>> I seem to remember that the KVM VMENTRY/VMEXIT context is very special.
>>   Interrupts (and even NMIs?) are disabled.  Would it be feasible to do
>> the switching in there so that we never even *get* interrupts in the KVM
>> context?
> 
> That would be nicer.
> 
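For what it's worth, a sketch of what switching in the VMENTRY/VMEXIT path
could look like, with interrupts already disabled around the switch
(kvm_mm_pgd is an invented name for this illustration, not something from
the posted series, and PCID handling is glossed over):

	static unsigned long kvm_isolation_enter_cr3(void)
	{
		unsigned long saved_cr3 = __read_cr3();

		/* the vcpu run loop has interrupts disabled here */
		lockdep_assert_irqs_disabled();
		write_cr3(__pa(kvm_mm_pgd));	/* restricted KVM page table */
		return saved_cr3;
	}

	static void kvm_isolation_exit_cr3(unsigned long saved_cr3)
	{
		write_cr3(saved_cr3);		/* full kernel page table */
	}

With that, ordinary interrupts would never run on the restricted CR3; NMIs
are a separate question, as you note.
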
> Looking at this code, it occurs to me that mapping the IRQ stacks
> seems questionable.  As it stands, this series switches to a normal
> CR3 in some C code somewhere moderately deep in the APIC IRQ code.  By
> that time, I think you may have executed traceable code, and, if that
> happens, you lose.  I hate to say this, but any shenanigans like what
> this patch does might need to happen in the entry code *before* even
> switching to the IRQ stack.  Or perhaps shortly thereafter.
>
> We've talked about moving context tracking to C.  If we go that route,
> then this KVM context mess could go there, too -- we'd have a
> low-level C wrapper for each entry that would deal with getting us
> ready to run normal C code.
> 
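A sketch of that shape, with made-up names (kvm_isolation_cr3_loaded() and
the exact CR3/PCID handling are only there to illustrate the ordering, not
to claim an implementation):

	__visible void idtentry_prepare(struct pt_regs *regs)
	{
		/*
		 * Called from the asm stub before the IRQ stack switch and
		 * before any traceable C code: if we were interrupted while
		 * on the restricted KVM page table, go back to the task's
		 * full kernel page table first.
		 */
		if (unlikely(kvm_isolation_cr3_loaded()))
			write_cr3(__pa(current->mm->pgd));

		/* context tracking / RCU entry work would follow here */
	}
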
> (We need to do something about terminology.  This kvm_mm thing isn't
> an mm in the normal sense.  An mm has normal kernel mappings and
> varying user mappings.  For example, the PTI "userspace" page tables
> aren't an mm.  And we really don't want a situation where the vmalloc
> fault code runs with the "kvm_mm" mm active -- it will totally
> malfunction.)
> 

One of my next steps is to try to put the KVM page table in the PTI userspace
page tables, and not switch CR3 on the KVM_RUN ioctl. That way, we would run with
a regular mm (but using the userspace page table). Then an interrupt would switch
CR3 to the kernel page table (like the paranoid idtentry paths currently do).
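
Roughly, the idea would be to share the KVM-private pgd entries into the PTI
user page table of the KVM task, something like the sketch below. set_pgd(),
kernel_to_user_pgdp() and pgd_index() are the existing helpers; which entries
to share, and how this interacts with the real user-mode mappings, are the
open questions:

	static void kvm_share_pgd_entry(struct mm_struct *mm, pgd_t *kvm_pgd,
					unsigned long addr)
	{
		pgd_t *k = pgd_offset(mm, addr);	/* kernel half of the pgd */
		pgd_t *u = kernel_to_user_pgdp(k);	/* PTI user half */

		/* install the KVM-private mapping into the user page table */
		set_pgd(u, kvm_pgd[pgd_index(addr)]);
	}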

alex.



