Message-ID: <ac7b7cbb-67d2-5a1e-fc2a-ffb6b522224b@intel.com>
Date:   Mon, 23 Jul 2018 05:25:27 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
        Thomas Gleixner <tglx@...utronix.de>,
        "H. Peter Anvin" <hpa@...or.com>,
        Tom Lendacky <thomas.lendacky@....com>,
        Kai Huang <kai.huang@...ux.intel.com>,
        Jacob Pan <jacob.jun.pan@...ux.intel.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv5 17/19] x86/mm: Implement sync_direct_mapping()

On 07/23/2018 03:04 AM, Kirill A. Shutemov wrote:
> On Wed, Jul 18, 2018 at 05:01:37PM -0700, Dave Hansen wrote:
>> Please make an effort to refactor this to reuse the code that we already
>> have to manage the direct mapping.  We can't afford 455 new lines of
>> page table manipulation that nobody tests or runs.
> 
> I'll look into this once again. But I'm not sure that there's any better
> solution.
> 
> The problem boils down to a page allocation issue. We are not able to
> allocate enough page tables in early boot for all direct mappings. At that
> stage we have a very limited pool of pages that can be used for page tables.
> The pool is allocated at compile time and it's not enough to handle MKTME.
> 
> Syncing approach appeared to be the simplest to me.
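
For concreteness, the kind of pool being described is a small buffer
reserved at build time and handed out one page at a time during early
boot. A simplified userspace model of that constraint (with made-up
names and sizes, not the real arch/x86 code) might look like this:

/*
 * Simplified model of a compile-time page-table pool.  The names and
 * the pool size here are illustrative only.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE	4096
#define EARLY_PGT_PAGES	64	/* assumption: pool sized at build time */

static unsigned char early_pgt_pool[EARLY_PGT_PAGES][PAGE_SIZE];
static unsigned int early_pgt_used;

/* Hand out one zeroed page for a page table, or NULL once the pool is gone. */
static void *early_alloc_pgt_page(void)
{
	if (early_pgt_used >= EARLY_PGT_PAGES)
		return NULL;
	memset(early_pgt_pool[early_pgt_used], 0, PAGE_SIZE);
	return early_pgt_pool[early_pgt_used++];
}

int main(void)
{
	unsigned int i;

	for (i = 0; early_alloc_pgt_page(); i++)
		;
	printf("pool exhausted after %u pages\n", i);
	return 0;
}

Once the compile-time limit is hit, nothing more can be mapped at that
stage, which is the constraint the per-keyID direct mappings run into.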

If that is, indeed, the primary motivation for this design, then please
call that out in the changelog.  It's exceedingly difficult to review
without this information.

We also need data and facts, please.

Which pool are we talking about?  How large is it now?  How large would
it need to be to accommodate MKTME?  How much memory do we need to map
before we run into issues?
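
Rough numbers help frame those questions. The sketch below is
back-of-the-envelope arithmetic with assumed figures (1 TB of RAM, 64
keyIDs), not measurements from the patch set:

/*
 * Page-table pages needed for one copy of the direct mapping using 2M
 * mappings, times the number of MKTME keyIDs.  Illustrative arithmetic
 * only; real requirements depend on mapping granularity, memory holes
 * and 5-level paging.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long mem_gb = 1024;	/* assumption: 1 TB of RAM */
	unsigned long long keyids = 64;		/* assumption: 64 keyIDs */
	unsigned long long pmd_pages, pud_pages, per_copy, total;

	/* One PMD page (512 2M entries) covers 1 GB. */
	pmd_pages = mem_gb;
	/* One PUD page (512 1G ranges) covers 512 GB. */
	pud_pages = (mem_gb + 511) / 512;

	per_copy = pmd_pages + pud_pages;
	total = per_copy * keyids;

	printf("%llu pages (~%llu MB) per direct-mapping copy\n",
	       per_copy, per_copy * 4096 / (1024 * 1024));
	printf("%llu pages (~%llu MB) for %llu keyID copies\n",
	       total, total * 4096 / (1024 * 1024), keyids);
	return 0;
}

With those assumptions it comes to roughly 4 MB of page tables per copy
of the direct mapping, and about a quarter of a gigabyte once every
keyID gets its own copy, which is far beyond anything a small static
boot-time pool could cover.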

>> How _was_ this tested?
> 
> Besides a normal boot with MKTME enabled and accessing pages via the new
> direct mappings, I also tested memory hotplug and hotremove with QEMU.

... also great changelog fodder.

> Ideally we would need a self-test for this. But I don't see a way to
> simulate hotplug and hotremove. Soft offlining doesn't cut it. We
> actually need to see the ACPI event to trigger the code.

That's something that we have to go fix.  For the online side, we always
have the "probe" file.  I guess nobody ever bothered to make an
equivalent for the remove side.  But, that doesn't seem like an
insurmountable problem to me.
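
For reference, a self-test could already drive the online side through
the existing "probe" file (it needs CONFIG_ARCH_MEMORY_PROBE and root;
the physical address below is made up and would have to be the start of
a real, not-yet-present memory block). A minimal sketch:

/*
 * Sketch of poking /sys/devices/system/memory/probe from userspace.
 * Run as root; the address is a placeholder, not a real value.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *probe = "/sys/devices/system/memory/probe";
	unsigned long long phys_addr = 0x100000000ULL;	/* assumption */
	FILE *f = fopen(probe, "w");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	fprintf(f, "0x%llx\n", phys_addr);
	if (fclose(f)) {
		perror("write");
		return EXIT_FAILURE;
	}
	printf("probed memory block at 0x%llx\n", phys_addr);
	return 0;
}

An analogous write-only file for the remove side would let a self-test
exercise hot-remove without having to fake an ACPI event.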
