Date:   Thu, 11 Jun 2020 10:05:38 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Daniel Jordan <daniel.m.jordan@...cle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andy Lutomirski <luto@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>,
        Michal Hocko <mhocko@...nel.org>,
        Pavel Tatashin <pasha.tatashin@...een.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Sistare <steven.sistare@...cle.com>
Subject: Re: [PATCH v2] x86/mm: use max memory block size on bare metal

On 6/11/20 9:59 AM, Daniel Jordan wrote:
> On Thu, Jun 11, 2020 at 07:16:02AM -0700, Dave Hansen wrote:
>> On 6/9/20 3:54 PM, Daniel Jordan wrote:
>>> +	/*
>>> +	 * Use max block size to minimize overhead on bare metal, where
>>> +	 * alignment for memory hotplug isn't a concern.
>>> +	 */
>>> +	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
>>> +		bz = MAX_BLOCK_SIZE;
>>> +		goto done;
>>> +	}
>> What ends up being the worst case scenario?  Booting a really small
>> bare-metal x86 system, say with 64MB or 128MB of RAM?  What's the
>> overhead there?
> I might not be following you, so bear with me, but we only reach this
> check on a system whose physical address end is at least
> MEM_SIZE_FOR_LARGE_BLOCK (64G), and even there this would still (ever
> so slightly...) reduce the overhead of memory block init at boot.

Ahh, I see now.  That check is just above the hunk you added, but it
wasn't in the diff context or mentioned in the changelog.
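
For reference, the surrounding logic reads roughly like this (a sketch
of probe_memory_block_size() in arch/x86/mm/init_64.c with this patch
applied, reconstructed from memory, so details may differ from the
exact tree):

	static unsigned long probe_memory_block_size(void)
	{
		unsigned long boot_mem_end = max_pfn << PAGE_SHIFT;
		unsigned long bz;

		/* If a block size was set on the command line, use it */
		if (set_memory_block_size) {
			bz = set_memory_block_size;
			goto done;
		}

		/* Use the regular block size on small-memory systems */
		if (boot_mem_end < MEM_SIZE_FOR_LARGE_BLOCK) {
			bz = MIN_MEMORY_BLOCK_SIZE;
			goto done;
		}

		/* The hunk under discussion: go big on bare metal */
		if (hypervisor_is_type(X86_HYPER_NATIVE)) {
			bz = MAX_BLOCK_SIZE;
			goto done;
		}

		/* Otherwise, largest block size that aligns to memory end */
		for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1)
			if (IS_ALIGNED(boot_mem_end, bz))
				break;
	done:
		pr_info("x86/mm: Memory block size: %ldMB\n", bz >> 20);
		return bz;
	}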

One other nit.  We *do* have actual hardware hotplug, and I'm pretty
sure its alignment guarantees are weak.  For instance, persistent
memory is still only guaranteed 64MB alignment, even on modern
platforms.
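
The reason weak alignment bites: the generic hotplug path refuses
ranges that aren't aligned to the block size.  From memory, the check
in mm/memory_hotplug.c looks roughly like this (paraphrased, not a
verbatim quote of the tree):

	static int check_hotplug_memory_range(u64 start, u64 size)
	{
		/* memory range must be block size aligned */
		if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) ||
		    !IS_ALIGNED(size, memory_block_size_bytes())) {
			pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx\n",
			       memory_block_size_bytes(), start, size);
			return -EINVAL;
		}
		return 0;
	}

So with a 2GB block size, a 64MB-aligned persistent memory range could
fail to hot-add at all.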

Let's say we're on bare metal and the SRAT table marks some areas as
possible hotplug targets.  Is this patch still ideal in that case?
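
If that's a real worry, one purely hypothetical way to express it in
code: skip the large-block fast path whenever firmware declared any
range hot-pluggable.  This assumes the SRAT parser has already called
memblock_mark_hotplug() on such ranges, and that memblock data is
still around when the block size is probed (both worth checking):

	#include <linux/memblock.h>

	/* Hypothetical helper, not part of the patch */
	static bool srat_declares_hotplug_memory(void)
	{
		struct memblock_region *r;

		for_each_memblock(memory, r)
			if (memblock_is_hotpluggable(r))
				return true;
		return false;
	}

with the bare-metal check then becoming
hypervisor_is_type(X86_HYPER_NATIVE) && !srat_declares_hotplug_memory().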
