Message-ID: <X9F/Cc946aKaA8gr@redhat.com>
Date:   Wed, 9 Dec 2020 20:51:05 -0500
From:   Andrea Arcangeli <aarcange@...hat.com>
To:     Mike Rapoport <rppt@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Baoquan He <bhe@...hat.com>,
        David Hildenbrand <david@...hat.com>,
        Mel Gorman <mgorman@...e.de>, Michal Hocko <mhocko@...nel.org>,
        Mike Rapoport <rppt@...ux.ibm.com>, Qian Cai <cai@....pw>,
        Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, stable@...r.kernel.org
Subject: Re: [PATCH v2 2/2] mm: fix initialization of struct page for holes
 in memory layout

Hello,

On Wed, Dec 09, 2020 at 11:43:04PM +0200, Mike Rapoport wrote:
> +void __init __weak memmap_init(unsigned long size, int nid,
> +			       unsigned long zone,
> +			       unsigned long range_start_pfn)
> +{
> +	unsigned long start_pfn, end_pfn, hole_start_pfn = 0;
>  	unsigned long range_end_pfn = range_start_pfn + size;
> +	u64 pgcnt = 0;
>  	int i;
>  
>  	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
>  		start_pfn = clamp(start_pfn, range_start_pfn, range_end_pfn);
>  		end_pfn = clamp(end_pfn, range_start_pfn, range_end_pfn);
> +		hole_start_pfn = clamp(hole_start_pfn, range_start_pfn,
> +				       range_end_pfn);
>  
>  		if (end_pfn > start_pfn) {
>  			size = end_pfn - start_pfn;
>  			memmap_init_zone(size, nid, zone, start_pfn,
>  					 MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
>  		}
> +
> +		if (hole_start_pfn < start_pfn)
> +			pgcnt += init_unavailable_range(hole_start_pfn,
> +							start_pfn, zone, nid);
> +		hole_start_pfn = end_pfn;
>  	}

After applying the new 1/2, the above loop seems to be functionally a
no-op compared to what was in -mm yesterday, so the above looks great
as far as I'm concerned.

Unlike the simple fix, this will not loop over holes that aren't part
of memblock.memory or memblock.reserved, and it drops the static
variable, which would have required ordering and serialization.

By being functionally equivalent, it looks like it also suffers from
the same dependency on pfn 0 (and not just pfn 0) being reserved that
you pointed out earlier.

I suppose that to drop that further dependency we need an additional
round-down in this logic to the start of the pageblock-order or
max-order block, as mentioned yesterday?
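
To make the granularity I have in mind concrete, something along these
lines (untested sketch; pageblock_nr_pages and MAX_ORDER_NR_PAGES are
the existing macros, the helper name is made up):

/*
 * Untested sketch: round a hole start pfn down to the beginning of its
 * pageblock, or alternatively of its max-order block, so the struct
 * pages in front of the first pfn backed by memblock get initialized
 * as well.
 */
static inline unsigned long hole_block_start(unsigned long pfn)
{
	return round_down(pfn, pageblock_nr_pages);
	/* or, for max-order granularity:
	 * return round_down(pfn, MAX_ORDER_NR_PAGES);
	 */
}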

If the first pfn of a pageblock (or, maybe better, of a max-order
block) is valid but is in neither memblock.reserved nor
memblock.memory, and any other page in that pageblock is freed to the
buddy allocator, we should make sure the whole pageblock gets
initialized (or at least the pages with a pfn lower than the one that
was added to the buddy). So applying a round-down in the above loop
might just do the trick.
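
Roughly like this, on top of the hunk quoted above (again untested,
and the interaction with the clamp to range_start_pfn and with ranges
already covered by memmap_init_zone would need to be double checked):

		if (hole_start_pfn < start_pfn) {
			/*
			 * Round the hole start down to its pageblock (or
			 * max-order block) boundary, without going below
			 * the start of the zone range, so the whole block
			 * containing the hole gets its struct pages
			 * initialized.
			 */
			unsigned long init_start_pfn;

			init_start_pfn = round_down(hole_start_pfn,
						    pageblock_nr_pages);
			init_start_pfn = max(init_start_pfn, range_start_pfn);
			pgcnt += init_unavailable_range(init_start_pfn,
							start_pfn, zone, nid);
		}
		hole_start_pfn = end_pfn;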

Since the removal of that extra dependency is mostly orthogonal to the
above, I guess it's actually cleaner to do it incrementally.

I'd suggest also documenting why we're doing it in the code (not just
the commit header) of the incremental patch, by mentioning the
specific VM invariants we're enforcing, which the VM code has always
depended upon and which require the round-down, etc.

In the meantime I'll try to update all systems again with this
implementation to test it.

Thanks!
Andrea
