Date:   Tue, 18 May 2021 13:53:46 +0300
From:   Mike Rapoport <rppt@...nel.org>
To:     "Russell King (Oracle)" <linux@...linux.org.uk>
Cc:     linux-arm-kernel@...ts.infradead.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Kefeng Wang <wangkefeng.wang@...wei.com>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 3/3] arm: extend pfn_valid to take into account freed
 memory map alignment

On Tue, May 18, 2021 at 10:44:27AM +0100, Russell King (Oracle) wrote:
> On Tue, May 18, 2021 at 12:06:13PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@...ux.ibm.com>
> > 
> > When the unused memory map is freed, the preserved part of the memory map is
> > extended to match pageblock boundaries, because lots of core mm
> > functionality relies on homogeneity of the memory map within pageblock
> > boundaries.
> > 
> > Since pfn_valid() is used to check whether there is a valid memory map
> > entry for a PFN, make it return true also for PFNs that have memory map
> > entries even if there is no actual memory populated there.
> 
> I thought pfn_valid() was a particularly hot path... do we really want
> to be doing multiple lookups here? Is there no better solution?

It is hot, but for more, hmm, straightforward memory layouts it'll take the

	if (memblock_is_map_memory(addr))
		return 1;

branch, I think.

Most mm operations are on pages that are fed into the buddy allocator, and
if there are no holes with weird alignment, pfn_valid() will return 1 right
away.
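
To make the concern concrete, the check being discussed has roughly this
shape (a simplified sketch of the patched arm pfn_valid(), not the exact
hunk; pageblock_size here stands for PAGE_SIZE * pageblock_nr_pages, and
the two aligned lookups after the fast path are the extra ones Russell is
asking about):

	int pfn_valid(unsigned long pfn)
	{
		phys_addr_t addr = __pfn_to_phys(pfn);
		unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;

		if (__phys_to_pfn(addr) != pfn)
			return 0;

		/* fast path: straightforward layouts resolve here */
		if (memblock_is_map_memory(addr))
			return 1;

		/*
		 * The memory map is preserved up to pageblock boundaries,
		 * so a PFN close to a present chunk still has a valid
		 * memory map entry even with no memory populated there.
		 */
		if (memblock_is_map_memory(ALIGN(addr + 1, pageblock_size)) ||
		    memblock_is_map_memory(ALIGN_DOWN(addr, pageblock_size)))
			return 1;

		return 0;
	}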

Now thinking about it, with the patch that marks NOMAP areas reserved in
the memory map [1], we could also use

	memblock_overlaps_region(&memblock.memory,
				 ALIGN_DOWN(addr, pageblock_size),
				 pageblock_size)

to have only one lookup.
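
For reference, a sketch of how that single lookup could slot in (again
assuming a local pageblock_size defined as PAGE_SIZE * pageblock_nr_pages;
memblock_overlaps_region() takes a base address and a size):

	int pfn_valid(unsigned long pfn)
	{
		phys_addr_t addr = __pfn_to_phys(pfn);
		unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;

		if (__phys_to_pfn(addr) != pfn)
			return 0;

		/*
		 * With NOMAP regions marked reserved in the memory map [1],
		 * one lookup over the pageblock containing addr covers both
		 * mapped memory and the preserved part of the memory map.
		 */
		return memblock_overlaps_region(&memblock.memory,
						ALIGN_DOWN(addr, pageblock_size),
						pageblock_size);
	}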

A completely different approach would be to simply stop freeing the memory map
with SPARSEMEM. For systems like the one Kefeng is using, it would waste less
than 2M out of 1.5G.
It is worse, of course, for old systems with small memory. The worst case is
mach-ep93xx with a section size of 256M, where I presume 16M of populated
memory per section would be typical for such machines.
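
A back-of-the-envelope for that worst case, assuming sizeof(struct page)
is 32 bytes on 32-bit ARM:

	256M section   / 4K pages = 65536 struct pages ~= 2M   of memory map
	 16M populated / 4K pages =  4096 struct pages ~= 128K actually used

so keeping the whole section's memory map would pin roughly 1.9M per
sparsely populated section.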

[1] https://lore.kernel.org/lkml/20210511100550.28178-3-rppt@kernel.org

-- 
Sincerely yours,
Mike.
