Message-ID: <Z-6WuzNDSwPN4Enn@kernel.org>
Date: Thu, 3 Apr 2025 17:10:03 +0300
From: Mike Rapoport <rppt@...nel.org>
To: David Woodhouse <dwmw2@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	"Sauerwein, David" <dssauerw@...zon.de>,
	Anshuman Khandual <anshuman.khandual@....com>,
	Ard Biesheuvel <ardb@...nel.org>,
	Catalin Marinas <catalin.marinas@....com>,
	David Hildenbrand <david@...hat.com>, Marc Zyngier <maz@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Mike Rapoport <rppt@...ux.ibm.com>, Will Deacon <will@...nel.org>,
	kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH 3/3] mm: Implement for_each_valid_pfn() for
 CONFIG_SPARSEMEM

On Thu, Apr 03, 2025 at 08:07:22AM +0100, David Woodhouse wrote:
> On Thu, 2025-04-03 at 09:24 +0300, Mike Rapoport wrote:
> > with a small nit below
> > 
> > > +static inline bool first_valid_pfn(unsigned long *p_pfn)
> > > +{
> > > +	unsigned long pfn = *p_pfn;
> > > +	unsigned long nr = pfn_to_section_nr(pfn);
> > > +	struct mem_section *ms;
> > > +	bool ret = false;
> > > +
> > > +	ms = __pfn_to_section(pfn);
> > > +
> > > +	rcu_read_lock_sched();
> > > +
> > > +	while (!ret && nr <= __highest_present_section_nr) {
> > 
> > This could be just for(;;), we anyway break when ret becomes true or we get
> > past last present section.
> 
> True for the 'ret' part but not *nicely* for the last present section.
> If the original pfn is higher than the last present section, it could
> trigger that check before entering the loop.
> 
> Yes, in that case 'ms' will be NULL, valid_section(NULL) is false and
> you're right that it'll make it through to the check in the loop
> without crashing. So it would currently be harmless, but I didn't like
> it. It's relying on the loop not to do the wrong thing with an input
> which is arguably invalid.
> 
> I'll see if I can make it neater. I may drop the 'ret' variable
> completely and just turn the match clause into unlock-and-return-true.
> I *like* having a single unlock site. But I think I like simpler loop
> code more than that.
> 
> FWIW I think the check for (PHYS_PFN(PFN_PHYS(pfn)) != pfn) at the
> start of pfn_valid() a few lines above is similarly redundant. Because
> if the high bits are set in the PFN then pfn_to_section_nr(pfn) is
> surely going to be higher than NR_MEM_SECTIONS and it'll get thrown out
> at the very next check, won't it?

I believe the check for (PHYS_PFN(PFN_PHYS(pfn)) != pfn) made it into the
generic version from arm64's pfn_valid(), which historically supported both
FLATMEM and SPARSEMEM.

I can't think of a configuration in which (PHYS_PFN(PFN_PHYS(pfn)) != pfn)
holds but pfn_to_section_nr(pfn) is not higher than NR_MEM_SECTIONS, though
with all the variants arm64 has for PAGE_SHIFT and ARM64_PA_BITS I could be
missing something.
 
> I care because I didn't bother to duplicate that 'redundant' check in
> my first_valid_pfn(), so if there's a reason for it that I'm missing, I
> should take a closer look.
> 
> I'm also missing the reason why the FLATMEM code in memory_model.h does
> 'unsigned long pfn_offset = ARCH_PFN_OFFSET' and then uses its local
> pfn_offset variable, instead of just using ARCH_PFN_OFFSET directly as
> I do in the FLATMEM for_each_valid_pfn() macro.

Don't remember now, but I surely had some $REASON for that :) 

-- 
Sincerely yours,
Mike.
