Date:   Tue, 5 Oct 2021 11:24:08 +0530
From:   Mahesh J Salgaonkar <mahesh@...ux.ibm.com>
To:     "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
Cc:     Sourabh Jain <sourabhjain@...ux.ibm.com>, mpe@...erman.id.au,
        linuxppc-dev@...abs.org, mahesh@...ux.vnet.ibm.com,
        linux-kernel@...r.kernel.org,
        Abdul haleem <abdhalee@...ux.vnet.ibm.com>,
        hbathini@...ux.ibm.com
Subject: Re: [PATCH 1/3] fixup mmu_features immediately after getting cpu pa
 features.

On 2021-10-04 21:02:21 Mon, Aneesh Kumar K.V wrote:
> On 10/4/21 20:41, Sourabh Jain wrote:
> > From: Mahesh Salgaonkar <mahesh@...ux.ibm.com>
> > 
> > On systems with radix support available, early_radix_enabled() starts
> > returning true for a small window (until mmu_early_init_devtree() is
> > called) even when radix mode is disabled on the kernel command line.
> > This causes ppc64_bolted_size() to return ULONG_MAX in HPT mode instead
> > of the supported segment size during boot CPU paca allocation.
> > 
> > With kernel command line = "... disable_radix":
> > 
> > early_init_devtree:			  <- early_radix_enabled() = false
> >    early_init_dt_scan_cpus:		  <- early_radix_enabled() = false
> >        ...
> >        check_cpu_pa_features:		  <- early_radix_enabled() = false
> >        ...				^ <- early_radix_enabled() = TRUE
> >        allocate_paca:			| <- early_radix_enabled() = TRUE
> >            ...                           |
> >            ppc64_bolted_size:		| <- early_radix_enabled() = TRUE
> >                if (early_radix_enabled())| <- early_radix_enabled() = TRUE
> >                    return ULONG_MAX;     |
> >        ...                               |
> >    ...					| <- early_radix_enabled() = TRUE
> >    ...					| <- early_radix_enabled() = TRUE
> >    mmu_early_init_devtree()              V
> >    ...					  <- early_radix_enabled() = false
> > 
> > So far we have not seen any issue because allocate_paca() takes the
> > minimum of ppc64_bolted_size() and rma_size while allocating the paca.
> > However, it is better to close this window by fixing up the mmu features
> > as early as possible. This makes early_radix_enabled() and
> > ppc64_bolted_size() return valid values in radix-disabled mode, and lets
> > a subsequent patch depend on the early_radix_enabled() check while
> > detecting the supported segment size in HPT mode.
> > 
> > Signed-off-by: Mahesh Salgaonkar <mahesh@...ux.ibm.com>
> > Signed-off-by: Sourabh Jain <sourabhjain@...ux.ibm.com>
> > Reported-and-tested-by: Abdul haleem <abdhalee@...ux.vnet.ibm.com>
> > ---
> >   arch/powerpc/include/asm/book3s/64/mmu.h | 1 +
> >   arch/powerpc/include/asm/mmu.h           | 1 +
> >   arch/powerpc/kernel/prom.c               | 1 +
> >   arch/powerpc/mm/init_64.c                | 5 ++++-
> >   4 files changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> > index c02f42d1031e..69a89fa1330d 100644
> > --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> > +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> > @@ -197,6 +197,7 @@ extern int mmu_vmemmap_psize;
> >   extern int mmu_io_psize;
> >   /* MMU initialization */
> > +void mmu_cpu_feature_fixup(void);
> >   void mmu_early_init_devtree(void);
> >   void hash__early_init_devtree(void);
> >   void radix__early_init_devtree(void);
> > diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
> > index 8abe8e42e045..c8eafd401fe9 100644
> > --- a/arch/powerpc/include/asm/mmu.h
> > +++ b/arch/powerpc/include/asm/mmu.h
> > @@ -401,6 +401,7 @@ extern void early_init_mmu(void);
> >   extern void early_init_mmu_secondary(void);
> >   extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
> >   				       phys_addr_t first_memblock_size);
> > +static inline void mmu_cpu_feature_fixup(void) { }
> >   static inline void mmu_early_init_devtree(void) { }
> >   static inline void pkey_early_init_devtree(void) {}
> > diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> > index 2e67588f6f6e..1727a3abe6c1 100644
> > --- a/arch/powerpc/kernel/prom.c
> > +++ b/arch/powerpc/kernel/prom.c
> > @@ -380,6 +380,7 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
> >   		check_cpu_pa_features(node);
> >   	}
> > +	mmu_cpu_feature_fixup();
> 
> Can you do that call inside check_cpu_pa_features()? Or is it because we
> have the same issue on baremetal platforms?

Yup, the same issue exists on baremetal as well when dt_cpu_ftrs_in_use
is true. Hence calling it after the if (!dt_cpu_ftrs_in_use) code block
takes care of both pseries and baremetal platforms.

> 
> Can we also rename this to indicate we are sanitizing the feature flags
> based on the kernel command line? Something like:
> 
> /* Update cpu features based on kernel command line */
> update_cpu_features();

Sure will do.

Thanks for your review.
-Mahesh.
