Message-ID: <87imppuf0w.fsf@mpe.ellerman.id.au>
Date: Thu, 19 Sep 2019 13:43:43 +1000
From: Michael Ellerman <mpe@...erman.id.au>
To: Alastair D'Silva <alastair@....ibm.com>, alastair@...ilva.org
Cc: stable@...r.kernel.org,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Christophe Leroy <christophe.leroy@....fr>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Qian Cai <cai@....pw>, Thomas Gleixner <tglx@...utronix.de>,
Nicholas Piggin <npiggin@...il.com>,
Allison Randal <allison@...utok.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
David Hildenbrand <david@...hat.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/5] powerpc: Allow flush_icache_range to work across ranges >4GB
"Alastair D'Silva" <alastair@....ibm.com> writes:
> From: Alastair D'Silva <alastair@...ilva.org>
>
> When calling flush_icache_range with a size >4GB, we were masking
> off the upper 32 bits, so we would incorrectly flush a range smaller
> than intended.
>
> __kernel_sync_dicache in the 64 bit VDSO has the same bug.

Please fix that in a separate patch.

Your subject doesn't mention __kernel_sync_dicache(), and also the two
changes backport differently, so it's better if they're done as separate
patches.

cheers

> This patch replaces the 32 bit shifts with 64 bit ones, so that
> the full size is accounted for.
>
> Signed-off-by: Alastair D'Silva <alastair@...ilva.org>
> Cc: stable@...r.kernel.org
> ---
> arch/powerpc/kernel/misc_64.S | 4 ++--
> arch/powerpc/kernel/vdso64/cacheflush.S | 4 ++--
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
> index b55a7b4cb543..9bc0aa9aeb65 100644
> --- a/arch/powerpc/kernel/misc_64.S
> +++ b/arch/powerpc/kernel/misc_64.S
> @@ -82,7 +82,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
> subf r8,r6,r4 /* compute length */
> add r8,r8,r5 /* ensure we get enough */
> lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of cache block size */
> - srw. r8,r8,r9 /* compute line count */
> + srd. r8,r8,r9 /* compute line count */
> beqlr /* nothing to do? */
> mtctr r8
> 1: dcbst 0,r6
> @@ -98,7 +98,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
> subf r8,r6,r4 /* compute length */
> add r8,r8,r5
> lwz r9,ICACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of Icache block size */
> - srw. r8,r8,r9 /* compute line count */
> + srd. r8,r8,r9 /* compute line count */
> beqlr /* nothing to do? */
> mtctr r8
> 2: icbi 0,r6
> diff --git a/arch/powerpc/kernel/vdso64/cacheflush.S b/arch/powerpc/kernel/vdso64/cacheflush.S
> index 3f92561a64c4..526f5ba2593e 100644
> --- a/arch/powerpc/kernel/vdso64/cacheflush.S
> +++ b/arch/powerpc/kernel/vdso64/cacheflush.S
> @@ -35,7 +35,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
> subf r8,r6,r4 /* compute length */
> add r8,r8,r5 /* ensure we get enough */
> lwz r9,CFG_DCACHE_LOGBLOCKSZ(r10)
> - srw. r8,r8,r9 /* compute line count */
> + srd. r8,r8,r9 /* compute line count */
> crclr cr0*4+so
> beqlr /* nothing to do? */
> mtctr r8
> @@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
> subf r8,r6,r4 /* compute length */
> add r8,r8,r5
> lwz r9,CFG_ICACHE_LOGBLOCKSZ(r10)
> - srw. r8,r8,r9 /* compute line count */
> + srd. r8,r8,r9 /* compute line count */
> crclr cr0*4+so
> beqlr /* nothing to do? */
> mtctr r8
> --
> 2.21.0
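
For context, a minimal C sketch of the truncation the srw-to-srd change
fixes: srw. shifts only the low 32 bits of the length register when
computing the cache-line count, while srd. shifts the full 64-bit value.
The variable names and the 128-byte line size below are illustrative,
not taken from the kernel code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t len = 5ULL << 30;	/* 5 GiB flush request (>4GB) */
	unsigned log_block = 7;		/* log2 of a 128-byte cache line */

	/* srw.-like behaviour: only the low 32 bits of len are shifted,
	 * so the upper bits of a >4GB length are silently dropped. */
	uint64_t lines_srw = (uint32_t)len >> log_block;

	/* srd.-like behaviour: the full 64-bit length is shifted. */
	uint64_t lines_srd = len >> log_block;

	printf("srw-style line count: %llu\n", (unsigned long long)lines_srw);
	printf("srd-style line count: %llu\n", (unsigned long long)lines_srd);
	return 0;
}

With a 5 GiB length the srw-style count covers only the low 1 GiB of the
range, so the loop flushes far fewer lines than intended; the srd-style
count covers the whole range.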