Message-ID: <56CBA181.8070606@gmail.com>
Date: Mon, 22 Feb 2016 16:02:09 -0800
From: Florian Fainelli <f.fainelli@...il.com>
To: Paul Burton <paul.burton@...tec.com>, linux-mips@...ux-mips.org,
Ralf Baechle <ralf@...ux-mips.org>
Cc: James Hogan <james.hogan@...tec.com>,
Joshua Kinard <kumba@...too.org>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
linux-kernel@...r.kernel.org,
"Maciej W. Rozycki" <macro@...esourcery.com>,
Markos Chandras <markos.chandras@...tec.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH 1/2] MIPS: Add barriers between dcache & icache flushes
On 22/02/16 10:09, Paul Burton wrote:
> Index-based cache operations may be arbitrarily reordered by
> out-of-order CPUs. Thus code which writes back the dcache & then
> invalidates the icache using indexed cache ops must include a barrier
> between operating on the two caches in order to prevent the scenario
> in which:
>
> - icache invalidation occurs.
>
> - icache fetch occurs, due to speculation.
>
> - dcache writeback occurs.
>
> If the above were allowed to happen then the icache would contain stale
> data. Forcing the dcache writeback to complete before the icache
> invalidation avoids this.
Is that also true for CPUs which have cpu_has_ic_fills_f_dc?
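
(To make sure I'm reading the intent right, here is a minimal sketch of
the ordering being enforced; flush_new_code() is a made-up name for
illustration, the helpers are the ones used in the diff below:)

static void flush_new_code(void)
{
	r4k_blast_dcache();	/* write the new instructions back from the D-cache */
	mb();			/* indexed cache ops may be reordered; force the writeback to complete */
	r4k_blast_icache();	/* only then invalidate possibly-stale I-cache lines */
}
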
>
> Signed-off-by: Paul Burton <paul.burton@...tec.com>
> Cc: James Hogan <james.hogan@...tec.com>
> ---
>
> arch/mips/mm/c-r4k.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
> index caac3d7..a49010c 100644
> --- a/arch/mips/mm/c-r4k.c
> +++ b/arch/mips/mm/c-r4k.c
> @@ -449,6 +449,7 @@ static inline void local_r4k___flush_cache_all(void * args)
>
> default:
> r4k_blast_dcache();
> + mb(); /* cache instructions may be reordered */
> r4k_blast_icache();
> break;
> }
> @@ -493,8 +494,10 @@ static inline void local_r4k_flush_cache_range(void * args)
> return;
>
> r4k_blast_dcache();
> - if (exec)
> + if (exec) {
> + mb(); /* cache instructions may be reordered */
> r4k_blast_icache();
> + }
> }
>
> static void r4k_flush_cache_range(struct vm_area_struct *vma,
> @@ -599,8 +602,13 @@ static inline void local_r4k_flush_cache_page(void *args)
> if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc)) {
> vaddr ? r4k_blast_dcache_page(addr) :
> r4k_blast_dcache_user_page(addr);
> - if (exec && !cpu_icache_snoops_remote_store)
> + if (exec)
> + mb(); /* cache instructions may be reordered */
> +
> + if (exec && !cpu_icache_snoops_remote_store) {
> r4k_blast_scache_page(addr);
> + mb(); /* cache instructions may be reordered */
> + }
> }
> if (exec) {
> if (vaddr && cpu_has_vtag_icache && mm == current->active_mm) {
> @@ -660,6 +668,7 @@ static inline void local_r4k_flush_icache_range(unsigned long start, unsigned long end)
> R4600_HIT_CACHEOP_WAR_IMPL;
> protected_blast_dcache_range(start, end);
> }
> + mb(); /* cache instructions may be reordered */
> }
>
> if (end - start > icache_size)
> @@ -798,6 +807,8 @@ static void local_r4k_flush_cache_sigtramp(void * arg)
> protected_writeback_dcache_line(addr & ~(dc_lsize - 1));
> if (!cpu_icache_snoops_remote_store && scache_size)
> protected_writeback_scache_line(addr & ~(sc_lsize - 1));
> + if ((dc_lsize || scache_size) && ic_lsize)
> + mb(); /* cache instructions may be reordered */
> if (ic_lsize)
> protected_flush_icache_line(addr & ~(ic_lsize - 1));
> if (MIPS4K_ICACHE_REFILL_WAR) {
>
--
Florian