Message-ID: <20160203144838.GG5464@jhogan-linux.le.imgtec.org>
Date: Wed, 3 Feb 2016 14:48:39 +0000
From: James Hogan <james.hogan@...tec.com>
To: Paul Burton <paul.burton@...tec.com>
CC: <linux-mips@...ux-mips.org>, Ralf Baechle <ralf@...ux-mips.org>,
"Markos Chandras" <markos.chandras@...tec.com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 03/15] MIPS: pm-cps: Avoid offset overflow on MIPSr6
On Wed, Feb 03, 2016 at 03:15:23AM +0000, Paul Burton wrote:
> From: Markos Chandras <markos.chandras@...tec.com>
>
> This is similar to commit 934c79231c1b ("MIPS: asm: r4kcache: Add MIPS
> R6 cache unroll functions"). The CACHE instruction has been redefined
> for MIPSr6, reducing its offset field to 8 bits. This leads to
> micro-assembler field overflow warnings like the following when
> booting SMP MIPSr6 cores:
>
> Call Trace:
> [<ffffffff8010af88>] show_stack+0x68/0x88
> [<ffffffff8056ddf0>] dump_stack+0x68/0x88
> [<ffffffff801305bc>] warn_slowpath_common+0x8c/0xc8
> [<ffffffff80130630>] warn_slowpath_fmt+0x38/0x48
> [<ffffffff80125814>] build_insn+0x514/0x5c0
> [<ffffffff806ee134>] cps_gen_cache_routine.isra.3+0xe0/0x1b8
> [<ffffffff806ee570>] cps_pm_init+0x364/0x9ec
> [<ffffffff80100538>] do_one_initcall+0x90/0x1a8
> [<ffffffff806e8c14>] kernel_init_freeable+0x160/0x21c
> [<ffffffff8056b6a0>] kernel_init+0x10/0xf8
> [<ffffffff801059f8>] ret_from_kernel_thread+0x14/0x1c
>
> We fix this by incrementing the base register on each loop iteration.
>
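Just to illustrate for anyone reading along (this isn't the actual uasm
code, and the 8-bit width mentioned above plus a 64-byte line size are
only assumptions for the sake of example), a standalone sketch of how
quickly i * cache->linesz outgrows a reduced signed offset field:

  #include <stdbool.h>
  #include <stdio.h>

  /* Can `off` be encoded in a signed immediate field of `bits` bits? */
  static bool fits_signed_field(long off, unsigned bits)
  {
          long min = -(1L << (bits - 1));
          long max = (1L << (bits - 1)) - 1;

          return off >= min && off <= max;
  }

  int main(void)
  {
          const unsigned linesz = 64;     /* illustrative line size */
          unsigned i;

          for (i = 0; i < 32; i++) {
                  long off = (long)i * linesz;

                  if (!fits_signed_field(off, 8))
                          printf("line %2u: offset %4ld overflows the field\n",
                                 i, off);
          }
          return 0;
  }

With those numbers everything past the second line overflows, which is
why emitting a zero offset and an addiu of the base register per
iteration avoids the warning regardless of unroll_lines.
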
> Signed-off-by: Markos Chandras <markos.chandras@...tec.com>
> Signed-off-by: Paul Burton <paul.burton@...tec.com>
> ---
>
> arch/mips/kernel/pm-cps.c | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
> index f63a289..524ba11 100644
> --- a/arch/mips/kernel/pm-cps.c
> +++ b/arch/mips/kernel/pm-cps.c
> @@ -224,11 +224,18 @@ static void __init cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
> uasm_build_label(pl, *pp, lbl);
>
> /* Generate the cache ops */
> - for (i = 0; i < unroll_lines; i++)
> - uasm_i_cache(pp, op, i * cache->linesz, t0);
> + for (i = 0; i < unroll_lines; i++) {
Maybe worth adding a comment here to mention the different immediate
field size in the r6 encoding, otherwise it could look a bit mysterious
to the reader.
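Something along these lines perhaps (just a sketch, reusing the calls
already in the patch):

        /* Generate the cache ops */
        for (i = 0; i < unroll_lines; i++) {
                if (cpu_has_mips_r6) {
                        /*
                         * MIPSr6 reduced the CACHE instruction's offset
                         * field, so per-line immediate offsets can overflow
                         * it. Emit a zero offset and advance the base
                         * register after each op instead.
                         */
                        uasm_i_cache(pp, op, 0, t0);
                        uasm_i_addiu(pp, t0, t0, cache->linesz);
                } else {
                        uasm_i_cache(pp, op, i * cache->linesz, t0);
                }
        }
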
Cheers
James
> + if (cpu_has_mips_r6) {
> + uasm_i_cache(pp, op, 0, t0);
> + uasm_i_addiu(pp, t0, t0, cache->linesz);
> + } else {
> + uasm_i_cache(pp, op, i * cache->linesz, t0);
> + }
> + }
>
> - /* Update the base address */
> - uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
> + if (!cpu_has_mips_r6)
> + /* Update the base address */
> + uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
>
> /* Loop if we haven't reached the end address yet */
> uasm_il_bne(pp, pr, t0, t1, lbl);
> --
> 2.7.0
>
>