Message-ID: <mhng-778da899-6dc7-4b10-825d-85284990534b@palmer-si-x1e>
Date: Sun, 16 Jun 2019 20:09:28 -0700 (PDT)
From: Palmer Dabbelt <palmer@...ive.com>
To: joel@...g.id.au
CC: linux-riscv@...ts.infradead.org,
Paul Walmsley <paul.walmsley@...ive.com>, marco@...red.org,
me@...losedp.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] RISC-V: Break load reservations during switch_to
On Sun, 16 Jun 2019 10:54:06 PDT (-0700), joel@...g.id.au wrote:
> On 19-06-07 15:22:22, Palmer Dabbelt wrote:
>> The comment describes why in detail. This was found because QEMU never
>> gives up load reservations; the issue is unlikely to manifest on real
>> hardware.
>
> Makes sense; however, it obviously will not help until QEMU actually
> clears load reservations on SC (or otherwise handles the interleaved
> SC case).
Ya, I wasn't paying close enough attention. I think this should do it:
diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
index f6dbbc065e15..001a68ced005 100644
--- a/target/riscv/insn_trans/trans_rva.inc.c
+++ b/target/riscv/insn_trans/trans_rva.inc.c
@@ -69,6 +69,7 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
     gen_set_gpr(a->rd, dat);
     gen_set_label(l2);
+    tcg_gen_movi_tl(load_res, -1);
     tcg_temp_free(dat);
     tcg_temp_free(src1);
     tcg_temp_free(src2);
I'll send the patch out to the QEMU mailing list.
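
For reference, the interleaved-SC case can be demonstrated from user
space with something like the sketch below (untested, from memory; it
assumes an rv64 toolchain with the A extension, and the variable names
are made up). Without the QEMU change above, the final sc.w can succeed
because load_res was never cleared by the failed sc.w to a different
address:

    /* Reserve one address, SC to another, then SC back to the first.
     * The ISA requires any SC to invalidate the hart's reservation,
     * so the final sc.w must fail once QEMU clears load_res. */
    #include <stdio.h>

    static volatile int target;
    static volatile int other;

    int main(void)
    {
        int old, err, one = 1;

        __asm__ __volatile__ (
            "lr.w %0, (%2)\n\t"     /* take a reservation on &target */
            "sc.w x0, %3, (%4)\n\t" /* SC elsewhere: must break it   */
            "sc.w %1, %3, (%2)\n\t" /* must now fail (rd != 0)       */
            : "=&r" (old), "=&r" (err)
            : "r" (&target), "r" (one), "r" (&other)
            : "memory");

        printf("final sc.w %s\n", err ? "failed (correct)"
                                      : "succeeded (reservation leaked)");
        return 0;
    }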
> See comment inline.
>
>> Thanks to Carlos Eduardo for finding the bug!
>>
>> Signed-off-by: Palmer Dabbelt <palmer@...ive.com>
>> ---
>> Changes since v1 <20190605231735.26581-1-palmer@...ive.com>:
>>
>> * REG_SC is now defined as a helper macro, for any code that wants to SC
>>   a register-sized value.
>> * The explicit #ifdef to check that TASK_THREAD_RA_RA is 0 has been
>>   removed. Instead we rely on the assembler to catch non-zero SC
>>   offsets. I've tested that this does actually work (see the note just
>>   below).
>>
>> arch/riscv/include/asm/asm.h | 1 +
>> arch/riscv/kernel/entry.S | 11 +++++++++++
>> 2 files changed, 12 insertions(+)
>>
>> diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
>> index 5ad4cb622bed..946b671f996c 100644
>> --- a/arch/riscv/include/asm/asm.h
>> +++ b/arch/riscv/include/asm/asm.h
>> @@ -30,6 +30,7 @@
>>
>>  #define REG_L __REG_SEL(ld, lw)
>>  #define REG_S __REG_SEL(sd, sw)
>> +#define REG_SC __REG_SEL(sc.w, sc.d)
>
> The instructions appear to be inverted here (i.e., it should be
> "sc.d, sc.w").
Thanks, I'll send a v3.
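
For anyone who hasn't internalized __REG_SEL: it expands to its first
argument when __riscv_xlen is 64 and to its second when it is 32
(paraphrased from asm.h, ignoring the stringification wrapper), so
REG_L/REG_S put the 64-bit instruction first and REG_SC has to match:

    /* Rough shape of the selector in asm.h (paraphrased): */
    #if __riscv_xlen == 64
    #define __REG_SEL(a, b)  a
    #else
    #define __REG_SEL(a, b)  b
    #endif

    /* The v3 definition therefore needs to be: */
    #define REG_SC           __REG_SEL(sc.d, sc.w)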
>>  #define SZREG __REG_SEL(8, 4)
>>  #define LGREG __REG_SEL(3, 2)
>>
>> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
>> index 1c1ecc238cfa..af685782af17 100644
>> --- a/arch/riscv/kernel/entry.S
>> +++ b/arch/riscv/kernel/entry.S
>> @@ -330,6 +330,17 @@ ENTRY(__switch_to)
>>          add a3, a0, a4
>>          add a4, a1, a4
>>          REG_S ra, TASK_THREAD_RA_RA(a3)
>> +        /*
>> +         * The Linux ABI allows programs to depend on load reservations being
>> +         * broken on context switches, but the ISA doesn't require that the
>> +         * hardware ever breaks a load reservation. The only way to break a
>> +         * load reservation is with a store conditional, so we emit one here.
>> +         * Since nothing ever takes a load reservation on TASK_THREAD_RA_RA,
>> +         * we know this will always fail, but just to be on the safe side
>> +         * this writes the same value that was unconditionally written by
>> +         * the previous instruction.
>> +         */
>> +        REG_SC x0, ra, TASK_THREAD_RA_RA(a3)
>>          REG_S sp, TASK_THREAD_SP_RA(a3)
>>          REG_S s0, TASK_THREAD_S0_RA(a3)
>>          REG_S s1, TASK_THREAD_S1_RA(a3)
>> --
>> 2.21.0
>>
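
And to spell out the failure mode the new comment guards against (a
hypothetical interleaving, assuming hardware or an emulator that never
ages out reservations):

    task A:  lr.w  t0, (X)      # the hart now holds a reservation on X
             ---- A is preempted; __switch_to runs, no SC executed ----
    task B:  sc.w  t1, t2, (X)  # B never executed an lr.w, but the
                                # stale reservation can let this succeed

The dummy REG_SC in __switch_to guarantees the reservation is dead
before B runs, so B's stray sc.w fails the way the ABI promises.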