Date: Mon, 13 May 2024 06:10:30 +0000
From: Christophe Leroy <christophe.leroy@...roup.eu>
To: Michael Ellerman <mpe@...erman.id.au>, Puranjay Mohan
	<puranjay@...nel.org>, Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
	<daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau
	<martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>, Song Liu
	<song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>, John Fastabend
	<john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>, Stanislav Fomichev
	<sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
	"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>, Nicholas Piggin
	<npiggin@...il.com>, Aneesh Kumar K.V <aneesh.kumar@...nel.org>, Hari Bathini
	<hbathini@...ux.ibm.com>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
	"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "puranjay12@...il.com" <puranjay12@...il.com>
Subject: Re: [PATCH bpf] powerpc/bpf: enforce full ordering for ATOMIC
 operations with BPF_FETCH



On 08/05/2024 at 07:05, Michael Ellerman wrote:
> Puranjay Mohan <puranjay@...nel.org> writes:
>> The Linux Kernel Memory Model [1][2] requires RMW operations that have a
>> return value to be fully ordered.
>>
>> BPF atomic operations with BPF_FETCH (including BPF_XCHG and
>> BPF_CMPXCHG) return a value, so they must be JITed to fully ordered
>> operations. The powerpc JIT currently emits relaxed operations for
>> these.
> 
> Thanks for catching this.
> 
>> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
>> index 2f39c50ca729..b635e5344e8a 100644
>> --- a/arch/powerpc/net/bpf_jit_comp32.c
>> +++ b/arch/powerpc/net/bpf_jit_comp32.c
>> @@ -853,6 +853,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>>   			/* Get offset into TMP_REG */
>>   			EMIT(PPC_RAW_LI(tmp_reg, off));
>>   			tmp_idx = ctx->idx * 4;
>> +			/*
>> +			 * Enforce full ordering for operations with BPF_FETCH by emitting a 'sync'
>> +			 * before and after the operation.
>> +			 *
>> +			 * This is a requirement in the Linux Kernel Memory Model.
>> +			 * See __cmpxchg_u64() in asm/cmpxchg.h as an example.
>> +			 */
>> +			if (imm & BPF_FETCH)
>> +				EMIT(PPC_RAW_SYNC());
>>   			/* load value from memory into r0 */
>>   			EMIT(PPC_RAW_LWARX(_R0, tmp_reg, dst_reg, 0));
>>   
>> @@ -905,6 +914,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
>>   
>>   			/* For the BPF_FETCH variant, get old data into src_reg */
>>   			if (imm & BPF_FETCH) {
>> +				/* Emit 'sync' to enforce full ordering */
>> +				EMIT(PPC_RAW_SYNC());
>>   				EMIT(PPC_RAW_MR(ret_reg, ax_reg));
>>   				if (!fp->aux->verifier_zext)
>>   					EMIT(PPC_RAW_LI(ret_reg - 1, 0)); /* higher 32-bit */
> 
> On 32-bit there are non-SMP systems where those syncs will probably be expensive.
> 
> I think just adding an IS_ENABLED(CONFIG_SMP) around the syncs is
> probably sufficient. Christophe?

Yes indeed, thanks for spotting it. The sync is only required on SMP and 
is worth avoiding on non-SMP.
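
Something like the sketch below should do it (untested, just to 
illustrate; it reuses the names from the quoted hunks, and the second 
sync site would get the same guard):

	/*
	 * Full ordering is only a concern on SMP. IS_ENABLED(CONFIG_SMP)
	 * is a compile-time constant, so on non-SMP kernels the branch is
	 * folded away and no sync is emitted at all.
	 */
	if ((imm & BPF_FETCH) && IS_ENABLED(CONFIG_SMP))
		EMIT(PPC_RAW_SYNC());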

Thanks
Christophe
