Message-ID: <CAPhsuW4JVUUzMfNQwTE_uzp3bnO3EAYDikU1Nyx6x-6ROFDNOA@mail.gmail.com>
Date: Mon, 5 Jun 2023 10:05:00 -0700
From: Song Liu <song@...nel.org>
To: Puranjay Mohan <puranjay12@...il.com>
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
martin.lau@...ux.dev, catalin.marinas@....com,
mark.rutland@....com, bpf@...r.kernel.org, kpsingh@...nel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next 3/3] bpf, arm64: use bpf_jit_binary_pack_alloc
On Mon, Jun 5, 2023 at 12:40 AM Puranjay Mohan <puranjay12@...il.com> wrote:
>
> Use bpf_jit_binary_pack_alloc for memory management of JIT binaries in
> ARM64 BPF JIT. The bpf_jit_binary_pack_alloc creates a pair of RW and RX
> buffers. The JIT writes the program into the RW buffer. When the JIT is
> done, the program is copied to the final ROX buffer
> with bpf_jit_binary_pack_finalize.
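
For readers not yet familiar with the pack allocator, the flow described above
is roughly the following (a minimal sketch against the bpf_jit_binary_pack_*
API in linux/filter.h; variable names and error handling here are illustrative,
the x86 JIT is the reference user):

#include <linux/filter.h>

static void pack_alloc_flow_sketch(struct bpf_prog *prog,
				   unsigned int image_size,
				   bpf_jit_fill_hole_t fill_insns)
{
	struct bpf_binary_header *ro_header, *rw_header;
	u8 *ro_image, *rw_image;

	/* One RX region carved out of the shared prog pack, plus a plain
	 * RW buffer that the JIT can write into directly.
	 */
	ro_header = bpf_jit_binary_pack_alloc(image_size, &ro_image,
					      sizeof(u32), &rw_header,
					      &rw_image, fill_insns);
	if (!ro_header)
		return;

	/* ... emit the program into rw_image ... */

	/* Copy the RW image into the RX region (bpf_arch_text_copy() under
	 * the hood) and release the RW buffer.
	 */
	if (bpf_jit_binary_pack_finalize(prog, ro_header, rw_header))
		return;

	prog->bpf_func = (void *)ro_image;
	prog->jited = 1;
}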
>
> Implement bpf_arch_text_copy() and bpf_arch_text_invalidate() for ARM64
> JIT as these functions are required by bpf_jit_binary_pack allocator.
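
A side note for context: bpf_arch_text_copy() is expected to return dst on
success and an ERR_PTR() on failure (the generic __weak version returns
ERR_PTR(-ENOTSUPP)), so the arm64 version will presumably have the shape of
the sketch below. copy_to_rox() is a made-up placeholder here, not a real
API: it stands in for whatever primitive the series uses to write through a
temporary writable alias of the read-only region.

#include <linux/bpf.h>
#include <linux/err.h>

void *bpf_arch_text_copy(void *dst, void *src, size_t len)
{
	/* copy_to_rox() is a placeholder (see above): write @len bytes of
	 * @src to the read-only destination, returning 0 on success.
	 */
	if (copy_to_rox(dst, src, len))
		return ERR_PTR(-EINVAL);

	return dst;
}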
>
> Signed-off-by: Puranjay Mohan <puranjay12@...il.com>
> ---
> arch/arm64/net/bpf_jit_comp.c | 119 +++++++++++++++++++++++++++++-----
> 1 file changed, 102 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 145b540ec34f..ee9414cadea8 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -76,6 +76,7 @@ struct jit_ctx {
> int *offset;
> int exentry_idx;
> __le32 *image;
> + __le32 *ro_image;
We are using:
image vs. ro_image
rw_header vs. header
rw_image_ptr vs. image_ptr
Shall we be more consistent about which side gets the rw_ or ro_ prefix?
> u32 stack_size;
> int fpb_offset;
> };
> @@ -205,6 +206,20 @@ static void jit_fill_hole(void *area, unsigned int size)
> *ptr++ = cpu_to_le32(AARCH64_BREAK_FAULT);
> }
>
> +int bpf_arch_text_invalidate(void *dst, size_t len)
> +{
> + __le32 *ptr;
> + int ret;
> +
> + for (ptr = dst; len >= sizeof(u32); len -= sizeof(u32)) {
> + ret = aarch64_insn_patch_text_nosync(ptr++, AARCH64_BREAK_FAULT);
I think one aarch64_insn_patch_text_nosync() call per 4 bytes is too much overhead.
Shall we add a helper that does this in bigger chunks?
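
Something along these lines, maybe? (Rough sketch only: the scratch buffer
size is arbitrary, and copy_to_rox() is a made-up placeholder for a bulk
write primitive into the read-only region, not an existing API. It is meant
to drop into bpf_jit_comp.c, reusing its existing includes.)

int bpf_arch_text_invalidate(void *dst, size_t len)
{
	/* Fill a scratch buffer with break instructions once, then push it
	 * to the destination in chunks instead of patching 4 bytes at a
	 * time.
	 */
	__le32 buf[64];
	size_t chunk;
	int i;

	for (i = 0; i < ARRAY_SIZE(buf); i++)
		buf[i] = cpu_to_le32(AARCH64_BREAK_FAULT);

	while (len >= sizeof(u32)) {
		chunk = min_t(size_t, len, sizeof(buf));
		if (copy_to_rox(dst, buf, chunk))
			return -EINVAL;
		dst += chunk;
		len -= chunk;
	}

	return 0;
}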
Thanks,
Song
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
[...]