Message-ID: <CAPhsuW4K+oDsytLvz4n44Fe3Pbjmpu6tnCk63A-UVxCZpz_rjg@mail.gmail.com>
Date: Tue, 25 Jan 2022 14:25:40 -0800
From: Song Liu <song@...nel.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Song Liu <songliubraving@...com>,
Ilya Leoshkevich <iii@...ux.ibm.com>,
bpf <bpf@...r.kernel.org>,
Network Development <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Kernel Team <Kernel-team@...com>,
Peter Zijlstra <peterz@...radead.org>, X86 ML <x86@...nel.org>
Subject: Re: [PATCH v6 bpf-next 6/7] bpf: introduce bpf_prog_pack allocator
On Tue, Jan 25, 2022 at 12:00 PM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> On Mon, Jan 24, 2022 at 11:21 PM Song Liu <song@...nel.org> wrote:
> >
> > On Mon, Jan 24, 2022 at 9:21 PM Alexei Starovoitov
> > <alexei.starovoitov@...il.com> wrote:
> > >
> > > On Mon, Jan 24, 2022 at 10:27 AM Song Liu <songliubraving@...com> wrote:
> > > > >
> > > > > Are arches expected to allocate rw buffers in different ways? If not,
> > > > > I would consider putting this into the common code as well. Then
> > > > > arch-specific code would do something like
> > > > >
> > > > > header = bpf_jit_binary_alloc_pack(size, &prg_buf, &prg_addr, ...);
> > > > > ...
> > > > > /*
> > > > > * Generate code into prg_buf, the code should assume that its first
> > > > > * byte is located at prg_addr.
> > > > > */
> > > > > ...
> > > > > bpf_jit_binary_finalize_pack(header, prg_buf);
> > > > >
> > > > > where bpf_jit_binary_finalize_pack() would copy prg_buf to header and
> > > > > free it.
> > >
> > > It feels right, but bpf_jit_binary_finalize_pack() sounds 100%
> > > arch-dependent. The only thing it will do is perform a copy via
> > > text_poke. What else?
> > >
> > > > I think this should work.
> > > >
> > > > We will need an API like bpf_arch_text_copy, which uses text_poke_copy()
> > > > for x86_64 and s390_kernel_write() for s390. We will use
> > > > bpf_arch_text_copy to:
> > > > 1) write header->size;
> > > > 2) do the final copy in bpf_jit_binary_finalize_pack().
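
To be more concrete, something like this is what I have in mind (just a
sketch; the exact prototype is not settled, and the x86 side assumes the
text_poke_copy() helper discussed earlier in the series):

/* kernel/bpf/core.c: default, to be overridden by the arch */
void * __weak bpf_arch_text_copy(void *dst, void *src, size_t len)
{
        return ERR_PTR(-EOPNOTSUPP);
}

/* arch/x86/net/bpf_jit_comp.c */
void *bpf_arch_text_copy(void *dst, void *src, size_t len)
{
        if (!text_poke_copy(dst, src, len))
                return ERR_PTR(-EINVAL);
        return dst;
}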
> > >
> > > we can combine all text_poke operations into one.
> > >
> > > Can we add an 'image' pointer into struct bpf_binary_header ?
> >
> > There is a 4-byte hole in bpf_binary_header. How about we put
> > image_offset there? Actually, we only need 2 bytes for the offset.
> >
> > > Then do:
> > > int bpf_jit_binary_alloc_pack(size, &ro_hdr, &rw_hdr);
> > >
> > > ro_hdr->image would be the address used to compute offsets by JIT.
> >
> > If we only do one text_poke(), we cannot write ro_hdr->image yet. We
> > can use ro_hdr + rw_hdr->image_offset instead.
>
> Good points.
> Maybe let's go back to Ilya's suggestion and return 4 pointers
> from bpf_jit_binary_alloc_pack?
How about we use image_offset, like:

struct bpf_binary_header {
        u32 size;
        u32 image_offset;
        u8 image[] __aligned(BPF_IMAGE_ALIGNMENT);
};

Then we can use

        image = (void *)header + header->image_offset;

In this way, we will only have two output pointers.
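
On the JIT side, the flow would look roughly like this (a sketch only;
names and error handling are not final):

int bpf_jit_binary_alloc_pack(unsigned int size,
                              struct bpf_binary_header **ro_hdr,
                              struct bpf_binary_header **rw_hdr);

/* in the arch JIT */
struct bpf_binary_header *ro_hdr, *rw_hdr;
u8 *rw_image, *ro_image;

if (bpf_jit_binary_alloc_pack(proglen, &ro_hdr, &rw_hdr))
        goto out;       /* bail out as the arch normally does */

/* emit instructions into the writable scratch buffer ... */
rw_image = (u8 *)rw_hdr + rw_hdr->image_offset;
/* ... but compute jump/call offsets against the final RO address;
 * ro_hdr is not populated until the final copy, so read
 * image_offset from rw_hdr
 */
ro_image = (u8 *)ro_hdr + rw_hdr->image_offset;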
>
> > > rw_hdr->image would point to kvmalloc-ed area for emitting insns.
> > > rw_hdr->size would already be populated.
> > >
> > > The JITs would write insns into rw_hdr->image including 'int 3' insns.
> > > At the end the JIT will do text_poke_copy(ro_hdr, rw_hdr, rw_hdr->size);
> > > That would be the only copy that will transfer everything into final
> > > location.
> > > Then kvfree(rw_hdr)
> >
> > The only problem is the asymmetry of allocating rw_hdr from bpf/core.c,
> > and freeing it from arch/bpf_jit_comp.c. But it doesn't bother me too much.
>
> Indeed. The asymmetry needs to be fixed.
> Let's then pass the 4 pointers back into bpf_jit_binary_finalize_pack(),
> which will call an arch-dependent __weak function to do the text_poke_copy
> (with the default __weak implementation returning -EOPNOTSUPP),
> and then kvfree the rw_hdr?
> I'd like to avoid callbacks. imo __weak is easier to follow.
Yeah, I also like the __weak function approach better.
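
To make sure we are on the same page, the finalize step in kernel/bpf/core.c
could then look roughly like this (a sketch; error handling simplified):

int bpf_jit_binary_finalize_pack(struct bpf_binary_header *ro_hdr,
                                 struct bpf_binary_header *rw_hdr)
{
        void *ret;

        /* a single copy moves size, image_offset and the emitted
         * instructions into the final read-only location
         */
        ret = bpf_arch_text_copy(ro_hdr, rw_hdr, rw_hdr->size);

        /* the rw buffer was allocated in core.c, so free it here
         * as well to keep alloc/free symmetric
         */
        kvfree(rw_hdr);

        return PTR_ERR_OR_ZERO(ret);
}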
Thanks,
Song