Message-ID: <3d13bcf4-8d20-0f06-5c00-3880b79363af@gmail.com>
Date: Wed, 9 Feb 2022 17:13:09 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Song Liu <song@...nel.org>, bpf@...r.kernel.org,
netdev@...r.kernel.org
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
kernel-team@...com, kernel test robot <lkp@...el.com>
Subject: Re: [PATCH bpf-next 2/2] bpf: fix bpf_prog_pack build HPAGE_PMD_SIZE
On 2/8/22 14:05, Song Liu wrote:
> Fix the build with CONFIG_TRANSPARENT_HUGEPAGE=n by defining
> BPF_PROG_PACK_SIZE as PAGE_SIZE.
>
> Fixes: 57631054fae6 ("bpf: Introduce bpf_prog_pack allocator")
> Reported-by: kernel test robot <lkp@...el.com>
> Signed-off-by: Song Liu <song@...nel.org>
> ---
> kernel/bpf/core.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 306aa63fa58e..9519264ab1ee 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -814,7 +814,11 @@ int bpf_jit_add_poke_descriptor(struct bpf_prog *prog,
> * allocator. The prog_pack allocator uses HPAGE_PMD_SIZE page (2MB on x86)
> * to host BPF programs.
> */
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> #define BPF_PROG_PACK_SIZE HPAGE_PMD_SIZE
> +#else
> +#define BPF_PROG_PACK_SIZE PAGE_SIZE
> +#endif
> #define BPF_PROG_CHUNK_SHIFT 6
> #define BPF_PROG_CHUNK_SIZE (1 << BPF_PROG_CHUNK_SHIFT)
> #define BPF_PROG_CHUNK_MASK (~(BPF_PROG_CHUNK_SIZE - 1))
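(Side note, not part of the patch: a minimal userspace sketch of what the
PAGE_SIZE fallback means for a pack's chunk bookkeeping, assuming x86_64's
4 KiB pages and 2 MiB HPAGE_PMD_SIZE; the names and numbers below are
illustrative only.)

#include <stdio.h>

#define CHUNK_SIZE	64UL			/* BPF_PROG_CHUNK_SIZE */

int main(void)
{
	unsigned long thp_pack   = 2UL << 20;	/* HPAGE_PMD_SIZE, THP=y */
	unsigned long nothp_pack = 4UL << 10;	/* PAGE_SIZE fallback, THP=n */

	/* Chunks tracked per pack in each configuration. */
	printf("THP=y: %lu chunks per pack\n", thp_pack / CHUNK_SIZE);		/* 32768 */
	printf("THP=n: %lu chunks per pack\n", nothp_pack / CHUNK_SIZE);	/* 64 */
	return 0;
}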
BTW, I do not understand why module_alloc(HPAGE_PMD_SIZE) would
necessarily allocate a huge page.
I am pretty sure it does not on an x86_64 dual-socket (NUMA) host.
It seems you need to multiply this by num_online_nodes() or change the
way __vmalloc_node_range()
works, because it currently does:
if (vmap_allow_huge && !(vm_flags & VM_NO_HUGE_VMAP)) {
        unsigned long size_per_node;

        /*
         * Try huge pages. Only try for PAGE_KERNEL allocations,
         * others like modules don't yet expect huge pages in
         * their allocations due to apply_to_page_range not
         * supporting them.
         */

        size_per_node = size;
        if (node == NUMA_NO_NODE)
<*>             size_per_node /= num_online_nodes();
        if (arch_vmap_pmd_supported(prot) && size_per_node >= PMD_SIZE)
                shift = PMD_SHIFT;
        else
                shift = arch_vmap_pte_supported_shift(size_per_node);
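A quick back-of-the-envelope check of that path, assuming a dual-socket
x86_64 host (two online nodes) and the usual 2 MiB PMD_SIZE; the values
are illustrative, not taken from a specific machine:

/* Sketch: what __vmalloc_node_range() computes for a
 * module_alloc(HPAGE_PMD_SIZE) request with node == NUMA_NO_NODE
 * on a 2-node host.
 */
#include <stdio.h>

#define PMD_SIZE	(2UL << 20)	/* 2 MiB on x86_64 */

int main(void)
{
	unsigned long size = 2UL << 20;		/* HPAGE_PMD_SIZE request */
	unsigned long nr_nodes = 2;		/* num_online_nodes() */
	unsigned long size_per_node = size / nr_nodes;	/* 1 MiB */

	/* 1 MiB < PMD_SIZE, so the PMD_SHIFT branch is not taken and the
	 * allocation falls back to base pages.
	 */
	printf("size_per_node=%lu, huge mapping: %s\n", size_per_node,
	       size_per_node >= PMD_SIZE ? "yes" : "no");
	return 0;
}

So on such a host the 2 MB pack would still be mapped with 4 KB pages
unless the request is scaled up (or the per-node split is changed).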