Message-ID: <CAADnVQJtc6JJZMXuZ0M5_0A3=N-TJuYO2vMofJmK6KLhWrBAPg@mail.gmail.com>
Date:   Thu, 9 Nov 2023 10:18:44 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Yonghong Song <yonghong.song@...ux.dev>
Cc:     "Kirill A. Shutemov" <kirill@...temov.name>,
        Hou Tao <houtao1@...wei.com>, Jakub Kicinski <kuba@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        "David S. Miller" <davem@...emloft.net>,
        Network Development <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Paolo Abeni <pabeni@...hat.com>
Subject: Re: [GIT PULL v2] Networking for 6.7

On Thu, Nov 9, 2023 at 10:09 AM Yonghong Song <yonghong.song@...ux.dev> wrote:
>
>
> On 11/9/23 8:14 AM, Kirill A. Shutemov wrote:
> > On Thu, Nov 09, 2023 at 08:01:39AM -0800, Alexei Starovoitov wrote:
> >> On Thu, Nov 9, 2023 at 7:49 AM Kirill A. Shutemov <kirill@...temov.name> wrote:
> >>> On Tue, Oct 31, 2023 at 02:09:48PM -0700, Jakub Kicinski wrote:
> >>>>        bpf: Add support for non-fix-size percpu mem allocation
> >>> Recent changes in BPF increased per-CPU memory consumption a lot.
> >>>
> >>> On a virtual machine with 288 CPUs, per-CPU consumption increased from
> >>> 111 MB to 969 MB, or 8.7x.
> >>>
> >>> I've bisected it to commit 41a5db8d8161 ("bpf: Add support for
> >>> non-fix-size percpu mem allocation"), which is part of this pull request.
> >> Hmm. This is unexpected. Thank you for reporting.
> >>
> >> How did you measure this 111 MB vs 969 MB?
> >> Please share the steps to reproduce.
> > Boot a VM with 288 CPUs (qemu-system-x86_64 -smp 288) and check the
> > Percpu: field of /proc/meminfo.
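
[For reference, a minimal userspace sketch that prints just that field;
it assumes only the standard /proc/meminfo line format and is equivalent
to grep'ing Percpu out of /proc/meminfo:]

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        if (!strncmp(line, "Percpu:", 7))
            fputs(line, stdout);    /* e.g. "Percpu:   230912 kB" */
    fclose(f);
    return 0;
}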
>
> I did some experiments with my VM. It currently supports up to 255 CPUs,
> so I tried 4, 32, and 252 CPUs. For each CPU count, two experiments were
> done:
>    (1). bpf-percpu-mem-prefill
>    (2). no-bpf-percpu-mem-prefill
>
> For 4 CPUs:
>     bpf-percpu-mem-prefill:
>       Percpu:             2000 kB
>     no-bpf-percpu-mem-prefill:
>       Percpu:             1808 kB
>
>     bpf-percpu-mem-prefill percpu cost: (2000 - 1808)/4 = 48 KB per CPU
>
> For 32 CPUs:
>     bpf-percpu-mem-prefill:
>       Percpu:            25344 kB
>     no-bpf-percpu-mem-prefill:
>       Percpu:            14464 kB
>
>     bpf-percpu-mem-prefill percpu cost: (25344 - 14464)/32 = 340 KB per CPU
>
> For 252 CPUs:
>     bpf-percpu-mem-prefill:
>       Percpu:           230912 kB
>     no-bpf-percpu-mem-prefill:
>       Percpu:            57856 kB
>
>     bpf-percpu-mem-prefill percpu cost: (230912 - 57856)/252 ≈ 686 KB per CPU
>
> I am not able to reproduce the dramatic jump from 111 MB to 969 MB.
> My numbers with 252 CPUs go from ~58 MB to ~231 MB.

Even 231 MB is way too much. We shouldn't be allocating that much.
Let's switch to on-demand allocation: prefill the per-cpu caches only
when bpf progs that use per-cpu memory are loaded.
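
[A minimal sketch of that idea, hedged: the helper name, flag, and lock
below are illustrative assumptions, not the actual patch; only
bpf_mem_alloc_init() is the existing allocator entry point:]

#include <linux/mutex.h>
#include <linux/bpf_mem_alloc.h>

static struct bpf_mem_alloc bpf_global_percpu_ma;
static bool bpf_global_percpu_ma_set;
static DEFINE_MUTEX(bpf_percpu_ma_lock);

/*
 * Called at prog load time, and only for progs that actually use
 * per-cpu allocations, instead of prefilling the caches at boot.
 */
static int bpf_global_percpu_ma_init_once(void)
{
    int err = 0;

    mutex_lock(&bpf_percpu_ma_lock);
    if (!bpf_global_percpu_ma_set) {
        /* size 0 = all bucket sizes, true = per-cpu variant */
        err = bpf_mem_alloc_init(&bpf_global_percpu_ma, 0, true);
        bpf_global_percpu_ma_set = !err;
    }
    mutex_unlock(&bpf_percpu_ma_lock);
    return err;
}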
