Date:   Wed, 13 Jul 2022 01:00:37 +0000
From:   Song Liu <songliubraving@...com>
To:     Luis Chamberlain <mcgrof@...nel.org>
CC:     Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        "Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
        "David S. Miller" <davem@...emloft.net>,
        Anil S Keshavamurthy <anil.s.keshavamurthy@...el.com>,
        Kees Cook <keescook@...omium.org>, Song Liu <song@...nel.org>,
        bpf <bpf@...r.kernel.org>, Christoph Hellwig <hch@...radead.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        lkml <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <Kernel-team@...com>,
        "x86@...nel.org" <x86@...nel.org>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        "rick.p.edgecombe@...el.com" <rick.p.edgecombe@...el.com>,
        "linux-modules@...r.kernel.org" <linux-modules@...r.kernel.org>
Subject: Re: [PATCH v6 bpf-next 0/5] bpf_prog_pack followup



> On Jul 12, 2022, at 4:42 PM, Luis Chamberlain <mcgrof@...nel.org> wrote:
> 
> On Tue, Jul 12, 2022 at 11:12:22PM +0000, Song Liu wrote:
>> 
>> 
>>> On Jul 12, 2022, at 12:04 PM, Luis Chamberlain <mcgrof@...nel.org> wrote:
>>> 
>>> On Tue, Jul 12, 2022 at 05:49:32AM +0000, Song Liu wrote:
>>>>> On Jul 11, 2022, at 9:18 PM, Luis Chamberlain <mcgrof@...nel.org> wrote:
>>>> 
>>>>> I believe you are mentioning requiring text_poke() because the way
>>>>> eBPF code uses module_alloc() is different. Correct me if I'm
>>>>> wrong, but from what I gather you use text_poke_copy() because the data
>>>>> is already RO+X, contrary to module_alloc() use cases. You do this since your
>>>>> bpf_prog_pack_alloc() calls set_memory_ro() and set_memory_x() after
>>>>> module_alloc() and before you can use this memory. This is a different type
>>>>> of allocator. And, again, please correct me if I'm wrong, but now you want to
>>>>> share *one* 2 MiB huge page for multiple BPF programs to reduce the
>>>>> impact of TLB misses.
>>>> 
>>>> Yes, sharing 1x 2MiB huge page is the main reason to require text_poke().
>>>> OTOH, 2MiB huge pages without sharing are not really useful. Both kprobe
>>>> and ftrace only use a fraction of a 4kB page. Most BPF programs and
>>>> modules cannot fill 2MiB either. Therefore, vmalloc_rw_exec() doesn't add
>>>> much value on top of the current module_alloc().
>>> 
>>> Thanks for the clarification.
>>> 
>>>>> A vmalloc_ro_exec() by definition would imply a text_poke().
>>>>> 
>>>>> Can kprobes, ftrace and modules use it too? It would be nice
>>>>> not to have to deal with the loose semantics of requiring the user
>>>>> to call set_vm_flush_reset_perms() on ro+x memory later, but
>>>>> I think this can be addressed separately on a case-by-case basis.
>>>> 
>>>> I am pretty confident that kprobe and ftrace can share huge pages with 
>>>> BPF programs.
>>> 
>>> Wonderful, then we know where to go in terms of a new API, as it
>>> can certainly be shared in the future and there are gains.
>>> 
>>>> I haven't looked into all the details with modules, but 
>>>> given CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC, I think it is also 
>>>> possible.
>>> 
>>> Sure.
>>> 
>>>> Once this is done, a regular system (without huge BPF programs or huge
>>>> modules) will just use 1x 2MiB page for text from modules, ftrace, kprobes,
>>>> and BPF programs.
>>> 
>>> That would be nice, if possible; however, modules will likely require their
>>> own thing. On my system I see about 57 MiB used on coresize alone.
>>> 
>>> lsmod | grep -v Module | cut -f1 -d ' ' | \
>>> 	xargs sudo modinfo | grep filename | \
>>> 	grep -o '/.*' | xargs stat -c "%s - %n" | \
>>> 	awk 'BEGIN {sum=0} {sum+=$1} END {print sum}'
>>> 60001272
>>> 
>>> And so perhaps we need such a pool size to be configurable.
>>> 
>>>>> But a vmalloc_ro_exec() with a corresponding free can remove the
>>>>> requirement to do set_vm_flush_reset_perms().
>>>> 
>>>> Removing the requirement to call set_vm_flush_reset_perms() is the other
>>>> reason to go directly to vmalloc_ro_exec().
>>> 
>>> Yes fantastic.
>>> 
>>>> My current version looks like this:
>>>> 
>>>> void *vmalloc_exec(unsigned long size);
>>>> void vfree_exec(void *ptr, unsigned int size);
>>>> 
>>>> The "ro" is dropped from the name, as there is no rw version of the API.
>>> 
>>> Alright.
>>> 
>>> I am not sure if 2 MiB will suffice given what I mentioned above, or
>>> what to do to ensure this grows at a reasonable pace. Then, for usage
>>> across all architectures, since not all will support text_poke(), we
>>> will want to consider a way to make it easy for users to fall back to
>>> non-huge pages, but that would be up to those users, so we can wait for
>>> that.
>> 
>> We are not limited to 2MiB total. The logic is as follows: 
>> 
>> 1. Anything bigger than 2MiB gets its own allocation.
> 
> And does that allocation get split up into a few huge 2 MiB pages?
> When freed, does that go into the pool of available 2 MiB pages
> to use?

This would have some 2MiB pages and some 4kB pages. For example, if we
need 4MiB + 5kB, it will allocate 2x 2MiB pages and 2x 4kB pages (rounded
up to 8kB).

On free, these will not go back to the pool. Instead, they will be vfree()'d.
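
To make that split concrete, here is a minimal, illustrative sketch of the
size accounting only (plain userspace C, not the actual patch; the helper
name and constants are assumptions):

/* Split a request into whole 2MiB huge pages plus a remainder that is
 * rounded up to whole 4kB pages. Illustrative only. */
#include <stdio.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)  /* 2MiB */
#define BASE_PAGE  (4UL * 1024)         /* 4kB */

static void split_size(unsigned long size,
                       unsigned long *nr_huge, unsigned long *nr_base)
{
        *nr_huge = size / HPAGE_SIZE;
        *nr_base = (size % HPAGE_SIZE + BASE_PAGE - 1) / BASE_PAGE;
}

int main(void)
{
        unsigned long huge, base;

        /* the 4MiB + 5kB example from above */
        split_size(2 * HPAGE_SIZE + 5 * 1024, &huge, &base);
        printf("%lu huge pages, %lu 4kB pages\n", huge, base); /* 2, 2 */
        return 0;
}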

> 
>> 2. We maintain a list of 2MiB pages, and bitmaps showing which parts of 
>>   these pages are in use. 
> 
> How many 2 MiB huge pages are allocated initially? Do we have a cap?

The current logic just allocates 1 huge page at a time. No cap. 
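
As a rough sketch of that bookkeeping (one global list of 2MiB packs, each
with a bitmap of in-use chunks); the struct and names below are illustrative
assumptions, loosely modeled on the existing bpf_prog_pack, not the proposed
patch:

/* Illustrative, kernel-style sketch of the 2MiB pack bookkeeping. */
#include <linux/list.h>
#include <linux/bitmap.h>

#define PACK_SIZE       (2UL * 1024 * 1024)       /* one 2MiB huge page */
#define PACK_CHUNK_SIZE 64                        /* allocation granularity */
#define PACK_NR_CHUNKS  (PACK_SIZE / PACK_CHUNK_SIZE)

struct exec_pack {
        struct list_head list;                    /* on the global pack list */
        void *ptr;                                /* base of the 2MiB region */
        DECLARE_BITMAP(bitmap, PACK_NR_CHUNKS);   /* chunks in use */
};

static LIST_HEAD(pack_list);  /* grows one huge page at a time, no cap */

/* A vmalloc_exec() request smaller than 2MiB would scan pack_list for a
 * free run (e.g. via bitmap_find_next_zero_area()); if nothing fits, a new
 * pack is allocated and appended. vfree_exec() clears the corresponding
 * bits, while allocations larger than 2MiB are simply vfree()'d. */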

> 
>> 3. For objects smaller than 2MiB, we will try to fit them in one of these
>>    pages. 
>>    3. a) If there isn't a page with enough contiguous free space, we
>>          will allocate a new 2MiB page. 
>> 
>> (For a system with n NUMA nodes, multiply the 2MiB above by n.) 
>> 
>> So, if we have 100 kernel modules using 1MiB each, they will share 50x
>> 2MiB pages. 
> 
> lsmod | grep -v Module | cut -f1 -d ' ' | \
> 	xargs sudo modinfo | grep filename |\
> 	grep -o '/.*' | xargs stat -c "%s - %n" | \
> 	awk 'BEGIN {sum=0} {sum+=$1} END {print sum/NR/1024}' 
> 271.273
> 
> On average my system's modules are 271 KiB.
> 
> Then I only have 6 out of 216 modules which use more than 2 MiB of
> memory for coresize. So roughly 97% of my modules would be covered
> by this. Not bad.

Are these all the modules we have in tree? ;)

Thanks,
Song

> 
> The monsters:
> 
> lsmod | grep -v Module | cut -f1 -d ' ' | xargs sudo modinfo \
> 	| grep filename |grep -o '/.*' | xargs stat -c "%s %n" | \
> 	sort -n -k 1 -r | head -10 | \
> 	awk '{print $1/1024/1024" "$2}'
> 6.50775 /lib/modules/5.17.0-1-amd64/kernel/drivers/gpu/drm/i915/i915.ko
> 3.6847 /lib/modules/5.17.0-1-amd64/kernel/fs/xfs/xfs.ko
> 3.34252 /lib/modules/5.17.0-1-amd64/kernel/fs/btrfs/btrfs.ko
> 2.37677 /lib/modules/5.17.0-1-amd64/kernel/net/mac80211/mac80211.ko
> 2.2972 /lib/modules/5.17.0-1-amd64/kernel/net/wireless/cfg80211.ko
> 2.05754 /lib/modules/5.17.0-1-amd64/kernel/arch/x86/kvm/kvm.ko
> 1.96126 /lib/modules/5.17.0-1-amd64/kernel/net/bluetooth/bluetooth.ko
> 1.83429 /lib/modules/5.17.0-1-amd64/kernel/fs/ext4/ext4.ko
> 1.7724 /lib/modules/5.17.0-1-amd64/kernel/fs/nfsd/nfsd.ko
> 1.60539 /lib/modules/5.17.0-1-amd64/kernel/net/sunrpc/sunrpc.ko
> 
> On a big iron server I have 149 modules and the situation is better
> there:
> 
> 3.69791 /lib/modules/5.16.0-6-amd64/kernel/fs/xfs/xfs.ko
> 3.35575 /lib/modules/5.16.0-6-amd64/kernel/fs/btrfs/btrfs.ko
> 3.21056 /lib/modules/5.16.0-6-amd64/kernel/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko
> 2.02773 /lib/modules/5.16.0-6-amd64/kernel/arch/x86/kvm/kvm.ko
> 1.82574 /lib/modules/5.16.0-6-amd64/kernel/fs/ext4/ext4.ko
> 1.36571 /lib/modules/5.16.0-6-amd64/kernel/net/sunrpc/sunrpc.ko
> 1.32686 /lib/modules/5.16.0-6-amd64/kernel/fs/nfsd/nfsd.ko
> 1.12648 /lib/modules/5.16.0-6-amd64/kernel/drivers/gpu/drm/drm.ko
> 0.898623 /lib/modules/5.16.0-6-amd64/kernel/drivers/infiniband/hw/mlx5/mlx5_ib.ko
> 0.86922 /lib/modules/5.16.0-6-amd64/kernel/drivers/infiniband/core/ib_core.ko
> 
> So this may just work nicely.
> 
>  Luis
