Message-ID: <92719b15-daf8-484f-b0db-72e23ae696ad@os.amperecomputing.com>
Date: Thu, 11 Sep 2025 15:03:31 -0700
From: Yang Shi <yang@...amperecomputing.com>
To: Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Ard Biesheuvel <ardb@...nel.org>, scott@...amperecomputing.com, cl@...two.org
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block
mapping when rodata=full
>>> IIUC, the intent of the code is "reset direct map permissions
>>> *without* leaving a RW+X window". The TLB flush call actually flushes
>>> both the VA and the direct map together. So if this is the intent,
>>> approach #2 may have the VA with X permission while the direct map is
>>> still RW at the same time. That seems to break the intent.
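>>> Roughly, the current flow in vm_reset_perms() looks like this
>>> (paraphrasing mm/vmalloc.c from memory, so the exact helper names and
>>> arguments are approximate):
>>>
>>> 	/* make the direct map entries invalid first, no TLB flush yet */
>>> 	set_area_direct_map(area, set_direct_map_invalid_noflush);
>>> 	/*
>>> 	 * one flush that covers both the vmalloc aliases and the direct
>>> 	 * map range, so no CPU can keep a stale RW or X entry around
>>> 	 */
>>> 	_vm_unmap_aliases(start, end, flush_dmap);
>>> 	/* finally restore the default direct map permissions */
>>> 	set_area_direct_map(area, set_direct_map_default_noflush);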
>> Ahh! Thanks, it's starting to make more sense now.
>>
>> Though on first sight it seems a bit mad to me to form a tlb flush
>> range that covers all the direct map pages and all the lazy vunmap
>> regions. Is that intended to be a perf optimization or something else?
>> It's not clear from the history.
>
> I think it is mainly performance driven. I can't see why two TLB
> flushes (for the vmap aliases and the direct map respectively) wouldn't
> work, unless I'm missing something.
>
>>
>>
>> Could this be split into 2 operations?
>>
>> 1. unmap the aliases (+ tlbi the aliases).
>> 2. set the direct memory back to default (+ tlbi the direct map region).
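>>
>> Something along these lines (just a sketch, with approximate names for
>> the range variables; the alias unmap step in particular is hand-waved):
>>
>> 	/* 1. unmap the vmalloc aliases, then tlbi only the alias range */
>> 	vunmap_range_noflush(alias_start, alias_end);
>> 	flush_tlb_kernel_range(alias_start, alias_end);
>>
>> 	/*
>> 	 * 2. restore the default direct map permissions, then tlbi only
>> 	 * the direct map range covering the backing pages
>> 	 */
>> 	for (i = 0; i < area->nr_pages; i++)
>> 		set_direct_map_default_noflush(area->pages[i]);
>> 	flush_tlb_kernel_range(dmap_start, dmap_end);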
>>
>> The only 2 potential problems I can think of are:
>>
>> - Performance: 2 tlbis instead of 1, but conversely we probably avoid
>> flushing a load of TLB entries that we didn't really need to.
>
> The two tlbis should work, but performance is definitely a concern. It
> may be hard to quantify how much impact the over-flushing actually has,
> but multiple TLBIs are definitely not preferred, particularly on some
> large-scale machines. We have experienced scalability issues with TLBI
> due to the large core count on Ampere systems.
>>
>> - Given there is now no lock around the tlbis (currently it's under
>> vmap_purge_lock), is there a race where a new alias can appear between
>> steps 1 and 2? I don't think so, because the memory is still allocated
>> to the current mapping, so how would it get re-mapped?
>
> Yes, I agree. I don't think the race is real. The physical pages will
> not be freed until vm_reset_perms() is done. The VA may be
> reallocated, but it will be mapped to different physical pages.
>
>>
>>
>> Could this solve it?
>
> I think it could. But the potential performance impact (two TLBIs) is
> a real concern.
>
> Anyway, the vmalloc user should call set_memory_*() for any RO/ROX
> mapping; set_memory_*() will have split the page table before
> vm_reset_perms() is reached, so the split there should not fail. If
> set_memory_*() is not called, it is a bug and should be fixed, as with
> the arm64 kprobes case.
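>
> The expected pattern is roughly what the kprobes fix further down does
> (just a sketch, error handling trimmed):
>
> 	buf = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
> 	if (!buf)
> 		return NULL;
> 	/*
> 	 * set_memory_*() splits the linear map alias here as needed, so
> 	 * the later vm_reset_perms() at free time has nothing left to
> 	 * split and cannot fail
> 	 */
> 	set_memory_rox((unsigned long)buf, 1);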
>
> Making it more robust is definitely welcome, although the warning from
> the split may mitigate this somewhat. But I don't think this should be
> a blocker for this series, IMHO.
Hi Ryan & Catalin,

Any more concerns about this? Shall we move forward with v8? We can
include the kprobes fix in v8, or I can send it separately; either is
fine with me. Hopefully we can make v6.18.

Thanks,
Yang
>
> Thanks,
> Yang
>
>>
>>
>>
>>> Thanks,
>>> Yang
>>>
>>>> The benefit of approach 1 is that it is guaranteed to be impossible
>>>> for different CPUs to have different translations for the same VA in
>>>> their respective TLBs. With approach 2, it's possible that between
>>>> steps 1 and 2 one CPU has a RO entry and another CPU has a RW entry.
>>>> But that will get fixed once the TLB is flushed - it's not really an
>>>> issue.
>>>>
>>>> (There is probably also an obscure way to end up with 2 TLB entries
>>>> (one with RO and one with RW) for the same CPU, but the arm64
>>>> architecture permits that as long as it's only a permission mismatch).
>>>>
>>>> Anyway, approach 2 is used when changing memory permissions on user
>>>> mappings, so I don't see why we can't take the same approach here.
>>>> That would solve this whole class of issue for us.
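>>>>
>>>> I.e. roughly what the arm64 pageattr code already does for kernel VAs
>>>> (quoting from memory, so take the details with a grain of salt):
>>>>
>>>> 	/* update the live entries in place with the new permissions... */
>>>> 	apply_to_page_range(&init_mm, start, size, change_page_range, &data);
>>>> 	/* ...then invalidate any stale translations */
>>>> 	flush_tlb_kernel_range(start, start + size);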
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>>>>>> Thanks,
>>>>>> Ryan
>>>>>>
>>>>>>
>>>>>>> Tested the below patch with a bpftrace kfunc (which allocates a bpf
>>>>>>> trampoline) and with kprobes. It seems to work well.
>>>>>>>
>>>>>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
>>>>>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>>>>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>>>>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>>>>>> @@ -10,6 +10,7 @@
>>>>>>>
>>>>>>>  #define pr_fmt(fmt) "kprobes: " fmt
>>>>>>>
>>>>>>> +#include <linux/execmem.h>
>>>>>>>  #include <linux/extable.h>
>>>>>>>  #include <linux/kasan.h>
>>>>>>>  #include <linux/kernel.h>
>>>>>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>>>>>>  static void __kprobes
>>>>>>>  post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
>>>>>>>
>>>>>>> +void *alloc_insn_page(void)
>>>>>>> +{
>>>>>>> +	void *page;
>>>>>>> +
>>>>>>> +	page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>>>>> +	if (!page)
>>>>>>> +		return NULL;
>>>>>>> +	set_memory_rox((unsigned long)page, 1);
>>>>>>> +	return page;
>>>>>>> +}
>>>>>>> +
>>>>>>>  static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>>>>>>>  {
>>>>>>>  	kprobe_opcode_t *addr = p->ainsn.xol_insn;
>>>>>>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>>>>>>> index 52ffe115a8c4..3e301bc2cd66 100644
>>>>>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>>>>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>>>>>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int size)
>>>>>>>  	bpf_prog_pack_free(image, size);
>>>>>>>  }
>>>>>>>
>>>>>>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>>>>>>> -{
>>>>>>> -	return 0;
>>>>>>> -}
>>>>>>> -
>>>>>>>  int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
>>>>>>>  				void *ro_image_end, const struct btf_func_model *m,
>>>>>>>  				u32 flags, struct bpf_tramp_links *tlinks,
>>>>>>>
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Yang
>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Ryan
>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Yang
>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Yang
>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Ryan
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>