Message-ID: <96AE1C18-3833-4EB8-9145-202517331DF5@nvidia.com>
Date: Wed, 08 Oct 2025 07:27:38 -0400
From: Zi Yan <ziy@...dia.com>
To: Yafang Shao <laoar.shao@...il.com>, David Hildenbrand <david@...hat.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, baolin.wang@...ux.alibaba.com,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Liam Howlett <Liam.Howlett@...cle.com>, npache@...hat.com,
ryan.roberts@....com, dev.jain@....com, usamaarif642@...il.com,
gutierrez.asier@...wei-partners.com, Matthew Wilcox <willy@...radead.org>,
Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Amery Hung <ameryhung@...il.com>,
David Rientjes <rientjes@...gle.com>, Jonathan Corbet <corbet@....net>,
21cnbao@...il.com, Shakeel Butt <shakeel.butt@...ux.dev>,
Tejun Heo <tj@...nel.org>, lance.yang@...ux.dev,
Randy Dunlap <rdunlap@...radead.org>, bpf <bpf@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v9 mm-new 03/11] mm: thp: add support for BPF based THP
order selection

On 8 Oct 2025, at 5:04, Yafang Shao wrote:
> On Wed, Oct 8, 2025 at 4:28 PM David Hildenbrand <david@...hat.com> wrote:
>>
>> On 08.10.25 10:18, Yafang Shao wrote:
>>> On Wed, Oct 8, 2025 at 4:08 PM David Hildenbrand <david@...hat.com> wrote:
>>>>
>>>> On 03.10.25 04:18, Alexei Starovoitov wrote:
>>>>> On Mon, Sep 29, 2025 at 10:59 PM Yafang Shao <laoar.shao@...il.com> wrote:
>>>>>>
>>>>>> +unsigned long bpf_hook_thp_get_orders(struct vm_area_struct *vma,
>>>>>> + enum tva_type type,
>>>>>> + unsigned long orders)
>>>>>> +{
>>>>>> + thp_order_fn_t *bpf_hook_thp_get_order;
>>>>>> + int bpf_order;
>>>>>> +
>>>>>> + /* No BPF program is attached */
>>>>>> + if (!test_bit(TRANSPARENT_HUGEPAGE_BPF_ATTACHED,
>>>>>> + &transparent_hugepage_flags))
>>>>>> + return orders;
>>>>>> +
>>>>>> + rcu_read_lock();
>>>>>> + bpf_hook_thp_get_order = rcu_dereference(bpf_thp.thp_get_order);
>>>>>> + if (WARN_ON_ONCE(!bpf_hook_thp_get_order))
>>>>>> + goto out;
>>>>>> +
>>>>>> + bpf_order = bpf_hook_thp_get_order(vma, type, orders);
>>>>>> + orders &= BIT(bpf_order);
>>>>>> +
>>>>>> +out:
>>>>>> + rcu_read_unlock();
>>>>>> + return orders;
>>>>>> +}
>>>>>
>>>>> I thought I explained it earlier.
>>>>> Nack to a single global prog approach.
>>>>
>>>> I agree. We should have the option to either specify a policy globally,
>>>> or more refined for cgroups/processes.
>>>>
>>>> It's an interesting question if a program would ever want to ship its
>>>> own policy: I can see use cases for that.
>>>>
>>>> So I agree that we should make it more flexible right from the start.
>>>
>>> To achieve per-process granularity, the struct-ops must be embedded
>>> within the mm_struct as follows:
>>>
>>> +#ifdef CONFIG_BPF_MM
>>> +struct bpf_mm_ops {
>>> +#ifdef CONFIG_BPF_THP
>>> + struct bpf_thp_ops bpf_thp;
>>> +#endif
>>> +};
>>> +#endif
>>> +
>>> /*
>>> * Opaque type representing current mm_struct flag state. Must be accessed via
>>> * mm_flags_xxx() helper functions.
>>> @@ -1268,6 +1281,10 @@ struct mm_struct {
>>> #ifdef CONFIG_MM_ID
>>> mm_id_t mm_id;
>>> #endif /* CONFIG_MM_ID */
>>> +
>>> +#ifdef CONFIG_BPF_MM
>>> + struct bpf_mm_ops bpf_mm;
>>> +#endif
>>> } __randomize_layout;
>>>
>>> We should be aware that this will involve extensive changes in mm/.
>>
>> That's what we do on linux-mm :)
>>
>> It would be great to use Alexei's feedback/experience to come up with
>> something that is flexible for various use cases.
>
> I'm still not entirely convinced that allowing individual processes or
> cgroups to run independent progs is a valid use case. However, since
> we have a consensus that this is the right direction, I will proceed
> with this approach.
>
>>
>> So I think this is likely the right direction.
>>
>> It would be great to evaluate which scenarios we could unlock with this
>> (global vs. per-process vs. per-cgroup) approach, and how
>> extensive/involved the changes will be.
>
> 1. Global Approach
>    - Pros:
>      Simple;
>      Can manage different THP policies for different cgroups or processes.
>    - Cons:
>      Does not allow individual processes to run their own BPF programs.
>
> 2. Per-Process Approach
>    - Pros:
>      Enables each process to run its own BPF program.
>    - Cons:
>      Introduces significant complexity, as it requires handling the
>      BPF program's lifecycle (creation, destruction, inheritance) within
>      every mm_struct.
>
> 3. Per-Cgroup Approach
>    - Pros:
>      Allows individual cgroups to run their own BPF programs;
>      Less complex than the per-process model, as it can leverage the
>      existing cgroup operations structure.
>    - Cons:
>      Creates a dependency on the cgroup subsystem;
>      Might not be easy to control at the per-process level.
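
To make the lifecycle point in the per-process option above concrete, every
mm_struct creation/teardown path would have to know about the attachment,
roughly along these lines (only a sketch under that assumption;
bpf_mm_copy()/bpf_mm_exit() and the inheritance rule are made up here,
nothing below exists today):

#ifdef CONFIG_BPF_MM
/* dup_mm()/copy_mm(): decide whether the child inherits the parent's
 * policy and, if so, take a reference on the attached struct_ops */
static inline void bpf_mm_copy(struct mm_struct *oldmm, struct mm_struct *mm)
{
	/* e.g. copy oldmm->bpf_mm and bump its refcount, or leave the
	 * child's bpf_mm empty if policies should not be inherited */
}

/* __mmput(): drop the reference so the prog can eventually be unloaded */
static inline void bpf_mm_exit(struct mm_struct *mm)
{
	/* put the struct_ops reference and clear mm->bpf_mm */
}
#endif

That is where the "extensive changes in mm/" Yafang mentions above would
come from.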
Another issue is how, and by whom, hierarchical cgroups should be handled,
where one cgroup is a parent of another. Should the bpf program do that, or
the mm code? I remember hierarchical semantics were the main reason THP
control at the cgroup level was rejected. If we do per-cgroup bpf control,
wouldn't we get the same rejection from the cgroup folks?
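
Just to make that concrete: if the mm side owned the hierarchy semantics, it
would end up doing a "nearest ancestor with a prog wins" walk along the lines
of the sketch below (bpf_thp_of() is a made-up accessor, not an existing
interface); if the bpf program owned it, the program itself would have to
walk the ancestors, and the helpers/verifier would need to allow that.

static struct bpf_thp_ops *bpf_thp_lookup(struct mem_cgroup *memcg)
{
	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
		struct bpf_thp_ops *ops = bpf_thp_of(memcg); /* hypothetical */

		if (ops)
			return ops;
	}
	return NULL;	/* fall back to the global policy */
}

Either way someone has to define what a child cgroup inherits, which is the
same hierarchical-semantics question that blocked per-cgroup THP control
before.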
>
>>
>> If we need a slot in the bi-weekly mm alignment session to brainstorm,
>> we can ask Dave R. for one in the upcoming weeks.
>
> I will draft an RFC to outline the required changes in both the mm/
> and bpf/ subsystems and solicit feedback.
>
> --
> Regards
> Yafang
--
Best Regards,
Yan, Zi