Message-ID: <CALOAHbAeS2HzQN96UZNOCuME098=GvXBUh1P4UwUJr0U-bB5EQ@mail.gmail.com>
Date: Sat, 11 Oct 2025 10:13:48 +0800
From: Yafang Shao <laoar.shao@...il.com>
To: David Hildenbrand <david@...hat.com>, Tejun Heo <tj@...nel.org>, Michal Hocko <mhocko@...e.com>,
Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Zi Yan <ziy@...dia.com>, Alexei Starovoitov <alexei.starovoitov@...il.com>,
Johannes Weiner <hannes@...xchg.org>, Andrew Morton <akpm@...ux-foundation.org>,
baolin.wang@...ux.alibaba.com, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Liam Howlett <Liam.Howlett@...cle.com>, npache@...hat.com, ryan.roberts@....com,
dev.jain@....com, usamaarif642@...il.com, gutierrez.asier@...wei-partners.com,
Matthew Wilcox <willy@...radead.org>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
Amery Hung <ameryhung@...il.com>, David Rientjes <rientjes@...gle.com>,
Jonathan Corbet <corbet@....net>, 21cnbao@...il.com, Shakeel Butt <shakeel.butt@...ux.dev>,
lance.yang@...ux.dev, Randy Dunlap <rdunlap@...radead.org>, bpf <bpf@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>, "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v9 mm-new 03/11] mm: thp: add support for BPF based THP
order selection

On Fri, Oct 10, 2025 at 3:54 PM David Hildenbrand <david@...hat.com> wrote:
>
> On 09.10.25 11:59, Yafang Shao wrote:
> > On Thu, Oct 9, 2025 at 5:19 PM David Hildenbrand <david@...hat.com> wrote:
> >>
> >> On 08.10.25 15:11, Yafang Shao wrote:
> >>> On Wed, Oct 8, 2025 at 8:07 PM David Hildenbrand <david@...hat.com> wrote:
> >>>>
> >>>> On 08.10.25 13:27, Zi Yan wrote:
> >>>>> On 8 Oct 2025, at 5:04, Yafang Shao wrote:
> >>>>>
> >>>>>> On Wed, Oct 8, 2025 at 4:28 PM David Hildenbrand <david@...hat.com> wrote:
> >>>>>>>
> >>>>>>> On 08.10.25 10:18, Yafang Shao wrote:
> >>>>>>>> On Wed, Oct 8, 2025 at 4:08 PM David Hildenbrand <david@...hat.com> wrote:
> >>>>>>>>>
> >>>>>>>>> On 03.10.25 04:18, Alexei Starovoitov wrote:
> >>>>>>>>>> On Mon, Sep 29, 2025 at 10:59 PM Yafang Shao <laoar.shao@...il.com> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> +unsigned long bpf_hook_thp_get_orders(struct vm_area_struct *vma,
> >>>>>>>>>>> + enum tva_type type,
> >>>>>>>>>>> + unsigned long orders)
> >>>>>>>>>>> +{
> >>>>>>>>>>> + thp_order_fn_t *bpf_hook_thp_get_order;
> >>>>>>>>>>> + int bpf_order;
> >>>>>>>>>>> +
> >>>>>>>>>>> + /* No BPF program is attached */
> >>>>>>>>>>> + if (!test_bit(TRANSPARENT_HUGEPAGE_BPF_ATTACHED,
> >>>>>>>>>>> + &transparent_hugepage_flags))
> >>>>>>>>>>> + return orders;
> >>>>>>>>>>> +
> >>>>>>>>>>> + rcu_read_lock();
> >>>>>>>>>>> + bpf_hook_thp_get_order = rcu_dereference(bpf_thp.thp_get_order);
> >>>>>>>>>>> + if (WARN_ON_ONCE(!bpf_hook_thp_get_order))
> >>>>>>>>>>> + goto out;
> >>>>>>>>>>> +
> >>>>>>>>>>> + bpf_order = bpf_hook_thp_get_order(vma, type, orders);
> >>>>>>>>>>> + orders &= BIT(bpf_order);
> >>>>>>>>>>> +
> >>>>>>>>>>> +out:
> >>>>>>>>>>> + rcu_read_unlock();
> >>>>>>>>>>> + return orders;
> >>>>>>>>>>> +}
> >>>>>>>>>>
> >>>>>>>>>> I thought I explained it earlier.
> >>>>>>>>>> Nack to a single global prog approach.
> >>>>>>>>>
> >>>>>>>>> I agree. We should have the option to either specify a policy globally,
> >>>>>>>>> or more refined for cgroups/processes.
> >>>>>>>>>
> >>>>>>>>> It's an interesting question if a program would ever want to ship its
> >>>>>>>>> own policy: I can see use cases for that.
> >>>>>>>>>
> >>>>>>>>> So I agree that we should make it more flexible right from the start.
> >>>>>>>>
> >>>>>>>> To achieve per-process granularity, the struct-ops must be embedded
> >>>>>>>> within the mm_struct as follows:
> >>>>>>>>
> >>>>>>>> +#ifdef CONFIG_BPF_MM
> >>>>>>>> +struct bpf_mm_ops {
> >>>>>>>> +#ifdef CONFIG_BPF_THP
> >>>>>>>> + struct bpf_thp_ops bpf_thp;
> >>>>>>>> +#endif
> >>>>>>>> +};
> >>>>>>>> +#endif
> >>>>>>>> +
> >>>>>>>> /*
> >>>>>>>> * Opaque type representing current mm_struct flag state. Must be accessed via
> >>>>>>>> * mm_flags_xxx() helper functions.
> >>>>>>>> @@ -1268,6 +1281,10 @@ struct mm_struct {
> >>>>>>>> #ifdef CONFIG_MM_ID
> >>>>>>>> mm_id_t mm_id;
> >>>>>>>> #endif /* CONFIG_MM_ID */
> >>>>>>>> +
> >>>>>>>> +#ifdef CONFIG_BPF_MM
> >>>>>>>> + struct bpf_mm_ops bpf_mm;
> >>>>>>>> +#endif
> >>>>>>>> } __randomize_layout;
> >>>>>>>>
> >>>>>>>> We should be aware that this will involve extensive changes in mm/.
> >>>>>>>
> >>>>>>> That's what we do on linux-mm :)
> >>>>>>>
> >>>>>>> It would be great to use Alexei's feedback/experience to come up with
> >>>>>>> something that is flexible for various use cases.
> >>>>>>
> >>>>>> I'm still not entirely convinced that allowing individual processes or
> >>>>>> cgroups to run independent progs is a valid use case. However, since
> >>>>>> we have a consensus that this is the right direction, I will proceed
> >>>>>> with this approach.
> >>>>>>
> >>>>>>>
> >>>>>>> So I think this is likely the right direction.
> >>>>>>>
> >>>>>>> It would be great to evaluate which scenarios we could unlock with this
> >>>>>>> (global vs. per-process vs. per-cgroup) approach, and how
> >>>>>>> extensive/involved the changes will be.
> >>>>>>
> >>>>>> 1. Global Approach
> >>>>>> - Pros:
> >>>>>> Simple;
> >>>>>> Can manage different THP policies for different cgroups or processes.
> >>>>>> - Cons:
> >>>>>> Does not allow individual processes to run their own BPF programs.
> >>>>>>
> >>>>>> 2. Per-Process Approach
> >>>>>> - Pros:
> >>>>>> Enables each process to run its own BPF program.
> >>>>>> - Cons:
> >>>>>> Introduces significant complexity, as it requires handling the
> >>>>>> BPF program's lifecycle (creation, destruction, inheritance) within
> >>>>>> every mm_struct.
> >>>>>>
> >>>>>> 3. Per-Cgroup Approach
> >>>>>> - Pros:
> >>>>>> Allows individual cgroups to run their own BPF programs.
> >>>>>> Less complex than the per-process model, as it can leverage the
> >>>>>> existing cgroup operations structure.
> >>>>>> - Cons:
> >>>>>> Creates a dependency on the cgroup subsystem.
> >>>>>> Might not be easy to control at the per-process level.
> >>>>>
> >>>>> Another issue is how, and by whom, hierarchical cgroups should be handled,
> >>>>> where one cgroup is a parent of another. Should the bpf program do that, or
> >>>>> the mm code? I remember hierarchical cgroups were the main reason THP control
> >>>>> at the cgroup level was rejected. If we do per-cgroup bpf control, wouldn't we
> >>>>> get the same rejection from cgroup folks?
> >>>>
> >>>> Valid point.
> >>>>
> >>>> I do wonder if that problem was already encountered elsewhere with bpf
> >>>> and if there is already a solution.
> >>>
> >>> Our standard is to run only one instance of a BPF program type
> >>> system-wide to avoid conflicts. For example, we can't have both
> >>> systemd and a container runtime running bpf-thp simultaneously.
> >>
> >> Right, it's a good question how to combine policies, or "who wins".
> >
> > From my perspective, the ideal approach is to have one BPF-THP
> > instance per mm_struct. This allows for separate managers in different
> > domains, such as systemd managing BPF-THP for system processes and
> > containerd for container processes, while ensuring that any single
> > process is managed by only one BPF-THP.
>
> I came to the same conclusion. At least it's a valid start.
>
> Maybe we would later want a global fallback BPF-THP prog if none was
> enabled for a specific MM.

Good idea. We can fall back to the global model when attaching to pid 1.
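
Roughly, I'm thinking of something like the sketch below. This is
illustrative only: it assumes the per-MM bpf_mm field proposed earlier
in this thread, and bpf_thp_global is a hypothetical struct-ops
instance acting as the global fallback (e.g. the one attached via pid 1):

unsigned long bpf_hook_thp_get_orders(struct vm_area_struct *vma,
                                      enum tva_type type,
                                      unsigned long orders)
{
        thp_order_fn_t *fn;
        int bpf_order;

        rcu_read_lock();
        /* Prefer the prog attached to this mm, if any. */
        fn = rcu_dereference(vma->vm_mm->bpf_mm.bpf_thp.thp_get_order);
        if (!fn)
                /* Otherwise fall back to the hypothetical global prog. */
                fn = rcu_dereference(bpf_thp_global.thp_get_order);
        if (fn) {
                bpf_order = fn(vma, type, orders);
                orders &= BIT(bpf_order);
        }
        rcu_read_unlock();

        return orders;
}
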
>
> But I would expect to start with a per MM way of doing it, it gives you
> way more flexibility in the long run.

Some THPs, such as shmem and file-backed THP, are shareable across
multiple processes and cgroups. If we allow different BPF-THP policies
to be applied to these shared resources, it could lead to policy
inconsistencies. This would recreate a long-standing issue in memcg,
which still lacks a robust solution [0]. It suggests that applying
SCOPED policies to SHAREABLE memory may be fundamentally flawed ;-)

[0]. https://lore.kernel.org/linux-mm/YwNold0GMOappUxc@slm.duckdns.org/

(Added the maintainers from the old discussion to this thread.)
--
Regards
Yafang