Message-ID: <ae9da909-0d21-c2aa-fc69-764ebba29672@suse.cz>
Date: Wed, 19 Jan 2022 17:49:54 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Liam Howlett <liam.howlett@...cle.com>,
"maple-tree@...ts.infradead.org" <maple-tree@...ts.infradead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Song Liu <songliubraving@...com>,
Davidlohr Bueso <dave@...olabs.net>,
"Paul E . McKenney" <paulmck@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Laurent Dufour <ldufour@...ux.ibm.com>,
David Rientjes <rientjes@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Rik van Riel <riel@...riel.com>,
Peter Zijlstra <peterz@...radead.org>,
Michel Lespinasse <walken.cr@...il.com>,
Jerome Glisse <jglisse@...hat.com>,
Minchan Kim <minchan@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
Rom Lemarchand <romlem@...gle.com>
Subject: Re: [PATCH v4 47/66] sched: Use maple tree iterator to walk VMAs
On 12/1/21 15:30, Liam Howlett wrote:
> From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
>
> The linked list is slower than walking the VMAs using the maple tree.
> We can't use the VMA iterator here because it doesn't support
> moving to an earlier position.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> kernel/sched/fair.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6e476f6d9435..39bb4a6c8507 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2672,6 +2672,7 @@ static void task_numa_work(struct callback_head *work)
>  	struct task_struct *p = current;
>  	struct mm_struct *mm = p->mm;
>  	u64 runtime = p->se.sum_exec_runtime;
> +	MA_STATE(mas, &mm->mm_mt, 0, 0);
>  	struct vm_area_struct *vma;
>  	unsigned long start, end;
>  	unsigned long nr_pte_updates = 0;
> @@ -2728,13 +2729,16 @@ static void task_numa_work(struct callback_head *work)
>  
>  	if (!mmap_read_trylock(mm))
>  		return;
> -	vma = find_vma(mm, start);
> +	mas_set(&mas, start);
> +	vma = mas_find(&mas, ULONG_MAX);
>  	if (!vma) {
>  		reset_ptenuma_scan(p);
>  		start = 0;
> -		vma = mm->mmap;
> +		mas_set(&mas, start);
> +		vma = mas_find(&mas, ULONG_MAX);
>  	}
> -	for (; vma; vma = vma->vm_next) {
> +
> +	for (; vma; vma = mas_find(&mas, ULONG_MAX)) {
>  		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
>  			is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
>  			continue;