Message-ID: <117f730b-6107-4093-afd2-51c15e503cda@suse.cz>
Date: Fri, 20 Jun 2025 16:32:48 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...hat.com>, Zi Yan <ziy@...dia.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>, Nico Pache
<npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>, Barry Song <baohua@...nel.org>,
Jann Horn <jannh@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Lance Yang <ioworker0@...il.com>,
SeongJae Park <sj@...nel.org>, Suren Baghdasaryan <surenb@...gle.com>
Subject: Re: [PATCH 4/5] mm/madvise: thread all madvise state through
madv_behavior
On 6/19/25 22:26, Lorenzo Stoakes wrote:
> Doing so means we can get rid of all the weird struct vm_area_struct **prev
> stuff, everything becomes consistent and in future if we want to make
> change to behaviour there's a single place where all relevant state is
> stored.
>
> This also allows us to update try_vma_read_lock() to be a little more
> succinct and set up state for us, as well as cleaning up
> madvise_update_vma().
>
> We also update the debug assertion prior to madvise_update_vma() to assert
> that this is a write operation as correctly pointed out by Barry in the
> relevant thread.
>
> We can't reasonably update the madvise functions that live outside of
> mm/madvise.c so we leave those as-is.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
The prev manipulation is indeed confusing, looking forward to the next patch...
Nits:
> ---
> mm/madvise.c | 283 ++++++++++++++++++++++++++-------------------------
> 1 file changed, 146 insertions(+), 137 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 6faa38b92111..86fe04aa7c88 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -74,6 +74,8 @@ struct madvise_behavior {
> * traversing multiple VMAs, this is updated for each.
> */
> struct madvise_behavior_range range;
> + /* The VMA and VMA preceding it (if applicable) currently targeted. */
> + struct vm_area_struct *prev, *vma;
Would also do separate lines here.
> -static long madvise_dontneed_free(struct vm_area_struct *vma,
> - struct vm_area_struct **prev,
> - unsigned long start, unsigned long end,
> - struct madvise_behavior *madv_behavior)
> +static long madvise_dontneed_free(struct madvise_behavior *madv_behavior)
> {
> + struct mm_struct *mm = madv_behavior->mm;
> + struct madvise_behavior_range *range = &madv_behavior->range;
> int behavior = madv_behavior->behavior;
> - struct mm_struct *mm = vma->vm_mm;
>
> - *prev = vma;
> - if (!madvise_dontneed_free_valid_vma(vma, start, &end, behavior))
> + madv_behavior->prev = madv_behavior->vma;
> + if (!madvise_dontneed_free_valid_vma(madv_behavior))
> return -EINVAL;
>
> - if (start == end)
> + if (range->start == range->end)
> return 0;
>
> - if (!userfaultfd_remove(vma, start, end)) {
> - *prev = NULL; /* mmap_lock has been dropped, prev is stale */
> + if (!userfaultfd_remove(madv_behavior->vma, range->start, range->end)) {
> + struct vm_area_struct *vma;
> +
> + madv_behavior->prev = NULL; /* mmap_lock has been dropped, prev is stale */
>
> mmap_read_lock(mm);
> - vma = vma_lookup(mm, start);
> + madv_behavior->vma = vma = vma_lookup(mm, range->start);
This replaces vma in madv_behavior...
> @@ -1617,23 +1625,19 @@ int madvise_walk_vmas(struct madvise_behavior *madv_behavior)
> struct mm_struct *mm = madv_behavior->mm;
> struct madvise_behavior_range *range = &madv_behavior->range;
> unsigned long end = range->end;
> - struct vm_area_struct *vma;
> - struct vm_area_struct *prev;
> int unmapped_error = 0;
> int error;
> + struct vm_area_struct *vma;
>
> /*
> * If VMA read lock is supported, apply madvise to a single VMA
> * tentatively, avoiding walking VMAs.
> */
> - if (madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK) {
> - vma = try_vma_read_lock(madv_behavior);
> - if (vma) {
> - prev = vma;
> - error = madvise_vma_behavior(vma, &prev, madv_behavior);
> - vma_end_read(vma);
> - return error;
> - }
> + if (madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK &&
> + try_vma_read_lock(madv_behavior)) {
> + error = madvise_vma_behavior(madv_behavior);
> + vma_end_read(madv_behavior->vma);
> + return error;
And here we could potentially do vma_end_read() on the replaced vma, and it's
exactly the cases using madvise_dontneed_free() that use the
MADVISE_VMA_READ_LOCK mode. But it's not an issue, as try_vma_read_lock()
will fail with uffd, and that vma replacement scenario is tied to
userfaultfd_remove(). It's just quite tricky, hm...
> }
>
> /*
> @@ -1641,11 +1645,13 @@ int madvise_walk_vmas(struct madvise_behavior *madv_behavior)
> * ranges, just ignore them, but return -ENOMEM at the end.
> * - different from the way of handling in mlock etc.
> */
> - vma = find_vma_prev(mm, range->start, &prev);
> + vma = find_vma_prev(mm, range->start, &madv_behavior->prev);
> if (vma && range->start > vma->vm_start)
> - prev = vma;
> + madv_behavior->prev = vma;
>
> for (;;) {
> + struct vm_area_struct *prev;
> +
> /* Still start < end. */
> if (!vma)
> return -ENOMEM;
> @@ -1662,13 +1668,16 @@ int madvise_walk_vmas(struct madvise_behavior *madv_behavior)
> range->end = min(vma->vm_end, end);
>
> /* Here vma->vm_start <= range->start < range->end <= (end|vma->vm_end). */
> - error = madvise_vma_behavior(vma, &prev, madv_behavior);
> + madv_behavior->vma = vma;
> + error = madvise_vma_behavior(madv_behavior);
> if (error)
> return error;
> + prev = madv_behavior->prev;
> +
> range->start = range->end;
> if (prev && range->start < prev->vm_end)
> range->start = prev->vm_end;
> - if (range->start >= range->end)
> + if (range->start >= end)
> break;
> if (prev)
> vma = find_vma(mm, prev->vm_end);