Message-ID: <01c63720-a5f2-419a-96ff-54e32d71a14d@lucifer.local>
Date: Wed, 29 Jan 2025 16:19:15 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
        "Liam R . Howlett" <Liam.Howlett@...cle.com>,
        Jann Horn <jannh@...gle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] mm: completely abstract unnecessary adj_start
 calculation

On Wed, Jan 29, 2025 at 04:42:06PM +0100, Vlastimil Babka wrote:
> On 1/27/25 16:50, Lorenzo Stoakes wrote:
> > The adj_start calculation has been a constant source of confusion in the
> > VMA merge code.
> >
> > There are two cases to consider, one where we adjust the start of the
> > vmg->middle VMA (i.e. the __VMG_FLAG_ADJUST_MIDDLE_START merge flag is
> > set), in which case adj_start is calculated as:
> >
> > (1) adj_start = vmg->end - vmg->middle->vm_start
> >
> > And the case where we adjust the start of the vmg->next VMA (i.e. the
> > __VMG_FLAG_ADJUST_NEXT_START merge flag is set), in which case adj_start is
> > calculated as:
> >
> > (2) adj_start = -(vmg->middle->vm_end - vmg->end)
> >
> > We apply (1) thusly:
> >
> > vmg->middle->vm_start =
> > 	vmg->middle->vm_start + vmg->end - vmg->middle->vm_start
> >
> > Which simplifies to:
> >
> > vmg->middle->vm_start = vmg->end
> >
> > Similarly, we apply (2) as:
> >
> > vmg->next->vm_start =
> > 	vmg->next->vm_start + -(vmg->middle->vm_end - vmg->end)
> >
> > Noting that for these VMAs to be mergeable vmg->middle->vm_end ==
> > vmg->next->vm_start and so this simplifies to:
> >
> > vmg->next->vm_start =
> > 	vmg->next->vm_start + -(vmg->next->vm_start - vmg->end)
> >
> > Which simplifies to:
> >
> > vmg->next->vm_start = vmg->end
> >
> > Therefore in each case, we simply need to adjust the start of the VMA to
> > vmg->end (!) and can do away with this adj_start calculation. The only
> > caveat is that we must ensure we update the vm_pgoff field correctly.
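(As a quick aside, the algebra is easy to sanity-check outside the kernel; the
throwaway userspace snippet below uses made-up addresses and local variables
standing in for the vmg fields, nothing in it is actual mm code:)

	#include <assert.h>

	int main(void)
	{
		unsigned long middle_start = 0x1000, middle_end = 0x3000;
		unsigned long next_start = middle_end;	/* mergeable: next abuts middle */
		unsigned long end = 0x2000;		/* plays the role of vmg->end */

		/* Case (1): adjust the start of middle. */
		long adj_start = end - middle_start;
		assert(middle_start + adj_start == end);

		/* Case (2): adjust the start of next. */
		adj_start = -(long)(middle_end - end);
		assert(next_start + adj_start == end);

		return 0;
	}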
> >
> > We therefore abstract this entire calculation to a new function
> > vmg_adjust_set_range() which performs this calculation and sets the
> > adjusted VMA's new range using the general vma_set_range() function.
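(To make the abstraction concrete, the helper presumably ends up with roughly
the shape below. The body is my paraphrase of the reasoning above rather than
the actual hunk; in particular the pgoff arithmetic and the use of PHYS_PFN()
are a guess at the shape, not a quote of the patch:)

static void vmg_adjust_set_range(struct vma_merge_struct *vmg)
{
	struct vm_area_struct *adjust;
	pgoff_t pgoff;

	if (vmg->merge_flags & __VMG_FLAG_ADJUST_MIDDLE_START) {
		/* Case (1): middle's start moves forward to vmg->end. */
		adjust = vmg->middle;
		pgoff = adjust->vm_pgoff + PHYS_PFN(vmg->end - adjust->vm_start);
	} else {
		/* Case (2): next's start moves back to vmg->end. */
		adjust = vmg->next;
		pgoff = adjust->vm_pgoff - PHYS_PFN(adjust->vm_start - vmg->end);
	}

	/* In both cases the new start is simply vmg->end. */
	vma_set_range(adjust, vmg->end, adjust->vm_end, pgoff);
}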
> >
> > We also must update vma_adjust_trans_huge() which expects the
> > now-abstracted adj_start parameter. It turns out this is wholly
> > unnecessary.
> >
> > In vma_adjust_trans_huge() the relevant code is:
> >
> > 	if (adjust_next > 0) {
> > 		struct vm_area_struct *next = find_vma(vma->vm_mm, vma->vm_end);
> > 		unsigned long nstart = next->vm_start;
> > 		nstart += adjust_next;
> > 		split_huge_pmd_if_needed(next, nstart);
> > 	}
> >
> > The only case where this is relevant is when __VMG_FLAG_ADJUST_MIDDLE_START
> > is specified (in which case adjust_next would have been positive), i.e. the
> > one in which the vma specified is vmg->prev and thus the sought 'next' VMA
> > would be vmg->middle.
> >
> > We can therefore eliminate the find_vma() invocation altogether and simply
> > provide the vmg->middle VMA in this instance, or NULL otherwise.
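(In commit_merge() the call then presumably collapses to something like the
below, reusing the vma/adj_middle locals visible in the hunk further down; this
is a sketch of the idea, not the exact diff:)

	vma_adjust_trans_huge(vma, vmg->start, vmg->end,
			      adj_middle ? vmg->middle : NULL);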
> >
> > Again we have an adjust_next offset calculation:
> >
> > next->vm_start + vmg->end - vmg->middle->vm_start
> >
> > Where next == vmg->middle this simplifies to vmg->end as previously
> > demonstrated.
> >
> > Therefore nstart is equal to vmg->end, which is already passed to
> > vma_adjust_trans_huge() via the 'end' parameter and so this code (rather
> > delightfully) simplifies to:
> >
> > 	if (next)
> > 		split_huge_pmd_if_needed(next, end);
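(So the whole helper plausibly ends up as simple as the below; the signature
taking a next VMA pointer and the two split_huge_pmd_if_needed() calls on the
vma itself are my reading of the description above, not a quote of the patch:)

void vma_adjust_trans_huge(struct vm_area_struct *vma,
			   unsigned long start, unsigned long end,
			   struct vm_area_struct *next)
{
	/* Split at the boundaries of vma itself, as before. */
	split_huge_pmd_if_needed(vma, start);
	split_huge_pmd_if_needed(vma, end);

	/* Only split next if we were asked to adjust its start. */
	if (next)
		split_huge_pmd_if_needed(next, end);
}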
> >
> > With these changes in place, it becomes silly for commit_merge() to return
> > vmg->target, as it is always the same and threaded through vmg, so we
> > finally change commit_merge() to return an error value once again.
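(Caller-side this then reads as you'd expect; purely illustrative, with the
error handling elided:)

	if (commit_merge(vmg))
		return NULL;	/* caller-specific error handling here */

	/* The merged VMA is always the target threaded through vmg. */
	return vmg->target;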
> >
> > This patch has no change in functional behaviour.
> >
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
>
> Yeah this makes the preparations worth it. Nice!
>
> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>

Thanks!

>
> > +/*
> > + * Actually perform the VMA merge operation.
> > + *
> > + * On success, returns the merged VMA. Otherwise returns NULL.
>
> Needs updating?

Ah yeah you're right, will fix!
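Probably just something along the lines of:

/*
 * Actually perform the VMA merge operation.
 *
 * Returns 0 on success, or an error value on failure.
 */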

>
> > + */
> > +static int commit_merge(struct vma_merge_struct *vmg)
> > +{
> > +	struct vm_area_struct *vma;
> > +	struct vma_prepare vp;
> > +	bool adj_middle = vmg->merge_flags & __VMG_FLAG_ADJUST_MIDDLE_START;
> >
> > -		vma_iter_config(vmg->vmi, vmg->next->vm_start + adj_start,
> > -				vmg->next->vm_end);
> > +	if (vmg->merge_flags & __VMG_FLAG_ADJUST_NEXT_START) {
> > +		/* In this case we manipulate middle and return next. */
>
> Also we don't return next anymore? At least not here.
> vma_merge_existing_range() does, but here it's rather "the target is next"?

Yeah we should reword as you say, will fix!
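Perhaps simply:

		/* In this case we manipulate middle; the target is next. */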

>
> > +		vma = vmg->middle;
> > +		vma_iter_config(vmg->vmi, vmg->end, vmg->next->vm_end);
> >  	} else {
