Message-ID: <CADyq12z8ZAPs6pAvrmSrzW5t9wqktCdVM+45FGrcX5Yf9i1wxw@mail.gmail.com>
Date: Fri, 14 Feb 2020 10:46:43 -0800
From: Brian Geffon <bgeffon@...gle.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Michael S . Tsirkin" <mst@...hat.com>,
Arnd Bergmann <arnd@...db.de>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Linux API <linux-api@...r.kernel.org>,
Andy Lutomirski <luto@...capital.net>,
Will Deacon <will@...nel.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Sonny Rao <sonnyrao@...gle.com>,
Minchan Kim <minchan@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Yu Zhao <yuzhao@...gle.com>,
Jesse Barnes <jsbarnes@...gle.com>,
Nathan Chancellor <natechancellor@...il.com>,
Florian Weimer <fweimer@...hat.com>
Subject: Re: [PATCH v5 1/2] mm: Add MREMAP_DONTUNMAP to mremap().
Hi Kirill,
> > - if (vm_flags & VM_LOCKED) {
> > - mm->locked_vm += new_len >> PAGE_SHIFT;
> > - *locked = true;
> > - }
> > -
>
> Ah. You moved this piece. Why?
Because we're not unmapping. In the normal (non-MREMAP_DONTUNMAP)
case, do_munmap would have adjusted mm->locked_vm by decreasing it by
old_len, so we would then have to add new_len back in. But since we're
not doing the unmap, I want to skip that increase by new_len and just
adjust accordingly. In the MREMAP_DONTUNMAP case, if the VMA got
smaller then the do_munmap on that portion will have decreased
locked_vm by old_len - new_len, and the accounting is correct. If the
VMA size is unchanged there is nothing to do, but if it grows we're
responsible for adding new_len - old_len because of the decision to
jump over that block; with that, the accounting is right for all
cases.
If we were to keep the original block and not jump over it, we would
have to subtract old_len afterwards. That amounts to the same thing,
but now we'd be special-casing the situation where new_len < old_len:
the unmap of the removed portion would already have reduced locked_vm
by old_len - new_len, so backing out the full old_len would be too
much and we'd have to add old_len - new_len back in. I hope that
explains it all.
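To make the arithmetic concrete, here's a quick walk-through with
made-up page counts (PAGE_SHIFT scaling omitted for readability):

  - Shrink (old_len = 4 pages, new_len = 2): locked_vm starts at 4;
    the do_munmap of the trimmed 2 pages drops it to 2, which is
    exactly what the new 2-page locked mapping should account for,
    so nothing to do.
  - Same size (old_len = new_len = 4): no do_munmap runs and
    locked_vm stays at 4, again correct, so nothing to do.
  - Grow (old_len = 2, new_len = 4): locked_vm starts at 2 but the
    new locked mapping is 4 pages, so we add new_len - old_len = 2.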
Doing it this way, IMO, makes it easier to see how the locked_vm
accounting is happening, because the locked_vm increment happens in
only one of two places depending on the type of remap. But I can
definitely clean up the code a bit to drop the levels of indentation,
maybe something like this:
        /*
         * locked_vm accounting: if the mapping remained the same size
         * it will have just moved and we don't need to touch locked_vm
         * because we skip the do_munmap. If the mapping shrunk before
         * being moved then the do_munmap on that portion will have
         * adjusted locked_vm. Only if the mapping grows do we need to
         * do something special; the reason is locked_vm only accounts
         * for old_len, but we're now adding new_len - old_len locked
         * bytes to the new mapping.
         */
        if (vm_flags & VM_LOCKED && new_len > old_len) {
                mm->locked_vm += (new_len - old_len) >> PAGE_SHIFT;
                *locked = true;
        }

        /* We always clear VM_LOCKED[ONFAULT] on the old vma */
        vma->vm_flags &= VM_LOCKED_CLEAR_MASK;

        goto out;
}
Having only one place where locked_vm is accounted for and adjusted
based on the type of remap seems like it will be easier to follow and
less error-prone later. What do you think about this?
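For what it's worth, the userspace usage I have in mind looks roughly
like this (a minimal sketch against the v5 semantics; I define
MREMAP_DONTUNMAP by hand since it isn't in released uapi headers yet,
and I keep old_len == new_len to sidestep the resize cases):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MREMAP_DONTUNMAP
#define MREMAP_DONTUNMAP 4      /* value from the patch */
#endif

int main(void)
{
        size_t len = 2 * 4096;

        /* A locked, private anonymous source mapping. */
        char *old = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED,
                         -1, 0);
        if (old == MAP_FAILED)
                return 1;
        memset(old, 'a', len);

        /*
         * Move the pages to a kernel-chosen address (no MREMAP_FIXED,
         * so nothing at dest needs unmapping). The old range stays
         * mapped but loses VM_LOCKED[ONFAULT]; only the new range
         * should remain accounted in locked_vm.
         */
        char *new = mremap(old, len, len,
                           MREMAP_MAYMOVE | MREMAP_DONTUNMAP);
        if (new == MAP_FAILED)
                return 1;

        /* The old range now faults in fresh zeroed pages. */
        printf("old[0] = %d, new[0] = %c\n", old[0], new[0]);
        return 0;
}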
> > + if (flags & MREMAP_FIXED) {
>
> I think it has to be
>
> if (!(flags & MREMAP_DONTUNMAP)) {
>
> No?
No. Because we dropped the requirement to use MREMAP_FIXED with
MREMAP_DONTUNMAP: when MREMAP_FIXED isn't used we don't need to unmap
anything at dest even if something is already mapped there, because
get_unmapped_area() below won't be using the MAP_FIXED flag either;
it will search for a new unmapped area instead. And if we changed it
as suggested, we would break MREMAP_FIXED | MREMAP_DONTUNMAP, since
the unmap at dest would then be skipped even for a caller-chosen
address. So I think this is correct.
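To spell out the flow I'm describing, here is a condensed paraphrase
of the v5 mremap_to() path (not the literal patch text; the pgoff
computation and error handling are elided):

        if (flags & MREMAP_FIXED) {
                /* The caller chose new_addr; clear whatever is there. */
                ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
                if (ret)
                        goto out;
        }

        ...

        if (flags & MREMAP_FIXED)
                map_flags |= MAP_FIXED;

        /*
         * Without MAP_FIXED this searches for a free range rather
         * than claiming new_addr, so there is nothing at dest to
         * unmap.
         */
        new_addr = get_unmapped_area(vma->vm_file, new_addr, new_len,
                                     vma->vm_pgoff, map_flags);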
> > - if (flags & MREMAP_FIXED) {
> > + if (flags & MREMAP_FIXED || flags & MREMAP_DONTUNMAP) {
>
> if (flags & (MREMAP_FIXED | MREMAP_DONTUNMAP)) {
Sure, I can change that.
If you're good with all of that I can mail a new patch today.
Thanks again,
Brian