Date: Wed, 7 Dec 2022 13:21:12 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Peter Xu <peterx@...hat.com>, <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
CC: Muchun Song <songmuchun@...edance.com>, Andrea Arcangeli <aarcange@...hat.com>,
	James Houghton <jthoughton@...gle.com>, Jann Horn <jannh@...gle.com>,
	Rik van Riel <riel@...riel.com>, Miaohe Lin <linmiaohe@...wei.com>,
	Andrew Morton <akpm@...ux-foundation.org>, Mike Kravetz <mike.kravetz@...cle.com>,
	David Hildenbrand <david@...hat.com>, Nadav Amit <nadav.amit@...il.com>
Subject: Re: [PATCH v2 01/10] mm/hugetlb: Let vma_offset_start() to return start

On 12/7/22 12:30, Peter Xu wrote:
> Even though vma_offset_start() is named like that, it's not returning "the
> start address of the range" but rather the offset we should use to offset
> the vma->vm_start address.
>
> Make it return the real value of the start vaddr, and it also helps for all
> the callers because whenever the retval is used, it'll be ultimately added
> into the vma->vm_start anyway, so it's better.
>
> Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
> Reviewed-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
>  fs/hugetlbfs/inode.c | 24 ++++++++++++------------
>  1 file changed, 12 insertions(+), 12 deletions(-)

This is a nice refinement.
Reviewed-by: John Hubbard <jhubbard@...dia.com>

thanks,
--
John Hubbard
NVIDIA

>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 790d2727141a..fdb16246f46e 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -412,10 +412,12 @@ static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
>   */
>  static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
>  {
> +	unsigned long offset = 0;
> +
>  	if (vma->vm_pgoff < start)
> -		return (start - vma->vm_pgoff) << PAGE_SHIFT;
> -	else
> -		return 0;
> +		offset = (start - vma->vm_pgoff) << PAGE_SHIFT;
> +
> +	return vma->vm_start + offset;
>  }
>
>  static unsigned long vma_offset_end(struct vm_area_struct *vma, pgoff_t end)
> @@ -457,7 +459,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
>  		v_start = vma_offset_start(vma, start);
>  		v_end = vma_offset_end(vma, end);
>
> -		if (!hugetlb_vma_maps_page(vma, vma->vm_start + v_start, page))
> +		if (!hugetlb_vma_maps_page(vma, v_start, page))
>  			continue;
>
>  		if (!hugetlb_vma_trylock_write(vma)) {
> @@ -473,8 +475,8 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
>  			break;
>  		}
>
> -		unmap_hugepage_range(vma, vma->vm_start + v_start, v_end,
> -				     NULL, ZAP_FLAG_DROP_MARKER);
> +		unmap_hugepage_range(vma, v_start, v_end, NULL,
> +				     ZAP_FLAG_DROP_MARKER);
>  		hugetlb_vma_unlock_write(vma);
>  	}
>
> @@ -507,10 +509,9 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
>  	 */
>  	v_start = vma_offset_start(vma, start);
>  	v_end = vma_offset_end(vma, end);
> -	if (hugetlb_vma_maps_page(vma, vma->vm_start + v_start, page))
> -		unmap_hugepage_range(vma, vma->vm_start + v_start,
> -				     v_end, NULL,
> -				     ZAP_FLAG_DROP_MARKER);
> +	if (hugetlb_vma_maps_page(vma, v_start, page))
> +		unmap_hugepage_range(vma, v_start, v_end, NULL,
> +				     ZAP_FLAG_DROP_MARKER);
>
>  	kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
>  	hugetlb_vma_unlock_write(vma);
> @@ -540,8 +541,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>  		v_start = vma_offset_start(vma, start);
>  		v_end = vma_offset_end(vma, end);
>
> -		unmap_hugepage_range(vma, vma->vm_start + v_start, v_end,
> -				     NULL, zap_flags);
> +		unmap_hugepage_range(vma, v_start, v_end, NULL, zap_flags);
>
>  		/*
>  		 * Note that vma lock only exists for shared/non-private