Message-ID: <Y7cbbLAZIxcL77+f@monkey>
Date: Thu, 5 Jan 2023 10:48:12 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Muchun Song <songmuchun@...edance.com>,
Nadav Amit <nadav.amit@...il.com>,
Andrea Arcangeli <aarcange@...hat.com>,
David Hildenbrand <david@...hat.com>,
James Houghton <jthoughton@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/3] mm/mprotect: Use long for page accountings and retval
On 01/04/23 17:52, Peter Xu wrote:
> Switch to using type "long" for page accounting and the return value across
> the whole of change_protection().
>
> The change halves the maximum page count that can be represented compared to
> before (it is now ULONG_MAX / 2), but it cannot overflow on any system,
> because the maximum number of pages change_protection() can touch is
> ULONG_MAX / PAGE_SIZE.
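
For a quick sanity check of that bound (assuming a 64-bit system with 4 KiB
pages; the numbers are mine, not from the patch):

	ULONG_MAX / PAGE_SIZE = 2^64 / 2^12 = 2^52 pages
	LONG_MAX              = 2^63 - 1

so the worst-case count fits in a signed long with plenty of headroom.
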
>
> Two reasons to switch from "unsigned long" to "long":
>
> 1. It suits count_vm_numa_events() better, whose 2nd parameter takes a
>    long type.
>
> 2. It paves the way for returning negative (error) values in the future.
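
On point 2, a minimal userspace sketch (not kernel code; the helper names are
made up) of why the signed type matters: an -EINVAL stored in an unsigned
long comes back as a huge bogus "page count", while a long stays negative and
can simply be tested with < 0:

#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-ins for a counting function that may fail. */
static unsigned long count_unsigned(void) { return -EINVAL; }
static long count_signed(void) { return -EINVAL; }

int main(void)
{
	printf("unsigned retval: %lu\n", count_unsigned()); /* close to ULONG_MAX */
	printf("signed retval:   %ld\n", count_signed());   /* -22 */
	return 0;
}
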
>
> Currently the only caller that consumes this retval is change_prot_numa(),
> where the unsigned long was converted to an int. While at it, touch up the
> NUMA code to also take a long, so it also avoids any possible overflow
> during the conversion to int.
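
And to illustrate the conversion issue being avoided (again a userspace
sketch with invented numbers, not the actual NUMA path): a count larger than
INT_MAX silently loses its upper bits when narrowed to int on the usual LP64
ABIs, which is exactly what keeping the value in a long sidesteps:

#include <stdio.h>

int main(void)
{
	long nr_updated = 1L << 40;	/* hypothetical large page count */
	int narrowed = (int)nr_updated;	/* only the low 32 bits survive: 0 here */

	printf("long: %ld  int: %d\n", nr_updated, narrowed);
	return 0;
}
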
>
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
> include/linux/hugetlb.h | 4 ++--
> include/linux/mm.h | 2 +-
> mm/hugetlb.c | 4 ++--
> mm/mempolicy.c | 2 +-
> mm/mprotect.c | 26 +++++++++++++-------------
> 5 files changed, 19 insertions(+), 19 deletions(-)
Acked-by: Mike Kravetz <mike.kravetz@...cle.com>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index b6b10101bea7..e3aa336df900 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -248,7 +248,7 @@ void hugetlb_vma_lock_release(struct kref *kref);
>
> int pmd_huge(pmd_t pmd);
> int pud_huge(pud_t pud);
> -unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> +long hugetlb_change_protection(struct vm_area_struct *vma,
> unsigned long address, unsigned long end, pgprot_t newprot,
> unsigned long cp_flags);
>
> @@ -437,7 +437,7 @@ static inline void move_hugetlb_state(struct folio *old_folio,
> {
> }
>
> -static inline unsigned long hugetlb_change_protection(
> +static inline long hugetlb_change_protection(
> struct vm_area_struct *vma, unsigned long address,
> unsigned long end, pgprot_t newprot,
> unsigned long cp_flags)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c37f9330f14e..86fe17e6ded7 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2132,7 +2132,7 @@ static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma
> }
> bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> pte_t pte);
> -extern unsigned long change_protection(struct mmu_gather *tlb,
> +extern long change_protection(struct mmu_gather *tlb,
> struct vm_area_struct *vma, unsigned long start,
> unsigned long end, unsigned long cp_flags);
> extern int mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 017d9159cddf..84bc665c7c86 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6613,7 +6613,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> return i ? i : err;
> }
>
> -unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> +long hugetlb_change_protection(struct vm_area_struct *vma,
> unsigned long address, unsigned long end,
> pgprot_t newprot, unsigned long cp_flags)
> {
> @@ -6622,7 +6622,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> pte_t *ptep;
> pte_t pte;
> struct hstate *h = hstate_vma(vma);
> - unsigned long pages = 0, psize = huge_page_size(h);
> + long pages = 0, psize = huge_page_size(h);
Small nit:
psize is passed to routines as an unsigned long argument. The arithmetic
should always be correct, but I am not sure whether some of the static
checkers might complain.
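
To spell the nit out with a made-up userspace example (not the hugetlb code):
passing an always-positive long into an unsigned long parameter is well
defined and yields the right value, but a build with something like
-Wsign-conversion can still warn about the implicit conversion:

#include <stdio.h>

static unsigned long advance(unsigned long addr, unsigned long sz)
{
	return addr + sz;
}

int main(void)
{
	long psize = 4096;			/* stands in for huge_page_size() */
	unsigned long next = advance(0, psize);	/* implicit long -> unsigned long */

	printf("%lu\n", next);
	return 0;
}
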
--
Mike Kravetz
> bool shared_pmd = false;
> struct mmu_notifier_range range;
> unsigned long last_addr_mask;