Date:   Fri, 14 Feb 2020 14:16:00 +0800
From:   kbuild test robot <lkp@...el.com>
To:     Christoph Hellwig <hch@....de>
Cc:     kbuild-all@...ts.01.org, linux-kernel@...r.kernel.org,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Anup Patel <anup@...infault.org>
Subject: mm/hugetlb.c:3454:14: error: implicit declaration of function
 'pte_page'; did you mean 'put_page'?

Hi Christoph,

FYI, the errors/warnings below still remain.

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
head:   b19e8c68470385dd2c5440876591fddb02c8c402
commit: 6bd33e1ece528f67646db33bf97406b747dafda0 riscv: add nommu support
date:   3 months ago
config: riscv-randconfig-a001-20200214 (attached as .config)
compiler: riscv64-linux-gcc (GCC) 7.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        git checkout 6bd33e1ece528f67646db33bf97406b747dafda0
        # save the attached .config to linux build tree
        GCC_VERSION=7.5.0 make.cross ARCH=riscv 
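
For reference, a roughly equivalent reproduction with a locally installed
cross toolchain (the riscv64-linux- prefix below is an assumption and may
differ from your toolchain) would be:

        # save the attached .config to the linux build tree, then:
        make ARCH=riscv CROSS_COMPILE=riscv64-linux- olddefconfig
        make ARCH=riscv CROSS_COMPILE=riscv64-linux- mm/hugetlb.o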

If you fix the issue, kindly add the following tag:
Reported-by: kbuild test robot <lkp@...el.com>

All errors/warnings (new ones prefixed by >>):

   In file included from arch/riscv/include/asm/hugetlb.h:5:0,
                    from include/linux/hugetlb.h:444,
                    from mm/hugetlb.c:36:
   include/asm-generic/hugetlb.h: In function 'mk_huge_pte':
   include/asm-generic/hugetlb.h:7:9: error: implicit declaration of function 'mk_pte'; did you mean '__pte'? [-Werror=implicit-function-declaration]
     return mk_pte(page, pgprot);
            ^~~~~~
            __pte
   include/asm-generic/hugetlb.h:7:9: error: incompatible types when returning type 'int' but 'pte_t {aka struct <anonymous>}' was expected
     return mk_pte(page, pgprot);
            ^~~~~~~~~~~~~~~~~~~~
   include/asm-generic/hugetlb.h: In function 'huge_pte_write':
   include/asm-generic/hugetlb.h:12:9: error: implicit declaration of function 'pte_write'; did you mean 'pgd_write'? [-Werror=implicit-function-declaration]
     return pte_write(pte);
            ^~~~~~~~~
            pgd_write
   include/asm-generic/hugetlb.h: In function 'huge_pte_dirty':
   include/asm-generic/hugetlb.h:17:9: error: implicit declaration of function 'pte_dirty'; did you mean 'info_dirty'? [-Werror=implicit-function-declaration]
     return pte_dirty(pte);
            ^~~~~~~~~
            info_dirty
   include/asm-generic/hugetlb.h: In function 'huge_pte_mkwrite':
   include/asm-generic/hugetlb.h:22:9: error: implicit declaration of function 'pte_mkwrite'; did you mean 'pgd_write'? [-Werror=implicit-function-declaration]
     return pte_mkwrite(pte);
            ^~~~~~~~~~~
            pgd_write
   include/asm-generic/hugetlb.h:22:9: error: incompatible types when returning type 'int' but 'pte_t {aka struct <anonymous>}' was expected
     return pte_mkwrite(pte);
            ^~~~~~~~~~~~~~~~
   include/asm-generic/hugetlb.h: In function 'huge_pte_mkdirty':
   include/asm-generic/hugetlb.h:27:9: error: implicit declaration of function 'pte_mkdirty'; did you mean 'huge_pte_mkdirty'? [-Werror=implicit-function-declaration]
     return pte_mkdirty(pte);
            ^~~~~~~~~~~
            huge_pte_mkdirty
   include/asm-generic/hugetlb.h:27:9: error: incompatible types when returning type 'int' but 'pte_t {aka struct <anonymous>}' was expected
     return pte_mkdirty(pte);
            ^~~~~~~~~~~~~~~~
   include/asm-generic/hugetlb.h: In function 'huge_pte_modify':
   include/asm-generic/hugetlb.h:32:9: error: implicit declaration of function 'pte_modify'; did you mean 'lease_modify'? [-Werror=implicit-function-declaration]
     return pte_modify(pte, newprot);
            ^~~~~~~~~~
            lease_modify
   include/asm-generic/hugetlb.h:32:9: error: incompatible types when returning type 'int' but 'pte_t {aka struct <anonymous>}' was expected
     return pte_modify(pte, newprot);
            ^~~~~~~~~~~~~~~~~~~~~~~~
   include/asm-generic/hugetlb.h: In function 'huge_pte_clear':
   include/asm-generic/hugetlb.h:39:2: error: implicit declaration of function 'pte_clear'; did you mean 'pud_clear'? [-Werror=implicit-function-declaration]
     pte_clear(mm, addr, ptep);
     ^~~~~~~~~
     pud_clear
   include/asm-generic/hugetlb.h: In function 'set_huge_pte_at':
   include/asm-generic/hugetlb.h:56:2: error: implicit declaration of function 'set_pte_at'; did you mean 'set_huge_pte_at'? [-Werror=implicit-function-declaration]
     set_pte_at(mm, addr, ptep, pte);
     ^~~~~~~~~~
     set_huge_pte_at
   include/asm-generic/hugetlb.h: In function 'huge_ptep_get_and_clear':
   include/asm-generic/hugetlb.h:64:9: error: implicit declaration of function 'ptep_get_and_clear'; did you mean 'huge_ptep_get_and_clear'? [-Werror=implicit-function-declaration]
     return ptep_get_and_clear(mm, addr, ptep);
            ^~~~~~~~~~~~~~~~~~
            huge_ptep_get_and_clear
   include/asm-generic/hugetlb.h:64:9: error: incompatible types when returning type 'int' but 'pte_t {aka struct <anonymous>}' was expected
     return ptep_get_and_clear(mm, addr, ptep);
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   include/asm-generic/hugetlb.h: In function 'huge_ptep_clear_flush':
   include/asm-generic/hugetlb.h:72:2: error: implicit declaration of function 'ptep_clear_flush'; did you mean 'huge_ptep_clear_flush'? [-Werror=implicit-function-declaration]
     ptep_clear_flush(vma, addr, ptep);
     ^~~~~~~~~~~~~~~~
     huge_ptep_clear_flush
   include/asm-generic/hugetlb.h: In function 'huge_pte_none':
   include/asm-generic/hugetlb.h:79:9: error: implicit declaration of function 'pte_none'; did you mean 'pud_none'? [-Werror=implicit-function-declaration]
     return pte_none(pte);
            ^~~~~~~~
            pud_none
   include/asm-generic/hugetlb.h: In function 'huge_pte_wrprotect':
   include/asm-generic/hugetlb.h:86:9: error: implicit declaration of function 'pte_wrprotect'; did you mean 'huge_pte_wrprotect'? [-Werror=implicit-function-declaration]
     return pte_wrprotect(pte);
            ^~~~~~~~~~~~~
            huge_pte_wrprotect
   include/asm-generic/hugetlb.h:86:9: error: incompatible types when returning type 'int' but 'pte_t {aka struct <anonymous>}' was expected
     return pte_wrprotect(pte);
            ^~~~~~~~~~~~~~~~~~
   include/asm-generic/hugetlb.h: In function 'huge_ptep_set_wrprotect':
   include/asm-generic/hugetlb.h:109:2: error: implicit declaration of function 'ptep_set_wrprotect'; did you mean 'huge_ptep_set_wrprotect'? [-Werror=implicit-function-declaration]
     ptep_set_wrprotect(mm, addr, ptep);
     ^~~~~~~~~~~~~~~~~~
     huge_ptep_set_wrprotect
   include/asm-generic/hugetlb.h: In function 'huge_ptep_set_access_flags':
   include/asm-generic/hugetlb.h:118:9: error: implicit declaration of function 'ptep_set_access_flags'; did you mean 'huge_ptep_set_access_flags'? [-Werror=implicit-function-declaration]
     return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
            ^~~~~~~~~~~~~~~~~~~~~
            huge_ptep_set_access_flags
   mm/hugetlb.c: In function 'make_huge_pte':
>> mm/hugetlb.c:3327:10: error: implicit declaration of function 'pte_mkyoung'; did you mean 'page_mapping'? [-Werror=implicit-function-declaration]
     entry = pte_mkyoung(entry);
             ^~~~~~~~~~~
             page_mapping
>> mm/hugetlb.c:3327:8: error: incompatible types when assigning to type 'pte_t {aka struct <anonymous>}' from type 'int'
     entry = pte_mkyoung(entry);
           ^
>> mm/hugetlb.c:3328:10: error: implicit declaration of function 'pte_mkhuge'; did you mean 'pud_huge'? [-Werror=implicit-function-declaration]
     entry = pte_mkhuge(entry);
             ^~~~~~~~~~
             pud_huge
   mm/hugetlb.c:3328:8: error: incompatible types when assigning to type 'pte_t {aka struct <anonymous>}' from type 'int'
     entry = pte_mkhuge(entry);
           ^
   mm/hugetlb.c: In function 'set_huge_ptep_writable':
>> mm/hugetlb.c:3341:3: error: implicit declaration of function 'update_mmu_cache'; did you mean 'node_add_cache'? [-Werror=implicit-function-declaration]
      update_mmu_cache(vma, address, ptep);
      ^~~~~~~~~~~~~~~~
      node_add_cache
   mm/hugetlb.c: In function 'is_hugetlb_entry_migration':
>> mm/hugetlb.c:3348:28: error: implicit declaration of function 'pte_present'; did you mean 'pud_present'? [-Werror=implicit-function-declaration]
     if (huge_pte_none(pte) || pte_present(pte))
                               ^~~~~~~~~~~
                               pud_present
   mm/hugetlb.c:3350:8: error: implicit declaration of function 'pte_to_swp_entry'; did you mean 'pte_lockptr'? [-Werror=implicit-function-declaration]
     swp = pte_to_swp_entry(pte);
           ^~~~~~~~~~~~~~~~
           pte_lockptr
>> mm/hugetlb.c:3350:6: error: incompatible types when assigning to type 'swp_entry_t {aka struct <anonymous>}' from type 'int'
     swp = pte_to_swp_entry(pte);
         ^
>> mm/hugetlb.c:3351:6: error: implicit declaration of function 'non_swap_entry'; did you mean 'init_wait_entry'? [-Werror=implicit-function-declaration]
     if (non_swap_entry(swp) && is_migration_entry(swp))
         ^~~~~~~~~~~~~~
         init_wait_entry
>> mm/hugetlb.c:3351:29: error: implicit declaration of function 'is_migration_entry'; did you mean 'list_first_entry'? [-Werror=implicit-function-declaration]
     if (non_swap_entry(swp) && is_migration_entry(swp))
                                ^~~~~~~~~~~~~~~~~~
                                list_first_entry
   mm/hugetlb.c: In function 'is_hugetlb_entry_hwpoisoned':
   mm/hugetlb.c:3363:6: error: incompatible types when assigning to type 'swp_entry_t {aka struct <anonymous>}' from type 'int'
     swp = pte_to_swp_entry(pte);
         ^
>> mm/hugetlb.c:3364:29: error: implicit declaration of function 'is_hwpoison_entry'; did you mean 'hwpoison_filter'? [-Werror=implicit-function-declaration]
     if (non_swap_entry(swp) && is_hwpoison_entry(swp))
                                ^~~~~~~~~~~~~~~~~
                                hwpoison_filter
   mm/hugetlb.c: In function 'copy_hugetlb_page_range':
>> mm/hugetlb.c:3429:28: error: invalid initializer
       swp_entry_t swp_entry = pte_to_swp_entry(entry);
                               ^~~~~~~~~~~~~~~~
>> mm/hugetlb.c:3431:8: error: implicit declaration of function 'is_write_migration_entry'; did you mean 'init_wait_entry'? [-Werror=implicit-function-declaration]
       if (is_write_migration_entry(swp_entry) && cow) {
           ^~~~~~~~~~~~~~~~~~~~~~~~
           init_wait_entry
>> mm/hugetlb.c:3436:5: error: implicit declaration of function 'make_migration_entry_read'; did you mean 'thp_migration_supported'? [-Werror=implicit-function-declaration]
        make_migration_entry_read(&swp_entry);
        ^~~~~~~~~~~~~~~~~~~~~~~~~
        thp_migration_supported
>> mm/hugetlb.c:3437:13: error: implicit declaration of function 'swp_entry_to_pte'; did you mean '__d_entry_type'? [-Werror=implicit-function-declaration]
        entry = swp_entry_to_pte(swp_entry);
                ^~~~~~~~~~~~~~~~
                __d_entry_type
   mm/hugetlb.c:3437:11: error: incompatible types when assigning to type 'pte_t {aka struct <anonymous>}' from type 'int'
        entry = swp_entry_to_pte(swp_entry);
              ^
>> mm/hugetlb.c:3454:14: error: implicit declaration of function 'pte_page'; did you mean 'put_page'? [-Werror=implicit-function-declaration]
       ptepage = pte_page(entry);
                 ^~~~~~~~
                 put_page
   mm/hugetlb.c:3454:12: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
       ptepage = pte_page(entry);
               ^
>> mm/hugetlb.c:3456:4: error: implicit declaration of function 'page_dup_rmap'; did you mean 'page_is_ram'? [-Werror=implicit-function-declaration]
       page_dup_rmap(ptepage, true);
       ^~~~~~~~~~~~~
       page_is_ram
   mm/hugetlb.c: In function '__unmap_hugepage_range':
>> mm/hugetlb.c:3492:2: error: implicit declaration of function 'tlb_change_page_size'; did you mean 'huge_page_size'? [-Werror=implicit-function-declaration]
     tlb_change_page_size(tlb, sz);
     ^~~~~~~~~~~~~~~~~~~~
     huge_page_size
>> mm/hugetlb.c:3493:2: error: implicit declaration of function 'tlb_start_vma'; did you mean 'hstate_vma'? [-Werror=implicit-function-declaration]
     tlb_start_vma(tlb, vma);
     ^~~~~~~~~~~~~
     hstate_vma
   mm/hugetlb.c:3534:8: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
      page = pte_page(pte);
           ^
>> mm/hugetlb.c:3554:3: error: implicit declaration of function 'tlb_remove_huge_tlb_entry'; did you mean 'move_hugetlb_state'? [-Werror=implicit-function-declaration]
      tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
      ^~~~~~~~~~~~~~~~~~~~~~~~~
      move_hugetlb_state
>> mm/hugetlb.c:3559:3: error: implicit declaration of function 'page_remove_rmap'; did you mean 'page_anon_vma'? [-Werror=implicit-function-declaration]
      page_remove_rmap(page, true);
      ^~~~~~~~~~~~~~~~
      page_anon_vma
>> mm/hugetlb.c:3562:3: error: implicit declaration of function 'tlb_remove_page_size'; did you mean 'vma_mmu_pagesize'? [-Werror=implicit-function-declaration]
      tlb_remove_page_size(tlb, page, huge_page_size(h));
      ^~~~~~~~~~~~~~~~~~~~
      vma_mmu_pagesize
   mm/hugetlb.c:3570:2: error: implicit declaration of function 'tlb_end_vma'; did you mean 'find_vma'? [-Werror=implicit-function-declaration]
     tlb_end_vma(tlb, vma);
     ^~~~~~~~~~~
     find_vma
   mm/hugetlb.c: In function 'unmap_hugepage_range':
   mm/hugetlb.c:3596:20: error: storage size of 'tlb' isn't known
     struct mmu_gather tlb;
                       ^~~
   mm/hugetlb.c:3596:20: warning: unused variable 'tlb' [-Wunused-variable]
   mm/hugetlb.c: In function 'hugetlb_cow':
   mm/hugetlb.c:3691:11: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
     old_page = pte_page(pte);
              ^
   mm/hugetlb.c:3697:3: error: implicit declaration of function 'page_move_anon_rmap'; did you mean 'page_anon_vma'? [-Werror=implicit-function-declaration]
      page_move_anon_rmap(old_page, vma);
      ^~~~~~~~~~~~~~~~~~~
      page_anon_vma
   In file included from include/linux/kernel.h:11:0,
                    from include/linux/list.h:9,
                    from mm/hugetlb.c:6:
   mm/hugetlb.c:3740:8: error: implicit declaration of function 'pte_same'; did you mean 'pte_val'? [-Werror=implicit-function-declaration]
           pte_same(huge_ptep_get(ptep), pte)))
           ^
   include/linux/compiler.h:33:34: note: in definition of macro '__branch_check__'
       ______r = __builtin_expect(!!(x), expect); \
                                     ^
   mm/hugetlb.c:3739:8: note: in expansion of macro 'likely'
       if (likely(ptep &&
           ^~~~~~
   mm/hugetlb.c:3785:3: error: implicit declaration of function 'hugepage_add_new_anon_rmap'; did you mean 'hugepage_new_subpool'? [-Werror=implicit-function-declaration]
      hugepage_add_new_anon_rmap(new_page, vma, haddr);
      ^~~~~~~~~~~~~~~~~~~~~~~~~~
      hugepage_new_subpool
   mm/hugetlb.c: In function 'hugetlb_fault':
   mm/hugetlb.c:4089:4: error: implicit declaration of function 'migration_entry_wait_huge' [-Werror=implicit-function-declaration]
       migration_entry_wait_huge(vma, mm, ptep);
       ^~~~~~~~~~~~~~~~~~~~~~~~~
   mm/hugetlb.c:4161:7: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
     page = pte_page(entry);
          ^
   mm/hugetlb.c:4178:8: error: incompatible types when assigning to type 'pte_t {aka struct <anonymous>}' from type 'int'
     entry = pte_mkyoung(entry);
           ^
   mm/hugetlb.c: In function 'hugetlb_mcopy_atomic_pte':
   mm/hugetlb.c:4311:11: error: incompatible types when assigning to type 'pte_t {aka struct <anonymous>}' from type 'int'
     _dst_pte = pte_mkyoung(_dst_pte);
              ^
   mm/hugetlb.c: In function 'follow_hugetlb_page':
   mm/hugetlb.c:4402:17: error: implicit declaration of function 'is_swap_pte'; did you mean 'is_swap_pmd'? [-Werror=implicit-function-declaration]
      if (absent || is_swap_pte(huge_ptep_get(pte)) ||
                    ^~~~~~~~~~~
                    is_swap_pmd
   mm/hugetlb.c:4448:8: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
      page = pte_page(huge_ptep_get(pte));
           ^
   mm/hugetlb.c: In function 'hugetlb_change_protection':
   mm/hugetlb.c:4548:24: error: invalid initializer
       swp_entry_t entry = pte_to_swp_entry(pte);
                           ^~~~~~~~~~~~~~~~
   mm/hugetlb.c:4554:12: error: incompatible types when assigning to type 'pte_t {aka struct <anonymous>}' from type 'int'
        newpte = swp_entry_to_pte(entry);
               ^
   mm/hugetlb.c:4566:8: error: incompatible types when assigning to type 'pte_t {aka struct <anonymous>}' from type 'int'
       pte = pte_mkhuge(huge_pte_modify(old_pte, newprot));
           ^
   mm/hugetlb.c: In function 'huge_pmd_share':
   mm/hugetlb.c:4843:19: error: implicit declaration of function 'pmd_alloc'; did you mean '__pmd_alloc'? [-Werror=implicit-function-declaration]
      return (pte_t *)pmd_alloc(mm, pud, addr);
                      ^~~~~~~~~
                      __pmd_alloc
   mm/hugetlb.c:4843:10: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
      return (pte_t *)pmd_alloc(mm, pud, addr);
             ^
   mm/hugetlb.c:4866:3: error: implicit declaration of function 'pud_populate'; did you mean 'pgd_populate'? [-Werror=implicit-function-declaration]
      pud_populate(mm, pud,
      ^~~~~~~~~~~~
      pgd_populate
   mm/hugetlb.c:4874:8: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
     pte = (pte_t *)pmd_alloc(mm, pud, addr);
           ^
   mm/hugetlb.c: In function 'huge_pmd_unshare':
   mm/hugetlb.c:4893:15: error: implicit declaration of function 'pgd_offset'; did you mean 'pmd_offset'? [-Werror=implicit-function-declaration]
     pgd_t *pgd = pgd_offset(mm, *addr);
                  ^~~~~~~~~~
                  pmd_offset
   mm/hugetlb.c:4893:15: warning: initialization makes pointer from integer without a cast [-Wint-conversion]
   In file included from include/linux/cache.h:5:0,
                    from include/linux/printk.h:9,
                    from include/linux/kernel.h:15,
                    from include/linux/list.h:9,
                    from mm/hugetlb.c:6:
   mm/hugetlb.c:4904:36: error: 'PTRS_PER_PTE' undeclared (first use in this function); did you mean 'PTRS_PER_P4D'?
     *addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
                                       ^
   include/uapi/linux/kernel.h:11:47: note: in definition of macro '__ALIGN_KERNEL_MASK'
    #define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask))
                                                  ^~~~
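
Every error pair above follows the same pattern: the pgtable headers in this
configuration (a !CONFIG_MMU build, judging by the blamed nommu commit) do
not provide the basic pte helpers such as pte_mkyoung(), pte_page() and
set_pte_at() that asm-generic/hugetlb.h and mm/hugetlb.c call, so GCC first
reports an implicit declaration (defaulting the return type to int) and then
an int-to-pte_t mismatch at the same call site.  A minimal stand-alone sketch
of the kind of helper an MMU configuration normally supplies (the name
mirrors the log; the struct layout and bit position are illustrative only,
not the real riscv definitions):

        typedef struct { unsigned long pte; } pte_t;    /* stand-in for the arch pte_t */

        /* When a definition like this is visible, the hugetlb call sites
         * type-check; in this nommu config it is absent, so each call is
         * implicitly declared as returning int. */
        static inline pte_t pte_mkyoung(pte_t pte)
        {
                pte.pte |= (1UL << 6);          /* illustrative "accessed" bit */
                return pte;
        }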

vim +3454 mm/hugetlb.c

^1da177e4c3f415 Linus Torvalds     2005-04-16  3314  
1e8f889b10d8d22 David Gibson       2006-01-06  3315  static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
1e8f889b10d8d22 David Gibson       2006-01-06  3316  				int writable)
63551ae0feaaa23 David Gibson       2005-06-21  3317  {
63551ae0feaaa23 David Gibson       2005-06-21  3318  	pte_t entry;
63551ae0feaaa23 David Gibson       2005-06-21  3319  
1e8f889b10d8d22 David Gibson       2006-01-06  3320  	if (writable) {
106c992a5ebef28 Gerald Schaefer    2013-04-29  3321  		entry = huge_pte_mkwrite(huge_pte_mkdirty(mk_huge_pte(page,
106c992a5ebef28 Gerald Schaefer    2013-04-29  3322  					 vma->vm_page_prot)));
63551ae0feaaa23 David Gibson       2005-06-21  3323  	} else {
106c992a5ebef28 Gerald Schaefer    2013-04-29  3324  		entry = huge_pte_wrprotect(mk_huge_pte(page,
106c992a5ebef28 Gerald Schaefer    2013-04-29  3325  					   vma->vm_page_prot));
63551ae0feaaa23 David Gibson       2005-06-21  3326  	}
63551ae0feaaa23 David Gibson       2005-06-21 @3327  	entry = pte_mkyoung(entry);
63551ae0feaaa23 David Gibson       2005-06-21 @3328  	entry = pte_mkhuge(entry);
d9ed9faac283a3b Chris Metcalf      2012-04-01  3329  	entry = arch_make_huge_pte(entry, vma, page, writable);
63551ae0feaaa23 David Gibson       2005-06-21  3330  
63551ae0feaaa23 David Gibson       2005-06-21  3331  	return entry;
63551ae0feaaa23 David Gibson       2005-06-21  3332  }
63551ae0feaaa23 David Gibson       2005-06-21  3333  
1e8f889b10d8d22 David Gibson       2006-01-06  3334  static void set_huge_ptep_writable(struct vm_area_struct *vma,
1e8f889b10d8d22 David Gibson       2006-01-06  3335  				   unsigned long address, pte_t *ptep)
1e8f889b10d8d22 David Gibson       2006-01-06  3336  {
1e8f889b10d8d22 David Gibson       2006-01-06  3337  	pte_t entry;
1e8f889b10d8d22 David Gibson       2006-01-06  3338  
106c992a5ebef28 Gerald Schaefer    2013-04-29  3339  	entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep)));
32f84528fbb5177 Chris Forbes       2011-07-25  3340  	if (huge_ptep_set_access_flags(vma, address, ptep, entry, 1))
4b3073e1c53a256 Russell King       2009-12-18 @3341  		update_mmu_cache(vma, address, ptep);
1e8f889b10d8d22 David Gibson       2006-01-06  3342  }
1e8f889b10d8d22 David Gibson       2006-01-06  3343  
d5ed7444dafb94b Aneesh Kumar K.V   2017-07-06  3344  bool is_hugetlb_entry_migration(pte_t pte)
4a705fef986231a Naoya Horiguchi    2014-06-23  3345  {
4a705fef986231a Naoya Horiguchi    2014-06-23  3346  	swp_entry_t swp;
4a705fef986231a Naoya Horiguchi    2014-06-23  3347  
4a705fef986231a Naoya Horiguchi    2014-06-23 @3348  	if (huge_pte_none(pte) || pte_present(pte))
d5ed7444dafb94b Aneesh Kumar K.V   2017-07-06  3349  		return false;
4a705fef986231a Naoya Horiguchi    2014-06-23 @3350  	swp = pte_to_swp_entry(pte);
4a705fef986231a Naoya Horiguchi    2014-06-23 @3351  	if (non_swap_entry(swp) && is_migration_entry(swp))
d5ed7444dafb94b Aneesh Kumar K.V   2017-07-06  3352  		return true;
4a705fef986231a Naoya Horiguchi    2014-06-23  3353  	else
d5ed7444dafb94b Aneesh Kumar K.V   2017-07-06  3354  		return false;
4a705fef986231a Naoya Horiguchi    2014-06-23  3355  }
4a705fef986231a Naoya Horiguchi    2014-06-23  3356  
4a705fef986231a Naoya Horiguchi    2014-06-23  3357  static int is_hugetlb_entry_hwpoisoned(pte_t pte)
4a705fef986231a Naoya Horiguchi    2014-06-23  3358  {
4a705fef986231a Naoya Horiguchi    2014-06-23  3359  	swp_entry_t swp;
4a705fef986231a Naoya Horiguchi    2014-06-23  3360  
4a705fef986231a Naoya Horiguchi    2014-06-23  3361  	if (huge_pte_none(pte) || pte_present(pte))
4a705fef986231a Naoya Horiguchi    2014-06-23  3362  		return 0;
4a705fef986231a Naoya Horiguchi    2014-06-23  3363  	swp = pte_to_swp_entry(pte);
4a705fef986231a Naoya Horiguchi    2014-06-23 @3364  	if (non_swap_entry(swp) && is_hwpoison_entry(swp))
4a705fef986231a Naoya Horiguchi    2014-06-23  3365  		return 1;
4a705fef986231a Naoya Horiguchi    2014-06-23  3366  	else
4a705fef986231a Naoya Horiguchi    2014-06-23  3367  		return 0;
4a705fef986231a Naoya Horiguchi    2014-06-23  3368  }
1e8f889b10d8d22 David Gibson       2006-01-06  3369  
63551ae0feaaa23 David Gibson       2005-06-21  3370  int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
63551ae0feaaa23 David Gibson       2005-06-21  3371  			    struct vm_area_struct *vma)
63551ae0feaaa23 David Gibson       2005-06-21  3372  {
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3373  	pte_t *src_pte, *dst_pte, entry, dst_entry;
63551ae0feaaa23 David Gibson       2005-06-21  3374  	struct page *ptepage;
1c59827d1da9bcd Hugh Dickins       2005-10-19  3375  	unsigned long addr;
1e8f889b10d8d22 David Gibson       2006-01-06  3376  	int cow;
a5516438959d90b Andi Kleen         2008-07-23  3377  	struct hstate *h = hstate_vma(vma);
a5516438959d90b Andi Kleen         2008-07-23  3378  	unsigned long sz = huge_page_size(h);
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3379  	struct mmu_notifier_range range;
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3380  	int ret = 0;
1e8f889b10d8d22 David Gibson       2006-01-06  3381  
1e8f889b10d8d22 David Gibson       2006-01-06  3382  	cow = (vma->vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
63551ae0feaaa23 David Gibson       2005-06-21  3383  
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3384  	if (cow) {
7269f999934b289 Jérôme Glisse      2019-05-13  3385  		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, src,
6f4f13e8d9e27ce Jérôme Glisse      2019-05-13  3386  					vma->vm_start,
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3387  					vma->vm_end);
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3388  		mmu_notifier_invalidate_range_start(&range);
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3389  	}
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3390  
a5516438959d90b Andi Kleen         2008-07-23  3391  	for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
cb900f412154474 Kirill A. Shutemov 2013-11-14  3392  		spinlock_t *src_ptl, *dst_ptl;
7868a2087ec13ec Punit Agrawal      2017-07-06  3393  		src_pte = huge_pte_offset(src, addr, sz);
c74df32c724a165 Hugh Dickins       2005-10-29  3394  		if (!src_pte)
c74df32c724a165 Hugh Dickins       2005-10-29  3395  			continue;
a5516438959d90b Andi Kleen         2008-07-23  3396  		dst_pte = huge_pte_alloc(dst, addr, sz);
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3397  		if (!dst_pte) {
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3398  			ret = -ENOMEM;
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3399  			break;
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3400  		}
c5c99429fa57dcf Larry Woodman      2008-01-24  3401  
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3402  		/*
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3403  		 * If the pagetables are shared don't copy or take references.
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3404  		 * dst_pte == src_pte is the common case of src/dest sharing.
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3405  		 *
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3406  		 * However, src could have 'unshared' and dst shares with
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3407  		 * another vma.  If dst_pte !none, this implies sharing.
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3408  		 * Check here before taking page table lock, and once again
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3409  		 * after taking the lock below.
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3410  		 */
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3411  		dst_entry = huge_ptep_get(dst_pte);
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3412  		if ((dst_pte == src_pte) || !huge_pte_none(dst_entry))
c5c99429fa57dcf Larry Woodman      2008-01-24  3413  			continue;
c5c99429fa57dcf Larry Woodman      2008-01-24  3414  
cb900f412154474 Kirill A. Shutemov 2013-11-14  3415  		dst_ptl = huge_pte_lock(h, dst, dst_pte);
cb900f412154474 Kirill A. Shutemov 2013-11-14  3416  		src_ptl = huge_pte_lockptr(h, src, src_pte);
cb900f412154474 Kirill A. Shutemov 2013-11-14  3417  		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
4a705fef986231a Naoya Horiguchi    2014-06-23  3418  		entry = huge_ptep_get(src_pte);
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3419  		dst_entry = huge_ptep_get(dst_pte);
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3420  		if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3421  			/*
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3422  			 * Skip if src entry none.  Also, skip in the
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3423  			 * unlikely case dst entry !none as this implies
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3424  			 * sharing with another vma.
5e41540c8a0f0e9 Mike Kravetz       2018-11-16  3425  			 */
4a705fef986231a Naoya Horiguchi    2014-06-23  3426  			;
4a705fef986231a Naoya Horiguchi    2014-06-23  3427  		} else if (unlikely(is_hugetlb_entry_migration(entry) ||
4a705fef986231a Naoya Horiguchi    2014-06-23  3428  				    is_hugetlb_entry_hwpoisoned(entry))) {
4a705fef986231a Naoya Horiguchi    2014-06-23 @3429  			swp_entry_t swp_entry = pte_to_swp_entry(entry);
4a705fef986231a Naoya Horiguchi    2014-06-23  3430  
4a705fef986231a Naoya Horiguchi    2014-06-23 @3431  			if (is_write_migration_entry(swp_entry) && cow) {
4a705fef986231a Naoya Horiguchi    2014-06-23  3432  				/*
4a705fef986231a Naoya Horiguchi    2014-06-23  3433  				 * COW mappings require pages in both
4a705fef986231a Naoya Horiguchi    2014-06-23  3434  				 * parent and child to be set to read.
4a705fef986231a Naoya Horiguchi    2014-06-23  3435  				 */
4a705fef986231a Naoya Horiguchi    2014-06-23 @3436  				make_migration_entry_read(&swp_entry);
4a705fef986231a Naoya Horiguchi    2014-06-23 @3437  				entry = swp_entry_to_pte(swp_entry);
e5251fd43007f9e Punit Agrawal      2017-07-06  3438  				set_huge_swap_pte_at(src, addr, src_pte,
e5251fd43007f9e Punit Agrawal      2017-07-06  3439  						     entry, sz);
4a705fef986231a Naoya Horiguchi    2014-06-23  3440  			}
e5251fd43007f9e Punit Agrawal      2017-07-06  3441  			set_huge_swap_pte_at(dst, addr, dst_pte, entry, sz);
4a705fef986231a Naoya Horiguchi    2014-06-23  3442  		} else {
34ee645e83b60ae Joerg Roedel       2014-11-13  3443  			if (cow) {
0f10851ea475e08 Jérôme Glisse      2017-11-15  3444  				/*
0f10851ea475e08 Jérôme Glisse      2017-11-15  3445  				 * No need to notify as we are downgrading page
0f10851ea475e08 Jérôme Glisse      2017-11-15  3446  				 * table protection not changing it to point
0f10851ea475e08 Jérôme Glisse      2017-11-15  3447  				 * to a new page.
0f10851ea475e08 Jérôme Glisse      2017-11-15  3448  				 *
ad56b738c5dd223 Mike Rapoport      2018-03-21  3449  				 * See Documentation/vm/mmu_notifier.rst
0f10851ea475e08 Jérôme Glisse      2017-11-15  3450  				 */
7f2e9525ba55b1c Gerald Schaefer    2008-04-28  3451  				huge_ptep_set_wrprotect(src, addr, src_pte);
34ee645e83b60ae Joerg Roedel       2014-11-13  3452  			}
0253d634e0803a8 Naoya Horiguchi    2014-07-23  3453  			entry = huge_ptep_get(src_pte);
63551ae0feaaa23 David Gibson       2005-06-21 @3454  			ptepage = pte_page(entry);
63551ae0feaaa23 David Gibson       2005-06-21  3455  			get_page(ptepage);
53f9263baba69fc Kirill A. Shutemov 2016-01-15 @3456  			page_dup_rmap(ptepage, true);
63551ae0feaaa23 David Gibson       2005-06-21  3457  			set_huge_pte_at(dst, addr, dst_pte, entry);
5d317b2b6536592 Naoya Horiguchi    2015-11-05  3458  			hugetlb_count_add(pages_per_huge_page(h), dst);
1c59827d1da9bcd Hugh Dickins       2005-10-19  3459  		}
cb900f412154474 Kirill A. Shutemov 2013-11-14  3460  		spin_unlock(src_ptl);
cb900f412154474 Kirill A. Shutemov 2013-11-14  3461  		spin_unlock(dst_ptl);
63551ae0feaaa23 David Gibson       2005-06-21  3462  	}
63551ae0feaaa23 David Gibson       2005-06-21  3463  
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3464  	if (cow)
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3465  		mmu_notifier_invalidate_range_end(&range);
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3466  
e8569dd299dbc7b Andreas Sandberg   2014-01-21  3467  	return ret;
63551ae0feaaa23 David Gibson       2005-06-21  3468  }
63551ae0feaaa23 David Gibson       2005-06-21  3469  
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3470  void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3471  			    unsigned long start, unsigned long end,
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3472  			    struct page *ref_page)
63551ae0feaaa23 David Gibson       2005-06-21  3473  {
63551ae0feaaa23 David Gibson       2005-06-21  3474  	struct mm_struct *mm = vma->vm_mm;
63551ae0feaaa23 David Gibson       2005-06-21  3475  	unsigned long address;
c7546f8f03f5a4f David Gibson       2005-08-05  3476  	pte_t *ptep;
63551ae0feaaa23 David Gibson       2005-06-21  3477  	pte_t pte;
cb900f412154474 Kirill A. Shutemov 2013-11-14  3478  	spinlock_t *ptl;
63551ae0feaaa23 David Gibson       2005-06-21  3479  	struct page *page;
a5516438959d90b Andi Kleen         2008-07-23  3480  	struct hstate *h = hstate_vma(vma);
a5516438959d90b Andi Kleen         2008-07-23  3481  	unsigned long sz = huge_page_size(h);
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3482  	struct mmu_notifier_range range;
a5516438959d90b Andi Kleen         2008-07-23  3483  
63551ae0feaaa23 David Gibson       2005-06-21  3484  	WARN_ON(!is_vm_hugetlb_page(vma));
a5516438959d90b Andi Kleen         2008-07-23  3485  	BUG_ON(start & ~huge_page_mask(h));
a5516438959d90b Andi Kleen         2008-07-23  3486  	BUG_ON(end & ~huge_page_mask(h));
63551ae0feaaa23 David Gibson       2005-06-21  3487  
07e326610e5634e Aneesh Kumar K.V   2016-12-12  3488  	/*
07e326610e5634e Aneesh Kumar K.V   2016-12-12  3489  	 * This is a hugetlb vma, all the pte entries should point
07e326610e5634e Aneesh Kumar K.V   2016-12-12  3490  	 * to huge page.
07e326610e5634e Aneesh Kumar K.V   2016-12-12  3491  	 */
ed6a79352cad00e Peter Zijlstra     2018-08-31 @3492  	tlb_change_page_size(tlb, sz);
24669e58477e275 Aneesh Kumar K.V   2012-07-31 @3493  	tlb_start_vma(tlb, vma);
dff11abe280b47c Mike Kravetz       2018-10-05  3494  
dff11abe280b47c Mike Kravetz       2018-10-05  3495  	/*
dff11abe280b47c Mike Kravetz       2018-10-05  3496  	 * If sharing possible, alert mmu notifiers of worst case.
dff11abe280b47c Mike Kravetz       2018-10-05  3497  	 */
6f4f13e8d9e27ce Jérôme Glisse      2019-05-13  3498  	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, start,
6f4f13e8d9e27ce Jérôme Glisse      2019-05-13  3499  				end);
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3500  	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3501  	mmu_notifier_invalidate_range_start(&range);
569f48b85813f05 Hillf Danton       2014-12-10  3502  	address = start;
569f48b85813f05 Hillf Danton       2014-12-10  3503  	for (; address < end; address += sz) {
7868a2087ec13ec Punit Agrawal      2017-07-06  3504  		ptep = huge_pte_offset(mm, address, sz);
c7546f8f03f5a4f David Gibson       2005-08-05  3505  		if (!ptep)
c7546f8f03f5a4f David Gibson       2005-08-05  3506  			continue;
c7546f8f03f5a4f David Gibson       2005-08-05  3507  
cb900f412154474 Kirill A. Shutemov 2013-11-14  3508  		ptl = huge_pte_lock(h, mm, ptep);
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3509  		if (huge_pmd_unshare(mm, &address, ptep)) {
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3510  			spin_unlock(ptl);
dff11abe280b47c Mike Kravetz       2018-10-05  3511  			/*
dff11abe280b47c Mike Kravetz       2018-10-05  3512  			 * We just unmapped a page of PMDs by clearing a PUD.
dff11abe280b47c Mike Kravetz       2018-10-05  3513  			 * The caller's TLB flush range should cover this area.
dff11abe280b47c Mike Kravetz       2018-10-05  3514  			 */
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3515  			continue;
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3516  		}
39dde65c9940c97 Kenneth W Chen     2006-12-06  3517  
6629326b89b6e69 Hillf Danton       2012-03-23  3518  		pte = huge_ptep_get(ptep);
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3519  		if (huge_pte_none(pte)) {
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3520  			spin_unlock(ptl);
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3521  			continue;
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3522  		}
6629326b89b6e69 Hillf Danton       2012-03-23  3523  
6629326b89b6e69 Hillf Danton       2012-03-23  3524  		/*
9fbc1f635fd0bd2 Naoya Horiguchi    2015-02-11  3525  		 * Migrating hugepage or HWPoisoned hugepage is already
9fbc1f635fd0bd2 Naoya Horiguchi    2015-02-11  3526  		 * unmapped and its refcount is dropped, so just clear pte here.
6629326b89b6e69 Hillf Danton       2012-03-23  3527  		 */
9fbc1f635fd0bd2 Naoya Horiguchi    2015-02-11  3528  		if (unlikely(!pte_present(pte))) {
9386fac34c7cbe3 Punit Agrawal      2017-07-06  3529  			huge_pte_clear(mm, address, ptep, sz);
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3530  			spin_unlock(ptl);
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3531  			continue;
8c4894c6bc790d0 Naoya Horiguchi    2012-12-12  3532  		}
6629326b89b6e69 Hillf Danton       2012-03-23  3533  
6629326b89b6e69 Hillf Danton       2012-03-23  3534  		page = pte_page(pte);
04f2cbe35699d22 Mel Gorman         2008-07-23  3535  		/*
04f2cbe35699d22 Mel Gorman         2008-07-23  3536  		 * If a reference page is supplied, it is because a specific
04f2cbe35699d22 Mel Gorman         2008-07-23  3537  		 * page is being unmapped, not a range. Ensure the page we
04f2cbe35699d22 Mel Gorman         2008-07-23  3538  		 * are about to unmap is the actual page of interest.
04f2cbe35699d22 Mel Gorman         2008-07-23  3539  		 */
04f2cbe35699d22 Mel Gorman         2008-07-23  3540  		if (ref_page) {
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3541  			if (page != ref_page) {
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3542  				spin_unlock(ptl);
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3543  				continue;
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3544  			}
04f2cbe35699d22 Mel Gorman         2008-07-23  3545  			/*
04f2cbe35699d22 Mel Gorman         2008-07-23  3546  			 * Mark the VMA as having unmapped its page so that
04f2cbe35699d22 Mel Gorman         2008-07-23  3547  			 * future faults in this VMA will fail rather than
04f2cbe35699d22 Mel Gorman         2008-07-23  3548  			 * looking like data was lost
04f2cbe35699d22 Mel Gorman         2008-07-23  3549  			 */
04f2cbe35699d22 Mel Gorman         2008-07-23  3550  			set_vma_resv_flags(vma, HPAGE_RESV_UNMAPPED);
04f2cbe35699d22 Mel Gorman         2008-07-23  3551  		}
04f2cbe35699d22 Mel Gorman         2008-07-23  3552  
c7546f8f03f5a4f David Gibson       2005-08-05  3553  		pte = huge_ptep_get_and_clear(mm, address, ptep);
b528e4b6405b9fd Aneesh Kumar K.V   2016-12-12 @3554  		tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
106c992a5ebef28 Gerald Schaefer    2013-04-29  3555  		if (huge_pte_dirty(pte))
6649a3863232eb2 Ken Chen           2007-02-08  3556  			set_page_dirty(page);
9e81130b7ce2305 Hillf Danton       2012-03-21  3557  
5d317b2b6536592 Naoya Horiguchi    2015-11-05  3558  		hugetlb_count_sub(pages_per_huge_page(h), mm);
d281ee614518359 Kirill A. Shutemov 2016-01-15 @3559  		page_remove_rmap(page, true);
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3560  
cb900f412154474 Kirill A. Shutemov 2013-11-14  3561  		spin_unlock(ptl);
e77b0852b551ffd Aneesh Kumar K.V   2016-07-26 @3562  		tlb_remove_page_size(tlb, page, huge_page_size(h));
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3563  		/*
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3564  		 * Bail out after unmapping reference page if supplied
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3565  		 */
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3566  		if (ref_page)
31d49da5ad01728 Aneesh Kumar K.V   2016-07-26  3567  			break;
fe1668ae5bf0145 Kenneth W Chen     2006-10-04  3568  	}
ac46d4f3c43241f Jérôme Glisse      2018-12-28  3569  	mmu_notifier_invalidate_range_end(&range);
24669e58477e275 Aneesh Kumar K.V   2012-07-31 @3570  	tlb_end_vma(tlb, vma);
^1da177e4c3f415 Linus Torvalds     2005-04-16  3571  }
63551ae0feaaa23 David Gibson       2005-06-21  3572  
d833352a4338dc3 Mel Gorman         2012-07-31  3573  void __unmap_hugepage_range_final(struct mmu_gather *tlb,
d833352a4338dc3 Mel Gorman         2012-07-31  3574  			  struct vm_area_struct *vma, unsigned long start,
d833352a4338dc3 Mel Gorman         2012-07-31  3575  			  unsigned long end, struct page *ref_page)
d833352a4338dc3 Mel Gorman         2012-07-31  3576  {
d833352a4338dc3 Mel Gorman         2012-07-31  3577  	__unmap_hugepage_range(tlb, vma, start, end, ref_page);
d833352a4338dc3 Mel Gorman         2012-07-31  3578  
d833352a4338dc3 Mel Gorman         2012-07-31  3579  	/*
d833352a4338dc3 Mel Gorman         2012-07-31  3580  	 * Clear this flag so that x86's huge_pmd_share page_table_shareable
d833352a4338dc3 Mel Gorman         2012-07-31  3581  	 * test will fail on a vma being torn down, and not grab a page table
d833352a4338dc3 Mel Gorman         2012-07-31  3582  	 * on its way out.  We're lucky that the flag has such an appropriate
d833352a4338dc3 Mel Gorman         2012-07-31  3583  	 * name, and can in fact be safely cleared here. We could clear it
d833352a4338dc3 Mel Gorman         2012-07-31  3584  	 * before the __unmap_hugepage_range above, but all that's necessary
c8c06efa8b55260 Davidlohr Bueso    2014-12-12  3585  	 * is to clear it before releasing the i_mmap_rwsem. This works
d833352a4338dc3 Mel Gorman         2012-07-31  3586  	 * because in the context this is called, the VMA is about to be
c8c06efa8b55260 Davidlohr Bueso    2014-12-12  3587  	 * destroyed and the i_mmap_rwsem is held.
d833352a4338dc3 Mel Gorman         2012-07-31  3588  	 */
d833352a4338dc3 Mel Gorman         2012-07-31  3589  	vma->vm_flags &= ~VM_MAYSHARE;
d833352a4338dc3 Mel Gorman         2012-07-31  3590  }
d833352a4338dc3 Mel Gorman         2012-07-31  3591  
502717f4e112b18 Kenneth W Chen     2006-10-11  3592  void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
04f2cbe35699d22 Mel Gorman         2008-07-23  3593  			  unsigned long end, struct page *ref_page)
502717f4e112b18 Kenneth W Chen     2006-10-11  3594  {
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3595  	struct mm_struct *mm;
24669e58477e275 Aneesh Kumar K.V   2012-07-31 @3596  	struct mmu_gather tlb;
dff11abe280b47c Mike Kravetz       2018-10-05  3597  	unsigned long tlb_start = start;
dff11abe280b47c Mike Kravetz       2018-10-05  3598  	unsigned long tlb_end = end;
dff11abe280b47c Mike Kravetz       2018-10-05  3599  
dff11abe280b47c Mike Kravetz       2018-10-05  3600  	/*
dff11abe280b47c Mike Kravetz       2018-10-05  3601  	 * If shared PMDs were possibly used within this vma range, adjust
dff11abe280b47c Mike Kravetz       2018-10-05  3602  	 * start/end for worst case tlb flushing.
dff11abe280b47c Mike Kravetz       2018-10-05  3603  	 * Note that we can not be sure if PMDs are shared until we try to
dff11abe280b47c Mike Kravetz       2018-10-05  3604  	 * unmap pages.  However, we want to make sure TLB flushing covers
dff11abe280b47c Mike Kravetz       2018-10-05  3605  	 * the largest possible range.
dff11abe280b47c Mike Kravetz       2018-10-05  3606  	 */
dff11abe280b47c Mike Kravetz       2018-10-05  3607  	adjust_range_if_pmd_sharing_possible(vma, &tlb_start, &tlb_end);
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3608  
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3609  	mm = vma->vm_mm;
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3610  
dff11abe280b47c Mike Kravetz       2018-10-05  3611  	tlb_gather_mmu(&tlb, mm, tlb_start, tlb_end);
24669e58477e275 Aneesh Kumar K.V   2012-07-31  3612  	__unmap_hugepage_range(&tlb, vma, start, end, ref_page);
dff11abe280b47c Mike Kravetz       2018-10-05  3613  	tlb_finish_mmu(&tlb, tlb_start, tlb_end);
502717f4e112b18 Kenneth W Chen     2006-10-11  3614  }
502717f4e112b18 Kenneth W Chen     2006-10-11  3615  

:::::: The code at line 3454 was first introduced by commit
:::::: 63551ae0feaaa23807ebea60de1901564bbef32e [PATCH] Hugepage consolidation

:::::: TO: David Gibson <david@...son.dropbear.id.au>
:::::: CC: Linus Torvalds <torvalds@...970.osdl.org>

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
