Message-ID: <d0930856-15cb-559c-4205-5d1352b075f7@oracle.com>
Date: Mon, 28 Aug 2023 10:42:53 +0100
From: Joao Martins <joao.m.martins@...cle.com>
To: kernel test robot <lkp@...el.com>,
Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org
Cc: llvm@...ts.linux.dev, oe-kbuild-all@...ts.linux.dev,
Muchun Song <songmuchun@...edance.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Miaohe Lin <linmiaohe@...wei.com>,
David Rientjes <rientjes@...gle.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Barry Song <song.bao.hua@...ilicon.com>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Xiongchun Duan <duanxiongchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/12] hugetlb: batch PMD split for bulk vmemmap dedup
On 26/08/2023 06:56, kernel test robot wrote:
> Hi Mike,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on next-20230825]
> [cannot apply to akpm-mm/mm-everything v6.5-rc7 v6.5-rc6 v6.5-rc5 linus/master v6.5-rc7]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Mike-Kravetz/hugetlb-clear-flags-in-tail-pages-that-will-be-freed-individually/20230826-030805
> base: next-20230825
> patch link: https://lore.kernel.org/r/20230825190436.55045-11-mike.kravetz%40oracle.com
> patch subject: [PATCH 10/12] hugetlb: batch PMD split for bulk vmemmap dedup
> config: s390-randconfig-001-20230826 (https://download.01.org/0day-ci/archive/20230826/202308261325.ipTttZHZ-lkp@intel.com/config)
> compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
> reproduce: (https://download.01.org/0day-ci/archive/20230826/202308261325.ipTttZHZ-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@...el.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202308261325.ipTttZHZ-lkp@intel.com/
>
> All error/warnings (new ones prefixed by >>):
>
[...]
>>> mm/hugetlb_vmemmap.c:698:28: error: use of undeclared identifier 'TLB_FLUSH_ALL'
>     698 |         flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
>         |                                   ^
> 2 warnings and 1 error generated.
>
>
TLB_FLUSH_ALL is x86-only, so what I wrote above is wrong in code that should be
architecture-independent. The way I should have written the global TLB flush is
to use flush_tlb_all(), which is what each arch implements.
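In other words, something like this on top (untested, just to illustrate the
fix; flush_tlb_all() is the arch-provided global kernel TLB flush):

-        flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
+        /* arch-independent global kernel TLB flush */
+        flush_tlb_all();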
The alternative is to compose a start/end tuple in the top-level optimize-folios
function as we iterate over the folios to remap, and flush via
flush_tlb_kernel_range(); a rough sketch of that follows below. But this would
likely be relevant on x86 only, namely to optimize the flushing of 3 contiguous
2M hugetlb pages (~24 vmemmap pages), as that's where the per-page VA flush
ceiling is set (31 pages) before falling back to a global TLB flush. I wasn't
sure the added complexity was worth the dubious benefit, so I kept the global
TLB flush.
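For reference, the ranged variant would look roughly like the below (untested
sketch: the start/end accumulation is mine, the rest mirrors the quoted
hugetlb_vmemmap_optimize_folios() from this series):

void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
{
        struct folio *folio;
        unsigned long start = -1UL, end = 0;
        LIST_HEAD(vmemmap_pages);

        list_for_each_entry(folio, folio_list, lru) {
                unsigned long vaddr = (unsigned long)&folio->page;

                hugetlb_vmemmap_split(h, &folio->page);

                /* accumulate the vmemmap VA span covered by this folio */
                start = min(start, vaddr);
                end = max(end, vaddr + hugetlb_vmemmap_size(h));
        }

        /* one ranged flush covering all the split PMDs, instead of a global one */
        if (start < end)
                flush_tlb_kernel_range(start, end);

        list_for_each_entry(folio, folio_list, lru)
                hugetlb_vmemmap_optimize_bulk(h, &folio->page, &vmemmap_pages);

        free_vmemmap_page_list(&vmemmap_pages);
}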
> vim +/TLB_FLUSH_ALL +698 mm/hugetlb_vmemmap.c
>
>    666
>  > 667  void hugetlb_vmemmap_split(const struct hstate *h, struct page *head)
>    668  {
>    669          unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
>    670          unsigned long vmemmap_reuse;
>    671
>    672          if (!vmemmap_should_optimize(h, head))
>    673                  return;
>    674
>    675          static_branch_inc(&hugetlb_optimize_vmemmap_key);
>    676
>    677          vmemmap_end   = vmemmap_start + hugetlb_vmemmap_size(h);
>    678          vmemmap_reuse = vmemmap_start;
>    679          vmemmap_start += HUGETLB_VMEMMAP_RESERVE_SIZE;
>    680
>    681          /*
>    682           * Remap the vmemmap virtual address range [@vmemmap_start, @vmemmap_end)
>    683           * to the page which @vmemmap_reuse is mapped to, then free the pages
>    684           * which the range [@vmemmap_start, @vmemmap_end] is mapped to.
>    685           */
>    686          if (vmemmap_remap_split(vmemmap_start, vmemmap_end, vmemmap_reuse))
>    687                  static_branch_dec(&hugetlb_optimize_vmemmap_key);
>    688  }
>    689
>    690  void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
>    691  {
>    692          struct folio *folio;
>    693          LIST_HEAD(vmemmap_pages);
>    694
>    695          list_for_each_entry(folio, folio_list, lru)
>    696                  hugetlb_vmemmap_split(h, &folio->page);
>    697
>  > 698          flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
>    699
>    700          list_for_each_entry(folio, folio_list, lru)
>    701                  hugetlb_vmemmap_optimize_bulk(h, &folio->page, &vmemmap_pages);
>    702
>    703          free_vmemmap_page_list(&vmemmap_pages);
>    704  }
>    705
>