Date: Thu, 24 Feb 2022 17:03:07 +0100
From: David Hildenbrand <david@...hat.com>
To: Dan Carpenter <dan.carpenter@...cle.com>, kbuild@...ts.01.org
Cc: lkp@...el.com, kbuild-all@...ts.01.org, linux-kernel@...r.kernel.org,
 Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [davidhildenbrand:cow_fixes_part_2 21/27] mm/hugetlb.c:6051
 follow_hugetlb_page() error: uninitialized symbol 'unshare'.

On 24.02.22 16:19, Dan Carpenter wrote:
> tree:   git://github.com/davidhildenbrand/linux cow_fixes_part_2
> head:   6a519d5bcfc204824056f340a0cfc86207962151
> commit: d41af2eea859d0123cf08a88eae48239d5bdae2b [21/27] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
> config: x86_64-randconfig-m001 (https://download.01.org/0day-ci/archive/20220224/202202241452.QrPElUkk-lkp@intel.com/config)
> compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@...el.com>
> Reported-by: Dan Carpenter <dan.carpenter@...cle.com>
> 
> smatch warnings:
> mm/hugetlb.c:6051 follow_hugetlb_page() error: uninitialized symbol 'unshare'.
> 
> vim +/unshare +6051 mm/hugetlb.c
> 
> 28a35716d31798 Michel Lespinasse  2013-02-22  5977  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> 63551ae0feaaa2 David Gibson       2005-06-21  5978                           struct page **pages, struct vm_area_struct **vmas,
> 28a35716d31798 Michel Lespinasse  2013-02-22  5979                           unsigned long *position, unsigned long *nr_pages,
> 4f6da93411806d Peter Xu           2020-04-01  5980                           long i, unsigned int flags, int *locked)
> 63551ae0feaaa2 David Gibson       2005-06-21  5981  {
> d5d4b0aa4e1430 Kenneth W Chen     2006-03-22  5982          unsigned long pfn_offset;
> d5d4b0aa4e1430 Kenneth W Chen     2006-03-22  5983          unsigned long vaddr = *position;
> 28a35716d31798 Michel Lespinasse  2013-02-22  5984          unsigned long remainder = *nr_pages;
> a5516438959d90 Andi Kleen         2008-07-23  5985          struct hstate *h = hstate_vma(vma);
> 0fa5bc4023c188 Joao Martins       2021-02-24  5986          int err = -EFAULT, refs;
> 63551ae0feaaa2 David Gibson       2005-06-21  5987  
> 63551ae0feaaa2 David Gibson       2005-06-21  5988          while (vaddr < vma->vm_end && remainder) {
> 63551ae0feaaa2 David Gibson       2005-06-21  5989                  pte_t *pte;
> cb900f41215447 Kirill A. Shutemov 2013-11-14  5990                  spinlock_t *ptl = NULL;
> d41af2eea859d0 David Hildenbrand  2021-12-16  5991                  bool unshare;
> 2a15efc953b26a Hugh Dickins       2009-09-21  5992                  int absent;
> 63551ae0feaaa2 David Gibson       2005-06-21  5993                  struct page *page;
> 63551ae0feaaa2 David Gibson       2005-06-21  5994  
> 02057967b5d3b7 David Rientjes     2015-04-14  5995                  /*
> 02057967b5d3b7 David Rientjes     2015-04-14  5996                   * If we have a pending SIGKILL, don't keep faulting pages and
> 02057967b5d3b7 David Rientjes     2015-04-14  5997                   * potentially allocating memory.
> 02057967b5d3b7 David Rientjes     2015-04-14  5998                   */
> fa45f1162f28cb Davidlohr Bueso    2019-01-03  5999                  if (fatal_signal_pending(current)) {
> 02057967b5d3b7 David Rientjes     2015-04-14  6000                          remainder = 0;
> 02057967b5d3b7 David Rientjes     2015-04-14  6001                          break;
> 02057967b5d3b7 David Rientjes     2015-04-14  6002                  }
> 02057967b5d3b7 David Rientjes     2015-04-14  6003  
> 4c887265977213 Adam Litke         2005-10-29  6004                  /*
> 4c887265977213 Adam Litke         2005-10-29  6005                   * Some archs (sparc64, sh*) have multiple pte_ts to
> 2a15efc953b26a Hugh Dickins       2009-09-21  6006                   * each hugepage. We have to make sure we get the
> 4c887265977213 Adam Litke         2005-10-29  6007                   * first, for the page indexing below to work.
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6008                   *
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6009                   * Note that page table lock is not held when pte is null.
> 4c887265977213 Adam Litke         2005-10-29  6010                   */
> 7868a2087ec13e Punit Agrawal      2017-07-06  6011                  pte = huge_pte_offset(mm, vaddr & huge_page_mask(h),
> 7868a2087ec13e Punit Agrawal      2017-07-06  6012                                        huge_page_size(h));
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6013                  if (pte)
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6014                          ptl = huge_pte_lock(h, mm, pte);
> 2a15efc953b26a Hugh Dickins       2009-09-21  6015                  absent = !pte || huge_pte_none(huge_ptep_get(pte));
> 2a15efc953b26a Hugh Dickins       2009-09-21  6016  
> 2a15efc953b26a Hugh Dickins       2009-09-21  6017                  /*
> 2a15efc953b26a Hugh Dickins       2009-09-21  6018                   * When coredumping, it suits get_dump_page if we just return
> 3ae77f43b1118a Hugh Dickins       2009-09-21  6019                   * an error where there's an empty slot with no huge pagecache
> 3ae77f43b1118a Hugh Dickins       2009-09-21  6020                   * to back it. This way, we avoid allocating a hugepage, and
> 3ae77f43b1118a Hugh Dickins       2009-09-21  6021                   * the sparse dumpfile avoids allocating disk blocks, but its
> 3ae77f43b1118a Hugh Dickins       2009-09-21  6022                   * huge holes still show up with zeroes where they need to be.
> 2a15efc953b26a Hugh Dickins       2009-09-21  6023                   */
> 3ae77f43b1118a Hugh Dickins       2009-09-21  6024                  if (absent && (flags & FOLL_DUMP) &&
> 3ae77f43b1118a Hugh Dickins       2009-09-21  6025                      !hugetlbfs_pagecache_present(h, vma, vaddr)) {
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6026                          if (pte)
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6027                                  spin_unlock(ptl);
> 2a15efc953b26a Hugh Dickins       2009-09-21  6028                          remainder = 0;
> 2a15efc953b26a Hugh Dickins       2009-09-21  6029                          break;
> 2a15efc953b26a Hugh Dickins       2009-09-21  6030                  }
> 63551ae0feaaa2 David Gibson       2005-06-21  6031  
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6032                  /*
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6033                   * We need call hugetlb_fault for both hugepages under migration
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6034                   * (in which case hugetlb_fault waits for the migration,) and
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6035                   * hwpoisoned hugepages (in which case we need to prevent the
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6036                   * caller from accessing to them.) In order to do this, we use
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6037                   * here is_swap_pte instead of is_hugetlb_entry_migration and
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6038                   * is_hugetlb_entry_hwpoisoned. This is because it simply covers
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6039                   * both cases, and because we can't follow correct pages
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6040                   * directly from any kind of swap entries.
> 9cc3a5bd40067b Naoya Horiguchi    2013-04-17  6041                   */
> d41af2eea859d0 David Hildenbrand  2021-12-16  6042                  if (absent ||
> d41af2eea859d0 David Hildenbrand  2021-12-16  6043                      __follow_hugetlb_must_fault(flags, pte, &unshare)) {
> 
> Can absent be true
> 
> 2b7403035459c7 Souptick Joarder   2018-08-23  6044                          vm_fault_t ret;
> 87ffc118b54dcd Andrea Arcangeli   2017-02-22  6045                          unsigned int fault_flags = 0;
> 4c887265977213 Adam Litke         2005-10-29  6046  
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6047                          if (pte)
> cb900f41215447 Kirill A. Shutemov 2013-11-14  6048                                  spin_unlock(ptl);
> 87ffc118b54dcd Andrea Arcangeli   2017-02-22  6049                          if (flags & FOLL_WRITE)
> 87ffc118b54dcd Andrea Arcangeli   2017-02-22  6050                                  fault_flags |= FAULT_FLAG_WRITE;
> 
> And FOLL_WRITE is not set?
> 
> d41af2eea859d0 David Hildenbrand  2021-12-16 @6051                          else if (unshare)
>                                                                                      ^^^^^^^
> Uninitialized warning...
> 

Yes, I messed it up during a refactoring when moving the initialization
into __follow_hugetlb_must_fault(). I'll just move it back out

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c2a7dd8a48d..f5a310789b58 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5962,7 +5962,6 @@ static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte,
 {
         pte_t pteval = huge_ptep_get(pte);
 
-        *unshare = false;
         if (is_swap_pte(pteval))
                 return true;
         if (huge_pte_write(pteval))
@@ -5988,7 +5987,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
         while (vaddr < vma->vm_end && remainder) {
                 pte_t *pte;
                 spinlock_t *ptl = NULL;
-                bool unshare;
+                bool unshare = false;
                 int absent;
                 struct page *page;
 

Thanks!

-- 
Thanks,

David / dhildenb
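
For readers outside the thread, the warning comes down to a short-circuit pattern: when 'absent' is true, the || operator skips __follow_hugetlb_must_fault(), which after the refactoring was the only place that wrote *unshare, so the later "else if (unshare)" test can read a value that was never set. Below is a minimal standalone C sketch of that pattern and of the fix applied in the diff above (initializing the flag at its declaration). The names lookup_page and needs_fault are illustrative stand-ins, not the actual kernel helpers.

/*
 * Sketch only: reproduces the control flow smatch complained about,
 * with the fix (initializer at the declaration) already applied.
 */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for __follow_hugetlb_must_fault(): the only callee that
 * writes *unshare, mirroring the situation in the report. */
static bool needs_fault(bool writable, bool want_write, bool *unshare)
{
        *unshare = false;
        if (!writable && !want_write) {
                *unshare = true;        /* pretend an unshare fault is needed */
                return true;
        }
        return !writable && want_write;
}

static void lookup_page(bool absent, bool writable, bool want_write)
{
        /*
         * The fix: give the flag a value at the declaration. With a plain
         * "bool unshare;", the 'absent' path below would reach the
         * "else if (unshare)" test without 'unshare' ever being written.
         */
        bool unshare = false;

        if (absent || needs_fault(writable, want_write, &unshare)) {
                /* When 'absent' is true, || short-circuits and needs_fault()
                 * never runs, so only the initializer sets 'unshare'. */
                if (want_write)
                        printf("write fault\n");
                else if (unshare)
                        printf("unshare fault\n");
                else
                        printf("read fault\n");
        }
}

int main(void)
{
        /* absent set and want_write clear is exactly the path the
         * smatch warning at mm/hugetlb.c:6051 points at. */
        lookup_page(true, true, false);
        return 0;
}

With the pre-fix "bool unshare;" declaration, that same call would read an indeterminate value at the "else if (unshare)" line, which is what the uninitialized-symbol warning reports.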