Message-ID: <20210203153351.3mtcpjrprbuj3kvf@revolver>
Date:   Wed, 3 Feb 2021 15:33:58 +0000
From:   Liam Howlett <liam.howlett@...cle.com>
To:     Dan Carpenter <dan.carpenter@...cle.com>
CC:     "kbuild@...ts.01.org" <kbuild@...ts.01.org>,
        "lkp@...el.com" <lkp@...el.com>,
        "kbuild-all@...ts.01.org" <kbuild-all@...ts.01.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [rcu:willy-maple 134/202] mm/mmap.c:2919 do_brk_munmap() error:
 we previously assumed 'vma->anon_vma' could be null (see line 2884)



Hello,

These are two valid issues.  I had already noticed one of them, but both need
to be addressed.
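
For reference, a minimal sketch of one possible guard for the do_brk_munmap()
error path (not necessarily what the final patch will look like), mirroring
the NULL checks already made on the success path at lines 2884 and 2894:

	mas_store_fail:
		vma->vm_end = oldbrk;
		/* Only undo the interval tree update and drop the lock if an
		 * anon_vma exists; the lock was only taken under that check.
		 */
		if (vma->anon_vma) {
			anon_vma_interval_tree_post_update_vma(vma);
			anon_vma_unlock_write(vma->anon_vma);
		}
		return -ENOMEM;

The mas_mod_fail path in do_brk_flags() at line 3039 needs the same treatment,
since the lock at line 2980 is likewise only taken when vma->anon_vma is set.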

Thank you Dan.

Regards,
Liam

* Dan Carpenter <dan.carpenter@...cle.com> [210203 08:15]:
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git willy-maple
> head:   7e346d2845b4bd77663394f39fa70456e0084c86
> commit: 5b05486ddd0127e852616630ef547dba96a7abad [134/202] mm/mmap: Change do_brk_flags() to expand existing VMA and add do_brk_munmap()
> config: x86_64-randconfig-m001-20210202 (attached as .config)
> compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
> 
> If you fix the issue, kindly add the following tags as appropriate
> Reported-by: kernel test robot <lkp@...el.com>
> Reported-by: Dan Carpenter <dan.carpenter@...cle.com>
> 
> smatch warnings:
> mm/mmap.c:2919 do_brk_munmap() error: we previously assumed 'vma->anon_vma' could be null (see line 2884)
> mm/mmap.c:3039 do_brk_flags() error: we previously assumed 'vma->anon_vma' could be null (see line 2980)
> 
> vim +2919 mm/mmap.c
> 
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2855  static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2856  			 unsigned long newbrk, unsigned long oldbrk,
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2857  			 struct list_head *uf)
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2858  {
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2859  	struct mm_struct *mm = vma->vm_mm;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2860  	struct vm_area_struct unmap;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2861  	unsigned long unmap_pages;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2862  	int ret = 1;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2863  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2864  	arch_unmap(mm, newbrk, oldbrk);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2865  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2866  	if (likely(vma->vm_start >= newbrk)) { // remove entire mapping(s)
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2867  		mas_set(mas, newbrk);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2868  		if (vma->vm_start != newbrk)
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2869  			mas_reset(mas); // cause a re-walk for the first overlap.
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2870  		ret = __do_munmap(mm, newbrk, oldbrk - newbrk, uf, true);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2871  		goto munmap_full_vma;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2872  	}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2873  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2874  	vma_init(&unmap, mm);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2875  	unmap.vm_start = newbrk;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2876  	unmap.vm_end = oldbrk;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2877  	ret = userfaultfd_unmap_prep(&unmap, newbrk, oldbrk, uf);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2878  	if (ret)
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2879  		return ret;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2880  	ret = 1;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2881  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2882  	// Change the oldbrk of vma to the newbrk of the munmap area
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2883  	vma_adjust_trans_huge(vma, vma->vm_start, newbrk, 0);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21 @2884  	if (vma->anon_vma) {
>                                                             ^^^^^^^^^^^^^
> This code assumes "vma->anon_vma" can be NULL.
> 
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2885  		anon_vma_lock_write(vma->anon_vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2886  		anon_vma_interval_tree_pre_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2887  	}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2888  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2889  	vma->vm_end = newbrk;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2890  	if (vma_mas_remove(&unmap, mas))
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2891  		goto mas_store_fail;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2892  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2893  	vmacache_invalidate(vma->vm_mm);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2894  	if (vma->anon_vma) {
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2895  		anon_vma_interval_tree_post_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2896  		anon_vma_unlock_write(vma->anon_vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2897  	}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2898  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2899  	unmap_pages = vma_pages(&unmap);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2900  	if (unmap.vm_flags & VM_LOCKED) {
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2901  		mm->locked_vm -= unmap_pages;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2902  		munlock_vma_pages_range(&unmap, newbrk, oldbrk);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2903  	}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2904  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2905  	mmap_write_downgrade(mm);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2906  	unmap_region(mm, &unmap, vma, newbrk, oldbrk);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2907  	/* Statistics */
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2908  	vm_stat_account(mm, unmap.vm_flags, -unmap_pages);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2909  	if (unmap.vm_flags & VM_ACCOUNT)
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2910  		vm_unacct_memory(unmap_pages);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2911  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2912  munmap_full_vma:
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2913  	validate_mm_mt(mm);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2914  	return ret;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2915  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2916  mas_store_fail:
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2917  	vma->vm_end = oldbrk;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2918  	anon_vma_interval_tree_post_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21 @2919  	anon_vma_unlock_write(vma->anon_vma);
>                                                                               ^^^^^^^^^^^^^
> Unchecked dereference inside function call.
> 
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2920  	return -ENOMEM;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2921  }
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2922  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2923  /*
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2924   * do_brk_flags() - Increase the brk vma if the flags match.
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2925   * @mas: The maple tree state.
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2926   * @addr: The start address
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2927   * @len: The length of the increase
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2928   * @vma: The vma,
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2929   * @flags: The VMA Flags
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2930   *
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2931   * Extend the brk VMA from addr to addr + len.  If the VMA is NULL or the flags
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2932   * do not match then create a new anonymous VMA.  Eventually we may be able to
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2933   * do some brk-specific accounting here.
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2934   */
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2935  static int do_brk_flags(struct ma_state *mas, struct vm_area_struct **brkvma,
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2936  			unsigned long addr, unsigned long len,
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2937  			unsigned long flags)
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2938  {
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2939  	struct mm_struct *mm = current->mm;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2940  	struct vm_area_struct *prev = NULL, *vma;
> 3a459756810912 Kirill Korotaev       2006-09-07  2941  	int error;
> ff68dac6d65cd1 Gaowei Pu             2019-11-30  2942  	unsigned long mapped_addr;
> d25a147c68d737 Liam R. Howlett       2020-07-24  2943  	validate_mm_mt(mm);
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2944  
> 16e72e9b30986e Denys Vlasenko        2017-02-22  2945  	/* Until we need other flags, refuse anything except VM_EXEC. */
> 16e72e9b30986e Denys Vlasenko        2017-02-22  2946  	if ((flags & (~VM_EXEC)) != 0)
> 16e72e9b30986e Denys Vlasenko        2017-02-22  2947  		return -EINVAL;
> 16e72e9b30986e Denys Vlasenko        2017-02-22  2948  	flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
> 3a459756810912 Kirill Korotaev       2006-09-07  2949  
> ff68dac6d65cd1 Gaowei Pu             2019-11-30  2950  	mapped_addr = get_unmapped_area(NULL, addr, len, 0, MAP_FIXED);
> ff68dac6d65cd1 Gaowei Pu             2019-11-30  2951  	if (IS_ERR_VALUE(mapped_addr))
> ff68dac6d65cd1 Gaowei Pu             2019-11-30  2952  		return mapped_addr;
> 3a459756810912 Kirill Korotaev       2006-09-07  2953  
> 363ee17f0f405f Davidlohr Bueso       2014-01-21  2954  	error = mlock_future_check(mm, mm->def_flags, len);
> 363ee17f0f405f Davidlohr Bueso       2014-01-21  2955  	if (error)
> 363ee17f0f405f Davidlohr Bueso       2014-01-21  2956  		return error;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2957  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2958  	/* Check against address space limits by the changed size */
> 84638335900f19 Konstantin Khlebnikov 2016-01-14  2959  	if (!may_expand_vm(mm, flags, len >> PAGE_SHIFT))
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2960  		return -ENOMEM;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2961  
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2962  	if (mm->map_count > sysctl_max_map_count)
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2963  		return -ENOMEM;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2964  
> 191c542442fdf5 Al Viro               2012-02-13  2965  	if (security_vm_enough_memory_mm(mm, len >> PAGE_SHIFT))
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2966  		return -ENOMEM;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2967  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2968  	mas->last = addr + len - 1;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2969  	if (*brkvma) {
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2970  		vma = *brkvma;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2971  		/* Expand the existing vma if possible; almost never a singular
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2972  		 * list, so this will almost always fail. */
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2973  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2974  		if ((!vma->anon_vma ||
>                                                                      ^^^^^^^^^^^^^^
> Check for NULL
> 
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2975  		     list_is_singular(&vma->anon_vma_chain)) &&
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2976  		     ((vma->vm_flags & ~VM_SOFTDIRTY) == flags)){
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2977  			mas->index = vma->vm_start;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2978  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2979  			vma_adjust_trans_huge(vma, addr, addr + len, 0);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21 @2980  			if (vma->anon_vma) {
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2981  				anon_vma_lock_write(vma->anon_vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2982  				anon_vma_interval_tree_pre_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2983  			}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2984  			vma->vm_end = addr + len;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2985  			vma->vm_flags |= VM_SOFTDIRTY;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2986  			if (mas_store_gfp(mas, vma, GFP_KERNEL))
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2987  				goto mas_mod_fail;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2988  			if (vma->anon_vma) {
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2989  				anon_vma_interval_tree_post_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2990  				anon_vma_unlock_write(vma->anon_vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2991  			}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2992  			khugepaged_enter_vma_merge(vma, flags);
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2993  			goto out;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2994  		}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2995  		prev = vma;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2996  	}
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2997  	mas->index = addr;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  2998  	mas_walk(mas);
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  2999  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3000  	/* create a vma struct for an anonymous mapping */
> 490fc053865c9c Linus Torvalds        2018-07-21  3001  	vma = vm_area_alloc(mm);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3002  	if (!vma)
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3003  		goto vma_alloc_fail;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  3004  
> bfd40eaff5abb9 Kirill A. Shutemov    2018-07-26  3005  	vma_set_anonymous(vma);
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  3006  	vma->vm_start = addr;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  3007  	vma->vm_end = addr + len;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3008  	vma->vm_pgoff = addr >> PAGE_SHIFT;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  3009  	vma->vm_flags = flags;
> 3ed75eb8f1cd89 Coly Li               2007-10-18  3010  	vma->vm_page_prot = vm_get_page_prot(flags);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3011  	if (vma_mas_store(vma, mas))
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3012  		goto mas_store_fail;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3013  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3014  	if (!prev)
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3015  		prev = mas_prev(mas, 0);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3016  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3017  	__vma_link_list(mm, vma, prev);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3018  	mm->map_count++;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3019  	*brkvma = vma;
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  3020  out:
> 3af9e859281bda Eric B Munson         2010-05-18  3021  	perf_event_mmap(vma);
> ^1da177e4c3f41 Linus Torvalds        2005-04-16  3022  	mm->total_vm += len >> PAGE_SHIFT;
> 84638335900f19 Konstantin Khlebnikov 2016-01-14  3023  	mm->data_vm += len >> PAGE_SHIFT;
> 128557ffe147c2 Michel Lespinasse     2013-02-22  3024  	if (flags & VM_LOCKED)
> ba470de43188cd Rik van Riel          2008-10-18  3025  		mm->locked_vm += (len >> PAGE_SHIFT);
> d9104d1ca96624 Cyrill Gorcunov       2013-09-11  3026  	vma->vm_flags |= VM_SOFTDIRTY;
> d25a147c68d737 Liam R. Howlett       2020-07-24  3027  	validate_mm_mt(mm);
> 5d22fc25d4fc80 Linus Torvalds        2016-05-27  3028  	return 0;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3029  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3030  mas_store_fail:
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3031  	vm_area_free(vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3032  vma_alloc_fail:
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3033  	vm_unacct_memory(len >> PAGE_SHIFT);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3034  	return -ENOMEM;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3035  
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3036  mas_mod_fail:
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3037  	vma->vm_end = addr;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3038  	anon_vma_interval_tree_post_update_vma(vma);
> 5b05486ddd0127 Liam R. Howlett       2020-09-21 @3039  	anon_vma_unlock_write(vma->anon_vma);
>                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Unchecked
> 
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3040  	return -ENOMEM;
> 5b05486ddd0127 Liam R. Howlett       2020-09-21  3041  
> 
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
