Date: Wed, 20 Dec 2023 09:53:43 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: "jiajun.xie" <jiajun.xie.sh@...il.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] mm: fix unmap_mapping_range high bits shift bug

On Wed, 20 Dec 2023 13:28:39 +0800 "jiajun.xie" <jiajun.xie.sh@...il.com> wrote:

> From: Jiajun Xie <jiajun.xie.sh@...il.com>
> 
> The bug happens when the highest bit of holebegin is 1. Suppose
> holebegin is 0x8000000111111000; after the shift, hba would be
> 0xfff8000000111111, so vma_interval_tree_foreach would fail to
> look it up or return the wrong result.
> 
> Example of the erroneous call sequence:
> - mmap(..., offset=0x8000000111111000)
>   |- syscall(mmap, ... unsigned long, off):
>      |- ksys_mmap_pgoff( ... , off >> PAGE_SHIFT);
> 
>   Here pgoff is correctly shifted to 0x8000000111111, but
>   passing 0x8000000111111000 as holebegin to unmap then
>   produces the wrong result, as shown below:
> 
> - unmap_mapping_range(..., loff_t const holebegin)
>   |- pgoff_t hba = holebegin >> PAGE_SHIFT;
>           /* hba = 0xfff8000000111111 unexpectedly */
> 
> Turning holebegin to unsigned first fixes the bug.
> 

Thanks.  Are you able to describe the runtime effects of this
(obviously bad, but it's good to spell it out) and under what
circumstances it occurs?
