Date:   Wed, 22 Sep 2021 12:40:09 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Peter Xu <peterx@...hat.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Hugh Dickins <hughd@...gle.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH 1/3] mm/smaps: Fix shmem pte hole swap calculation

On 9/17/21 18:47, Peter Xu wrote:
> The shmem swap calculation for privately writable mappings uses the wrong
> parameters, as spotted by Vlastimil.  Fix them.  The bug was introduced in
> commit 48131e03ca4e, when shmem_swap_usage() was reworked into
> shmem_partial_swap_usage().
> 
> Test program:
> 
> ==================
> 
> #define _GNU_SOURCE
> #include <assert.h>
> #include <stdio.h>
> #include <unistd.h>
> #include <sys/mman.h>
> 
> #define SIZE_2M    (2UL << 20)
> 
> int main(void)
> {
>     char *buffer, *p;
>     int i, fd;
> 
>     fd = memfd_create("test", 0);
>     assert(fd > 0);
> 
>     /* isize==2M*3, fill in pages, swap them out */
>     ftruncate(fd, SIZE_2M * 3);
>     buffer = mmap(NULL, SIZE_2M * 3, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>     assert(buffer != MAP_FAILED);
>     for (i = 0, p = buffer; i < SIZE_2M * 3 / 4096; i++) {
>         *p = 1;
>         p += 4096;
>     }
>     madvise(buffer, SIZE_2M * 3, MADV_PAGEOUT);
>     munmap(buffer, SIZE_2M * 3);
> 
>     /*
>      * Remap with a private+writable mapping over part of the inode (<= 2M*3).
>      * The size must also be >= 2M*2 to make sure there is a none pmd, so
>      * that smaps_pte_hole() will be triggered.
>      */
>     buffer = mmap(NULL, SIZE_2M * 2, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
>     printf("pid=%d, buffer=%p\n", getpid(), buffer);
> 
>     /* Check /proc/$PID/smaps_rollup, should see 4MB swap */
>     sleep(1000000);
> }
> ==================
> 
> Before the patch, smaps_rollup shows <4MB of swap, and the number is random
> depending on the alignment of the buffer returned by mmap().  After this
> patch, it shows 4MB.
> 
> Fixes: 48131e03ca4e ("mm, proc: reduce cost of /proc/pid/smaps for unpopulated shmem mappings")
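
(To reproduce the numbers: the test prints its pid, and the figures above are
the Swap: line of /proc/<pid>/smaps_rollup while the program sleeps.  A
throwaway helper along these lines can read it as well -- purely illustrative,
not part of the patch:)

/* Print the Swap: line from /proc/<pid>/smaps_rollup; pid is argv[1]. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	FILE *f;

	if (argc < 2)
		return 1;
	snprintf(path, sizeof(path), "/proc/%s/smaps_rollup", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "Swap:", 5))
			fputs(line, stdout);
	fclose(f);
	return 0;
}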

Thanks, too bad I didn't spot it when sending that patch :)

> Reported-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Peter Xu <peterx@...hat.com>

Reviewed-by: Vlastimil Babka <vbabka@...e.cz>

> ---
>  fs/proc/task_mmu.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index cf25be3e0321..2197f669e17b 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -478,9 +478,11 @@ static int smaps_pte_hole(unsigned long addr, unsigned long end,
>  			  __always_unused int depth, struct mm_walk *walk)
>  {
>  	struct mem_size_stats *mss = walk->private;
> +	struct vm_area_struct *vma = walk->vma;
>  
> -	mss->swap += shmem_partial_swap_usage(
> -			walk->vma->vm_file->f_mapping, addr, end);
> +	mss->swap += shmem_partial_swap_usage(walk->vma->vm_file->f_mapping,
> +					      linear_page_index(vma, addr),
> +					      linear_page_index(vma, end));
>  
>  	return 0;
>  }
> 
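
One note for readers less familiar with these helpers:
shmem_partial_swap_usage() takes pgoff_t page indices into the file, not
virtual addresses, and linear_page_index() does that conversion -- roughly
((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff, ignoring hugetlb.
A small userspace sketch of the arithmetic, with made-up values, showing how
far off raw addresses are when used as page indices:

#include <stdio.h>

#define PAGE_SHIFT 12UL

/* What linear_page_index() computes (simplified model, no hugetlb). */
static unsigned long page_index(unsigned long vm_start, unsigned long vm_pgoff,
				unsigned long addr)
{
	return ((addr - vm_start) >> PAGE_SHIFT) + vm_pgoff;
}

int main(void)
{
	unsigned long vm_start = 0x7f0000000000UL;	/* hypothetical VMA start */
	unsigned long vm_pgoff = 0;			/* mapping starts at file offset 0 */
	unsigned long addr = vm_start;			/* hole start */
	unsigned long end = vm_start + (4UL << 20);	/* hole end, 4MB later */

	/* Fixed code: page indices 0 and 1024 (4MB worth of 4KB pages). */
	printf("page indices:  %lu .. %lu\n",
	       page_index(vm_start, vm_pgoff, addr),
	       page_index(vm_start, vm_pgoff, end));

	/* Buggy code passed the raw addresses instead: */
	printf("raw addresses: %lu .. %lu\n", addr, end);
	return 0;
}

With the old arguments the scanned index range lies far past the end of the
file, so the pte holes contributed little or nothing to the reported swap,
which fits the <4MB numbers quoted above.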
