Date:   Wed, 2 Feb 2022 09:14:19 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Mina Almasry <almasrymina@...gle.com>,
        Michal Hocko <mhocko@...e.com>, Peter Xu <peterx@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Shuah Khan <shuah@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v2 1/3] mm: enable MADV_DONTNEED for hugetlb mappings

On 02.02.22 02:40, Mike Kravetz wrote:
> MADV_DONTNEED is currently disabled for hugetlb mappings.  This
> certainly makes sense in shared file mappings as the pagecache maintains
> a reference to the page and it will never be freed.  However, it could
> be useful to unmap and free pages in private mappings.
> 
> The only thing preventing MADV_DONTNEED from working on hugetlb mappings
> is a check in can_madv_lru_vma().  To allow support for hugetlb mappings
> create and use a new routine madvise_dontneed_free_valid_vma() that will
> allow hugetlb mappings.  Also, before calling zap_page_range in the
> DONTNEED case align start and size to huge page size for hugetlb vmas.
> madvise only requires PAGE_SIZE alignment, but the hugetlb unmap routine
> requires huge page size alignment.
> 
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
>  mm/madvise.c | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 5604064df464..7ae891e030a4 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -796,10 +796,30 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
>  static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
>  					unsigned long start, unsigned long end)
>  {
> +	/*
> +	 * start and size (end - start) must be huge page size aligned
> +	 * for hugetlb vmas.
> +	 */
> +	if (is_vm_hugetlb_page(vma)) {
> +		struct hstate *h = hstate_vma(vma);
> +
> +		start = ALIGN_DOWN(start, huge_page_size(h));
> +		end = ALIGN(end, huge_page_size(h));

So you effectively extend the range silently. IIUC, if someone zapped a
4k range you would implicitly zap a whole 2M page and effectively zero
out more data than requested.
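
To make that concrete: with the patch as posted, something like the
following (minimal userspace sketch, assuming 2 MiB hugetlb pages are
available and reserved) would wipe p[0] even though only the 4k at
p + 4096 was passed to madvise():

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t huge = 2UL * 1024 * 1024;
	char *p = mmap(NULL, huge, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	p[0] = 1;
	p[4096] = 1;

	/* The request covers only one 4k subrange of the 2M page ... */
	if (madvise(p + 4096, 4096, MADV_DONTNEED))
		perror("madvise");

	/* ... but after the silent alignment the whole page is zapped. */
	printf("p[0] = %d\n", p[0]);

	munmap(p, huge);
	return 0;
}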


Looking at do_madvise(), we:
(1) reject start addresses that are not page-aligned
(2) shrink lengths that are not page-aligned and refuse if the result is 0

The man page documents (1) but doesn't really document (2).
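
Spelled out (just a paraphrase of (1) and (2) as described above, not a
verbatim copy of the do_madvise() code):

	if (!PAGE_ALIGNED(start))
		return -EINVAL;			/* (1) */
	len = len_in & PAGE_MASK;		/* (2) trim to a page boundary */
	if (len_in && !len)
		return -EINVAL;			/* refuse if nothing is left */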

Naturally I'd have assumed that we apply the same logic to huge page
sizes and document it in the man page accordingly.


Why did you decide to extend the range? I'd assume MADV_REMOVE behaves
like FALLOC_FL_PUNCH_HOLE:
  "Within the specified range, partial filesystem blocks are zeroed, and
   whole filesystem blocks are removed from the file.  After a
   successful call, subsequent reads from  this  range will return
   zeros."
So we don't "discard more than requested".
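
E.g. (userspace sketch; "testfile" is just a placeholder, and the
interesting case is a filesystem whose block size is larger than the
punched range):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Only the requested 4k at offset 4096 is affected; blocks that
	 * are merely partially covered are zeroed, not deallocated.
	 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      4096, 4096))
		perror("fallocate");

	close(fd);
	return 0;
}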


I see the following possible alternatives:
(a) Fail if the range is not aligned
-> Clear semantics
(b) Fail if the start is not aligned, shrink the end if required
-> Same rules as for PAGE_SIZE
(c) Zero out the requested part
-> Same semantics as FALLOC_FL_PUNCH_HOLE.

My preference would be (a), properly documenting it in the man page.
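
For (a), something like the following in the hugetlb special-casing
(just a sketch, the exact placement is up to you):

	if (is_vm_hugetlb_page(vma)) {
		struct hstate *h = hstate_vma(vma);

		if (!IS_ALIGNED(start, huge_page_size(h)) ||
		    !IS_ALIGNED(end - start, huge_page_size(h)))
			return -EINVAL;
	}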

-- 
Thanks,

David / dhildenb
