Date:   Mon, 7 Feb 2022 11:32:47 +0100
From:   David Hildenbrand <david@...hat.com>
To:     Pedro Demarchi Gomes <pedrodemargomes@...il.com>
Cc:     SeongJae Park <sj@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, John Hubbard <jhubbard@...dia.com>
Subject: Re: [PATCH] mm/damon: Add option to monitor only writes

On 03.02.22 14:12, Pedro Demarchi Gomes wrote:
> When "writes" is written to /sys/kernel/debug/damon/counter_type damon will monitor only writes.
> This patch also adds the actions mergeable and unmergeable to damos schemes. These actions are used by KSM as explained in [1].
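
Side note for readers: going by the description above, userspace would
presumably drive the new knob roughly as below. The debugfs path and the
accepted "writes" string come only from the patch description -- nothing
here is merged upstream -- so treat this purely as a sketch.

/*
 * Minimal userspace sketch, assuming the debugfs interface behaves as the
 * patch description says: writing "writes" to counter_type switches DAMON
 * to write-only monitoring.  Path and semantics are from the patch under
 * review, not from an upstream kernel.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/kernel/debug/damon/counter_type";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open counter_type");
		return 1;
	}
	if (write(fd, "writes", strlen("writes")) < 0)
		perror("write counter_type");
	close(fd);
	return 0;
}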

[...]

>  
> +static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
> +{
> +	struct page *page;
> +
> +	if (!pte_write(pte))
> +		return false;
> +	if (!is_cow_mapping(vma->vm_flags))
> +		return false;
> +	if (likely(!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)))
> +		return false;
> +	page = vm_normal_page(vma, addr, pte);
> +	if (!page)
> +		return false;
> +	return page_maybe_dma_pinned(page);
> +}
> +
> +static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
> +		unsigned long addr, pmd_t *pmdp)
> +{
> +	pmd_t old, pmd = *pmdp;
> +
> +	if (pmd_present(pmd)) {
> +		/* See comment in change_huge_pmd() */
> +		old = pmdp_invalidate(vma, addr, pmdp);
> +		if (pmd_dirty(old))
> +			pmd = pmd_mkdirty(pmd);
> +		if (pmd_young(old))
> +			pmd = pmd_mkyoung(pmd);
> +
> +		pmd = pmd_wrprotect(pmd);
> +		pmd = pmd_clear_soft_dirty(pmd);
> +
> +		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> +	} else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
> +		pmd = pmd_swp_clear_soft_dirty(pmd);
> +		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> +	}
> +}
> +
> +static inline void clear_soft_dirty(struct vm_area_struct *vma,
> +		unsigned long addr, pte_t *pte)
> +{
> +	/*
> +	 * The soft-dirty tracker uses #PF-s to catch writes
> +	 * to pages, so write-protect the pte as well. See the
> +	 * Documentation/admin-guide/mm/soft-dirty.rst for full description
> +	 * of how soft-dirty works.
> +	 */
> +	pte_t ptent = *pte;
> +
> +	if (pte_present(ptent)) {
> +		pte_t old_pte;
> +
> +		if (pte_is_pinned(vma, addr, ptent))
> +			return;
> +		old_pte = ptep_modify_prot_start(vma, addr, pte);
> +		ptent = pte_wrprotect(old_pte);
> +		ptent = pte_clear_soft_dirty(ptent);
> +		ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
> +	} else if (is_swap_pte(ptent)) {
> +		ptent = pte_swp_clear_soft_dirty(ptent);
> +		set_pte_at(vma->vm_mm, addr, pte, ptent);
> +	}
> +}

Just like clear_refs, this can race against GUP-fast when detecting pinned
pages. And just like clear_refs, we're not handling PMDs properly. And
just like anything that write-protects random anon pages right now, this
does not consider O_DIRECT as it stands.
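
To make the O_DIRECT point concrete, here is a rough userspace sketch of
the kind of workload that gets hurt: an O_DIRECT read into an anonymous
buffer while the buffer's PTEs are write-protected underneath it via the
existing clear_refs soft-dirty interface. It is not a reliable reproducer
(the input file name is made up and the race window is narrow), just an
illustration of the pattern:

/*
 * Clearing soft-dirty ("4" to /proc/self/clear_refs) write-protects the
 * PTEs.  With the DMA's GUP reference still held on the pages, a later
 * CPU write can COW the buffer away from the in-flight DMA, so the read
 * data lands in a page the process no longer maps.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)

static void *clear_softdirty_and_touch(void *buf)
{
	int fd = open("/proc/self/clear_refs", O_WRONLY);

	if (fd >= 0) {
		write(fd, "4", 1);	/* clear soft-dirty: write-protects the PTEs */
		close(fd);
	}
	((volatile char *)buf)[0] = 1;	/* CPU write may now COW the DMA target */
	return NULL;
}

int main(void)
{
	void *buf;
	pthread_t t;
	int fd;

	if (posix_memalign(&buf, 4096, BUF_SIZE))	/* O_DIRECT wants alignment */
		return 1;
	memset(buf, 0, BUF_SIZE);			/* fault in the anon pages */

	fd = open("datafile", O_RDONLY | O_DIRECT);	/* hypothetical input file */
	if (fd < 0)
		return 1;

	pthread_create(&t, NULL, clear_softdirty_and_touch, buf);
	if (read(fd, buf, BUF_SIZE) < 0)		/* DMA into buf via GUP */
		perror("read");
	pthread_join(t, NULL);

	close(fd);
	free(buf);
	return 0;
}

The same pattern applies to any new interface that write-protects random
anon pages, which is what the COW issues in [1] are about.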

Fortunately, there are not too many users of clear_refs/soft-dirty
tracking out there (my search a while ago turned up no open source
users). My assumption is that your feature might see more widespread use.

Adding more random write protection before we've fixed the COW issues [1]
really makes my stomach hurt on a Monday morning.

Please, let's defer any more features that rely on write-protecting
random anon pages until we have ways in place to not corrupt random user
space.

That is:
1) Teaching the COW logic to not copy pages that are pinned -- I'm
working on that.
2) Converting O_DIRECT to use FOLL_PIN instead of FOLL_GET. John is
working on that (a rough sketch of that direction follows below).
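
For readers not following the GUP work, the difference in 2) is roughly
the pairing below: pin_user_pages_fast()/unpin_user_pages() accounts the
pages as DMA-pinned (FOLL_PIN under the hood), so page_maybe_dma_pinned()
-- as used by pte_is_pinned() in the quoted patch -- can see them, whereas
the get_user_pages_fast()/put_page() pairing O_DIRECT uses today does not.
This is only an illustrative kernel-style sketch of the calling convention
(the helper names are made up), not the actual O_DIRECT conversion:

#include <linux/mm.h>

/*
 * Illustrative only: the FOLL_PIN flavour of GUP that 2) is about.
 * pin_user_pages_fast() marks the pages as DMA-pinned internally;
 * callers do not pass FOLL_PIN themselves.
 */
static int pin_user_buffer(unsigned long uaddr, int nr_pages,
			   struct page **pages)
{
	return pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
}

static void unpin_user_buffer(struct page **pages, int nr_pages)
{
	/* Must pair with pin_user_pages_fast(); plain put_page() would be wrong. */
	unpin_user_pages(pages, nr_pages);
}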

So I'm not against this change. I'm against this change at this point in
time.

[1]
https://lore.kernel.org/all/3ae33b08-d9ef-f846-56fb-645e3b9b4c66@redhat.com/

-- 
Thanks,

David / dhildenb
