Message-Id: <20200515134046.cf107c6a13b9604c46ad71b8@linux-foundation.org>
Date:   Fri, 15 May 2020 13:40:46 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Bibo Mao <maobibo@...ngson.cn>
Cc:     Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
        Jiaxun Yang <jiaxun.yang@...goat.com>,
        Huacai Chen <chenhc@...ote.com>,
        Paul Burton <paulburton@...nel.org>,
        Dmitry Korotin <dkorotin@...ecomp.com>,
        Philippe Mathieu-Daudé <f4bug@...at.org>,
        Stafford Horne <shorne@...il.com>,
        Steven Price <steven.price@....com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        linux-mips@...r.kernel.org, linux-kernel@...r.kernel.org,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Sergei Shtylyov <sergei.shtylyov@...entembedded.com>,
        "Maciej W. Rozycki" <macro@....com>, linux-mm@...ck.org
Subject: Re: [PATCH 2/3] mm/memory.c: Update local TLB if PTE entry exists

On Fri, 15 May 2020 12:10:08 +0800 Bibo Mao <maobibo@...ngson.cn> wrote:

> If two threads hit a page fault on the same page, one thread
> updates the PTE entry and its local TLB; the other can then update
> its local TLB as well, rather than giving up and taking the page
> fault again.
>
> ...
>
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1770,8 +1770,8 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>  			}
>  			entry = pte_mkyoung(*pte);
>  			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> -			if (ptep_set_access_flags(vma, addr, pte, entry, 1))
> -				update_mmu_cache(vma, addr, pte);
> +			ptep_set_access_flags(vma, addr, pte, entry, 1);
> +			update_mmu_cache(vma, addr, pte);

Presumably these changes mean that other architectures will run
update_mmu_cache() more frequently than they used to.  How much more
frequently, and what will be the impact of this change?  (Please fully
explain all this in the changelog).
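
For what it's worth, the behavioural difference in that hunk boils
down to the following (a simplified sketch of the quoted insert_pfn()
fragment, not the full upstream context):

	/* Before: notify the arch about the new PTE (e.g. preload the
	 * local TLB) only when the access flags actually changed. */
	if (ptep_set_access_flags(vma, addr, pte, entry, 1))
		update_mmu_cache(vma, addr, pte);

	/* After: notify unconditionally, so every trip through this
	 * path calls update_mmu_cache(), changed PTE or not. */
	ptep_set_access_flags(vma, addr, pte, entry, 1);
	update_mmu_cache(vma, addr, pte);

On architectures where update_mmu_cache() is a no-op this is free, but
where it does real work the extra calls are what need quantifying.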

>  		}
>  		goto out_unlock;
>  	}
>
> ...
>
> @@ -2463,7 +2462,8 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
>  		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
>  		locked = true;
>  		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> -			/* The PTE changed under us. Retry page fault. */
> +			/* The PTE changed under us, update local tlb */
> +			pdate_mmu_cache(vma, addr, vmf->pte);

Missing a 'u' there.  Which tells me this patch isn't the one you
tested!
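
Presumably that line was meant to be

		update_mmu_cache(vma, addr, vmf->pte);

matching the first hunk, so the local TLB gets refreshed before this
path bails out.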

>  			ret = false;
>  			goto pte_unlock;
>  		}
>
> ...
>
