Date: Tue, 19 May 2020 18:30:35 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Bibo Mao <maobibo@...ngson.cn>
Cc: Thomas Bogendoerfer <tsbogend@...ha.franken.de>, Jiaxun Yang <jiaxun.yang@...goat.com>,
	Huacai Chen <chenhc@...ote.com>, Paul Burton <paulburton@...nel.org>,
	Dmitry Korotin <dkorotin@...ecomp.com>, Philippe Mathieu-Daudé <f4bug@...at.org>,
	Stafford Horne <shorne@...il.com>, Steven Price <steven.price@....com>,
	Anshuman Khandual <anshuman.khandual@....com>, linux-mips@...r.kernel.org,
	linux-kernel@...r.kernel.org, Mike Rapoport <rppt@...ux.ibm.com>,
	Sergei Shtylyov <sergei.shtylyov@...entembedded.com>, "Maciej W. Rozycki" <macro@....com>,
	linux-mm@...ck.org, David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v4 3/4] mm/memory.c: Add memory read privilege on page fault handling

On Tue, 19 May 2020 18:03:29 +0800 Bibo Mao <maobibo@...ngson.cn> wrote:

> Here add the pte_sw_mkyoung function to make pages readable on the MIPS
> platform during page fault handling. This patch improves page fault
> latency by about 10% on my MIPS machine with the lmbench lat_pagefault
> case.
>
> It is a no-op function on other arches, so there is no negative
> influence on those architectures.
>
> --- a/arch/mips/include/asm/pgtable.h
> +++ b/arch/mips/include/asm/pgtable.h
> @@ -414,6 +414,8 @@ static inline pte_t pte_mkyoung(pte_t pte)
>  	return pte;
>  }
>
> +#define pte_sw_mkyoung	pte_mkyoung
> +
>  #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
>  static inline int pte_huge(pte_t pte)	{ return pte_val(pte) & _PAGE_HUGE; }
>
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -227,6 +227,21 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
>  }
>  #endif
>
> +/*
> + * On some architectures the hardware does not set the page access bit when a
> + * memory page is accessed; it is the responsibility of software to set this
> + * bit, which incurs an extra page fault penalty to track page accesses. As an
> + * optimization, the access bit can be set during the whole page fault flow on
> + * these arches.
> + * To differentiate it from the pte_mkyoung macro, this macro is used on
> + * platforms where software maintains the page access bit.
> + */
> +#ifndef pte_sw_mkyoung
> +static inline pte_t pte_sw_mkyoung(pte_t pte)
> +{
> +	return pte;
> +}
> +#endif

Yup, that's neat enough. Thanks for making this change.

It looks like all architectures include asm-generic/pgtable.h, so that's fine.

It's conventional to add a

	#define pte_sw_mkyoung pte_sw_mkyoung

immediately above the #endif there, so we don't try to implement
pte_sw_mkyoung() twice if this header gets included twice. But the header
has #ifndef _ASM_GENERIC_PGTABLE_H around the whole thing, so that should
be OK.
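For reference, the convention Andrew describes -- a #define of the same name placed right above the #endif of the generic fallback -- would look roughly like the sketch below. This is an illustration rather than the literal code from the patch: the generic fallback and the MIPS override come from the quoted diff, while the trailing "#define pte_sw_mkyoung pte_sw_mkyoung" line is the suggested addition.

	/* include/asm-generic/pgtable.h: generic fallback (sketch) */
	#ifndef pte_sw_mkyoung
	static inline pte_t pte_sw_mkyoung(pte_t pte)
	{
		return pte;	/* no-op where hardware maintains the access bit */
	}
	#define pte_sw_mkyoung	pte_sw_mkyoung	/* record that a definition now exists */
	#endif

	/* arch/mips/include/asm/pgtable.h: architecture override (from the patch) */
	#define pte_sw_mkyoung	pte_mkyoung

With the extra #define in place, any later "#ifndef pte_sw_mkyoung" test sees the name as already defined, so the static inline fallback cannot be emitted a second time; an architecture that provides its own macro before including the generic header suppresses the fallback the same way. As Andrew notes, the _ASM_GENERIC_PGTABLE_H include guard already keeps the header from being processed twice, so the extra #define is a belt-and-suspenders convention in this case.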