Message-ID: <ZBGPmq7bv5pky4tl@kernel.org>
Date:   Wed, 15 Mar 2023 11:27:54 +0200
From:   Mike Rapoport <rppt@...nel.org>
To:     "Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc:     linux-arch@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 02/36] mm: Add generic flush_icache_pages() and
 documentation

On Wed, Mar 15, 2023 at 05:14:10AM +0000, Matthew Wilcox (Oracle) wrote:
> flush_icache_page() is deprecated but not yet removed, so add
> a range version of it.  Change the documentation to refer to
> update_mmu_cache_range() instead of update_mmu_cache().
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>

Acked-by: Mike Rapoport (IBM) <rppt@...nel.org>

> ---
>  Documentation/core-api/cachetlb.rst | 35 +++++++++++++++--------------
>  include/asm-generic/cacheflush.h    |  5 +++++
>  2 files changed, 23 insertions(+), 17 deletions(-)
> 
> diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
> index 5c0552e78c58..d4c9e2a28d36 100644
> --- a/Documentation/core-api/cachetlb.rst
> +++ b/Documentation/core-api/cachetlb.rst
> @@ -88,13 +88,13 @@ changes occur:
>  
>  	This is used primarily during fault processing.
>  
> -5) ``void update_mmu_cache(struct vm_area_struct *vma,
> -   unsigned long address, pte_t *ptep)``
> +5) ``void update_mmu_cache_range(struct vm_area_struct *vma,
> +   unsigned long address, pte_t *ptep, unsigned int nr)``
>  
> -	At the end of every page fault, this routine is invoked to
> -	tell the architecture specific code that a translation
> -	now exists at virtual address "address" for address space
> -	"vma->vm_mm", in the software page tables.
> +	At the end of every page fault, this routine is invoked to tell
> +	the architecture specific code that translations now exist
> +	in the software page tables for address space "vma->vm_mm"
> +	at virtual address "address" for "nr" consecutive pages.
>  
>  	A port may use this information in any way it so chooses.
>  	For example, it could use this event to pre-load TLB
> @@ -306,17 +306,18 @@ maps this page at its virtual address.
>  	private".  The kernel guarantees that, for pagecache pages, it will
>  	clear this bit when such a page first enters the pagecache.
>  
> -	This allows these interfaces to be implemented much more efficiently.
> -	It allows one to "defer" (perhaps indefinitely) the actual flush if
> -	there are currently no user processes mapping this page.  See sparc64's
> -	flush_dcache_page and update_mmu_cache implementations for an example
> -	of how to go about doing this.
> +	This allows these interfaces to be implemented much more
> +	efficiently.  It allows one to "defer" (perhaps indefinitely) the
> +	actual flush if there are currently no user processes mapping this
> +	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
> +	implementations for an example of how to go about doing this.
>  
> -	The idea is, first at flush_dcache_page() time, if page_file_mapping()
> -	returns a mapping, and mapping_mapped on that mapping returns %false,
> -	just mark the architecture private page flag bit.  Later, in
> -	update_mmu_cache(), a check is made of this flag bit, and if set the
> -	flush is done and the flag bit is cleared.
> +	The idea is, first at flush_dcache_page() time, if
> +	page_file_mapping() returns a mapping, and mapping_mapped on that
> +	mapping returns %false, just mark the architecture private page
> +	flag bit.  Later, in update_mmu_cache_range(), a check is made
> +	of this flag bit, and if set the flush is done and the flag bit
> +	is cleared.
>  
>  	.. important::
>  
> @@ -369,7 +370,7 @@ maps this page at its virtual address.
>    ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
>  
>  	All the functionality of flush_icache_page can be implemented in
> -	flush_dcache_page and update_mmu_cache. In the future, the hope
> +	flush_dcache_page and update_mmu_cache_range. In the future, the hope
>  	is to remove this interface completely.
>  
>  The final category of APIs is for I/O to deliberately aliased address
> diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
> index f46258d1a080..09d51a680765 100644
> --- a/include/asm-generic/cacheflush.h
> +++ b/include/asm-generic/cacheflush.h
> @@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
>  #endif
>  
>  #ifndef flush_icache_page
> +static inline void flush_icache_pages(struct vm_area_struct *vma,
> +				     struct page *page, unsigned int nr)
> +{
> +}
> +
>  static inline void flush_icache_page(struct vm_area_struct *vma,
>  				     struct page *page)
>  {
> -- 
> 2.39.2
> 
> 

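For anyone following along with the cachetlb.rst text above, here is a
minimal, illustrative sketch of the deferral scheme it describes
(modelled on what sparc64 does with the arch-private PG_arch_1 bit; the
my_arch_flush_dcache() helper and both function bodies are hypothetical,
not part of this patch):

	#include <linux/mm.h>
	#include <linux/pagemap.h>

	#define PG_dcache_dirty	PG_arch_1

	/* Hypothetical low-level flush of one page's kernel alias. */
	static void my_arch_flush_dcache(void *kaddr);

	void flush_dcache_page(struct page *page)
	{
		struct address_space *mapping = page_file_mapping(page);

		/*
		 * No user process maps this pagecache page yet: defer
		 * the flush by setting the arch-private flag instead.
		 */
		if (mapping && !mapping_mapped(mapping)) {
			set_bit(PG_dcache_dirty, &page->flags);
			return;
		}

		/* Assumes lowmem; a real port must handle highmem. */
		my_arch_flush_dcache(page_address(page));
	}

	void update_mmu_cache_range(struct vm_area_struct *vma,
				    unsigned long address, pte_t *ptep,
				    unsigned int nr)
	{
		unsigned int i;

		for (i = 0; i < nr; i++) {
			struct page *page = pte_page(ptep[i]);

			/* Do the flush deferred in flush_dcache_page(). */
			if (test_and_clear_bit(PG_dcache_dirty,
					       &page->flags))
				my_arch_flush_dcache(page_address(page));
		}
	}

The win is that pagecache pages which never gain a user mapping never
pay for the flush at all, and with the new range interface the deferred
work is batched over "nr" consecutive pages in one call.
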
-- 
Sincerely yours,
Mike.
