Date:   Sun, 10 Oct 2021 15:49:07 -0700 (PDT)
From:   David Rientjes <rientjes@...gle.com>
To:     Hyeonggon Yoo <42.hyeyoo@...il.com>
cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH] mm, slub: Use prefetchw instead of prefetch

On Fri, 8 Oct 2021, Hyeonggon Yoo wrote:

> It's certain that an object will not only be read, but also
> written after allocation.
> 

Why is it certain?  I think perhaps what you meant to say is that if we 
are doing any prefetching here, then access will benefit from prefetchw 
instead of prefetch.  But it's not "certain" that allocated memory will be 
accessed at all.
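
For reference, the generic fallbacks in include/linux/prefetch.h boil
down to the gcc builtin, roughly as in the sketch below (architectures
are free to override both with their own instructions):

	#ifndef ARCH_HAS_PREFETCH
	#define prefetch(x)	__builtin_prefetch(x)		/* rw=0: read */
	#endif

	#ifndef ARCH_HAS_PREFETCHW
	#define prefetchw(x)	__builtin_prefetch(x, 1)	/* rw=1: write */
	#endif

With rw=1 the compiler is told the line is about to be written, so a
CPU that supports it can fetch the line directly in an exclusive state
rather than a shared one.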

> Use prefetchw instead of prefetchw. On supported architectures

If we're using prefetchw instead of prefetchw, I think the diff would be 
0 lines changed :)

> like x86, it helps to invalidate the cache line when the object
> exists in other processors' caches.
> 
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
> ---
>  mm/slub.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 3d2025f7163b..2aca7523165e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -352,9 +352,9 @@ static inline void *get_freepointer(struct kmem_cache *s, void *object)
>  	return freelist_dereference(s, object + s->offset);
>  }
>  
> -static void prefetch_freepointer(const struct kmem_cache *s, void *object)
> +static void prefetchw_freepointer(const struct kmem_cache *s, void *object)
>  {
> -	prefetch(object + s->offset);
> +	prefetchw(object + s->offset);
>  }
>  
>  static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
> @@ -3195,10 +3195,9 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>  			note_cmpxchg_failure("slab_alloc", s, tid);
>  			goto redo;
>  		}
> -		prefetch_freepointer(s, next_object);
> +		prefetchw_freepointer(s, next_object);
>  		stat(s, ALLOC_FASTPATH);
>  	}
> -
>  	maybe_wipe_obj_freeptr(s, object);
>  	init = slab_want_init_on_alloc(gfpflags, s);
>  
> -- 
> 2.27.0
> 
> 
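
For illustration, a rough userspace analogue of the fast path being
patched might look like the sketch below (names and layout are made up;
this is only the pattern, not the kernel code):

	struct obj { struct obj *free_next; char payload[56]; };

	static struct obj *alloc_fast(struct obj **freelist)
	{
		struct obj *o = *freelist;

		if (!o)
			return NULL;
		*freelist = o->free_next;
		/*
		 * The next allocation will read this object's freepointer
		 * and the caller will then write into the object itself,
		 * so prefetch with write intent (rw=1).  Prefetches never
		 * fault, so a NULL freelist here is harmless.
		 */
		__builtin_prefetch(*freelist, 1);
		return o;
	}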
