Message-ID: <CAB=+i9QaA+ewMeGM9ngkn06ag2HFoJqCRhmfG4qP-_G3Gv5DTQ@mail.gmail.com>
Date:   Fri, 18 Aug 2023 20:47:37 +0900
From:   Hyeonggon Yoo <42.hyeyoo@...il.com>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Christoph Lameter <cl@...ux.com>,
        David Rientjes <rientjes@...gle.com>,
        Pekka Enberg <penberg@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, patches@...ts.linux.dev
Subject: Re: [RFC v1 1/5] mm, slub: fix bulk alloc and free stats

On Tue, Aug 8, 2023 at 6:53 PM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> The SLUB sysfs stats enabled by CONFIG_SLUB_STATS have two deficiencies
> identified with regard to bulk alloc/free operations:
>
> - Bulk allocations from cpu freelist are not counted. Add the
>   ALLOC_FASTPATH counter there.
>
> - Bulk fastpath freeing will count a list of multiple objects with a
>   single FREE_FASTPATH inc. Add a stat_add() variant to count them all.
>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> ---
>  mm/slub.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index e3b5d5c0eb3a..a9437d48840c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -341,6 +341,14 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
>  #endif
>  }
>
> +static inline void stat_add(const struct kmem_cache *s, enum stat_item si, int v)
> +{
> +#ifdef CONFIG_SLUB_STATS
> +       raw_cpu_add(s->cpu_slab->stat[si], v);
> +#endif
> +}
> +
> +
>  /*
>   * Tracks for which NUMA nodes we have kmem_cache_nodes allocated.
>   * Corresponds to node_state[N_NORMAL_MEMORY], but can temporarily
> @@ -3776,7 +3784,7 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
>
>                 local_unlock(&s->cpu_slab->lock);
>         }
> -       stat(s, FREE_FASTPATH);
> +       stat_add(s, FREE_FASTPATH, cnt);

Should the bulk free slowpath also be counted in the same way?
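
Something like this, perhaps (untested sketch on my side, assuming
__slab_free() is where the detached list ends up together with its 'cnt'):

-	stat(s, FREE_SLOWPATH);
+	stat_add(s, FREE_SLOWPATH, cnt);

so that the slowpath counter also reflects the number of objects freed
rather than the number of calls.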

>  }
>  #else /* CONFIG_SLUB_TINY */
>  static void do_slab_free(struct kmem_cache *s,
> @@ -3978,6 +3986,7 @@ static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
>                 c->freelist = get_freepointer(s, object);
>                 p[i] = object;
>                 maybe_wipe_obj_freeptr(s, p[i]);
> +               stat(s, ALLOC_FASTPATH);
>         }
>         c->tid = next_tid(c->tid);
>         local_unlock_irqrestore(&s->cpu_slab->lock, irqflags);
> --
> 2.41.0
>
