Message-ID: <c12f642f-0f04-5a58-0966-41cbeb74c066@suse.cz>
Date: Mon, 7 Jun 2021 12:12:04 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Faiyaz Mohammed <faiyazm@...eaurora.org>, cl@...ux.com,
penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, greg@...ah.com, glittao@...il.com
Cc: vinmenon@...eaurora.org
Subject: Re: [PATCH v10] mm: slub: move sysfs slab alloc/free interfaces to
debugfs
On 6/6/21 6:14 PM, Faiyaz Mohammed wrote:
> The alloc_calls and free_calls implementation in sysfs has two issues:
> one is the PAGE_SIZE limitation of sysfs, and the other is that it does
> not adhere to the "one value per file" rule.
>
> To overcome these issues, move the alloc_calls and free_calls
> implementation to debugfs.
>
> The debugfs entry for a cache will be created if the SLAB_STORE_USER flag is set.
>
> Rename the alloc_calls/free_calls files to alloc_traces/free_traces,
> to be in line with what they do.
>
> Signed-off-by: Faiyaz Mohammed <faiyazm@...eaurora.org>
> ---
> mm/slab.h | 8 ++
> mm/slab_common.c | 2 +
> mm/slub.c | 292 +++++++++++++++++++++++++++++++++++++------------------
> 3 files changed, 209 insertions(+), 93 deletions(-)
>
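
FWIW, with this the traces end up in debugfs rather than sysfs, so they would
be read with something like

  cat /sys/kernel/debug/slab/<cache name>/alloc_traces

(assuming debugfs is mounted at the usual /sys/kernel/debug and the per-cache
directory is named after the cache).
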
...
> +static int slab_debug_trace_open(struct inode *inode, struct file *filep)
> +{
> +
> + struct kmem_cache_node *n;
> + enum track_item alloc;
> + int node;
> + struct loc_track *t = __seq_open_private(filep, &slab_debugfs_sops,
> + sizeof(struct loc_track));
> + struct kmem_cache *s = file_inode(filep)->i_private;
> +
> + if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
> + alloc = TRACK_ALLOC;
^ extra space in the indentation here?
> + else
> + alloc = TRACK_FREE;
same here
> +
> + if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
> + pr_err("Out of memory\n");
Hm, I would remove this. It doesn't print any context, so it's not useful for
letting users know where/why we ran out of memory. Also, if a GFP_KERNEL
allocation fails, there will be a big warning including a stacktrace from the
page allocator anyway. A sketch of what I mean is below.
> + return -ENOMEM;
> + }
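
I.e. just something like this (untested sketch, only to illustrate; everything
else stays as in your patch):

        if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL))
                return -ENOMEM;
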
> +
> + /* Push back cpu slabs */
> + flush_all(s);
> +
> + for_each_kmem_cache_node(s, node, n) {
> + unsigned long flags;
> + struct page *page;
> +
> + if (!atomic_long_read(&n->nr_slabs))
> + continue;
> +
> + spin_lock_irqsave(&n->list_lock, flags);
> + list_for_each_entry(page, &n->partial, slab_list)
> + process_slab(t, s, page, alloc);
> + list_for_each_entry(page, &n->full, slab_list)
> + process_slab(t, s, page, alloc);
> + spin_unlock_irqrestore(&n->list_lock, flags);
The indentation is off here, but at least this is not Python, so it's just a visual flaw :)
> + }
> +
> + return 0;
> +}
> +
> +static int slab_debug_trace_release(struct inode *inode, struct file *file)
> +{
> + struct seq_file *seq = file->private_data;
> + struct loc_track *t = seq->private;
> +
> + free_loc_track(t);
> + kfree(seq->private);
> + seq->private = NULL;
> + return seq_release(inode, file);
You can call seq_release_private() instead and deal only with free_loc_track() here.
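Something like this (untested sketch; seq_release_private() kfrees seq->private
and then calls seq_release() for you):

static int slab_debug_trace_release(struct inode *inode, struct file *file)
{
        struct seq_file *seq = file->private_data;
        struct loc_track *t = seq->private;

        /* frees only the internal buffer; the loc_track itself is kfreed below */
        free_loc_track(t);
        return seq_release_private(inode, file);
}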
Thanks!
Vlastimil