Message-ID: <20201201162847.654f3013@gandalf.local.home>
Date: Tue, 1 Dec 2020 16:28:47 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Axel Rasmussen <axelrasmussen@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Chinwen Chang <chinwen.chang@...iatek.com>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
David Rientjes <rientjes@...gle.com>,
Davidlohr Bueso <dbueso@...e.de>,
Ingo Molnar <mingo@...hat.com>, Jann Horn <jannh@...gle.com>,
Laurent Dufour <ldufour@...ux.ibm.com>,
Michel Lespinasse <walken@...gle.com>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Vlastimil Babka <vbabka@...e.cz>,
Yafang Shao <laoar.shao@...il.com>, davem@...emloft.net,
dsahern@...nel.org, gregkh@...uxfoundation.org, kuba@...nel.org,
liuhangbin@...il.com, tj@...nel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v2] mm: mmap_lock: fix use-after-free race and css ref
leak in tracepoints
On Tue, 1 Dec 2020 12:32:49 -0800
Axel Rasmussen <axelrasmussen@...gle.com> wrote:
> +/* Called with reg_lock held. */
The above comment is redundant, as the lockdep_is_held() below also
suggests that it is ;-)
> +static void free_memcg_path_bufs(void)
> +{
> + int cpu;
> + char *old;
> +
> + for_each_possible_cpu(cpu) {
> + old = rcu_dereference_protected(per_cpu(memcg_path_buf, cpu),
> +					lockdep_is_held(&reg_lock));
> + if (old == NULL)
> + break;
Hmm, what if the topology of the system has missing CPU numbers (I
believe this is possible on some systems)?
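If that can happen, skipping instead of stopping would be safer; just a
sketch of what I mean:

		if (old == NULL)
			continue;

so a hole in the possible-CPU numbering doesn't leave the buffers of the
later CPUs allocated. (The two-loop version I suggest below drops the
check entirely, since kfree(NULL) is a no-op.)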
> + rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), NULL);
> + /* Wait for inflight memcg_path_buf users to finish. */
> + synchronize_rcu();
Please break this up into two loops. You will need another array that is
created in the trace_mmap_lock_reg() function:
static char **path_holders;

trace_mmap_lock_reg()
{
	[..]
	path_holders = kmalloc(num_possible_cpus() * sizeof(*path_holders), GFP_KERNEL);
	[..]
}
Then this function can be:
static void free_memcg_path_bufs(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		path_holders[cpu] = rcu_dereference_protected(per_cpu(memcg_path_buf, cpu),
							      lockdep_is_held(&reg_lock));
		rcu_assign_pointer(per_cpu(memcg_path_buf, cpu), NULL);
	}

	/* Wait for inflight memcg_path_buf users to finish. */
	synchronize_rcu();

	for_each_possible_cpu(cpu) {
		kfree(path_holders[cpu]);
	}

	kfree(path_holders);
	path_holders = NULL;
}
Otherwise, if you have a machine with 128 possible CPUs, doing 128
synchronize_rcu()s is going to be expensive!
> + kfree(old);
> + }
> +}
>
> static inline char *get_memcg_path_buf(void)
> {
> + char *buf;
> int idx;
>
> + rcu_read_lock();
The caller of get_mm_memcg_path() has preemption disabled, which is also
now an RCU lock. So the rcu_read_lock() is somewhat redundant.
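If you want to lean on that instead of nesting another rcu_read_lock(),
something along these lines (just a sketch, assuming every caller of
get_mm_memcg_path() really does have preemption disabled) keeps the
lockdep coverage:

	buf = rcu_dereference_sched(*this_cpu_ptr(&memcg_path_buf));

rcu_dereference_sched() documents the "preemption disabled" assumption
and lets CONFIG_PROVE_RCU check it.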
Oh, and looking at the original patch:
+ memcg_path != NULL ? memcg_path : "", \
The above could be shortened to:
memcg_path ? : "",
gcc has a trick with "? :": if there's nothing between the "?" and the
":", it uses the tested expression itself as the result when that is not
zero or NULL.
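For instance (a GNU C extension, but one the kernel already uses), the
line would simply become:

	memcg_path ? : "", \

with the small bonus that memcg_path is only evaluated once.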
-- Steve
> + buf = rcu_dereference(*this_cpu_ptr(&memcg_path_buf));
> + if (buf == NULL)
> + return NULL;
> idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
> MEMCG_PATH_BUF_SIZE;
> - return &this_cpu_read(memcg_path_buf)[idx];
> + return &buf[idx];
> }