Message-ID: <CANN689F4jcm_rx3t8hwD7-F-kwJpMJRsi50ab9eh039ZzVBOQg@mail.gmail.com>
Date: Thu, 8 Oct 2020 00:40:05 -0700
From: Michel Lespinasse <walken@...gle.com>
To: Axel Rasmussen <axelrasmussen@...gle.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Daniel Jordan <daniel.m.jordan@...cle.com>,
Laurent Dufour <ldufour@...ux.ibm.com>,
Jann Horn <jannh@...gle.com>,
Chinwen Chang <chinwen.chang@...iatek.com>,
Yafang Shao <laoar.shao@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH v2 2/2] mmap_lock: add tracepoints around lock acquisition
On Wed, Oct 7, 2020 at 11:44 AM Axel Rasmussen <axelrasmussen@...gle.com> wrote:
> The goal of these tracepoints is to be able to debug lock contention
> issues. This lock is acquired on most (all?) mmap / munmap / page fault
> operations, so a multi-threaded process which does a lot of these can
> experience significant contention.
>
> We trace just before we start acquisition, when the acquisition returns
> (whether it succeeded or not), and when the lock is released (or
> downgraded). The events are broken out by lock type (read / write).
>
> The events are also broken out by memcg path. For container-based
> workloads, users often think of several processes in a memcg as a single
> logical "task", so collecting statistics at this level is useful.
>
> The end goal is to get latency information. This isn't directly included
> in the trace events. Instead, users are expected to compute the time
> between "start locking" and "acquire returned", using e.g. synthetic
> events or BPF. The benefit we get from this is simpler code.
>
> Because we use tracepoint_enabled() to decide whether or not to trace,
> this patch has effectively no overhead unless tracepoints are enabled at
> runtime. If tracepoints are enabled, there is a performance impact, but
> how much depends on exactly what e.g. the BPF program does.
>
> Signed-off-by: Axel Rasmussen <axelrasmussen@...gle.com>
Thanks for working on this.
I like that there is no overhead unless CONFIG_TRACING is set.
However, I think the __mmap_lock_traced_lock and similar functions are
the wrong level of abstraction, especially given that we are
considering switching away from rwsem as the underlying lock
implementation. Would you consider something along the following lines
instead for include/linux/mmap_lock.h?
#ifdef CONFIG_TRACING

DECLARE_TRACEPOINT(...);

void __mmap_lock_do_trace_start_locking(struct mm_struct *mm, bool write);

static inline void mmap_lock_trace_start_locking(struct mm_struct *mm,
                                                 bool write)
{
        if (tracepoint_enabled(mmap_lock_start_locking))
                __mmap_lock_do_trace_start_locking(mm, write);
}

#else

static inline void mmap_lock_trace_start_locking(struct mm_struct *mm,
                                                 bool write) {}

#endif
static inline void mmap_write_lock(struct mm_struct *mm)
{
        mmap_lock_trace_start_locking(mm, true);
        down_write(&mm->mmap_lock);
        mmap_lock_trace_acquire_returned(mm, true, true);
}
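
The same pattern extends to the lock variants that can fail, with the
third argument of acquire_returned carrying the success bit your
changelog mentions. A sketch:

static inline int mmap_write_lock_killable(struct mm_struct *mm)
{
        int ret;

        mmap_lock_trace_start_locking(mm, true);
        ret = down_write_killable(&mm->mmap_lock);
        mmap_lock_trace_acquire_returned(mm, true, ret == 0);
        return ret;
}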
I think this is more straightforward, and the
mmap_lock_trace_start_locking and similar helpers also don't depend on
the underlying lock implementation.
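
The out-of-line halves could then live in a small new file, say
mm/mmap_lock.c, next to a trace/events/mmap_lock.h header defining the
events (both file names are just my assumption):

#include <linux/mmap_lock.h>
#include <linux/mm_types.h>
#include <linux/export.h>

/* Generates the real tracepoint bodies from the event header. */
#define CREATE_TRACE_POINTS
#include <trace/events/mmap_lock.h>

void __mmap_lock_do_trace_start_locking(struct mm_struct *mm, bool write)
{
        trace_mmap_lock_start_locking(mm, write);
}
EXPORT_SYMBOL(__mmap_lock_do_trace_start_locking);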
The changes to the other files look fine to me.
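
As an aside, deriving the latency the changelog describes really is
straightforward from userspace. A minimal libbpf-style sketch, assuming
the events end up as mmap_lock/mmap_lock_start_locking and
mmap_lock/mmap_lock_acquire_returned (names taken from this patch):

#include <linux/types.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Timestamp taken at start_locking, keyed by thread id. */
struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 10240);
        __type(key, __u32);
        __type(value, __u64);
} start SEC(".maps");

SEC("tracepoint/mmap_lock/mmap_lock_start_locking")
int on_start(void *ctx)
{
        __u32 tid = (__u32)bpf_get_current_pid_tgid();
        __u64 ts = bpf_ktime_get_ns();

        bpf_map_update_elem(&start, &tid, &ts, BPF_ANY);
        return 0;
}

SEC("tracepoint/mmap_lock/mmap_lock_acquire_returned")
int on_acquired(void *ctx)
{
        __u32 tid = (__u32)bpf_get_current_pid_tgid();
        __u64 *tsp = bpf_map_lookup_elem(&start, &tid);

        if (tsp) {
                /* Wait time between the two events for this thread. */
                bpf_printk("mmap_lock wait: %llu ns",
                           bpf_ktime_get_ns() - *tsp);
                bpf_map_delete_elem(&start, &tid);
        }
        return 0;
}

char LICENSE[] SEC("license") = "GPL";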
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.