Message-ID: <YMsLtseEHC8dWwag@hirez.programming.kicks-ass.net>
Date: Thu, 17 Jun 2021 10:45:42 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: Nicholas Piggin <npiggin@...il.com>, x86@...nel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...el.com>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH 4/8] membarrier: Make the post-switch-mm barrier explicit

On Wed, Jun 16, 2021 at 11:41:19AM -0700, Andy Lutomirski wrote:
> mmgrab() and mmdrop() would be better if they were not full barriers. As a
> trivial optimization, mmgrab() could use a relaxed atomic and mmdrop()
> could use a release on architectures that have these operations.

mmgrab() *is* relaxed, mmdrop() is a full barrier but could trivially be
made weaker once membarrier stops caring about it.
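
For context, the current helpers look roughly like this (a sketch of
include/linux/sched/mm.h circa v5.13; atomic_inc() has no ordering
requirement, which is why mmgrab() is relaxed, while atomic_dec_and_test()
implies a full barrier):

static inline void mmgrab(struct mm_struct *mm)
{
	atomic_inc(&mm->mm_count);
}

static inline void mmdrop(struct mm_struct *mm)
{
	/* Full barrier; membarrier currently relies on this. */
	if (unlikely(atomic_dec_and_test(&mm->mm_count)))
		__mmdrop(mm);
}

The weakened variant would then be: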
static inline void mmdrop(struct mm_struct *mm)
{
	/* RELEASE: order all prior accesses to the mm before the decrement. */
	unsigned int val = atomic_dec_return_release(&mm->mm_count);

	if (unlikely(!val)) {
		/* Provide REL+ACQ ordering for free() */
		smp_acquire__after_ctrl_dep();
		__mmdrop(mm);
	}
}
It's slightly less optimal because it can't use the CPU flags from the
decrement (e.g. on x86, atomic_dec_and_test() just tests the zero flag set
by the LOCK'ed decrement, whereas atomic_dec_return_release() has to
produce the new value). Or convert the whole thing to refcount_t (if
appropriate), which already does something like the above.
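
For illustration, a refcount_t conversion could look like the sketch below
(hypothetical; mm_count is an atomic_t today and the conversion would need
its own audit). refcount_dec_and_test() is documented to provide release
ordering on the decrement plus an acquire on the final put, i.e. the same
REL+ACQ pattern as the open-coded version above:

static inline void mmgrab(struct mm_struct *mm)
{
	refcount_inc(&mm->mm_count);	/* no ordering, like atomic_inc() */
}

static inline void mmdrop(struct mm_struct *mm)
{
	/* REL on the decrement, ACQ on the 1->0 transition. */
	if (refcount_dec_and_test(&mm->mm_count))
		__mmdrop(mm);
}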