[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <20180720124212.7260d76d83e2b8e5e3349ea5@linux-foundation.org>
Date: Fri, 20 Jul 2018 12:42:12 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: jbaron@...mai.com, viro@...iv.linux.org.uk,
linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH -next 0/2] fs/epoll: loosen irq safety when possible
On Fri, 20 Jul 2018 10:29:54 -0700 Davidlohr Bueso <dave@...olabs.net> wrote:
> Hi,
>
> Both patches replace saving+restoring interrupts when taking the
> ep->lock (now the waitqueue lock), with just disabling local irqs.
> This shows immediate performance benefits in patch 1 for an epoll
> workload running on Xen.
I'm surprised. Is spin_lock_irqsave() significantly more expensive
than spin_lock_irq()? Relative to all the other stuff those functions
are doing? If so, how come? Some architectural thing makes
local_irq_save() much more costly than local_irq_disable()?
> The main concern we need to have with this
> sort of changes in epoll is the ep_poll_callback() which is passed
> to the wait queue wakeup and is done very often under irq context,
> this patch does not touch this call.
Yeah, these changes are scary. For the code as it stands now, and for
the code as it evolves.
I'd have more confidence if we had some warning mechanism if we run
spin_lock_irq() when IRQs are disabled, which is probably-a-bug. But
afaict we don't have that. Probably for good reasons - I wonder what
they are?
> Patches have been tested pretty heavily with the customer workload,
> microbenchmarks, ltp testcases and two high level workloads that
> use epoll under the hood: nginx and libevent benchmarks.
>
> Details are in the individual patches.
>
> Applies on top of mmotd.
Please convince me about the performance benefits?
Powered by blists - more mailing lists