Message-ID: <20230310085527.mxezm3qzytet2duu@wittgenstein>
Date:   Fri, 10 Mar 2023 09:55:27 +0100
From:   Christian Brauner <brauner@...nel.org>
To:     Paolo Abeni <pabeni@...hat.com>
Cc:     netdev@...r.kernel.org, Soheil Hassas Yeganeh <soheil@...gle.com>,
        Al Viro <viro@...iv.linux.org.uk>,
        Carlos Maiolino <cmaiolino@...hat.com>,
        Eric Biggers <ebiggers@...nel.org>,
        Jacob Keller <jacob.e.keller@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jens Axboe <axboe@...nel.dk>, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v5] epoll: use refcount to reduce ep_mutex contention

On Thu, Mar 09, 2023 at 01:47:23PM +0100, Paolo Abeni wrote:
> On Thu, 2023-03-09 at 12:18 +0100, Christian Brauner wrote:
> > On Wed, Mar 08, 2023 at 10:51:31PM +0100, Paolo Abeni wrote:
> > > We are observing huge contention on the epmutex during an http
> > > connection/rate test:
> > > 
> > >  83.17% 0.25%  nginx            [kernel.kallsyms]         [k] entry_SYSCALL_64_after_hwframe
> > > [...]
> > >            |--66.96%--__fput
> > >                       |--60.04%--eventpoll_release_file
> > >                                  |--58.41%--__mutex_lock.isra.6
> > >                                            |--56.56%--osq_lock
> > > 
> > > The application is multi-threaded, creates a new epoll entry for
> > > each incoming connection, and does not delete it before the
> > > connection shutdown - that is, before the connection's fd close().
> > > 
> > > Many different threads compete frequently for the epmutex lock,
> > > affecting the overall performance.
> > > 
> > > To reduce the contention, this patch introduces explicit reference counting
> > > for the eventpoll struct. Each registered event acquires a reference,
> > > and references are released at ep_remove() time.
> > > 
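
(As an aside for readers following the design: below is a minimal
userspace sketch of the refcounting scheme described above. All
*_sketch names are hypothetical, and C11 atomics stand in for the
kernel's refcount_t; the real patch is considerably more involved.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct eventpoll_sketch {
	atomic_int refcount;		/* 1 for the EP file + 1 per epitem */
};

/* Memory cleanup only; mirrors the separate ep_free() helper mentioned
 * later in the commit message. */
static void ep_free_sketch(struct eventpoll_sketch *ep)
{
	free(ep);
}

static struct eventpoll_sketch *ep_alloc_sketch(void)
{
	struct eventpoll_sketch *ep = malloc(sizeof(*ep));

	if (ep)
		atomic_init(&ep->refcount, 1);	/* the EP file's own reference */
	return ep;
}

/* Taken whenever an epitem is registered on this ep. */
static void ep_get_sketch(struct eventpoll_sketch *ep)
{
	atomic_fetch_add(&ep->refcount, 1);
}

/* Returns true when the caller dropped the last reference. */
static bool ep_put_sketch(struct eventpoll_sketch *ep)
{
	if (atomic_fetch_sub(&ep->refcount, 1) == 1) {
		ep_free_sketch(ep);	/* no epmutex needed at disposal time */
		return true;
	}
	return false;
}
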
> > > The eventpoll struct is released by whoever - among the EP file close()
> > > and the monitored file close() - drops its last reference.
> > > 
> > > Additionally, this introduces a new 'dying' flag to prevent races between
> > > the EP file close() and the monitored file close().
> > > eventpoll_release_file() marks, under the f_lock spinlock, each epitem
> > > as dying before removing it, while EP file close() does not touch dying
> > > epitems.
> > > 
> > > The above is needed as both close operations could run concurrently and
> > > drop the EP reference acquired via the epitem entry. Without the above
> > > flag, the monitored file close() could reach the EP struct via the epitem
> > > list while the epitem is still listed and then try to put it after its
> > > disposal.
> > > 
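
(Continuing the sketch above with the 'dying' handshake, heavily
simplified: a pthread mutex stands in for the f_lock spinlock, and
ep->mtx plus the actual list unlinking are elided, so here the flag is
claimed symmetrically by whichever close() path gets there first. The
point is the same as in the patch: the flag, tested and set under the
lock, guarantees that the EP reference held via the epitem is dropped
exactly once.)

#include <pthread.h>

struct epitem_sketch {
	bool dying;
	struct eventpoll_sketch *ep;
};

static pthread_mutex_t f_lock_sketch = PTHREAD_MUTEX_INITIALIZER;

/* Test-and-set 'dying' under the lock; the winner owns the final put. */
static bool epitem_claim_sketch(struct epitem_sketch *epi)
{
	bool won;

	pthread_mutex_lock(&f_lock_sketch);
	won = !epi->dying;
	epi->dying = true;
	pthread_mutex_unlock(&f_lock_sketch);
	return won;
}

/* Called from either the monitored file close() or the EP file close();
 * of two concurrent callers, only one drops the epitem's EP reference. */
static void epitem_release_sketch(struct epitem_sketch *epi)
{
	if (epitem_claim_sketch(epi))
		ep_put_sketch(epi->ep);
}
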
> > > An alternative would be to avoid touching the references acquired via
> > > the epitems at EP file close() time, but that could leave the EP struct
> > > alive for a potentially unbounded time after the EP file close(), with
> > > nasty side effects.
> > > 
> > > With all the above in place, we can drop the epmutex usage at disposal time.
> > > 
> > > Overall this produces a significant performance improvement in the
> > > mentioned connection/rate scenario: the mutex operations disappear from
> > > the topmost offenders in the perf report, and the measured connection
> > > rate grows by ~60%.
> > > 
> > > To make the change more readable, this additionally renames ep_free() to
> > > ep_clear_and_put() and moves the actual memory cleanup into a separate
> > > ep_free() helper.
> > > 
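
(And the rename, sketched with a hypothetical body: the old ep_free()
entry point becomes ep_clear_and_put(), which unregisters the epitems
and then drops the EP file's own reference; the final memory cleanup
lives in the small free helper from the first sketch and is reached
through ep_put_sketch() by whichever close() path drops the last
reference.)

static void ep_clear_and_put_sketch(struct eventpoll_sketch *ep)
{
	/* ... unregister every epitem, calling ep_put_sketch() for each ... */
	ep_put_sketch(ep);	/* then drop the EP file's own reference */
}
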
> > > Tested-by: Xiumei Mu <xmu@...hiat.com>
> > 
> > Is that "redhiat" in the mail a typo?
> 
> Indeed yes! Thanks for noticing. Should I share a new revision to
> address that?

No, I think we can just fix that up...
