Message-ID: <YecAQ6GsHimESUHq@unreal>
Date:   Tue, 18 Jan 2022 20:00:35 +0200
From:   Leon Romanovsky <leon@...nel.org>
To:     Santosh Shilimkar <santosh.shilimkar@...cle.com>
Cc:     Praveen Kannoju <praveen.kannoju@...cle.com>,
        "David S . Miller" <davem@...emloft.net>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "rds-devel@....oracle.com" <rds-devel@....oracle.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Rama Nichanamatlu <rama.nichanamatlu@...cle.com>,
        Rajesh Sivaramasubramaniom 
        <rajesh.sivaramasubramaniom@...cle.com>
Subject: Re: [PATCH RFC] rds: ib: Reduce the contention caused by the
 asynchronous workers to flush the mr pool

On Tue, Jan 18, 2022 at 04:48:43PM +0000, Santosh Shilimkar wrote:
> 
> > On Jan 18, 2022, at 6:47 AM, Praveen Kannoju <praveen.kannoju@...cle.com> wrote:
> > 
> > This patch aims to reduce the number of asynchronous workers spawned
> > to execute the function "rds_ib_flush_mr_pool" during high I/O
> > situations. Synchronous call paths to "rds_ib_flush_mr_pool" are
> > executed undisturbed. Reducing the number of processes contending to
> > flush the mr pool greatly reduces the number of D-state processes
> > waiting to acquire the mutex lock; previously these stalled processes
> > were causing DB instance crashes, as they made no progress while
> > waiting for the lock.
> > 
> > Signed-off-by: Praveen Kumar Kannoju <praveen.kannoju@...cle.com>
> > ---
> > 
> […]
> 
> > diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
> > index 8f070ee..6b640b5 100644
> > --- a/net/rds/ib_rdma.c
> > +++ b/net/rds/ib_rdma.c
> > @@ -393,6 +393,8 @@ int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool,
> > 	 */
> > 	dirty_to_clean = llist_append_to_list(&pool->drop_list, &unmap_list);
> > 	dirty_to_clean += llist_append_to_list(&pool->free_list, &unmap_list);
> > +	WRITE_ONCE(pool->flush_ongoing, true);
> > +	smp_wmb();
> > 	if (free_all) {
> > 		unsigned long flags;
> > 
> > @@ -430,6 +432,8 @@ int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool,
> > 	atomic_sub(nfreed, &pool->item_count);
> > 
> > out:
> > +	WRITE_ONCE(pool->flush_ongoing, false);
> > +	smp_wmb();
> > 	mutex_unlock(&pool->flush_lock);
> > 	if (waitqueue_active(&pool->flush_wait))
> > 		wake_up(&pool->flush_wait);
> > @@ -507,8 +511,17 @@ void rds_ib_free_mr(void *trans_private, int invalidate)
> > 
> > 	/* If we've pinned too many pages, request a flush */
> > 	if (atomic_read(&pool->free_pinned) >= pool->max_free_pinned ||
> > -	    atomic_read(&pool->dirty_count) >= pool->max_items / 5)
> > -		queue_delayed_work(rds_ib_mr_wq, &pool->flush_worker, 10);
> > +	    atomic_read(&pool->dirty_count) >= pool->max_items / 5) {
> > +		smp_rmb();
> You won’t need these explicit barriers, since the atomics and
> WRITE_ONCE() above already issue them.
> 
> Other than that, it seems a good idea to me. With that fixed, feel free to add my
> ack and repost.

I disagree with the sentence that it is a "good idea". This flush_ongoing
bool is racy; nothing prevents it from changing immediately after it is
read. queue_delayed_work() already ensures that only one work item is
running, so a much cleaner solution would be to do the check inside
rds_ib_flush_mr_pool() and decide there whether a flush is needed at all.
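
Something along these lines, perhaps (a completely untested sketch, just
to illustrate the idea; the thresholds are the ones from the hunk you
quoted above):

	int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool,
				 int free_all, struct rds_ib_mr **ibmr_ret)
	{
		...
		mutex_lock(&pool->flush_lock);

		/*
		 * Re-check under the lock: an earlier flush may have
		 * already drained the pool, so this worker can bail out
		 * instead of contending for work that is gone.
		 */
		if (!free_all && !ibmr_ret &&
		    atomic_read(&pool->free_pinned) < pool->max_free_pinned &&
		    atomic_read(&pool->dirty_count) < pool->max_items / 5) {
			mutex_unlock(&pool->flush_lock);
			return 0;
		}
		...
	}

No extra flag, no barriers, and the racy window simply doesn't matter
because the decision is made under the same mutex that serializes the
flush itself.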

Thanks

> 
> Regards,
> Santosh
