Message-ID: <20171115140020.GA6771@cmpxchg.org>
Date:   Wed, 15 Nov 2017 09:00:20 -0500
From:   Johannes Weiner <hannes@...xchg.org>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
        Minchan Kim <minchan@...nel.org>,
        Huang Ying <ying.huang@...el.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Greg Thelen <gthelen@...gle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm,vmscan: Kill global shrinker lock.

On Wed, Nov 15, 2017 at 10:02:51AM +0100, Michal Hocko wrote:
> On Tue 14-11-17 06:37:42, Tetsuo Handa wrote:
> > This patch uses polling loop with short sleep for unregister_shrinker()
> > rather than wait_on_atomic_t(), for we can save reader's cost (plain
> > atomic_dec() compared to atomic_dec_and_test()), we can expect that
> > do_shrink_slab() of unregistering shrinker likely returns shortly, and
> > we can avoid khungtaskd warnings when do_shrink_slab() of unregistering
> > shrinker unexpectedly took so long.
> 
> I would use wait_event_interruptible in the remove path rather than the
> short sleep loop, which is just too ugly. The shrinker walk would then
> just wake_up the sleeper when the refcount drops to 0. The two
> synchronize_rcu calls are quite ugly as well, but I was not able to
> simplify them. I will keep thinking. It just sucks that we cannot
> follow the standard pattern of an RCU list of dynamically allocated
> structures here.

It's because the refcount is dropped too early. The refcount protects
the object during shrink, but not during the subsequent list_next(), so
you need an additional grace period just for that part.
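
Concretely, the early-drop version is something like this (a sketch,
not the actual patch code):

	rcu_read_lock();
	list_for_each_entry_rcu(pos, list, node) {
		if (!atomic_inc_not_zero(&pos->ref))
			continue;
		rcu_read_unlock();

		shrink();

		rcu_read_lock();
		/*
		 * The reference is dropped here, but the loop advance
		 * still dereferences pos->next. Only the RCU read side
		 * protects pos at that point, so the updater needs a
		 * second synchronize_rcu() after seeing ref == 0 before
		 * it can free pos.
		 */
		atomic_dec(&pos->ref);
	}
	rcu_read_unlock();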

I think you could defer dropping the reference count to the next
iteration. That way the list_next() works without requiring a second
RCU grace period.

The refcount protects the object and its list pointers; RCU protects
what the list pointers point to until we acquire the reference:

	rcu_read_lock();
	list_for_each_entry_rcu(pos, list, node) {
		/* Skip entries that are already on their way out */
		if (!atomic_inc_not_zero(&pos->ref))
			continue;
		rcu_read_unlock();

		/*
		 * pos is pinned now, so the previous entry's list
		 * pointers are no longer needed; drop its reference.
		 */
		if (prev)
			atomic_dec(&prev->ref);
		prev = pos;

		shrink();

		rcu_read_lock();	/* for the advance past pos */
	}
	rcu_read_unlock();
	if (prev)
		atomic_dec(&prev->ref);
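
On the unregister side this pairs with the wait_event() idea above
instead of a polling loop. Roughly, and completely untested (list_lock
and unregister_wq are made-up names here, and the walker's atomic_dec()
would have to become atomic_dec_and_test() plus a wake_up()):

	static DECLARE_WAIT_QUEUE_HEAD(unregister_wq);

	void unregister(struct entry *e)
	{
		spin_lock(&list_lock);
		list_del_rcu(&e->node);
		spin_unlock(&list_lock);

		/* Drop the registration ref; wait out any walker that
		 * still pins e as its prev. */
		if (!atomic_dec_and_test(&e->ref))
			wait_event(unregister_wq, atomic_read(&e->ref) == 0);

		/*
		 * One grace period so that walkers which found e on the
		 * list before the list_del_rcu() are done dereferencing
		 * it, then it is safe to free.
		 */
		synchronize_rcu();
		kfree(e);
	}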

In any case, Minchan's lock breaking seems way preferable to that level
of head-scratching complexity for an unusual case like Shakeel's.
