Message-ID: <Yo5B1tLcYPUoaACS@cmpxchg.org>
Date:   Wed, 25 May 2022 10:48:54 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Muchun Song <songmuchun@...edance.com>
Cc:     mhocko@...nel.org, roman.gushchin@...ux.dev, shakeelb@...gle.com,
        cgroups@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, duanxiongchun@...edance.com,
        longman@...hat.com
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU
 pages are reparented

On Wed, May 25, 2022 at 09:03:59PM +0800, Muchun Song wrote:
> On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
> > On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> > > On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > > > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > > >   */
> > > > >  struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > > >  {
> > > > > -	struct lruvec *lruvec = folio_lruvec(folio);
> > > > > +	struct lruvec *lruvec;
> > > > >  
> > > > > +	rcu_read_lock();
> > > > > +retry:
> > > > > +	lruvec = folio_lruvec(folio);
> > > > >  	spin_lock(&lruvec->lru_lock);
> > > > > -	lruvec_memcg_debug(lruvec, folio);
> > > > > +
> > > > > +	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > > > +		spin_unlock(&lruvec->lru_lock);
> > > > > +		goto retry;
> > > > > +	}
> > > > > +
> > > > > +	/*
> > > > > +	 * Preemption is disabled in the internal of spin_lock, which can serve
> > > > > +	 * as RCU read-side critical sections.
> > > > > +	 */
> > > > > +	rcu_read_unlock();
> > > > 
> > > > The code looks right to me, but I don't understand the comment: why do
> > > > we care that the rcu read-side continues? With the lru_lock held,
> > > > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> > > >
> > > 
> > > Right. We could hold the rcu read lock until the end of reparenting.  So
> > > you mean we should do rcu_read_unlock() in folio_lruvec_lock()?
> > 
> > The comment seems to suggest that disabling preemption is what keeps
> > the lruvec alive. But it's the lru_lock that keeps it alive. The
> > cgroup destruction path tries to take the lru_lock long before it even
> > gets to synchronize_rcu(). Once you hold the lru_lock, having an
> > implied read-side critical section as well doesn't seem to matter.
> >
> 
> Well, I thought that spinlocks imply read-side critical sections
> because they disable preemption (I learned from the comments above
> synchronize_rcu(), which say that regions where interrupts, preemption,
> or softirqs are disabled also serve as RCU read-side critical sections).
> So I have a question: is that still true on a PREEMPT_RT kernel (I am
> not familiar with it)?

Yes, but you're missing my point.

> > Should the comment be deleted?
> 
> I think we could remove the comment. If the answer to the above question is
> no, it seems like we should continue holding the rcu read lock.

It's true.

But assume it's false for a second. Why would you need to continue
holding it? What would it protect? The lruvec would be pinned by the
spinlock even if it DIDN'T imply an RCU lock, right?

So I don't understand the point of the comment. If the implied RCU
lock is protecting something not covered by the bare spinlock itself,
it should be added to the comment. Otherwise, the comment should go.
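
To make that concrete, here is a rough sketch of the destruction-side
ordering I'm referring to (not the actual kernel code; the function name
and the exact call sequence are made up for illustration):

static void lruvec_reparent_sketch(struct lruvec *child,
				   struct lruvec *parent)
{
	/*
	 * Reparenting has to take the child's lru_lock before it can
	 * move folios over to the parent, so anybody already holding
	 * child->lru_lock blocks this path right here.
	 */
	spin_lock(&child->lru_lock);
	/* ... move folios from child to parent ... */
	spin_unlock(&child->lru_lock);

	/*
	 * Only after the lock is dropped does destruction get to wait
	 * for RCU readers and eventually free the child memcg and its
	 * lruvec.  So it's the lru_lock, not an implied RCU read-side
	 * section, that pins the lruvec for folio_lruvec_lock().
	 */
	synchronize_rcu();
	/* ... free the child lruvec ... */
}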
