Message-ID: <20180302163603.GQ19312@magnolia>
Date: Fri, 2 Mar 2018 08:36:03 -0800
From: "Darrick J. Wong" <darrick.wong@...cle.com>
To: Vratislav Bendel <vbendel@...hat.com>
Cc: linux-xfs@...r.kernel.org, Brian Foster <bfoster@...hat.com>,
linux-kernel@...r.kernel.org, djwong@...nel.org
Subject: Re: [PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic
On Thu, Mar 01, 2018 at 02:48:00PM -0800, Darrick J. Wong wrote:
> On Wed, Feb 28, 2018 at 04:49:51PM +0100, Vratislav Bendel wrote:
> > The function xfs_buftarg_isolate(), used by the xfs buffer shrinkers
> > to determine whether a buffer should be isolated and disposed of
> > from the LRU list, has inverted logic.
> >
> > Excerpt from xfs_buftarg_isolate():
> > /*
> > * Decrement the b_lru_ref count unless the value is already
> > * zero. If the value is already zero, we need to reclaim the
> > * buffer, otherwise it gets another trip through the LRU.
> > */
> > if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> > spin_unlock(&bp->b_lock);
> > return LRU_ROTATE;
> > }
> >
> > However, as per documentation, atomic_add_unless() returns _zero_
> > if the atomic value was originally equal to the specified *unless* value.
> >
> > This ultimately causes an xfs_buffer with ->b_lru_ref == 0 to take
> > another trip around the LRU, while isolating buffers with a non-zero
> > b_lru_ref.
> >
> > Signed-off-by: Vratislav Bendel <vbendel@...hat.com>
> > CC: Brian Foster <bfoster@...hat.com>
>
> Looks ok, will test...
> Reviewed-by: Darrick J. Wong <darrick.wong@...cle.com>
This tests ok, but please address Brian and Luis' comments before I put
this in the upstream tree.
--D
> --D
>
> > ---
> > fs/xfs/xfs_buf.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index d1da2ee9e6db..ac669a10c62f 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -1708,7 +1708,7 @@ xfs_buftarg_isolate(
> > * zero. If the value is already zero, we need to reclaim the
> > * buffer, otherwise it gets another trip through the LRU.
> > */
> > - if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> > + if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
> > spin_unlock(&bp->b_lock);
> > return LRU_ROTATE;
> > }
> > --
> > 2.14.3
> >