Message-Id: <20190805.105500.1555481916904502971.davem@davemloft.net>
Date: Mon, 05 Aug 2019 10:55:00 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: hslester96@...il.com
Cc: vishal@...lsio.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] cxgb4: smt: Use refcount_t for refcount
From: Chuhong Yuan <hslester96@...il.com>
Date: Fri, 2 Aug 2019 16:35:47 +0800
> refcount_t is better for reference counters since its
> implementation can prevent overflows.
> So convert atomic_t ref counters to refcount_t.
>
> Signed-off-by: Chuhong Yuan <hslester96@...il.com>
> ---
> Changes in v2:
> - Convert refcount from 0-base to 1-base.
The existing code is buggy and should be fixed before you start making
conversions to it.
> @@ -111,7 +111,7 @@ static void t4_smte_free(struct smt_entry *e)
> */
> void cxgb4_smt_release(struct smt_entry *e)
> {
> - if (atomic_dec_and_test(&e->refcnt))
> + if (refcount_dec_and_test(&e->refcnt))
> t4_smte_free(e);
This runs without any locking and therefore:
> if (e) {
> spin_lock(&e->lock);
> - if (!atomic_read(&e->refcnt)) {
> - atomic_set(&e->refcnt, 1);
> + if (refcount_read(&e->refcnt) == 1) {
> + refcount_set(&e->refcnt, 2);
This test is not safe, since the reference count can asynchronously decrement
to zero above outside of any locks.
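i.e. nothing stops an interleaving like this (left side is the lookup path
quoted above, right side is cxgb4_smt_release(); rough sketch only):

    CPU A (lookup, holds e->lock)        CPU B (cxgb4_smt_release, no lock)
    -----------------------------        ----------------------------------
    refcount_read(&e->refcnt)
      sees 1, decides the entry
      is free to claim
                                         refcount_dec_and_test()
                                           count changes underneath,
                                           possibly hitting zero and
                                           calling t4_smte_free(e)
    refcount_set(&e->refcnt, 2)
      based on the stale value it
      read above

e->lock buys you nothing here, because the release path never takes it.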
Then you'll need to add locking, and as a result the need for an atomic
counter goes away and just a normal int can be used.
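Something like the sketch below is what I mean -- not even compile tested,
and it assumes struct smt_entry carries a plain "int refcnt" instead of the
atomic, that no caller of cxgb4_smt_release() already holds e->lock, and
that t4_smte_free() is safe to run under that lock (use the _bh variants if
the two paths can run in different contexts, I haven't checked):

void cxgb4_smt_release(struct smt_entry *e)
{
	spin_lock(&e->lock);
	/* Plain int is enough: every access now happens under e->lock. */
	if (--e->refcnt == 0)
		t4_smte_free(e);
	spin_unlock(&e->lock);
}

The lookup side keeps its existing spin_lock(&e->lock), testing
"if (!e->refcnt)" and setting "e->refcnt = 1" under that same lock, so the
zero test and the release are serialized and nothing atomic is needed.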