Message-ID: <A4D65CDD-48D9-47BC-AA6D-0B4E10427E0C@intel.com>
Date: Fri, 27 Oct 2017 09:33:56 +0000
From: "Dilger, Andreas" <andreas.dilger@...el.com>
To: NeilBrown <neilb@...e.com>
CC: "Drokin, Oleg" <oleg.drokin@...el.com>,
James Simmons <jsimmons@...radead.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"lustre-devel@...ts.lustre.org" <lustre-devel@...ts.lustre.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 8/9] staging: lustre: ldlm: remove unnecessary
'ownlocks' variable.
On Oct 22, 2017, at 18:53, NeilBrown <neilb@...e.com> wrote:
>
> Now that the code has been simplified, 'ownlocks' is not
> necessary.
>
> The loop which sets it exits with 'lock' having the same value as
> 'ownlocks', or pointing to the head of the list if ownlocks is NULL.
>
> The current code then tests ownlocks and sets 'lock' to exactly the
> value that it currently has.
>
> So discard 'ownlocks'.
>
> Also remove unnecessary initialization of 'lock'.
>
> Signed-off-by: NeilBrown <neilb@...e.com>
Reviewed-by: Andreas Dilger <andreas.dilger@...el.com>
> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c | 15 +++------------
> 1 file changed, 3 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index 0bf6dce1c5b1..774d8667769a 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -115,8 +115,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 	struct ldlm_resource *res = req->l_resource;
> 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
> 	struct ldlm_lock *tmp;
> -	struct ldlm_lock *ownlocks = NULL;
> -	struct ldlm_lock *lock = NULL;
> +	struct ldlm_lock *lock;
> 	struct ldlm_lock *new = req;
> 	struct ldlm_lock *new2 = NULL;
> 	enum ldlm_mode mode = req->l_req_mode;
> @@ -140,22 +139,14 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 	/* This loop determines where this processes locks start
> 	 * in the resource lr_granted list.
> 	 */
> -	list_for_each_entry(lock, &res->lr_granted, l_res_link) {
> -		if (ldlm_same_flock_owner(lock, req)) {
> -			ownlocks = lock;
> +	list_for_each_entry(lock, &res->lr_granted, l_res_link)
> +		if (ldlm_same_flock_owner(lock, req))
> 			break;
> -		}
> -	}
>
> 	/* Scan the locks owned by this process to find the insertion point
> 	 * (as locks are ordered), and to handle overlaps.
> 	 * We may have to merge or split existing locks.
> 	 */
> -	if (ownlocks)
> -		lock = ownlocks;
> -	else
> -		lock = list_entry(&res->lr_granted,
> -				  struct ldlm_lock, l_res_link);
> 	list_for_each_entry_safe_from(lock, tmp, &res->lr_granted, l_res_link) {
>
> 		if (!ldlm_same_flock_owner(lock, new))
>
>
>
Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation