Message-ID: <565DB4BA.80208@unisys.com>
Date: Tue, 1 Dec 2015 09:54:50 -0500
From: Ben Romer <benjamin.romer@...sys.com>
To: Dan Carpenter <dan.carpenter@...cle.com>,
Sudip Mukherjee <sudipm.mukherjee@...il.com>
CC: David Kershner <david.kershner@...sys.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
<devel@...verdev.osuosl.org>, <sparmaintainer@...sys.com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] staging: unisys: use common return path
On 12/01/2015 03:00 AM, Dan Carpenter wrote:
> Doing One Err style error handling is often a mistake but it's ok here.
Why is it okay here? I don't understand why this function would be any
different from the other places where the code used a goto.
If we *have* to change it, I would prefer that we not add a goto and
instead add an additional boolean local variable to control serverdown
completion. That's less complex and makes the intent clear.
like this:
static int
visornic_serverdown(struct visornic_devdata *devdata,
		    visorbus_state_complete_func complete_func)
{
	unsigned long flags;
	int retval = 0;
	bool complete_serverdown = false;

	spin_lock_irqsave(&devdata->priv_lock, flags);
	if (!devdata->server_down && !devdata->server_change_state) {
		if (devdata->going_away) {
			dev_dbg(&devdata->dev->device,
				"%s aborting because device removal pending\n",
				__func__);
			retval = -ENODEV;
		} else {
			devdata->server_change_state = true;
			devdata->server_down_complete_func = complete_func;
			complete_serverdown = true;
		}
	} else if (devdata->server_change_state) {
		dev_dbg(&devdata->dev->device, "%s changing state\n",
			__func__);
		retval = -EINVAL;
	}
	spin_unlock_irqrestore(&devdata->priv_lock, flags);

	/* call the completion outside the spinlock */
	if (complete_serverdown)
		visornic_serverdown_complete(devdata);

	return retval;
}
-- Ben