Message-ID: <20160404143521.GB8807@hmsreliant.think-freely.org>
Date:	Mon, 4 Apr 2016 10:35:21 -0400
From:	Neil Horman <nhorman@...hat.com>
To:	"Sell, Timothy C" <Timothy.Sell@...sys.com>
Cc:	Iban Rodriguez <iban.rodriguez@....com>,
	"Kershner, David A" <David.Kershner@...sys.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Benjamin Romer <benjamin.romer@...sys.com>,
	*S-Par-Maintainer <SParMaintainer@...sys.com>,
	"devel@...verdev.osuosl.org" <devel@...verdev.osuosl.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Staging: unisys/verisonic: Correct double unlock

On Sat, Apr 02, 2016 at 11:20:14PM +0000, Sell, Timothy C wrote:
> > -----Original Message-----
> > From: Iban Rodriguez [mailto:iban.rodriguez@....com]
> > Sent: Saturday, April 02, 2016 1:47 PM
> > To: Kershner, David A; Greg Kroah-Hartman; Benjamin Romer; Sell, Timothy
> > C; Neil Horman
> > Cc: *S-Par-Maintainer; devel@...verdev.osuosl.org; linux-
> > kernel@...r.kernel.org; Iban Rodriguez
> > Subject: Staging: unisys/verisonic: Correct double unlock
> > 
> > 'priv_lock' is unlocked twice. The first unlock is removed, so the
> > function 'visornic_serverdown_complete' is now called with
> > 'priv_lock' held, because 'devdata' is modified inside it.
> > 
> > Signed-off-by: Iban Rodriguez <iban.rodriguez@....com>
> > ---
> >  drivers/staging/unisys/visornic/visornic_main.c | 1 -
> >  1 file changed, 1 deletion(-)
> > 
> > diff --git a/drivers/staging/unisys/visornic/visornic_main.c
> > b/drivers/staging/unisys/visornic/visornic_main.c
> > index be0d057346c3..af03f2938fe9 100644
> > --- a/drivers/staging/unisys/visornic/visornic_main.c
> > +++ b/drivers/staging/unisys/visornic/visornic_main.c
> > @@ -368,7 +368,6 @@ visornic_serverdown(struct visornic_devdata
> > *devdata,
> >  		}
> >  		devdata->server_change_state = true;
> >  		devdata->server_down_complete_func = complete_func;
> > -		spin_unlock_irqrestore(&devdata->priv_lock, flags);
> >  		visornic_serverdown_complete(devdata);
> >  	} else if (devdata->server_change_state) {
> >  		dev_dbg(&devdata->dev->device, "%s changing state\n",
> 
> I agree there is a bug here involving priv_lock being unlocked
> twice, but this patch isn't the appropriate fix.  Reason is, we can NOT
> call visornic_serverdown_complete() while holding a spinlock
> (which is what this patch would cause to occur) because
> visornic_serverdown_complete() might block when it calls
> rtnl_lock() in this code sequence (rtnl_lock() grabs a mutex):
> 
>     rtnl_lock();
>     dev_close(netdev);
>     rtnl_unlock();
> 
> Blocking with a spinlock held is always a bad idea.  :-(
> 
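
To make the failure mode concrete, here is the ordering the proposed patch
would create, condensed from the diff above (a sketch, not the verbatim
driver code):

    /* visornic_serverdown(), branch touched by the patch (condensed) */
    spin_lock_irqsave(&devdata->priv_lock, flags);

    devdata->server_change_state = true;
    devdata->server_down_complete_func = complete_func;

    /*
     * The unlock that used to sit here is removed by the patch, so
     * priv_lock is still held (with interrupts off) across ...
     */
    visornic_serverdown_complete(devdata);
    /*
     * ... which does rtnl_lock(); dev_close(netdev); rtnl_unlock();
     * rtnl_lock() takes a mutex and may sleep, and sleeping while
     * holding a spinlock is not allowed.
     */

    spin_unlock_irqrestore(&devdata->priv_lock, flags);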

You should just get rid of the priv_lock entirely; it's not needed.

priv_lock is used in the following functions:

visornic_serverdown - only called at the end of a tx_timeout reset operation, so
you are sure that the rx and tx paths are quiesced (i.e. no data access
happening)

visornic_disable_with_timeout - move the netif_stop_queue operation to the top
of this function and you will be guaranteed no concurrent access in the tx path
(see the sketch after this list)
visornic_enable_with_timeout - same as above, make sure that netif_start_queue
and napi_enable are at the end of the function and you are guaranteed no
concurrent access.

visornic_xmit - The queue lock taken around the netdev_start_xmit routine
already serializes transmits, so you are guaranteed single access here.

visornic_xmit_timeout - only called on a tx timeout, when you are guaranteed not
to have a concurrent transmit occurring, by definition.

visornic_rx - the only tests made here are of devdata members that are altered
in service_resp_queue, and visornic_rx is only called from
service_resp_queue, so you are guaranteed a stable data structure, as there is
only ever one context in service_resp_queue since it's called from the napi poll
routine.

service_resp_queue - Same as above; for any given queue, service_resp_queue only
has one context executing at once.

host_side_disappeared - only called from visornic_remove, which implies that all
associated devices are closed already, guaranteeing single access.

visornic_remove
visornic_resume - Both of these functions only get called when all network
interfaces are quiesced.

Just remove the lock and make the minor changes needed to guarantee isolated
access.  It makes the code cleaner and faster.
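
As a rough illustration, visornic_disable_with_timeout() could start like this
(an untested sketch; the napi field name and the napi_disable() call are
assumptions, and the rest of the body is elided):

    static int visornic_disable_with_timeout(struct net_device *netdev,
                                             const int timeout)
    {
            struct visornic_devdata *devdata = netdev_priv(netdev);

            /*
             * Stop the tx queue before touching devdata: after this no
             * new transmit can race with the state changes below, so
             * they need no spinlock.
             */
            netif_stop_queue(netdev);

            /*
             * Quiesce the rx side as well; service_resp_queue() runs
             * from the napi poll routine, so disabling napi stops it.
             */
            napi_disable(&devdata->napi);

            /*
             * The former priv_lock-protected state changes and the
             * serverdown handshake continue here, now with exclusive
             * access guaranteed by the ordering above.
             */

            return 0;
    }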

Neil


> > --
> > 1.9.1
> 
