Message-ID: <20121128100412.GA6269@shrek.podlesie.net>
Date: Wed, 28 Nov 2012 11:04:12 +0100
From: Krzysztof Mazur <krzysiek@...lesie.net>
To: David Laight <David.Laight@...LAB.COM>
Cc: chas williams - CONTRACTOR <chas@....nrl.navy.mil>,
David Woodhouse <dwmw2@...radead.org>, davem@...emloft.net,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
nathan@...verse.com.au
Subject: Re: [PATCH v2 3/3] pppoatm: protect against freeing of vcc
On Wed, Nov 28, 2012 at 09:21:37AM -0000, David Laight wrote:
> > On Tue, 27 Nov 2012 18:02:29 +0000
> > David Woodhouse <dwmw2@...radead.org> wrote:
> >
> > > In solos-pci at least, the ops->close() function doesn't flush all
> > > pending skbs for this vcc before returning. So there can be a tasklet
> > > somewhere which has loaded the address of the vcc->pop function from one
> > > of them, and is going to call it in some unspecified amount of time.
> > >
> > > Should we make the device's ->close function wait for all TX and RX skbs
> > > for this vcc to complete?
> >
> > The driver's close routine should wait for any pending tx and rx
> > to complete. Take a look at he.c in drivers/atm.
>
> I'm not sure that sleeping for long periods in close() is always a
> good idea. If the process is event driven it will be unable to
> handle events on its other fds until the close completes.
> That may be known not to apply in this case, but it is a problem
> more generally.
> In this case the close should probably (IMHO at least) only sleep
> while pending tx and rx are aborted/discarded.
>
> Even when it might make sense to sleep in close until tx drains,
> there needs to be a finite timeout before it becomes abortive.
>
The ->close() routine can simply abort any pending rx/tx and wait for
the currently running rx/tx code to complete. That shouldn't take
long.
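
Something along these lines for the tx side (only a rough, untested
sketch; the mydev, tx_queue and tx_tasklet names are made up and not
taken from solos-pci or he.c):

#include <linux/atmdev.h>
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

/* hypothetical per-device state */
struct mydev {
	struct sk_buff_head tx_queue;
	struct tasklet_struct tx_tasklet;
};

static void mydev_close(struct atm_vcc *vcc)
{
	struct mydev *dev = vcc->dev->dev_data;
	struct sk_buff_head aborted;
	struct sk_buff *skb, *tmp;
	unsigned long flags;

	__skb_queue_head_init(&aborted);

	/* wait for a tx tasklet that is already scheduled or running
	 * to finish (the real driver also has to make sure nothing
	 * reschedules it for this vcc afterwards) */
	tasklet_kill(&dev->tx_tasklet);

	/* pull every skb queued for this vcc off the tx queue ... */
	spin_lock_irqsave(&dev->tx_queue.lock, flags);
	skb_queue_walk_safe(&dev->tx_queue, skb, tmp) {
		if (ATM_SKB(skb)->vcc == vcc) {
			__skb_unlink(skb, &dev->tx_queue);
			__skb_queue_tail(&aborted, skb);
		}
	}
	spin_unlock_irqrestore(&dev->tx_queue.lock, flags);

	/* ... and give them back via vcc->pop() outside the lock */
	while ((skb = __skb_dequeue(&aborted)) != NULL) {
		if (vcc->pop)
			vcc->pop(vcc, skb);
		else
			dev_kfree_skb_any(skb);
	}
}

The rx side would need the same treatment, but the point is that
close() only waits for the running tasklet, not for the hardware to
drain the whole queue.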
Krzysiek