Date: Sat, 03 Mar 2018 11:33:53 +0200
From: Denys Fedoryshchenko <nuclearcat@...learcat.com>
To: Guillaume Nault <g.nault@...halink.fr>
Cc: Cong Wang <xiyou.wangcong@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
netdev-owner@...r.kernel.org
Subject: Re: ppp/pppoe, still panic 4.15.3 in ppp_push
On 2018-03-02 19:43, Guillaume Nault wrote:
> On Thu, Mar 01, 2018 at 10:07:05PM +0200, Denys Fedoryshchenko wrote:
>> On 2018-03-01 22:01, Guillaume Nault wrote:
>> > diff --git a/drivers/net/ppp/ppp_generic.c
>> > b/drivers/net/ppp/ppp_generic.c
>> > index 255a5def56e9..2acf4b0eabd1 100644
>> > --- a/drivers/net/ppp/ppp_generic.c
>> > +++ b/drivers/net/ppp/ppp_generic.c
>> > @@ -3161,6 +3161,15 @@ ppp_connect_channel(struct channel *pch, int
>> > unit)
>> > goto outl;
>> >
>> > ppp_lock(ppp);
>> > + spin_lock_bh(&pch->downl);
>> > + if (!pch->chan) {
>> > + /* Don't connect unregistered channels */
>> > + ppp_unlock(ppp);
>> > + spin_unlock_bh(&pch->downl);
>
> This is obviously wrong. It should have been
> + spin_unlock_bh(&pch->downl);
> + ppp_unlock(ppp);
>
> Sorry, I shouldn't have hurried.
> This is fixed in the official version.
>
>> > + ret = -ENOTCONN;
>> > + goto outl;
>> > + }
>> > + spin_unlock_bh(&pch->downl);
>> > if (pch->file.hdrlen > ppp->file.hdrlen)
>> > ppp->file.hdrlen = pch->file.hdrlen;
>> > hdrlen = pch->file.hdrlen + 2; /* for protocol bytes */
>> Ok, I will try to test that tonight.
>> Thanks a lot! For me the problem was also solved by removing unit-cache,
>> but I think it's nice to have the bug fixed :)
>>
> I think this bug has been there forever, indeed it's good to have it
> fixed.
> Thanks a lot for your help (and patience!).
>
> FYI, if you see accel-ppp logs like
> "ioctl(PPPIOCCONNECT): Transport endpoint is not connected", then that
> means the patch prevented the scenario that was leading to the original
> crash.
>
> Out of curiosity, did unit-cache really bring performance improvements
> on your workload?
On old kernels it definitely did. Due to local specifics (electricity
outages) I might have a few thousand interfaces deleted and created
again in a short period of time, and back then interface creation and
deletion (especially with thousands of them) was very expensive.