Message-ID: <20180511204228.GO4977@localhost.localdomain>
Date: Fri, 11 May 2018 17:42:28 -0300
From: Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>,
syzbot <syzbot+fc78715ba3b3257caf6a@...kaller.appspotmail.com>,
Vladislav Yasevich <vyasevich@...il.com>,
Neil Horman <nhorman@...driver.com>,
linux-sctp@...r.kernel.org, Andrei Vagin <avagin@...tuozzo.com>,
David Miller <davem@...emloft.net>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
LKML <linux-kernel@...r.kernel.org>,
netdev <netdev@...r.kernel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>
Subject: Re: INFO: rcu detected stall in kfree_skbmem
On Fri, May 11, 2018 at 12:08:33PM -0700, Eric Dumazet wrote:
>
>
> On 05/11/2018 11:41 AM, Marcelo Ricardo Leitner wrote:
>
> > But calling ip6_xmit with rcu_read_lock held is expected; the tcp
> > stack also does it.
> > Thus I think this is more of an issue with the IPv6 stack. If a host
> > has an extensive ip6tables ruleset, it probably triggers this more
> > easily.
> >
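For reference, both stacks wrap ip6_xmit() like this (a rough sketch
against a ~4.17 tree; example_v6_xmit is a made-up name here and field
details may be off):

#include <linux/rcupdate.h>
#include <net/sock.h>
#include <net/ipv6.h>

/* np->opt is RCU-protected, so both sctp_v6_xmit() and tcp's
 * inet6_csk_xmit() call ip6_xmit() under rcu_read_lock(). */
static int example_v6_xmit(struct sock *sk, struct sk_buff *skb,
			   struct flowi6 *fl6)
{
	struct ipv6_pinfo *np = inet6_sk(sk);
	int res;

	rcu_read_lock();
	res = ip6_xmit(sk, skb, fl6, sk->sk_mark,
		       rcu_dereference(np->opt), np->tclass);
	rcu_read_unlock();

	return res;
}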
> >>> sctp_v6_xmit+0x4a5/0x6b0 net/sctp/ipv6.c:225
> >>> sctp_packet_transmit+0x26f6/0x3ba0 net/sctp/output.c:650
> >>> sctp_outq_flush+0x1373/0x4370 net/sctp/outqueue.c:1197
> >>> sctp_outq_uncork+0x6a/0x80 net/sctp/outqueue.c:776
> >>> sctp_cmd_interpreter net/sctp/sm_sideeffect.c:1820 [inline]
> >>> sctp_side_effects net/sctp/sm_sideeffect.c:1220 [inline]
> >>> sctp_do_sm+0x596/0x7160 net/sctp/sm_sideeffect.c:1191
> >>> sctp_generate_heartbeat_event+0x218/0x450 net/sctp/sm_sideeffect.c:406
> >>> call_timer_fn+0x230/0x940 kernel/time/timer.c:1326
> >>> expire_timers kernel/time/timer.c:1363 [inline]
> >
> > Having this call come from a timer means the sctp stack hadn't
> > processed anything for too long.
> >
>
> I feel the problem is that this part is stuck in some infinite loop.
>
> I have seen these stack traces in other reports.
Checked the mail history now; it seems at least two other reports of RCU
stalls had sctp_generate_heartbeat_event involved.
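For context, the handler is roughly this (simplified excerpt of
net/sctp/sm_sideeffect.c from memory, so details may be off):

/* Heartbeat timer: defer with a short retry if the socket is owned by
 * user context, otherwise run the state machine -- which is where the
 * outq flush/transmit path in the traces starts. */
void sctp_generate_heartbeat_event(struct timer_list *t)
{
	struct sctp_transport *transport = from_timer(transport, t, hb_timer);
	struct sctp_association *asoc = transport->asoc;
	struct sock *sk = asoc->base.sk;
	struct net *net = sock_net(sk);
	int error = 0;

	bh_lock_sock(sk);
	if (sock_owned_by_user(sk)) {
		/* Try again later. */
		if (!mod_timer(&transport->hb_timer, jiffies + (HZ / 20)))
			sctp_transport_hold(transport);
		goto out_unlock;
	}

	error = sctp_do_sm(net, SCTP_EVENT_T_TIMEOUT,
			   SCTP_ST_TIMEOUT(SCTP_EVENT_TIMEOUT_HEARTBEAT),
			   asoc->state, asoc->ep, asoc, transport,
			   GFP_ATOMIC);
	if (error)
		sk->sk_err = -error;

out_unlock:
	bh_unlock_sock(sk);
	sctp_transport_put(transport);
}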
>
> Maybe some kind of list corruption.
Could be.
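If it is list corruption, the failure mode can be as simple as a walk
that never reaches its sentinel again. Toy userspace demo (not kernel
code, just to show the shape):

#include <stdio.h>

struct node {
	struct node *next;
	int id;
};

int main(void)
{
	struct node head = { .next = &head, .id = -1 };
	struct node a = { .id = 1 }, b = { .id = 2 };

	/* Proper circular list: head -> a -> b -> head */
	head.next = &a;
	a.next = &b;
	b.next = &head;

	/* Corruption: b points back at a, so the walk below never
	 * sees the head sentinel again. */
	b.next = &a;

	int seen = 0;
	for (struct node *p = head.next; p != &head; p = p->next) {
		printf("node %d\n", p->id);
		if (++seen > 10) {	/* bail out for the demo */
			puts("never terminates -> looks like a stall");
			break;
		}
	}
	return 0;
}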
Do we know if it generated a flood of packets?
Marcelo