Message-ID: <7b89cd46-3957-4d57-88ff-ca30e517d214@intel.com>
Date: Wed, 14 Feb 2024 09:06:16 -0800
From: Alan Brady <alan.brady@...el.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
CC: <intel-wired-lan@...ts.osuosl.org>, <netdev@...r.kernel.org>,
<willemdebruijn.kernel@...il.com>, <przemyslaw.kitszel@...el.com>,
<igor.bagnucki@...el.com>
Subject: Re: [PATCH v4 00/10 iwl-next] idpf: refactor virtchnl messages
On 2/14/2024 6:49 AM, Alexander Lobakin wrote:
> From: Alexander Lobakin <aleksander.lobakin@...el.com>
> Date: Tue, 6 Feb 2024 18:02:33 +0100
>
>> From: Alan Brady <alan.brady@...el.com>
>> Date: Mon, 5 Feb 2024 19:37:54 -0800
>>
>>> The motivation for this series is twofold: we want to enable support
>>> for multiple simultaneous messages and to make the channel more
>>> robust. As it stands, the driver can only send and receive a single
>>> message at a time, and if something goes really wrong, it can lead
>>> to data corruption and strange bugs.
>>
>> This works better than v3.
>> For the basic scenarios:
>>
>> Tested-by: Alexander Lobakin <aleksander.lobakin@...el.com>
>
> Sorry, I'm reverting my tag.
> After this series, the CP segfaults on every rmmod idpf:
>
> root@...-imc:/usr/bin/cplane# cp_pim_mdd_handler: MDD interrupt detected
> cp_pim_mdd_event_handler: reg = 40
> Jan 1 00:27:57 mev-imc local0.warn LanCP[190]: cp_pim_mdd_handler: MDD interrupt detected
> cp_pim_mdd_event_handler: reg = 1
> Jan 1 00:28:54 mev-imc local0.warn LanCP[190]: [hma_create_vport/4257] WARNING: RSS is configured on 1st contiguous num of queu
> Jan 1 00:28:54 mev-imc local0.warn LanCP[190]: [hma_create_vport/4257] WARNING: RSS is configured on 1st contiguous num of queu
> Jan 1 00:28:55 mev-imc local0.warn LanCP[190]: [hma_create_vport/4257] WARNING: RSS is configured on 1st contiguous num of queues= 16 start Qid= 34
> Jan 1 00:28:55 mev-imc local0.warn LanCP[190]: [hma_create_vport/4257] WARNING: RSS is configured on 1st contiguous num of queu16 start Qid= 640
> Jan 1 00:28:55 mev-imc local0.err LanCP[190]: [cp_del_node_rxbuff_lst/4179] ERR: Resource list is empty, so nothing to delete here
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_tc_q_region/222] ERR: Failed to init vsi LUT on vsi 1.
> Jan 1 00::08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_fxp_config/1101] ERR: cp_uninit_vsi_rss_config() failed on vsi (1).
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_tc_q_region/222] ERR: Failed to init vsi LUT on vsi 6.
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_rss_config/340] ERR: faininit_vsi_rss_config() failed on vsi (6).
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_tc_q_region/222] ERR: Failed to init vsi LUT on vsi 7.
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_rss_config/340] ERR: failed to remove vsi (7)'s queue regions.
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_fxp_config/1101] ERR: cp_uninit_vo init vsi LUT on vsi 8.
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_rss_config/340] ERR: failed to remove vsi (8)'s queue regions.
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_fxp_config/1101] ERR: cp_uninit_vsi_rss_config() failed on vsi (8).
> Jan 1 00:29:08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_tc_q_region/222] ERR: Failed to init vsi LUT on vsi 1.
> Jan 1 00::08 mev-imc local0.err LanCP[190]: [cp_uninit_vsi_fxp_config/1101] ERR: cp_uninit_vsi_rss_config() failed on vsi (1).
>
> [1]+ Segmentation fault ./imccp 0000:00:01.6 0 cp_init.cfg
>
> Only restarting the whole CP helps -- merely restarting the imccp
> daemon makes it crash again immediately.
>
> This should be dropped from the next-queue until it's fixed.
>
> Thanks,
> Olek
I definitely tested rmmod, so I'm frankly not understanding how this is
possible. If you would like, I'm happy to sync on how you're able to
trigger it. Our internal validation verified that 1000 consecutive
load/unload cycles passed their testing.
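
A minimal sketch of that kind of load/unload test (illustrative only,
not the actual validation harness; it assumes idpf is installed and
needs no extra configuration between iterations):

  #!/bin/sh
  # Illustrative stress loop: load and unload idpf 1000 times and
  # stop at the first failure so the surrounding state can be
  # inspected.
  for i in $(seq 1 1000); do
          modprobe idpf || { echo "load failed at $i"; exit 1; }
          sleep 2    # give the vports time to come up
          rmmod idpf || { echo "unload failed at $i"; exit 1; }
  done
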
Is there more you're doing other than just rmmod?
Alan