Message-ID: <20250512100954.GU3339421@horms.kernel.org>
Date: Mon, 12 May 2025 11:09:54 +0100
From: Simon Horman <horms@...nel.org>
To: Subbaraya Sundeep <sbhatta@...vell.com>
Cc: andrew+netdev@...n.ch, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com, gakula@...vell.com,
hkelam@...vell.com, sgoutham@...vell.com, lcherian@...vell.com,
netdev@...r.kernel.org
Subject: Re: [PATCH] octeontx2-af: Send Link events one by one
On Wed, May 07, 2025 at 10:46:23PM +0530, Subbaraya Sundeep wrote:
> Send link events one after another, otherwise a new message
> overwrites the message that is still being processed by the PF.
>
> Fixes: a88e0f936ba9 ("octeontx2: Detect the mbox up or down message via register")
> Signed-off-by: Subbaraya Sundeep <sbhatta@...vell.com>
> ---
> drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
> index 992fa0b..ebb56eb 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
> @@ -272,6 +272,8 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
>
> otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pfid);
Hi Subbaraya,
Are there other callers of otx2_mbox_msg_send_up()
which also need this logic? If so, perhaps a helper would be useful.
If not, could you clarify why?
>
> + otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid);
This can return an error, which is checked in otx2_sync_mbox_up_msg().
Does it make sense to check it here too?
> +
> mutex_unlock(&rvu->mbox_lock);
> } while (pfmap);
> }
> --
> 2.7.4
>