Message-Id: <20171008.214028.221318843312741195.davem@davemloft.net>
Date: Sun, 08 Oct 2017 21:40:28 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: Michal.Kalderon@...ium.com
Cc: netdev@...r.kernel.org, linux-rdma@...r.kernel.org,
dledford@...hat.com, Ariel.Elior@...ium.com
Subject: Re: [PATCH v2 net-next 06/12] qed: Add LL2 slowpath handling
From: "Kalderon, Michal" <Michal.Kalderon@...ium.com>
Date: Tue, 3 Oct 2017 18:05:32 +0000
> From: David Miller <davem@...emloft.net>
> Sent: Tuesday, October 3, 2017 8:17 PM
>>> @@ -423,6 +423,41 @@ static void qed_ll2_rxq_parse_reg(struct qed_hwfn *p_hwfn,
>>> }
>>>
>>> static int
>>> +qed_ll2_handle_slowpath(struct qed_hwfn *p_hwfn,
>>> + struct qed_ll2_info *p_ll2_conn,
>>> + union core_rx_cqe_union *p_cqe,
>>> + unsigned long *p_lock_flags)
>>> +{
>>...
>>> + spin_unlock_irqrestore(&p_rx->lock, *p_lock_flags);
>>> +
>>
>>You can't drop this lock.
>>
>>Another thread can enter the loop of our caller and process RX queue
>>entries, then we would return from here and try to process the same
>>entries again.
>
> The lock is there to synchronize access to the chains between qed_ll2_rxq_completion
> and qed_ll2_post_rx_buffer. qed_ll2_rxq_completion can't be called from
> different threads; the light L2 uses the single slowpath status block we have.
> We release the lock to avoid a deadlock: calling into the upper-layer driver
> may cause it to post additional rx buffers, which takes the same lock.
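
[Editor's note: a minimal sketch of the unlock-before-callback pattern described
above, not the actual qed driver code. The names my_conn, my_rx_queue and
slowpath_cb are hypothetical stand-ins for the driver's real structures; the
point is only that the rx lock is dropped around the upper-layer callback,
which may itself take the same lock when posting new rx buffers, and is
re-acquired before returning to the completion loop.]

	#include <linux/spinlock.h>

	struct my_rx_queue {
		spinlock_t lock;
		/* chain bookkeeping elided */
	};

	struct my_conn {
		struct my_rx_queue rx;
		/* may call the post-rx-buffer path, which takes rx.lock */
		void (*slowpath_cb)(struct my_conn *conn);
	};

	/* Called with rx->lock held; temporarily releases it around the callback. */
	static void my_handle_slowpath(struct my_conn *conn, unsigned long *flags)
	{
		struct my_rx_queue *rx = &conn->rx;

		/*
		 * Drop the lock before calling the upper layer: the callback
		 * may post additional rx buffers, and that path acquires
		 * rx->lock itself, so holding it here would deadlock.
		 */
		spin_unlock_irqrestore(&rx->lock, *flags);

		if (conn->slowpath_cb)
			conn->slowpath_cb(conn);

		/* Re-acquire before returning to the completion loop. */
		spin_lock_irqsave(&rx->lock, *flags);
	}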

Ok, please repost this patch series.
Thanks.