Message-ID: <CY1PR0701MB20128130D21FD3C54E45B5A188720@CY1PR0701MB2012.namprd07.prod.outlook.com>
Date: Tue, 3 Oct 2017 18:05:32 +0000
From: "Kalderon, Michal" <Michal.Kalderon@...ium.com>
To: David Miller <davem@...emloft.net>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"dledford@...hat.com" <dledford@...hat.com>,
"Elior, Ariel" <Ariel.Elior@...ium.com>
Subject: Re: [PATCH v2 net-next 06/12] qed: Add LL2 slowpath handling
From: David Miller <davem@...emloft.net>
Sent: Tuesday, October 3, 2017 8:17 PM
>> @@ -423,6 +423,41 @@ static void qed_ll2_rxq_parse_reg(struct qed_hwfn *p_hwfn,
>> }
>>
>> static int
>> +qed_ll2_handle_slowpath(struct qed_hwfn *p_hwfn,
>> + struct qed_ll2_info *p_ll2_conn,
>> + union core_rx_cqe_union *p_cqe,
>> + unsigned long *p_lock_flags)
>> +{
>...
>> + spin_unlock_irqrestore(&p_rx->lock, *p_lock_flags);
>> +
>
>You can't drop this lock.
>
>Another thread can enter the loop of our caller and process RX queue
>entries, then we would return from here and try to process the same
>entries again.
The lock is there to synchronize access to the chains between qed_ll2_rxq_completion
and qed_ll2_post_rx_buffer. qed_ll2_rxq_completion can't be called from
different threads, since the light L2 uses the single slowpath status block we have.
The reason we release the lock is to avoid a deadlock: the call into the
upper-layer driver may cause it to post additional rx buffers, which takes the same lock.