Message-Id: <20161029134949.356140870@linuxfoundation.org>
Date: Sat, 29 Oct 2016 09:49:30 -0400
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Ashok Raj Nagarajan <arnagara@....qualcomm.com>,
Kalle Valo <kvalo@....qualcomm.com>
Subject: [PATCH 4.8 052/125] ath10k: fix sending frame in management path in push txq logic
4.8-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ashok Raj Nagarajan <arnagara@....qualcomm.com>
commit e4fd726f21cdae0dc9cea6cbfcb7e27f21393f88 upstream.
In the wake tx queue path, we are not checking whether the frame to be
sent takes the management path or not. For example, a QoS null func frame
coming here will take the management path. Since we are not incrementing
the descriptor counter (num_pending_mgmt_tx) for management tx, it is
possible to see negative values on tx completion.
When the above counter reaches a negative value, we will not be sending a
probe response out.
if (is_presp &&
ar->hw_params.max_probe_resp_desc_thres < htt->num_pending_mgmt_tx)
For IPQ4019, max_probe_resp_desc_thres (u32) is 24 and is compared against
num_pending_mgmt_tx (int). The above condition comes true when the counter
is negative, and we drop the probe response.
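For reference, a minimal standalone sketch (not part of the patch; the
variable names just mirror the fields above) of the integer conversion at
work: when an int is compared against a u32, the usual arithmetic
conversions turn a negative counter into a huge unsigned value, so the
threshold check passes when it should not.

#include <stdio.h>

int main(void)
{
	unsigned int max_probe_resp_desc_thres = 24; /* u32 on IPQ4019 */
	int num_pending_mgmt_tx = -1;                /* counter gone negative */

	/* -1 is converted to UINT_MAX here, so "24 < 4294967295" is true */
	if (max_probe_resp_desc_thres < num_pending_mgmt_tx)
		printf("probe response dropped\n");

	return 0;
}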
To avoid this, check the tx path of the frame on the wake tx queue path as
well and increment the appropriate counters.
Fixes: cac085524cf1 ("ath10k: move mgmt descriptor limit handle under mgmt_tx")
Signed-off-by: Ashok Raj Nagarajan <arnagara@....qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@....qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/net/wireless/ath/ath10k/mac.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -3777,7 +3777,9 @@ int ath10k_mac_tx_push_txq(struct ieee80
enum ath10k_hw_txrx_mode txmode;
enum ath10k_mac_tx_path txpath;
struct sk_buff *skb;
+ struct ieee80211_hdr *hdr;
size_t skb_len;
+ bool is_mgmt, is_presp;
int ret;
spin_lock_bh(&ar->htt.tx_lock);
@@ -3801,6 +3803,22 @@ int ath10k_mac_tx_push_txq(struct ieee80
skb_len = skb->len;
txmode = ath10k_mac_tx_h_get_txmode(ar, vif, sta, skb);
txpath = ath10k_mac_tx_h_get_txpath(ar, skb, txmode);
+ is_mgmt = (txpath == ATH10K_MAC_TX_HTT_MGMT);
+
+ if (is_mgmt) {
+ hdr = (struct ieee80211_hdr *)skb->data;
+ is_presp = ieee80211_is_probe_resp(hdr->frame_control);
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ ret = ath10k_htt_tx_mgmt_inc_pending(htt, is_mgmt, is_presp);
+
+ if (ret) {
+ ath10k_htt_tx_dec_pending(htt);
+ spin_unlock_bh(&ar->htt.tx_lock);
+ return ret;
+ }
+ spin_unlock_bh(&ar->htt.tx_lock);
+ }
ret = ath10k_mac_tx(ar, vif, sta, txmode, txpath, skb);
if (unlikely(ret)) {
@@ -3808,6 +3826,8 @@ int ath10k_mac_tx_push_txq(struct ieee80
spin_lock_bh(&ar->htt.tx_lock);
ath10k_htt_tx_dec_pending(htt);
+ if (is_mgmt)
+ ath10k_htt_tx_mgmt_dec_pending(htt);
spin_unlock_bh(&ar->htt.tx_lock);
return ret;