Message-id: <1274341448-14924-8-git-send-email-sjur.brandeland@stericsson.com>
Date: Thu, 20 May 2010 09:44:07 +0200
From: sjur.brandeland@...ricsson.com
To: netdev@...r.kernel.org, davem@...emloft.net
Cc: marcel@...tmann.org, daniel.martensson@...ricsson.com,
linus.walleij@...ricsson.com, sjurbren@...il.com,
Sjur Braendeland <sjur.brandeland@...ricsson.com>
Subject: [PATCH net-next-2.6 7/8] caif: Bugfix - Poll can't return POLLHUP
while connecting.
From: Sjur Braendeland <sjur.brandeland@...ricsson.com>
While connecting, poll must not return POLLHUP; it should return POLLOUT
once the socket is connected.
Also fix the flow-control counters exported via debugfs, which were not
incremented correctly.
Signed-off-by: Sjur Braendeland <sjur.brandeland@...ricsson.com>
---
net/caif/caif_socket.c | 22 ++++++----------------
1 files changed, 6 insertions(+), 16 deletions(-)
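
For illustration only (not part of the patch): with a non-blocking connect in
progress, an application learns about completion by polling for POLLOUT; if
poll() reported POLLHUP while the connect was still pending, a loop like the
sketch below would give up on a perfectly healthy socket. The sketch uses an
AF_INET socket and an arbitrary loopback address purely so it builds and runs
anywhere without a CAIF link; the same pattern applies to an AF_CAIF socket.

/* Illustrative userspace sketch: non-blocking connect + poll(POLLOUT). */
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(80) };
        struct pollfd pfd;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        fcntl(fd, F_SETFL, O_NONBLOCK);
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* arbitrary peer */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 &&
            errno != EINPROGRESS) {
                perror("connect");
                return 1;
        }

        pfd.fd = fd;
        pfd.events = POLLOUT;   /* connect completion is reported here */
        if (poll(&pfd, 1, 5000) > 0) {
                if (pfd.revents & (POLLERR | POLLHUP))
                        printf("connection failed or peer hung up\n");
                else if (pfd.revents & POLLOUT)
                        printf("connected, socket is writable\n");
        }
        close(fd);
        return 0;
}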
diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
index 2c2f7d7..4bf2ed6 100644
--- a/net/caif/caif_socket.c
+++ b/net/caif/caif_socket.c
@@ -139,7 +139,7 @@ void caif_flow_ctrl(struct sock *sk, int mode)
 {
         struct caifsock *cf_sk;
         cf_sk = container_of(sk, struct caifsock, sk);
-        if (cf_sk->layer.dn)
+        if (cf_sk->layer.dn && cf_sk->layer.dn->modemcmd)
                 cf_sk->layer.dn->modemcmd(cf_sk->layer.dn, mode);
 }
 
@@ -163,9 +163,8 @@ int caif_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
                         atomic_read(&cf_sk->sk.sk_rmem_alloc),
                         sk_rcvbuf_lowwater(cf_sk));
                 set_rx_flow_off(cf_sk);
-                if (cf_sk->layer.dn)
-                        cf_sk->layer.dn->modemcmd(cf_sk->layer.dn,
-                                        CAIF_MODEMCMD_FLOW_OFF_REQ);
+                dbfs_atomic_inc(&cnt.num_rx_flow_off);
+                caif_flow_ctrl(sk, CAIF_MODEMCMD_FLOW_OFF_REQ);
         }
 
         err = sk_filter(sk, skb);
@@ -176,9 +175,8 @@ int caif_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
                 trace_printk("CAIF: %s():"
                         " sending flow OFF due to rmem_schedule\n",
                         __func__);
-                if (cf_sk->layer.dn)
-                        cf_sk->layer.dn->modemcmd(cf_sk->layer.dn,
-                                        CAIF_MODEMCMD_FLOW_OFF_REQ);
+                dbfs_atomic_inc(&cnt.num_rx_flow_off);
+                caif_flow_ctrl(sk, CAIF_MODEMCMD_FLOW_OFF_REQ);
         }
         skb->dev = NULL;
         skb_set_owner_r(skb, sk);
@@ -286,17 +284,13 @@ static void caif_check_flow_release(struct sock *sk)
 {
         struct caifsock *cf_sk = container_of(sk, struct caifsock, sk);
 
-        if (cf_sk->layer.dn == NULL || cf_sk->layer.dn->modemcmd == NULL)
-                return;
-
         if (rx_flow_is_on(cf_sk))
                 return;
 
         if (atomic_read(&sk->sk_rmem_alloc) <= sk_rcvbuf_lowwater(cf_sk)) {
                 dbfs_atomic_inc(&cnt.num_rx_flow_on);
                 set_rx_flow_on(cf_sk);
-                cf_sk->layer.dn->modemcmd(cf_sk->layer.dn,
-                                        CAIF_MODEMCMD_FLOW_ON_REQ);
+                caif_flow_ctrl(sk, CAIF_MODEMCMD_FLOW_ON_REQ);
         }
 }
 
@@ -1019,10 +1013,6 @@ static unsigned int caif_poll(struct file *file,
                 (sk->sk_shutdown & RCV_SHUTDOWN))
                 mask |= POLLIN | POLLRDNORM;
 
-        /* Connection-based need to check for termination and startup */
-        if (sk->sk_state == CAIF_DISCONNECTED)
-                mask |= POLLHUP;
-
         /*
          * we set writable also when the other side has shut down the
          * connection. This prevents stuck sockets.
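
For illustration only (not part of the patch): the change routes every
flow-control request through the caif_flow_ctrl() helper so that the NULL
checks on the downstream layer and its modemcmd hook live in one place, and
the debugfs counter is bumped at every flow-off site. The standalone sketch
below mirrors that shape with simplified stand-in types; layer, flow_ctrl and
num_rx_flow_off here are illustrative names, not the kernel structures.

/* Standalone sketch of the "single guarded helper" pattern. */
#include <stdio.h>

enum modem_cmd { FLOW_ON_REQ, FLOW_OFF_REQ };

struct layer {
        struct layer *dn;                       /* downstream layer, may be NULL */
        void (*modemcmd)(struct layer *l, enum modem_cmd cmd);  /* hook, may be NULL */
};

static unsigned long num_rx_flow_off;           /* stand-in for the debugfs counter */

/* One choke point for flow control: callers no longer repeat the
 * NULL checks on the downstream layer and its modemcmd hook. */
static void flow_ctrl(struct layer *l, enum modem_cmd cmd)
{
        if (l->dn && l->dn->modemcmd)
                l->dn->modemcmd(l->dn, cmd);
}

static void print_cmd(struct layer *l, enum modem_cmd cmd)
{
        (void)l;
        printf("modemcmd %d delivered downstream\n", cmd);
}

int main(void)
{
        struct layer down = { .dn = NULL, .modemcmd = print_cmd };
        struct layer sock = { .dn = &down, .modemcmd = NULL };

        num_rx_flow_off++;                      /* the event is counted... */
        flow_ctrl(&sock, FLOW_OFF_REQ);         /* ...and sent (hook present) */

        down.modemcmd = NULL;                   /* hook missing: helper silently skips it */
        num_rx_flow_off++;
        flow_ctrl(&sock, FLOW_OFF_REQ);

        printf("flow-off events counted: %lu\n", num_rx_flow_off);
        return 0;
}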
--
1.6.3.3
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html