Message-ID: <1430941450.14545.84.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Wed, 06 May 2015 12:44:10 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Willem de Bruijn <willemb@...gle.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH net-next 4/7] packet: rollover lock contention avoidance
On Wed, 2015-05-06 at 14:27 -0400, Willem de Bruijn wrote:
> From: Willem de Bruijn <willemb@...gle.com>
>
> @@ -3718,6 +3726,10 @@ static unsigned int packet_poll(struct file *file, struct socket *sock,
> mask |= POLLOUT | POLLWRNORM;
> }
> spin_unlock_bh(&sk->sk_write_queue.lock);
> +
> + if (po->pressure && !(mask & POLLIN))
> + xchg(&po->pressure, 0);
> +
> return mask;
This looks racy to me : several threads could run here concurrently, and
one thread could clear the pressure flag just set by another.
Also, waiting for the queue to be completely empty before releasing
pressure makes recovery depend on scheduling; the queue might not be
drained for a while.
(We usually release at a max_occupancy/2 threshold instead.)
Maybe this would be better, but please check.
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 5102c3cc4eec..fab59f8bb336 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -3694,6 +3694,8 @@ static unsigned int packet_poll(struct file *file, struct socket *sock,
TP_STATUS_KERNEL))
mask |= POLLIN | POLLRDNORM;
}
+ if (po->pressure && !(mask & POLLIN))
+ xchg(&po->pressure, 0);
spin_unlock_bh(&sk->sk_receive_queue.lock);
spin_lock_bh(&sk->sk_write_queue.lock);
if (po->tx_ring.pg_vec) {
--