Message-Id: <20200819174740.47291-1-viremana@linux.microsoft.com>
Date: Wed, 19 Aug 2020 17:47:40 +0000
From: Vineeth Pillai <viremana@linux.microsoft.com>
To: Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>
Cc: Vineeth Pillai <viremana@linux.microsoft.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] hv_utils: drain the timesync packets on onchannelcallback
There could be instances where a system stall prevents the timesync
packets from being consumed, which can leave more than one packet
pending in the ring buffer. The current code reads only one packet per
callback, and that packet might be a stale one. So drain all the
packets from the ring buffer on each callback and update the host
timestamp from the latest one.
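For illustration only (not part of the patch): below is a minimal
user-space C sketch of the same drain-and-keep-last pattern. Here
recv_one_packet() and the pending[] array are hypothetical stand-ins
for vmbus_recvpacket() and the VMBus ring buffer.

/*
 * Illustration only -- NOT the kernel code.  Drain every pending
 * packet and act only on the most recent one.
 */
#include <stdio.h>

#define PKT_SIZE 64

static const char *pending[] = { "stale-1", "stale-2", "latest" };
static unsigned int head;

/* Returns 0; *len is 0 once the "ring" is empty (mirrors recvlen == 0). */
static int recv_one_packet(char *buf, unsigned int buflen, unsigned int *len)
{
	if (head >= sizeof(pending) / sizeof(pending[0])) {
		*len = 0;
		return 0;
	}
	*len = (unsigned int)snprintf(buf, buflen, "%s", pending[head++]);
	return 0;
}

int main(void)
{
	char buf[PKT_SIZE];
	char last[PKT_SIZE] = "";
	unsigned int len;

	/* Drain the ring; remember only the newest packet. */
	while (1) {
		int ret = recv_one_packet(buf, sizeof(buf), &len);

		if (ret) {
			fprintf(stderr, "recv failed: %d\n", ret);
			break;
		}
		if (!len)
			break;	/* nothing left pending */

		snprintf(last, sizeof(last), "%s", buf);
	}

	printf("acting on packet: %s\n", last);	/* prints "latest" */
	return 0;
}

With multiple stale entries queued, only the last packet read before
the ring empties is acted on, which is the behaviour the patch gives
the timesync callback.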
Signed-off-by: Vineeth Pillai <viremana@linux.microsoft.com>
---
drivers/hv/hv_util.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
index 1357861fd8ae..c0491b727fd5 100644
--- a/drivers/hv/hv_util.c
+++ b/drivers/hv/hv_util.c
@@ -387,10 +387,23 @@ static void timesync_onchannelcallback(void *context)
 	struct ictimesync_ref_data *refdata;
 	u8 *time_txf_buf = util_timesynch.recv_buffer;
 
-	vmbus_recvpacket(channel, time_txf_buf,
-			 HV_HYP_PAGE_SIZE, &recvlen, &requestid);
+	/*
+	 * Drain the ring buffer and use the last packet to update
+	 * host_ts
+	 */
+	while (1) {
+		int ret = vmbus_recvpacket(channel, time_txf_buf,
+					   HV_HYP_PAGE_SIZE, &recvlen,
+					   &requestid);
+		if (ret) {
+			pr_warn("TimeSync IC pkt recv failed (Err: %d)\n",
+				ret);
+			break;
+		}
+
+		if (!recvlen)
+			break;
 
-	if (recvlen > 0) {
 		icmsghdrp = (struct icmsg_hdr *)&time_txf_buf[
 			sizeof(struct vmbuspipe_hdr)];
 
--
2.17.1