Message-ID: <063D6719AE5E284EB5DD2968C1650D6D0F6C98FD@AcuExch.aculab.com>
Date: Mon, 24 Feb 2014 11:30:21 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Sathya Perla' <Sathya.Perla@...lex.Com>,
Somnath Kotur <Somnath.Kotur@...lex.Com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: "davem@...emloft.net" <davem@...emloft.net>,
Vasundhara Volam <Vasundhara.Volam@...lex.Com>
Subject: RE: [PATCH net 4/5] be2net: wait longer to reap TX compls in
be_close()
> From: Sathya Perla
> > From: David Laight [mailto:David.Laight@...LAB.COM]
> >
> > From: Of Somnath Kotur
> > > be_close() currently waits for a max of 200ms to receive all pending
> > > TX compls. This timeout value was roughly calculated based on 10G
> > > transmission speeds and the TX queue depth. This timeout may not be
> > > enough when the link is operating at lower speeds or in
> > > multi-channel/SR-IOV configs with TX rate-limiting settings.
> > > Increase the timeout to 2000ms and also bail out if there is
> > > a HW error.
> >
> > What about monitoring whether transmits are actually completing,
> > and giving up if none are completed within a suitable time period?
> >
>
> David, the code already does that. All the queues are checked for
> any new TX compls each 1ms in the be_close()->be_tx_compl_clean()
> path. This patch is not changing that logic.
Yes, but it might be better to modify the logic rather than just
increase the timeout.
The code is waiting for any pending transmits to complete.
The purpose of the timeout is to give up if the transmits aren't going
to complete.
So instead of assuming that a specific interval is long enough, abort
if no transmits have completed in (say) a 10ms period.
Move the sleep() to the top of the loop and abort if nothing is found
on any queue.
Oh and move the zeroing of cmpl and num_wrbs to a sensible place.
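
Something along these lines, as a rough sketch only - the helper names
(for_all_tx_queues, be_tx_compl_get, be_tx_compl_process, be_cq_notify,
be_hw_error) and their exact arguments are just modelled on my reading
of the driver, this is not the actual patch:

/* Sketch: give up if no completions were reaped from any queue within
 * an idle window (here 10 x 1ms), rather than waiting out one fixed
 * overall timeout.
 */
static void be_tx_compl_clean(struct be_adapter *adapter)
{
	struct be_tx_obj *txo;
	struct be_eth_tx_compl *txcp;
	int i, pending_txqs, idle = 0;

	do {
		/* sleep at the top of the loop so the HW has a chance
		 * to post completions before the first check
		 */
		mdelay(1);

		pending_txqs = adapter->num_tx_qs;
		for_all_tx_queues(adapter, txo, i) {
			/* zero the counters where they are actually used */
			u16 cmpl = 0, num_wrbs = 0;

			while ((txcp = be_tx_compl_get(&txo->cq)) != NULL) {
				/* exact argument to the process helper
				 * elided in this sketch
				 */
				num_wrbs += be_tx_compl_process(adapter, txo,
								txcp);
				cmpl++;
			}
			if (cmpl) {
				be_cq_notify(adapter, txo->cq.id, false, cmpl);
				atomic_sub(num_wrbs, &txo->q.used);
				idle = 0;	/* made progress */
			}
			if (atomic_read(&txo->q.used) == 0)
				pending_txqs--;
		}
		/* abort if nothing completed on any queue for ~10ms,
		 * or on a HW error
		 */
	} while (pending_txqs && ++idle <= 10 && !be_hw_error(adapter));
}

That keeps the 1ms polling interval, but the worst-case wait is bounded
by lack of progress rather than by a guess at the link speed.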
David