Message-ID: <ada7inz85u4.fsf@cisco.com>
Date: Mon, 13 Aug 2007 14:47:31 -0700
From: Roland Dreier <rdreier@...co.com>
To: hadi@...erus.ca
Cc: Shirley Ma <xma@...ibm.com>, David Miller <davem@...emloft.net>,
jgarzik@...ox.com, netdev@...r.kernel.org, rusty@...tcorp.com.au,
shemminger@...ux-foundation.org
Subject: Re: [PATCH RFC]: napi_struct V5
> > Could you explain why this is unfair?
>
> The simple answer is that the core attempts DRR (Deficit Round Robin)
> scheduling; see the paper by Shreedhar and Varghese for details.
> If you have multiple users of a resource (network interfaces in this
> case), then the quantum defines their weight. If you use more than your
> fair quota, then you are being unfair.
OK, I think we were talking past each other. As you noted later, the
current behavior *is* fair, since it uses less than the full quota.
What I couldn't understand was why everyone was telling me that using
*less* than a full quota was unfair -- and now I think we all agree
that it is fair.
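
(Aside, for anyone following along: the DRR accounting hadi refers to
works roughly like the toy below.  A self-contained sketch with
made-up names -- real DRR counts bytes rather than whole packets --
but it shows why a user that sends less than its quantum, like a NAPI
poll that uses less than its budget, stays within its fair share.)

#include <stdio.h>

/* Toy Deficit Round Robin (Shreedhar & Varghese).  Each user is
 * granted a quantum per round; unused allowance accumulates as
 * deficit, and a user that stays at or under its quantum never
 * exceeds its weighted share of the resource. */
struct drr_user {
	const char *name;
	int quantum;	/* weight: allowance granted each round */
	int deficit;	/* unused allowance carried forward */
	int backlog;	/* work waiting, e.g. packets queued */
};

static void drr_round(struct drr_user *u, int n)
{
	for (int i = 0; i < n; i++) {
		u[i].deficit += u[i].quantum;
		while (u[i].backlog > 0 && u[i].deficit > 0) {
			u[i].backlog--;	/* serve one packet */
			u[i].deficit--;
		}
		if (u[i].backlog == 0)
			u[i].deficit = 0;	/* idle users don't hoard credit */
	}
}

int main(void)
{
	struct drr_user users[] = {
		{ "eth0", 64, 0, 100 },
		{ "ib0",  64, 0,  10 },	/* uses less than its quota: still fair */
	};

	drr_round(users, 2);
	for (int i = 0; i < 2; i++)
		printf("%s: backlog %d, deficit %d\n",
		       users[i].name, users[i].backlog, users[i].deficit);
	return 0;
}
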
> Talk to your vendor (your hardware guys in your case ;->) next time
> to fix their chip. The best scheme is to allow clear-on-write
> of only the specific bit/event.
Actually it is more of an IB spec issue than a hardware issue.  IPoIB
is really more of a protocol than a hardware driver in many ways; one
analogy would be PPP, which runs over serial drivers ranging from
analog modems to cellular data cards using USB serial to PPPoE.

The event-handling primitives that the IB spec defines make this NAPI
gap -- a completion arriving between the end of one poll and the
re-arming of the event -- unavoidable.  But in fact a lot of IB
adapter hardware has somewhat better event handling, so the gap never
occurs.  We take advantage of this by hard-coding the "request event"
operation never to return the missed-event hint on hardware where we
know a missed event is impossible by design.  However, IPoIB has to
run on all IB hardware, so we can only assume least-common-denominator
behavior, which means we need the missed-event handling for the
hardware where it does apply.
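
To make the gap concrete, here is a sketch of the poll routine
pattern.  It follows what drivers/infiniband/ulp/ipoib does, but uses
the napi_complete()/napi_reschedule() names and made-up field names,
so read it as an illustration rather than the exact driver code:

/* Reap completions; if we finish under budget, complete NAPI and
 * re-arm the CQ.  IB_CQ_REPORT_MISSED_EVENTS makes the re-arm
 * request return > 0 if a completion slipped in between the last
 * poll and the re-arm -- the gap discussed above.  Hardware where
 * the gap is impossible by design simply always returns 0 here. */
static int ipoib_poll(struct napi_struct *napi, int budget)
{
	struct ipoib_dev_priv *priv =
		container_of(napi, struct ipoib_dev_priv, napi);
	int done = 0;

poll_more:
	/* ... poll up to (budget - done) completions, bumping done ... */

	if (done < budget) {
		napi_complete(napi);
		if (unlikely(ib_req_notify_cq(priv->recv_cq,
					      IB_CQ_NEXT_COMP |
					      IB_CQ_REPORT_MISSED_EVENTS) > 0) &&
		    napi_reschedule(napi))
			goto poll_more;
	}

	return done;
}

Taking another pass through poll_more is also what lets the extra
packets pile up and amortizes the polling overhead, per the
benchmarking below.
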
> > The second one seems to perform better because in the missed event
> > case, it gives a few more packets a chance to arrive so that we can
> > amortize the polling overhead a little more.
>
> Theory makes sense. Have you validated?
Yes, IBM (the people with the adapter hardware where this path
triggers) benchmarked it and report that allowing the work to pile up
makes a huge performance difference.  In fact the size of the
difference is a little suspicious -- I wouldn't expect the missed
event path to be common enough to matter that much, but for some
reason it does.
- R.