Message-Id: <1167836172.4187.9.camel@stevo-desktop>
Date: Wed, 03 Jan 2007 08:56:12 -0600
From: Steve Wise <swise@...ngridcomputing.com>
To: "Michael S. Tsirkin" <mst@...lanox.co.il>
Cc: Roland Dreier <rdreier@...co.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, openib-general@...nib.org
Subject: Re: [PATCH v4 01/13] Linux RDMA Core Changes
> > >
> > > It seems all Chelsio needs is to pass in a consumer index - so, how about a new
> > > entry point? Something like void set_cq_udata(struct ib_cq *cq, struct ib_udata *udata)?
> > >
> >
> > Adding a new entry point would hurt chelsio's user mode performance if
> > it then requires 2 kernel transitions to rearm the cq.
>
> No, it won't need 2 transitions - just an extra function call,
> so it won't hurt performance - it would improve performance.
>
> ib_uverbs_req_notify_cq would call
>
> ib_uverbs_req_notify_cq()
> {
>         ib_set_cq_udata(cq, udata);
>         ib_req_notify_cq(cq, cmd.solicited_only ?
>                          IB_CQ_SOLICITED : IB_CQ_NEXT_COMP);
> }
>
ib_set_cq_udata() would transition into the kernel to pass in the
consumer's index. In addition, ib_req_notify_cq would also transition
into the kernel, since it's not a bypass function for chelsio.
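
(For illustration only -- a hypothetical userspace sketch, not libcxgb3 and
not the real uverbs command layout: since req_notify is not a bypass
operation for chelsio, the provider library has to trap into the kernel
anyway, so the cheapest scheme is to piggyback the current consumer index
on that single transition, however the kernel side then splits the work.)

#include <stdint.h>
#include <unistd.h>

/*
 * Hypothetical command layout -- the real uverbs ABI prepends a command
 * header; the point is only that the consumer index rides along on the
 * one kernel transition the rearm already costs a non-bypass device.
 */
struct example_req_notify_cmd {
        uint32_t cq_handle;
        uint32_t solicited_only;
        uint32_t consumer_index;        /* provider-specific payload */
};

static int example_req_notify(int uverbs_fd, uint32_t cq_handle,
                              int solicited_only, uint32_t consumer_index)
{
        struct example_req_notify_cmd cmd = {
                .cq_handle      = cq_handle,
                .solicited_only = solicited_only ? 1 : 0,
                .consumer_index = consumer_index,
        };

        /* one write() == one kernel transition for the whole operation */
        return write(uverbs_fd, &cmd, sizeof cmd) == (ssize_t) sizeof cmd ? 0 : -1;
}
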
> This way kernel consumers don't incur any overhead,
> and in userspace the extra function call is dwarfed
> by the system call overhead.
>
> > Passing in user data is sort of SOP for these sorts of verbs.
>
> I don't see other examples. Where we did pass extra user data
> is in non-data path verbs such as create QP.
>
> This is the innermost tight loop in many ULPs, so we should be very careful
> about adding code there - these things do add up.
> See the recent IRQ API update in the kernel.
Roland, do you have any comments on this? You previously indicated
these patches were good to go once chelsio's ethernet driver gets pulled
in.
> > How much does passing one more param cost for kernel users?
>
> Dunno. I just reviewed the code.
> It really should be up to the patch submitter to check the performance
> effect of his patch, if there might be any.
I've run this code with mthca and didn't notice any performance
degradation, but I wasn't specifically measuring cq_poll overhead in a
tight loop...
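
(Sketch of the kind of in-kernel micro-measurement that would show it --
purely illustrative, the function name is made up and device/CQ setup and
module boilerplate are elided:)

#include <linux/kernel.h>
#include <linux/ktime.h>
#include <rdma/ib_verbs.h>

/*
 * Time ib_poll_cq() on an empty CQ in a tight loop; any extra per-call
 * cost from the added parameter would show up here for kernel ULPs.
 */
static void time_empty_polls(struct ib_cq *cq, long iters)
{
        struct ib_wc wc;
        ktime_t t0, t1;
        long i;

        t0 = ktime_get();
        for (i = 0; i < iters; i++)
                ib_poll_cq(cq, 1, &wc);         /* returns 0 when the CQ is empty */
        t1 = ktime_get();

        printk(KERN_INFO "%ld empty polls in %lld ns\n",
               iters, (long long) ktime_to_ns(ktime_sub(t1, t0)));
}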