Message-Id: <20070820.235804.85409183.davem@davemloft.net>
Date: Mon, 20 Aug 2007 23:58:04 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: rdreier@...co.com
Cc: tom@...ngridcomputing.com, jeff@...zik.org,
swise@...ngridcomputing.com, mshefty@...ips.intel.com,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
general@...ts.openfabrics.org
Subject: Re: [ofa-general] Re: [PATCH RFC] RDMA/CMA: Allocate PS_TCP ports
from the host TCP port space.
From: Roland Dreier <rdreier@...co.com>
Date: Mon, 20 Aug 2007 18:16:54 -0700
> And direct data placement really does give you a factor of two at
> least, because otherwise you're stuck receiving the data in one
> buffer, looking at some of the data at least, and then figuring out
> where to copy it. And memory bandwidth is, if anything, becoming more
> valuable; maybe LRO + header splitting + page remapping tricks can get
> you somewhere, but as NCPUS grows it seems the TLB shootdown cost
> of page flipping is only going to get worse.
As Herbert has said already, people can code for this just like
they have to code for RDMA.
There is no fundamental difference between this and converting an
application to use sendfile() or similar.
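
For comparison, the transmit side is exactly that kind of conversion
(a minimal sketch of a plain sendfile() loop, error paths trimmed):

    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Zero-copy transmit: the kernel feeds pages straight from the
     * page cache to the socket; no user-space buffer is touched. */
    static int send_file(int sock, const char *path)
    {
            struct stat st;
            off_t off = 0;
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                    return -1;
            if (fstat(fd, &st) < 0) {
                    close(fd);
                    return -1;
            }
            while (off < st.st_size) {
                    if (sendfile(sock, fd, &off, st.st_size - off) <= 0)
                            break;
            }
            close(fd);
            return off == st.st_size ? 0 : -1;
    }
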
The only thing this needs is a
"recvmsg_I_dont_care_where_the_data_is()" call. There are no alignment
issues unless you are trying to push this data directly into the
page cache.
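
From the application's side that might look something like this
(every name below is invented, none of it exists today; the point is
only that the kernel, not the application, decides where the bytes
land):

    #include <stddef.h>

    /* Hypothetical interface: the caller supplies no buffer, and the
     * kernel returns descriptors for the pages the NIC accumulated
     * this socket's data into. */
    struct data_chunk {
            void   *addr;   /* kernel-chosen, page-aligned */
            size_t  len;
    };

    int recvmsg_anywhere(int sock, struct data_chunk *chunks, int max);
    int recvmsg_anywhere_done(int sock, struct data_chunk *chunks, int n);

    void consume(const void *buf, size_t len);  /* application's parser */

    void rx_loop(int sock)
    {
            struct data_chunk c[16];
            int i, n;

            while ((n = recvmsg_anywhere(sock, c, 16)) > 0) {
                    for (i = 0; i < n; i++)
                            consume(c[i].addr, c[i].len); /* in place, no copy */
                    recvmsg_anywhere_done(sock, c, n);    /* return the pages */
            }
    }
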
Couple this with a card that ensures that, on a per-page basis, only
data for a particular flow (or group of flows) accumulates.
People already make cards that can do this; it can be done
statelessly with an on-chip, dynamically maintained flow table, as
sketched below.
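
The steering logic amounts to something like this (a sketch only, in
C for clarity; the table size, the hash, and alloc_rx_page() are all
made up):

    #include <stdint.h>

    #define PAGE_SIZE  4096
    #define FLOW_SLOTS 1024

    /* One slot per hash bucket; a collision simply evicts the old
     * flow, so no per-connection state is kept (stateless). */
    struct flow_slot {
            uint32_t flow_hash;  /* current owner of the slot */
            uint32_t page;       /* page this flow accumulates into */
            uint32_t offset;     /* next free byte within that page */
    };

    static struct flow_slot table[FLOW_SLOTS];

    uint32_t alloc_rx_page(void);  /* hypothetical: fresh DMA page */

    /* Pick the page an incoming payload gets DMA'd into, so that
     * each page only ever holds data from a single flow. */
    static uint32_t place_payload(uint32_t flow_hash, uint32_t len)
    {
            struct flow_slot *s = &table[flow_hash % FLOW_SLOTS];

            if (s->flow_hash != flow_hash ||
                s->offset + len > PAGE_SIZE) {
                    /* new flow, collision, or page full:
                     * start accumulating on a fresh page */
                    s->flow_hash = flow_hash;
                    s->page = alloc_rx_page();
                    s->offset = 0;
            }
            s->offset += len;
            return s->page;
    }
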
And best of all, it doesn't turn off every feature in the networking
stack, nor bypass the stack for the actual protocol processing.