Message-Id: <1186144072.11797.55.camel@lappy>
Date: Fri, 03 Aug 2007 14:27:52 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Evgeniy Polyakov <johnpol@....mipt.ru>
Cc: Daniel Phillips <phillips@...nq.net>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: Distributed storage.
On Fri, 2007-08-03 at 14:57 +0400, Evgeniy Polyakov wrote:
> For receiving, the situation is worse, since the system does not know
> in advance which socket a given packet will belong to, so it must
> allocate from a global pool (and thus there must be an independent
> global reserve), and then exchange part of the socket's reserve with
> the global one (or just copy the packet into a new one allocated from
> the socket's reserve, if one was set up, or drop it otherwise). An
> independent global reserve is what I proposed when I stopped
> advertising the network allocator, but it seems that it was not taken
> into account: in Peter's patches the reserve was always allocated only
> when the system was under serious memory pressure, without any notion
> of per-socket reservation.
This is not true. I have a global reserve which is set up a priori. You
cannot allocate a reserve when under pressure; that does not make sense.
Let me explain my approach once again.
At swapon(8) time we allocate a global reserve and associate the needed
sockets with it. The size of this global reserve is made up of two
parts:
- TX
- RX
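To make that concrete, here is a minimal sketch of what such an a-priori
reserve looks like. This is plain illustrative C with made-up names
(mem_reserve, mem_reserve_init), not code from the actual patches:

    #include <stddef.h>

    /*
     * Illustrative only: the reserve is a fixed-size object, sized once
     * from the swapon path, and never grown under memory pressure.
     */
    struct mem_reserve {
        size_t tx_bytes;    /* worst-case TX (writeout) needs */
        size_t rx_bytes;    /* worst-case RX needs, detailed below */
    };

    /* Called once at swapon(8) time, before any memory pressure exists. */
    static void mem_reserve_init(struct mem_reserve *r,
                                 size_t tx_bytes, size_t rx_bytes)
    {
        r->tx_bytes = tx_bytes;
        r->rx_bytes = rx_bytes;
    }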
The RX pool is the most interesting part. It again is made up of two
parts:
- skb
- auxiliary data
The skb part is scaled such that it can overflow the IP fragment
reassembly, the aux pool such that it can overflow the route cache
(which was the largest other allocator in the RX path).
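As a very rough illustration of that scaling, a hypothetical sizing
helper is sketched below; the inputs are examples of the kind of limits
consulted (e.g. net.ipv4.ipfrag_high_thresh for reassembly, the route
cache capacity for aux), not necessarily the exact values the patches
read:

    #include <stddef.h>

    /*
     * Hypothetical sizing sketch: the skb part must be able to absorb a
     * full IP fragment reassembly backlog; the aux part must cover the
     * largest other allocator in the RX path (the route cache).
     */
    static size_t rx_reserve_size(size_t ipfrag_high_thresh,
                                  size_t rt_max_entries,
                                  size_t rt_entry_size)
    {
        size_t skb_part = ipfrag_high_thresh;             /* frag backlog */
        size_t aux_part = rt_max_entries * rt_entry_size; /* route cache */

        /* becomes mem_reserve.rx_bytes in the sketch above */
        return skb_part + aux_part;
    }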
All (reserve) RX skb allocations are accounted, so as to never allocate
more than we reserved.
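The accounting itself is no more than a bounded counter. A minimal
user-space model of the idea (hypothetical names, C11 atomics standing
in for the kernel's atomic ops):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    static atomic_size_t rx_reserve_used;   /* bytes currently charged */
    static size_t rx_reserve_limit;         /* fixed at swapon time */

    /* Charge 'bytes' against the RX reserve; refuse rather than overshoot. */
    static bool rx_reserve_charge(size_t bytes)
    {
        size_t old = atomic_load(&rx_reserve_used);

        do {
            if (old + bytes > rx_reserve_limit)
                return false;               /* would exceed the reserve */
        } while (!atomic_compare_exchange_weak(&rx_reserve_used, &old,
                                               old + bytes));
        return true;
    }

    /* Give the memory back when the skb is freed (e.g. dropped at demux). */
    static void rx_reserve_uncharge(size_t bytes)
    {
        atomic_fetch_sub(&rx_reserve_used, bytes);
    }

Every reserve skb allocation would go through the charge side, and every
free (including the drop at socket demux below) through the uncharge
side.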
All packets are received (given the limit) and are processed up to
socket demux. At that point all packets not targeted at an associated
socket are dropped and the skb memory freed - ready for another packet.
All packets targeted at associated sockets get processed. This requires
that the packet processing happens in-kernel: since we are swapping,
user-space might be waiting for this data, and we'd deadlock.
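In sketch form the demux-time decision looks like this; it is a
user-space model with made-up types and helpers (the real code of course
works on struct sk_buff and struct sock), reusing rx_reserve_uncharge()
from the accounting sketch above:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    struct packet {                 /* stand-in for struct sk_buff */
        bool from_reserve;          /* allocated from the RX reserve? */
        size_t size;
    };

    struct socket_ctx {             /* stand-in for struct sock */
        bool reserve_user;          /* associated with the reserve? */
    };

    void rx_reserve_uncharge(size_t bytes);  /* from the sketch above */

    /* Returns 1 if the packet is to be processed, 0 if it was dropped. */
    static int demux_decision(struct packet *pkt, struct socket_ctx *sk)
    {
        if (!pkt->from_reserve)
            return 1;               /* normal path, completely untouched */

        if (!sk || !sk->reserve_user) {
            /* Not an associated socket: drop now and return the memory
             * so the reserve can take another packet. */
            rx_reserve_uncharge(pkt->size);
            free(pkt);
            return 0;
        }

        /* Associated socket: must be consumed in-kernel, because the
         * user-space consumer may itself be blocked on swap. */
        return 1;
    }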
I'm not quite sure why you need per-socket reservations.