Message-Id: <200708031241.58635.phillips@phunq.net>
Date: Fri, 3 Aug 2007 12:41:58 -0700
From: Daniel Phillips <phillips@...nq.net>
To: Evgeniy Polyakov <johnpol@....mipt.ru>
Cc: Peter Zijlstra <peterz@...radead.org>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: Distributed storage.
On Friday 03 August 2007 06:49, Evgeniy Polyakov wrote:
> ...rx has a global reserve (always allocated at startup, or some time
> well before reclaim/oom) where data is originally received (including
> skb, shared info and whatever is needed; a page is just an example),
> then it is copied into a per-socket reserve and the global reserve is
> reused for the next packet. Having a per-socket reserve allows
> progress in any situation, not only in cases where a single action
> must be received/processed, and it is completely fair for all users
> rather than only special sockets, so the admin, for example, would be
> allowed to log in, ipsec would work and so on...
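Just so we are talking about the same thing, here is a toy user-space
model of the scheme you describe as I read it: packets land in a
pre-allocated global rx reserve, get copied into the receiving socket's
own reserve, and the global slot is immediately reused. The names,
sizes and helpers below are invented for illustration and are not taken
from your patches.

/*
 * Toy model (plain user-space C, not kernel code) of a global rx
 * reserve feeding per-socket reserves.  Everything here is illustrative.
 */
#include <stdio.h>
#include <string.h>

#define PKT_SIZE      2048   /* room for skb, shared info and data */
#define GLOBAL_SLOTS  64     /* allocated at startup, before any reclaim/oom */

static char global_reserve[GLOBAL_SLOTS][PKT_SIZE];

struct socket_reserve {
        char   buf[PKT_SIZE];   /* this socket's private reserve */
        size_t len;
};

/* Receive into a global slot, copy into the socket's reserve, reuse slot. */
static int rx_via_reserve(struct socket_reserve *sk, int slot,
                          const void *wire_data, size_t len)
{
        if (len > PKT_SIZE)
                return -1;
        memcpy(global_reserve[slot], wire_data, len);  /* data lands here  */
        memcpy(sk->buf, global_reserve[slot], len);    /* per-socket copy  */
        sk->len = len;
        /* global_reserve[slot] is now free for the next incoming packet */
        return 0;
}

int main(void)
{
        struct socket_reserve nbd_sock = { .len = 0 };
        const char ack[] = "vm writeout acknowledgement";

        if (rx_via_reserve(&nbd_sock, 0, ack, sizeof(ack)) == 0)
                printf("socket now holds %zu bytes: %s\n",
                       nbd_sock.len, nbd_sock.buf);
        return 0;
}

The per-socket copy is what frees the global slot immediately for the
next packet, which I take to be where the complexity saving you mention
comes from.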
And when the global reserve is entirely used up, your system goes back
to dropping vm writeout acknowledgements, which is not so good. I like
your approach, and specifically the copying idea cuts out considerable
complexity. But I believe the per-socket flag to mark a socket as part
of the vm writeout path is not optional, and in this case it will be a
better world if it is a slightly unfair world in favor of vm writeout
traffic.
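To make the flag I have in mind concrete, here is an equally toy sketch
(the names sk_vmio, reserve_low and so on are invented for this mail
and are not your patch API): when the reserve runs low, only sockets
marked as part of the vm writeout path may keep drawing from it, and
everything else waits until the crunch passes.

/*
 * Toy user-space sketch of gating reserve access on a per-socket flag.
 * Purely illustrative names; not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct sock_model {
        bool vmio;   /* marked as belonging to the vm writeout path */
};

/* May a packet destined for @sk consume an emergency reserve slot? */
static bool may_use_reserve(const struct sock_model *sk, bool reserve_low)
{
        if (!reserve_low)
                return true;    /* no crunch: everybody proceeds as usual */
        return sk->vmio;        /* crunch: only vm writeout traffic gets in */
}

int main(void)
{
        struct sock_model nbd = { .vmio = true  };
        struct sock_model ssh = { .vmio = false };

        printf("during a crunch: nbd %s, ssh %s\n",
               may_use_reserve(&nbd, true) ? "served" : "dropped",
               may_use_reserve(&ssh, true) ? "served" : "dropped");
        return 0;
}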
Ssh will still work fine even with vm getting priority access to the
pool. During memory crunches, non-vm ssh traffic may get bumped until
after the crunch, but vm writeout is never supposed to hog the whole
machine. If vm writeout hogs your machine long enough to delay an ssh
login, then that is a vm bug and should be fixed at that level.
Regards,
Daniel