Message-ID: <1322012755.2039.36.camel@mojatatu>
Date: Tue, 22 Nov 2011 20:45:55 -0500
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Jesse Gross <jesse@...ira.com>
Cc: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
dev@...nvswitch.org
Subject: Re: [PATCH net-next 4/4] net: Add Open vSwitch kernel components.
On Tue, 2011-11-22 at 15:11 -0800, Jesse Gross wrote:
> As you mention, one of the biggest benefits of Open vSwitch is how
> simple the kernel portions are (it's less than 6000 lines).
I said that was the reason _you_ were using to justify things,
and I argue that it is not accurate.
You will be adding more actions and more classification fields to
the datapath - and you are going to add them to that monolithic
"simple" code. And it is going to grow.
BTW, you _are using some of the actions_ already (the policer, for
example, to do rate control; no disrespect intended, but in a terrible
way).
Eventually you will cannibalize that in your code because it is "simpler"
to do so.
So to be explicit: I don't think this is a good argument.
> It's
> existed as an out-of-tree project for several years now so it's
> actually fairly mature already and unlikely that there will be a
> sudden influx of new code over the coming months. There's already
> quite a bit of functionality that has been implemented on top of it
> and it's been mentioned that several other components can be written
> in terms of it
I very much empathize with this point. But that is not a technical
issue.
> so I think that it's fairly generic infrastructure that
> can be used in many ways. Over time, I think it will result in a net
> reduction of code in the kernel as the design is heavily focused on
> delegating work to userspace.
Both your goal and that of the Linux qos/filtering/action code are to be
modular and to move policy control out of the kernel. In our case,
any of the actions, classifiers, and qos schedulers can be experimented
with out of tree with zero patches needed and, when ready, pushed into
the kernel with zero code changes to the core. So nothing in what we
have says that policy control has to sit in the kernel.
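To make that concrete, here is a rough skeleton of an out-of-tree tc
action module. The "foo"/tcf_foo_* names are made up, and the exact
tc_action_ops fields and callback signatures have shifted a bit between
kernel versions, so treat it as an illustration rather than a compilable
module; the point is that one registration call plugs the action into
the existing core with no core changes:

    /*
     * Sketch only: skeleton of an out-of-tree tc action module ("act_foo").
     * Field names and callback signatures are illustrative.
     */
    #include <linux/module.h>
    #include <linux/skbuff.h>
    #include <net/act_api.h>
    #include <net/pkt_sched.h>

    /* Per-packet callback: inspect/modify the skb, return a TC_ACT_* verdict. */
    static int tcf_foo_act(struct sk_buff *skb, const struct tc_action *a,
                           struct tcf_result *res)
    {
            return TC_ACT_PIPE;     /* do nothing, let the next action run */
    }

    /* Called for "tc action add ... foo ...": parse attrs, set up state. */
    static int tcf_foo_init(struct nlattr *nla, struct nlattr *est,
                            struct tc_action *a, int ovr, int bind)
    {
            return 0;
    }

    static struct tc_action_ops act_foo_ops = {
            .kind   = "foo",
            .owner  = THIS_MODULE,
            .act    = tcf_foo_act,
            .init   = tcf_foo_init,
            /* .dump/.cleanup/.lookup/.walk omitted for brevity */
    };

    static int __init foo_module_init(void)
    {
            /* The only interaction with the core: register our ops. */
            return tcf_register_action(&act_foo_ops);
    }

    static void __exit foo_module_exit(void)
    {
            tcf_unregister_action(&act_foo_ops);
    }

    module_init(foo_module_init);
    module_exit(foo_module_exit);
    MODULE_LICENSE("GPL");

The same pattern holds for classifiers (register_tcf_proto_ops) and
qdiscs (register_qdisc), which is why the core never needs a patch.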
> I would view it as similar in many ways to the recently added team
> device, which is based on the idea of keeping simple things simple.
Good analogy, but it points in the wrong direction: bonding is a
monolithic Christmas tree which people kept adding code to because it
was "simpler" to do so.
Your code is heading that way: as OpenFlow progresses or some new thing
comes along (I notice CAPWAP), you'll be adding more code for more
classifiers, more actions, and maybe more schedulers, and you will have
to replicate things we already provide. And it all goes into this
monolithic code because it is "simpler".
Is there anything we do that makes it hard for you to use the
infrastructure provided? Is there anything you do that we can't
provide via the classifier-action-scheduler infrastructure?
If you need help let me know.
cheers,
jamal