Message-ID: <CALDO+SYW1-Or7z93+qzKvx-wtMAH-h1fpsTcPCXNuWDfOepBnQ@mail.gmail.com>
Date: Thu, 3 Oct 2019 10:08:57 -0700
From: William Tu <u9012063@...il.com>
To: xiangxia.m.yue@...il.com
Cc: ovs dev <dev@...nvswitch.org>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>,
Greg Rose <gvrose8192@...il.com>,
Eelco Chaudron <echaudro@...hat.com>,
pravin shelar <pshelar@....org>
Subject: Re: [ovs-dev] [PATCH net-next 0/9] optimize openvswitch flow looking up
Hi Tonghao,
Thanks for the patch.
> On 29 Sep 2019, at 19:09, xiangxia.m.yue@...il.com wrote:
>
> > From: Tonghao Zhang <xiangxia.m.yue@...il.com>
> >
> > This patch series optimizes openvswitch.
> >
> > Patch 1, 2, 4: Port Pravin B Shelar's patches to
> > upstream Linux with small changes.
> >
I thought the idea of adding another cache in front of the flow-mask
lookup was rejected before, due to the usual problems with caches, e.g.
a cache is exploitable, and performance still suffers once the cache
is full. See David's slides below:
[1] http://vger.kernel.org/~davem/columbia2012.pdf
Do you have a rough number for how many flows this flow-mask
cache can handle?
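For readers following along, the mask-cache idea under discussion can be sketched roughly as below. This is a minimal user-space illustration under my own assumptions, not the kernel code: the names (`mc_entry`, `masked_match`, `flow_lookup`) and the single-flow stand-in table are made up for the sketch. The cache, indexed by skb hash, remembers which mask matched last time, so the fast path tries one mask instead of scanning the whole mask list.

```c
#include <stdint.h>
#include <string.h>

#define MC_SIZE   256        /* cache entries, power of two */
#define KEY_LEN   16         /* simplified flow-key length  */
#define MAX_MASKS 8

struct mask     { uint8_t bits[KEY_LEN]; int in_use; };
struct mc_entry { uint32_t skb_hash; int mask_idx; int valid; };

static struct mask masks[MAX_MASKS];
static struct mc_entry cache[MC_SIZE];

/* Stand-in for the real masked hash-table lookup: one stored flow. */
static uint8_t flow_key[KEY_LEN];
static int flow_mask_idx = -1;

static int masked_match(const uint8_t *key, int mi)
{
    if (mi != flow_mask_idx)
        return 0;
    for (int i = 0; i < KEY_LEN; i++)
        if ((key[i] & masks[mi].bits[i]) !=
            (flow_key[i] & masks[mi].bits[i]))
            return 0;
    return 1;
}

/* Try the cached mask first; fall back to scanning every mask and
 * refresh the cache slot on a hit.  Returns the matching mask index
 * or -1 on a miss. */
static int flow_lookup(const uint8_t *key, uint32_t skb_hash)
{
    struct mc_entry *e = &cache[skb_hash & (MC_SIZE - 1)];

    if (e->valid && e->skb_hash == skb_hash &&
        masked_match(key, e->mask_idx))
        return e->mask_idx;                /* fast path: one mask tried */

    for (int i = 0; i < MAX_MASKS; i++) {  /* slow path: scan all masks */
        if (masks[i].in_use && masked_match(key, i)) {
            e->skb_hash = skb_hash;
            e->mask_idx = i;
            e->valid    = 1;
            return i;
        }
    }
    return -1;
}
```

When many distinct skb hashes collide on the same slot, every packet falls through to the slow path, which is exactly the "cache is full" degradation mentioned above.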
> > Patch 5, 6, 7: Optimize the flow looking up and
> > simplify the flow hash.
I think this is great.
I wonder what the performance improvement is when the flow-mask
cache is full?
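On the flow-hash simplification: as I understand it, the idea is to hash only the byte range the mask actually covers, since wildcarded bytes outside that range cannot affect the bucket. A toy illustration, with FNV-1a standing in for the kernel's jhash and made-up names (`flow_hash`, `start`, `end`):

```c
#include <stddef.h>
#include <stdint.h>

/* Hash only [start, end) of the masked key; bytes outside the mask's
 * range are skipped.  FNV-1a is just a stand-in for the kernel's jhash. */
static uint32_t flow_hash(const uint8_t *masked_key, size_t start, size_t end)
{
    uint32_t h = 2166136261u;          /* FNV-1a offset basis */
    for (size_t i = start; i < end; i++) {
        h ^= masked_key[i];
        h *= 16777619u;                /* FNV-1a prime */
    }
    return h;
}
```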
Thanks
William
> >
> > Patch 8: is a bugfix.
> >
> > The performance test is on Intel Xeon E5-2630 v4.
> > The test topology is shown below:
> >
> > +-----------------------------------+
> > | +---------------------------+ |
> > | | eth0 ovs-switch eth1 | | Host0
> > | +---------------------------+ |
> > +-----------------------------------+
> > ^ |
> > | |
> > | |
> > | |
> > | v
> > +-----+----+ +----+-----+
> > | netperf | Host1 | netserver| Host2
> > +----------+ +----------+
> >
> > We use netperf to send 64B frames, and insert 255+ flow masks:
> > $ ovs-dpctl add-flow ovs-switch
> > "in_port(1),eth(dst=00:01:00:00:00:00/ff:ff:ff:ff:ff:01),eth_type(0x0800),ipv4(frag=no)"
> > 2
> > ...
> > $ ovs-dpctl add-flow ovs-switch
> > "in_port(1),eth(dst=00:ff:00:00:00:00/ff:ff:ff:ff:ff:ff),eth_type(0x0800),ipv4(frag=no)"
> > 2
> > $ netperf -t UDP_STREAM -H 2.2.2.200 -l 40 -- -m 18
> >
> > * Without series patch, throughput 8.28Mbps
> > * With series patch, throughput 46.05Mbps
> >
> > Tonghao Zhang (9):
> > net: openvswitch: add flow-mask cache for performance
> > net: openvswitch: convert mask list in mask array
> > net: openvswitch: shrink the mask array if necessary
> > net: openvswitch: optimize flow mask cache hash collision
> > net: openvswitch: optimize flow-mask looking up
> > net: openvswitch: simplify the flow_hash
> > net: openvswitch: add likely in flow_lookup
> > net: openvswitch: fix possible memleak on destroy flow table
> > net: openvswitch: simplify the ovs_dp_cmd_new
> >
> > net/openvswitch/datapath.c | 63 +++++----
> > net/openvswitch/flow.h | 1 -
> > net/openvswitch/flow_table.c | 318 +++++++++++++++++++++++++++++++++++++------
> > net/openvswitch/flow_table.h | 19 ++-
> > 4 files changed, 330 insertions(+), 71 deletions(-)
> >
> > --
> > 1.8.3.1