Date:   Tue, 8 Oct 2019 09:41:09 +0800
From:   Tonghao Zhang <xiangxia.m.yue@...il.com>
To:     William Tu <u9012063@...il.com>
Cc:     ovs dev <dev@...nvswitch.org>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        David Miller <davem@...emloft.net>,
        Greg Rose <gvrose8192@...il.com>,
        Eelco Chaudron <echaudro@...hat.com>,
        pravin shelar <pshelar@....org>
Subject: Re: [ovs-dev] [PATCH net-next 0/9] optimize openvswitch flow looking up

On Fri, Oct 4, 2019 at 1:09 AM William Tu <u9012063@...il.com> wrote:
>
> Hi Tonghao,
>
> Thanks for the patch.
>
> > On 29 Sep 2019, at 19:09, xiangxia.m.yue@...il.com wrote:
> >
> > > From: Tonghao Zhang <xiangxia.m.yue@...il.com>
> > >
> > > This patch series optimizes the Open vSwitch flow lookup.
> > >
> > > Patches 1, 2, 4: Port Pravin B Shelar's patches to
> > > upstream Linux with small changes.
> > >
>
> I thought the idea of adding another cache in front of the flow-mask
> lookup was rejected before, due to all the potential issues with caches,
> e.g. the cache is exploitable, and performance still suffers when the
> cache is full. See David's slides below:
> [1] http://vger.kernel.org/~davem/columbia2012.pdf
>
> Do you have a rough number for how many flows this flow-mask
> cache can handle?
Now we can cache 256 flows per CPU, so with 40 CPUs, 256*40 flows
will be cached.
The per-CPU cache size is set by the MC_HASH_ENTRIES macro; we can
change that value for different use cases and CPU L1d cache sizes.
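Roughly, the per-CPU cache is an array of small entries mapping a packet
hash to the index of the mask that matched last time. A minimal sketch
(entry layout and all names other than MC_HASH_ENTRIES are illustrative,
not the exact patch code):

#include <linux/types.h>

#define MC_HASH_SHIFT   8
#define MC_HASH_ENTRIES (1u << MC_HASH_SHIFT)  /* 256 entries per CPU */

/* One slot remembers which mask matched the flow with this skb hash. */
struct mask_cache_entry {
        u32 skb_hash;   /* skb->hash of the packet that filled the slot */
        u32 mask_index; /* index into the mask array to try first */
};

/* One MC_HASH_ENTRIES-sized array is allocated per CPU, so the total
 * number of cached entries scales with the CPU count,
 * e.g. 40 CPUs * 256 = 10240 entries. */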

> > > Patches 5, 6, 7: Optimize the flow lookup and
> > > simplify the flow hash.
>
> I think this is great.
> I wonder what the performance improvement is when the flow-mask
> cache is full?
I will test that case. I think this feature should work well with RSS
and IRQ affinity.
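On the full-cache question: even when the cache misses (or is thrashing),
the lookup still falls back to probing every mask in the mask array, so
the worst case is roughly the old per-mask walk plus one extra probe.
A hand-wavy sketch of that path, with illustrative names
(masked_flow_lookup() here only stands in for the masked lookup helper;
the real signatures in the patch differ):

#include <linux/types.h>

struct sw_flow;
struct sw_flow_key;
struct sw_flow_mask;
struct table_instance;

struct mask_array {
        int count;                     /* number of masks in use */
        struct sw_flow_mask *masks[];  /* flexible array of mask pointers */
};

struct mask_cache_entry {
        u32 skb_hash;
        u32 mask_index;
};

/* Stand-in for the real masked lookup helper. */
struct sw_flow *masked_flow_lookup(struct table_instance *ti,
                                   const struct sw_flow_key *key,
                                   const struct sw_flow_mask *mask);

struct sw_flow *flow_lookup_sketch(struct table_instance *ti,
                                   struct mask_array *ma,
                                   const struct sw_flow_key *key,
                                   u32 skb_hash,
                                   struct mask_cache_entry *ce)
{
        struct sw_flow *flow;
        int i;

        /* Fast path: retry the mask that matched this skb_hash last time. */
        if (ce->skb_hash == skb_hash && ce->mask_index < (u32)ma->count) {
                flow = masked_flow_lookup(ti, key, ma->masks[ce->mask_index]);
                if (flow)
                        return flow;
        }

        /* Cache miss (or full/colliding cache): probe every mask, then
         * remember which one matched for the next packet with this hash. */
        for (i = 0; i < ma->count; i++) {
                if (!ma->masks[i])
                        continue;
                flow = masked_flow_lookup(ti, key, ma->masks[i]);
                if (flow) {
                        ce->skb_hash = skb_hash;
                        ce->mask_index = i;
                        return flow;
                }
        }
        return NULL;
}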
> Thanks
> William
>
> > >
> > > Patch 8 is a bugfix.
> > >
> > > The performance test is on Intel Xeon E5-2630 v4.
> > > The test topology is shown below:
> > >
> > > +-----------------------------------+
> > > |   +---------------------------+   |
> > > |   | eth0   ovs-switch    eth1 |   | Host0
> > > |   +---------------------------+   |
> > > +-----------------------------------+
> > >       ^                       |
> > >       |                       |
> > >       |                       |
> > >       |                       |
> > >       |                       v
> > > +-----+----+             +----+-----+
> > > | netperf  | Host1       | netserver| Host2
> > > +----------+             +----------+
> > >
> > > We use netperf to send 64B frames, and insert 255+ flow masks:
> > > $ ovs-dpctl add-flow ovs-switch
> > > "in_port(1),eth(dst=00:01:00:00:00:00/ff:ff:ff:ff:ff:01),eth_type(0x0800),ipv4(frag=no)"
> > > 2
> > > ...
> > > $ ovs-dpctl add-flow ovs-switch
> > > "in_port(1),eth(dst=00:ff:00:00:00:00/ff:ff:ff:ff:ff:ff),eth_type(0x0800),ipv4(frag=no)"
> > > 2
> > > $ netperf -t UDP_STREAM -H 2.2.2.200 -l 40 -- -m 18
> > >
> > > * Without the patch series: throughput 8.28 Mbps
> > > * With the patch series: throughput 46.05 Mbps
> > >
> > > Tonghao Zhang (9):
> > >   net: openvswitch: add flow-mask cache for performance
> > >   net: openvswitch: convert mask list in mask array
> > >   net: openvswitch: shrink the mask array if necessary
> > >   net: openvswitch: optimize flow mask cache hash collision
> > >   net: openvswitch: optimize flow-mask looking up
> > >   net: openvswitch: simplify the flow_hash
> > >   net: openvswitch: add likely in flow_lookup
> > >   net: openvswitch: fix possible memleak on destroy flow table
> > >   net: openvswitch: simplify the ovs_dp_cmd_new
> > >
> > >  net/openvswitch/datapath.c   |  63 +++++----
> > >  net/openvswitch/flow.h       |   1 -
> > >  net/openvswitch/flow_table.c | 318 +++++++++++++++++++++++++++++++++++++------
> > >  net/openvswitch/flow_table.h |  19 ++-
> > >  4 files changed, 330 insertions(+), 71 deletions(-)
> > >
> > > --
> > > 1.8.3.1
