Date:   Wed, 02 Sep 2020 22:00:32 -0700
From:   John Fastabend <john.fastabend@...il.com>
To:     John Fastabend <john.fastabend@...il.com>,
        Lukas Wunner <lukas@...ner.de>,
        Pablo Neira Ayuso <pablo@...filter.org>,
        Jozsef Kadlecsik <kadlec@...filter.org>,
        Florian Westphal <fw@...len.de>
Cc:     netfilter-devel@...r.kernel.org, coreteam@...filter.org,
        netdev@...r.kernel.org, Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Eric Dumazet <edumazet@...gle.com>,
        Thomas Graf <tgraf@...g.ch>, Laura Garcia <nevola@...il.com>,
        David Miller <davem@...emloft.net>
Subject: RE: [PATCH nf-next v3 3/3] netfilter: Introduce egress hook

John Fastabend wrote:
> Lukas Wunner wrote:
> > Commit e687ad60af09 ("netfilter: add netfilter ingress hook after
> > handle_ing() under unique static key") introduced the ability to
> > classify packets on ingress.
> > 
> > Support the same on egress.  This allows filtering locally generated
> > traffic such as DHCP, or outbound AF_PACKETs in general.  It will also
> > allow introducing in-kernel NAT64 and NAT46.  A patch for nftables to
> > hook up egress rules from user space has been submitted separately.
> > 
> > Position the hook immediately before a packet is handed to traffic
> > control and then sent out on an interface, thereby mirroring the ingress
> > order.  This order allows marking packets in the netfilter egress hook
> > and subsequently using the mark in tc.  Another benefit of this order is
> > consistency with a lot of existing documentation which says that egress
> > tc is performed after netfilter hooks.
> > 
> > To avoid a performance degradation in the default case (with neither
> > netfilter nor traffic control used), Daniel Borkmann suggests "a single
> > static_key which wraps an empty function call entry which can then be
> > patched by the kernel at runtime. Inside that trampoline we can still
> > keep the ordering [between netfilter and traffic control] intact":
> > 
> > https://lore.kernel.org/netdev/20200318123315.GI979@breakpoint.cc/
> > 
> > To this end, introduce nf_sch_egress() which is dynamically patched into
> > __dev_queue_xmit(), contingent on egress_needed_key.  Inside that
> > function, nf_egress() and sch_handle_egress() are called, each contingent
> > on its own separate static_key.
> > 
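
For anyone skimming the thread, my reading of the proposed structure is
roughly the following. This is only a sketch pieced together from the
commit message, not the actual patch; the two inner static key names and
the nf_egress() signature are guesses on my part (sch_handle_egress()
matches its current in-tree signature):

/* Sketch only -- based on the commit message, not the patch itself.
 * nf_sch_egress(), egress_needed_key, nf_egress() and
 * sch_handle_egress() are named in the description above; the inner
 * static key names are invented for illustration.
 */
#include <linux/jump_label.h>
#include <linux/netdevice.h>

DEFINE_STATIC_KEY_FALSE(nf_egress_needed_key);	/* placeholder name */
DEFINE_STATIC_KEY_FALSE(tc_egress_needed_key);	/* placeholder name */

/* nf_egress() is assumed to mirror sch_handle_egress(). */
struct sk_buff *nf_egress(struct sk_buff *skb, int *ret,
			  struct net_device *dev);
struct sk_buff *sch_handle_egress(struct sk_buff *skb, int *ret,
				  struct net_device *dev);

/* noinline keeps this cold path out of __dev_queue_xmit()'s
 * instruction cache footprint when neither subsystem is configured.
 */
static noinline struct sk_buff *
nf_sch_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
{
	/* Netfilter runs first so a mark set here is visible to tc. */
	if (static_branch_unlikely(&nf_egress_needed_key)) {
		skb = nf_egress(skb, ret, dev);
		if (!skb)
			return NULL;
	}
	if (static_branch_unlikely(&tc_egress_needed_key)) {
		skb = sch_handle_egress(skb, ret, dev);
		if (!skb)
			return NULL;
	}
	return skb;
}

In __dev_queue_xmit() the outer branch is then a patched-out NOP until
either hook is enabled, so the default path pays for nothing:

	if (static_branch_unlikely(&egress_needed_key)) {
		skb = nf_sch_egress(skb, &rc, dev);
		if (!skb)
			goto out;
	}
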
> > nf_sch_egress() is declared noinline per Florian Westphal's suggestion.
> > This change alone causes a speedup if neither netfilter nor traffic
> > control is used, apparently because it reduces instruction cache
> > pressure.  The same effect was previously observed by Eric Dumazet for
> > the ingress path:
> > 
> > https://lore.kernel.org/netdev/1431387038.566.47.camel@edumazet-glaptop2.roam.corp.google.com/
> > 
> > Overall, performance improves with this commit if neither netfilter nor
> > traffic control is used. However it degrades a little if only traffic
> > control is used, due to the "noinline", the additional outer static key
> > and the added netfilter code:

I don't think it actually improves performance; at least I didn't observe
that. From the code it's not clear why this would be the case either. As
a nit, I would prefer that line removed from the commit message.

I guess the Before/After below is just showing some noise in the
measurement.

> > 
> > * Before:       4730418pps 2270Mb/sec (2270600640bps)
> > * After:        4759206pps 2284Mb/sec (2284418880bps)
> 
> These baseline numbers seem low to me.

I used a 10Gbps ixgbe NIC to measure the performance after the dummy
device hung on me for some reason. I'll try to investigate what happened
later; it was unrelated to these patches though.

But with a 10Gbps NIC, running a pktgen benchmark with and without
the patches applied, I didn't see any measurable difference. Both
cases reached 14Mpps.
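
In case it helps reproduce the numbers: the run boils down to the usual
pktgen sequence, equivalent to what the samples/pktgen scripts do. A
minimal sketch of it as a userspace C driver is below; the device, count
and addresses are placeholders rather than my exact config, and pktgen
needs to be loaded first (modprobe pktgen):

#include <stdio.h>
#include <stdlib.h>

static void pg_write(const char *path, const char *cmd)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%s\n", cmd);
	fclose(f);
}

int main(void)
{
	/* Bind the device to the first pktgen kernel thread; for
	 * -t > 1, repeat on kpktgend_1, kpktgend_2, ...
	 */
	pg_write("/proc/net/pktgen/kpktgend_0", "rem_device_all");
	pg_write("/proc/net/pktgen/kpktgend_0", "add_device eth0");

	/* Per-device parameters: 10M minimum-size packets, no delay. */
	pg_write("/proc/net/pktgen/eth0", "count 10000000");
	pg_write("/proc/net/pktgen/eth0", "pkt_size 60");
	pg_write("/proc/net/pktgen/eth0", "delay 0");
	pg_write("/proc/net/pktgen/eth0", "dst 198.51.100.1");
	pg_write("/proc/net/pktgen/eth0", "dst_mac 00:11:22:33:44:55");

	/* Blocks until the run completes; the pps/Mb/sec results are
	 * then readable from /proc/net/pktgen/eth0.
	 */
	pg_write("/proc/net/pktgen/pgctrl", "start");
	return 0;
}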

> 
> > 
> > * Before + tc:  4063912pps 1950Mb/sec (1950677760bps)
> > * After  + tc:  4007728pps 1923Mb/sec (1923709440bps)

Same here: the before/after aggregate appears to be about the same, even
though the numbers above show a ~1.4% degradation. Just curious, is the
above from a single run, averaged over multiple runs, or something
else? It seems like noise to me.

I did see something odd where fairness between threads appeared to be
slightly worse. I don't have an explanation for this. Did you have
a chance to run the test with -t > 1?

Also, the overhead on your system for adding a tc rule seems a bit
high. In my case a single tc drop rule added ~7% overhead at 14Mpps.
Above it looks more like 16%, so double that. Maybe a missing JIT or
some other configuration difference; either a perf trace or a look at
your config would help figure that out.

> > 
> > * After  + nft: 3714546pps 1782Mb/sec (1782982080bps)
> > 

I haven't had a chance to do these benchmarks, but for my use
cases it's more important to _not_ degrade tc performance.

I will note, though, that this is getting close to a 10% perf
degradation compared to tc. I haven't looked much into it,
but that seems high for simply dropping a packet.

Do you have plans to address the performance degradation? Otherwise,
if I were building some new component, it's unclear why we would
choose the slower option over the tc hook. The two suggested
use cases, security policy and DSR, sound like new features; any
reason not to just use the existing infrastructure?

Is the use case primarily legacy things already running on nft
infrastructure? I guess if you have code running now, moving it
to this hook is quicker than rewriting it, and even if it's 10%
slower than it could be, that may be better than a rewrite?

> > Measured on a bare-metal Core i7-3615QM.
> 
> OK I have some server class systems here I would like to run these
> benchmarks again on to be sure we don't have any performance
> regressions on that side.
> 
> I'll try to get to it asap, but likely will be Monday morning
> by the time I get to it. I assume that should be no problem
> seeing we are only on rc2.

Sorry, on Monday I had to look into a different bug.

> 
> Thanks.
> 

Thanks.
John
