Date:	Mon, 11 May 2015 08:49:03 -0400
From:	Jamal Hadi Salim <jhs@...atatu.com>
To:	Alexei Starovoitov <ast@...mgrid.com>,
	Pablo Neira Ayuso <pablo@...filter.org>,
	Daniel Borkmann <daniel@...earbox.net>
CC:	netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH 2/2 net-next] net: move qdisc ingress filtering code where
 it belongs


We need to agree on what the benchmark is. The metrics below are
fine; whoever does the measurements should also post their kernel
config, since different kernels and hardware will behave differently:

1) no ingress (if the drop is in ip_rcv, does it look at some header?)
2) ingress on other dev (the #1 drop should still apply)
3) ingress on this dev (#1 drop in effect)
4) ingress on this dev + drop at the u32 classifier/action.

Variations of the above:
A) current kernel
B) with the ingress static key introduced
C) Pablo's vs Daniel's settings

The one thing I see in Pablo's comments is that extra code will slow
things down. I see Alexei refuting that below, since that code is
not being executed.
So run some variation with extra code to see what happens.
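For reference, here is a rough sketch of the kind of pktgen run Alexei
describes further down ('xmit_mode netif_receive'). This is a procfs
configuration recipe, not a portable script: the device name (eth0),
count, packet size and destination address are placeholders to adjust
per setup, and it needs a net-next kernel with xmit_mode support.

```shell
# Sketch only: drives pktgen in netif_receive mode so packets loop
# straight into the local receive path. Device name, count and dst
# below are placeholders.
modprobe pktgen || exit 0
PGDEV=/proc/net/pktgen
[ -w $PGDEV/pgctrl ] || exit 0            # bail out if pktgen is unavailable

echo "rem_device_all"  > $PGDEV/kpktgend_0
echo "add_device eth0" > $PGDEV/kpktgend_0

echo "xmit_mode netif_receive" > $PGDEV/eth0   # inject into the rx path
echo "count 10000000"          > $PGDEV/eth0
echo "pkt_size 60"             > $PGDEV/eth0
echo "dst 198.18.0.1"          > $PGDEV/eth0   # gets dropped in ip_rcv

echo "start" > $PGDEV/pgctrl               # blocks until the run completes
cat $PGDEV/eth0                            # per-run pps numbers
```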

It is hard, but please stay calm and carry on - we all want
what's best.

cheers,
jamal

On 05/11/15 01:57, Alexei Starovoitov wrote:
> On 5/10/15 4:43 PM, Pablo Neira Ayuso wrote:
>>
>> The noop is patched to an unconditional branch to skip that code, but
>> the code is still there in that path, even if it's dormant.
>>
>> What the numbers show is rather simple: The more code is in the path,
>> the less performance you get, and the qdisc ingress specific code
>> embedded there is reducing performance for people that are not using
>> qdisc ingress, hence it should go where it belongs. The static key
>> cannot save you from that.
>
> hmm, did I miss these numbers?
>
> My numbers are showing the opposite. There is no degradation whatsoever.
> To recap, here are the numbers from my box:
> no ingress - 37.6
> ingress on other dev - 36.5
> ingress on this dev - 28.8
> ingress on this dev + u32 - 24.1
>
> after Daniel's two patches:
> no ingress - 37.6
> ingress on other dev - 36.5
> ingress on this dev - 36.5
> ingress on this dev + u32 - 25.2
>
> Explanation of the first lines:
> 'no ingress' means pure netif_receive_skb with drop in ip_rcv.
> This is the case when ingress qdisc is not attached to any of the
> devices.
> Here static_key is off, so:
>          if (static_key_false(&ingress_needed)) {
>                  ... never reaches here ...
>                  skb = handle_ing(skb, &pt_prev, &ret, orig_dev);
>
> So the code path is the same, and the numbers before and after
> prove it.
>
> 'ingress on other dev' means that ingress qdisc is attached to
> some other device. Meaning that static_key is now on and
> handle_ing() after the two patches does:
> cl = rcu_dereference_bh(skb->dev->ingress_cl_list);
> if (!cl)
>    return skb; ... returns here ...
>
> Prior to Daniel's two patches, handle_ing() does:
>   rxq = rcu_dereference(skb->dev->ingress_queue);
>   if (!rxq || rcu_access_pointer(rxq->qdisc) == &noop_qdisc)
>      return skb; ... returns here ...
>
> so the number of instructions is the same for 'ingress on other dev'
> case too.
> Not surprisingly before and after numbers for 'no ingress' and
> 'ingress on other dev' are exactly the same.
>
> It sounds like you're saying that code that is not even being executed
> is somehow affecting the speed? How is that possible?
> What kind of benchmark do you run? I really want to get to the bottom
> of it. We cannot use 'drive-by reviewing' and 'gut feel' to make
> decisions. If you think my numbers are wrong, please rerun them.
> I'm using pktgen with 'xmit_mode netif_receive'. If there is some
> other benchmark please bring it on. Everyone will benefit.
> If you think pktgen as a test is not representative of real numbers,
> sure, that's a different discussion. Let's talk what is the right
> test to use. Just claiming that inlining a function into
> netif_receive_skb hurts performance is bogus. Inlining hurts
> when the size of the executed code grows beyond the I-cache size. That's
> clearly not the case here. The compiler moves the inlined handle_ing()
> into the cold path of netif_receive_skb. So for anyone who doesn't
> enable the ingress qdisc there is no difference, since that code doesn't
> even get into the I-cache.
> Here is snippet from net/core/dev.s:
>      /* below is line 3696: if (static_key_false(&ingress_needed)) */
>          1:.byte 0x0f,0x1f,0x44,0x00,0
>          .pushsection __jump_table,  "aw"
>           .balign 8
>           .quad 1b, .L1756, ingress_needed       #,
>          .popsection
>      /* below is line 3702: skb->tc_verd = 0; */
>      movw    $0, 150(%rbx)   #, skb_310->tc_verd
>
> As you can see, the inlined handle_ing() is not in the fall-through path;
> gcc places it at the end of netif_receive_skb_core, at the label
> .L1756. So I cannot possibly see how it can affect performance.
> Even if we made handle_ing() twice as big, there still wouldn't be
> any difference for users who don't use the ingress qdisc.
>

