Open Source and information security mailing list archives
Date: Fri, 10 Apr 2015 18:14:29 -0700
From: Cong Wang <cwang@...pensource.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: David Miller <davem@...emloft.net>, Jamal Hadi Salim <jhs@...atatu.com>,
	Alexei Starovoitov <ast@...mgrid.com>, Eric Dumazet <edumazet@...gle.com>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] net: use jump label patching for ingress qdisc in __netif_receive_skb_core

On Fri, Apr 10, 2015 at 2:07 PM, Daniel Borkmann <daniel@...earbox.net> wrote:
> Even if we make use of classifier and actions from the egress
> path, we're going into handle_ing() executing additional code
> on a per-packet cost for ingress qdisc, just to realize that
> nothing is attached on ingress.
>
> Instead, this can just be blinded out as a no-op entirely with
> the use of a static key. On input fast-path, we already make
> use of static keys in various places, e.g. skb time stamping,
> in RPS, etc. It makes sense to not waste time when we're assured
> that no ingress qdisc is attached anywhere.

So the following code is slow enough to deserve a static key optimization?

        struct netdev_queue *rxq = rcu_dereference(skb->dev->ingress_queue);

        if (!rxq || rcu_access_pointer(rxq->qdisc) == &noop_qdisc)
                goto out;

> Enabling/disabling of that code path is being done via two
> helpers, namely net_{inc,dec}_ingress_queue(), that are being
> invoked under RTNL mutex when a ingress qdisc is being either
> initialized or destructed.

Since a static key can't be embedded into struct net_device, I doubt it is
useful to have one here: a qdisc is per net device, but the key would be
global.

--
To unsubscribe from this list: send the line "unsubscribe netdev"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html