Message-ID: <vbfzhqrq01r.fsf@mellanox.com>
Date:   Tue, 19 Feb 2019 15:20:37 +0000
From:   Vlad Buslov <vladbu@...lanox.com>
To:     Cong Wang <xiyou.wangcong@...il.com>
CC:     Ido Schimmel <idosch@...sch.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "jhs@...atatu.com" <jhs@...atatu.com>,
        "jiri@...nulli.us" <jiri@...nulli.us>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "ast@...nel.org" <ast@...nel.org>,
        "daniel@...earbox.net" <daniel@...earbox.net>
Subject: Re: [PATCH net-next v4 07/17] net: sched: protect filter_chain list
 with filter_chain_lock mutex


On Tue 19 Feb 2019 at 05:08, Cong Wang <xiyou.wangcong@...il.com> wrote:
> On Fri, Feb 15, 2019 at 2:02 AM Vlad Buslov <vladbu@...lanox.com> wrote:
>>
>> I looked at the code and the problem seems to be matchall classifier
>> specific. My implementation of the unlocked cls API assumes that
>> concurrent insertions are possible and checks for them when deleting an
>> "empty" tp. Since classifiers don't expose their number of elements, the
>> only way to test this is to do tp->walk() on them and assume that the
>> walk callback is called once per filter on every classifier. In your
>> example a new tp is created for the second filter, the filter insertion
>> fails, and the number of elements on the newly created tp is checked
>> with tp->walk() before deleting it. However, the matchall classifier
>> always calls the tp->walk() callback once, even when it doesn't have a
>> valid filter (in this case with a NULL filter pointer).
>
> Again, this can be eliminated by just switching to normal
> non-retry logic. It is yet another headache to review this
> kind of unlock-and-retry logic; I have no idea why you are such
> a big fan of it.

The retry approach was suggested to me multiple times by Jiri in
previous code reviews, so I assumed it is the preferred approach in such
cases. I don't have a strong preference in this regard, but locking the
whole tp on filter update would remove any parallelism when updating the
same classifier instance concurrently. The goal of these changes is to
allow parallel rule updates, and to achieve that I had to introduce some
complexity into the code.
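
For reference, the walk-based emptiness check described in the quoted
text above amounts to something like the sketch below. The names here
are illustrative (not the actual cls API helpers), and tp->ops->walk()
grows an rtnl_held argument in this series which is omitted here: the
point is only that emptiness is inferred from how many times the walker
callback fires, so a classifier that invokes the callback even for a
NULL filter, as matchall does, can never look empty.

#include <net/pkt_cls.h>

struct empty_walker {
	struct tcf_walker w;
	unsigned long count;
};

static int empty_walker_fn(struct tcf_proto *tp, void *fh,
			   struct tcf_walker *arg)
{
	struct empty_walker *ew = container_of(arg, struct empty_walker, w);

	/* count every callback invocation; matchall fires this even
	 * when its only "filter" is a NULL pointer */
	ew->count++;
	return 0;
}

static bool tp_looks_empty(struct tcf_proto *tp)
{
	struct empty_walker ew = { .w.fn = empty_walker_fn };

	tp->ops->walk(tp, &ew.w);
	return ew.count == 0;
}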

Now let me explain why these two approaches result in completely
different performance in this case. Let's start with a list of the most
CPU-consuming parts of the new filter creation process, in descending
order (raw data at the end of this mail):

1) Hardware offload - if available and no skip_hw.
2) Exts (actions) initialization - the most expensive part even with a
single action; CPU usage increases with the number of actions per filter.
3) cls API.
4) Flower classifier data structure initialization.

Note that 1)+2) is ~80% of the cost of creating a flower filter. So if
we just lock the whole flower classifier instance during a rule update
we serialize 1, 2 and 4, and only the cls API (~13% of CPU cost) can be
executed concurrently. However, in the proposed flower implementation
the hw offload and action initialization code is called without any
locks, and tp->lock is only obtained when modifying flower data
structures, which means that only 3) is serialized and everything else
(~87% of CPU cost) can be executed in parallel.
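
To make the difference in lock scope concrete, here is a rough sketch of
the two schemes (not actual code: arguments are elided and
fl_insert_filter() is a made-up name standing in for the flower hash
table/idr updates):

/* Coarse locking: hold tp->lock across the whole update, so action
 * init, hw offload and flower table updates are all serialized. */
spin_lock(&tp->lock);
err = tcf_exts_validate(...);	/* 2) action init,  ~33% of CPU */
err = tc_setup_cb_call(...);	/* 1) hw offload,   ~30% of CPU */
fl_insert_filter(...);		/* 4) flower tables, small */
spin_unlock(&tp->lock);

/* Proposed flower code: only the flower data structure update is done
 * under tp->lock; action init and hw offload can run concurrently for
 * different filters of the same classifier instance. */
err = tcf_exts_validate(...);	/* no lock held */
err = tc_setup_cb_call(...);	/* no lock held */
spin_lock(&tp->lock);
fl_insert_filter(...);
spin_unlock(&tp->lock);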

First page of profiling data:

Samples: 100K of event 'cycles:ppp', Event count (approx.): 11191878316
  Children      Self  Command  Shared Object       Symbol
+   84.71%     0.08%  tc       [kernel.vmlinux]    [k] entry_SYSCALL_64_after_hwframe
+   84.62%     0.06%  tc       [kernel.vmlinux]    [k] do_syscall_64
+   82.63%     0.01%  tc       libc-2.25.so        [.] __libc_sendmsg
+   82.37%     0.00%  tc       [kernel.vmlinux]    [k] __sys_sendmsg
+   82.37%     0.00%  tc       [kernel.vmlinux]    [k] ___sys_sendmsg
+   82.34%     0.00%  tc       [kernel.vmlinux]    [k] sock_sendmsg
+   82.34%     0.01%  tc       [kernel.vmlinux]    [k] netlink_sendmsg
+   82.15%     0.15%  tc       [kernel.vmlinux]    [k] netlink_unicast
+   82.10%     0.11%  tc       [kernel.vmlinux]    [k] netlink_rcv_skb
+   80.76%     0.22%  tc       [kernel.vmlinux]    [k] rtnetlink_rcv_msg
+   80.10%     0.24%  tc       [kernel.vmlinux]    [k] tc_new_tfilter
+   69.30%     2.11%  tc       [cls_flower]        [k] fl_change
+   33.56%     0.05%  tc       [kernel.vmlinux]    [k] tcf_exts_validate
+   33.50%     0.12%  tc       [kernel.vmlinux]    [k] tcf_action_init
+   33.30%     0.10%  tc       [kernel.vmlinux]    [k] tcf_action_init_1
+   32.78%     0.11%  tc       [act_gact]          [k] tcf_gact_init
+   30.93%     0.16%  tc       [kernel.vmlinux]    [k] tc_setup_cb_call
+   29.96%     0.60%  tc       [mlx5_core]         [k] mlx5e_configure_flower
+   27.62%     0.23%  tc       [mlx5_core]         [k] mlx5e_tc_add_nic_flow
+   27.31%     0.45%  tc       [kernel.vmlinux]    [k] tcf_idr_create
+   25.45%     1.75%  tc       [kernel.vmlinux]    [k] pcpu_alloc
+   16.33%     0.07%  tc       [mlx5_core]         [k] mlx5_cmd_exec
+   16.26%     1.96%  tc       [mlx5_core]         [k] cmd_exec
+   14.28%     1.05%  tc       [mlx5_core]         [k] mlx5_add_flow_rules
+   14.02%     0.26%  tc       [kernel.vmlinux]    [k] pcpu_alloc_area
+   13.09%     0.13%  tc       [mlx5_core]         [k] mlx5_fc_create
+    9.77%     0.30%  tc       [mlx5_core]         [k] add_rule_fg.isra.28
+    9.08%     0.84%  tc       [mlx5_core]         [k] mlx5_cmd_set_fte
+    8.90%     0.09%  tc       [mlx5_core]         [k] mlx5_cmd_fc_alloc
+    7.90%     0.12%  tc       [kernel.vmlinux]    [k] tfilter_notify
+    7.34%     0.61%  tc       [kernel.vmlinux]    [k] __queue_work
+    7.25%     0.26%  tc       [kernel.vmlinux]    [k] tcf_fill_node
+    6.73%     0.23%  tc       [kernel.vmlinux]    [k] wait_for_completion_timeout
+    6.67%     0.20%  tc       [cls_flower]        [k] fl_dump
+    6.52%     5.93%  tc       [kernel.vmlinux]    [k] memset_erms
+    5.77%     0.49%  tc       [kernel.vmlinux]    [k] schedule_timeout
+    5.57%     1.29%  tc       [kernel.vmlinux]    [k] try_to_wake_up
+    5.50%     0.11%  tc       [kernel.vmlinux]    [k] pcpu_block_update_hint_alloc
+    5.40%     0.85%  tc       [kernel.vmlinux]    [k] pcpu_block_refresh_hint
+    5.28%     0.11%  tc       [kernel.vmlinux]    [k] queue_work_on
+    5.19%     4.96%  tc       [kernel.vmlinux]    [k] find_next_bit
+    4.77%     0.11%  tc       [kernel.vmlinux]    [k] idr_alloc_u32
+    4.71%     0.10%  tc       [kernel.vmlinux]    [k] schedule
+    4.62%     0.30%  tc       [kernel.vmlinux]    [k] __sched_text_start
+    4.48%     4.41%  tc       [kernel.vmlinux]    [k] idr_get_free
+    4.19%     0.04%  tc       [kernel.vmlinux]    [k] tcf_idr_check_alloc
