lists.openwall.net - Open Source and information security mailing list archives
 
Message-ID: <CAA93jw5wLd7UhdsMUsx4hsUSYK5gnY9iyq-=kvYz2U01p_G6SA@mail.gmail.com>
Date:   Fri, 29 Jul 2022 11:14:14 -0700
From:   Dave Taht <dave.taht@...il.com>
To:     Tony Nguyen <anthony.l.nguyen@...el.com>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Grzegorz Szczurek <grzegorzx.szczurek@...el.com>,
        Kiran Patil <kiran.patil@...el.com>,
        Jedrzej Jagielski <jedrzej.jagielski@...el.com>,
        Bharathi Sreenivas <bharathi.sreenivas@...el.com>
Subject: Re: [PATCH net v2 2/2] iavf: Fix 'tc qdisc show' listing too many queues

On Fri, Jul 29, 2022 at 11:11 AM Tony Nguyen <anthony.l.nguyen@...el.com> wrote:
>
> From: Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>
>
> Fix tc qdisc show dev <ethX> root displaying too many fq_codel qdiscs.
> tc_modify_qdisc, which is caller of ndo_setup_tc, expects driver to call
> netif_set_real_num_tx_queues, which prepares qdiscs.
> Without this patch, fq_codel qdiscs would not be adjusted to number of
> queues on VF.
> e.g.:
> tc qdisc show dev <ethX>
> qdisc mq 0: root
> qdisc fq_codel 0: parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> qdisc fq_codel 0: parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> qdisc fq_codel 0: parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> tc qdisc add dev <ethX> root mqprio num_tc 2 map 1 0 0 0 0 0 0 0 queues 1@0 1@1 hw 1 mode channel shaper bw_rlimit max_rate 5000Mbit 150Mbit
> tc qdisc show dev <ethX> #should show 2, shows 4
> qdisc mqprio 8003: root tc 2 map 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>              queues:(0:0) (1:1)
>              mode:channel
>              shaper:bw_rlimit   max_rate:5Gbit 150Mbit
> qdisc fq_codel 0: parent 8003:4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> qdisc fq_codel 0: parent 8003:3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> qdisc fq_codel 0: parent 8003:2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> qdisc fq_codel 0: parent 8003:1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
>
> While after fix:
> tc qdisc add dev <ethX> root mqprio num_tc 2 map 1 0 0 0 0 0 0 0 queues 1@0 1@1 hw 1 mode channel shaper bw_rlimit max_rate 5000Mbit 150Mbit
> tc qdisc show dev <ethX>
> qdisc mqprio 8004: root tc 2 map 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
>              queues:(0:0) (1:1)
>              mode:channel
>              shaper:bw_rlimit   max_rate:5Gbit 150Mbit
> qdisc fq_codel 0: parent 8004:2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
> qdisc fq_codel 0: parent 8004:1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
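[As a quick sanity check of the behaviour described above, the fix should leave exactly one fq_codel child per configured queue. A sketch of such a check, with the post-fix output pasted in as sample data so it runs without the hardware (the sample lines are abbreviated):]

```shell
# After the fix, 'tc qdisc show' should list one fq_codel qdisc per
# queue requested via mqprio (2 in the example above). Count them:
sample='qdisc mqprio 8004: root tc 2 map 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
qdisc fq_codel 0: parent 8004:2 limit 10240p flows 1024
qdisc fq_codel 0: parent 8004:1 limit 10240p flows 1024'
count=$(printf '%s\n' "$sample" | grep -c '^qdisc fq_codel')
echo "fq_codel qdiscs: $count"
```

On real hardware one would pipe `tc qdisc show dev <ethX>` into the same `grep -c` instead of the pasted sample.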

I am curious whether the rate setting, vis-a-vis codel, is effective in
this scenario. Could you post tc -s qdisc show output (i.e. the qdisc
show above with -s added) while running a few dozen iperfs or netperfs
through it?
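For what it's worth, the kind of test being asked for could look roughly like this. This is a dry-run sketch, not a recipe: the interface name, server address, and flow count are placeholders, and RUN=echo means the commands are only printed (set RUN to empty to really execute, which needs tc and netperf installed):

```shell
# Dry run by default: RUN=echo just prints each command.
RUN="${RUN:-echo}"
IFACE=eth0            # placeholder VF interface
SERVER=192.0.2.1      # placeholder netperf server (TEST-NET-1 address)

# Push a few dozen concurrent flows through the shaped VF...
i=1
while [ "$i" -le 24 ]; do
  $RUN netperf -H "$SERVER" -l 30 &
  i=$((i + 1))
done
wait

# ...then inspect per-queue codel statistics (drops, ECN marks, backlog):
$RUN tc -s qdisc show dev "$IFACE"
```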
>
> Fixes: d5b33d024496 ("i40evf: add ndo_setup_tc callback to i40evf")
> Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>
> Co-developed-by: Grzegorz Szczurek <grzegorzx.szczurek@...el.com>
> Signed-off-by: Grzegorz Szczurek <grzegorzx.szczurek@...el.com>
> Co-developed-by: Kiran Patil <kiran.patil@...el.com>
> Signed-off-by: Kiran Patil <kiran.patil@...el.com>
> Signed-off-by: Jedrzej Jagielski <jedrzej.jagielski@...el.com>
> Tested-by: Bharathi Sreenivas <bharathi.sreenivas@...el.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
> ---
>  drivers/net/ethernet/intel/iavf/iavf.h      |  5 +++++
>  drivers/net/ethernet/intel/iavf/iavf_main.c | 21 +++++++++++++++++++++
>  2 files changed, 26 insertions(+)
>
> diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
> index c241fbc30f93..a988c08e906f 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf.h
> +++ b/drivers/net/ethernet/intel/iavf/iavf.h
> @@ -431,6 +431,11 @@ struct iavf_adapter {
>         /* lock to protect access to the cloud filter list */
>         spinlock_t cloud_filter_list_lock;
>         u16 num_cloud_filters;
> +       /* snapshot of "num_active_queues" before setup_tc for qdisc add
> +        * is invoked. This information is useful during qdisc del flow,
> +        * to restore correct number of queues
> +        */
> +       int orig_num_active_queues;
>
>  #define IAVF_MAX_FDIR_FILTERS 128      /* max allowed Flow Director filters */
>         u16 fdir_active_fltr;
> diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
> index 51ae10eb348c..3dbfaead2ac7 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf_main.c
> +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
> @@ -3429,6 +3429,7 @@ static int __iavf_setup_tc(struct net_device *netdev, void *type_data)
>                         netif_tx_disable(netdev);
>                         iavf_del_all_cloud_filters(adapter);
>                         adapter->aq_required = IAVF_FLAG_AQ_DISABLE_CHANNELS;
> +                       total_qps = adapter->orig_num_active_queues;
>                         goto exit;
>                 } else {
>                         return -EINVAL;
> @@ -3472,7 +3473,21 @@ static int __iavf_setup_tc(struct net_device *netdev, void *type_data)
>                                 adapter->ch_config.ch_info[i].offset = 0;
>                         }
>                 }
> +
> +               /* Take snapshot of original config such as "num_active_queues"
> +                * It is used later when delete ADQ flow is exercised, so that
> +                * once delete ADQ flow completes, VF shall go back to its
> +                * original queue configuration
> +                */
> +
> +               adapter->orig_num_active_queues = adapter->num_active_queues;
> +
> +               /* Store queue info based on TC so that VF gets configured
> +                * with correct number of queues when VF completes ADQ config
> +                * flow
> +                */
>                 adapter->ch_config.total_qps = total_qps;
> +
>                 netif_tx_stop_all_queues(netdev);
>                 netif_tx_disable(netdev);
>                 adapter->aq_required |= IAVF_FLAG_AQ_ENABLE_CHANNELS;
> @@ -3489,6 +3504,12 @@ static int __iavf_setup_tc(struct net_device *netdev, void *type_data)
>                 }
>         }
>  exit:
> +       if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))
> +               return 0;
> +
> +       netif_set_real_num_rx_queues(netdev, total_qps);
> +       netif_set_real_num_tx_queues(netdev, total_qps);
> +
>         return ret;
>  }
>
> --
> 2.35.1
>


-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
