Message-ID:
 <DS0PR11MB80501F73B82384761D57AB01834E2@DS0PR11MB8050.namprd11.prod.outlook.com>
Date: Thu, 24 Oct 2024 07:34:25 +0000
From: <Mohan.Prasad@...rochip.com>
To: <pabeni@...hat.com>, <netdev@...r.kernel.org>, <davem@...emloft.net>,
	<kuba@...nel.org>, <andrew@...n.ch>
CC: <edumazet@...gle.com>, <shuah@...nel.org>, <linux-kernel@...r.kernel.org>,
	<linux-kselftest@...r.kernel.org>, <horms@...nel.org>,
	<brett.creeley@....com>, <rosenp@...il.com>, <UNGLinuxDriver@...rochip.com>,
	<willemb@...gle.com>, <petrm@...dia.com>
Subject: RE: [PATCH net-next v3 3/3] selftests: nic_performance: Add selftest
 for performance of NIC driver

Hello Paolo,

Thank you for the review comments.

> > Add selftest case to check the send and receive throughput.
> > Supported link modes between local NIC driver and partner are varied.
> > Then send and receive throughput is captured and verified. Test uses
> > iperf3 tool.
> >
> > Signed-off-by: Mohan Prasad J <mohan.prasad@...rochip.com>
> > ---
> >  .../testing/selftests/drivers/net/hw/Makefile |   1 +
> >  .../drivers/net/hw/nic_performance.py         | 121 ++++++++++++++++++
> >  2 files changed, 122 insertions(+)
> >  create mode 100644
> > tools/testing/selftests/drivers/net/hw/nic_performance.py
> >
> > diff --git a/tools/testing/selftests/drivers/net/hw/Makefile
> > b/tools/testing/selftests/drivers/net/hw/Makefile
> > index 0dac40c4e..289512092 100644
> > --- a/tools/testing/selftests/drivers/net/hw/Makefile
> > +++ b/tools/testing/selftests/drivers/net/hw/Makefile
> > @@ -11,6 +11,7 @@ TEST_PROGS = \
> >       hw_stats_l3_gre.sh \
> >       loopback.sh \
> >       nic_link_layer.py \
> > +     nic_performance.py \
> >       pp_alloc_fail.py \
> >       rss_ctx.py \
> >       #
> > diff --git a/tools/testing/selftests/drivers/net/hw/nic_performance.py
> > b/tools/testing/selftests/drivers/net/hw/nic_performance.py
> > new file mode 100644
> > index 000000000..152c62511
> > --- /dev/null
> > +++ b/tools/testing/selftests/drivers/net/hw/nic_performance.py
> > @@ -0,0 +1,121 @@
> > +#!/usr/bin/env python3
> > +# SPDX-License-Identifier: GPL-2.0
> > +
> > +#Introduction:
> > +#This file has basic performance test for generic NIC drivers.
> > +#The test comprises of throughput check for TCP and UDP streams.
> > +#
> > +#Setup:
> > +#Connect the DUT PC with NIC card to the partner PC back-to-back via an
> > +#ethernet medium of your choice (RJ45, T1)
> > +#
> > +#        DUT PC                                 Partner PC
> > +#┌───────────────────────┐             ┌──────────────────────────┐
> > +#│                       │             │                          │
> > +#│   ┌───────────┐       │             │                          │
> > +#│   │DUT NIC    │       │     Eth     │                          │
> > +#│   │Interface ─┼───────┼─────────────┼─ any eth Interface       │
> > +#│   └───────────┘       │             │                          │
> > +#│                       │             │                          │
> > +#└───────────────────────┘             └──────────────────────────┘
> > +#
> > +#Configurations:
> > +#To prevent interruptions, add ethtool and ip to the sudoers list on the
> > +#remote PC and get the ssh key from the remote.
> > +#Required minimum ethtool version is 6.10
> > +#Change the below configuration based on your hw needs.
> > +# """Default values"""
> > +time_delay = 8  #time taken to wait for transitions to happen, in seconds
> > +test_duration = 10  #performance test duration for the throughput check, in seconds
> > +send_throughput_threshold = 80  #percentage of send throughput required to pass the check
> > +receive_throughput_threshold = 50  #percentage of receive throughput required to pass the check
> 
> Please allow the user to override this parameters with env variable and/or
> with the command line.

I will update it to take these parameters from environment variables and/or the command line.
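A minimal sketch of what that override could look like (the NIC_PERF_* variable names and the env_int() helper below are placeholders, not final):

```python
import os

# Hypothetical helper: read an integer parameter from the environment,
# falling back to the current hard-coded default.
def env_int(name: str, default: int) -> int:
    value = os.environ.get(name)
    return int(value) if value is not None else default

# Names are illustrative only; defaults match the patch.
time_delay = env_int("NIC_PERF_TIME_DELAY", 8)
test_duration = env_int("NIC_PERF_TEST_DURATION", 10)
send_throughput_threshold = env_int("NIC_PERF_SEND_THRESHOLD", 80)
receive_throughput_threshold = env_int("NIC_PERF_RECEIVE_THRESHOLD", 50)
```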

> 
> > +
> > +import time
> > +import json
> > +from lib.py import ksft_run, ksft_exit, ksft_pr, ksft_true
> > +from lib.py import KsftFailEx, KsftSkipEx
> > +from lib.py import NetDrvEpEnv
> > +from lib.py import cmd
> > +from lib.py import LinkConfig
> > +
> > +def verify_throughput(cfg, link_config) -> None:
> > +    protocols = ["TCP", "UDP"]
> > +    common_link_modes = link_config.common_link_modes
> > +    speeds, duplex_modes = link_config.get_speed_duplex_values(common_link_modes)
> > +    """Test duration in seconds"""
> > +    duration = test_duration
> > +    target_ip = cfg.remote_addr
> > +
> > +    for protocol in protocols:
> > +        ksft_pr(f"{protocol} test")
> > +        test_type = "-u" if protocol == "UDP" else ""
> > +        send_throughput = []
> > +        receive_throughput = []
> > +        for idx in range(0, len(speeds)):
> > +            bit_rate = f"-b {speeds[idx]}M" if protocol == "UDP" else ""
> 
> Always use '-b 0'. Will work with both TCP and UDP and is usually more
> efficient than forcing a specific speed.

As suggested, this will be updated to use '-b 0'. You can find it in the next revision.
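For reference, a sketch of the simplified command construction with '-b 0' (the helper name is illustrative; the real change would just drop the per-speed bit_rate string):

```python
# With '-b 0' iperf3 imposes no bandwidth cap, so the same argument
# works for both TCP and UDP and no per-speed rate needs computing.
def build_iperf3_client_cmd(protocol: str, target_ip: str, duration: int) -> str:
    test_type = "-u " if protocol == "UDP" else ""
    return f"iperf3 {test_type}-c {target_ip} -b 0 -t {duration} --json"
```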

> 
> > +            if link_config.set_speed_and_duplex(speeds[idx], duplex_modes[idx]) == False:
> > +                raise KsftFailEx(f"Not able to set speed and duplex parameters for {cfg.ifname}")
> > +            time.sleep(time_delay)
> > +            if link_config.verify_link_up() == False:
> > +                raise KsftSkipEx(f"Link state of interface {cfg.ifname} is DOWN")
> > +            send_command = f"iperf3 {test_type} -c {target_ip} {bit_rate} -t {duration} --json"
> > +            receive_command = f"iperf3 {test_type} -c {target_ip} {bit_rate} -t {duration} --reverse --json"
> > +            send_result = cmd(send_command)
> > +            receive_result = cmd(receive_command)
> > +            if send_result.ret != 0 or receive_result.ret != 0:
> > +                raise KsftSkipEx("Unexpected error occurred during transmit/receive")
> > +
> > +            send_output = send_result.stdout
> > +            receive_output = receive_result.stdout
> > +
> > +            send_data = json.loads(send_output)
> > +            receive_data = json.loads(receive_output)
> > +            """Convert throughput to Mbps"""
> > +            send_throughput.append(round(send_data['end']['sum_sent']['bits_per_second'] / 1e6, 2))
> > +            receive_throughput.append(round(receive_data['end']['sum_received']['bits_per_second'] / 1e6, 2))
> > +
> > +            ksft_pr(f"{protocol}: Send throughput: {send_throughput[idx]} Mbps, Receive throughput: {receive_throughput[idx]} Mbps")
> > +
> > +        """Check whether throughput is not below the threshold (default values set at start)"""
> > +        for idx in range(0, len(speeds)):
> > +            send_threshold = float(speeds[idx]) * float(send_throughput_threshold / 100)
> > +            receive_threshold = float(speeds[idx]) * float(receive_throughput_threshold / 100)
> > +            ksft_true(send_throughput[idx] >= send_threshold, f"{protocol}: Send throughput is below threshold for {speeds[idx]} Mbps in {duplex_modes[idx]} duplex")
> > +            ksft_true(receive_throughput[idx] >= receive_threshold, f"{protocol}: Receive throughput is below threshold for {speeds[idx]} Mbps in {duplex_modes[idx]} duplex")
> > +
> > +def test_throughput(cfg, link_config) -> None:
> > +    common_link_modes = link_config.common_link_modes
> > +    if not common_link_modes:
> > +        KsftSkipEx("No common link modes found")
> > +    if link_config.partner_netif == None:
> > +        KsftSkipEx("Partner interface name not available")
> > +    if link_config.check_autoneg_supported() and link_config.check_autoneg_supported(remote=True):
> > +        KsftSkipEx("Auto-negotiation not supported by local or remote")
> > +    cfg.require_cmd("iperf3", remote=True)
> > +    try:
> > +        """iperf3 server to be run in the remote pc"""
> > +        command = "iperf3 -s -D"
> > +        process = cmd(command, host=cfg.remote)
> 
> It's probably better use '--one-off' and run the command in background.
> 
> You should wait for the listener to be available with wait_port_listen()
> 
> Also you can consider extending the existing GenerateTraffic() class in
> 
> tools/testing/selftests/drivers/net/lib/py/load.py

As suggested, I will update the command to run in the background with '--one-off' and wait for the listener with wait_port_listen().
Extending the existing GenerateTraffic() class would be beneficial; I will update it accordingly.
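The server-side change might look roughly like the sketch below (stubbed so the argument logic stands alone; in the test itself the command would be launched on the remote with the lib.py background helper and then waited on with wait_port_listen(5201, host=cfg.remote)):

```python
IPERF3_DEFAULT_PORT = 5201  # iperf3's default listen port

# Sketch only: build the server invocation. '--one-off' makes the
# server exit after serving one client session, so no explicit kill
# is needed when it runs in the background instead of via '-D'.
def iperf3_server_cmd(one_off: bool = True) -> str:
    args = ["iperf3", "-s"]
    if one_off:
        args.append("--one-off")
    return " ".join(args)
```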

> 
> [...]
> > +def main() -> None:
> > +    with NetDrvEpEnv(__file__, nsim_test=False) as cfg:
> > +        link_config = LinkConfig(cfg)
> > +        ksft_run(globs=globals(), case_pfx={"test_"}, args=(cfg, link_config,))
> 
> Instead of having a single test with all proto and speeds, what about using a
> tests list, each of them using a given protocol and speed, so that the user see
> more fine grain results?

As suggested, the test will be updated to use a test list so that each protocol and speed combination is tested and reported separately.
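One way to sketch that case generation (names are placeholders; the real version would pass cfg and link_config through and call the actual throughput check):

```python
# Build one kselftest case per (protocol, speed) pair so the runner
# reports pass/fail for each combination individually.
def make_case(protocol, speed, check):
    def case():
        check(protocol, speed)
    case.__name__ = f"test_throughput_{protocol.lower()}_{speed}"
    return case

def generate_cases(protocols, speeds, check):
    return [make_case(p, s, check) for p in protocols for s in speeds]
```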

You can find all the changes in the next revision.

Thanks,
Mohan Prasad J
