Message-ID: <20260129180320.1c69a0c4@kernel.org>
Date: Thu, 29 Jan 2026 18:03:20 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Michael Chan <michael.chan@...adcom.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, andrew+netdev@...n.ch, pavan.chebbi@...adcom.com,
andrew.gospodarek@...adcom.com
Subject: Re: [PATCH net 4/4] selftests: drv-net: rss_ctx: test RSS contexts
persist after ifdown/up
On Wed, 28 Jan 2026 22:16:46 -0800 Michael Chan wrote:
> From: Pavan Chebbi <pavan.chebbi@...adcom.com>
>
> Add tests to verify
> 1. that RSS contexts persist across interface down/up
> 2. that RSS contexts persist across interface down/up along
> with their associated Ntuple filters
>
> Testing on bnxt_en:
>
> TAP version 13
> 1..1
> # timeout set to 0
> # selftests: drivers/net/hw: rss_ctx.py
> # TAP version 13
> # 1..2
> # ok 1 rss_ctx.test_rss_context_persist_ifupdown
> # ok 2 rss_ctx.test_rss_context_ntuple_persist_ifupdown
> # # Totals: pass:2 fail:0 xfail:0 xpass:0 skip:0 error:0
> ok 1 selftests: drivers/net/hw: rss_ctx.py
Thanks for the test. Please make sure that
ruff check $file
is clean.
> diff --git a/tools/testing/selftests/drivers/net/hw/rss_ctx.py b/tools/testing/selftests/drivers/net/hw/rss_ctx.py
> index ed7e405682f0..4e46c5931c7f 100755
> --- a/tools/testing/selftests/drivers/net/hw/rss_ctx.py
> +++ b/tools/testing/selftests/drivers/net/hw/rss_ctx.py
> @@ -809,6 +809,119 @@ def test_rss_default_context_rule(cfg):
> 'noise' : (0, 1) })
>
>
Please put the
@ksft_disruptive
decorator before test cases which may disconnect the machine (take the
link down). We won't be able to run them remotely.
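For example (assuming the decorator is exported by the ksft lib alongside
the helpers this file already imports):

@ksft_disruptive
def test_rss_context_persist_ifupdown(cfg):
    ...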
> +def test_rss_context_persist_ifupdown(cfg):
> + """
> + Check that RSS contexts persist across an interface down/up cycle.
> + """
> +
> + require_context_cnt(cfg, 10)
Why 10? I don't think we're gaining any coverage with > 2.
> + # Create 10 RSS contexts and store their IDs and configurations
> + ctx_ids = []
> + ctx_configs_before = {}
appears unused
> + try:
> + for i in range(10):
> + ctx_id = ethtool_create(cfg, "-X", "context new")
> + ctx_ids.append(ctx_id)
> + defer(ethtool, f"-X {cfg.ifname} context {ctx_id} delete")
> + except CmdExitFailure:
> + raise KsftSkipEx(f"Could only create {len(ctx_ids)} contexts, test requires 10")
> +
> + # Bring interface down
these comments are pointless and make the whole thing look AI-generated
> + ip(f"link set dev {cfg.ifname} down")
> +
> + # Bring interface back up
> + ip(f"link set dev {cfg.ifname} up")
> +
> + # Wait for interface to be fully up
> + cfg.wait_hw_stats_settle()
Why?
> + # Verify all 10 contexts still exist after ifup
> + missing_contexts = []
> + persisted_contexts = []
> +
> + for ctx_id in ctx_ids:
> + try:
> + data = get_rss(cfg, context=ctx_id)
> + _rss_key_check(cfg, data=data, context=ctx_id)
> + persisted_contexts.append(ctx_id)
> + except CmdExitFailure:
> + missing_contexts.append(ctx_id)
> + ksft_pr(f"Context {ctx_id} is missing after ifup")
You can use netlink to get them all:
ctxs = cfg.ethnl.rss_get({}, dump=True)
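Untested sketch of the full check; it assumes the dump entries carry
'header'/'dev-index' and 'context' keys per the ethtool netlink spec,
and that cfg.ifindex is set like in the other drv-net tests:

# fetch every RSS context in one dump instead of one ethtool -x per context
ctxs = cfg.ethnl.rss_get({}, dump=True)
found = {c.get("context", 0) for c in ctxs
         if c.get("header", {}).get("dev-index") == cfg.ifindex}
ksft_eq(set(ctx_ids) - found, set(),
        comment="contexts lost across ifdown/ifup")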
> +def test_rss_context_ntuple_persist_ifupdown(cfg):
> + """
> + Test that RSS contexts and their associated ntuple filters persist across
> + an interface down/up cycle.
> + """
> +
> + require_ntuple(cfg)
> + require_context_cnt(cfg, 10)
> +
> + # Create 10 RSS contexts with ntuple filters
> + ctx_ids = []
> + ntuple_ids = []
> + ports = []
> +
> + try:
> + for i in range(10):
> + # Create RSS context
> + ctx_id = ethtool_create(cfg, "-X", "context new")
> + ctx_ids.append(ctx_id)
> + defer(ethtool, f"-X {cfg.ifname} context {ctx_id} delete")
> +
> + # Create ntuple filter for this context
> + port = rand_port()
> + ports.append(port)
> + flow = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port} context {ctx_id}"
> + ntuple_id = ethtool_create(cfg, "-N", flow)
> + ntuple_ids.append(ntuple_id)
> + defer(ethtool, f"-N {cfg.ifname} delete {ntuple_id}")
> +
> + except CmdExitFailure:
> + raise KsftSkipEx(f"Could only create {len(ctx_ids)} contexts with ntuple filters, test requires 10")
> +
> + # Bring interface down
> + ip(f"link set dev {cfg.ifname} down")
> +
> + # Bring interface back up
> + ip(f"link set dev {cfg.ifname} up")
> +
> + # Wait for interface to be fully up
> + cfg.wait_hw_stats_settle()
> +
> + # Verify all contexts and ntuple rules still exist after ifup
> + missing_contexts = []
> + persisted_contexts = []
> + missing_ntuple_rules = []
> + persisted_ntuple_rules = []
> + broken_associations = []
> +
> + for i, ctx_id in enumerate(ctx_ids):
> + # Check if context persists
> + try:
> + data = get_rss(cfg, context=ctx_id)
> + _rss_key_check(cfg, data=data, context=ctx_id)
> + persisted_contexts.append(ctx_id)
> + except CmdExitFailure:
> + missing_contexts.append(ctx_id)
> + ksft_pr(f"Context {ctx_id} is missing after ifup")
> + continue
> +
> + # Check if ntuple rule persists
> + ntuple_id = ntuple_ids[i]
> + try:
> + _ntuple_rule_check(cfg, ntuple_id, ctx_id)
> + persisted_ntuple_rules.append(ntuple_id)
> + except CmdExitFailure:
> + missing_ntuple_rules.append(ntuple_id)
> + ksft_pr(f"Ntuple rule {ntuple_id} is missing after ifup")
> + except Exception as e:
> + broken_associations.append((ntuple_id, ctx_id))
> + ksft_pr(f"Ntuple rule {ntuple_id} exists but is not properly associated with context {ctx_id}: {e}")
Not sure why this is a separate test, TBH. You can remove the test
without the ntuple filters. The two cases I would have expected based
on the patches are:
test 1:
- add a couple of contexts
- add an n-tuple filter to one of the contexts
- ifdown/ifup
- check the contexts and filters are there
- run some traffic to make sure it flows right
test 2:
- ifdown
- add a couple of contexts
- add an n-tuple filter to one of the contexts
- ifup
- check the contexts and filters are there
- run some traffic to make sure it flows right
You can probably have one shared implementation and pass a param to
it to tell it whether to "pre-down" the interface.
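Very rough sketch of what I mean (untested; the helper and test names are
made up, the netlink dump parsing is the same assumption as above, and the
traffic check is left out):

def _test_rss_ctx_ntuple_persist(cfg, pre_down):
    require_ntuple(cfg)
    require_context_cnt(cfg, 2)

    # for the "create while down" flavor take the link down up front
    if pre_down:
        ip(f"link set dev {cfg.ifname} down")
        defer(ip, f"link set dev {cfg.ifname} up")

    ctx_ids = []
    for _ in range(2):
        ctx_id = ethtool_create(cfg, "-X", "context new")
        ctx_ids.append(ctx_id)
        defer(ethtool, f"-X {cfg.ifname} context {ctx_id} delete")

    # single n-tuple filter steering one port to the first context
    port = rand_port()
    flow = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port} context {ctx_ids[0]}"
    ntuple_id = ethtool_create(cfg, "-N", flow)
    defer(ethtool, f"-N {cfg.ifname} delete {ntuple_id}")

    if not pre_down:
        ip(f"link set dev {cfg.ifname} down")
    ip(f"link set dev {cfg.ifname} up")

    # verify via one netlink dump, as suggested above
    ctxs = cfg.ethnl.rss_get({}, dump=True)
    found = {c.get("context", 0) for c in ctxs
             if c.get("header", {}).get("dev-index") == cfg.ifindex}
    ksft_eq(set(ctx_ids) - found, set(),
            comment="contexts lost across the down/up cycle")

    _ntuple_rule_check(cfg, ntuple_id, ctx_ids[0])
    # + send traffic to the filtered port and check which queues got it

@ksft_disruptive
def test_rss_context_ntuple_persist_ifupdown(cfg):
    _test_rss_ctx_ntuple_persist(cfg, pre_down=False)

@ksft_disruptive
def test_rss_context_ntuple_create_down(cfg):
    _test_rss_ctx_ntuple_persist(cfg, pre_down=True)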
--
pw-bot: cr