Message-ID: <38cb3493-1b13-4b8a-b84c-81a6845d876f@redhat.com>
Date: Thu, 10 Jul 2025 11:45:19 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Krishna Kumar <krikku@...il.com>, netdev@...r.kernel.org,
davem@...emloft.net, edumazet@...gle.com
Cc: tom@...bertland.com, bhutchings@...arflare.com, kuba@...nel.org,
horms@...nel.org, sdf@...ichev.me, kuniyu@...gle.com, ahmed.zaki@...el.com,
aleksander.lobakin@...el.com, atenart@...nel.org, jdamato@...tly.com,
krishna.ku@...pkart.com
Subject: Re: [PATCH] net: Fix RPS table slot collision overwriting flow
Hi,
The patch subject is slightly misleading, as this is a control plane
improvement more than a fix. Also please specify the target tree
('net-next' in this case) in the subj prefix.
On 7/8/25 10:15 AM, Krishna Kumar wrote:
> +/**
> + * rps_flow_is_active - check whether the flow is recently active.
> + * @rflow: Specific flow to check activity.
> + * @flow_table: Check activity against the flow_table's size.
> + * @cpu: CPU saved in @rflow.
> + *
> + * If the CPU has processed many packets since the flow's last activity
> + * (beyond 10 times the table size), the flow is considered stale.
> + *
> + * Return values:
> + * True: Flow has recent activity.
> + * False: Flow does not have recent activity.
> + */
> +static inline bool rps_flow_is_active(struct rps_dev_flow *rflow,
> +				       struct rps_dev_flow_table *flow_table,
> +				       unsigned int cpu)
No 'inline' functions in C files, please; let the compiler decide.
> +{
> +	return cpu < nr_cpu_ids &&
> +	       ((int)(READ_ONCE(per_cpu(softnet_data, cpu).input_queue_head) -
> +		      READ_ONCE(rflow->last_qtail)) < (int)(10 << flow_table->log));
> +}
> +
>  static struct rps_dev_flow *
>  set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
> -	    struct rps_dev_flow *rflow, u16 next_cpu)
> +	    struct rps_dev_flow *rflow, u16 next_cpu, u32 hash,
> +	    u32 flow_id)
>  {
>  	if (next_cpu < nr_cpu_ids) {
>  		u32 head;
> @@ -4847,8 +4870,9 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>  		struct netdev_rx_queue *rxqueue;
>  		struct rps_dev_flow_table *flow_table;
>  		struct rps_dev_flow *old_rflow;
> +		struct rps_dev_flow *tmp_rflow;
> +		unsigned int tmp_cpu;
>  		u16 rxq_index;
> -		u32 flow_id;
>  		int rc;
> 
>  		/* Should we steer this flow to a different hardware queue? */
> @@ -4863,14 +4887,54 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
>  		flow_table = rcu_dereference(rxqueue->rps_flow_table);
>  		if (!flow_table)
>  			goto out;
> -		flow_id = rfs_slot(skb_get_hash(skb), flow_table);
Consolidating the flow_id computation in the caller looks like a nice
improvement, but not strictly related to rest of the changes here. I
think it should be moved into a separate patch.
/P