Message-ID: <40be3195-62e0-483a-9448-cf8a342d95f6@redhat.com>
Date: Thu, 8 Jan 2026 12:16:27 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Parvathi Pudi <parvathi@...thit.com>, andrew+netdev@...n.ch,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
danishanwar@...com, rogerq@...nel.org, pmohan@...thit.com,
basharath@...thit.com, afd@...com
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, alok.a.tiwari@...cle.com,
horms@...nel.org, pratheesh@...com, j-rameshbabu@...com, vigneshr@...com,
praneeth@...com, srk@...com, rogerq@...com, krishna@...thit.com,
mohan@...thit.com
Subject: Re: [PATCH net-next v11 1/3] net: ti: icssm-prueth: Add helper
functions to configure and maintain FDB
On 1/5/26 1:23 PM, Parvathi Pudi wrote:
> +static void icssm_prueth_sw_fdb_update_index_tbl(struct prueth *prueth,
> + u16 left, u16 right)
> +{
> + unsigned int hash, hash_prev;
> + u8 mac[ETH_ALEN];
> + unsigned int i;
> +
> + /* To ensure we don't improperly update the
> + * bucket index, initialize with an invalid
> + * hash in case we are in leftmost slot
> + */
> + hash_prev = 0xff;
Why is 0xff an invalid index if the hash table size is 256?
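If icssm_prueth_sw_fdb_hash() can return any value in 0..255 here, 0xff
is a legitimate bucket and the sentinel can collide with a real hash.
Something along these lines (made-up macro name, only to illustrate an
out-of-range sentinel) would avoid the ambiguity:

#define FDB_HASH_INVALID	0x100	/* larger than any 8-bit hash value */

	unsigned int hash_prev = FDB_HASH_INVALID;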
> +
> + if (left > 0) {
> + memcpy_fromio(mac, FDB_MAC_TBL_ENTRY(left - 1)->mac, ETH_ALEN);
> + hash_prev = icssm_prueth_sw_fdb_hash(mac);
> + }
> +
> + /* For each moved element, update the bucket index */
> + for (i = left; i <= right; i++) {
> + memcpy_fromio(mac, FDB_MAC_TBL_ENTRY(i)->mac, ETH_ALEN);
> + hash = icssm_prueth_sw_fdb_hash(mac);
> +
> + /* Only need to update buckets once */
> + if (hash != hash_prev)
> + writew(i, &FDB_IDX_TBL_ENTRY(hash)->bucket_idx);
> +
> + hash_prev = hash;
> + }
> +}
> +
> +static struct fdb_mac_tbl_entry __iomem *
> +icssm_prueth_sw_find_free_mac(struct prueth *prueth, struct fdb_index_tbl_entry
> + __iomem *bucket_info, u8 suggested_mac_tbl_idx,
> + bool *update_indexes, const u8 *mac)
> +{
> + s16 empty_slot_idx = 0, left = 0, right = 0;
> + unsigned int mti = suggested_mac_tbl_idx;
> + struct fdb_mac_tbl_array __iomem *mt;
> + struct fdb_tbl *fdb;
> + u8 flags;
> +
> + fdb = prueth->fdb_tbl;
> + mt = fdb->mac_tbl_a;
> +
> + flags = readb(&FDB_MAC_TBL_ENTRY(mti)->flags);
> + if (!(flags & FLAG_ACTIVE)) {
> + /* Claim the entry */
> + flags |= FLAG_ACTIVE;
> + writeb(flags, &FDB_MAC_TBL_ENTRY(mti)->flags);
> +
> + return FDB_MAC_TBL_ENTRY(mti);
> + }
> +
> + if (fdb->total_entries == FDB_MAC_TBL_MAX_ENTRIES)
> + return NULL;
> +
> + empty_slot_idx = icssm_prueth_sw_fdb_empty_slot_left(mt, mti);
> + if (empty_slot_idx == -1) {
> + /* Nothing available on the left. But table isn't full
> + * so there must be space to the right,
> + */
> + empty_slot_idx = icssm_prueth_sw_fdb_empty_slot_right(mt, mti);
> +
> + /* Shift right */
> + left = mti;
> + right = empty_slot_idx;
> + icssm_prueth_sw_fdb_move_range_right(prueth, left, right);
> +
> + /* Claim the entry */
> + flags = readb(&FDB_MAC_TBL_ENTRY(mti)->flags);
> + flags |= FLAG_ACTIVE;
> + writeb(flags, &FDB_MAC_TBL_ENTRY(mti)->flags);
> +
> + memcpy_toio(FDB_MAC_TBL_ENTRY(mti)->mac, mac, ETH_ALEN);
> +
> + /* There is a chance we moved something in a
> + * different bucket, update index table
> + */
> + icssm_prueth_sw_fdb_update_index_tbl(prueth, left, right);
> +
> + return FDB_MAC_TBL_ENTRY(mti);
AI review found what looks like a valid issue above:
"""
In this branch, FLAG_ACTIVE is set on FDB_MAC_TBL_ENTRY(mti) but the
function returns FDB_MAC_TBL_ENTRY(empty_slot_idx). The caller in
icssm_prueth_sw_insert_fdb_entry() then writes the MAC address to the
returned entry (empty_slot_idx), leaving entry mti marked active with
stale data.
Should FLAG_ACTIVE be set on empty_slot_idx instead? For comparison,
the other paths in this function (lines 270-277, 294-306, and 330-342)
all set FLAG_ACTIVE on the same entry they return and write MAC data to.
"""
Generally speaking, the hash table handling looks complex and
error-prone. Is keeping the collided entries sorted really a win? I
guess always head-inserting would simplify the code a bit; see the
generic sketch below.
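Purely as an illustration (generic code, not the shared firmware table
layout):

struct fdb_node {
	u8 mac[ETH_ALEN];
	struct fdb_node *next;
};

static struct fdb_node *fdb_buckets[256];

/* head insertion: only the new node and the bucket head are touched,
 * no range shifting and no index table fixup
 */
static void fdb_head_insert(struct fdb_node *n, unsigned int hash)
{
	n->next = fdb_buckets[hash];
	fdb_buckets[hash] = n;
}

That of course assumes the table format can carry some kind of per-entry
next index, which may not be an option for the firmware.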
/P