Message-ID: <66dff631-3f6d-4a7c-b0f2-627c25c49967@redhat.com>
Date: Tue, 26 Aug 2025 11:32:57 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: John Ousterhout <ouster@...stanford.edu>, netdev@...r.kernel.org
Cc: edumazet@...gle.com, horms@...nel.org, kuba@...nel.org
Subject: Re: [PATCH net-next v15 05/15] net: homa: create homa_peer.h and
 homa_peer.c

On 8/18/25 10:55 PM, John Ousterhout wrote:
> +/**
> + * homa_peer_rcu_callback() - This function is invoked as the callback
> + * for an invocation of call_rcu. It just marks a peertab to indicate that
> + * it was invoked.
> + * @head:    Contains information used to locate the peertab.
> + */
> +void homa_peer_rcu_callback(struct rcu_head *head)
> +{
> +	struct homa_peertab *peertab;
> +
> +	peertab = container_of(head, struct homa_peertab, rcu_head);
> +	atomic_set(&peertab->call_rcu_pending, 0);
> +}

The freeing scheme is quite convoluted and differs from the usual RCU
handling. Why don't you simply call_rcu() on the given peer once its
refcount reaches zero?
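
Something like the following would be closer to the usual pattern
(just a sketch: the function names are made up, and it assumes a
per-peer rcu_head plus the refcount_t suggested below, instead of the
single rcu_head in struct homa_peertab):

static void homa_peer_rcu_free(struct rcu_head *head)
{
	struct homa_peer *peer = container_of(head, struct homa_peer,
					      rcu_head);

	kfree(peer);	/* or whatever teardown homa_peer_free() does */
}

void homa_peer_release(struct homa_peer *peer)
{
	if (refcount_dec_and_test(&peer->refs))
		call_rcu(&peer->rcu_head, homa_peer_rcu_free);
}

No dead list and no polling on call_rcu_pending: readers are covered
by the grace period and the last reference schedules the free.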

> +
> +/**
> + * homa_peer_free_dead() - Release peers on peertab->dead_peers
> + * if possible.
> + * @peertab:    Check the dead peers here.
> + */
> +void homa_peer_free_dead(struct homa_peertab *peertab)
> +	__must_hold(peertab->lock)
> +{
> +	struct homa_peer *peer, *tmp;
> +
> +	/* A dead peer can be freed only if:
> +	 * (a) there are no call_rcu calls pending (if there are, it's
> +	 *     possible that a new reference might get created for the
> +	 *     peer)
> +	 * (b) the peer's reference count is zero.
> +	 */
> +	if (atomic_read(&peertab->call_rcu_pending))
> +		return;
> +	list_for_each_entry_safe(peer, tmp, &peertab->dead_peers, dead_links) {
> +		if (atomic_read(&peer->refs) == 0) {
> +			list_del_init(&peer->dead_links);
> +			homa_peer_free(peer);
> +		}
> +	}
> +}
> +
> +/**
> + * homa_peer_wait_dead() - Don't return until all of the dead peers have
> + * been freed.
> + * @peertab:    Overall information about peers, which includes a dead list.
> + *
> + */
> +void homa_peer_wait_dead(struct homa_peertab *peertab)
> +{
> +	while (1) {
> +		spin_lock_bh(&peertab->lock);
> +		homa_peer_free_dead(peertab);
> +		if (list_empty(&peertab->dead_peers)) {
> +			spin_unlock_bh(&peertab->lock);
> +			return;
> +		}
> +		spin_unlock_bh(&peertab->lock);
> +	}
> +}

Apparently unused.

> +/**
> + * homa_dst_refresh() - This method is called when the dst for a peer is
> + * obsolete; it releases that dst and creates a new one.
> + * @peertab:  Table containing the peer.
> + * @peer:     Peer whose dst is obsolete.
> + * @hsk:      Socket that will be used to transmit data to the peer.
> + */
> +void homa_dst_refresh(struct homa_peertab *peertab, struct homa_peer *peer,
> +		      struct homa_sock *hsk)
> +{
> +	struct dst_entry *dst;
> +
> +	dst = homa_peer_get_dst(peer, hsk);
> +	if (IS_ERR(dst))
> +		return;
> +	dst_release(peer->dst);
> +	peer->dst = dst;

Why does the above not need any lock? Can multiple RPCs race on the
same peer concurrently?
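
If two callers can reach this point for the same peer concurrently,
the release/assign pair above can leak or double-release a dst. One
possible shape (just a sketch; the per-peer lock is an assumption):

	struct dst_entry *dst, *old;

	dst = homa_peer_get_dst(peer, hsk);
	if (IS_ERR(dst))
		return;

	spin_lock_bh(&peer->lock);	/* assumed per-peer lock */
	old = peer->dst;
	peer->dst = dst;
	spin_unlock_bh(&peer->lock);
	dst_release(old);

or an RCU-managed dst pointer if readers cannot take the lock.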

> +/**
> + * struct homa_peer - One of these objects exists for each machine that we
> + * have communicated with (either as client or server).
> + */
> +struct homa_peer {
> +	/** @ht_key: The hash table key for this peer in peertab->ht. */
> +	struct homa_peer_key ht_key;
> +
> +	/**
> +	 * @ht_linkage: Used by the rhashtable implementation to link
> +	 * this peer into peertab->ht.
> +	 */
> +	struct rhash_head ht_linkage;
> +
> +	/** @dead_links: Used to link this peer into peertab->dead_peers. */
> +	struct list_head dead_links;
> +
> +	/**
> +	 * @refs: Number of unmatched calls to homa_peer_hold; it's not safe
> +	 * to free this object until the reference count is zero.
> +	 */
> +	atomic_t refs ____cacheline_aligned_in_smp;

Please use refcount_t instead.
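
i.e. something like (sketch, reusing the names from this patch):

	refcount_t refs;

	refcount_set(&peer->refs, 1);		/* at creation */
	refcount_inc(&peer->refs);		/* homa_peer_hold() */
	if (refcount_dec_and_test(&peer->refs))	/* last put */
		homa_peer_free(peer);		/* or call_rcu(), as above */

refcount_t saturates and warns on overflow/underflow, which a bare
atomic_t does not.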

> +/**
> + * homa_peer_hash() - Hash function used for @peertab->ht.
> + * @data:    Pointer to key for which a hash is desired. Must actually
> + *           be a struct homa_peer_key.
> + * @dummy:   Not used
> + * @seed:    Seed for the hash.
> + * Return:   A 32-bit hash value for the given key.
> + */
> +static inline u32 homa_peer_hash(const void *data, u32 dummy, u32 seed)
> +{
> +	/* This is MurmurHash3, used instead of the jhash default because it
> +	 * is faster (25 ns vs. 40 ns as of May 2025).
> +	 */
> +	BUILD_BUG_ON(sizeof(struct homa_peer_key) & 0x3);

It's likely worth placing the hash function implementation in a
standalone header.
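
e.g. a new homa_hash.h along these lines (just a sketch: file and
function names are suggestions, and the final mixing steps follow
reference MurmurHash3 since this hunk is truncated):

#include <linux/types.h>

/* MurmurHash3 (x86 32-bit variant) over an array of 32-bit words. */
static inline u32 homa_murmur3_32(const u32 *key, u32 len_words, u32 seed)
{
	const u32 c1 = 0xcc9e2d51;
	const u32 c2 = 0x1b873593;
	u32 h = seed;
	size_t i;

	for (i = 0; i < len_words; i++) {
		u32 k = key[i];

		k *= c1;
		k = (k << 15) | (k >> (32 - 15));
		k *= c2;

		h ^= k;
		h = (h << 13) | (h >> (32 - 13));
		h = h * 5 + 0xe6546b64;
	}

	h ^= len_words * 4;	/* total number of input bytes */
	h ^= h >> 16;
	h *= 0x85ebca6b;
	h ^= h >> 13;
	h *= 0xc2b2ae35;
	h ^= h >> 16;
	return h;
}

homa_peer_hash() would then reduce to the BUILD_BUG_ON() plus a
one-line call, and other tables could reuse the helper.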

> +	const u32 len = sizeof(struct homa_peer_key) >> 2;
> +	const u32 c1 = 0xcc9e2d51;
> +	const u32 c2 = 0x1b873593;
> +	const u32 *key = data;
> +	u32 h = seed;
> +
> +	for (size_t i = 0; i < len; i++) {
> +		u32 k = key[i];
> +
> +		k *= c1;
> +		k = (k << 15) | (k >> (32 - 15));
> +		k *= c2;
> +
> +		h ^= k;
> +		h = (h << 13) | (h >> (32 - 13));
> +		h = h * 5 + 0xe6546b64;
> +	}
> +
> +	h ^= len * 4;  // Total number of input bytes

Please avoid C99 comments
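
i.e.:

	h ^= len * 4;	/* Total number of input bytes */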

/P

