Message-ID: <028f492f-41db-4c70-9527-cf0db03da4df@redhat.com>
Date: Thu, 23 Jan 2025 20:01:57 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: John Ousterhout <ouster@...stanford.edu>, netdev@...r.kernel.org
Cc: edumazet@...gle.com, horms@...nel.org, kuba@...nel.org
Subject: Re: [PATCH net-next v6 07/12] net: homa: create homa_sock.h and
 homa_sock.c

On 1/15/25 7:59 PM, John Ousterhout wrote:
> +	spin_unlock_bh(&socktab->write_lock);
> +
> +	return homa_socktab_next(scan);
> +}
> +
> +/**
> + * homa_socktab_next() - Return the next socket in an iteration over a socktab.
> + * @scan:      State of the scan.
> + *
> + * Return:     The next socket in the table, or NULL if the iteration has
> + *             returned all of the sockets in the table. Sockets are not
> + *             returned in any particular order. It's possible that the
> + *             returned socket has been destroyed.
> + */
> +struct homa_sock *homa_socktab_next(struct homa_socktab_scan *scan)
> +{
> +	struct homa_socktab_links *links;
> +	struct homa_sock *hsk;
> +
> +	while (1) {
> +		while (!scan->next) {
> +			struct hlist_head *bucket;
> +
> +			scan->current_bucket++;
> +			if (scan->current_bucket >= HOMA_SOCKTAB_BUCKETS)
> +				return NULL;
> +			bucket = &scan->socktab->buckets[scan->current_bucket];
> +			scan->next = (struct homa_socktab_links *)
> +				      rcu_dereference(hlist_first_rcu(bucket));

The only caller of this function so far does not hold the RCU read
lock: you should see a splat here if you build and run this code with:

CONFIG_LOCKDEP=y

(which in turn is highly encouraged)
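
At the call site the scan then needs to look roughly like this (just a
sketch; I'm assuming a homa_socktab_start_scan() counterpart to
homa_socktab_end_scan(), the exact name doesn't matter):

	rcu_read_lock();
	for (hsk = homa_socktab_start_scan(socktab, &scan); hsk;
	     hsk = homa_socktab_next(&scan)) {
		/* The rcu_dereference() calls inside homa_socktab_next()
		 * are now provably protected, so lockdep stays quiet.
		 */
	}
	rcu_read_unlock();
	homa_socktab_end_scan(&scan);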

> +		}
> +		links = scan->next;
> +		hsk = links->sock;
> +		scan->next = (struct homa_socktab_links *)
> +				rcu_dereference(hlist_next_rcu(&links->hash_links));

homa_socktab_links is embedded into the homa sock; if the RCU protection
is released and re-acquired after a homa_socktab_next() call, there is
no guarantee that links/hsk are still around, and the statement above
could cause a use-after-free.

This homa_socktab thing looks quite complex. A simpler implementation
could use a plain RCU list _and_ acquire a reference to the hsk before
releasing the RCU lock.
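
E.g. for a plain lookup the usual pattern is something along these
lines (rough sketch, untested, field names guessed from the context):

	struct homa_sock *hsk, *result = NULL;

	rcu_read_lock();
	hlist_for_each_entry_rcu(hsk, &socktab->buckets[bucket],
				 socktab_links.hash_links) {
		if (hsk->port != port)
			continue;

		/* Pin the socket before leaving the RCU section; a
		 * socket already being destroyed is treated as not
		 * found.
		 */
		if (refcount_inc_not_zero(&hsk->sock.sk_refcnt))
			result = hsk;
		break;
	}
	rcu_read_unlock();

	/* The caller now owns a reference and must sock_put() it. */
	return result;

The same trick applied to the scan would let homa_socktab_next() return
sockets that are guaranteed to stay alive after the RCU lock is
dropped, instead of possibly returning already-destroyed sockets as the
kernel-doc above admits.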

> +		return hsk;
> +	}
> +}
> +
> +/**
> + * homa_socktab_end_scan() - Must be invoked on completion of each scan
> + * to clean up state associated with the scan.
> + * @scan:      State of the scan.
> + */
> +void homa_socktab_end_scan(struct homa_socktab_scan *scan)
> +{
> +	spin_lock_bh(&scan->socktab->write_lock);
> +	list_del(&scan->scan_links);
> +	spin_unlock_bh(&scan->socktab->write_lock);
> +}
> +
> +/**
> + * homa_sock_init() - Constructor for homa_sock objects. This function
> + * initializes only the parts of the socket that are owned by Homa.
> + * @hsk:    Object to initialize.
> + * @homa:   Homa implementation that will manage the socket.
> + *
> + * Return: 0 for success, otherwise a negative errno.
> + */
> +int homa_sock_init(struct homa_sock *hsk, struct homa *homa)
> +{
> +	struct homa_socktab *socktab = homa->port_map;
> +	int starting_port;
> +	int result = 0;
> +	int i;
> +
> +	spin_lock_bh(&socktab->write_lock);

A single contended lock for the whole homa sock table? Why don't you
use per-bucket locks?
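
I.e. something like (just to show the shape):

	struct homa_socktab_bucket {
		spinlock_t lock;		/* protects @sockets */
		struct hlist_head sockets;
	};

	struct homa_socktab {
		struct homa_socktab_bucket buckets[HOMA_SOCKTAB_BUCKETS];
	};

so that homa_sock_init() only grabs the lock of the bucket the chosen
port hashes to, and sockets landing in different buckets don't contend
with each other.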

[...]
> +struct homa_rpc_bucket {
> +	/**
> +	 * @lock: serves as a lock both for this bucket (e.g., when
> +	 * adding and removing RPCs) and also for all of the RPCs in
> +	 * the bucket. Must be held whenever manipulating an RPC in
> +	 * this bucket. This dual purpose permits clean and safe
> +	 * deletion and garbage collection of RPCs.
> +	 */
> +	spinlock_t lock;
> +
> +	/** @rpcs: list of RPCs that hash to this bucket. */
> +	struct hlist_head rpcs;
> +
> +	/**
> +	 * @id: identifier for this bucket, used in error messages etc.
> +	 * It's the index of the bucket within its hash table bucket
> +	 * array, with an additional offset to separate server and
> +	 * client RPCs.
> +	 */
> +	int id;

On 64-bit arches this struct will have two 4-byte holes. If you reorder
the fields:
	spinlock_t lock;
	int id;
	struct hlist_head rpcs;

the struct size will decrease by 8 bytes.
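
With a plain 4-byte spinlock_t (no lockdep/debug options) the current
layout is roughly:

	spinlock_t lock;	/* offset  0, size 4 */
				/* 4-byte hole       */
	struct hlist_head rpcs;	/* offset  8, size 8 */
	int id;			/* offset 16, size 4 */
				/* 4-byte tail pad   */
				/* total: 24 bytes   */

vs. 16 bytes after the reordering; pahole will show the holes for you.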

> +};
> +
> +/**
> + * define HOMA_CLIENT_RPC_BUCKETS - Number of buckets in hash tables for
> + * client RPCs. Must be a power of 2.
> + */
> +#define HOMA_CLIENT_RPC_BUCKETS 1024
> +
> +/**
> + * define HOMA_SERVER_RPC_BUCKETS - Number of buckets in hash tables for
> + * server RPCs. Must be a power of 2.
> + */
> +#define HOMA_SERVER_RPC_BUCKETS 1024
> +
> +/**
> + * struct homa_sock - Information about an open socket.
> + */
> +struct homa_sock {
> +	/* Info for other network layers. Note: IPv6 info (struct ipv6_pinfo
> +	 * comes at the very end of the struct, *after* Homa's data, if this
> +	 * socket uses IPv6).
> +	 */
> +	union {
> +		/** @sock: generic socket data; must be the first field. */
> +		struct sock sock;
> +
> +		/**
> +		 * @inet: generic Internet socket data; must also be the
> +		 * first field (contains sock as its first member).
> +		 */
> +		struct inet_sock inet;
> +	};

Why add this union? Just
	struct inet_sock inet;
would do.
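
I.e.:

	struct homa_sock {
		/* inet.sk must be the first field so that the usual
		 * struct sock <-> struct homa_sock casts keep working.
		 */
		struct inet_sock inet;

		/* ... Homa-specific fields ... */
	};

struct inet_sock already has struct sock as its first member, so
inet_sk() and the sk <-> hsk conversions behave exactly the same
without the union.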

/P

