Message-ID: <e3d43c09-7df2-4447-bcaa-7cec550bdf62@redhat.com>
Date: Tue, 26 Aug 2025 14:05:09 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: John Ousterhout <ouster@...stanford.edu>, netdev@...r.kernel.org
Cc: edumazet@...gle.com, horms@...nel.org, kuba@...nel.org
Subject: Re: [PATCH net-next v15 12/15] net: homa: create homa_incoming.c
On 8/18/25 10:55 PM, John Ousterhout wrote:
> +/**
> + * homa_dispatch_pkts() - Top-level function that processes a batch of packets,
> + * all related to the same RPC.
> + * @skb: First packet in the batch, linked through skb->next.
> + */
> +void homa_dispatch_pkts(struct sk_buff *skb)
> +{
> +#define MAX_ACKS 10
> + const struct in6_addr saddr = skb_canonical_ipv6_saddr(skb);
> + struct homa_data_hdr *h = (struct homa_data_hdr *)skb->data;
> + u64 id = homa_local_id(h->common.sender_id);
> + int dport = ntohs(h->common.dport);
> +
> + /* Used to collect acks from data packets so we can process them
> + * all at the end (can't process them inline because that may
> + * require locking conflicting RPCs). If we run out of space just
> + * ignore the extra acks; they'll be regenerated later through the
> + * explicit mechanism.
> + */
> + struct homa_ack acks[MAX_ACKS];
> + struct homa_rpc *rpc = NULL;
> + struct homa_sock *hsk;
> + struct homa_net *hnet;
> + struct sk_buff *next;
> + int num_acks = 0;
No blank lines in the variable declaration section, please, and the
stack usage feels a bit too high.
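
Something along these lines (untested; just moving the big comment
above the function and keeping the declarations contiguous) would be
more readable:

	const struct in6_addr saddr = skb_canonical_ipv6_saddr(skb);
	struct homa_data_hdr *h = (struct homa_data_hdr *)skb->data;
	u64 id = homa_local_id(h->common.sender_id);
	int dport = ntohs(h->common.dport);
	struct homa_ack acks[MAX_ACKS];	/* see comment above the function */
	struct homa_rpc *rpc = NULL;
	struct homa_sock *hsk;
	struct homa_net *hnet;
	struct sk_buff *next;
	int num_acks = 0;

On the stack side, acks[] alone costs MAX_ACKS * sizeof(struct
homa_ack) on every dispatch; since extra acks are simply dropped and
regenerated later anyway (per the comment), a smaller MAX_ACKS would
help.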
> +
> + /* Find the appropriate socket. */
> + hnet = homa_net_from_skb(skb);
> + hsk = homa_sock_find(hnet, dport);
> + if (!hsk || (!homa_is_client(id) && !hsk->is_server)) {
> + if (skb_is_ipv6(skb))
> + icmp6_send(skb, ICMPV6_DEST_UNREACH,
> + ICMPV6_PORT_UNREACH, 0, NULL, IP6CB(skb));
> + else
> + icmp_send(skb, ICMP_DEST_UNREACH,
> + ICMP_PORT_UNREACH, 0);
> + while (skb) {
> + next = skb->next;
> + kfree_skb(skb);
> + skb = next;
> + }
> + if (hsk)
> + sock_put(&hsk->sock);
> + return;
> + }
> +
> + /* Each iteration through the following loop processes one packet. */
> + for (; skb; skb = next) {
> + h = (struct homa_data_hdr *)skb->data;
> + next = skb->next;
> +
> + /* Relinquish the RPC lock temporarily if it's needed
> + * elsewhere.
> + */
> + if (rpc) {
> + int flags = atomic_read(&rpc->flags);
> +
> + if (flags & APP_NEEDS_LOCK) {
> + homa_rpc_unlock(rpc);
> +
> + /* This short spin is needed to ensure that the
> + * other thread gets the lock before this thread
> + * grabs it again below (the need for this
> + * was confirmed experimentally in 2/2025;
> + * without it, the handoff fails 20-25% of the
> + * time). Furthermore, the call to homa_spin
> + * seems to allow the other thread to acquire
> + * the lock more quickly.
> + */
> + homa_spin(100);
> + homa_rpc_lock(rpc);
This can still fail for a number of reasons, e.g. if multiple threads
are spinning on the rpc lock, or on fully preemptible kernels. You need
to ensure that either:
- the loop works just fine even when the handover fails frequently,
  even without the homa_spin() call, or
- there is an explicit handover notification, as in the untested sketch
  below.
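
Something like this, assuming the thread that sets APP_NEEDS_LOCK
clears the flag right after it acquires the lock (untested, and it may
want an upper bound on the wait):

	if (flags & APP_NEEDS_LOCK) {
		homa_rpc_unlock(rpc);

		/* Wait for the other thread to confirm the handoff by
		 * clearing APP_NEEDS_LOCK, instead of spinning for a
		 * fixed amount of time and hoping it was enough.
		 */
		while (atomic_read(&rpc->flags) & APP_NEEDS_LOCK)
			cpu_relax();
		homa_rpc_lock(rpc);
	}

That removes the timing assumption entirely: this thread re-acquires
the lock only after the other side has confirmed it got it.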
/P