Message-ID: <412e6f7f0911192111jbc8b237sc619a54510219336@mail.gmail.com>
Date: Fri, 20 Nov 2009 13:11:39 +0800
From: Changli Gao <xiaosuo@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: "David S. Miller" <davem@...emloft.net>,
	Tom Herbert <therbert@...gle.com>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next-2.6] net: Xmit Packet Steering (XPS)

On Fri, Nov 20, 2009 at 12:58 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> Changli Gao wrote:
>> On Fri, Nov 20, 2009 at 7:46 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>> index 9977288..9e134f6 100644
>>> --- a/net/core/dev.c
>>> +++ b/net/core/dev.c
>>> @@ -2000,6 +2001,7 @@ gso:
>>>          */
>>>         rcu_read_lock_bh();
>>>
>>> +       skb->sending_cpu = cpu = smp_processor_id();
>>>         txq = dev_pick_tx(dev, skb);
>>>         q = rcu_dereference(txq->qdisc);
>>
>> I think assigning cpu to skb->sending_cpu just before calling
>> hard_start_xmit is better, because the CPU that dequeues the skb may
>> be another one.
>
> I want to record the application CPU, because I want the application CPU
> to call sock_wfree(), not the CPU that happened to dequeue the skb to
> transmit it in case of txq contention.
>

Got it.

>>
>>> @@ -2024,8 +2026,6 @@ gso:
>>>        Either shot noqueue qdisc, it is even simpler 8)
>>>      */
>>>     if (dev->flags & IFF_UP) {
>>> -           int cpu = smp_processor_id(); /* ok because BHs are off */
>>> -
>>>             if (txq->xmit_lock_owner != cpu) {
>>>
>>>                     HARD_TX_LOCK(dev, txq, cpu);
>>> @@ -2967,7 +2967,7 @@ static void net_rx_action(struct softirq_action *h)
>>>     }
>>> out:
>>>     local_irq_enable();
>>> -
>>> +   xps_flush();
>>
>> If there aren't any new skbs, the memory will be held forever. I know
>> you want to eliminate unnecessary IPIs; how about sending an IPI only
>> when a remote xps_pcpu_queue changes from empty to non-empty?
>
> I don't understand your remark, and don't see the problem yet.
>
> I send an IPI only to cpus that I know have at least one skb queued for them.
> For each cpu taking TX completion interrupts I have:
>
> One bitmask (xps_cpus) of cpus I will eventually send an IPI to at the end of net_rx_action()
>

You call xps_flush() in net_rx_action(). That means that if no new packets
arrive, xps_flush() will never be called, and the memory used by the queued
skbs will be held forever. Did I misunderstand? Your algorithm only works for
packet forwarding, not for packets sent from local sockets.

-- 
Regards,
Changli Gao(xiaosuo@...il.com)
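[Editor's note: for illustration only, not code from the thread's patch. The
sketch below shows the "send an IPI only on an empty -> non-empty transition"
idea suggested above, using skb->sending_cpu from the quoted hunk. The names
xps_pcpu_queue, xps_queue_for_completion() and xps_trigger_ipi() are
hypothetical, and the per-cpu lists would need skb_queue_head_init() at
initialization time.]

/*
 * Illustrative sketch, not from Eric's patch: queue an skb for sock_wfree()
 * on the cpu that sent it, and kick that cpu only when its per-cpu list
 * goes from empty to non-empty, so skbs are not stranded when
 * net_rx_action() never runs again on the completing cpu.
 */
#include <linux/skbuff.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(struct sk_buff_head, xps_pcpu_queue);

/* Hypothetical helper: e.g. raise NET_TX_SOFTIRQ on @cpu via an IPI. */
static void xps_trigger_ipi(int cpu);

static void xps_queue_for_completion(struct sk_buff *skb)
{
	int cpu = skb->sending_cpu;	/* recorded in dev_queue_xmit() */
	struct sk_buff_head *q = &per_cpu(xps_pcpu_queue, cpu);
	unsigned long flags;
	bool was_empty;

	/* Append to the sending cpu's completion list. */
	spin_lock_irqsave(&q->lock, flags);
	was_empty = skb_queue_empty(q);
	__skb_queue_tail(q, skb);
	spin_unlock_irqrestore(&q->lock, flags);

	/*
	 * Kick the remote cpu only on the empty -> non-empty transition:
	 * locally generated traffic with no rx still gets its skbs freed,
	 * while a busy queue is drained without extra IPIs.
	 */
	if (was_empty)
		xps_trigger_ipi(cpu);
}

[Tying the kick to the queue transition avoids the "held forever" case
raised above at the cost of one emptiness check per queued skb; this is a
sketch of the suggestion, not necessarily how the final patch resolved it.]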