Message-ID: <4B500AC0.4060909@gmail.com>
Date:	Fri, 15 Jan 2010 07:27:12 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Tom Herbert <therbert@...gle.com>
CC:	davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [PATCH v5] rps: Receive Packet Steering

On 14/01/2010 22:56, Tom Herbert wrote:
> This patch implements software receive side packet steering (RPS).  RPS
> distributes the load of received packet processing across multiple CPUs.
> 
> Problem statement: Protocol processing done in the NAPI context for received
> packets is serialized per device queue and becomes a bottleneck under high
> packet load.  This substantially limits the pps that can be achieved on a
> single-queue NIC and provides no scaling with multiple cores.
> 
> This solution queues packets early in the receive path on the backlog queues
> of other CPUs.  This allows protocol processing (e.g. IP and TCP) to be
> performed on packets in parallel.  For each device (or NAPI instance for
> a multi-queue device) a mask of CPUs is set to indicate the CPUs that can
> process packets for the device.  A CPU is selected on a per-packet basis by
> hashing the contents of the packet header (the TCP or UDP 4-tuple) and using
> the result to index into the CPU mask.  The IPI mechanism is used to raise
> networking receive softirqs between CPUs.  This effectively emulates in
> software what a multi-queue NIC can provide, but is generic, requiring no
> device support.
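
To make the per-packet CPU selection concrete, here is a small standalone
sketch of the idea.  It is illustration only, not code from the patch:
pick_rps_cpu() and the flat 64-bit mask are made up, and the kernel uses its
own cpumask/RPS map structures and arithmetic.

#include <stdint.h>
#include <stdio.h>

/* Illustration only -- pick the target CPU for a flow hash by indexing
 * into the set of CPUs enabled in the mask. */
static int pick_rps_cpu(uint32_t flow_hash, uint64_t cpu_mask)
{
	int weight = __builtin_popcountll(cpu_mask);

	if (weight == 0)
		return -1;			/* no CPUs enabled for this queue */

	int target = flow_hash % weight;	/* index into the set bits */

	for (int cpu = 0; cpu < 64; cpu++) {
		if (!(cpu_mask & (1ULL << cpu)))
			continue;
		if (target-- == 0)
			return cpu;
	}
	return -1;
}

int main(void)
{
	uint32_t hash = 0x9e3779b9;	/* hash of a TCP/UDP 4-tuple */
	uint64_t mask = 0x0b;		/* CPUs 0, 1 and 3 */

	printf("packet steered to CPU %d\n", pick_rps_cpu(hash, mask));
	return 0;
}

Since every packet of a flow produces the same hash, all packets of that flow
land on the same CPU, which preserves per-flow ordering as long as the mask is
left alone.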
> 
> Many devices now provide a hash over the 4-tuple on a per-packet basis
> (Toeplitz is popular).  This patch allows drivers to set the HW-reported hash
> in an skb field, and that value in turn is used to index into the RPS maps.
> Using the HW-generated hash can avoid cache misses on the packet when
> steering the packet to a remote CPU.
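
As an aside, the cache-miss argument can be shown with a tiny sketch: if the
receive descriptor already carries a hash, the steering decision never has to
read the (typically cache-cold) packet headers.  Everything below (struct
rx_desc, sw_flow_hash(), the FNV fallback) is hypothetical and only for
illustration; it is not the driver interface added by the patch.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct rx_desc {
	uint32_t hw_hash;	/* hash reported by the NIC, 0 if unsupported */
	const uint8_t *data;	/* packet contents, typically cache-cold */
	size_t len;
};

/* Fallback software hash: has to touch the header bytes (cache miss). */
static uint32_t sw_flow_hash(const uint8_t *data, size_t len)
{
	uint32_t h = 2166136261u;	/* FNV-1a over the first header bytes */

	for (size_t i = 0; i < len && i < 40; i++)
		h = (h ^ data[i]) * 16777619u;
	return h;
}

static uint32_t flow_hash(const struct rx_desc *desc)
{
	if (desc->hw_hash)		/* HW hash available: no packet access */
		return desc->hw_hash;
	return sw_flow_hash(desc->data, desc->len);
}

int main(void)
{
	static const uint8_t hdr[40] = { 0x45 };	/* fake header bytes */
	struct rx_desc with_hw = { .hw_hash = 0x12345678, .data = hdr, .len = sizeof(hdr) };
	struct rx_desc no_hw   = { .hw_hash = 0,          .data = hdr, .len = sizeof(hdr) };

	printf("hw-supplied hash: %08x\n", flow_hash(&with_hw));
	printf("software hash:    %08x\n", flow_hash(&no_hw));
	return 0;
}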
> 
> The CPU masks are set on a per-device basis in the sysfs variable
> /sys/class/net/<device>/rps_cpus.  This is a set of canonical bitmaps, one
> for each NAPI instance of the device.  For example:
> 
> echo "0b 0b0 0b00 0b000" > /sys/class/net/eth0/rps_cpus
> 
> would set maps for four NAPI instances on eth0.
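
For reference, each token written to rps_cpus is a hexadecimal CPU bitmap,
one per NAPI instance: "0b" is binary 1011, i.e. CPUs 0, 1 and 3 for the
first instance, "0b0" is CPUs 4, 5 and 7 for the second, and so on.  A
throwaway decoder (illustration only, not part of the patch):

#include <stdio.h>
#include <stdlib.h>

/* Print which CPUs a hexadecimal rps_cpus bitmap selects. */
static void print_cpus(const char *hexmask)
{
	unsigned long long mask = strtoull(hexmask, NULL, 16);

	printf("%6s ->", hexmask);
	for (int cpu = 0; mask; cpu++, mask >>= 1)
		if (mask & 1)
			printf(" %d", cpu);
	printf("\n");
}

int main(void)
{
	const char *maps[] = { "0b", "0b0", "0b00", "0b000" };

	for (int i = 0; i < 4; i++)
		print_cpus(maps[i]);	/* one bitmap per NAPI instance */
	return 0;
}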
> 
> Generally, we have found this technique increases the pps capability of a
> single-queue device with good CPU utilization.  Optimal settings for the CPU
> mask seem to depend on the architecture and cache hierarchy.  Below are some
> results from running 500 instances of the netperf TCP_RR test with 1-byte
> requests and responses.  Results show cumulative transaction rate and system
> CPU utilization.
> 
> e1000e on 8 core Intel
>    Without RPS: 90K tps at 33% CPU
>    With RPS:    239K tps at 60% CPU
> 
> forcedeth on 16 core AMD
>    Without RPS: 103K tps at 15% CPU
>    With RPS:    285K tps at 49% CPU
> 
> Caveats:
> - The benefits of this patch are dependent on architecture and cache
> hierarchy.  Tuning the masks to get the best performance is probably
> necessary.
> - This patch adds overhead in the path for processing a single packet.  In
> a lightly loaded server this overhead may eliminate the advantages of
> increased parallelism, and possibly cause some relative performance
> degradation.  We have found that RPS masks that are cache aware (share the
> same caches with the interrupting CPU) mitigate much of this.
> - The RPS masks can be changed dynamically; however, whenever a mask is
> changed this introduces the possibility of generating out-of-order packets.
> It's probably best not to change the masks too frequently.
> 
> Signed-off-by: Tom Herbert <therbert@...gle.com>


> 
> +/*
> + * net_rps_action sends any pending IPI's for rps.  This is only called
> + * from softirq and interrupts must be enabled.
> + */
> +static void net_rps_action(void)
> +{
> +    int cpu;
> +
> +    /* Send pending IPI's to kick RPS processing on remote cpus. */
> +    for_each_cpu_mask_nr(cpu, __get_cpu_var(rps_remote_softirq_cpus)) {
> +        struct softnet_data *queue = &per_cpu(softnet_data, cpu);
> +        cpu_clear(cpu, __get_cpu_var(rps_remote_softirq_cpus));
> +        if (cpu_online(cpu))
> +            __smp_call_function_single(cpu, &queue->csd, 0);
> +    }
> +}
> 

So we have this last bit that might have a reentrancy problem...

Do you plan a followup patch to copy rps_remote_softirq_cpus into a local
variable before enabling interrupts and calling net_rps_action()?

	cpumask_t rps_copy;

	/* snapshot and clear the per-cpu mask while interrupts are still disabled */
	rps_copy = __get_cpu_var(rps_remote_softirq_cpus);
	cpus_clear(__get_cpu_var(rps_remote_softirq_cpus));
	local_irq_enable();
	net_rps_action(&rps_copy);
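
net_rps_action() would then only walk the private copy, something like this
sketch (untested, only to illustrate the idea against the quoted code):

static void net_rps_action(cpumask_t *mask)
{
	int cpu;

	/*
	 * Sketch, not from the patch: the caller snapshotted and cleared
	 * rps_remote_softirq_cpus with interrupts still disabled, so
	 * iterating the private copy here cannot race with new bits being
	 * set once interrupts are re-enabled.
	 */
	for_each_cpu_mask_nr(cpu, *mask) {
		struct softnet_data *queue = &per_cpu(softnet_data, cpu);

		if (cpu_online(cpu))
			__smp_call_function_single(cpu, &queue->csd, 0);
	}
}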