Date:	Wed, 3 Feb 2010 17:58:46 -0800
From:	Stephen Hemminger <shemminger@...tta.com>
To:	David Miller <davem@...emloft.net>
Cc:	netdev@...r.kernel.org
Subject: Re: [RFC] NAPI as kobject proposal

On Wed, 03 Feb 2010 17:33:05 -0800 (PST)
David Miller <davem@...emloft.net> wrote:

> From: Stephen Hemminger <shemminger@...tta.com>
> Date: Fri, 29 Jan 2010 10:18:39 -0800
> 
> > As part of receive packet steering there is a requirement to add an
> > additional parameter to this for the CPU map.
> 
> Hmmm, where did this come from?
> 
> The RPS maps are per-device.
> 
> I think I vaguely recall you "suggesting" that the RPS maps become
> per-NAPI.
> 
> But, firstly, I didn't see any movement in that part of the
> discussion.
> 
> And, secondly, I don't think this makes any sense at all.
> 
> Things are already overly complicated as it is.  Having the user know
> what traffic goes to a particular RX queue (ie. NAPI instance) and set
> the RPS map in some way specific to that RX queue is over the top.
> 
> If the issue is the case of sharing a NAPI instance between two
> devices, there are a few other ways to deal with this.
> 
> One I would suggest is to simply clone the RPS map amongst the
> devices sharing a NAPI instance.
> 
> I currently see NAPI kobjects as just an over-abstraction for a
> perceived need rather than a real one.
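
For concreteness, a minimal sketch of the map cloning suggested
above, assuming a hypothetical per-device "rps_cpus" cpumask and a
"shared_napi" pointer on net_device (the real RPS structures were
still in flux at this point), with the caller holding RTNL:

#include <linux/netdevice.h>
#include <linux/cpumask.h>

/* Copy one device's RPS CPU map to every other device that shares
 * its NAPI instance.  The fields "rps_cpus" and "shared_napi" are
 * assumptions for illustration only.
 */
static void rps_clone_map(struct net_device *src)
{
	struct net_device *dev;

	ASSERT_RTNL();
	for_each_netdev(dev_net(src), dev) {
		if (dev != src && dev->shared_napi == src->shared_napi)
			cpumask_copy(dev->rps_cpus, src->rps_cpus);
	}
}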

It started with doing RPS and with not wanting to implement the
proposed sysfs interface (anything doing get_token is a misuse of sysfs).
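
The sysfs rule that a get_token scheme breaks is "one stateless
value per attribute file".  A minimal sketch of a conforming
attribute, assuming a hypothetical per-device "rps_cpus" cpumask
(the helpers cpumap_print_to_pagebuf and DEVICE_ATTR_RW postdate
this thread; none of this is the interface that was proposed):

#include <linux/device.h>
#include <linux/netdevice.h>
#include <linux/cpumask.h>

/* Return the whole map from one read(), accept a whole new map in
 * one write() -- no tokens, no state carried between calls.
 */
static ssize_t rps_cpus_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	struct net_device *ndev = to_net_dev(dev);

	return cpumap_print_to_pagebuf(false, buf, ndev->rps_cpus);
}

static ssize_t rps_cpus_store(struct device *dev,
			      struct device_attribute *attr,
			      const char *buf, size_t len)
{
	struct net_device *ndev = to_net_dev(dev);
	int err = cpumask_parse(buf, ndev->rps_cpus);

	return err ? err : len;
}
static DEVICE_ATTR_RW(rps_cpus);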

The usage models I see are:
  1. only some cores being used for receive traffic
     on single Rx devices (NAPI)
  2. only some cores being used for receive traffic
     on legacy devices (non-NAPI)
  3. being able to configure a set of CPUs sharing an
     IRQ/cache when doing Rx multi-queue: assign one MSI-X
     IRQ per core and let both hyperthreads on that core
     split the Rx traffic.

All this should be manageable by some user utility like irqbalance.

#1 and #2 argue for a per-device map (like irq_affinity), but
#3 is harder; I am not sure what the right API for that is.
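
A sketch of how a userspace tool could drive such a per-device map,
assuming it is exposed the way /proc/irq/N/smp_affinity is, as a hex
CPU bitmask written to a file (the path below is hypothetical):

#include <stdio.h>

int main(void)
{
	/* hypothetical per-device map file */
	const char *path = "/sys/class/net/eth0/rps_cpus";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%x\n", 0x3);	/* steer Rx work to CPUs 0 and 1 */
	return fclose(f) ? 1 : 0;
}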

