Date:	Thu, 2 Sep 2010 19:17:05 -0400
From:	"Loke, Chetan" <Chetan.Loke@...scout.com>
To:	"Tom Herbert" <therbert@...gle.com>
Cc:	"Stephen Hemminger" <shemminger@...tta.com>,
	"David Miller" <davem@...emloft.net>, <eric.dumazet@...il.com>,
	<netdev@...r.kernel.org>
Subject: RE: [PATCH] xps-mq: Transmit Packet Steering for multiqueue

> From: Tom Herbert [mailto:therbert@...gle.com]
> Sent: September 02, 2010 3:53 PM
> To: Loke, Chetan
> Cc: Stephen Hemminger; David Miller; eric.dumazet@...il.com;
> netdev@...r.kernel.org
> Subject: Re: [PATCH] xps-mq: Transmit Packet Steering for multiqueue
> 
> > userland folks who actually try to exploit the MQ/MSIX-ness will
> > almost always pin down their high-prio threads/processes (or a
> > subset of them).
> >
> I don't really see that.  Pinning is a last resort and in this context
> we could only do that on a dedicated server.  On a shared server, with
> many different apps, pinning for MQ/MSIX is not an easy option;
> meeting scheduler constraints will be the first priority, and it's up
> to networking to work with the scheduler to do the right thing.
> Scheduler aware networking (or vice versa) is important.
> 

For my use-case it's an appliance. Newer adapters might have 64+ h/w
queues, not just f/w-emulated queues. With that many queues you could
partition your threads/queues, no?
It's easier to get started that way. All you need is a shim (or just a
driver stub, since you can then load it on any box running an older
kernel) in the kernel that will tell you which queue-set (for a
MQ-capable adapter) is still under the high-watermark. If all are full
then it should just round-robin (across queues and nodes). So make a
syscall (or shoot a mbx-cmd, or pick your trick), find out which queue
you could use, get the binding info, and then launch your threads. Once
you narrow down the scope, the scheduler will have less work to do.
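
A rough sketch of that prep-call flow, purely illustrative: the device
node, ioctl number, and struct below are invented (no such shim exists);
only the pthread affinity calls are real API.

/* Hypothetical sketch: ask a shim for a queue still under the
 * high-watermark, then pin the long-lived worker to the CPU that
 * queue's vector is bound to. /dev/mq_shim, MQ_SHIM_PICK_QUEUE and
 * struct mq_pick are made-up names; pthread_attr_setaffinity_np()
 * is the real (GNU) API. Build with -pthread. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sched.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct mq_pick {
	unsigned int queue;	/* tx queue still under the high-watermark */
	unsigned int cpu;	/* CPU its MSI-X vector is bound to */
};
#define MQ_SHIM_PICK_QUEUE _IOR('q', 1, struct mq_pick)	/* invented */

static void *worker(void *arg)
{
	/* ... high-prio work, now co-located with its tx queue ... */
	return NULL;
}

int main(void)
{
	struct mq_pick pick;
	pthread_attr_t attr;
	cpu_set_t set;
	pthread_t tid;
	int fd;

	fd = open("/dev/mq_shim", O_RDONLY);	/* hypothetical device */
	if (fd < 0 || ioctl(fd, MQ_SHIM_PICK_QUEUE, &pick) < 0)
		exit(1);	/* no shim / all full: let it round-robin */

	/* one prep-call at startup, then bind and launch the worker */
	CPU_ZERO(&set);
	CPU_SET(pick.cpu, &set);
	pthread_attr_init(&attr);
	pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
	pthread_create(&tid, &attr, worker, NULL);
	pthread_join(tid, NULL);
	close(fd);
	return 0;
}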

If the worker threads are short-lived then there's no point in this
binding. And for long-lived tasks, a couple of initial prep-calls will
not hurt performance much. If you still care about syscalls at runtime,
you could have a dedicated mgmt-thread that receives async events from
the shim, and all other user-land logic could consult this mgmt-thread.
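
The mgmt-thread variant could be as simple as the sketch below. Again,
the event layout and the shim fd are assumptions; only the pthread and
C11 atomic pieces are standard.

/* Sketch: one mgmt-thread blocks on the (hypothetical) shim fd for
 * async watermark events and caches the current best queue; workers
 * read the cache instead of making a syscall per decision. */
#include <fcntl.h>
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

struct mq_event {		/* invented async-event layout */
	unsigned int queue;	/* queue-set back under the watermark */
	unsigned int cpu;
};

static atomic_uint best_queue;	/* consulted lock-free by workers */

static void *mgmt_thread(void *arg)
{
	int fd = *(int *)arg;	/* fd for the hypothetical /dev/mq_shim */
	struct mq_event ev;

	while (read(fd, &ev, sizeof(ev)) == (ssize_t)sizeof(ev))
		atomic_store(&best_queue, ev.queue);
	return NULL;
}

/* worker side: no syscall at runtime, just consult the cache */
static inline unsigned int pick_tx_queue(void)
{
	return atomic_load(&best_queue);
}

int main(void)
{
	pthread_t tid;
	int fd = open("/dev/mq_shim", O_RDONLY);	/* hypothetical */

	if (fd < 0)
		return 1;
	pthread_create(&tid, NULL, mgmt_thread, &fd);
	/* ... workers call pick_tx_queue() from here on ... */
	pthread_join(tid, NULL);
	return 0;
}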


> Tom
Chetan Loke
