Date:	Tue, 17 Jul 2007 22:26:35 -0700
From:	"Sean Hefty" <sean.hefty@...el.com>
To:	"'Roland Dreier'" <rdreier@...co.com>,
	"Or Gerlitz" <or.gerlitz@...il.com>
Cc:	<linux-kernel@...r.kernel.org>, <general@...ts.openfabrics.org>
Subject: RE: [ofa-general] Further 2.6.23 merge plans...

>I think this is an important question.  If we merge the local SA
>stuff, then are we creating a problem for dealing with QoS?

Yes - I do believe that merging PR caching and QoS will be difficult.  I don't
think the problems are insurmountable, but I can't say that I have a definite
solution for how to deal with them yet.

My current thinking is that the purpose of the cache is to increase SA
scalability on large clusters.  We've seen issues running MPI, trying to
establish all-to-all connections, on our 256 node cluster.  (With 4 processes
per node, this results in over 500,000 PR queries hitting the SA.)  The SA was
swamped with that work alone, and it wasn't even trying to enforce QoS
requirements across the cluster.

I just don't see how an SA that is already having trouble scaling to this number
of nodes will be able to take on the additional task of providing QoS across the
cluster.  It may be that, at least initially, an administrator will need to
choose between enabling PR caching and enabling QoS.

>Are we going to have to revert the local SA stuff once the QoS stuff is
>available?

In the best case, the local SA will need enhancements added to the base support.
In the worst case, a user would have to choose between QoS and PR caching.  If
all users choose QoS, then it would make sense to remove the local SA.

>Or is there at least a sketch of a plan on how to handle this?

This is only a rough idea, and it depends on how QoS is implemented.  The idea
is to create a local QoS module on each node.  The local QoS modules would be
programmed with basic QoS information - for example, which types of queries to
handle locally versus which ones to forward to the SA.  Locally handled queries
would return PRs based on some QoS mapping table.  (I haven't looked into any
details of this.)
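
Very roughly - and the types and names below are purely hypothetical, not
existing kernel or OFA interfaces - the local decision could look something
like this:

#include <stdint.h>

#define MAX_QOS_ENTRIES 64

/* Hypothetical sketch only - not an existing interface. */
struct qos_map_entry {
        uint64_t service_id;    /* match key for the query */
        uint8_t  sl;            /* QoS values to stamp into a locally built PR */
        uint8_t  mtu;
        uint8_t  rate;
};

enum qos_action { QOS_ANSWER_LOCALLY, QOS_FORWARD_TO_SA };

/* Basic policy programmed by an administrator or a QoS master. */
static struct qos_map_entry qos_map[MAX_QOS_ENTRIES];
static int qos_map_len;

/* Decide whether a PR query can be answered from the local cache plus the
 * QoS mapping table, or must be forwarded to the SA.
 */
static enum qos_action qos_classify(uint64_t service_id,
                                    struct qos_map_entry **entry)
{
        int i;

        for (i = 0; i < qos_map_len; i++) {
                if (qos_map[i].service_id == service_id) {
                        *entry = &qos_map[i];
                        return QOS_ANSWER_LOCALLY;
                }
        }

        /* No local policy for this query type - let the SA handle it. */
        return QOS_FORWARD_TO_SA;
}

The classification itself is trivial; the real work would be in building the
PR from the cache and in how qos_map gets programmed in the first place.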

Ideally, local QoS modules would be programmed by a QoS master.  This would
require a new vendor-specific protocol, but would allow for a simple distributed
QoS manager.
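
As a strawman for what the master might push down (again, a made-up structure
using the same <stdint.h> types as above, not from any spec):

/* Strawman only - a vendor-specific record the QoS master could send to
 * each local QoS module; nothing here comes from a released spec.
 */
struct qos_program_msg {
        uint64_t service_id;     /* which flows this rule applies to */
        uint16_t pkey;
        uint8_t  sl;             /* QoS values to apply to local PRs */
        uint8_t  mtu;
        uint8_t  rate;
        uint8_t  handle_locally; /* 0 = always forward to the SA */
};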

We will have a better idea of the issues and possible solutions once the QoS
spec is released and we can hold discussions on it.  I will be working out more
details on QoS enhancements starting in the next couple of weeks.

- Sean
