Date:	Mon, 14 Sep 2015 15:45:05 +0530
From:	Parav Pandit <pandit.parav@...il.com>
To:	"Hefty, Sean" <sean.hefty@...el.com>
Cc:	Tejun Heo <tj@...nel.org>, Doug Ledford <dledford@...hat.com>,
	"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
	"lizefan@...wei.com" <lizefan@...wei.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Jonathan Corbet <corbet@....net>,
	"james.l.morris@...cle.com" <james.l.morris@...cle.com>,
	"serge@...lyn.com" <serge@...lyn.com>,
	Haggai Eran <haggaie@...lanox.com>,
	Or Gerlitz <ogerlitz@...lanox.com>,
	Matan Barak <matanb@...lanox.com>,
	"raindel@...lanox.com" <raindel@...lanox.com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"linux-security-module@...r.kernel.org" 
	<linux-security-module@...r.kernel.org>
Subject: Re: [PATCH 0/7] devcg: device cgroup extension for rdma resource

On Sat, Sep 12, 2015 at 12:52 AM, Hefty, Sean <sean.hefty@...el.com> wrote:
>> So, the existence of resource limitations is fine.  That's what we
>> deal with all the time.  The problem usually with this sort of
>> interfaces which expose implementation details to users directly is
>> that it severely limits engineering maneuvering space.  You usually
>> want your users to express their intentions and a mechanism to
>> arbitrate resources to satisfy those intentions (and in a way more
>> graceful than "we can't, maybe try later?"); otherwise, implementing
>> any sort of high level resource distribution scheme becomes painful
>> and usually the only thing possible is preventing runaway disasters -
>> you don't wanna pin unused resource permanently if there actually is
>> contention around it, so usually all you can do with hard limits is
>> overcommitting limits so that it at least prevents disasters.
>
> I agree with Tejun that this proposal is at the wrong level of abstraction.
>
> If you look at just trying to limit QPs, it's not clear what that attempts to accomplish.  Conceptually, a QP is little more than an addressable endpoint.  It may or may not map to HW resources (for Intel NICs it does not).  Even when HW resources do back the QP, the hardware is limited by how many QPs can realistically be active at any one time, based on how much caching is available in the NIC.
>

cgroups as it stands today provides effective controls over already-defined
resources such as CPU cycles, memory in user and kernel space, TCP bytes,
IOPS, etc.
Similarly, the RDMA programming model defines its own set of resources,
which applications allocate and access directly.
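
For illustration, this is what consuming those resources looks like from
the application side -- a minimal libibverbs sketch (error handling
trimmed for brevity) allocating the objects the IB spec defines:
a protection domain, a completion queue, and a queue pair.

/* Minimal libibverbs sketch: the RDMA resources an application
 * allocates directly.  Error handling trimmed for brevity. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
	int num;
	struct ibv_device **devs = ibv_get_device_list(&num);
	if (!devs || num == 0)
		return 1;

	struct ibv_context *ctx = ibv_open_device(devs[0]);
	struct ibv_pd *pd = ibv_alloc_pd(ctx);              /* protection domain */
	struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0); /* completion queue */

	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.cap     = { .max_send_wr = 16, .max_recv_wr = 16,
			     .max_send_sge = 1, .max_recv_sge = 1 },
		.qp_type = IBV_QPT_RC,
	};
	struct ibv_qp *qp = ibv_create_qp(pd, &attr);       /* queue pair */
	printf("allocated QP number 0x%x\n", qp->qp_num);

	ibv_destroy_qp(qp);
	ibv_destroy_cq(cq);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	return 0;
}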

What we are debating here is whether RDMA exposing hardware resources is
correct in the first place, and consequently whether a cgroup controller
is needed.
There are two points here.
1. Whether the RDMA programming model, which operates on the resources
defined by the IB spec, is the right one.
2. Assuming the programming model is fine (we have an actively maintained
IB stack in the kernel and adoption of the user-space components in the
OS), whether we need to control those resources via cgroup.
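
Purely to make point 2 concrete: below is a self-contained userspace
model (not this patchset's actual API -- names and fields are
illustrative) of the hierarchical charge/uncharge pattern such a
controller would follow.  Each allocation is charged against every
ancestor's limit and rolled back on failure.

/* Illustrative model, not the patch's real interface: hierarchical
 * charge/uncharge of a QP count against per-group limits. */
#include <stdio.h>
#include <stdbool.h>

struct rdma_cg {
	struct rdma_cg *parent;
	int qp_limit;   /* max QPs this group may hold */
	int qp_usage;   /* QPs currently charged */
};

static bool try_charge_qp(struct rdma_cg *cg)
{
	for (struct rdma_cg *p = cg; p; p = p->parent) {
		if (p->qp_usage >= p->qp_limit) {
			/* roll back the groups already charged */
			for (struct rdma_cg *q = cg; q != p; q = q->parent)
				q->qp_usage--;
			return false;
		}
		p->qp_usage++;
	}
	return true;
}

static void uncharge_qp(struct rdma_cg *cg)
{
	for (struct rdma_cg *p = cg; p; p = p->parent)
		p->qp_usage--;
}

int main(void)
{
	struct rdma_cg root  = { .parent = NULL,  .qp_limit = 2 };
	struct rdma_cg child = { .parent = &root, .qp_limit = 5 };

	bool a = try_charge_qp(&child);
	bool b = try_charge_qp(&child);
	bool c = try_charge_qp(&child);  /* hits the root's limit */
	printf("%d %d %d\n", a, b, c);   /* prints: 1 1 0 */
	uncharge_qp(&child);
	return 0;
}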

Tejun is saying that because point 1 doesn't seem to be the right way to
solve the problem, point 2 should either not be done or be done at a
different level of abstraction.
More questions/comments are in the thread with Jason and Sean.

Sean,
Even though there is no one-to-one mapping of verb-QP to hw-QP, for the
driver or lower layer to map verb-QPs to hw-QPs effectively, that
vendor-specific layer needs to know how the QPs are going to be used.
Otherwise, two applications contending for QPs may not get the right
number of hw-QPs to use.  A toy model of that contention is sketched
below.
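
(Illustrative model only, not any vendor's actual driver: verb-level QPs
multiplexed onto a small pool of hardware QP contexts.  When the pool is
exhausted, the least recently used context is evicted -- which is where
contending applications can starve each other if the lower layer has no
usage information.)

#include <stdio.h>

#define HW_SLOTS 2

static int slot_owner[HW_SLOTS];         /* verb-QP id bound to slot, 0 = free */
static unsigned long slot_stamp[HW_SLOTS];
static unsigned long now;

static int bind_hw_slot(int verb_qp)
{
	int victim = 0;
	for (int i = 0; i < HW_SLOTS; i++) {
		if (slot_owner[i] == verb_qp || slot_owner[i] == 0) {
			victim = i;
			goto out;
		}
		if (slot_stamp[i] < slot_stamp[victim])
			victim = i;
	}
	printf("evicting verb-QP %d from hw slot %d\n",
	       slot_owner[victim], victim);
out:
	slot_owner[victim] = verb_qp;
	slot_stamp[victim] = ++now;
	return victim;
}

int main(void)
{
	/* three verb QPs contending for two hw contexts */
	bind_hw_slot(1);
	bind_hw_slot(2);
	bind_hw_slot(3);   /* evicts QP 1 */
	bind_hw_slot(1);   /* evicts QP 2 */
	return 0;
}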

> Trying to limit the number of QPs that an app can allocate, therefore, just limits how much of the address space an app can use.  There's no clear link between QP limits and HW resource limits, unless you assume a very specific underlying implementation.
>
> - Sean
