Message-ID: <1828884A29C6694DAF28B7E6B8A82373A903A586@ORSMSX109.amr.corp.intel.com>
Date: Fri, 11 Sep 2015 19:22:56 +0000
From: "Hefty, Sean" <sean.hefty@...el.com>
To: Tejun Heo <tj@...nel.org>, Doug Ledford <dledford@...hat.com>
CC: Parav Pandit <pandit.parav@...il.com>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"lizefan@...wei.com" <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Jonathan Corbet <corbet@....net>,
"james.l.morris@...cle.com" <james.l.morris@...cle.com>,
"serge@...lyn.com" <serge@...lyn.com>,
Haggai Eran <haggaie@...lanox.com>,
Or Gerlitz <ogerlitz@...lanox.com>,
Matan Barak <matanb@...lanox.com>,
"raindel@...lanox.com" <raindel@...lanox.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-security-module@...r.kernel.org"
<linux-security-module@...r.kernel.org>
Subject: RE: [PATCH 0/7] devcg: device cgroup extension for rdma resource

> So, the existence of resource limitations is fine. That's what we
> deal with all the time. The problem usually with this sort of
> interface, which exposes implementation details to users directly, is
> that it severely limits engineering maneuvering space. You usually
> want your users to express their intentions and a mechanism to
> arbitrate resources to satisfy those intentions (and in a way more
> graceful than "we can't, maybe try later?"); otherwise, implementing
> any sort of high level resource distribution scheme becomes painful
> and usually the only thing possible is preventing runaway disasters -
> you don't wanna pin unused resources permanently if there actually is
> contention around them, so usually all you can do with hard limits is
> overcommitting limits so that it at least prevents disasters.
I agree with Tejun that this proposal is at the wrong level of abstraction.
If you look at just trying to limit QPs, it's not clear what that limit actually accomplishes. Conceptually, a QP is little more than an addressable endpoint. It may or may not map to HW resources (for Intel NICs it does not). Even when HW resources do back the QP, the hardware limits how many QPs can realistically be active at any one time, based on how much caching is available in the NIC.
Trying to limit the number of QPs that an app can allocate, therefore, just limits how much of the QP address space an app can consume. There's no clear link between QP limits and HW resource limits, unless you assume a very specific underlying implementation.
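
To make that concrete, here's a minimal userspace sketch (not from the patch; the probe loop and the MAX_PROBE bound are purely illustrative) that just calls ibv_create_qp in a loop until it fails. A per-cgroup QP cap would only change where this loop stops; it says nothing about how many of those QPs the NIC can actually keep cached and active.

/*
 * Illustrative sketch only.  The verbs calls (ibv_open_device, ibv_alloc_pd,
 * ibv_create_cq, ibv_create_qp, ...) are the standard libibverbs API; the
 * probe loop and MAX_PROBE are hypothetical.  Whether each successful
 * ibv_create_qp consumes real HW context or just address space is entirely
 * up to the provider.
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

#define MAX_PROBE 4096	/* hypothetical upper bound for the probe loop */

int main(void)
{
	struct ibv_device **dev_list;
	struct ibv_context *ctx;
	struct ibv_pd *pd;
	struct ibv_cq *cq;
	struct ibv_qp **qps;
	struct ibv_qp_init_attr attr = {
		.qp_type = IBV_QPT_RC,
		.cap = {
			.max_send_wr  = 1,
			.max_recv_wr  = 1,
			.max_send_sge = 1,
			.max_recv_sge = 1,
		},
	};
	int i, created = 0;

	dev_list = ibv_get_device_list(NULL);
	if (!dev_list || !dev_list[0])
		return 1;

	ctx = ibv_open_device(dev_list[0]);
	if (!ctx)
		return 1;
	pd = ibv_alloc_pd(ctx);
	cq = ibv_create_cq(ctx, 1, NULL, NULL, 0);
	if (!pd || !cq)
		return 1;
	attr.send_cq = attr.recv_cq = cq;

	qps = calloc(MAX_PROBE, sizeof(*qps));

	/* Keep creating QPs until the provider (or a cgroup cap) says no. */
	for (i = 0; i < MAX_PROBE; i++) {
		qps[i] = ibv_create_qp(pd, &attr);
		if (!qps[i])
			break;
		created++;
	}
	printf("created %d QPs before ibv_create_qp failed\n", created);

	/*
	 * A count like this says nothing about how many of these QPs the
	 * NIC could keep cached and active simultaneously.
	 */
	for (i = 0; i < created; i++)
		ibv_destroy_qp(qps[i]);
	free(qps);
	ibv_destroy_cq(cq);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(dev_list);
	return 0;
}

The point being: the number where that loop stops is a property of the provider (or of an imposed cap), not a measure of the HW resources the app is really holding.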
- Sean