Message-ID: <20160405144608.GD7822@mtj.duckdns.org>
Date: Tue, 5 Apr 2016 10:46:08 -0400
From: Tejun Heo <tj@...nel.org>
To: Parav Pandit <pandit.parav@...il.com>
Cc: cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-rdma@...r.kernel.org,
lizefan@...wei.com, Johannes Weiner <hannes@...xchg.org>,
Doug Ledford <dledford@...hat.com>,
Liran Liss <liranl@...lanox.com>,
"Hefty, Sean" <sean.hefty@...el.com>,
Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
Haggai Eran <haggaie@...lanox.com>,
Jonathan Corbet <corbet@....net>, james.l.morris@...cle.com,
serge@...lyn.com, Or Gerlitz <ogerlitz@...lanox.com>,
Matan Barak <matanb@...lanox.com>, akpm@...ux-foundation.org,
linux-security-module@...r.kernel.org
Subject: Re: [PATCHv10 1/3] rdmacg: Added rdma cgroup controller
Hello, Parav.
On Tue, Apr 05, 2016 at 07:25:11AM -0700, Parav Pandit wrote:
> > As it is a fairly isolated domain, to certain extent, it could be okay
> > to let it go. At the same time, I know that these enterprise things
> > tend to go completely wayward and am worried about individual drivers
> > going crazy with custom attributes in a non-sensical way.
>
> If it's crazy at the driver level, I am sure it will be equally crazy
> for any end user too. Therefore, no user would use those crazy
> resources either.
So, the above paragraph simply isn't true. It's not difficult to
twist things a bit so that they work in a hackish and often horrible
way, and we know this happens quite a bit, especially in enterprise
settings.
> The intent is certainly not for the individual drivers, as we agreed
> in the past.
You and I agreeing to that doesn't really matter all that much.
> IB stack maintainers would be reviewing new enum additions anyway,
> whether it's a verb or a hw resource (unlikely).
Well, if the additions are unlikely...
> > In the original review message, I mentioned creating an interface
> > which creates the hierarchy of objects as necessary and returns the
> > target pool with the lock held; can you please give it a shot?
> > Something like the following.
>
> o.k. I will attempt it. Looks doable.
> I am on travel, so it will take a few more days for me to turn
> around the below approach with tested code.
So, if you go with a single mutex, you don't really need get_and_lock
semantics; you can just call the get function with the mutex held.
Please do whichever looks best.
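
Just to illustrate the shape -- the following is a rough sketch, not
code from the patch: every identifier in it (rdmacg_mutex, rpool_list,
get_cg_rpool() and friends) is made up here, the structs are stubs,
and it skips the ancestor walk entirely.

#include <linux/err.h>
#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/slab.h>

/* one mutex protecting the whole pool hierarchy */
static DEFINE_MUTEX(rdmacg_mutex);
static LIST_HEAD(rpool_list);

struct rdma_cgroup;
struct rdmacg_device;

struct rdmacg_resource_pool {
        struct rdma_cgroup      *cg;
        struct rdmacg_device    *device;
        struct list_head        node;
};

static struct rdmacg_resource_pool *
find_rpool(struct rdma_cgroup *cg, struct rdmacg_device *dev)
{
        struct rdmacg_resource_pool *rpool;

        list_for_each_entry(rpool, &rpool_list, node)
                if (rpool->cg == cg && rpool->device == dev)
                        return rpool;
        return NULL;
}

/*
 * Look up the pool for @cg/@dev, creating it if missing.  With a
 * single mutex there's no need for get_and_lock semantics -- the
 * caller simply holds rdmacg_mutex across the call.
 */
static struct rdmacg_resource_pool *
get_cg_rpool(struct rdma_cgroup *cg, struct rdmacg_device *dev)
{
        struct rdmacg_resource_pool *rpool;

        lockdep_assert_held(&rdmacg_mutex);

        rpool = find_rpool(cg, dev);
        if (rpool)
                return rpool;

        rpool = kzalloc(sizeof(*rpool), GFP_KERNEL);
        if (!rpool)
                return ERR_PTR(-ENOMEM);

        rpool->cg = cg;
        rpool->device = dev;
        list_add_tail(&rpool->node, &rpool_list);
        return rpool;
}

/* example charge path: plain get under the held mutex */
static int rdmacg_charge_example(struct rdma_cgroup *cg,
                                 struct rdmacg_device *dev)
{
        struct rdmacg_resource_pool *rpool;
        int ret = 0;

        mutex_lock(&rdmacg_mutex);
        rpool = get_cg_rpool(cg, dev);
        if (IS_ERR(rpool))
                ret = PTR_ERR(rpool);
        /* ... charge against rpool here ... */
        mutex_unlock(&rdmacg_mutex);
        return ret;
}

The point is simply that with one mutex the locking rule collapses to
"hold rdmacg_mutex across the call", so the get helper stays trivial.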
Thanks.
--
tejun