Message-ID: <48671433.1060409@linux.vnet.ibm.com>
Date: Sun, 29 Jun 2008 10:18:51 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Paul Menage <menage@...gle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
YAMAMOTO Takashi <yamamoto@...inux.co.jp>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [RFC 5/5] Memory controller soft limit reclaim on contention
Paul Menage wrote:
> On Fri, Jun 27, 2008 at 8:19 AM, Balbir Singh <balbir@...ux.vnet.ibm.com> wrote:
>> +/*
>> + * Create a heap of memory controller structures. The heap is reverse
>> + * sorted by size. This heap is used for implementing soft limits. Our
>> + * current heap implementation does not allow dynamic heap updates, but
>> + * eventually, the costliest controller (over it's soft limit should
>
> it's -> its
>
>> + old_mem = heap_insert(&mem_cgroup_heap, mem,
>> + HEAP_REP_LEAF);
>> + mem->on_heap = 1;
>> + if (old_mem)
>> + old_mem->on_heap = 0;
>
> Maybe a comment here that mem might == old_mem?
>
>> + * When the soft limit is exceeded, look through the heap and start
>> + * reclaiming from all groups over thier soft limit
>
> thier -> their
>
>> + if (!res_counter_check_under_soft_limit(&mem->res)) {
>> + /*
>> + * The current task might already be over it's soft
>> + * limit and trying to aggressively grow. We check to
>> + * see if it the memory group associated with the
>> + * current task is on the heap when the current group
>> + * is over it's soft limit. If not, we add it
>> + */
>> + if (!mem->on_heap) {
>> + struct mem_cgroup *old_mem;
>> +
>> + old_mem = heap_insert(&mem_cgroup_heap, mem,
>> + HEAP_REP_LEAF);
>> + mem->on_heap = 1;
>> + if (old_mem)
>> + old_mem->on_heap = 0;
>> + }
>> + }
>
> This and the other similar code for adding to the heap should be
> refactored into a separate function.
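Agreed, that duplication is easy to factor out. A sketch of such a helper, written as a self-contained user-space program with simplified stand-ins for the kernel types and for heap_insert() (the names mem_cgroup_heap_add and the toy heap below are illustrative, not the actual patch):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct mem_cgroup. */
struct mem_cgroup {
	int on_heap;
};

#define HEAP_CAP 4

static struct mem_cgroup *heap_slots[HEAP_CAP];
static int heap_size;

/*
 * Toy stand-in for heap_insert(): returns the element evicted to
 * make room, which may be the element just inserted -- mirroring
 * the case Paul notes where old_mem can equal mem.
 */
static struct mem_cgroup *heap_insert(struct mem_cgroup *mem)
{
	if (heap_size < HEAP_CAP) {
		heap_slots[heap_size++] = mem;
		return NULL;
	}
	return mem;	/* heap full: the new element "evicts" itself */
}

/*
 * The factored-out helper: insert mem and keep the on_heap flags
 * consistent.  Because old_mem may equal mem, the flag correctly
 * ends up 0 when the insertion is rejected.
 */
static void mem_cgroup_heap_add(struct mem_cgroup *mem)
{
	struct mem_cgroup *old_mem = heap_insert(mem);

	mem->on_heap = 1;
	if (old_mem)
		old_mem->on_heap = 0;
}
```

Both call sites in the patch would then collapse to a single mem_cgroup_heap_add(mem) call.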
>
>> +static int mem_cgroup_compare_soft_limits(void *p1, void *p2)
>> +{
>> + struct mem_cgroup *mem1 = (struct mem_cgroup *)p1;
>> + struct mem_cgroup *mem2 = (struct mem_cgroup *)p2;
>> + unsigned long long delta1, delta2;
>> +
>> + delta1 = res_counter_soft_limit_delta(&mem1->res);
>> + delta2 = res_counter_soft_limit_delta(&mem2->res);
>> +
>> + return delta1 > delta2;
>> +}
>
> This isn't a valid comparator, since it isn't a constant function of
> its two input pointers - calling mem_cgroup_compare_soft_limits(m1,
> m2) can give different results at different times. So your heap
> invariant will become invalid over time.
>
> I think if you want to do this, you're going to need to periodically
> take a snapshot of each cgroup's excess and use that snapshot in the
> comparator; whenever you update the snapshots, you'll need to restore
> the heap invariant.
I see your point. Keeping snapshots sounds OK, but updating the heap every time a
delta changes is expensive, since finding an arbitrary node in the heap is hard.
If we could locate nodes cheaply, we could call heap_adjust whenever the delta
changes and keep the heap correctly formed. I wonder if keeping two snapshots
would help: one (call it old_snapshot) for use by the ->gt callback, switching
over to the new snapshot when we reinsert an element after it has been deleted
from the heap. Thinking about it further, snapshotting should work, provided we
take the snapshot at insertion time only and refresh it when an element is
deleted and re-inserted. That way the heap invariant is never broken.
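To make the idea concrete, here is a user-space sketch (simplified struct and stand-in helpers, not the actual patch; res_counter_soft_limit_delta() is replaced by a stored snapshot field):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Stand-in for struct mem_cgroup: usage may change at any time,
 * but the comparator only ever reads the excess frozen at insert
 * time, so the heap invariant stays valid between insertions.
 */
struct mem_cgroup {
	unsigned long long usage;	/* live value, changes under us */
	unsigned long long soft_limit;
	unsigned long long snapshot;	/* excess frozen at insert time */
};

/* A valid comparator: a constant function of its two inputs. */
static int mem_cgroup_compare_soft_limits(struct mem_cgroup *m1,
					  struct mem_cgroup *m2)
{
	return m1->snapshot > m2->snapshot;
}

/* The snapshot is taken at insertion time only. */
static void mem_cgroup_heap_insert(struct mem_cgroup *mem)
{
	mem->snapshot = mem->usage > mem->soft_limit ?
			mem->usage - mem->soft_limit : 0;
	/* ... real code would push mem onto the heap here ... */
}

/* Delete + reinsert is the only point where the snapshot moves. */
static void mem_cgroup_heap_reinsert(struct mem_cgroup *mem)
{
	/* ... real code would first remove mem from the heap ... */
	mem_cgroup_heap_insert(mem);	/* refreshes the snapshot */
}
```

Between a delete and the matching reinsert the comparator's answers cannot change, which is exactly the property the heap needs.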
--
Warm Regards,
Balbir Singh
Linux Technology Center
IBM, ISTL
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/