Message-ID: <6599ad830907022116n7a711c7fs52ff9b400ec8797f@mail.gmail.com>
Date: Thu, 2 Jul 2009 21:16:15 -0700
From: Paul Menage <menage@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Benjamin Blum <bblum@...gle.com>, lizf@...fujitzu.com,
serue@...ibm.com, containers@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] Adds a read-only "procs" file similar to "tasks" that shows only unique tgids
On Thu, Jul 2, 2009 at 7:08 PM, Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> Why are we doing all this anyway? To avoid presenting duplicated pids
> to userspace? Nothing else?
To present the pids or tgids in sorted order. Removing duplicates is
only for the case of the "procs" file; that could certainly be left to
userspace, but it wouldn't by itself remove the existing requirement
for a contiguous array.
The seq_file iterator for these files relies on them being sorted so
that it can pick up where it left off even if the pid set changes
between reads: it does a binary search to find the first pid greater
than the last one that was returned. This guarantees that we return
every pid that was in the cgroup before the scan started and remained
in the cgroup until after the scan finished; there are no guarantees
about pids that enter/leave the cgroup during the scan.
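The resume logic described above can be sketched roughly as follows; this is an illustrative userspace model, not the actual kernel code, and the function name is made up:

```c
#include <stddef.h>

/* Given a sorted pid array, find the index of the first pid strictly
 * greater than the last pid already returned, so a reader can resume
 * even if the array was rebuilt between reads. Hypothetical sketch. */
static size_t resume_index(const int *pids, size_t count, int last_pid)
{
	size_t lo = 0, hi = count;	/* search the half-open range [lo, hi) */

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (pids[mid] <= last_pid)
			lo = mid + 1;	/* mid was already returned; look right */
		else
			hi = mid;	/* mid not yet returned; look left */
	}
	return lo;	/* == count when every remaining pid was already returned */
}
```

Note that a pid which leaves and re-enters the set between reads may be skipped or repeated, matching the weaker guarantee described above.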
> Or we can do it the other way? Create an initially-empty local IDR
> tree or radix tree and, within that, mark off any pids which we've
> already emitted? That'll have a worst-case memory consumption of
> approximately PID_MAX_LIMIT bits -- presently that's half a megabyte.
> With no large allocations needed?
>
But that would be half a megabyte per open fd? That's a lot of memory
that userspace can pin down by opening fds. The reason for the current
pid array approach is to ensure that there's only ever one pid array
allocated at a time per cgroup, rather than one per open fd.
There's actually a structure already for doing that - cgroup_scanner,
which uses a high-watermark and a priority heap to provide a similar
guarantee, with a constant memory overhead (typically one page). But
it can take O(n^2) time to scan a large cgroup, as would, I suspect,
using an IDR, so it's only used for cases where we really can't avoid
it for locking reasons. I'd rather have something that accumulates
unsorted pids in page-size chunks as we iterate through the cgroup,
and then sorts them, as Lai Jiangshan's patch did.
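The "accumulate in page-size chunks, then sort" idea might look something like this userspace sketch; all names and sizes here are illustrative assumptions, not code from the patch:

```c
#include <stdlib.h>
#include <string.h>

#define CHUNK_PIDS 1024		/* roughly one 4K page of 4-byte pids */

/* A singly-linked list of fixed-size chunks, standing in for
 * page-sized kernel allocations. */
struct pid_chunk {
	struct pid_chunk *next;
	size_t used;
	int pids[CHUNK_PIDS];
};

static int cmp_pid(const void *a, const void *b)
{
	int x = *(const int *)a, y = *(const int *)b;

	return (x > y) - (x < y);
}

/* Append one pid, allocating a fresh chunk when the head is full. */
static int chunk_add(struct pid_chunk **head, int pid)
{
	struct pid_chunk *c = *head;

	if (!c || c->used == CHUNK_PIDS) {
		c = calloc(1, sizeof(*c));
		if (!c)
			return -1;
		c->next = *head;
		*head = c;
	}
	c->pids[c->used++] = pid;
	return 0;
}

/* Flatten every chunk into one array and sort it once at the end;
 * returns the total pid count. */
static size_t chunk_collect(const struct pid_chunk *head, int *out)
{
	size_t n = 0;

	for (const struct pid_chunk *c = head; c; c = c->next) {
		memcpy(out + n, c->pids, c->used * sizeof(int));
		n += c->used;
	}
	qsort(out, n, sizeof(int), cmp_pid);
	return n;
}
```

The point of the chunked shape is that each allocation stays small (no large contiguous array is needed while iterating), and the O(n log n) sort happens only once after the walk.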
>
> btw, did pidlist_uniq() actually needs to allocate new memory for the
> output array? Could it have done the filtering in-place?
Yes - or we could just omit duplicates in the seq_file iterator, I guess.
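In-place filtering on a sorted array is straightforward; a minimal sketch of what pidlist_uniq() could do without a second allocation (function name and shape are illustrative, not the actual kernel code):

```c
#include <stddef.h>

/* Compact a sorted pid array in place, dropping adjacent duplicates.
 * Returns the new length; entries past it are stale. Hypothetical
 * sketch of an allocation-free pidlist_uniq(). */
static size_t uniq_inplace(int *pids, size_t count)
{
	size_t out = 0;

	for (size_t i = 0; i < count; i++)
		if (out == 0 || pids[out - 1] != pids[i])
			pids[out++] = pids[i];
	return out;
}
```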
Paul