Date:	Thu, 2 Jul 2009 19:08:45 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Benjamin Blum <bblum@...gle.com>
Cc:	Paul Menage <menage@...gle.com>, lizf@...fujitsu.com,
	serue@...ibm.com, containers@...ts.linux-foundation.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] Adds a read-only "procs" file similar to "tasks"
 that shows only unique tgids

On Thu, 2 Jul 2009 18:17:56 -0700 Benjamin Blum <bblum@...gle.com> wrote:

> On Thu, Jul 2, 2009 at 6:08 PM, Paul Menage <menage@...gle.com> wrote:
> > On Thu, Jul 2, 2009 at 5:53 PM, Andrew Morton <akpm@...ux-foundation.org> wrote:
> >>> In the first snippet, count will be at most equal to length. As length
> >>> is determined from cgroup_task_count, it can be no greater than the
> >>> total number of pids on the system.
> >>
> >> Well that's a problem, because there can be tens or hundreds of
> >> thousands of pids, and there's a fairly low maximum size for kmalloc()s
> >> (include/linux/kmalloc_sizes.h).
> >>
> >> And even if this allocation attempt doesn't exceed KMALLOC_MAX_SIZE,
> >> large allocations are less reliable.  There is a large break point at
> >> 8*PAGE_SIZE (PAGE_ALLOC_COSTLY_ORDER).
> >
> > This has been a long-standing problem with the tasks file, ever since
> > the cpusets days.
> >
> > There are ways around it - Lai Jiangshan <laijs@...fujitsu.com> posted
> > a patch that allocated an array of pages to store pids in, with a
> > custom sorting function that let you specify indirection rather than
> > assuming everything was in one contiguous array. This was technically
> > the right approach in terms of not needing vmalloc and never doing
> > large allocations, but it was very complex; an alternative that was
> > mooted was to use kmalloc for small cgroups and vmalloc for large
> > ones, so the vmalloc penalty wouldn't be paid generally. The thread
> > fizzled AFAICS.
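
Roughly, Lai's indirection idea amounts to something like this sketch
(names invented here, not the actual patch): keep the pids in an array
of separately-allocated pages and give the sort a logical, index-based
accessor, so nothing ever needs to be contiguous:

    #include <linux/mm.h>
    #include <linux/types.h>

    #define PIDS_PER_PAGE   (PAGE_SIZE / sizeof(pid_t))

    struct paged_pids {
            pid_t **pages;  /* each entry is one page-sized chunk */
            int count;
    };

    /* logical list[idx], with no single large allocation behind it */
    static pid_t *paged_pid(struct paged_pids *pp, int idx)
    {
            return &pp->pages[idx / PIDS_PER_PAGE][idx % PIDS_PER_PAGE];
    }

A sort written against paged_pid() only needs to compare and swap
individual pid_t values, which is why the patch needed a custom sort
function rather than lib/sort.c's sort() on a flat buffer.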
> 
> As it is currently, the kmalloc call will simply fail if there are too
> many pids, correct? Do we prefer not being able to read the file in
> this case, or would we rather use vmalloc?

We'd prefer that we not use vmalloc and that the reads not fail!
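
For reference, the kmalloc-or-vmalloc fallback mooted above would look
something like this sketch (helper names invented here, not actual
cgroup code): stay on kmalloc below the costly-order threshold and fall
back to vmalloc above it, with a matching free:

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *pidlist_alloc(int count)
    {
            size_t size = count * sizeof(pid_t);

            if (size > (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
                    return vmalloc(size);
            return kmalloc(size, GFP_KERNEL);
    }

    static void pidlist_free(void *p)
    {
            if (is_vmalloc_addr(p))
                    vfree(p);
            else
                    kfree(p);
    }

That keeps small cgroups on the cheap path, but a large cgroup still
pays the vmalloc cost every time the file is read.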



Why are we doing all this anyway?  To avoid presenting duplicated pids
to userspace?  Nothing else?

If so, why not stop doing that - userspace can remove dupes (if it
cares) more easily than the kernel can?


Or we can do it the other way?  Create an initially-empty local IDR
tree or radix tree and, within that, mark off any pids which we've
already emitted?  That'll have a worst-case memory consumption of
approximately PID_MAX_LIMIT bits (presently half a megabyte), with no
large allocations needed.
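
A minimal sketch of that scheme, using the kernel's generic radix tree
(function name invented here; tree init and teardown elided):

    #include <linux/errno.h>
    #include <linux/radix-tree.h>
    #include <linux/seq_file.h>

    /* caller has done INIT_RADIX_TREE(seen, GFP_KERNEL) up front */
    static int emit_unique_tgid(struct radix_tree_root *seen,
                                pid_t tgid, struct seq_file *s)
    {
            if (radix_tree_lookup(seen, tgid))
                    return 0;               /* dupe: already emitted */

            if (radix_tree_insert(seen, tgid, (void *)1))
                    return -ENOMEM;         /* small node alloc failed */

            seq_printf(s, "%d\n", tgid);
            return 0;
    }

The tree allocates small nodes, and only for pids actually present, so
the half-megabyte figure really is a worst case rather than the usual
cost.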


btw, did pidlist_uniq() actually need to allocate new memory for the
output array?  Could it have done the filtering in-place?
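
Assuming the array has already been sorted (which the uniq pass needs
anyway), the usual in-place unique walk would do it; a sketch, not the
actual function:

    /* dedup a sorted pid array in place; returns the new length */
    static int pidlist_uniq_inplace(pid_t *list, int length)
    {
            int src, dest = 1;

            if (length < 2)
                    return length;

            for (src = 1; src < length; src++)
                    if (list[src] != list[dest - 1])
                            list[dest++] = list[src];

            return dest;
    }

No second array, a single pass, and the tail of the original buffer is
simply left unused.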
