Message-ID: <20140530083830.GA4732@osiris>
Date: Fri, 30 May 2014 10:38:30 +0200
From: Heiko Carstens <heiko.carstens@...ibm.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Hellwig <hch@...radead.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrea Righi <andrea@...terlinux.com>,
Eric Dumazet <eric.dumazet@...il.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Hendrik Brueckner <brueckner@...ux.vnet.ibm.com>,
Thorsten Diehl <thorsten.diehl@...ibm.com>,
Ian Kent <raven@...maw.net>,
"Elliott, Robert (Server Storage)" <Elliott@...com>
Subject: Re: [PATCH 2/2] fs: proc/stat: use usual seq_file ops rather than
single_open
On Wed, May 28, 2014 at 03:37:04PM -0700, Andrew Morton wrote:
> On Wed, 28 May 2014 11:01:53 +0200 Heiko Carstens <heiko.carstens@...ibm.com> wrote:
> > With this patch it should not happen anymore that reading /proc/stat
> > fails because of a failing high order memory allocation.
>
> So this deletes the problematic allocation which [1/2] kind-of fixed,
> yes?
Yes.
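
For context, the conversion the subject line describes generally looks like
the sketch below: instead of single_open(), which renders the whole file into
one (potentially high-order) buffer, a seq_operations table emits one record
per ->show() call, so the seq_file core only ever needs a buffer big enough
for a single record. This is an illustrative sketch, not the actual patch;
the stat_* names and the per-cpu iterator are made up for the example.

#include <linux/fs.h>
#include <linux/seq_file.h>
#include <linux/proc_fs.h>
#include <linux/cpumask.h>

/* Encode cpu as pos+1 so that iterator position 0 is not NULL. */
static void *stat_seq_start(struct seq_file *m, loff_t *pos)
{
	return *pos < nr_cpu_ids ? (void *)(uintptr_t)(*pos + 1) : NULL;
}

static void *stat_seq_next(struct seq_file *m, void *v, loff_t *pos)
{
	++*pos;
	return stat_seq_start(m, pos);
}

static void stat_seq_stop(struct seq_file *m, void *v)
{
}

static int stat_seq_show(struct seq_file *m, void *v)
{
	int cpu = (uintptr_t)v - 1;

	/* The real file would print the per-cpu counters here. */
	seq_printf(m, "cpu%d ...\n", cpu);
	return 0;
}

static const struct seq_operations stat_seq_ops = {
	.start	= stat_seq_start,
	.next	= stat_seq_next,
	.stop	= stat_seq_stop,
	.show	= stat_seq_show,
};

static int stat_open(struct inode *inode, struct file *file)
{
	return seq_open(file, &stat_seq_ops);
}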
> I agree with Ian - there's a hotplugging race. And [1/2] doesn't do
> anything to address the worst-case allocation size. So I think we may
> as well do this all in a single patch.
Fine with me. However, the hotplugging race in 1/2 doesn't matter: if the
result doesn't fit into the preallocated buffer, the seq_file infrastructure
simply allocates a buffer twice as large as before and retries.
The point of patch 1/2 was to have a patch that solves the problem in almost
all cases ;), without the problems you describe below.
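
To make that grow-and-retry behaviour concrete, here is a hedged, condensed
sketch of the overflow path in the seq_file core of that era (compare
traverse() in fs/seq_file.c); the helper name grow_and_retry() is invented
for illustration:

#include <linux/seq_file.h>
#include <linux/slab.h>

/* Hypothetical helper, condensed from traverse()'s overflow handling:
 * throw away the too-small buffer, double the size, and tell the caller
 * to regenerate the whole output from scratch. */
static int grow_and_retry(struct seq_file *m)
{
	kfree(m->buf);
	m->count = 0;
	m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
	return m->buf ? -EAGAIN : -ENOMEM;
}

So a too-small preallocation only costs an extra formatting pass, not a
failed read.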
> Without having looked closely at the code I worry a bit about the
> effects. /proc/pid/stat is a complex thing and its contents will vary
It's /proc/stat, not /proc/pid/stat.
> So.. can we take this up for 3.16-rc1? See if we can get some careful
> review done then and test it for a couple of months?
Sure.
> Meanwhile, the changelog looks a bit hastily thrown together - some
> smoothing would be nice, and perhaps some work spent identifying
> possible behavioural changes. Timing changes, locking changes, effects
> of concurrent fork/exit activity etc?
Well... I'll try to come up with something better, even though I only
forward-ported an existing patch to address a memory allocation failure.
Oh oh...