Message-ID: <1386781481.6066.55.camel@tursulin-linux.isw.intel.com>
Date: Wed, 11 Dec 2013 17:04:41 +0000
From: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
To: Alexander Viro <viro@...iv.linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Potentially unbounded allocations in seq_read?
Hi all,
It seems that the buffer allocation in seq_read can double in size
indefinitely; at least I've seen that in practice with /proc/<pid>/smaps
(which attempted to double m->size to 4M on a read of 1000 bytes). This
produces an ugly WARN_ON_ONCE, which should perhaps be avoided, given
that it can be triggered by userspace at will?
From the top comment in seq_file.c one would think that it is a
fundamental limitation of the current code that everything which will be
read (even if in chunks) needs to be in the kernel-side buffer at the
same time?
If that is true, then the only way to fix it would be to completely
re-design the seq_file interface; silencing the allocation failure with
__GFP_NOWARN could perhaps serve as a temporary measure.
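For the temporary measure, I mean something along these lines on the
doubling reallocation path (a sketch from my reading of fs/seq_file.c;
the exact surrounding context and label name may differ):

	kfree(m->buf);
	m->buf = kmalloc(m->size <<= 1, GFP_KERNEL | __GFP_NOWARN);
	if (!m->buf)
		goto Enomem;

That keeps the -ENOMEM behaviour for the caller but drops the
userspace-triggerable warning splat.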
As an alternative, since it does sound a bit pathological, perhaps
seq_file users which know they can end up printing such huge amounts of
text should just use a different (new?) facility?
Thanks,
Tvrtko
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/