Message-ID: <20131211180007.GX10323@ZenIV.linux.org.uk>
Date:	Wed, 11 Dec 2013 18:00:07 +0000
From:	Al Viro <viro@...IV.linux.org.uk>
To:	Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: Potentially unbounded allocations in seq_read?

On Wed, Dec 11, 2013 at 05:48:32PM +0000, Tvrtko Ursulin wrote:
> On Wed, 2013-12-11 at 17:04 +0000, Tvrtko Ursulin wrote:
> > Hi all,
> > 
> > It seems that the buffer allocation in seq_read can double in size
> > indefinitely; at least I've seen that in practice with /proc/<pid>/smaps
> > (attempting to double m->size to 4M on a read of 1000 bytes). This
> > produces an ugly WARN_ON_ONCE, which should perhaps be avoided, given
> > that it can be triggered by userspace at will?
> > 
> > From the top comment in seq_file.c one would think it is a fundamental
> > limitation of the current code that everything which will be read (even
> > if in chunks) needs to be in the kernel-side buffer at the same time?
> 
> Oh-oh, it seems that m->size is doubled on every read. So if an app is
> reading with a buffer smaller than the data available, it can do nine
> reads before it hits a >MAX_ORDER allocation. Not good. :)

Huh?  Is that from observation or from reading the code?  If it's the former,
I would really like to see details; if it's the latter... you are misreading
it.  m->size is doubled until it's large enough to hold the ->show() output;
the size argument of seq_read() has nothing to do with that.  Once the damn
thing is large enough, read() is served from it.  So are subsequent reads,
until you manage to eat all that had been generated.  Then the same buffer
is used for the next entry; again, no doubling unless that next entry is
even bigger and won't fit.  Doubling on each read(2) takes a really strange
iterator to trigger, and you'll need ->show() spewing bigger and bigger
entries.  Again - details, please...
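
For reference, the buffer-growth logic in fs/seq_file.c looks roughly like
this (a simplified sketch from memory, not a verbatim copy of the upstream
code):

	/*
	 * Sketch of the growth loop in seq_read()/traverse(); paraphrased,
	 * not verbatim fs/seq_file.c.  The buffer starts at PAGE_SIZE and is
	 * doubled only when a single ->show() call overflows it - the size
	 * passed to read(2) from userspace never enters into the decision.
	 */
	if (!m->buf)
		m->buf = kmalloc(m->size = PAGE_SIZE, GFP_KERNEL);
	...
	p = m->op->start(m, &pos);
	while (1) {
		err = m->op->show(m, p);
		if (err < 0)
			break;
		if (m->count < m->size)	/* entry fits: serve read(2) from m->buf */
			goto Fill;
		/* entry overflowed the buffer: double it and regenerate */
		m->op->stop(m, p);
		kfree(m->buf);
		m->buf = kmalloc(m->size <<= 1, GFP_KERNEL);
		if (!m->buf)
			goto Enomem;
		m->count = 0;
		p = m->op->start(m, &pos);
	}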