Message-Id: <20061016195952.415f4939.pj@sgi.com>
Date: Mon, 16 Oct 2006 19:59:52 -0700
From: Paul Jackson <pj@....com>
To: sekharan@...ibm.com
Cc: greg@...ah.com, menage@...gle.com, ckrm-tech@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, matthltc@...ibm.com
Subject: Re: [ckrm-tech] [PATCH 0/5] Allow more than PAGESIZE data read in
configfs
> Quick look at the seq_file interfaces shows there is no such capability.
Perhaps a seq_file size limit could be added. Not sure if that's
a good idea or not ...
Ah - cap the count + ppos the user passed in to configfs_read_file,
before passing these values to flush_read_buffer().
If the user asks for more than is allowed, give them only what they are
allowed, by passing a smaller count to flush_read_buffer(). If they
start at a position past what's allowed, force a huge ppos and let them
see the resulting EOF. Disclaimer - I too am no seq_file expert ;).
This should be just a few more lines in configfs_read_file() on
top of your current patches adapting it to seq_file.
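Something along these lines, perhaps (an untested sketch; CONFIGFS_READ_LIMIT
is a made-up name for whatever cap gets chosen, and the locking and
fill_read_buffer() handling that configfs_read_file() already does is elided).
Returning 0 when the starting position is already past the cap is just the
simpler equivalent of forcing a huge ppos - the reader sees EOF either way:

#define CONFIGFS_READ_LIMIT	(16 * PAGE_SIZE)	/* made-up cap */

static ssize_t configfs_read_file(struct file *file, char __user *buf,
				  size_t count, loff_t *ppos)
{
	struct configfs_buffer *buffer = file->private_data;

	/* ... existing locking and fill_read_buffer() handling ... */

	if (*ppos >= CONFIGFS_READ_LIMIT)
		return 0;	/* start past the cap: reader sees EOF */
	if (count > CONFIGFS_READ_LIMIT - *ppos)
		count = CONFIGFS_READ_LIMIT - *ppos;	/* trim to the cap */

	return flush_read_buffer(buffer, buf, count, ppos);
}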
Granted - it does not get you, in one step, to the objective you
seek: a configfs suitable for displaying a long vector of
process id's, so that Resource Groups can make use of it (and maybe
someday cpusets, too).
But it gains the code reduction and reuse benefits of your patch
set, and gets this conversation unwedged, so we can go on to discuss
whether or not it would be a good idea to add a suitable vector
interface to seq_file, without threatening the excellent improvements
that sysfs/configfs have made over the old /proc style mess.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@....com> 1.925.600.0401