Message-ID: <20081120074803.GC26308@kernel.dk>
Date:	Thu, 20 Nov 2008 08:48:03 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	mtk.manpages@...il.com
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-man@...r.kernel.org
Subject: Re: CLONE_IO documentation

On Wed, Nov 19 2008, Michael Kerrisk wrote:
> Hi Jens,
> 
> Following up after a long time on this:
> 
> On Mon, Apr 14, 2008 at 12:13 PM, Jens Axboe <jens.axboe@...cle.com> wrote:
> > On Mon, Apr 14 2008, Michael Kerrisk wrote:
> >> Hi Jens,
> >>
> >> Could you supply some text describing CLONE_IO suitable for inclusion
> >> in the clone.2 man page?
> >> ( http://www.kernel.org/doc/man-pages/online/pages/man2/clone.2.html
> >> ).  In that text it would be helpful to explain what an "I/O context"
> >> is.
> >
> > Sure, I'll see if I can come up with something. Or perhaps you can help
> > me a bit, being the writer ;-)
> >
> > If the CLONE_IO flag is set, the new process will share the same I/O
> > context as its parent. The I/O context is the I/O scope of the disk
> > scheduler: think of it as what the I/O scheduler uses to map I/O back
> > to a process. When CLONE_IO is set, multiple processes map to the
> > same I/O context and are treated as one by the I/O scheduler, which
> > means they get to share disk time. For the anticipatory and CFQ
> > schedulers, if process A and process B share an I/O context, they
> > will be allowed to interleave their disk access. So if you have
> > several threads doing I/O on behalf of the same process (aio_read(),
> > for instance), they should set CLONE_IO to get better I/O performance
> > with CFQ and AS.
> >
> > A man page should not mention the specific schedulers; just mention
> > that it improves the information available to the kernel and the
> > performance of the app for the scenario described. In practice, it'll
> > only really apply to CFQ and AS. For deadline and noop, there will be
> > essentially no difference, as they have no concept of I/O contexts.
> 
> I took your text as a base but did some reworking, so *please check
> the following carefully*, and let me know if there are things to
> change and/or add:
> 
>        CLONE_IO (since Linux 2.4.25)
>               If  CLONE_IO  is set, then the new process shares an I/O
>               context with the calling process.  If this flag  is  not
>               set,  then (as with fork(2)) the new process has its own
>               I/O context.
> 
>               The I/O context is the I/O scope of the  disk  scheduler
>               (i.e., what the I/O scheduler uses to model scheduling of
>               a process's I/O).  If processes share the same I/O  con-
>               text,  they are treated as one by the I/O scheduler.  As
>               a consequence, they get to share disk  time.   For  some
>               I/O  schedulers,  if two processes share an I/O context,
>               they will be allowed to interleave  their  disk  access.
>               If  several  threads are doing I/O on behalf of the same
>               process (aio_read(3), for instance), they should  employ
>               CLONE_IO to get better I/O performance.
> 
>               If  the  kernel  is not configured with the CONFIG_BLOCK
>               option, this flag is a no-op.
> 
> The patch against clone.2 is below.

That looks good, but you typoed the kernel version - it should read
'since 2.6.25' :-)
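
As an aside, in case a concrete example helps: below is a rough,
untested sketch (mine, not part of Michael's patch) of the usage we're
describing - a parent clone()ing off an I/O worker with CLONE_IO so
that both tasks end up in a single I/O context. The worker function,
stack size, and file name are made up purely for illustration.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* I/O worker: sequentially reads the file named by arg. */
static int io_worker(void *arg)
{
	char buf[4096];
	int fd = open(arg, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	while (read(fd, buf, sizeof(buf)) > 0)
		;
	close(fd);
	return 0;
}

int main(int argc, char *argv[])
{
	char *stack = malloc(STACK_SIZE);
	pid_t pid;

	if (!stack) {
		perror("malloc");
		exit(EXIT_FAILURE);
	}

	/*
	 * CLONE_IO makes the child share our I/O context, so the
	 * I/O scheduler treats the two processes as one and lets
	 * them interleave their disk access. SIGCHLD lets us reap
	 * the child with waitpid() as usual.
	 */
	pid = clone(io_worker, stack + STACK_SIZE, CLONE_IO | SIGCHLD,
		    argc > 1 ? argv[1] : "/etc/fstab");
	if (pid == -1) {
		perror("clone");
		exit(EXIT_FAILURE);
	}

	waitpid(pid, NULL, 0);
	free(stack);
	return 0;
}

Drop CLONE_IO from that clone() call and the child gets its own I/O
context, so CFQ/AS will schedule the two processes separately.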

-- 
Jens Axboe
