Message-Id: <1218779462.5291.59.camel@sebastian.kern.oss.ntt.co.jp>
Date: Fri, 15 Aug 2008 14:51:02 +0900
From: Fernando Luis Vázquez Cao
<fernando@....ntt.co.jp>
To: Rusty Russell <rusty@...tcorp.com.au>
Cc: Jens Axboe <jens.axboe@...cle.com>, linux-kernel@...r.kernel.org,
Takuya Yoshikawa
<yoshikawa.takuya@....ntt.co.jp>, dpshah@...gle.com
Subject: Re: request->ioprio
On Thu, 2008-08-14 at 12:16 +1000, Rusty Russell wrote:
> On Wednesday 13 August 2008 17:06:03 Fernando Luis Vázquez Cao wrote:
> > Besides, I guess that accessing the io context information (such as
> > ioprio) of a request through elevator-specific private structures is not
> > something we want virtio_blk (or future users) to do.
>
> The only semantic I assumed was "higher is better". The server (ie. host) can
> really only use the information to schedule between I/Os for that particular
> guest anyway.
Does that mean you are not going to incorporate the prio class system
that is used in Linux?
Please note that the three upper bits of ioprio contain the ioprio
class. We currently have three classes encoded as follows:
IOPRIO_CLASS_RT: 1
IOPRIO_CLASS_BE: 2
IOPRIO_CLASS_IDLE: 3
As things stand now, if we passed ioprio as is to the backend driver,
requests would get priority inverted: for example, requests in the idle
class would be prioritized to the detriment of the real-time ones.
This problem could be solved in req_get_ioprio() (already in Jens'
tree), or, alternatively, we could change the enum where the ioprio
classes are defined.
What are your plans regarding prio classes?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/