Message-ID: <20160228150251.GC6989@atrey.karlin.mff.cuni.cz>
Date: Sun, 28 Feb 2016 16:02:51 +0100
From: Pavel Machek <pavel@....cz>
To: Shaohua Li <shli@...com>
Cc: linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
axboe@...nel.dk, tj@...nel.org, Vivek Goyal <vgoyal@...hat.com>,
"jmoyer @ redhat . com" <jmoyer@...hat.com>, Kernel-team@...com
Subject: Re: [PATCH V2 00/13] block-throttle: proportional throttle
Hi!
> The problem is we don't know the max bandwidth a disk can provide for a
> specific workload, which depends on the device and IO pattern. The
> bandwidth estimated by patch 1 will never be accurate unless the disk is
> already at max bandwidth. To solve this issue, we always over-estimate
> the bandwidth. With an over-estimated bandwidth, the workload dispatches
> more IO, the estimated bandwidth becomes higher, and it dispatches even
> more IO. The loop runs until we enter a stable state, in which the disk
> delivers max bandwidth. This 'slightly adjust and run into a stable
> state' is the core algorithm the patch series uses. We also use it to
> detect inactive cgroups.
Ok, so you want to reach a steady state, but what if the workload varies
a lot?
Let's say random writes for ten minutes, then a linear write.
Will the linear write be severely throttled because of the previous
seeks?
Can a task get bigger bandwidth by doing some additional (useless)
work?
Like "I do bigger reads in the random read phase, so that I'm not
throttled that badly when I do the linear read"?
Pavel