Message-ID: <20161004181245.GC25323@redhat.com>
Date:   Tue, 4 Oct 2016 14:12:45 -0400
From:   Vivek Goyal <vgoyal@...hat.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     Shaohua Li <shli@...com>, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, axboe@...com, Kernel-team@...com,
        jmoyer@...hat.com
Subject: Re: [PATCH V3 00/11] block-throttle: add .high limit

On Tue, Oct 04, 2016 at 11:56:16AM -0400, Tejun Heo wrote:
> Hello, Vivek.
> 
> On Tue, Oct 04, 2016 at 09:28:05AM -0400, Vivek Goyal wrote:
> > On Mon, Oct 03, 2016 at 02:20:19PM -0700, Shaohua Li wrote:
> > > Hi,
> > > 
> > > The background is we don't have an ioscheduler for blk-mq yet, so we can't
> > > prioritize processes/cgroups.
> > 
> > So this is an interim solution till we have an ioscheduler for blk-mq?
> 
> It's a common permanent solution which applies to both !mq and mq.
> 
> > > This patch set tries to add basic arbitration
> > > between cgroups with blk-throttle. It adds a new limit io.high for
> > > blk-throttle. It's only for cgroup2.
> > > 
> > > io.max is a hard limit throttle: cgroups with a max limit never
> > > dispatch more IO than their max limit. io.high, by contrast, is a
> > > best effort throttle: cgroups with a high limit can run above their
> > > high limit at appropriate times. Specifically, if all cgroups reach
> > > their high limit, all cgroups can run above their high limit. If any
> > > cgroup runs under its high limit, all other cgroups will run
> > > according to their high limit.
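
To make sure I am reading these semantics right, here is a rough sketch
of the dispatch decision as I understand it. Everything below is made
up for illustration; none of these identifiers come from the patch set:

#include <stdbool.h>
#include <stdint.h>

struct tg {
	uint64_t bps;	/* current dispatch rate */
	uint64_t high;	/* io.high: best effort limit */
	uint64_t max;	/* io.max: hard limit */
};

static bool can_dispatch(const struct tg *tg, bool all_peers_at_high)
{
	if (tg->bps >= tg->max)
		return false;	/* io.max is never exceeded */
	if (tg->bps < tg->high)
		return true;	/* below high: always allowed */
	/*
	 * Above high: allowed only once every competing group has
	 * reached its own high limit.
	 */
	return all_peers_at_high;
}

Is that the intended behaviour?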
> > 
> > Hi Shaohua,
> > 
> > I still don't understand why we should not implement a weight based
> > proportional IO mechanism, and how this mechanism is better than
> > proportional IO.
> 
> Oh, if we actually can implement proportional IO control, it'd be
> great.  The problem is that we have no way of knowing IO cost for
> highspeed ssd devices.  CFQ gets around the problem by using the
> walltime as the measure of resource usage and scheduling time slices,
> which works fine for rotating disks but horribly for highspeed ssds.
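
So the CFQ workaround is essentially: measure usage as wall-clock
device time and hand out time slices in proportion to weight. As a
sketch (illustrative names only, not CFQ's actual code):

#include <stdint.h>

/* Each group gets device time proportional to its weight. This works
 * when the device serves one stream at a time (rotating disk) but not
 * when it needs deep parallelism to perform (highspeed ssd).
 */
static uint64_t slice_us(unsigned int weight, unsigned int total_weight,
			 uint64_t base_slice_us)
{
	return base_slice_us * (uint64_t)weight / total_weight;
}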
> 
> We can get some semblance of proportional control by just counting bw
> or iops but both break down badly as a means to measure the actual
> resource consumption depending on the workload.  While limit based
> control is more tedious to configure, it doesn't misrepresent what's
> going on and is a lot less likely to produce surprising outcomes.
> 
> We *can* try to concoct something which tries to do proportional
> control for highspeed ssds but that's gonna be quite a bit of
> complexity and I'm not so sure it'd be justifiable given that we can't
> even figure out measurement of the most basic operating unit.

Hi Tejun,

Agreed that we don't have a good basic unit to measure IO cost. I was
thinking of measuring cost in terms of sectors, as that's simple and
gets more accurate on faster devices with almost no seek penalty. And
in fact this proposal is also providing fairness in terms of bandwidth.
One extra feature seems to be this notion of a minimum bandwidth for
each cgroup: until all competing groups have met their minimum, other
cgroups can't cross their limits.

(BTW, should we call it io.min instead of io.high? That is, this is the
 minimum bandwidth a group should get before the others are allowed to
 cross their own minimum and run up to their max limit.)
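
To make the sector idea a bit more concrete, the accounting I have in
mind would be roughly along these lines (a sketch only; all names are
made up):

#include <stdint.h>

struct grp {
	uint64_t sectors;	/* sectors dispatched so far */
	unsigned int weight;	/* configured weight */
};

/*
 * A group is behind if its share of total service is smaller than its
 * share of total weight, i.e.
 *	g->sectors / total_sectors < g->weight / total_weight
 * cross-multiplied to avoid division. Groups that are behind would be
 * preferred at dispatch time.
 */
static int is_behind(const struct grp *g, uint64_t total_sectors,
		     unsigned int total_weight)
{
	return g->sectors * total_weight <
	       (uint64_t)g->weight * total_sectors;
}

The cost here scales with the amount of data moved, which at least is
device independent, even though it still ignores random vs sequential
and read vs write differences.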

> 
> > Agreed that we have issues with proportional IO and we don't have good
> > solutions for these problems. But I can't see that how this mechanism
> > will overcome these problems either.
> 
> It mostly defers the burden to the one who's configuring the limits
> and expects it to know the characteristics of the device and workloads
> and configure accordingly.  It's quite a bit more tedious to use but
> should be able to cover a good portion of use cases without being overly
> complicated.  I agree that it'd be nice to have a simple proportional
> control but as you said can't see a good solution for it at the
> moment.

Ok, so the idea is that if we can't provide something accurate in the
kernel, then we expose a very low level knob, which is harder to
configure but should work in those cases where users know their devices
and workloads very well.

> 
> > IIRC, the biggest issue with proportional IO was that a low prio group
> > might fill up the device queue with plenty of IO requests, and later,
> > when a high prio cgroup comes along, it will still experience latencies
> > anyway. The solution to that problem would probably be to get some
> > awareness into the device about the priority of requests and map
> > weights to those priorities. That way higher prio requests get
> > prioritized.
> 
> Nah, the real problem is that we can't even decide what the
> proportions should be based on.  The most fundamental part is missing.
> 
> > Or run the device at a lower queue depth. That will improve latencies
> > but might reduce overall throughput.
> 
> And that we can't do this (and thus basically operate close to
> scheduling time slices) for highspeed ssds.
> 
> > Or throttle the number of buffered writes (as Jens's writeback
> > throttling patches were doing). Buffered writes seem to be the biggest
> > culprit for increased latencies, and being able to control these
> > should help.
> 
> That's a different topic.
> 
> > An ioprio/weight based proportional IO mechanism is much more generic
> > and much easier to configure for any kind of storage. io.high is an
> > absolute limit and much harder to configure. One needs to know a lot
> > about the underlying volume/device's bandwidth (which varies a lot
> > anyway based on workload).
> 
> Yeap, no disagreement there, but it still is a workable solution.
> 
> > IMHO, we seem to be trying to cater to one specific use case with
> > this mechanism. Something ioprio/weight based would be much more
> > generic, and we should explore implementing that along with building
> > a notion of ioprio into devices. When these two work together, we
> > might be able to see good results. Just a software mechanism alone
> > might not be enough.
> 
> I don't think it's catering to specific use cases.  It is a generic
> mechanism which demands knowledge and experimentation to configure.
> It's more a way for the kernel to cop out and defer figuring out
> device characteristics to userland.  If you have a better idea, I'm
> all ears.

I don't think I have a better idea as such. We talked once and you
mentioned that for faster devices we should probably do some token based
mechanism (which I believe would in practice mean sector based IO
accounting).
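
Something along those lines is what I imagine a token based mechanism
would look like (purely illustrative; none of these names exist in the
kernel):

#include <stdint.h>

struct grp {
	int64_t tokens;
	unsigned int weight;
};

/* Tokens refill in proportion to the group's weight... */
static void refill(struct grp *g, uint64_t elapsed_us,
		   uint64_t tokens_per_us, unsigned int total_weight)
{
	g->tokens += elapsed_us * tokens_per_us * g->weight / total_weight;
}

/* ...and each bio pays tokens equal to its sector count. */
static int try_charge(struct grp *g, uint64_t nr_sectors)
{
	if (g->tokens < (int64_t)nr_sectors)
		return 0;	/* throttle */
	g->tokens -= nr_sectors;
	return 1;	/* dispatch */
}

The open question is still the same one you raised: what a token should
cost for different kinds of IO.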

If a proportional IO controller using the sector as the unit of
measurement is not good enough and does not solve the issues real world
workloads are facing, then we can think of giving additional control in
blk-throttle to at least get some of the use cases working.

Thanks
Vivek
