Message-ID: <CAC8teKVi3Pwp2as6Q8y+35QAtRWGLywLZfWM-g2SoG8GzaPZxw@mail.gmail.com>
Date:	Sun, 16 Dec 2012 12:38:15 +0800
From:	Zhu Yanhai <zhu.yanhai@...il.com>
To:	Zhao Shuai <zhaoshuai@...ebsd.org>
Cc:	Vivek Goyal <vgoyal@...hat.com>, tj@...nel.org, axboe@...nel.dk,
	ctalbott@...gle.com, rni@...gle.com, linux-kernel@...r.kernel.org,
	cgroups@...r.kernel.org, containers@...ts.linux-foundation.org
Subject: Re: performance drop after using blkcg

2012/12/12 Zhao Shuai <zhaoshuai@...ebsd.org>
>
> 2012/12/11 Vivek Goyal <vgoyal@...hat.com>:
> > These results are with slice_idle=0?
>
> Yes, slice_idle is disabled.
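(Just to make sure we're talking about the same knob: slice_idle here
is the per-device CFQ tunable, disabled with something like

    echo 0 > /sys/block/sdb/queue/iosched/slice_idle

where sdb is a placeholder for whichever device is under test.)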
>
> > What's the storage you are using? Looking at the speed of IO I would
> > guess it is not one of those rotational disks.
>
> I have done the same test on 3 different types of boxes, and all of
> them show a performance drop (30%-40%) after using blkcg. Though they
> have different types of disks, all the storage they use is traditional
> rotational devices (e.g. "HP EG0146FAWHU", "IBM-ESXS").

Or you may want to try IO throttling (i.e. the
blkio.throttle.read_iops_device and blkio.throttle.write_iops_device
knobs) instead of blkcg proportional weight. We use it as a compromise
between performance and fairness of bandwidth allocation on some
clusters whose storage backend is an ioDrive from FusionIO, which is
also a really fast device.
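For example, assuming the blkio controller is mounted at /cgroup/blkio,
the group is named test1, and the device is major:minor 8:16 (all
placeholders, and the limit is a made-up number to tune for your
workload):

    # cap the group at 10000 read and 10000 write IOPS on device 8:16
    echo "8:16 10000" > /cgroup/blkio/test1/blkio.throttle.read_iops_device
    echo "8:16 10000" > /cgroup/blkio/test1/blkio.throttle.write_iops_device

Each group then gets a hard IOPS cap instead of a proportional share,
so the device keeps seeing requests from all the groups at once.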
CFQ/blkcg is based on time-sharing the storage device (IOPS mode just
converts IOPS into virtual time, so in fact it is still time-sharing),
which means the device services only a single group during any one
slice. Many modern devices need a fair degree of parallelism to reach
their full capability, so the device can't run at full speed when no
single group alone puts enough pressure on it, even though the groups
would do so added together. That's why you get good scores when you run
the same workloads under the deadline scheduler.
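If you want to compare, you can switch the scheduler per device and
rerun the same workloads (sdb is again a placeholder):

    cat /sys/block/sdb/queue/scheduler
    echo deadline > /sys/block/sdb/queue/scheduler

Deadline does no per-group time slicing, so requests from all the
groups reach the device together and it keeps enough parallelism.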

--
Regards,
Zhu Yanhai

>
> > So if somebody wants to experiment, just tweak the code a bit to allow
> > preemption when a queue which lost its share gets backlogged, and you
> > practically have a prototype of iops-based group scheduling.
>
> Could you please explain more about this? How should the code be
> adjusted? I have tested the following code piece; the result is that
> we lose group differentiation.
>
> cfq_group_served()
> {
>         /* in IOPS mode, charge the group by requests dispatched
>          * instead of by time slice consumed */
>         if (iops_mode(cfqd))
>                 charge = cfqq->slice_dispatch;
>         cfqg->vdisktime += cfq_scale_slice(charge, cfqg);
> }
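A quick way to see whether the groups actually get differentiated is to
compare the per-group completed IO counts while both groups are running
(again assuming a /cgroup/blkio mount and groups named test1/test2):

    cat /cgroup/blkio/test1/blkio.io_serviced
    cat /cgroup/blkio/test2/blkio.io_serviced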
>
>
> --
> Regards,
> Zhao Shuai
