Message-ID: <20090507090450.GA4613@linux>
Date:	Thu, 7 May 2009 11:04:50 +0200
From:	Andrea Righi <righi.andrea@...il.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, nauman@...gle.com,
	dpshah@...gle.com, lizf@...fujitsu.com, mikew@...gle.com,
	fchecconi@...il.com, paolo.valente@...more.it,
	jens.axboe@...cle.com, ryov@...inux.co.jp, fernando@....ntt.co.jp,
	s-uchida@...jp.nec.com, taka@...inux.co.jp,
	guijianfeng@...fujitsu.com, jmoyer@...hat.com,
	dhaval@...ux.vnet.ibm.com, balbir@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, agk@...hat.com,
	dm-devel@...hat.com, snitzer@...hat.com, m-ikeda@...jp.nec.com,
	peterz@...radead.org
Subject: Re: IO scheduler based IO Controller V2

On Wed, May 06, 2009 at 05:52:35PM -0400, Vivek Goyal wrote:
> > > Without io-throttle patches
> > > ---------------------------
> > > - Two readers, first BE prio 7, second BE prio 0
> > > 
> > > 234179072 bytes (234 MB) copied, 4.12074 s, 56.8 MB/s
> > > High prio reader finished
> > > 234179072 bytes (234 MB) copied, 5.36023 s, 43.7 MB/s
> > > 
> > > Note: There is no service differentiation between the prio 0 and prio 7
> > >       tasks with the io-throttle patches.
> > > 
> > > Test 3
> > > ======
> > > - Run one RT reader and one BE reader in the root cgroup without any
> > >   limits. I guess this should mean unlimited BW and the behavior should
> > >   be the same as with CFQ without the io-throttling patches.
> > > 
> > > With io-throttle patches
> > > =========================
> > > Ran the test 4 times because I was getting different results in different
> > > runs.
> > > 
> > > - Two readers, one RT prio 0  other BE prio 7
> > > 
> > > 234179072 bytes (234 MB) copied, 2.74604 s, 85.3 MB/s
> > > 234179072 bytes (234 MB) copied, 5.20995 s, 44.9 MB/s
> > > RT task finished
> > > 
> > > 234179072 bytes (234 MB) copied, 4.54417 s, 51.5 MB/s
> > > RT task finished
> > > 234179072 bytes (234 MB) copied, 5.23396 s, 44.7 MB/s
> > > 
> > > 234179072 bytes (234 MB) copied, 5.17727 s, 45.2 MB/s
> > > RT task finished
> > > 234179072 bytes (234 MB) copied, 5.25894 s, 44.5 MB/s
> > > 
> > > 234179072 bytes (234 MB) copied, 2.74141 s, 85.4 MB/s
> > > 234179072 bytes (234 MB) copied, 5.20536 s, 45.0 MB/s
> > > RT task finished
> > > 
> > > Note: Out of 4 runs, it looks like twice there was a complete priority
> > >       inversion and the RT task finished after the BE task. In the other
> > >       two runs, the difference between the BW of the RT and BE tasks was
> > >       much smaller than without the patches; in fact, once it was almost
> > >       the same.
> > 
> > This is strange. If you don't set any limit there shouldn't be any
> > difference with respect to the other case (without the io-throttle
> > patches).
> > 
> > At worst there is a small overhead from task_to_iothrottle(), under
> > rcu_read_lock(). I'll repeat this test ASAP and see if I can
> > reproduce this strange behaviour.
> 
> Yeah, I also found this strange. At least in the root group there should
> not be any behavior change (at most one might expect a small drop in
> throughput because of the extra code).

Hi Vivek,

I'm not able to reproduce the strange behaviour above.

Which commands are you running exactly? Is the system isolated (stupid
question): no cron jobs or background tasks doing IO during the tests?
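
To rule out background IO, a quick check like this should do (assuming
the test disk is sda; the read/write sector counters should stay almost
unchanged over an idle interval):

$ grep ' sda ' /proc/diskstats; sleep 10; grep ' sda ' /proc/diskstats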

Here is the script I've used:

$ cat test.sh
#!/bin/sh
# Drop the page cache so both readers really hit the disk.
echo 3 > /proc/sys/vm/drop_caches
# RT reader (class 1, prio 0); prefix its output so "sort" groups it below.
ionice -c 1 -n 0 dd if=bigfile1 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/RT: \1/" &
# NOTE: $! is the PID of the last pipeline stage (the sed), but it shares
# dd's cgroup, so the membership check is still meaningful.
cat /proc/$!/cgroup | sed "s/\(.*\)/RT: \1/"
# BE reader (class 2, prio 7).
ionice -c 2 -n 7 dd if=bigfile2 of=/dev/null bs=1M 2>&1 | sed "s/\(.*\)/BE: \1/" &
cat /proc/$!/cgroup | sed "s/\(.*\)/BE: \1/"
# Wait for both background pipelines to finish.
wait
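
For reference, bigfile1 and bigfile2 are just plain 234 MiB files (hence
the 245366784 bytes in the dd output below); any files of that size will
do, created e.g. with:

$ dd if=/dev/urandom of=bigfile1 bs=1M count=234
$ dd if=/dev/urandom of=bigfile2 bs=1M count=234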

And here are the results on my PC:

2.6.30-rc4
~~~~~~~~~~
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 21.3406 s, 11.5 MB/s
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.989 s, 20.5 MB/s
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 23.4436 s, 10.5 MB/s
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.9555 s, 20.5 MB/s
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 21.622 s, 11.3 MB/s
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.9856 s, 20.5 MB/s
$ sudo sh test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 21.5664 s, 11.4 MB/s
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 11.8522 s, 20.7 MB/s

2.6.30-rc4 + io-throttle, no BW limit, both tasks in the root cgroup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$ sudo sh ./test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 23.6739 s, 10.4 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 12.2853 s, 20.0 MB/s
RT: 4:blockio:/
$ sudo sh ./test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 23.7483 s, 10.3 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 12.3597 s, 19.9 MB/s
RT: 4:blockio:/
$ sudo sh ./test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 23.6843 s, 10.4 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 12.4886 s, 19.6 MB/s
RT: 4:blockio:/
$ sudo sh ./test.sh | sort
BE: 234+0 records in
BE: 234+0 records out
BE: 245366784 bytes (245 MB) copied, 23.8621 s, 10.3 MB/s
BE: 4:blockio:/
RT: 234+0 records in
RT: 234+0 records out
RT: 245366784 bytes (245 MB) copied, 12.6737 s, 19.4 MB/s
RT: 4:blockio:/

The difference seems to be just the expected overhead.
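
To put a number on it, averaging the four runs: the RT reader drops from
~20.5 MB/s (vanilla) to ~19.7 MB/s (io-throttle), roughly 4%; the BE
reader drops from ~11.2 MB/s to ~10.35 MB/s, roughly 7%. The RT:BE
bandwidth ratio is essentially unchanged (~1.84 vs ~1.91), so the
prioritization itself is preserved.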

-Andrea
