Message-ID: <20091116213709.GK13235@redhat.com>
Date:	Mon, 16 Nov 2009 16:37:09 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	"Alan D. Brunelle" <Alan.Brunelle@...com>
Cc:	linux-kernel@...r.kernel.org, jens.axboe@...cle.com
Subject: Re: [RFC] Block IO Controller V2 - some results

On Mon, Nov 16, 2009 at 04:32:15PM -0500, Alan D. Brunelle wrote:
> On Mon, 2009-11-16 at 16:14 -0500, Vivek Goyal wrote:
> > On Mon, Nov 16, 2009 at 03:51:00PM -0500, Alan D. Brunelle wrote:
> > > Hi Vivek: 
> > > 
> > > I'm finding some things that don't quite seem right - executive
> > > summary: 
> > 
> > Hi Alan,
> > 
> > Thanks a lot for such extensive testing and for the test results. I am
> > still digesting them, but I thought I would make a quick note about
> > writes. This patchset works only for sync IO. If you are performing
> > buffered writes, you will not see any service differentiation.
> > Providing support for the buffered write path is on the TODO list.
> 
> Ah, I thought you meant sync I/O versus async I/O. So do you mean that
> the testing should use _direct_ I/O (bypassing the cache)? 

Only for writes. Reads will show up as sync IO at CFQ anyway, so
that's not a problem. You can test those either as direct IO or let
them go through the page cache.

For writes, you need to use direct IO if you are looking for service
differentiation with the current patchset.
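
For illustration, here is a minimal C sketch of such a direct write.
The file path and the 4KB size are placeholders, not anything from the
patchset; note that O_DIRECT requires the buffer, offset and length to
be aligned, typically to the logical block size:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        void *buf;
        int fd;

        /* O_DIRECT bypasses the page cache, so the write reaches
         * CFQ as sync IO. */
        fd = open("/mnt/test/file", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
                return 1;

        /* O_DIRECT needs an aligned buffer; 4KB covers common
         * logical block sizes. */
        if (posix_memalign(&buf, 4096, 4096))
                return 1;
        memset(buf, 0, 4096);

        if (write(fd, buf, 4096) != 4096)
                return 1;

        free(buf);
        close(fd);
        return 0;
}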

Thanks
Vivek

> 
> > 
> > > 
> > > o  I think the apportionment algorithm doesn't work consistently well
> > > for writes.
> > > 
> > > o  I think there are problems with significant performance loss when
> > > doing random I/Os.
> > 
> > This concerns me. I had a quick look, and as per your results, you are
> > seeing this regression even with group_idle=0. I guess this might be
> > coming from the fact that we idle on the sync-noidle workload per
> > group, and that idling becomes significant as the number of groups
> > increases.
> > 
> > Thanks
> > Vivek
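
Regarding the group_idle=0 runs quoted above, here is a hypothetical
sketch of disabling per-group idling from a test harness. It assumes
the patchset exposes group_idle as a CFQ iosched tunable under sysfs;
the exact path and the device name (sdb) are assumptions, not taken
from the patchset:

#include <stdio.h>

int main(void)
{
        /* Assumed sysfs location of the per-group idle tunable;
         * substitute the device under test for "sdb". */
        FILE *f = fopen("/sys/block/sdb/queue/iosched/group_idle", "w");

        if (!f)
                return 1;
        fputs("0\n", f);        /* 0 disables per-group idling */
        fclose(f);
        return 0;
}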