Message-ID: <OFA5BC9264.067973F3-ON49257C48.000B85B4-49257C48.000B85B4@lge.com>
Date: Sat, 21 Dec 2013 11:05:51 +0900
From: "Chanho Min" <chanho.min@....com>
To: Minchan Kim <minchan@...nel.org>
Cc: Phillip Lougher <phillip@...ashfs.org.uk>,
linux-kernel@...r.kernel.org,
Hyojun Im <hyojun.im@....com>,
Gunho Lee <gunho.lee@....com>
Subject: Re: Re: Re : Re: [PATCH] Squashfs: add asynchronous read support
> Please don't break the thread.
> You should reply to my mail instead of your original post.
Sorry, it seems to be an issue with my mailer. I'm trying to fix it.
> That result isn't what I want to know.
> What I want to know is why the upper layer issues more I/O per second.
> For example, you read 32K, so the MM layer will prepare 8 pages to read in,
> but when the first page is issued, squashfs makes 32 pages and fills the
> page cache (assuming you use 128K compression), so the MM layer's 7
> already-prepared pages would be freed without further I/O, and
> do_generic_file_read will wait for completion via lock_page without queueing
> further I/O. It's not surprising.
> One of the freed pages is a READA-marked page, so readahead couldn't work.
> If readahead works, it would be just by luck. Actually, by simulating a
> 64K dd, I found the readahead logic would be triggered, but that is just
> by luck and not intended, I think.
The MM layer's readahead pages are not freed immediately.
Squashfs can pick them up via grab_cache_page_nowait, so the READA-marked page is still available.
Intentional or not, readahead works pretty well; I confirmed it in an experiment.
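To make this concrete, here is a trimmed-down version of the existing
squashfs_copy_cache() in fs/squashfs/file.c (written from memory, with error
handling and some details omitted), showing how the pages the MM layer already
inserted get picked up and filled:

/*
 * Trimmed-down sketch of squashfs_copy_cache(), from memory.  For each
 * page covered by the decompressed block, grab_cache_page_nowait()
 * returns the page the MM layer already inserted into the page cache
 * (including the READA-marked one) if it is present, so those pages
 * are filled rather than thrown away.
 */
static int squashfs_copy_cache(struct page *page,
		struct squashfs_cache_entry *buffer, int bytes, int offset)
{
	struct inode *inode = page->mapping->host;
	struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
	int i, mask = (1 << (msblk->block_log - PAGE_CACHE_SHIFT)) - 1;
	int start_index = page->index & ~mask, end_index = start_index | mask;

	for (i = start_index; i <= end_index && bytes > 0; i++,
			bytes -= PAGE_CACHE_SIZE, offset += PAGE_CACHE_SIZE) {
		struct page *push_page;
		int avail = min_t(int, bytes, PAGE_CACHE_SIZE);
		void *pageaddr;

		push_page = (i == page->index) ? page :
			grab_cache_page_nowait(page->mapping, i);
		if (!push_page)
			continue;
		if (PageUptodate(push_page))
			goto skip_page;

		pageaddr = kmap_atomic(push_page);
		squashfs_copy_data(pageaddr, buffer, offset, avail);
		memset(pageaddr + avail, 0, PAGE_CACHE_SIZE - avail);
		kunmap_atomic(pageaddr);
		flush_dcache_page(push_page);
		SetPageUptodate(push_page);
skip_page:
		unlock_page(push_page);
		if (i != page->index)
			page_cache_release(push_page);
	}
	return 0;
}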
> If the first issued I/O completes, squashfs decompresses it into 128K of
> pages, so all 4 iterations (128K/32K) would hit in the page cache.
> Once all 128K hits in the page cache, the MM layer starts to issue the next
> I/O and repeats the above logic until you end up reading the whole file.
> So my opinion is that the upper layer shouldn't logically issue more I/O.
> If it did, it's not what we expect but a side-effect.
>
> That's why I'd like to know your thinking on why IOPS increased.
> Please, could you explain why you think IOPS increased, rather than
> showing a result from the low-level driver?
It is because readahead can work asynchronously in the background.
Suppose you read a large file 128K at a time, contiguously,
like "dd bs=128k". Two I/Os can be issued per 128K read:
the first I/O is for the intended pages, the second I/O is for readahead.
If the first I/O hits in the cache thanks to previous readahead, there is no need
to wait for I/O completion, because the intended pages are already up-to-date.
But current squashfs waits for the second I/O's completion unnecessarily.
That is one of the reasons we should move marking pages up-to-date
into the asynchronous path, as my patch does.
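To show where that would move to, here is a minimal, hypothetical sketch
(the names and the work-queue split are illustrative only, not the exact code
of my patch), assuming a deferred completion handler that runs after the block
has been read and decompressed:

/*
 * Hypothetical sketch: deferred completion work that runs once the
 * compressed block has been read and decompressed asynchronously.
 * The pages are published here, so a reader whose intended pages were
 * already up-to-date never has to block on this I/O.
 */
struct squashfs_async_read {		/* hypothetical structure */
	struct work_struct work;
	struct page **pages;
	int nr_pages;
	int error;
};

static void squashfs_read_complete(struct work_struct *work)
{
	struct squashfs_async_read *req =
		container_of(work, struct squashfs_async_read, work);
	int i;

	for (i = 0; i < req->nr_pages; i++) {
		struct page *page = req->pages[i];

		if (!req->error)
			SetPageUptodate(page);	/* page already filled */
		else
			SetPageError(page);
		unlock_page(page);
		page_cache_release(page);
	}
	kfree(req->pages);
	kfree(req);
}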
> Anyway, in my opinion, we should take care of the MM layer's readahead to
> enhance sequential I/O. For that, we should use the buffer pages passed by
> MM instead of freeing them and allocating new pages in squashfs.
> IMHO, it would be better to implement squashfs_readpages, but my insight
> is very weak, so I guess Phillip will give better ideas/insight on
> the issue.
That's a good point. I also think my patch is another approach, one that can be implemented
without significant impact on the current code, and I will wait for Phillip's comments.
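For reference, a very rough sketch of what a ->readpages entry could look like
if we go in that direction (hypothetical, just to illustrate the idea: it hands
the MM-allocated pages to the existing per-page path, much like the generic
read_pages() fallback in mm/readahead.c does):

/*
 * Hypothetical sketch only: reuse the pages the MM layer already
 * allocated for readahead instead of letting squashfs allocate its
 * own, by adding them to the page cache and feeding them to the
 * existing single-page read path.  Error handling is omitted.
 */
static int squashfs_readpages(struct file *file,
			      struct address_space *mapping,
			      struct list_head *pages, unsigned nr_pages)
{
	unsigned i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page = list_entry(pages->prev, struct page, lru);

		list_del(&page->lru);
		if (!add_to_page_cache_lru(page, mapping, page->index,
					   GFP_KERNEL))
			squashfs_readpage(file, page);
		page_cache_release(page);
	}
	return 0;
}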
Thanks
Chanho