Date:   Wed, 19 Sep 2018 14:15:10 +0000
From:   "Bean Huo (beanhuo)" <beanhuo@...ron.com>
To:     Jan Kara <jack@...e.cz>
CC:     Andreas Dilger <adilger@...ger.ca>,
        "jeffm@...e.com" <jeffm@...e.com>,
        "Theodore Y. Ts'o" <tytso@....edu>,
        "linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>
Subject: RE: [EXT] how to disable readahead

>> >> >>
>> >> >> And then used btrace to monitor the I/O requests sent to the device:
>> >> >>
>> >> >> 252,4    0      413     0.077274997 14645  Q   R 4408 + 8 [dd]
>> >> >> 252,4    2       77     0.077355648  5529  C   R 4408 + 8 [0]
>> >> >> 252,4    0      414     0.077393725 14645  Q   R 4416 + 8 [dd]
>> >> >> 252,4    2       78     0.077630722  5529  C   R 4416 + 8 [0]
>> >> >> 	...
>> >> >>
>> >> >> ... and indeed, the reads are being sent to the device in 4k chunks.
>> >> >> That's indeed surprising.  I'd have to do some debugging with
>> >> >> tracepoints to see what requests are being issued from the
>> >> >> mm/filemap.c to the file system.
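
(In the btrace output above: Q = request queued, C = request completed, and
"R 4408 + 8" is a read of 8 512-byte sectors starting at sector 4408, i.e. a
single 4KB IO.)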
>> >> >
>> >> > And this is in fact expected. There are two basic ways how data
>> >> > can appear in page cache: ->readpage and ->readpages filesystem
>> >> > callbacks. The second one is what readahead (and only readahead)
>> >> > uses, the first one is used as a fallback when readahead fails
>> >> > for some reason. So if you disable readahead, you're left only
>> >> > with the ->readpage call, which does only one-page (4k) reads.
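
For reference, the two callbacks described above are declared in struct
address_space_operations in include/linux/fs.h; roughly (a sketch for the
4.x kernels this thread concerns, unrelated fields omitted):

    struct address_space_operations {
            /* fill a single page; the non-readahead fallback */
            int (*readpage)(struct file *, struct page *);
            /* fill a batch of pages; called only from readahead */
            int (*readpages)(struct file *filp, struct address_space *mapping,
                             struct list_head *pages, unsigned nr_pages);
            /* ... many other callbacks omitted ... */
    };

ext4 points these at ext4_readpage()/ext4_readpages() in fs/ext4/inode.c.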
>> >>
>> >> Even *with* readahead, why would we add the overhead of processing
>> >> each page separately instead of handling all pages in a single
>> >> batch via ->readpages()?
>> >
>> >Hum, I don't understand. With readahead enabled, we should be
>> >submitting larger batches of IO as generated by the ->readpages call,
>> >and ->readpage never ends up issuing any IO: see how
>> >generic_file_buffered_read() first calls page_cache_sync_readahead(),
>> >which ends up locking pages and submitting the reads, and only then do
>> >we go and search for the page again and lock it - which effectively
>> >waits for the readahead to pull in the first page.
>> >
>> >								Honza
>> >--
>> >Jan Kara <jack@...e.com>
>> >SUSE Labs, CR
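
For context, the path Jan describes lives in generic_file_buffered_read() in
mm/filemap.c; heavily abridged (a sketch of the 4.x-era code, with error
handling and the async-readahead branch dropped):

    page = find_get_page(mapping, index);
    if (!page) {
            /* batches pages and submits the IO via ->readpages() */
            page_cache_sync_readahead(mapping, ra, filp,
                                      index, last_index - index);
            page = find_get_page(mapping, index);
            if (unlikely(page == NULL))
                    goto no_cached_page;    /* fallback: ->readpage() */
    }
    if (!PageUptodate(page)) {
            /* lock the page, i.e. wait for the readahead IO that was
             * submitted for it to complete */
    }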
>>
>> 'read_ahead_kb' should only be used for readahead (the internal second
>> read); it shouldn't be used as a flag that changes the chunk size of the
>> first read request coming from user space, even when 'read_ahead_kb' is
>> configured to 0.
>
>OK, so you made me look into the details of how the read request size gets
>computed :).  The thing is: when read_ahead_kb is 0, we really do single-page
>reads, as all the cleverness in trying to issue large read requests gets
>disabled. Once read_ahead_kb is >0 (you have to write at least PAGE_SIZE
>there - i.e. 4 on x86_64), we will actually issue requests at least as large
>as requested in the syscall.
>
>								Honza
>--
>Jan Kara <jack@...e.com>
>SUSE Labs, CR
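
For what it's worth, the sysfs knob is just converted to pages and cached per
open file; roughly (a sketch from block/blk-sysfs.c and mm/readahead.c of
that era):

    /* queue_ra_store(): handles writes to
     * /sys/block/<dev>/queue/read_ahead_kb */
    q->backing_dev_info->ra_pages = ra_kb >> (PAGE_SHIFT - 10);

    /* mm/readahead.c: each open file snapshots the bdi value */
    void file_ra_state_init(struct file_ra_state *ra,
                            struct address_space *mapping)
    {
            ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
            ra->prev_pos = -1;
    }

With PAGE_SHIFT = 12, writing 4 gives ra_pages = 1 (the minimum Jan
mentions), and writing 0 gives ra_pages = 0, which disables the batching
entirely.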

Meanwhile, I noticed that if 'read_ahead_kb' is 128 (128KB) and you read data
in 512KB chunks, each 512KB request is split into four 128KB requests to the
HW device. When 'read_ahead_kb' is 512 (512KB), the 512KB read request is
passed directly to the lower layers. This also doesn't make sense: the lower
layers can buffer 512KB of data, so a 512KB read shouldn't be split into four
128KB requests.
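
As far as I can tell, the split comes from the readahead window in
ondemand_readahead() being sized against ra_pages; roughly (a sketch of
mm/readahead.c, details vary by kernel version):

    static unsigned long ondemand_readahead(/* ... */,
                                            unsigned long req_size)
    {
            /* read_ahead_kb, converted to pages */
            unsigned long max_pages = ra->ra_pages;
            /* ... */
            /* each window submitted via ->readpages() is capped at
             * max_pages, so a 512KB read with read_ahead_kb=128 goes
             * out as four consecutive 128KB windows */
            ra->size = get_init_ra_size(req_size, max_pages);
            /* ... */
    }

(Newer kernels can extend the window beyond ra_pages, up to the device's
optimal IO size in bdi->io_pages, for large requests, so whether the split
happens also depends on the kernel version.)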


--Bean Huo

