Message-ID: <20140108103843.GA8256@quack.suse.cz>
Date:	Wed, 8 Jan 2014 11:38:43 +0100
From:	Jan Kara <jack@...e.cz>
To:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
Cc:	Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
	Fengguang Wu <fengguang.wu@...el.com>,
	David Cohen <david.a.cohen@...ux.intel.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	Damien Ramonda <damien.ramonda@...el.com>,
	Linus <torvalds@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH V3] mm readahead: Fix the readahead fail in case of
 empty numa node

On Wed 08-01-14 14:07:03, Raghavendra K T wrote:
> On 01/06/2014 04:26 PM, Jan Kara wrote:
> >On Mon 06-01-14 15:51:55, Raghavendra K T wrote:
> >>Currently, max_sane_readahead() returns zero on a CPU with an empty NUMA
> >>node. Fix this by checking for the potential empty NUMA node case during
> >>the calculation. We also limit the number of readahead pages to 4k.
> >>
> >>Signed-off-by: Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
> >>---
> >>The current patch limits readahead to 4k pages (16MB was suggested by
> >>Linus), and it also handles the case of a memoryless CPU causing
> >>readahead failures.  We still do not consider [fm]advise()-specific
> >>calculations here.  I have dropped the idea of iterating over NUMA nodes
> >>to calculate free pages.  I do not have much idea whether there is any
> >>impact on big streaming apps..  Comments/suggestions?
> >   As you say, I would also be interested in what impact this has on a
> >streaming application. It should be rather easy to check - create a 1 GB
> >file and drop caches. Then measure how long it takes to open the file,
> >call fadvise FADV_WILLNEED, and read the whole file (for a kernel with
> >and without your patch). Do several measurements so that we get some
> >meaningful statistics. The resulting numbers can then be part of the
> >changelog. Thanks!
> >
> 
> Hi Honza,
> 
> Thanks for the idea. (Sorry for the delay; I spent some time doing
> fadvise and other benchmarking.) Here is the result on my x240 machine
> with 32 CPUs (w/ HT) and 128GB RAM.
> 
> The test below used a 1GB test file, as suggested.
> 
> x base_result
> + patched_result
>     N           Min           Max        Median           Avg        Stddev
> x  12         7.217         7.444        7.2345     7.2603333    0.06442802
> +  12          7.24         7.431         7.243     7.2684167   0.059649672
> 
> From the result we can see that the patch has little impact.
> I shall include the result in the changelog when I resend the next
> version, depending on the other comments.
> 
> ---
> The test program looked something like this:
> 
> #include <stdio.h>
> #include <fcntl.h>
> #include <unistd.h>
> 
> char buf[4096];
> 
> int main()
> {
> int fd = open("testfile", O_RDONLY);
> unsigned long read_bytes = 0;
> int sz;
> posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
  Hum, but this call should have rather been:
struct stat st;

fstat(fd, &st);
posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);

The posix_fadvise() call you had doesn't do anything...

								Honza

> do {
> 	sz = read(fd, buf, 4096);
> 	if (sz > 0)
> 		read_bytes += sz;
> } while (sz > 0);
> 
> close(fd);
> printf("Total bytes read = %lu\n", read_bytes);
> return 0;
> }
> 
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
