Date:	Fri, 9 Apr 2010 14:50:49 -0500
From:	Brian Haslett <knotwurk@...il.com>
To:	steve@...idescorp.com
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] increase pipe size/buffers/atomicity :D

> On Wed, 2010-04-07 at 19:38 -0600, brian wrote:
>> (tested and working with 2.6.32.8 kernel, on a Athlon/686)
>
> It would be good to know what issue this addresses. Gives people a way
> to weigh any side-effects/drawbacks against the benefits, and an
> opportunity to suggest alternate/better approaches.
>

I wouldn't say it addresses anything that I'd really consider broken;
it started as a personal experiment of mine, aimed at a small
performance gain.  I figured, hey, bigger pipes, why not?  It looks
like these pipe sizes have been around practically since the epoch.
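
For reference, a rough way to see the capacity a given kernel hands out,
from userspace, is just to fill a pipe with non-blocking writes until
EAGAIN; something along these lines (illustrative sketch only, nothing
kernel-specific):

/* pipecap.c - rough userspace check of pipe capacity (illustrative sketch) */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

int main(void)
{
	int fds[2];
	char buf[4096] = {0};
	long total = 0;
	ssize_t n;

	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}
	fcntl(fds[1], F_SETFL, O_NONBLOCK);

	/* keep writing until the pipe is full (write returns -1/EAGAIN) */
	while ((n = write(fds[1], buf, sizeof(buf))) > 0)
		total += n;
	if (n < 0 && errno != EAGAIN)
		perror("write");

	printf("pipe capacity: ~%ld bytes\n", total);
	return 0;
}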


>>  #define PIPE_BUF_FLAG_LRU      0x01    /* page is on the LRU */
>>  #define PIPE_BUF_FLAG_ATOMIC   0x02    /* was atomically mapped */
>> --- include/asm-generic/page.h.orig     2010-04-06 22:57:08.000000000 -0500
>> +++ include/asm-generic/page.h  2010-04-06 22:57:23.000000000 -0500
>> @@ -12,7 +12,7 @@
>>
>>  /* PAGE_SHIFT determines the page size */
>>
>> -#define PAGE_SHIFT     12
>> +#define PAGE_SHIFT     13
>
> This has pretty wide-ranging implications, both within and across
> arches. I don't think it's something that can be changed easily. Also I
> don't believe this #define is used in your configuration (Athlon/686)
> unless you're running without an MMU.
>

Actually, the reason I went after this gets at the only reason I
started this whole ordeal to begin with: line #135 in pipe_fs_i.h,
which reads "#define PIPE_SIZE    PAGE_SIZE".


>>  #ifdef __ASSEMBLY__
>>  #define PAGE_SIZE      (1 << PAGE_SHIFT)
>>  #else
>> --- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
>> +++ include/linux/limits.h      2010-04-06 22:56:28.000000000 -0500
>> @@ -10,7 +10,7 @@
>>  #define MAX_INPUT        255   /* size of the type-ahead buffer */
>>  #define NAME_MAX         255   /* # chars in a file name */
>>  #define PATH_MAX        4096   /* # chars in a path name including nul */
>> -#define PIPE_BUF        4096   /* # bytes in atomic write to a pipe */
>> +#define PIPE_BUF        8192   /* # bytes in atomic write to a pipe */
>

You'd think so (according to some posts I'd read before trying this),
but I actually tried several variations on a few things, and until I
changed *this one in particular*, my kernel would boot up fine, but
during the shell/init phase the system would start giving me errors to
the effect of "unable to create pipe" and "too many file descriptors
open" over and over again.

>> --- include/linux/pipe_fs_i.h.orig      2010-04-06 22:56:51.000000000 -0500
>> +++ include/linux/pipe_fs_i.h   2010-04-06 22:56:58.000000000 -0500
>> @@ -3,7 +3,7 @@
>>
>>  #define PIPEFS_MAGIC 0x50495045
>>
>> -#define PIPE_BUFFERS (16)
>> +#define PIPE_BUFFERS (32)
>
> This worries me. In several places there are functions with 2 or 3
> pointer arrays of dimension [PIPE_BUFFERS] on the stack. So this adds
> anywhere from 128 to 384 bytes to the stack in these functions depending
> on sizeof(void*) and the number of arrays.
>

As my initial hope/goal was just to increase the size of the pipes, I
figured I may as well increase the number of buffers too (though I'll
admit I haven't poked around every little .c/.h file that uses it).
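
Just to spell out the arithmetic behind that stack concern as I read it
(hypothetical function, not actual kernel code):

/*
 * Illustration only: each on-stack array of PIPE_BUFFERS pointers grows
 * by 16 entries when PIPE_BUFFERS goes from 16 to 32.
 */
void example_splice_like_function(void)
{
	void *pages[32];	/* was [16]: +16 * sizeof(void *) = +64 or +128 bytes */
	void *partial[32];	/* same again */
	/* so 2-3 such arrays add roughly 128 to 384 bytes of extra stack */
}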

I guess I wasn't seriously trying to push anyone into jumping through
hoops for this thing; I was just a little excited and figured I'd
share with you all.  I probably spent the better part of a few days
either researching, poking around the kernel headers, or experimenting
with different combinations.   As such, I've attached a .txt file
explaining the controlled (but probably not as thorough as you're used
to) benchmark I ran.   It's not a pretty graph, I know, but gimme a
break, I wrote it in vim and did the math with bc ;)

View attachment "benchmark1.txt" of type "text/plain" (4418 bytes)
