Message-ID: <535AD62D.20509@sr71.net>
Date:	Fri, 25 Apr 2014 14:39:57 -0700
From:	Dave Hansen <dave@...1.net>
To:	Mel Gorman <mgorman@...e.de>
CC:	x86@...nel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
	ak@...ux.intel.com, riel@...hat.com, alex.shi@...aro.org,
	dave.hansen@...ux.intel.com
Subject: Re: [PATCH 2/6] x86: mm: rip out complicated, out-of-date, buggy
 TLB flushing

On 04/24/2014 01:45 AM, Mel Gorman wrote:
> > +/*
> > + * See Documentation/x86/tlb.txt for details.  We choose 33
> > + * because it is large enough to cover the vast majority (at
> > + * least 95%) of allocations, and is small enough that we are
> > + * confident it will not cause too much overhead.  Each single
> > + * flush is about 100 cycles, so this caps the maximum overhead
> > + * at _about_ 3,000 cycles.
> > + */
> > +/* in units of pages */
> > +unsigned long tlb_single_page_flush_ceiling = 1;
> > +
> This comment is premature. The documentation file does not exist yet and
> 33 means nothing yet. Out of curiosity though, how confident are you
> that a TLB flush is generally 100 cycles across different generations
> and manufacturers of CPUs? I'm not suggesting you change it or
> auto-tune it; I'm just curious.

First of all, I changed the units here at some point, and I screwed up
the comments.  I meant 100 nanoseconds, *not* cycles.
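
To make concrete what the ceiling controls, here's a rough sketch of
the heuristic (simplified, not the actual arch/x86/mm/tlb.c code;
flush_all() and flush_one_page() are made-up stand-ins for a full TLB
flush and a single INVLPG):

/*
 * Simplified sketch, not the real kernel code: above the ceiling,
 * one full TLB flush is cheaper than many individual page flushes.
 */
static void flush_range(unsigned long start, unsigned long end)
{
	unsigned long addr;

	if (((end - start) >> PAGE_SHIFT) > tlb_single_page_flush_ceiling) {
		flush_all();			/* full TLB flush */
		return;
	}
	for (addr = start; addr < end; addr += PAGE_SIZE)
		flush_one_page(addr);		/* one INVLPG each */
}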

For the sake of completeness, here are the data from a Westmere CPU.
The columns are pages per flush, number of samples, total nanoseconds,
and average nanoseconds per page; I only printed out the flush sizes
with >100 samples.  I'm not _quite_ sure why the <=5-page cases are so
slow per-page compared to flushes of larger numbers of pages.

The overall average was 151 ns/page, and for 6 pages and up it was 107
ns/page.

     1  1560658    279861777 avg/page:   179
     2   179981     85329139 avg/page:   237
     3    99797    146972011 avg/page:   490
     4   161470    133072233 avg/page:   206
     5    44150     42142670 avg/page:   190
     6    17364     12063833 avg/page:   115
     7    12325      9899412 avg/page:   114
     8     4202      3838077 avg/page:   114
     9      811       990320 avg/page:   135
    10     4448      4955283 avg/page:   111
    11    69051     86723229 avg/page:   114
    12      465       642204 avg/page:   115
    13      157       226814 avg/page:   111
    16      781      1741461 avg/page:   139
    17     1506      2778201 avg/page:   108
    18      110       211216 avg/page:   106
    19    13322     27941893 avg/page:   110
    21     1828      4092988 avg/page:   106
    24     1566      4057605 avg/page:   107
    25      246       646463 avg/page:   105
    29      411      1275101 avg/page:   106
    33     3191     11775818 avg/page:   111
    52     3096     17297873 avg/page:   107
    65     2244     15349445 avg/page:   105
   129     2278     33246120 avg/page:   113
   240    12181    305529055 avg/page:   104
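
To be explicit about how avg/page falls out of the other columns: it's
total nanoseconds divided by (samples * pages), e.g. the 2-page row is
85329139 / (179981 * 2) ~= 237.  A throwaway userspace check (my own
helper, nothing from the kernel):

#include <stdio.h>

/* Throwaway check: avg/page should equal total_ns / (samples * pages). */
int main(void)
{
	static const struct { unsigned long pages, samples, total_ns; } rows[] = {
		{ 1, 1560658, 279861777 },	/* expect 179 */
		{ 2,  179981,  85329139 },	/* expect 237 */
		{ 6,   17364,  12063833 },	/* expect 115 */
	};
	unsigned int i;

	for (i = 0; i < sizeof(rows) / sizeof(rows[0]); i++)
		printf("%3lu pages: %lu ns/page\n", rows[i].pages,
		       rows[i].total_ns / (rows[i].samples * rows[i].pages));
	return 0;
}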

