Date:	Tue, 25 Mar 2008 22:16:00 -0600
From:	John Marvin <jsm@...hp.com>
To:	linux-ia64@...r.kernel.org
Cc:	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: larger default page sizes...

Peter Chubb wrote:

> 
> You end up having to repeat PTEs to fit into Linux's page table
> structure *anyway* (unless we can change Linux's page table).  But
> there's no place in the short format hardware-walked page table (that
> reuses the leaf entries in Linux's table) for a page size.  And if you
> use some of the holes in the format, the hardware walker doesn't
> understand it --- so you have to turn off the hardware walker for
> *any* regions where there might be a superpage.  

No, you can set an illegal memory attribute in the pte for any superpage entry 
and leave the hardware walker enabled for the base page size. The software tlb 
miss handler can then install the superpage tlb entry. A while back I posted a 
working prototype of Shimizu superpages on ia64, using short format vhpt's, to 
the linux kernel list.
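
As a rough sketch of that control flow, here is a small user-space model (not
the actual ia64 fault path; the pte layout, the MA_RESERVED value, and the
16KB/16MB page sizes are illustrative assumptions, not the real Itanium
encoding). The only point it shows is the shape of the trick: the hardware
walker faults on the reserved memory attribute, and the software handler
recognizes it and installs one translation covering the whole superpage.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT      14              /* assumed 16 KB base pages */
#define SUPERPAGE_SHIFT 24              /* assumed 16 MB superpages */
#define MA_RESERVED     0x6             /* assumed "illegal" memory attribute */

struct pte {
    uint64_t pfn;                       /* physical frame number, base pages */
    unsigned ma;                        /* memory attribute field */
};

struct tlb_insert {
    uint64_t vaddr, paddr;
    unsigned page_shift;
};

/* Software miss path, reached only when the hardware walker has faulted. */
static int handle_walker_fault(uint64_t vaddr, const struct pte *pte,
                               struct tlb_insert *out)
{
    if (pte->ma != MA_RESERVED)
        return 0;                       /* genuine fault, not a superpage */

    /* Align both addresses down to the superpage boundary and insert a
       single translation with the larger page size. */
    out->vaddr = vaddr & ~((1ULL << SUPERPAGE_SHIFT) - 1);
    out->paddr = (pte->pfn << PAGE_SHIFT) & ~((1ULL << SUPERPAGE_SHIFT) - 1);
    out->page_shift = SUPERPAGE_SHIFT;
    return 1;
}

int main(void)
{
    struct pte pte = { .pfn = 0x12345, .ma = MA_RESERVED };
    struct tlb_insert ins;

    if (handle_walker_fault(0x600000ABCDEFULL, &pte, &ins))
        printf("insert page of 2^%u bytes: va %#llx -> pa %#llx\n",
               ins.page_shift,
               (unsigned long long)ins.vaddr,
               (unsigned long long)ins.paddr);
    return 0;
}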

> 
> If you use the long format VHPT, you have a choice:  load the
> hash table with just the translation that caused the miss, load all
> possible hash entries that could have caused the miss for the page, or
> preload the hash table when the page is instantiated, with all
> possible entries that could hash to the huge page.  I don't remember
> the details, but I seem to remember all these being bad choices for
> one reason or other ... Ian, can you elaborate?

When I was doing measurements of long format vs. short format, the two main 
problems with long format (and why I eventually chose to stick with short 
format) were:

1) There was no easy way to determine automatically what size the long format 
vhpt cache should be, and changing it dynamically would have been too painful. 
Different workloads performed better with different vhpt cache sizes.

2) Regardless of its size, the vhpt cache is duplicated information, and using 
long format vhpt's significantly increased the number of cache misses for some 
workloads. Theoretically there should have been cases where the long format 
solution performed better than the short format solution, but I was never able 
to create one. In many cases the performance of the two was essentially the 
same. In other cases the short format vhpt solution outperformed the long 
format solution, and in those cases there was a significant difference in cache 
misses that I believe explains the performance gap.
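
For a rough sense of the scale of that duplication, here is a back-of-the-envelope
model. The 16KB base page, 16MB superpage, and per-entry sizes are illustrative
assumptions, not measurements from the work above; the point is only that every
base page inside a superpage can hash to its own long format vhpt line, while the
short format scheme reuses the single leaf entry already sitting in the page table.

#include <stdio.h>

#define BASE_PAGE   (16UL * 1024)            /* assumed base page size */
#define SUPERPAGE   (16UL * 1024 * 1024)     /* assumed superpage size */
#define LF_ENTRY_SZ 32UL                     /* assumed long format entry, bytes */
#define SF_ENTRY_SZ 8UL                      /* assumed short format pte, bytes */

int main(void)
{
    /* distinct base pages that may each hash to a different vhpt line */
    unsigned long slots = SUPERPAGE / BASE_PAGE;

    printf("base pages per superpage  : %lu\n", slots);
    printf("worst-case long vhpt bytes: %lu\n", slots * LF_ENTRY_SZ);
    printf("short format leaf bytes   : %lu (already in the page table)\n",
           SF_ENTRY_SZ);
    return 0;
}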

John