Date:   Tue, 10 Aug 2021 12:23:48 -0700
From:   Andi Kleen <>
To:     Dave Hansen <>,
        "Kirill A. Shutemov" <>,
        Borislav Petkov <>,
        Andy Lutomirski <>,
        Sean Christopherson <>,
        Andrew Morton <>,
        Joerg Roedel <>
Cc:     Kuppuswamy Sathyanarayanan <>,
        David Rientjes <>,
        Vlastimil Babka <>,
        Tom Lendacky <>,
        Thomas Gleixner <>,
        Peter Zijlstra <>,
        Paolo Bonzini <>,
        Ingo Molnar <>,
        Varad Gautam <>,
        Dario Faggioli <>,
        "Kirill A. Shutemov" <>
Subject: Re: [PATCH 1/5] mm: Add support for unaccepted memory

> But, not everyone is going to agree with me.

Both the Intel TDX and the AMD SEV sides independently came to the opposite 
conclusion. In general, people care a lot about the boot time of VM guests.

> This also begs the question of how folks know when this "blip" is over.
>   Do we have a counter for offline pages?  Is there any way to force page
> acceptance?  Or, are we just stuck allocating a bunch of memory to warm
> up the system?
> How do folks who care about these new blips avoid them?

It's no different from any other warmup. At warmup time you always have 
lots of blips until the working set stabilizes. For example, in 
virtualization the first touch of a new page is usually an EPT violation 
handled by the host. Or in the native case you may need to do IO or free 
memory. Anybody who based their critical latency percentiles on a 
warming-up process would be foolish; the picture would be completely 
misleading.

So the basic operation is adding some overhead, but I don't think 
anything is that unusual compared to the state of the art.

Now perhaps the locking might be a problem if the other operations all 
run in parallel, causing unnecessary serialization. If that's really a 
problem I guess we can optimize it later. I don't think there's anything 
fundamental about the current locking.

