Message-ID: <CA+55aFwku2tDH4+rfaC67xc4-cEwSrXgnQaci=e2id5ZCRE9JQ@mail.gmail.com>
Date: Wed, 11 Jul 2018 09:23:54 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: wei.w.wang@...el.com, virtio-dev@...ts.oasis-open.org, Linux Kernel Mailing List <linux-kernel@...r.kernel.org>, virtualization <virtualization@...ts.linux-foundation.org>, KVM list <kvm@...r.kernel.org>, linux-mm <linux-mm@...ck.org>, "Michael S. Tsirkin" <mst@...hat.com>, Andrew Morton <akpm@...ux-foundation.org>, Paolo Bonzini <pbonzini@...hat.com>, liliang.opensource@...il.com, yang.zhang.wz@...il.com, quan.xu0@...il.com, nilal@...hat.com, Rik van Riel <riel@...hat.com>, peterx@...hat.com
Subject: Re: [PATCH v35 1/5] mm: support to get hints of free page blocks

On Wed, Jul 11, 2018 at 2:21 AM Michal Hocko <mhocko@...nel.org> wrote:
>
> We already have an interface for that. alloc_pages(GFP_NOWAIT, MAX_ORDER -1).
> So why do we need any array based interface?

That was actually my original argument in the original thread - that the
only new interface people might want is one that just tells how many of
those MAX_ORDER-1 pages there are.

See the thread in v33 with the subject

  "[PATCH v33 1/4] mm: add a function to get free page blocks"

and look for me suggesting just using

  #define GFP_MINFLAGS (__GFP_NORETRY | __GFP_NOWARN | __GFP_THISNODE | __GFP_NOMEMALLOC)

  struct page *page = alloc_pages(GFP_MINFLAGS, MAX_ORDER-1);

for this all.

But I could also see an argument for "allocate N pages of size
MAX_ORDER-1", with some small N, simply because I can see the advantage
of not taking and releasing the locks and looking up the zone
individually N times.

If you want to get gigabytes of memory (or terabytes), doing it in
bigger chunks than one single maximum-sized page sounds fairly
reasonable.

I just don't think that "thousands of pages" is reasonable. But "tens of
max-sized pages" sounds fair enough to me, and it would certainly not be
a pain for the VM.

So I'm open to new interfaces. I just want those new interfaces to make
sense, and be low-latency and simple for the VM to do. I'm objecting to
the incredibly baroque and heavy-weight one that can return
near-infinite amounts of memory.

The real advantage of just the existing "alloc_pages()" model is that I
think the ballooning people can use it to *test* things out. If it turns
out that taking and releasing the VM locks is a big cost, we can see if
a batch interface that allows you to get tens of pages at the same time
is worth it.

So yes, I'd suggest starting with just the existing alloc_pages. Maybe
it's not enough, but it should be good enough for testing.

                 Linus
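
[A minimal sketch of the "start with just the existing alloc_pages" approach
described above, on the ballooning side, reusing the GFP_MINFLAGS mask quoted
in the message. The helper name balloon_grab_pages() and the caller-supplied
pages[] array are illustrative assumptions, not an existing kernel interface:]

  #include <linux/gfp.h>
  #include <linux/mm_types.h>

  #define GFP_MINFLAGS (__GFP_NORETRY | __GFP_NOWARN | __GFP_THISNODE | __GFP_NOMEMALLOC)

  /*
   * Illustrative helper: try to grab up to 'nr' MAX_ORDER-1 page blocks by
   * calling the existing allocator once per block, stopping as soon as it
   * comes up empty.  Returns the number of blocks actually obtained.
   */
  static unsigned int balloon_grab_pages(struct page **pages, unsigned int nr)
  {
          unsigned int i;

          for (i = 0; i < nr; i++) {
                  struct page *page = alloc_pages(GFP_MINFLAGS, MAX_ORDER - 1);

                  if (!page)
                          break;  /* no free max-order block right now */
                  pages[i] = page;
          }
          return i;
  }

[Each block would later be handed back with __free_pages(pages[i], MAX_ORDER - 1)
once the host has seen the hint. The per-call lock and zone lookup traffic of
this loop is exactly what a batched "tens of pages at a time" interface, if it
proved worth it, would replace.]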