Message-ID: <CAKgT0Ue0j1-EF+X0miM4ZYQgJ7xLX79BwN58rqHcaEBzC6BBcg@mail.gmail.com>
Date:   Mon, 11 Jul 2022 11:23:01 -0700
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Maurizio Lombardi <mlombard@...hat.com>
Cc:     Jakub Kicinski <kuba@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Netdev <netdev@...r.kernel.org>, Chen Lin <chen45464546@....com>
Subject: Re: [PATCH] mm: prevent page_frag_alloc() from corrupting the memory

On Mon, Jul 11, 2022 at 9:17 AM Maurizio Lombardi <mlombard@...hat.com> wrote:
>
> On Mon, 11 Jul 2022 at 17:34, Alexander Duyck
> <alexander.duyck@...il.com> wrote:
> >
> > Rather than forcing us to free the page, it might be better to move the
> > lines getting the size and computing the offset to the top of the "if
> > (unlikely(offset < 0)) {" block. Then, instead of freeing the page, we
> > could just return NULL and not have to change the value of any fields
> > in the page_frag_cache.
> >
> > That way a driver performing bad requests can't force us to start
> > allocating and freeing pages like mad by repeatedly flushing the
> > cache.
> >
>
> I understand. On the other hand, if we free the cache page, then the
> next time __page_frag_cache_refill() runs it may succeed in allocating
> the order=3 cache, and the normal page_frag_alloc() behaviour will
> therefore be restored.
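
For reference, the restructuring suggested above would look roughly like
the fragment below. This is only an untested sketch against the
page_frag_alloc_align() flow in mm/page_alloc.c, with the existing
recycle path elided; it is not an actual patch:

	offset = nc->offset - fragsz;
	if (unlikely(offset < 0)) {
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
		/* if size can vary use size else just use PAGE_SIZE */
		size = nc->size;
#endif
		/* compute the offset a full cache page would give us */
		offset = size - fragsz;

		/*
		 * A request that cannot fit even in a full cache page can
		 * never be satisfied here, so refuse it up front: return
		 * NULL without freeing the page and without touching any
		 * of the page_frag_cache fields.
		 */
		if (unlikely(offset < 0))
			return NULL;

		/*
		 * ... existing recycle path unchanged: drop pagecnt_bias,
		 * handle pfmemalloc, reset the page count and bias ...
		 */
	}

	nc->offset = offset;
	nc->pagecnt_bias--;

	return nc->va + offset;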

That is a big "maybe". My concern is that it will actually make memory
pressure worse by forcing us to get fewer uses out of each lower-order
page. One bad actor could have us flushing the cache like mad, so a
caller expecting only a small fragment may end up allocating fresh 32K
pages because someone else keeps making oversized requests.

I recommend we not optimize for a case this code was not designed for,
and instead optimize for the standard case that most of the drivers are
using. Drivers that are allocating a higher-order page worth of memory
should really be using alloc_pages(). Using this interface to allocate
fragments over 4K in size is just a waste, since such requests are
unlikely to see the page reuse that this code expects.
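
To illustrate the split, the driver-side picture is roughly the
following (a hypothetical helper with invented names, not taken from any
real driver):

	/*
	 * Sub-page buffers come from the frag cache and can benefit from
	 * page reuse; anything larger goes straight to the page allocator.
	 * The caller must free accordingly (page_frag_free() for the frag
	 * cache, __free_pages() for the direct allocation).
	 */
	static void *rx_buf_alloc(struct page_frag_cache *nc, unsigned int len)
	{
		struct page *page;

		if (len <= PAGE_SIZE)
			return page_frag_alloc(nc, len, GFP_ATOMIC);

		page = alloc_pages(GFP_ATOMIC | __GFP_COMP, get_order(len));
		return page ? page_address(page) : NULL;
	}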
