Message-ID: <CAJuCfpE9dVK01c-aNT_uwTC=m8RSdEiXsoe6XBR48GjL=ezsmg@mail.gmail.com>
Date:   Wed, 3 May 2023 12:57:09 -0700
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     Yosry Ahmed <yosryahmed@...gle.com>
Cc:     Matthew Wilcox <willy@...radead.org>, akpm@...ux-foundation.org,
        hannes@...xchg.org, mhocko@...e.com, josef@...icpanda.com,
        jack@...e.cz, ldufour@...ux.ibm.com, laurent.dufour@...ibm.com,
        michel@...pinasse.org, liam.howlett@...cle.com, jglisse@...gle.com,
        vbabka@...e.cz, minchan@...gle.com, dave@...olabs.net,
        punit.agrawal@...edance.com, lstoakes@...il.com, hdanton@...a.com,
        apopple@...dia.com, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        kernel-team@...roid.com
Subject: Re: [PATCH 1/3] mm: handle swap page faults under VMA lock if page is uncontended

On Wed, May 3, 2023 at 1:34 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Tue, May 2, 2023 at 4:05 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
> >
> > On Tue, May 2, 2023 at 3:31 PM Matthew Wilcox <willy@...radead.org> wrote:
> > >
> > > On Tue, May 02, 2023 at 09:36:03AM -0700, Suren Baghdasaryan wrote:
> > > > On Tue, May 2, 2023 at 8:03 AM Matthew Wilcox <willy@...radead.org> wrote:
> > > > >
> > > > > On Mon, May 01, 2023 at 10:04:56PM -0700, Suren Baghdasaryan wrote:
> > > > > > On Mon, May 1, 2023 at 8:22 PM Matthew Wilcox <willy@...radead.org> wrote:
> > > > > > >
> > > > > > > On Mon, May 01, 2023 at 07:30:13PM -0700, Suren Baghdasaryan wrote:
> > > > > > > > On Mon, May 1, 2023 at 7:02 PM Matthew Wilcox <willy@...radead.org> wrote:
> > > > > > > > >
> > > > > > > > > On Mon, May 01, 2023 at 10:50:23AM -0700, Suren Baghdasaryan wrote:
> > > > > > > > > > +++ b/mm/memory.c
> > > > > > > > > > @@ -3711,11 +3711,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > > > > > > > > >       if (!pte_unmap_same(vmf))
> > > > > > > > > >               goto out;
> > > > > > > > > >
> > > > > > > > > > -     if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > > > > > > > > > -             ret = VM_FAULT_RETRY;
> > > > > > > > > > -             goto out;
> > > > > > > > > > -     }
> > > > > > > > > > -
> > > > > > > > > >       entry = pte_to_swp_entry(vmf->orig_pte);
> > > > > > > > > >       if (unlikely(non_swap_entry(entry))) {
> > > > > > > > > >               if (is_migration_entry(entry)) {
> > > > > > > > >
> > > > > > > > > You're missing the necessary fallback in the (!folio) case.
> > > > > > > > > swap_readpage() is synchronous and will sleep.
> > > > > > > >
> > > > > > > > True, but is it unsafe to do that under the VMA lock, such that it
> > > > > > > > has to be done under mmap_lock?
> > > > > > >
> > > > > > > ... you were the one arguing that we didn't want to wait for I/O with
> > > > > > > the VMA lock held?
> > > > > >
> > > > > > Well, that discussion was about waiting in folio_lock_or_retry() with
> > > > > > the lock being held. I argued against it because currently we drop
> > > > > > the mmap_lock before waiting, so if we don't drop the VMA lock we
> > > > > > would be changing the current behavior, which might introduce new
> > > > > > regressions. In the case of swap_readpage and swapin_readahead we
> > > > > > already wait with mmap_lock held, so waiting with VMA lock held does
> > > > > > not introduce new problems (unless there is a need to hold mmap_lock).
> > > > > >
> > > > > > That said, you are absolutely correct that this situation can be
> > > > > > improved by dropping the lock in these cases too. I just didn't want
> > > > > > to attack everything at once. I believe after we agree on the approach
> > > > > > implemented in https://lore.kernel.org/all/20230501175025.36233-3-surenb@google.com
> > > > > > for dropping the VMA lock before waiting, these cases can be added
> > > > > > more easily. Does that make sense?
> > > > >
> > > > > OK, I looked at this path some more, and I think we're fine.  This
> > > > > path is only taken for SWP_SYNCHRONOUS_IO, which is only set for
> > > > > QUEUE_FLAG_SYNCHRONOUS devices, which are brd, zram and nvdimms
> > > > > (both btt and pmem).  So the answer is that we don't sleep in this
> > > > > path, and there's no need to drop the lock.
> > > >
> > > > Yes, but swapin_readahead() does sleep, so I'll have to handle that case
> > > > too after this.
> > >
> > > Sleeping is OK; we do that in pXd_alloc()!  Do we block on I/O anywhere
> > > in swapin_readahead()?  It all looks like async I/O to me.
> >
> > Hmm. I thought we had synchronous I/O in the following paths:
> >     swapin_readahead()->swap_cluster_readahead()->swap_readpage()
> >     swapin_readahead()->swap_vma_readahead()->swap_readpage()
> > but just noticed that in both cases swap_readpage() is called with its
> > synchronous parameter set to false. So you are probably right here...
>
> In both swap_cluster_readahead() and swap_vma_readahead() it looks
> like if the readahead window is 1 (aka we are not reading ahead), then
> we jump directly to calling read_swap_cache_async() passing do_poll =
> true, which means we may end up calling swap_readpage() passing
> synchronous = true.
>
> I am not familiar with readahead heuristics, so I am not sure how
> common this is, but it's something to think about.

Uh, you are correct. If this branch is common, we could use the same
"drop the lock and retry" pattern inside read_swap_cache_async(). That
would be quite easy to implement.
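Roughly this shape in the fault path, checked before the synchronous
read is issued (a sketch only; read_swap_cache_async() doesn't see the
vmf today, so the flag would need to be plumbed down or the check
hoisted up into the caller):

	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
		/*
		 * Don't sleep on synchronous swap I/O with the per-VMA
		 * lock held; drop it and let the fault be retried
		 * under mmap_lock.
		 */
		vma_end_read(vmf->vma);
		return VM_FAULT_RETRY;
	}
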
Thanks for checking on it!

>
> > Does that mean swapin_readahead() might return a page which does not
> > have its content swapped-in yet?
> >
