Message-ID: <1703238070775.29652@unisoc.com>
Date: Fri, 22 Dec 2023 09:41:10 +0000
From: 黄朝阳 (Zhaoyang Huang)
<zhaoyang.huang@...soc.com>
To: Zhaoyang Huang <huangzhaoyang@...il.com>, Yu Zhao <yuzhao@...gle.com>
CC: Matthew Wilcox <willy@...radead.org>,
Andrew Morton
<akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
康纪滨 (Steve Kang) <Steve.Kang@...soc.com>
Subject: RE: RE: [RFC PATCH 1/1] mm: mark folio accessed in minor fault
On Fri, Dec 22, 2023 at 2:45 PM Yu Zhao <yuzhao@...gle.com> wrote:
>
> On Thu, Dec 21, 2023 at 11:29 PM 黄朝阳 (Zhaoyang Huang)
> <zhaoyang.huang@...soc.com> wrote:
> >
> >
> > On Thu, Dec 21, 2023 at 10:53 PM Zhaoyang Huang <huangzhaoyang@...il.com> wrote:
> > >
> > > On Thu, Dec 21, 2023 at 2:33 PM Yu Zhao <yuzhao@...gle.com> wrote:
> > > >
> > > > On Wed, Dec 20, 2023 at 11:28 PM Zhaoyang Huang <huangzhaoyang@...il.com> wrote:
> > > > >
> > > > > On Thu, Dec 21, 2023 at 12:53 PM Yu Zhao <yuzhao@...gle.com> wrote:
> > > > > >
> > > > > > On Wed, Dec 20, 2023 at 9:09 PM Matthew Wilcox <willy@...radead.org> wrote:
> > > > > > >
> > > > > > > On Thu, Dec 21, 2023 at 09:58:25AM +0800, Zhaoyang Huang wrote:
> > > > > > > > On Wed, Dec 20, 2023 at 10:14 PM Matthew Wilcox <willy@...radead.org> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Dec 20, 2023 at 06:29:48PM +0800, zhaoyang.huang wrote:
> > > > > > > > > > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > > > > > > > > >
> > > > > > > > > > An inactive mapped folio is promoted to active only when it is
> > > > > > > > > > scanned in shrink_inactive_list, while a VFS folio is promoted
> > > > > > > > > > immediately when it is accessed. This introduces two effects:
> > > > > > > > > >
> > > > > > > > > > 1. NR_ACTIVE_FILE is not as accurate as expected.
> > > > > > > > > > 2. Low reclaim efficiency caused by inactive folios that should
> > > > > > > > > > have been activated earlier, as early as shrink_active_list.
> > > > > > > > > >
> > > > > > > > > > I would like to suggest marking the folio accessed in the minor
> > > > > > > > > > fault path to solve this.
> > > > > > > > >
> > > > > > > > > This isn't going to be as effective as you imagine. Almost all file
> > > > > > > > > faults are handled through filemap_map_pages(). So I must ask, what
> > > > > > > > > testing have you done with this patch?
> > > > > > > > >
> > > > > > > > > And while you're gathering data, what effect would this patch have on your
> > > > > > > > > workloads?
> > > > > > > > Thanks for the heads-up; I am out of date on the readahead mechanism. My goal
> > > > > > >
> > > > > > > It's not a terribly new mechanism ... filemap_map_pages() was added nine
> > > > > > > years ago, in 2014, by commit f1820361f83d.
> > > > > > >
> > > > > > > > is to have mapped file pages behave like other pages, which can be
> > > > > > > > promoted immediately when they are accessed. I will update the patch
> > > > > > > > and provide benchmark data in a new patch set.
> > > > > > >
> > > > > > > Understood. I don't know the history of this, so I'm not sure if the
> > > > > > > decision to not mark folios as accessed here was intentional or not.
> > > > > > > I suspect it's entirely unintentional.
> > > > > >
> > > > > > It's intentional. For the active/inactive LRU, all folios start
> > > > > > inactive. The first scan of a folio transfers the A-bit (if it's set
> > > > > > during the initial fault) to PG_referenced; the second scan of this
> > > > > > folio, if the A-bit is set again, moves it to the active list. This
> > > > > > way single-use folios, i.e., folios mapped for file streaming, can be
> > > > > > reclaimed quickly, since they are "demoted" rather than "promoted" on
> > > > > > the second scan. This RFC would regress memory streaming workloads.
> > > > > Thanks. Please correct me if I am wrong. IMO, there will be no
> > > > > minor faults for single-use folios.
> > > >
> > > > Why not? What prevents a specific *access pattern* from triggering minor faults?
> > > Please find the following chart for the mapped-page state machine
> > > transitions.
> >
> > > I'm not sure what you are asking me to look at -- is the following
> > > trying to illustrate something related to my question above?
> >
> > Sorry for my mistake in the table generation; resending it. I am trying to present how the RFC performs in a page's state transitions.
> >
> > 1. The RFC behaves the same as the mainline in (1) and (2).
> > 2. VM_EXEC mapped pages are activated earlier than in the mainline, which helps improve scan efficiency in (3) and (4).
> > 3. Non-VM_EXEC mapped pages are dropped during the 3rd scan, just as VFS pages are.
> >
> > (1)
> >           1st access  shrink_active_list  1st scan(shrink_folio_list)  2nd scan(shrink_folio_list')
> > mainline  INA/UNR     NA                  INA/REF                      DROP
> > RFC       INA/UNR     NA                  INA/REF                      DROP
>
> > I don't think this is the case -- with this RFC, *readahead* folios,
> > which are added into pagecache as INA/UNR, become PG_referenced upon
> > the initial fault (first access), i.e., INA/REF. The first scan will
> > actually activate them, i.e., they become ACT/UNR, because they have
> > both PG_referenced and the A-bit.
> No. Sorry for the confusion. This RFC actually aims at minor faults on
> already-faulted pages (with at least one PTE set up). In terms of the
> readahead pages, can we solve it by adding one criterion as below, which
> unifies all kinds of mapped pages in the RFC?
>
> @@ -3273,6 +3273,12 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  	 */
>  	folio = filemap_get_folio(mapping, index);
>  	if (likely(!IS_ERR(folio))) {
> +		/*
> +		 * Try to promote an inactive folio here when it is
> +		 * accessed via a minor fault.
> +		 */
> +		if (folio_mapcount(folio))
> +			folio_mark_accessed(folio);
>  		/*
>  		 * We found the page, so try async readahead before waiting for
>  		 * the lock.
>
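For context, the promotion above relies on folio_mark_accessed(). Below is a minimal sketch of its classic active/inactive LRU state machine, paraphrased from mm/swap.c rather than copied verbatim (the MGLRU, unevictable and per-CPU batching paths are omitted, and the function name here is ours); it is the machine behind the INA/UNR, INA/REF and ACT/REF states in the tables that follow.

/*
 * Minimal sketch of folio_mark_accessed() for the classic
 * active/inactive LRU, paraphrased from mm/swap.c; the MGLRU,
 * unevictable and per-CPU batching paths are left out.
 */
static void mark_accessed_sketch(struct folio *folio)
{
	if (!folio_test_referenced(folio)) {
		/* 1st access: INA/UNR -> INA/REF (or ACT/UNR -> ACT/REF) */
		folio_set_referenced(folio);
	} else if (!folio_test_active(folio)) {
		/* 2nd access while inactive: INA/REF -> ACT/UNR */
		folio_activate(folio);
		folio_clear_referenced(folio);
	}
	/* an already active and referenced folio is left as is */
}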
Please find below the state machine tables for the updated RFC, where the RFC either behaves the same as the mainline or improves scan efficiency by promoting the page in shrink_active_list.
(1)
          1st access  shrink_active_list  1st scan(shrink_folio_list)  2nd scan(shrink_folio_list')
mainline  INA/UNR     NA                  INA/REF                      DROP
RFC       INA/UNR     NA                  INA/REF                      DROP
RA        INA/UNR     NA                  INA/REF                      DROP

(2)
          1st access  2nd access  shrink_active_list  1st scan(shrink_folio_list)
mainline  INA/UNR     INA/UNR     NA                  ACT/REF
RFC       INA/UNR     INA/REF     NA                  ACT/REF
RA        INA/UNR     INA/REF     NA                  ACT/REF

(3)
                   1st access  1st scan(shrink_folio_list)  2nd access  2nd scan(shrink_active_list)  3rd scan(shrink_folio_list)
mainline           INA/UNR     INA/REF                      INA/REF     NA                            ACT/REF
RFC (VM_EXEC)      INA/UNR     INA/REF                      ACT/REF     ACT/REF                       NA
RFC (non-VM_EXEC)  INA/UNR     INA/REF                      ACT/REF     INA/REF                       DROP
RA                 INA/UNR     INA/REF                      INA/REF     NA                            ACT/REF

(4)
                   1st access  2nd access  3rd access  1st scan(shrink_active_list)  2nd scan(shrink_folio_list)
mainline           INA/UNR     INA/UNR     INA/UNR     NA                            ACT/REF
RFC (VM_EXEC)      INA/UNR     INA/REF     ACT/REF     ACT/REF                       NA
RFC (non-VM_EXEC)  INA/UNR     INA/REF     ACT/REF     ACT/REF                       NA
RA                 INA/UNR     INA/REF     ACT/REF     ACT/REF                       NA
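For completeness, the scan-side verdicts in the tables above (DROP, keep as INA/REF, promote to ACT/REF) come from the reference check performed during shrink_folio_list(). The following is a minimal sketch paraphrased from folio_check_references() in mm/vmscan.c, not the exact source; locking, mlock, swap-backed and dirty-folio handling are omitted, and the function name is ours.

static enum folio_references check_references_sketch(struct folio *folio)
{
	unsigned long vm_flags;
	/* count the mapping PTEs whose A-bit was set, clearing them */
	int referenced_ptes = folio_referenced(folio, 1, NULL, &vm_flags);
	/* pick up (and clear) PG_referenced left by a prior scan or access */
	int referenced_folio = folio_test_clear_referenced(folio);

	if (referenced_ptes) {
		/* transfer the A-bit to PG_referenced for the next scan */
		folio_set_referenced(folio);
		/* seen referenced twice: promote -- the ACT/REF cells */
		if (referenced_folio || referenced_ptes > 1)
			return FOLIOREF_ACTIVATE;
		/* executable mappings are promoted on the first scan */
		if (vm_flags & VM_EXEC)
			return FOLIOREF_ACTIVATE;
		return FOLIOREF_KEEP;		/* stays INA/REF */
	}
	/* no A-bit since the last scan: reclaimable -- the DROP cells */
	return FOLIOREF_RECLAIM;
}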
> >
> > So it doesn't behave the same way the mainline does for the first case
> > you listed. (I didn't look at the rest of the cases.)