Message-ID: <202103091952.Nlz922lP-lkp@intel.com>
Date: Tue, 9 Mar 2021 19:11:35 +0800
From: kernel test robot <lkp@...el.com>
To: Minchan Kim <minchan@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: kbuild-all@...org,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, joaodias@...gle.com,
surenb@...gle.com, cgoldswo@...eaurora.org, willy@...radead.org,
mhocko@...e.com, david@...hat.com, vbabka@...e.cz
Subject: Re: [PATCH v2 2/2] mm: fs: Invalidate BH LRU during page migration
Hi Minchan,
I love your patch! Yet something to improve:
[auto build test ERROR on linux/master]
[also build test ERROR on linus/master v5.12-rc2 next-20210309]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 144c79ef33536b4ecb4951e07dbc1f2b7fa99d32
config: openrisc-randconfig-r026-20210308 (attached as .config)
compiler: or1k-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/dfca8699b8fb8cf3bed2297e261fca53c0fc523c
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Minchan-Kim/mm-disable-LRU-pagevec-during-the-migration-temporarily/20210309-131826
git checkout dfca8699b8fb8cf3bed2297e261fca53c0fc523c
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=openrisc
If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@...el.com>
All errors (new ones prefixed by >>):
mm/swap.c:745:6: warning: no previous prototype for '__lru_add_drain_all' [-Wmissing-prototypes]
745 | void __lru_add_drain_all(bool force_all_cpus)
| ^~~~~~~~~~~~~~~~~~~
mm/swap.c: In function '__lru_add_drain_all':
>> mm/swap.c:827:7: error: implicit declaration of function 'has_bh_in_lru' [-Werror=implicit-function-declaration]
827 | has_bh_in_lru(cpu, NULL)) {
| ^~~~~~~~~~~~~
cc1: some warnings being treated as errors
vim +/has_bh_in_lru +827 mm/swap.c
744
745 void __lru_add_drain_all(bool force_all_cpus)
746 {
747 /*
748 * lru_drain_gen - Global pages generation number
749 *
750 * (A) Definition: global lru_drain_gen = x implies that all generations
751 * 0 < n <= x are already *scheduled* for draining.
752 *
753 * This is an optimization for the highly-contended use case where a
754 * user space workload keeps constantly generating a flow of pages for
755 * each CPU.
756 */
757 static unsigned int lru_drain_gen;
758 static struct cpumask has_work;
759 static DEFINE_MUTEX(lock);
760 unsigned cpu, this_gen;
761
762 /*
763 * Make sure nobody triggers this path before mm_percpu_wq is fully
764 * initialized.
765 */
766 if (WARN_ON(!mm_percpu_wq))
767 return;
768
769 /*
770 * Guarantee pagevec counter stores visible by this CPU are visible to
771 * other CPUs before loading the current drain generation.
772 */
773 smp_mb();
774
775 /*
776 * (B) Locally cache global LRU draining generation number
777 *
778 * The read barrier ensures that the counter is loaded before the mutex
779 * is taken. It pairs with smp_mb() inside the mutex critical section
780 * at (D).
781 */
782 this_gen = smp_load_acquire(&lru_drain_gen);
783
784 mutex_lock(&lock);
785
786 /*
787 * (C) Exit the draining operation if a newer generation, from another
788 * lru_add_drain_all(), was already scheduled for draining. Check (A).
789 */
790 if (unlikely(this_gen != lru_drain_gen && !force_all_cpus))
791 goto done;
792
793 /*
794 * (D) Increment global generation number
795 *
796 * Pairs with smp_load_acquire() at (B), outside of the critical
797 * section. Use a full memory barrier to guarantee that the new global
798 * drain generation number is stored before loading pagevec counters.
799 *
800 * This pairing must be done here, before the for_each_online_cpu loop
801 * below which drains the page vectors.
802 *
 803	 * Let x, y, and z represent some system CPU numbers, where x < y < z.
 804	 * Assume CPU #z is in the middle of the for_each_online_cpu loop
 805	 * below and has already reached CPU #y's per-cpu data. CPU #x comes
 806	 * along, adds some pages to its per-cpu vectors, then calls
 807	 * lru_add_drain_all().
808 *
809 * If the paired barrier is done at any later step, e.g. after the
810 * loop, CPU #x will just exit at (C) and miss flushing out all of its
811 * added pages.
812 */
813 WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
814 smp_mb();
815
816 cpumask_clear(&has_work);
817 for_each_online_cpu(cpu) {
818 struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
819
820 if (force_all_cpus ||
821 pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
822 data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
823 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
824 pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
825 pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
826 need_activate_page_drain(cpu) ||
> 827 has_bh_in_lru(cpu, NULL)) {
828 INIT_WORK(work, lru_add_drain_per_cpu);
829 queue_work_on(cpu, mm_percpu_wq, work);
830 __cpumask_set_cpu(cpu, &has_work);
831 }
832 }
833
834 for_each_cpu(cpu, &has_work)
835 flush_work(&per_cpu(lru_add_drain_work, cpu));
836
837 done:
838 mutex_unlock(&lock);
839 }
840
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org