Message-ID: <202307171539.3d8d0e8-oliver.sang@intel.com>
Date: Mon, 17 Jul 2023 15:43:46 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Waiman Long <longman@...hat.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>,
<linux-kernel@...r.kernel.org>, <ltp@...ts.linux.it>,
<aubrey.li@...ux.intel.com>, <yu.c.chen@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Juri Lelli" <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
"Valentin Schneider" <vschneid@...hat.com>,
Phil Auld <pauld@...hat.com>,
Brent Rowsell <browsell@...hat.com>,
Peter Hunt <pehunt@...hat.com>,
Waiman Long <longman@...hat.com>, <oliver.sang@...el.com>
Subject: Re: [PATCH] sched/core: Use empty mask to reset cpumasks in
sched_setaffinity()
Hello,
kernel test robot noticed "ltp.sched_setaffinity01.fail" on:
commit: 5ae608d0d3901386665fb64090f93843f4135cc0 ("[PATCH] sched/core: Use empty mask to reset cpumasks in sched_setaffinity()")
url: https://github.com/intel-lab-lkp/linux/commits/Waiman-Long/sched-core-Use-empty-mask-to-reset-cpumasks-in-sched_setaffinity/20230629-052600
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git ebb83d84e49b54369b0db67136a5fe1087124dcc
patch link: https://lore.kernel.org/all/20230628211637.1679348-1-longman@redhat.com/
patch subject: [PATCH] sched/core: Use empty mask to reset cpumasks in sched_setaffinity()
in testcase: ltp
version: ltp-x86_64-14c1f76-1_20230708
with the following parameters:
disk: 1HDD
fs: f2fs
test: syscalls-04/sched_setaffinity01
compiler: gcc-12
test machine: 4 threads, 1 socket, Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz (Ivy Bridge) with 8G memory
(please refer to attached dmesg/kmsg for entire log/backtrace)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@...el.com>
| Closes: https://lore.kernel.org/oe-lkp/202307171539.3d8d0e8-oliver.sang@intel.com
Running tests.......
<<<test_start>>>
tag=sched_setaffinity01 stime=1689382567
cmdline="sched_setaffinity01"
contacts=""
analysis=exit
<<<test_output>>>
tst_test.c:1558: TINFO: Timeout per run is 0h 02m 30s
sched_setaffinity01.c:83: TPASS: sched_setaffinity() failed: EFAULT (14)
sched_setaffinity01.c:73: TFAIL: sched_setaffinity() succeeded unexpectedly
tst_test.c:1612: TINFO: If you are running on slow machine, try exporting LTP_TIMEOUT_MUL > 1
tst_test.c:1614: TBROK: Test killed! (timeout?)
Summary:
passed 1
failed 1
broken 1
skipped 0
warnings 0
incrementing stop
<<<execution_status>>>
initiation_status="ok"
duration=0 termination_type=exited termination_id=3 corefile=no
cutime=0 cstime=1
<<<test_end>>>
INFO: ltp-pan reported some tests FAIL
LTP Version: 20230516-68-g9512c5da4
###############################################################
Done executing testcases.
LTP Version: 20230516-68-g9512c5da4
###############################################################
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# if you come across any failure that blocks the test,
# please remove ~/.lkp and the /lkp dir to run from a clean state.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
View attachment "config-6.4.0-rc1-00051-g5ae608d0d390" of type "text/plain" (161847 bytes)
View attachment "job-script" of type "text/plain" (6336 bytes)
Download attachment "dmesg.xz" of type "application/x-xz" (9028 bytes)
View attachment "ltp" of type "text/plain" (12003 bytes)
View attachment "job.yaml" of type "text/plain" (5100 bytes)
View attachment "reproduce" of type "text/plain" (293 bytes)