Message-ID: <202401182318.vEGddOt1-lkp@intel.com>
Date: Thu, 18 Jan 2024 23:16:17 +0800
From: kernel test robot <lkp@...el.com>
To: Alexander Graf <graf@...zon.com>, linux-kernel@...r.kernel.org
Cc: oe-kbuild-all@...ts.linux.dev, linux-trace-kernel@...r.kernel.org,
linux-mm@...ck.org, devicetree@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, kexec@...ts.infradead.org,
linux-doc@...r.kernel.org, x86@...nel.org,
Eric Biederman <ebiederm@...ssion.com>,
"H . Peter Anvin" <hpa@...or.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mark Rutland <mark.rutland@....com>,
Tom Lendacky <thomas.lendacky@....com>,
Ashish Kalra <ashish.kalra@....com>,
James Gowans <jgowans@...zon.com>,
Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>,
arnd@...db.de, pbonzini@...hat.com, madvenka@...ux.microsoft.com,
Anthony Yznaga <anthony.yznaga@...cle.com>,
Usama Arif <usama.arif@...edance.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzk@...nel.org>
Subject: Re: [PATCH v3 13/17] tracing: Recover trace buffers from kexec
handover
Hi Alexander,
kernel test robot noticed the following build warnings:
[auto build test WARNING on linus/master]
[cannot apply to tip/x86/core arm64/for-next/core akpm-mm/mm-everything v6.7 next-20240118]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Alexander-Graf/mm-memblock-Add-support-for-scratch-memory/20240117-225136
base: linus/master
patch link: https://lore.kernel.org/r/20240117144704.602-14-graf%40amazon.com
patch subject: [PATCH v3 13/17] tracing: Recover trace buffers from kexec handover
config: i386-randconfig-061-20240118 (https://download.01.org/0day-ci/archive/20240118/202401182318.vEGddOt1-lkp@intel.com/config)
compiler: ClangBuiltLinux clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240118/202401182318.vEGddOt1-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@...el.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202401182318.vEGddOt1-lkp@intel.com/
sparse warnings: (new ones prefixed by >>)
kernel/trace/ring_buffer.c:1105:32: sparse: sparse: incorrect type in return expression (different base types) @@ expected restricted __poll_t @@ got int @@
kernel/trace/ring_buffer.c:1105:32: sparse: expected restricted __poll_t
kernel/trace/ring_buffer.c:1105:32: sparse: got int
kernel/trace/ring_buffer.c:4955:9: sparse: sparse: context imbalance in 'ring_buffer_peek' - different lock contexts for basic block
kernel/trace/ring_buffer.c:5041:9: sparse: sparse: context imbalance in 'ring_buffer_consume' - different lock contexts for basic block
kernel/trace/ring_buffer.c:5421:17: sparse: sparse: context imbalance in 'ring_buffer_empty' - different lock contexts for basic block
kernel/trace/ring_buffer.c:5451:9: sparse: sparse: context imbalance in 'ring_buffer_empty_cpu' - different lock contexts for basic block
>> kernel/trace/ring_buffer.c:5937:82: sparse: sparse: non size-preserving integer to pointer cast
kernel/trace/ring_buffer.c:5939:84: sparse: sparse: non size-preserving integer to pointer cast
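The two new warnings come from the pointer casts at lines 5937 and 5939
below: rb_page_head is declared as uint64_t, so the usual arithmetic
conversions promote the bitwise expression to 64 bits, and on the 32-bit
i386 config that 64-bit value is then cast down to a 32-bit pointer, which
is what sparse reports as a non size-preserving integer to pointer cast. A
minimal standalone sketch of the same pattern (strip_head_bit and the
one-line struct list_head are only illustrative, assuming an ILP32 target):

	#include <stdint.h>

	struct list_head { struct list_head *next, *prev; };

	static struct list_head *strip_head_bit(struct list_head *p)
	{
		const uint64_t rb_page_head = 1;

		/*
		 * (unsigned long)p is 32 bits on i386, but & ~rb_page_head
		 * promotes it to uint64_t; casting that 64-bit result back
		 * to a pointer truncates it, hence the sparse warning.
		 */
		return (struct list_head *)((unsigned long)p & ~rb_page_head);
	}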
vim +5937 kernel/trace/ring_buffer.c
5896
5897 static int rb_kho_replace_buffers(struct ring_buffer_per_cpu *cpu_buffer,
5898 struct rb_kho_cpu *kho)
5899 {
5900 bool first_loop = true;
5901 struct list_head *tmp;
5902 int err = 0;
5903 int i = 0;
5904
5905 if (!IS_ENABLED(CONFIG_FTRACE_KHO))
5906 return -EINVAL;
5907
5908 if (kho->nr_mems != cpu_buffer->nr_pages * 2)
5909 return -EINVAL;
5910
5911 for (tmp = rb_list_head(cpu_buffer->pages);
5912 tmp != rb_list_head(cpu_buffer->pages) || first_loop;
5913 tmp = rb_list_head(tmp->next), first_loop = false) {
5914 struct buffer_page *bpage = (struct buffer_page *)tmp;
5915 const struct kho_mem *mem_bpage = &kho->mem[i++];
5916 const struct kho_mem *mem_page = &kho->mem[i++];
5917 const uint64_t rb_page_head = 1;
5918 struct buffer_page *old_bpage;
5919 void *old_page;
5920
5921 old_bpage = __va(mem_bpage->addr);
5922 if (!bpage)
5923 goto out;
5924
5925 if ((ulong)old_bpage->list.next & rb_page_head) {
5926 struct list_head *new_lhead;
5927 struct buffer_page *new_head;
5928
5929 new_lhead = rb_list_head(bpage->list.next);
5930 new_head = (struct buffer_page *)new_lhead;
5931
5932 /* Assume the buffer is completely full */
5933 cpu_buffer->tail_page = bpage;
5934 cpu_buffer->commit_page = bpage;
5935 /* Set the head pointers to what they were before */
5936 cpu_buffer->head_page->list.prev->next = (struct list_head *)
> 5937 ((ulong)cpu_buffer->head_page->list.prev->next & ~rb_page_head);
5938 cpu_buffer->head_page = new_head;
5939 bpage->list.next = (struct list_head *)((ulong)new_lhead | rb_page_head);
5940 }
5941
5942 if (rb_page_entries(old_bpage) || rb_page_write(old_bpage)) {
5943 /*
5944 * We want to recycle the pre-kho page, it contains
5945 * trace data. To do so, we unreserve it and swap the
5946 * current data page with the pre-kho one
5947 */
5948 old_page = kho_claim_mem(mem_page);
5949
5950 /* Recycle the old page, it contains data */
5951 free_page((ulong)bpage->page);
5952 bpage->page = old_page;
5953
5954 bpage->write = old_bpage->write;
5955 bpage->entries = old_bpage->entries;
5956 bpage->real_end = old_bpage->real_end;
5957
5958 local_inc(&cpu_buffer->pages_touched);
5959 } else {
5960 kho_return_mem(mem_page);
5961 }
5962
5963 kho_return_mem(mem_bpage);
5964 }
5965
5966 out:
5967 return err;
5968 }
5969
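If the intent is only to avoid the truncating cast, one possible change (a
sketch under that assumption, not a verified fix for the patch) is to keep
the head-bit constant at pointer width, e.g. unsigned long instead of
uint64_t, so the masked and or'ed values never widen past the pointer size
on 32-bit builds (the helper name is again only illustrative):

	struct list_head { struct list_head *next, *prev; };

	static struct list_head *strip_head_bit_fixed(struct list_head *p)
	{
		const unsigned long rb_page_head = 1;

		/* the bitwise result stays pointer-sized, so no truncation */
		return (struct list_head *)((unsigned long)p & ~rb_page_head);
	}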
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki