Message-Id: <20230801020206.1957986-1-zhaotianrui@loongson.cn>
Date:   Tue,  1 Aug 2023 10:02:02 +0800
From:   Tianrui Zhao <zhaotianrui@...ngson.cn>
To:     Shuah Khan <shuah@...nel.org>, Paolo Bonzini <pbonzini@...hat.com>,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     Vishal Annapurve <vannapurve@...gle.com>,
        Huacai Chen <chenhuacai@...nel.org>,
        WANG Xuerui <kernel@...0n.name>, loongarch@...ts.linux.dev,
        Peter Xu <peterx@...hat.com>,
        Vipin Sharma <vipinsh@...gle.com>, maobibo@...ngson.cn,
        zhaotianrui@...ngson.cn
Subject: [PATCH v1 0/4] selftests: kvm: Add LoongArch support

This patch series is based on the Linux LoongArch KVM patch series:
Based-on: <20230720062813.4126751-1-zhaotianrui@...ngson.cn>

We add LoongArch support to the KVM selftests. The following KVM
test cases pass with this series (an example invocation follows the list):
  kvm_create_max_vcpus
  demand_paging_test
  kvm_page_table_test
  set_memory_region_test
  memslot_modification_stress_test
  memslot_perf_test
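
For reference, these tests can be run through the standard kselftest
harness; a typical invocation on a LoongArch host with the prerequisite
KVM series applied would look like the following (the commands are
illustrative, not taken from this posting):

  # Build and run the kvm selftests via the kselftest runner.
  make -C tools/testing/selftests TARGETS=kvm run_tests

  # Or build only the kvm selftests and run a single test binary.
  make -C tools/testing/selftests/kvm
  ./tools/testing/selftests/kvm/kvm_create_max_vcpus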

The test results:
1..6
selftests: kvm: kvm_create_max_vcpus
  KVM_CAP_MAX_VCPU_ID: 256
  KVM_CAP_MAX_VCPUS: 256
  Testing creating 256 vCPUs, with IDs 0...255.
  ok 1 selftests: kvm: kvm_create_max_vcpus

selftests: kvm: demand_paging_test
  Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
  guest physical test memory: [0xfbfffc000, 0xfffffc000)
  Finished creating vCPUs and starting uffd threads
  Started all vCPUs
  All vCPU threads joined
  Total guest execution time: 0.787727423s
  Overall demand paging rate: 83196.291111 pgs/sec
  ok 2 selftests: kvm: demand_paging_test

selftests: kvm: kvm_page_table_test
  Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
  Testing memory backing src type: anonymous
  Testing memory backing src granularity: 0x4000
  Testing memory size(aligned): 0x40000000
  Guest physical test memory offset: 0xfbfffc000
  Host  virtual  test memory offset: 0xffb011c000
  Number of testing vCPUs: 1
  Started all vCPUs successfully
  KVM_CREATE_MAPPINGS: total execution time: -3.-672213074s
  KVM_UPDATE_MAPPINGS: total execution time: -4.-381650744s
  KVM_ADJUST_MAPPINGS: total execution time: -4.-434860241s
  ok 3 selftests: kvm: kvm_page_table_test

selftests: kvm: set_memory_region_test
  Allowed number of memory slots: 256
  Adding slots 0..255, each memory region with 2048K size
  ok 4 selftests: kvm: set_memory_region_test

selftests: kvm: memslot_modification_stress_test
  Testing guest mode: PA-bits:36,  VA-bits:47, 16K pages
  guest physical test memory: [0xfbfffc000, 0xfffffc000)
  Finished creating vCPUs
  Started all vCPUs
  All vCPU threads joined
  ok 5 selftests: kvm: memslot_modification_stress_test

selftests: kvm: memslot_perf_test
  Testing map performance with 1 runs, 5 seconds each
  Test took 0.003797735s for slot setup + 5.012294023s all iterations
  Done 369 iterations, avg 0.013583452s each
  Best runtime result was 0.013583452s per iteration (with 369 iterations)

  Testing unmap performance with 1 runs, 5 seconds each
  Test took 0.003841196s for slot setup + 5.001802893s all iterations
  Done 341 iterations, avg 0.014668043s each
  Best runtime result was 0.014668043s per iteration (with 341 iterations)

  Testing unmap chunked performance with 1 runs, 5 seconds each
  Test took 0.003784356s for slot setup + 5.000265398s all iterations
  Done 7376 iterations, avg 0.000677910s each
  Best runtime result was 0.000677910s per iteration (with 7376 iterations)

  Testing move active area performance with 1 runs, 5 seconds each
  Test took 0.003828075s for slot setup + 5.000021760s all iterations
  Done 85449 iterations, avg 0.000058514s each
  Best runtime result was 0.000058514s per iteration (with 85449 iterations)

  Testing move inactive area performance with 1 runs, 5 seconds each
  Test took 0.003809146s for slot setup + 5.000024149s all iterations
  Done 181908 iterations, avg 0.000027486s each
  Best runtime result was 0.000027486s per iteration (with 181908 iterations)

  Testing RW performance with 1 runs, 5 seconds each
  Test took 0.003780596s for slot setup + 5.001116175s all iterations
  Done 391 iterations, avg 0.012790578s each
  Best runtime result was 0.012790578s per iteration (with 391 iterations)
  Best slot setup time for the whole test area was 0.003780596s
  ok 6 selftests: kvm: memslot_perf_test

changes for v1:
1. Add kvm selftests header files for LoongArch.
2. Add processor tests for LoongArch KVM.
3. Add ucall tests for LoongArch KVM (a sketch of a ucall backend follows this list).
4. Add LoongArch tests into makefile.
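
For item 3 above, here is a minimal sketch of the shape of an MMIO-based
ucall backend as it exists for other architectures in the KVM selftests
(arm64/riscv): the guest stores the address of its ucall structure to a
reserved MMIO page, and the host recovers that address from the resulting
KVM_EXIT_MMIO. The ucall_arch_*() entry points come from the common ucall
framework; the LoongArch-specific details (identity mapping of the
doorbell page, etc.) are assumptions for illustration and are not taken
from this series.

/*
 * Hypothetical lib/loongarch/ucall.c sketch, modeled on the arm64/riscv
 * MMIO-based backends; not the code added by this series.
 */
#include "kvm_util.h"

/* Guest-side pointer to the MMIO doorbell page (only valid in the guest). */
vm_vaddr_t *ucall_exit_mmio_addr;

void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
        /* Identity-map the doorbell page and publish its address to the guest. */
        virt_pg_map(vm, mmio_gpa, mmio_gpa);
        vm->ucall_mmio_addr = mmio_gpa;
        write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gpa);
}

void ucall_arch_do_ucall(vm_vaddr_t uc)
{
        /* A guest store to the doorbell page triggers a KVM_EXIT_MMIO. */
        WRITE_ONCE(*ucall_exit_mmio_addr, uc);
}

void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
{
        struct kvm_run *run = vcpu->run;

        /* Host side: decode the MMIO exit and return the ucall struct address. */
        if (run->exit_reason == KVM_EXIT_MMIO &&
            run->mmio.phys_addr == vcpu->vm->ucall_mmio_addr) {
                TEST_ASSERT(run->mmio.is_write && run->mmio.len == sizeof(uint64_t),
                            "Unexpected ucall exit mmio address access");
                return (void *)(*((uint64_t *)run->mmio.data));
        }

        return NULL;
}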

Tianrui Zhao (4):
  selftests: kvm: Add kvm selftests header files for LoongArch
  selftests: kvm: Add processor tests for LoongArch KVM
  selftests: kvm: Add ucall tests for LoongArch KVM
  selftests: kvm: Add LoongArch tests into makefile

 tools/testing/selftests/kvm/Makefile          |  11 +
 .../selftests/kvm/include/kvm_util_base.h     |   5 +
 .../kvm/include/loongarch/processor.h         |  28 ++
 .../selftests/kvm/include/loongarch/sysreg.h  |  89 +++++
 .../selftests/kvm/lib/loongarch/exception.S   |  27 ++
 .../selftests/kvm/lib/loongarch/processor.c   | 367 ++++++++++++++++++
 .../selftests/kvm/lib/loongarch/ucall.c       |  44 +++
 7 files changed, 571 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/loongarch/processor.h
 create mode 100644 tools/testing/selftests/kvm/include/loongarch/sysreg.h
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/exception.S
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/processor.c
 create mode 100644 tools/testing/selftests/kvm/lib/loongarch/ucall.c

-- 
2.39.1
