Message-ID: <2024120912-outshine-unstirred-b349@gregkh>
Date: Mon, 9 Dec 2024 11:48:12 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org,
torvalds@...ux-foundation.org,
stable@...r.kernel.org
Cc: lwn@....net,
jslaby@...e.cz,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: Linux 6.6.64
diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
index 36c3cb547901..33675e718a37 100644
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -311,10 +311,13 @@ Description: Do background GC aggressively when set. Set to 0 by default.
GC approach and turns SSR mode on.
gc urgent low(2): lowers the bar of checking I/O idling in
order to process outstanding discard commands and GC a
- little bit aggressively. uses cost benefit GC approach.
+ little bit aggressively. always uses cost benefit GC approach,
+ and will override age-threshold GC approach if ATGC is enabled
+ at the same time.
gc urgent mid(3): does GC forcibly in a period of given
gc_urgent_sleep_time and executes a mid level of I/O idling check.
- uses cost benefit GC approach.
+ always uses cost benefit GC approach, and will override
+ age-threshold GC approach if ATGC is enabled at the same time.
What: /sys/fs/f2fs/<disk>/gc_urgent_sleep_time
Date: August 2017
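For context (illustration only, not part of the patch): gc_urgent is a plain sysfs attribute, so the "gc urgent low" mode documented above can be selected from user space with a simple open()/write() pair. "sdX" below is a placeholder for whatever device backs the f2fs mount.

#include <fcntl.h>
#include <unistd.h>

/*
 * Illustration only, not part of this patch: select the "gc urgent low"
 * mode documented above from user space. "sdX" is a placeholder for the
 * block device backing the f2fs mount.
 */
int f2fs_set_gc_urgent_low(void)
{
	int fd = open("/sys/fs/f2fs/sdX/gc_urgent", O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, "2", 1) != 1) {	/* 2 == gc urgent low */
		close(fd);
		return -1;
	}
	return close(fd);
}
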
diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst
index ca7b7cd806a1..30080ff6f406 100644
--- a/Documentation/RCU/stallwarn.rst
+++ b/Documentation/RCU/stallwarn.rst
@@ -249,7 +249,7 @@ ticks this GP)" indicates that this CPU has not taken any scheduling-clock
interrupts during the current stalled grace period.
The "idle=" portion of the message prints the dyntick-idle state.
-The hex number before the first "/" is the low-order 12 bits of the
+The hex number before the first "/" is the low-order 16 bits of the
dynticks counter, which will have an even-numbered value if the CPU
is in dyntick-idle mode and an odd-numbered value otherwise. The hex
number between the two "/"s is the value of the nesting, which will be
diff --git a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
index 5e942bccf277..2b2041818a0a 100644
--- a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
+++ b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
@@ -26,9 +26,21 @@ properties:
description:
Specifies the reference clock(s) from which the output frequency is
derived. This must either reference one clock if only the first clock
- input is connected or two if both clock inputs are connected.
- minItems: 1
- maxItems: 2
+ input is connected or two if both clock inputs are connected. The last
+ clock is the AXI bus clock that needs to be enabled so we can access the
+ core registers.
+ minItems: 2
+ maxItems: 3
+
+ clock-names:
+ oneOf:
+ - items:
+ - const: clkin1
+ - const: s_axi_aclk
+ - items:
+ - const: clkin1
+ - const: clkin2
+ - const: s_axi_aclk
'#clock-cells':
const: 0
@@ -40,6 +52,7 @@ required:
- compatible
- reg
- clocks
+ - clock-names
- '#clock-cells'
additionalProperties: false
@@ -50,5 +63,6 @@ examples:
compatible = "adi,axi-clkgen-2.00.a";
#clock-cells = <0>;
reg = <0xff000000 0x1000>;
- clocks = <&osc 1>;
+ clocks = <&osc 1>, <&clkc 15>;
+ clock-names = "clkin1", "s_axi_aclk";
};
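The newly required clock-names entry exists because the last clock listed is the AXI register-bus clock, which a consumer has to look up by name and keep enabled before the core's registers can be touched. A minimal consumer-style sketch follows; it is not taken from the axi-clkgen driver itself, and the function name and device pointer are purely illustrative.

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/*
 * Sketch only -- not the axi-clkgen driver's actual code. It shows the
 * usual consumer pattern implied by the binding above: look up the named
 * AXI register-bus clock and keep it enabled while touching the core.
 */
static int example_enable_axi_bus_clock(struct device *dev)
{
	struct clk *axi_clk;
	int ret;

	axi_clk = devm_clk_get(dev, "s_axi_aclk");
	if (IS_ERR(axi_clk))
		return PTR_ERR(axi_clk);

	ret = clk_prepare_enable(axi_clk);
	if (ret)
		return ret;

	/* Register access is now safe; disable the clock on remove/error. */
	return 0;
}
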
diff --git a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
index 96340a05754c..2beed2e2406c 100644
--- a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
+++ b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
@@ -26,7 +26,7 @@ properties:
maxItems: 1
spi-max-frequency:
- maximum: 30000000
+ maximum: 66000000
reset-gpios:
maxItems: 1
diff --git a/Documentation/devicetree/bindings/serial/rs485.yaml b/Documentation/devicetree/bindings/serial/rs485.yaml
index 9418fd66a8e9..b93254ad2a28 100644
--- a/Documentation/devicetree/bindings/serial/rs485.yaml
+++ b/Documentation/devicetree/bindings/serial/rs485.yaml
@@ -18,16 +18,15 @@ properties:
description: prop-encoded-array <a b>
$ref: /schemas/types.yaml#/definitions/uint32-array
items:
- items:
- - description: Delay between rts signal and beginning of data sent in
- milliseconds. It corresponds to the delay before sending data.
- default: 0
- maximum: 100
- - description: Delay between end of data sent and rts signal in milliseconds.
- It corresponds to the delay after sending data and actual release
- of the line.
- default: 0
- maximum: 100
+ - description: Delay between rts signal and beginning of data sent in
+ milliseconds. It corresponds to the delay before sending data.
+ default: 0
+ maximum: 100
+ - description: Delay between end of data sent and rts signal in milliseconds.
+ It corresponds to the delay after sending data and actual release
+ of the line.
+ default: 0
+ maximum: 100
rs485-rts-active-high:
description: drive RTS high when sending (this is the default).
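The before/after delays being re-indented here have a user-space counterpart: the same two values, in milliseconds, can be programmed through the TIOCSRS485 ioctl. A minimal sketch, assuming a generic RS-485 capable tty (illustration only, not part of this patch):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

/*
 * Illustration only, not part of this patch: program the same before/after
 * RTS delays (in milliseconds) from user space via TIOCSRS485. The tty
 * path is whatever the RS-485 port enumerates as.
 */
int rs485_enable(const char *tty_path)
{
	struct serial_rs485 rs485;
	int fd = open(tty_path, O_RDWR | O_NOCTTY);

	if (fd < 0)
		return -1;

	memset(&rs485, 0, sizeof(rs485));
	rs485.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
	rs485.delay_rts_before_send = 1;	/* delay before sending data */
	rs485.delay_rts_after_send = 1;		/* delay after data, before release */

	if (ioctl(fd, TIOCSRS485, &rs485) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}
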
diff --git a/Documentation/devicetree/bindings/sound/mt6359.yaml b/Documentation/devicetree/bindings/sound/mt6359.yaml
index 23d411fc4200..128698630c86 100644
--- a/Documentation/devicetree/bindings/sound/mt6359.yaml
+++ b/Documentation/devicetree/bindings/sound/mt6359.yaml
@@ -23,8 +23,8 @@ properties:
Indicates how many data pins are used to transmit two channels of PDM
signal. 0 means two wires, 1 means one wire. Default value is 0.
enum:
- - 0 # one wire
- - 1 # two wires
+ - 0 # two wires
+ - 1 # one wire
mediatek,mic-type-0:
$ref: /schemas/types.yaml#/definitions/uint32
@@ -53,9 +53,9 @@ additionalProperties: false
examples:
- |
- mt6359codec: mt6359codec {
- mediatek,dmic-mode = <0>;
- mediatek,mic-type-0 = <2>;
+ mt6359codec: audio-codec {
+ mediatek,dmic-mode = <0>;
+ mediatek,mic-type-0 = <2>;
};
...
diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
index 573578db9509..12a16031d7b6 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
+++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
@@ -923,6 +923,8 @@ patternProperties:
description: National Semiconductor
"^nec,.*":
description: NEC LCD Technologies, Ltd.
+ "^neofidelity,.*":
+ description: Neofidelity Inc.
"^neonode,.*":
description: Neonode Inc.
"^netgear,.*":
diff --git a/Documentation/filesystems/mount_api.rst b/Documentation/filesystems/mount_api.rst
index 9aaf6ef75eb5..0c69aa574ab9 100644
--- a/Documentation/filesystems/mount_api.rst
+++ b/Documentation/filesystems/mount_api.rst
@@ -766,7 +766,8 @@ process the parameters it is given.
* ::
- bool fs_validate_description(const struct fs_parameter_description *desc);
+ bool fs_validate_description(const char *name,
+ const struct fs_parameter_description *desc);
This performs some validation checks on a parameter description. It
returns true if the description is good and false if it is not. It will
diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst
index bfda1a5fecad..ec6411d02ac8 100644
--- a/Documentation/locking/seqlock.rst
+++ b/Documentation/locking/seqlock.rst
@@ -153,7 +153,7 @@ Use seqcount_latch_t when the write side sections cannot be protected
from interruption by readers. This is typically the case when the read
side can be invoked from NMI handlers.
-Check `raw_write_seqcount_latch()` for more information.
+Check `write_seqcount_latch()` for more information.
.. _seqlock_t:
diff --git a/Documentation/networking/j1939.rst b/Documentation/networking/j1939.rst
index e4bd7aa1f5aa..544bad175aae 100644
--- a/Documentation/networking/j1939.rst
+++ b/Documentation/networking/j1939.rst
@@ -121,7 +121,7 @@ format, the Group Extension is set in the PS-field.
On the other hand, when using PDU1 format, the PS-field contains a so-called
Destination Address, which is _not_ part of the PGN. When communicating a PGN
-from user space to kernel (or vice versa) and PDU2 format is used, the PS-field
+from user space to kernel (or vice versa) and PDU1 format is used, the PS-field
of the PGN shall be set to zero. The Destination Address shall be set
elsewhere.
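To make the corrected sentence concrete (illustration only, not part of this patch): with a PDU1-format PGN the PS byte inside the PGN stays zero and the destination address is carried in the peer sockaddr instead. The interface name "can0" and the addresses below are assumptions for the sketch.

#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/j1939.h>

/*
 * Illustration only, not part of this patch: send with a PDU1-format PGN.
 * The PS field inside the PGN is left zero; the destination address is
 * supplied in the peer sockaddr, as the corrected text describes.
 */
int j1939_send_pdu1(const void *data, size_t len)
{
	struct sockaddr_can self, peer;
	int sock = socket(PF_CAN, SOCK_DGRAM, CAN_J1939);

	if (sock < 0)
		return -1;

	memset(&self, 0, sizeof(self));
	self.can_family = AF_CAN;
	self.can_ifindex = if_nametoindex("can0");
	self.can_addr.j1939.name = J1939_NO_NAME;
	self.can_addr.j1939.pgn = J1939_NO_PGN;
	self.can_addr.j1939.addr = 0x20;		/* our source address */
	if (bind(sock, (struct sockaddr *)&self, sizeof(self)) < 0)
		goto err;

	memset(&peer, 0, sizeof(peer));
	peer.can_family = AF_CAN;
	peer.can_ifindex = self.can_ifindex;
	peer.can_addr.j1939.name = J1939_NO_NAME;
	peer.can_addr.j1939.pgn = 0x12300;		/* PDU1 PGN, PS byte left zero */
	peer.can_addr.j1939.addr = 0x30;		/* destination address goes here */
	if (sendto(sock, data, len, 0,
		   (struct sockaddr *)&peer, sizeof(peer)) < 0)
		goto err;

	return close(sock);
err:
	close(sock);
	return -1;
}
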
diff --git a/Makefile b/Makefile
index 611d7de2e3a2..74f3867461a0 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 6
-SUBLEVEL = 63
+SUBLEVEL = 64
EXTRAVERSION =
NAME = Pinguïn Aangedreven
diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c
index 4c9e61457b2f..cc6ac7d128aa 100644
--- a/arch/arc/kernel/devtree.c
+++ b/arch/arc/kernel/devtree.c
@@ -62,7 +62,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt)
const struct machine_desc *mdesc;
unsigned long dt_root;
- if (!early_init_dt_scan(dt))
+ if (!early_init_dt_scan(dt, __pa(dt)))
return NULL;
mdesc = of_flat_dt_match_machine(NULL, arch_get_next_mach);
diff --git a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
index c8ca8cb7f5c9..52ad95a2063a 100644
--- a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
+++ b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
@@ -280,8 +280,8 @@ reg_dcdc4: dcdc4 {
reg_dcdc5: dcdc5 {
regulator-always-on;
- regulator-min-microvolt = <1425000>;
- regulator-max-microvolt = <1575000>;
+ regulator-min-microvolt = <1450000>;
+ regulator-max-microvolt = <1550000>;
regulator-name = "vcc-dram";
};
diff --git a/arch/arm/boot/dts/microchip/sam9x60.dtsi b/arch/arm/boot/dts/microchip/sam9x60.dtsi
index 1705c96f4221..ae089d4bd660 100644
--- a/arch/arm/boot/dts/microchip/sam9x60.dtsi
+++ b/arch/arm/boot/dts/microchip/sam9x60.dtsi
@@ -186,6 +186,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 13>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -384,6 +385,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 32>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -433,6 +435,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 33>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -590,6 +593,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 9>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -639,6 +643,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 10>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -688,6 +693,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 11>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -737,6 +743,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 5>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -805,6 +812,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 6>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -873,6 +881,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 7>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -941,6 +950,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 8>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -1064,6 +1074,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 15>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
@@ -1113,6 +1124,7 @@ AT91_XDMAC_DT_PER_IF(1) |
dma-names = "tx", "rx";
clocks = <&pmc PMC_TYPE_PERIPHERAL 16>;
clock-names = "usart";
+ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
atmel,use-dma-rx;
atmel,use-dma-tx;
atmel,fifo-size = <16>;
diff --git a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
index e6d8070c1bf8..cba8ba51657b 100644
--- a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
+++ b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
@@ -72,6 +72,7 @@ opp-1000000000 {
<1375000 1375000 1375000>;
/* only on am/dm37x with speed-binned bit set */
opp-supported-hw = <0xffffffff 2>;
+ turbo-mode;
};
};
diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c
index 264827281113..abf13b21ba76 100644
--- a/arch/arm/kernel/devtree.c
+++ b/arch/arm/kernel/devtree.c
@@ -201,7 +201,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
mdesc_best = &__mach_desc_GENERIC_DT;
- if (!dt_virt || !early_init_dt_verify(dt_virt))
+ if (!dt_virt || !early_init_dt_verify(dt_virt, __pa(dt_virt)))
return NULL;
mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach);
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 6150a716828c..0384fbbdc28d 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -25,6 +25,7 @@
#include <asm/tls.h>
#include <asm/system_info.h>
#include <asm/uaccess-asm.h>
+#include <asm/kasan_def.h>
#include "entry-header.S"
#include <asm/probes.h>
@@ -555,6 +556,13 @@ ENTRY(__switch_to)
@ entries covering the vmalloc region.
@
ldr r2, [ip]
+#ifdef CONFIG_KASAN_VMALLOC
+ @ Also dummy read from the KASAN shadow memory for the new stack if we
+ @ are using KASAN
+ mov_l r2, KASAN_SHADOW_OFFSET
+ add r2, r2, ip, lsr #KASAN_SHADOW_SCALE_SHIFT
+ ldr r2, [r2]
+#endif
#endif
@ When CONFIG_THREAD_INFO_IN_TASK=n, the update of SP itself is what
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 28873cda464f..f22c50d4bd41 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -411,7 +411,11 @@ ENTRY(secondary_startup)
/*
* Use the page tables supplied from __cpu_up.
*/
+#ifdef CONFIG_XIP_KERNEL
+ ldr r3, =(secondary_data + PLAT_PHYS_OFFSET - PAGE_OFFSET)
+#else
adr_l r3, secondary_data
+#endif
mov_l r12, __secondary_switched
ldrd r4, r5, [r3, #0] @ get secondary_data.pgdir
ARM_BE8(eor r4, r4, r5) @ Swap r5 and r4 in BE:
diff --git a/arch/arm/kernel/psci_smp.c b/arch/arm/kernel/psci_smp.c
index d4392e177484..3bb0c4dcfc5c 100644
--- a/arch/arm/kernel/psci_smp.c
+++ b/arch/arm/kernel/psci_smp.c
@@ -45,8 +45,15 @@ extern void secondary_startup(void);
static int psci_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
if (psci_ops.cpu_on)
+#ifdef CONFIG_XIP_KERNEL
+ return psci_ops.cpu_on(cpu_logical_map(cpu),
+ ((phys_addr_t)(&secondary_startup)
+ - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR)
+ + CONFIG_XIP_PHYS_ADDR));
+#else
return psci_ops.cpu_on(cpu_logical_map(cpu),
virt_to_idmap(&secondary_startup));
+#endif
return -ENODEV;
}
diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c
index 448e57c6f653..4a833e89782a 100644
--- a/arch/arm/mm/idmap.c
+++ b/arch/arm/mm/idmap.c
@@ -84,8 +84,15 @@ static void identity_mapping_add(pgd_t *pgd, const char *text_start,
unsigned long addr, end;
unsigned long next;
+#ifdef CONFIG_XIP_KERNEL
+ addr = (phys_addr_t)(text_start) - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR)
+ + CONFIG_XIP_PHYS_ADDR;
+ end = (phys_addr_t)(text_end) - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR)
+ + CONFIG_XIP_PHYS_ADDR;
+#else
addr = virt_to_idmap(text_start);
end = virt_to_idmap(text_end);
+#endif
pr_info("Setting up static identity map for 0x%lx - 0x%lx\n", addr, end);
prot |= PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AF;
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 2129070065c3..1c5aeba9bc27 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -23,6 +23,7 @@
*/
#include <linux/module.h>
#include <linux/errno.h>
+#include <linux/kasan.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/io.h>
@@ -115,16 +116,40 @@ int ioremap_page(unsigned long virt, unsigned long phys,
}
EXPORT_SYMBOL(ioremap_page);
+#ifdef CONFIG_KASAN
+static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
+{
+ return (unsigned long)kasan_mem_to_shadow((void *)addr);
+}
+#else
+static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
+{
+ return 0;
+}
+#endif
+
+static void memcpy_pgd(struct mm_struct *mm, unsigned long start,
+ unsigned long end)
+{
+ end = ALIGN(end, PGDIR_SIZE);
+ memcpy(pgd_offset(mm, start), pgd_offset_k(start),
+ sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
+}
+
void __check_vmalloc_seq(struct mm_struct *mm)
{
int seq;
do {
- seq = atomic_read(&init_mm.context.vmalloc_seq);
- memcpy(pgd_offset(mm, VMALLOC_START),
- pgd_offset_k(VMALLOC_START),
- sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
- pgd_index(VMALLOC_START)));
+ seq = atomic_read_acquire(&init_mm.context.vmalloc_seq);
+ memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
+ if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+ unsigned long start =
+ arm_kasan_mem_to_shadow(VMALLOC_START);
+ unsigned long end =
+ arm_kasan_mem_to_shadow(VMALLOC_END);
+ memcpy_pgd(mm, start, end);
+ }
/*
* Use a store-release so that other CPUs that observe the
* counter's new value are guaranteed to see the results of the
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
index 87847116ab6d..b0885a389951 100644
--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
@@ -202,6 +202,9 @@ accelerometer@68 {
interrupts = <7 5 IRQ_TYPE_EDGE_RISING>; /* PH5 */
vdd-supply = <®_dldo1>;
vddio-supply = <®_dldo1>;
+ mount-matrix = "0", "1", "0",
+ "-1", "0", "0",
+ "0", "0", "1";
};
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
index 14d20a33af8e..6c48fa4b0d0c 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
@@ -145,7 +145,7 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
regulator-max-microvolt = <3300000>;
regulator-min-microvolt = <3300000>;
regulator-name = "+V3.3_SD";
- startup-delay-us = <2000>;
+ startup-delay-us = <20000>;
};
reserved-memory {
diff --git a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
index e9e4fcb562f1..b9902adbfe62 100644
--- a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
@@ -134,7 +134,7 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
regulator-max-microvolt = <3300000>;
regulator-min-microvolt = <3300000>;
regulator-name = "+V3.3_SD";
- startup-delay-us = <2000>;
+ startup-delay-us = <20000>;
};
reserved-memory {
diff --git a/arch/arm64/boot/dts/mediatek/mt6357.dtsi b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
index 3330a03c2f74..5fafa842d312 100644
--- a/arch/arm64/boot/dts/mediatek/mt6357.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt6357.dtsi
@@ -10,6 +10,11 @@ &pwrap {
mt6357_pmic: pmic {
compatible = "mediatek,mt6357";
+ pmic_adc: adc {
+ compatible = "mediatek,mt6357-auxadc";
+ #io-channel-cells = <1>;
+ };
+
regulators {
mt6357_vproc_reg: buck-vproc {
regulator-name = "vproc";
diff --git a/arch/arm64/boot/dts/mediatek/mt6358.dtsi b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
index b605313bed99..9a549069a483 100644
--- a/arch/arm64/boot/dts/mediatek/mt6358.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
@@ -12,12 +12,17 @@ pmic: pmic {
interrupts = <182 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <2>;
- mt6358codec: mt6358codec {
+ pmic_adc: adc {
+ compatible = "mediatek,mt6358-auxadc";
+ #io-channel-cells = <1>;
+ };
+
+ mt6358codec: audio-codec {
compatible = "mediatek,mt6358-sound";
mediatek,dmic-mode = <0>; /* two-wires */
};
- mt6358regulator: mt6358regulator {
+ mt6358regulator: regulators {
compatible = "mediatek,mt6358-regulator";
mt6358_vdram1_reg: buck_vdram1 {
diff --git a/arch/arm64/boot/dts/mediatek/mt6359.dtsi b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
index df3e822232d3..8e1b8c85c6ed 100644
--- a/arch/arm64/boot/dts/mediatek/mt6359.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt6359.dtsi
@@ -9,6 +9,11 @@ pmic: pmic {
interrupt-controller;
#interrupt-cells = <2>;
+ pmic_adc: adc {
+ compatible = "mediatek,mt6359-auxadc";
+ #io-channel-cells = <1>;
+ };
+
mt6359codec: mt6359codec {
};
diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
index bdcd35cecad9..fd6230352f4f 100644
--- a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
@@ -43,6 +43,14 @@ trackpad2: trackpad@2c {
interrupts = <117 IRQ_TYPE_LEVEL_LOW>;
reg = <0x2c>;
hid-descr-addr = <0x0020>;
+ /*
+ * The trackpad needs a post-power-on delay of 100ms,
+ * but at time of writing, the power supply for it on
+ * this board is always on. The delay is therefore not
+ * added to avoid impacting the readiness of the
+ * trackpad.
+ */
+ vdd-supply = <&mt6397_vgp6_reg>;
wakeup-source;
};
};
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
index 19c1e2bee494..20b71f2e7159 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
@@ -30,3 +30,6 @@ touchscreen@2c {
};
};
+&i2c2 {
+ i2c-scl-internal-delay-ns = <4100>;
+};
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
index 072133fb0f01..47905f84bc16 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
@@ -17,6 +17,8 @@ &i2c_tunnel {
};
&i2c2 {
+ i2c-scl-internal-delay-ns = <25000>;
+
trackpad@2c {
compatible = "hid-over-i2c";
reg = <0x2c>;
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
index 552bfc726999..9a166dccd727 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
@@ -31,3 +31,6 @@ &qca_wifi {
qcom,ath10k-calibration-variant = "GO_DAMU";
};
+&i2c2 {
+ i2c-scl-internal-delay-ns = <20000>;
+};
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
index bbe6c338f465..f9c1ec366b26 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
@@ -25,3 +25,6 @@ trackpad@2c {
};
};
+&i2c2 {
+ i2c-scl-internal-delay-ns = <21500>;
+};
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
index 32f6899f885e..629c4b7ecbc6 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
@@ -8,28 +8,32 @@
#include <arm/cros-ec-keyboard.dtsi>
/ {
- pp1200_mipibrdg: pp1200-mipibrdg {
+ pp1000_mipibrdg: pp1000-mipibrdg {
compatible = "regulator-fixed";
- regulator-name = "pp1200_mipibrdg";
+ regulator-name = "pp1000_mipibrdg";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
pinctrl-names = "default";
- pinctrl-0 = <&pp1200_mipibrdg_en>;
+ pinctrl-0 = <&pp1000_mipibrdg_en>;
enable-active-high;
regulator-boot-on;
gpio = <&pio 54 GPIO_ACTIVE_HIGH>;
+ vin-supply = <&pp1800_alw>;
};
pp1800_mipibrdg: pp1800-mipibrdg {
compatible = "regulator-fixed";
regulator-name = "pp1800_mipibrdg";
pinctrl-names = "default";
- pinctrl-0 = <&pp1800_lcd_en>;
+ pinctrl-0 = <&pp1800_mipibrdg_en>;
enable-active-high;
regulator-boot-on;
gpio = <&pio 36 GPIO_ACTIVE_HIGH>;
+ vin-supply = <&pp1800_alw>;
};
pp3300_panel: pp3300-panel {
@@ -44,18 +48,20 @@ pp3300_panel: pp3300-panel {
regulator-boot-on;
gpio = <&pio 35 GPIO_ACTIVE_HIGH>;
+ vin-supply = <&pp3300_alw>;
};
- vddio_mipibrdg: vddio-mipibrdg {
+ pp3300_mipibrdg: pp3300-mipibrdg {
compatible = "regulator-fixed";
- regulator-name = "vddio_mipibrdg";
+ regulator-name = "pp3300_mipibrdg";
pinctrl-names = "default";
- pinctrl-0 = <&vddio_mipibrdg_en>;
+ pinctrl-0 = <&pp3300_mipibrdg_en>;
enable-active-high;
regulator-boot-on;
gpio = <&pio 37 GPIO_ACTIVE_HIGH>;
+ vin-supply = <&pp3300_alw>;
};
volume_buttons: volume-buttons {
@@ -152,9 +158,9 @@ anx_bridge: anx7625@58 {
panel_flags = <1>;
enable-gpios = <&pio 45 GPIO_ACTIVE_HIGH>;
reset-gpios = <&pio 73 GPIO_ACTIVE_HIGH>;
- vdd10-supply = <&pp1200_mipibrdg>;
+ vdd10-supply = <&pp1000_mipibrdg>;
vdd18-supply = <&pp1800_mipibrdg>;
- vdd33-supply = <&vddio_mipibrdg>;
+ vdd33-supply = <&pp3300_mipibrdg>;
ports {
#address-cells = <1>;
@@ -397,14 +403,14 @@ &pio {
"",
"";
- pp1200_mipibrdg_en: pp1200-mipibrdg-en {
+ pp1000_mipibrdg_en: pp1000-mipibrdg-en {
pins1 {
pinmux = <PINMUX_GPIO54__FUNC_GPIO54>;
output-low;
};
};
- pp1800_lcd_en: pp1800-lcd-en {
+ pp1800_mipibrdg_en: pp1800-mipibrdg-en {
pins1 {
pinmux = <PINMUX_GPIO36__FUNC_GPIO36>;
output-low;
@@ -466,7 +472,7 @@ trackpad-int {
};
};
- vddio_mipibrdg_en: vddio-mipibrdg-en {
+ pp3300_mipibrdg_en: pp3300-mipibrdg-en {
pins1 {
pinmux = <PINMUX_GPIO37__FUNC_GPIO37>;
output-low;
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
index 0d3c7b8162ff..9eca1c80fe01 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
@@ -105,9 +105,9 @@ &i2c4 {
clock-frequency = <400000>;
vbus-supply = <&mt6358_vcn18_reg>;
- eeprom@54 {
+ eeprom@50 {
compatible = "atmel,24c32";
- reg = <0x54>;
+ reg = <0x50>;
pagesize = <32>;
vcc-supply = <&mt6358_vcn18_reg>;
};
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
index e73113cb51f5..29216ebe4de8 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
@@ -80,9 +80,9 @@ &i2c4 {
clock-frequency = <400000>;
vbus-supply = <&mt6358_vcn18_reg>;
- eeprom@54 {
+ eeprom@50 {
compatible = "atmel,24c64";
- reg = <0x54>;
+ reg = <0x50>;
pagesize = <32>;
vcc-supply = <&mt6358_vcn18_reg>;
};
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
index 181da69d18f4..b0469a95ddc4 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
@@ -89,9 +89,9 @@ &i2c4 {
clock-frequency = <400000>;
vbus-supply = <&mt6358_vcn18_reg>;
- eeprom@54 {
+ eeprom@50 {
compatible = "atmel,24c32";
- reg = <0x54>;
+ reg = <0x50>;
pagesize = <32>;
vcc-supply = <&mt6358_vcn18_reg>;
};
diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
index 34e18eb5d7f4..b21663b46b51 100644
--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
@@ -1296,6 +1296,7 @@ &xhci1 {
vusb33-supply = <&mt6359_vusb_ldo_reg>;
vbus-supply = <&usb_vbus>;
+ mediatek,u3p-dis-msk = <1>;
};
&xhci2 {
@@ -1312,7 +1313,6 @@ &xhci3 {
usb2-lpm-disable;
vusb33-supply = <&mt6359_vusb_ldo_reg>;
vbus-supply = <&usb_vbus>;
- mediatek,u3p-dis-msk = <1>;
};
#include <arm/cros-ec-keyboard.dtsi>
diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
index d21ba00a5bd5..5a087404ccc2 100644
--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
@@ -487,7 +487,7 @@ topckgen: syscon@...00000 {
};
infracfg_ao: syscon@...01000 {
- compatible = "mediatek,mt8195-infracfg_ao", "syscon", "simple-mfd";
+ compatible = "mediatek,mt8195-infracfg_ao", "syscon";
reg = <0 0x10001000 0 0x1000>;
#clock-cells = <1>;
#reset-cells = <1>;
@@ -2845,11 +2845,9 @@ &larb19 &larb21 &larb24 &larb25
mutex1: mutex@...01000 {
compatible = "mediatek,mt8195-disp-mutex";
reg = <0 0x1c101000 0 0x1000>;
- reg-names = "vdo1_mutex";
interrupts = <GIC_SPI 494 IRQ_TYPE_LEVEL_HIGH 0>;
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
clocks = <&vdosys1 CLK_VDO1_DISP_MUTEX>;
- clock-names = "vdo1_mutex";
mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x1000 0x1000>;
mediatek,gce-events = <CMDQ_EVENT_VDO1_STREAM_DONE_ENG_0>;
};
diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
index 92b85de7706d..dfeeada91b78 100644
--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
@@ -3618,7 +3618,7 @@ lmh@...58800 {
};
cpufreq_hw: cpufreq@...23000 {
- compatible = "qcom,cpufreq-hw";
+ compatible = "qcom,sc8180x-cpufreq-hw", "qcom,cpufreq-hw";
reg = <0 0x18323000 0 0x1400>, <0 0x18325800 0 0x1400>;
reg-names = "freq-domain0", "freq-domain1";
diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
index 2efceb49a321..f271b69485c5 100644
--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
@@ -1351,43 +1351,43 @@ gpu_opp_table: opp-table {
opp-850000000 {
opp-hz = /bits/ 64 <850000000>;
opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
- opp-supported-hw = <0x02>;
+ opp-supported-hw = <0x03>;
};
opp-800000000 {
opp-hz = /bits/ 64 <800000000>;
opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
- opp-supported-hw = <0x04>;
+ opp-supported-hw = <0x07>;
};
opp-650000000 {
opp-hz = /bits/ 64 <650000000>;
opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
- opp-supported-hw = <0x08>;
+ opp-supported-hw = <0x0f>;
};
opp-565000000 {
opp-hz = /bits/ 64 <565000000>;
opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
- opp-supported-hw = <0x10>;
+ opp-supported-hw = <0x1f>;
};
opp-430000000 {
opp-hz = /bits/ 64 <430000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
- opp-supported-hw = <0xff>;
+ opp-supported-hw = <0x1f>;
};
opp-355000000 {
opp-hz = /bits/ 64 <355000000>;
opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
- opp-supported-hw = <0xff>;
+ opp-supported-hw = <0x1f>;
};
opp-253000000 {
opp-hz = /bits/ 64 <253000000>;
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
- opp-supported-hw = <0xff>;
+ opp-supported-hw = <0x1f>;
};
};
};
diff --git a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
index 8e2db1d6ca81..25c55b32aafe 100644
--- a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
+++ b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
@@ -69,9 +69,6 @@ &rcar_sound {
status = "okay";
- /* Single DAI */
- #sound-dai-cells = <0>;
-
rsnd_port: port {
rsnd_endpoint: endpoint {
remote-endpoint = <&dw_hdmi0_snd_in>;
diff --git a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
index 7fc0339a3ac9..e59191562d06 100644
--- a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
+++ b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
@@ -84,9 +84,6 @@ &rcar_sound {
pinctrl-names = "default";
status = "okay";
- /* Single DAI */
- #sound-dai-cells = <0>;
-
/* audio_clkout0/1/2/3 */
#clock-cells = <1>;
clock-frequency = <12288000 11289600>;
diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
index 9299fa7e3e21..e813d426be10 100644
--- a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+++ b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
@@ -34,7 +34,7 @@ sdio_pwrseq: sdio-pwrseq {
sound {
compatible = "audio-graph-card";
- label = "rockchip,es8388-codec";
+ label = "rockchip,es8388";
widgets = "Microphone", "Mic Jack",
"Headphone", "Headphones";
routing = "LINPUT2", "Mic Jack",
diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
index 0a5634ca005d..e931c966b7f2 100644
--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
@@ -134,7 +134,7 @@ reg_sdhc1_vmmc: regulator-sdhci1 {
regulator-max-microvolt = <3300000>;
regulator-min-microvolt = <3300000>;
regulator-name = "+V3.3_SD";
- startup-delay-us = <2000>;
+ startup-delay-us = <20000>;
};
reg_sdhc1_vqmmc: regulator-sdhci1-vqmmc {
diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
index 7a0c599f2b1c..9b122117ef72 100644
--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
@@ -192,7 +192,7 @@ J721E_IOPAD(0xd0, PIN_OUTPUT, 7) /* (T5) SPI0_D1.GPIO0_55 */
};
};
-&main_pmx1 {
+&main_pmx2 {
main_usbss0_pins_default: main-usbss0-default-pins {
pinctrl-single,pins = <
J721E_IOPAD(0x04, PIN_OUTPUT, 0) /* (T4) USB0_DRVVBUS */
diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
index cdb1d6b2a982..e5ff6f038a9a 100644
--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
@@ -395,7 +395,7 @@ cpts@...00 {
/* TIMERIO pad input CTRLMMR_TIMER*_CTRL registers */
main_timerio_input: pinctrl@...200 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
reg = <0x0 0x104200 0x0 0x50>;
#pinctrl-cells = <1>;
pinctrl-single,register-width = <32>;
@@ -404,7 +404,7 @@ main_timerio_input: pinctrl@...200 {
/* TIMERIO pad output CTCTRLMMR_TIMERIO*_CTRL registers */
main_timerio_output: pinctrl@...280 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
reg = <0x0 0x104280 0x0 0x20>;
#pinctrl-cells = <1>;
pinctrl-single,register-width = <32>;
@@ -412,7 +412,7 @@ main_timerio_output: pinctrl@...280 {
};
main_pmx0: pinctrl@...000 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
/* Proxy 0 addressing */
reg = <0x00 0x11c000 0x00 0x10c>;
#pinctrl-cells = <1>;
@@ -420,10 +420,28 @@ main_pmx0: pinctrl@...000 {
pinctrl-single,function-mask = <0xffffffff>;
};
- main_pmx1: pinctrl@...11c {
- compatible = "pinctrl-single";
+ main_pmx1: pinctrl@...110 {
+ compatible = "ti,j7200-padconf", "pinctrl-single";
/* Proxy 0 addressing */
- reg = <0x00 0x11c11c 0x00 0xc>;
+ reg = <0x00 0x11c110 0x00 0x004>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+ main_pmx2: pinctrl@...11c {
+ compatible = "ti,j7200-padconf", "pinctrl-single";
+ /* Proxy 0 addressing */
+ reg = <0x00 0x11c11c 0x00 0x00c>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+ main_pmx3: pinctrl@...164 {
+ compatible = "ti,j7200-padconf", "pinctrl-single";
+ /* Proxy 0 addressing */
+ reg = <0x00 0x11c164 0x00 0x008>;
#pinctrl-cells = <1>;
pinctrl-single,register-width = <32>;
pinctrl-single,function-mask = <0xffffffff>;
@@ -897,7 +915,7 @@ main_spi0: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 266 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 266 1>;
+ clocks = <&k3_clks 266 4>;
status = "disabled";
};
@@ -908,7 +926,7 @@ main_spi1: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 267 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 267 1>;
+ clocks = <&k3_clks 267 4>;
status = "disabled";
};
@@ -919,7 +937,7 @@ main_spi2: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 268 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 268 1>;
+ clocks = <&k3_clks 268 4>;
status = "disabled";
};
@@ -930,7 +948,7 @@ main_spi3: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 269 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 269 1>;
+ clocks = <&k3_clks 269 4>;
status = "disabled";
};
@@ -941,7 +959,7 @@ main_spi4: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 270 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 270 1>;
+ clocks = <&k3_clks 270 2>;
status = "disabled";
};
@@ -952,7 +970,7 @@ main_spi5: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 271 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 271 1>;
+ clocks = <&k3_clks 271 4>;
status = "disabled";
};
@@ -963,7 +981,7 @@ main_spi6: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 272 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 272 1>;
+ clocks = <&k3_clks 272 4>;
status = "disabled";
};
@@ -974,7 +992,7 @@ main_spi7: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 273 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 273 1>;
+ clocks = <&k3_clks 273 4>;
status = "disabled";
};
diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
index 6ffaf85fa63f..8e9d0a25e236 100644
--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
@@ -185,7 +185,7 @@ chipid@...00014 {
/* MCU_TIMERIO pad input CTRLMMR_MCU_TIMER*_CTRL registers */
mcu_timerio_input: pinctrl@...04200 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
reg = <0x0 0x40f04200 0x0 0x28>;
#pinctrl-cells = <1>;
pinctrl-single,register-width = <32>;
@@ -195,7 +195,7 @@ mcu_timerio_input: pinctrl@...04200 {
/* MCU_TIMERIO pad output CTRLMMR_MCU_TIMERIO*_CTRL registers */
mcu_timerio_output: pinctrl@...04280 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
reg = <0x0 0x40f04280 0x0 0x28>;
#pinctrl-cells = <1>;
pinctrl-single,register-width = <32>;
@@ -204,7 +204,7 @@ mcu_timerio_output: pinctrl@...04280 {
};
wkup_pmx0: pinctrl@...1c000 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
/* Proxy 0 addressing */
reg = <0x00 0x4301c000 0x00 0x34>;
#pinctrl-cells = <1>;
@@ -213,7 +213,7 @@ wkup_pmx0: pinctrl@...1c000 {
};
wkup_pmx1: pinctrl@...1c038 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
/* Proxy 0 addressing */
reg = <0x00 0x4301c038 0x00 0x8>;
#pinctrl-cells = <1>;
@@ -222,7 +222,7 @@ wkup_pmx1: pinctrl@...1c038 {
};
wkup_pmx2: pinctrl@...1c068 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
/* Proxy 0 addressing */
reg = <0x00 0x4301c068 0x00 0xec>;
#pinctrl-cells = <1>;
@@ -231,7 +231,7 @@ wkup_pmx2: pinctrl@...1c068 {
};
wkup_pmx3: pinctrl@...1c174 {
- compatible = "pinctrl-single";
+ compatible = "ti,j7200-padconf", "pinctrl-single";
/* Proxy 0 addressing */
reg = <0x00 0x4301c174 0x00 0x20>;
#pinctrl-cells = <1>;
@@ -481,7 +481,7 @@ mcu_spi0: spi@...00000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 274 0>;
+ clocks = <&k3_clks 274 4>;
status = "disabled";
};
@@ -492,7 +492,7 @@ mcu_spi1: spi@...10000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 275 0>;
+ clocks = <&k3_clks 275 4>;
status = "disabled";
};
@@ -503,7 +503,7 @@ mcu_spi2: spi@...20000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 276 0>;
+ clocks = <&k3_clks 276 2>;
status = "disabled";
};
diff --git a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
index 05d6ef127ba7..1893d611b173 100644
--- a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
@@ -637,7 +637,7 @@ mcu_spi0: spi@...00000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 274 0>;
+ clocks = <&k3_clks 274 1>;
status = "disabled";
};
@@ -648,7 +648,7 @@ mcu_spi1: spi@...10000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 275 0>;
+ clocks = <&k3_clks 275 1>;
status = "disabled";
};
@@ -659,7 +659,7 @@ mcu_spi2: spi@...20000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 276 0>;
+ clocks = <&k3_clks 276 1>;
status = "disabled";
};
diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
index 084f8f5b6699..9484347acba7 100644
--- a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
@@ -1569,7 +1569,7 @@ main_spi0: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 339 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 339 1>;
+ clocks = <&k3_clks 339 2>;
status = "disabled";
};
@@ -1580,7 +1580,7 @@ main_spi1: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 340 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 340 1>;
+ clocks = <&k3_clks 340 2>;
status = "disabled";
};
@@ -1591,7 +1591,7 @@ main_spi2: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 341 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 341 1>;
+ clocks = <&k3_clks 341 2>;
status = "disabled";
};
@@ -1602,7 +1602,7 @@ main_spi3: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 342 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 342 1>;
+ clocks = <&k3_clks 342 2>;
status = "disabled";
};
@@ -1613,7 +1613,7 @@ main_spi4: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 343 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 343 1>;
+ clocks = <&k3_clks 343 2>;
status = "disabled";
};
@@ -1624,7 +1624,7 @@ main_spi5: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 344 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 344 1>;
+ clocks = <&k3_clks 344 2>;
status = "disabled";
};
@@ -1635,7 +1635,7 @@ main_spi6: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 345 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 345 1>;
+ clocks = <&k3_clks 345 2>;
status = "disabled";
};
@@ -1646,7 +1646,7 @@ main_spi7: spi@...0000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 346 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 346 1>;
+ clocks = <&k3_clks 346 2>;
status = "disabled";
};
diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
index 71324fec415a..6fc008fbfb00 100644
--- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
@@ -416,7 +416,7 @@ mcu_spi0: spi@...00000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 347 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 347 0>;
+ clocks = <&k3_clks 347 2>;
status = "disabled";
};
@@ -427,7 +427,7 @@ mcu_spi1: spi@...10000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 348 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 348 0>;
+ clocks = <&k3_clks 348 2>;
status = "disabled";
};
@@ -438,7 +438,7 @@ mcu_spi2: spi@...20000 {
#address-cells = <1>;
#size-cells = <0>;
power-domains = <&k3_pds 349 TI_SCI_PD_EXCLUSIVE>;
- clocks = <&k3_clks 349 0>;
+ clocks = <&k3_clks 349 2>;
status = "disabled";
};
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index db1aeacd4cd9..0ccf51afde31 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -347,6 +347,7 @@ __AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
__AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
__AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000)
__AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000)
+__AARCH64_INSN_FUNCS(mops, 0x3B200C00, 0x19000400)
__AARCH64_INSN_FUNCS(stp, 0x7FC00000, 0x29000000)
__AARCH64_INSN_FUNCS(ldp, 0x7FC00000, 0x29400000)
__AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index af06ccb7ee34..b84ed3ad91a9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -72,8 +72,6 @@ enum kvm_mode kvm_get_mode(void);
static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
#endif
-DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
-
extern unsigned int __ro_after_init kvm_sve_max_vl;
int __init kvm_arm_init_sve(void);
diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 3496d6169e59..42b69936cee3 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -58,10 +58,13 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
* Instructions which load PC relative literals are not going to work
* when executed from an XOL slot. Instructions doing an exclusive
* load/store are not going to complete successfully when single-step
- * exception handling happens in the middle of the sequence.
+ * exception handling happens in the middle of the sequence. Memory
+ * copy/set instructions require that all three instructions be placed
+ * consecutively in memory.
*/
if (aarch64_insn_uses_literal(insn) ||
- aarch64_insn_is_exclusive(insn))
+ aarch64_insn_is_exclusive(insn) ||
+ aarch64_insn_is_mops(insn))
return false;
return true;
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 0fcc4eb1a7ab..385fb78845d6 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -429,7 +429,7 @@ static void tls_thread_switch(struct task_struct *next)
if (is_compat_thread(task_thread_info(next)))
write_sysreg(next->thread.uw.tp_value, tpidrro_el0);
- else if (!arm64_kernel_unmapped_at_el0())
+ else
write_sysreg(0, tpidrro_el0);
write_sysreg(*task_user_tls(next), tpidr_el0);
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index c583d1f335f8..040b0175334c 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -190,7 +190,11 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys)
if (dt_virt)
memblock_reserve(dt_phys, size);
- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
+ /*
+ * dt_virt is a fixmap address, hence __pa(dt_virt) can't be used.
+ * Pass dt_phys directly.
+ */
+ if (!early_init_dt_scan(dt_virt, dt_phys)) {
pr_crit("\n"
"Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n"
"The dtb must be 8-byte aligned and must not exceed 2 MB in size\n"
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 3cd7e76cc562..a553dae9a0d4 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -285,6 +285,9 @@ SECTIONS
__initdata_end = .;
__init_end = .;
+ .data.rel.ro : { *(.data.rel.ro) }
+ ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
+
_data = .;
_sdata = .;
RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
@@ -336,9 +339,6 @@ SECTIONS
*(.plt) *(.plt.*) *(.iplt) *(.igot .igot.plt)
}
ASSERT(SIZEOF(.plt) == 0, "Unexpected run-time procedure linkages detected!")
-
- .data.rel.ro : { *(.data.rel.ro) }
- ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
}
#include "image-vars.h"
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index a1e24228aaaa..d221829502f3 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -206,8 +206,7 @@ void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
static inline bool userspace_irqchip(struct kvm *kvm)
{
- return static_branch_unlikely(&userspace_irqchip_in_use) &&
- unlikely(!irqchip_in_kernel(kvm));
+ return unlikely(!irqchip_in_kernel(kvm));
}
static void soft_timer_start(struct hrtimer *hrt, u64 ns)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 18413d869cca..4742e6c5ea7a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -57,7 +57,6 @@ DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
static bool vgic_present, kvm_arm_initialised;
static DEFINE_PER_CPU(unsigned char, kvm_hyp_initialized);
-DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
bool is_kvm_arm_initialised(void)
{
@@ -401,9 +400,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
- if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
- static_branch_dec(&userspace_irqchip_in_use);
-
kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
kvm_timer_vcpu_terminate(vcpu);
kvm_pmu_vcpu_destroy(vcpu);
@@ -627,14 +623,6 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
return ret;
}
- if (!irqchip_in_kernel(kvm)) {
- /*
- * Tell the rest of the code that there are userspace irqchip
- * VMs in the wild.
- */
- static_branch_inc(&userspace_irqchip_in_use);
- }
-
/*
* Initialize traps for protected VMs.
* NOTE: Move to run in EL2 directly, rather than via a hypercall, once
@@ -856,7 +844,7 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
* state gets updated in kvm_timer_update_run and
* kvm_pmu_update_run below).
*/
- if (static_branch_unlikely(&userspace_irqchip_in_use)) {
+ if (unlikely(!irqchip_in_kernel(vcpu->kvm))) {
if (kvm_timer_should_notify_user(vcpu) ||
kvm_pmu_should_notify_user(vcpu)) {
*ret = -EINTR;
@@ -975,7 +963,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
vcpu->mode = OUTSIDE_GUEST_MODE;
isb(); /* Ensure work in x_flush_hwstate is committed */
kvm_pmu_sync_hwstate(vcpu);
- if (static_branch_unlikely(&userspace_irqchip_in_use))
+ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
kvm_timer_sync_user(vcpu);
kvm_vgic_sync_hwstate(vcpu);
local_irq_enable();
@@ -1021,7 +1009,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
* we don't want vtimer interrupts to race with syncing the
* timer virtual interrupt state.
*/
- if (static_branch_unlikely(&userspace_irqchip_in_use))
+ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
kvm_timer_sync_user(vcpu);
kvm_arch_vcpu_ctxsync_fp(vcpu);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 6b066e04dc5d..3867d6d1f5d1 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -326,7 +326,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) {
reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
- reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
}
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index 4f9084ba7949..608c39859f4e 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -855,6 +855,9 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
ite = find_ite(its, device_id, event_id);
if (ite && its_is_collection_mapped(ite->collection)) {
+ struct its_device *device = find_its_device(its, device_id);
+ int ite_esz = vgic_its_get_abi(its)->ite_esz;
+ gpa_t gpa = device->itt_addr + ite->event_id * ite_esz;
/*
* Though the spec talks about removing the pending state, we
* don't bother here since we clear the ITTE anyway and the
@@ -863,7 +866,8 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
vgic_its_invalidate_cache(kvm);
its_free_ite(kvm, ite);
- return 0;
+
+ return vgic_its_write_entry_lock(its, gpa, 0, ite_esz);
}
return E_ITS_DISCARD_UNMAPPED_INTERRUPT;
@@ -1211,9 +1215,11 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
bool valid = its_cmd_get_validbit(its_cmd);
u8 num_eventid_bits = its_cmd_get_size(its_cmd);
gpa_t itt_addr = its_cmd_get_ittaddr(its_cmd);
+ int dte_esz = vgic_its_get_abi(its)->dte_esz;
struct its_device *device;
+ gpa_t gpa;
- if (!vgic_its_check_id(its, its->baser_device_table, device_id, NULL))
+ if (!vgic_its_check_id(its, its->baser_device_table, device_id, &gpa))
return E_ITS_MAPD_DEVICE_OOR;
if (valid && num_eventid_bits > VITS_TYPER_IDBITS)
@@ -1234,7 +1240,7 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
* is an error, so we are done in any case.
*/
if (!valid)
- return 0;
+ return vgic_its_write_entry_lock(its, gpa, 0, dte_esz);
device = vgic_its_alloc_device(its, device_id, itt_addr,
num_eventid_bits);
@@ -2207,7 +2213,6 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
struct its_ite *ite, gpa_t gpa, int ite_esz)
{
- struct kvm *kvm = its->dev->kvm;
u32 next_offset;
u64 val;
@@ -2216,7 +2221,8 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) |
ite->collection->collection_id;
val = cpu_to_le64(val);
- return vgic_write_guest_lock(kvm, gpa, &val, ite_esz);
+
+ return vgic_its_write_entry_lock(its, gpa, val, ite_esz);
}
/**
@@ -2357,7 +2363,6 @@ static int vgic_its_restore_itt(struct vgic_its *its, struct its_device *dev)
static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
gpa_t ptr, int dte_esz)
{
- struct kvm *kvm = its->dev->kvm;
u64 val, itt_addr_field;
u32 next_offset;
@@ -2368,7 +2373,8 @@ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
(itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) |
(dev->num_eventid_bits - 1));
val = cpu_to_le64(val);
- return vgic_write_guest_lock(kvm, ptr, &val, dte_esz);
+
+ return vgic_its_write_entry_lock(its, ptr, val, dte_esz);
}
/**
@@ -2555,7 +2561,8 @@ static int vgic_its_save_cte(struct vgic_its *its,
((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) |
collection->collection_id);
val = cpu_to_le64(val);
- return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz);
+
+ return vgic_its_write_entry_lock(its, gpa, val, esz);
}
/*
@@ -2571,8 +2578,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
u64 val;
int ret;
- BUG_ON(esz > sizeof(val));
- ret = kvm_read_guest_lock(kvm, gpa, &val, esz);
+ ret = vgic_its_read_entry_lock(its, gpa, &val, esz);
if (ret)
return ret;
val = le64_to_cpu(val);
@@ -2610,7 +2616,6 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
u64 baser = its->baser_coll_table;
gpa_t gpa = GITS_BASER_ADDR_48_to_52(baser);
struct its_collection *collection;
- u64 val;
size_t max_size, filled = 0;
int ret, cte_esz = abi->cte_esz;
@@ -2634,10 +2639,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
* table is not fully filled, add a last dummy element
* with valid bit unset
*/
- val = 0;
- BUG_ON(cte_esz > sizeof(val));
- ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
- return ret;
+ return vgic_its_write_entry_lock(its, gpa, 0, cte_esz);
}
/**
diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
index 48e8b60ff1e3..7c0b23415ad9 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
@@ -555,6 +555,7 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
unsigned long val)
{
struct vgic_irq *irq;
+ u32 intid;
/*
* If the guest wrote only to the upper 32bit part of the
@@ -566,9 +567,13 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
if ((addr & 4) || !vgic_lpis_enabled(vcpu))
return;
+ intid = lower_32_bits(val);
+ if (intid < VGIC_MIN_LPI)
+ return;
+
vgic_set_rdist_busy(vcpu, true);
- irq = vgic_get_irq(vcpu->kvm, NULL, lower_32_bits(val));
+ irq = vgic_get_irq(vcpu->kvm, NULL, intid);
if (irq) {
vgic_its_inv_lpi(vcpu->kvm, irq);
vgic_put_irq(vcpu->kvm, irq);
diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
index 07e48f8a4f23..3fa68827dc89 100644
--- a/arch/arm64/kvm/vgic/vgic.h
+++ b/arch/arm64/kvm/vgic/vgic.h
@@ -145,6 +145,29 @@ static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa,
return ret;
}
+static inline int vgic_its_read_entry_lock(struct vgic_its *its, gpa_t eaddr,
+ u64 *eval, unsigned long esize)
+{
+ struct kvm *kvm = its->dev->kvm;
+
+ if (KVM_BUG_ON(esize != sizeof(*eval), kvm))
+ return -EINVAL;
+
+ return kvm_read_guest_lock(kvm, eaddr, eval, esize);
+
+}
+
+static inline int vgic_its_write_entry_lock(struct vgic_its *its, gpa_t eaddr,
+ u64 eval, unsigned long esize)
+{
+ struct kvm *kvm = its->dev->kvm;
+
+ if (KVM_BUG_ON(esize != sizeof(eval), kvm))
+ return -EINVAL;
+
+ return vgic_write_guest_lock(kvm, eaddr, &eval, esize);
+}
+
/*
* This struct provides an intermediate representation of the fields contained
* in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 166619348b98..5074bd1d37b5 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -1816,6 +1816,12 @@ static void restore_args(struct jit_ctx *ctx, int args_off, int nregs)
}
}
+static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links)
+{
+ return fentry_links->nr_links == 1 &&
+ fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS;
+}
+
/* Based on the x86's implementation of arch_prepare_bpf_trampoline().
*
* bpf prog and function entry before bpf trampoline hooked:
@@ -1845,6 +1851,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
bool save_ret;
__le32 **branches = NULL;
+ bool is_struct_ops = is_struct_ops_tramp(fentry);
/* trampoline stack layout:
* [ parent ip ]
@@ -1913,11 +1920,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
*/
emit_bti(A64_BTI_JC, ctx);
- /* frame for parent function */
- emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
- emit(A64_MOV(1, A64_FP, A64_SP), ctx);
+ /* x9 is not set for struct_ops */
+ if (!is_struct_ops) {
+ /* frame for parent function */
+ emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
+ emit(A64_MOV(1, A64_FP, A64_SP), ctx);
+ }
- /* frame for patched function */
+ /* frame for patched function for tracing, or caller for struct_ops */
emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx);
emit(A64_MOV(1, A64_FP, A64_SP), ctx);
@@ -2003,19 +2013,24 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
/* reset SP */
emit(A64_MOV(1, A64_SP, A64_FP), ctx);
- /* pop frames */
- emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
- emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
-
- if (flags & BPF_TRAMP_F_SKIP_FRAME) {
- /* skip patched function, return to parent */
- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
- emit(A64_RET(A64_R(9)), ctx);
+ if (is_struct_ops) {
+ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
+ emit(A64_RET(A64_LR), ctx);
} else {
- /* return to patched function */
- emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
- emit(A64_RET(A64_R(10)), ctx);
+ /* pop frames */
+ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
+ emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
+
+ if (flags & BPF_TRAMP_F_SKIP_FRAME) {
+ /* skip patched function, return to parent */
+ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+ emit(A64_RET(A64_R(9)), ctx);
+ } else {
+ /* return to patched function */
+ emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
+ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+ emit(A64_RET(A64_R(10)), ctx);
+ }
}
if (ctx->image)
diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
index 106fbf0b6f3b..2d85484ae0e7 100644
--- a/arch/csky/kernel/setup.c
+++ b/arch/csky/kernel/setup.c
@@ -124,9 +124,9 @@ asmlinkage __visible void __init csky_start(unsigned int unused,
pre_trap_init();
if (dtb_start == NULL)
- early_init_dt_scan(__dtb_start);
+ early_init_dt_scan(__dtb_start, __pa(dtb_start));
else
- early_init_dt_scan(dtb_start);
+ early_init_dt_scan(dtb_start, __pa(dtb_start));
start_kernel();
diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm/page.h
index 63f137ce82a4..f49c2782c5c4 100644
--- a/arch/loongarch/include/asm/page.h
+++ b/arch/loongarch/include/asm/page.h
@@ -94,10 +94,7 @@ typedef struct { unsigned long pgprot; } pgprot_t;
extern int __virt_addr_valid(volatile void *kaddr);
#define virt_addr_valid(kaddr) __virt_addr_valid((volatile void *)(kaddr))
-#define VM_DATA_DEFAULT_FLAGS \
- (VM_READ | VM_WRITE | \
- ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0) | \
- VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
index 065f2db57c09..7ef1c1ff1fc4 100644
--- a/arch/loongarch/kernel/setup.c
+++ b/arch/loongarch/kernel/setup.c
@@ -304,7 +304,7 @@ static void __init fdt_setup(void)
if (!fdt_pointer || fdt_check_header(fdt_pointer))
return;
- early_init_dt_scan(fdt_pointer);
+ early_init_dt_scan(fdt_pointer, __pa(fdt_pointer));
early_init_fdt_reserve_self();
max_low_pfn = PFN_PHYS(memblock_end_of_DRAM());
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 9eb7753d117d..497f8b0a5f1e 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -179,7 +179,7 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
if (!is_tail_call) {
/* Set return value */
- move_reg(ctx, LOONGARCH_GPR_A0, regmap[BPF_REG_0]);
+ emit_insn(ctx, addiw, LOONGARCH_GPR_A0, regmap[BPF_REG_0], 0);
/* Return to the caller */
emit_insn(ctx, jirl, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0);
} else {
diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
index f597cd08a96b..1a0f6ca0247b 100644
--- a/arch/loongarch/vdso/Makefile
+++ b/arch/loongarch/vdso/Makefile
@@ -22,7 +22,7 @@ ccflags-vdso := \
cflags-vdso := $(ccflags-vdso) \
-isystem $(shell $(CC) -print-file-name=include) \
$(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
- -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
+ -std=gnu11 -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
-fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
$(call cc-option, -fno-asynchronous-unwind-tables) \
$(call cc-option, -fno-stack-protector)
diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
index 7dab46728aed..b6958ec2a220 100644
--- a/arch/m68k/coldfire/device.c
+++ b/arch/m68k/coldfire/device.c
@@ -93,7 +93,7 @@ static struct platform_device mcf_uart = {
.dev.platform_data = mcf_uart_platform_data,
};
-#if IS_ENABLED(CONFIG_FEC)
+#ifdef MCFFEC_BASE0
#ifdef CONFIG_M5441x
#define FEC_NAME "enet-fec"
@@ -145,6 +145,7 @@ static struct platform_device mcf_fec0 = {
.platform_data = FEC_PDATA,
}
};
+#endif /* MCFFEC_BASE0 */
#ifdef MCFFEC_BASE1
static struct resource mcf_fec1_resources[] = {
@@ -182,7 +183,6 @@ static struct platform_device mcf_fec1 = {
}
};
#endif /* MCFFEC_BASE1 */
-#endif /* CONFIG_FEC */
#if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
/*
@@ -624,12 +624,12 @@ static struct platform_device mcf_flexcan0 = {
static struct platform_device *mcf_devices[] __initdata = {
&mcf_uart,
-#if IS_ENABLED(CONFIG_FEC)
+#ifdef MCFFEC_BASE0
&mcf_fec0,
+#endif
#ifdef MCFFEC_BASE1
&mcf_fec1,
#endif
-#endif
#if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
&mcf_qspi,
#endif
diff --git a/arch/m68k/include/asm/mcfgpio.h b/arch/m68k/include/asm/mcfgpio.h
index 7abd322c019f..295624d01d3d 100644
--- a/arch/m68k/include/asm/mcfgpio.h
+++ b/arch/m68k/include/asm/mcfgpio.h
@@ -136,7 +136,7 @@ static inline void gpio_free(unsigned gpio)
* read-modify-write as well as those controlled by the EPORT and GPIO modules.
*/
#define MCFGPIO_SCR_START 40
-#elif defined(CONFIGM5441x)
+#elif defined(CONFIG_M5441x)
/* The m5441x EPORT doesn't have its own GPIO port, uses PORT C */
#define MCFGPIO_SCR_START 0
#else
diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h
index e28eb1c0e0bf..dbf88059e47a 100644
--- a/arch/m68k/include/asm/mvme147hw.h
+++ b/arch/m68k/include/asm/mvme147hw.h
@@ -93,8 +93,8 @@ struct pcc_regs {
#define M147_SCC_B_ADDR 0xfffe3000
#define M147_SCC_PCLK 5000000
-#define MVME147_IRQ_SCSI_PORT (IRQ_USER+0x45)
-#define MVME147_IRQ_SCSI_DMA (IRQ_USER+0x46)
+#define MVME147_IRQ_SCSI_PORT (IRQ_USER + 5)
+#define MVME147_IRQ_SCSI_DMA (IRQ_USER + 6)
/* SCC interrupts, for MVME147 */
diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
index 7d3fe08a48eb..f11ef9f1f56f 100644
--- a/arch/m68k/kernel/early_printk.c
+++ b/arch/m68k/kernel/early_printk.c
@@ -12,8 +12,9 @@
#include <linux/string.h>
#include <asm/setup.h>
-extern void mvme16x_cons_write(struct console *co,
- const char *str, unsigned count);
+
+#include "../mvme147/mvme147.h"
+#include "../mvme16x/mvme16x.h"
asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
@@ -22,7 +23,9 @@ static void __ref debug_cons_write(struct console *c,
{
#if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
defined(CONFIG_COLDFIRE))
- if (MACH_IS_MVME16x)
+ if (MACH_IS_MVME147)
+ mvme147_scc_write(c, s, n);
+ else if (MACH_IS_MVME16x)
mvme16x_cons_write(c, s, n);
else
debug_cons_nputs(s, n);
diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c
index 4e6218115f43..95d4a7e13b33 100644
--- a/arch/m68k/mvme147/config.c
+++ b/arch/m68k/mvme147/config.c
@@ -35,6 +35,7 @@
#include <asm/mvme147hw.h>
#include <asm/config.h>
+#include "mvme147.h"
static void mvme147_get_model(char *model);
extern void mvme147_sched_init(void);
@@ -188,3 +189,32 @@ int mvme147_hwclk(int op, struct rtc_time *t)
}
return 0;
}
+
+static void scc_delay(void)
+{
+ __asm__ __volatile__ ("nop; nop;");
+}
+
+static void scc_write(char ch)
+{
+ do {
+ scc_delay();
+ } while (!(in_8(M147_SCC_A_ADDR) & BIT(2)));
+ scc_delay();
+ out_8(M147_SCC_A_ADDR, 8);
+ scc_delay();
+ out_8(M147_SCC_A_ADDR, ch);
+}
+
+void mvme147_scc_write(struct console *co, const char *str, unsigned int count)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ while (count--) {
+ if (*str == '\n')
+ scc_write('\r');
+ scc_write(*str++);
+ }
+ local_irq_restore(flags);
+}
diff --git a/arch/m68k/mvme147/mvme147.h b/arch/m68k/mvme147/mvme147.h
new file mode 100644
index 000000000000..140bc98b0102
--- /dev/null
+++ b/arch/m68k/mvme147/mvme147.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+struct console;
+
+/* config.c */
+void mvme147_scc_write(struct console *co, const char *str, unsigned int count);
diff --git a/arch/m68k/mvme16x/config.c b/arch/m68k/mvme16x/config.c
index f00c7aa058de..2b7eac224138 100644
--- a/arch/m68k/mvme16x/config.c
+++ b/arch/m68k/mvme16x/config.c
@@ -38,6 +38,8 @@
#include <asm/mvme16xhw.h>
#include <asm/config.h>
+#include "mvme16x.h"
+
extern t_bdid mvme_bdid;
static MK48T08ptr_t volatile rtc = (MK48T08ptr_t)MVME_RTC_BASE;
diff --git a/arch/m68k/mvme16x/mvme16x.h b/arch/m68k/mvme16x/mvme16x.h
new file mode 100644
index 000000000000..159c34b70039
--- /dev/null
+++ b/arch/m68k/mvme16x/mvme16x.h
@@ -0,0 +1,6 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+struct console;
+
+/* config.c */
+void mvme16x_cons_write(struct console *co, const char *str, unsigned count);
diff --git a/arch/microblaze/kernel/microblaze_ksyms.c b/arch/microblaze/kernel/microblaze_ksyms.c
index c892e173ec99..a8553f54152b 100644
--- a/arch/microblaze/kernel/microblaze_ksyms.c
+++ b/arch/microblaze/kernel/microblaze_ksyms.c
@@ -16,6 +16,7 @@
#include <asm/page.h>
#include <linux/ftrace.h>
#include <linux/uaccess.h>
+#include <asm/xilinx_mb_manager.h>
#ifdef CONFIG_FUNCTION_TRACER
extern void _mcount(void);
@@ -46,3 +47,12 @@ extern void __udivsi3(void);
EXPORT_SYMBOL(__udivsi3);
extern void __umodsi3(void);
EXPORT_SYMBOL(__umodsi3);
+
+#ifdef CONFIG_MB_MANAGER
+extern void xmb_manager_register(uintptr_t phys_baseaddr, u32 cr_val,
+ void (*callback)(void *data),
+ void *priv, void (*reset_callback)(void *data));
+EXPORT_SYMBOL(xmb_manager_register);
+extern asmlinkage void xmb_inject_err(void);
+EXPORT_SYMBOL(xmb_inject_err);
+#endif
diff --git a/arch/microblaze/kernel/prom.c b/arch/microblaze/kernel/prom.c
index e424c796e297..76ac4cfdfb42 100644
--- a/arch/microblaze/kernel/prom.c
+++ b/arch/microblaze/kernel/prom.c
@@ -18,7 +18,7 @@ void __init early_init_devtree(void *params)
{
pr_debug(" -> early_init_devtree(%p)\n", params);
- early_init_dt_scan(params);
+ early_init_dt_scan(params, __pa(params));
if (!strlen(boot_command_line))
strscpy(boot_command_line, cmd_line, COMMAND_LINE_SIZE);
diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h
index a4374b4cb88f..d6ccd5344021 100644
--- a/arch/mips/include/asm/switch_to.h
+++ b/arch/mips/include/asm/switch_to.h
@@ -97,7 +97,7 @@ do { \
} \
} while (0)
#else
-# define __sanitize_fcr31(next)
+# define __sanitize_fcr31(next) do { (void) (next); } while (0)
#endif
/*
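/*
 * Editor's note -- illustrative sketch only, not part of the patch above.
 * The switch_to.h hunk changes the stub __sanitize_fcr31() from an empty
 * expansion to "do { (void)(next); } while (0)". The standalone example
 * below (hypothetical names, plain userspace C) shows why that form is
 * preferred for a no-op macro: the argument still counts as "used", so a
 * configuration where the macro is the only reference does not trip
 * unused-variable warnings, and the expansion behaves as a single statement
 * after if/else.
 */
#include <stdio.h>

#define SANITIZE_STUB(next)	do { (void)(next); } while (0)

static void demo(int next)
{
	if (next > 0)
		SANITIZE_STUB(next);	/* one statement; 'next' counts as used */
	else
		printf("non-positive: %d\n", next);
}

int main(void)
{
	demo(1);
	demo(-1);
	return 0;
}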
diff --git a/arch/mips/kernel/prom.c b/arch/mips/kernel/prom.c
index f88ce78e13e3..474dc1eec3bb 100644
--- a/arch/mips/kernel/prom.c
+++ b/arch/mips/kernel/prom.c
@@ -39,7 +39,7 @@ char *mips_get_machine_name(void)
void __init __dt_setup_arch(void *bph)
{
- if (!early_init_dt_scan(bph))
+ if (!early_init_dt_scan(bph, __pa(bph)))
return;
mips_set_machine_name(of_flat_dt_get_machine_name());
diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
index 58fc8d089402..6d35d4f7ebe1 100644
--- a/arch/mips/kernel/relocate.c
+++ b/arch/mips/kernel/relocate.c
@@ -337,7 +337,7 @@ void *__init relocate_kernel(void)
#if defined(CONFIG_USE_OF)
/* Deal with the device tree */
fdt = plat_get_fdt();
- early_init_dt_scan(fdt);
+ early_init_dt_scan(fdt, __pa(fdt));
if (boot_command_line[0]) {
/* Boot command line was passed in device tree */
strscpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE);
diff --git a/arch/nios2/kernel/prom.c b/arch/nios2/kernel/prom.c
index 8d98af5c7201..15bbdd78e9bf 100644
--- a/arch/nios2/kernel/prom.c
+++ b/arch/nios2/kernel/prom.c
@@ -26,12 +26,12 @@ void __init early_init_devtree(void *params)
if (be32_to_cpup((__be32 *)CONFIG_NIOS2_DTB_PHYS_ADDR) ==
OF_DT_HEADER) {
params = (void *)CONFIG_NIOS2_DTB_PHYS_ADDR;
- early_init_dt_scan(params);
+ early_init_dt_scan(params, __pa(params));
return;
}
#endif
if (be32_to_cpu((__be32) *dtb) == OF_DT_HEADER)
params = (void *)__dtb_start;
- early_init_dt_scan(params);
+ early_init_dt_scan(params, __pa(params));
}
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index fd9bb76a610b..206a6da4f31b 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -64,6 +64,9 @@ config STACKTRACE_SUPPORT
config LOCKDEP_SUPPORT
def_bool y
+config FIX_EARLYCON_MEM
+ def_bool y
+
menu "Processor type and features"
choice
diff --git a/arch/openrisc/include/asm/fixmap.h b/arch/openrisc/include/asm/fixmap.h
index ad78e50b7ba3..aece6013fead 100644
--- a/arch/openrisc/include/asm/fixmap.h
+++ b/arch/openrisc/include/asm/fixmap.h
@@ -26,29 +26,18 @@
#include <linux/bug.h>
#include <asm/page.h>
-/*
- * On OpenRISC we use these special fixed_addresses for doing ioremap
- * early in the boot process before memory initialization is complete.
- * This is used, in particular, by the early serial console code.
- *
- * It's not really 'fixmap', per se, but fits loosely into the same
- * paradigm.
- */
enum fixed_addresses {
- /*
- * FIX_IOREMAP entries are useful for mapping physical address
- * space before ioremap() is useable, e.g. really early in boot
- * before kmalloc() is working.
- */
-#define FIX_N_IOREMAPS 32
- FIX_IOREMAP_BEGIN,
- FIX_IOREMAP_END = FIX_IOREMAP_BEGIN + FIX_N_IOREMAPS - 1,
+ FIX_EARLYCON_MEM_BASE,
__end_of_fixed_addresses
};
#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
/* FIXADDR_BOTTOM might be a better name here... */
#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+#define FIXMAP_PAGE_IO PAGE_KERNEL_NOCACHE
+
+extern void __set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags);
#define __fix_to_virt(x) (FIXADDR_TOP - ((x) << PAGE_SHIFT))
#define __virt_to_fix(x) ((FIXADDR_TOP - ((x)&PAGE_MASK)) >> PAGE_SHIFT)
diff --git a/arch/openrisc/kernel/prom.c b/arch/openrisc/kernel/prom.c
index 19e6008bf114..e424e9bd12a7 100644
--- a/arch/openrisc/kernel/prom.c
+++ b/arch/openrisc/kernel/prom.c
@@ -22,6 +22,6 @@
void __init early_init_devtree(void *params)
{
- early_init_dt_scan(params);
+ early_init_dt_scan(params, __pa(params));
memblock_allow_resize();
}
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 1dcd78c8f0e9..d0cb1a0126f9 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -207,6 +207,43 @@ void __init mem_init(void)
return;
}
+static int __init map_page(unsigned long va, phys_addr_t pa, pgprot_t prot)
+{
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ p4d = p4d_offset(pgd_offset_k(va), va);
+ pud = pud_offset(p4d, va);
+ pmd = pmd_offset(pud, va);
+ pte = pte_alloc_kernel(pmd, va);
+
+ if (pte == NULL)
+ return -ENOMEM;
+
+ if (pgprot_val(prot))
+ set_pte_at(&init_mm, va, pte, pfn_pte(pa >> PAGE_SHIFT, prot));
+ else
+ pte_clear(&init_mm, va, pte);
+
+ local_flush_tlb_page(NULL, va);
+ return 0;
+}
+
+void __init __set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t prot)
+{
+ unsigned long address = __fix_to_virt(idx);
+
+ if (idx >= __end_of_fixed_addresses) {
+ BUG();
+ return;
+ }
+
+ map_page(address, phys, prot);
+}
+
static const pgprot_t protection_map[16] = {
[VM_NONE] = PAGE_NONE,
[VM_READ] = PAGE_READONLY_X,
diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index c91f9c2e61ed..f8d08eab7db8 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -87,7 +87,7 @@ int ftrace_enable_ftrace_graph_caller(void)
int ftrace_disable_ftrace_graph_caller(void)
{
- static_key_enable(&ftrace_graph_enable.key);
+ static_key_disable(&ftrace_graph_enable.key);
return 0;
}
#endif
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2fe51e0ad637..6baa8b85601a 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -271,8 +271,8 @@ config PPC
select HAVE_RSEQ
select HAVE_SETUP_PER_CPU_AREA if PPC64
select HAVE_SOFTIRQ_ON_OWN_STACK
- select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
- select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
+ select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)
+ select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0)
select HAVE_STATIC_CALL if PPC32
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index f19dbaa1d541..46cbc8ead07d 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -89,13 +89,6 @@ KBUILD_AFLAGS += -m$(BITS)
KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
endif
-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard=tls
-ifdef CONFIG_PPC64
-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r13
-else
-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r2
-endif
-
LDFLAGS_vmlinux-y := -Bstatic
LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) += -z notext
@@ -389,9 +382,11 @@ prepare: stack_protector_prepare
PHONY += stack_protector_prepare
stack_protector_prepare: prepare0
ifdef CONFIG_PPC64
- $(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ $(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 \
+ -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
else
- $(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ $(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 \
+ -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
endif
endif
diff --git a/arch/powerpc/include/asm/dtl.h b/arch/powerpc/include/asm/dtl.h
index d6f43d149f8d..a5c21bc623cb 100644
--- a/arch/powerpc/include/asm/dtl.h
+++ b/arch/powerpc/include/asm/dtl.h
@@ -1,8 +1,8 @@
#ifndef _ASM_POWERPC_DTL_H
#define _ASM_POWERPC_DTL_H
+#include <linux/rwsem.h>
#include <asm/lppaca.h>
-#include <linux/spinlock_types.h>
/*
* Layout of entries in the hypervisor's dispatch trace log buffer.
@@ -35,7 +35,7 @@ struct dtl_entry {
#define DTL_LOG_ALL (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)
extern struct kmem_cache *dtl_cache;
-extern rwlock_t dtl_access_lock;
+extern struct rw_semaphore dtl_access_lock;
extern void register_dtl_buffer(int cpu);
extern void alloc_dtl_buffers(unsigned long *time_limit);
diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
index 526a6a647312..daa44b2ef35a 100644
--- a/arch/powerpc/include/asm/fadump.h
+++ b/arch/powerpc/include/asm/fadump.h
@@ -32,4 +32,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
int depth, void *data);
extern int fadump_reserve_mem(void);
#endif
+
+#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
+void fadump_cma_init(void);
+#else
+static inline void fadump_cma_init(void) { }
+#endif
+
#endif /* _ASM_POWERPC_FADUMP_H */
diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
index 50950deedb87..e3d0e714ff28 100644
--- a/arch/powerpc/include/asm/sstep.h
+++ b/arch/powerpc/include/asm/sstep.h
@@ -173,9 +173,4 @@ int emulate_step(struct pt_regs *regs, ppc_inst_t instr);
*/
extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);
-extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
- const void *mem, bool cross_endian);
-extern void emulate_vsx_store(struct instruction_op *op,
- const union vsx_reg *reg, void *mem,
- bool cross_endian);
extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs);
diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
index 7650b6ce14c8..8d972bc98b55 100644
--- a/arch/powerpc/include/asm/vdso.h
+++ b/arch/powerpc/include/asm/vdso.h
@@ -25,6 +25,7 @@ int vdso_getcpu_init(void);
#ifdef __VDSO64__
#define V_FUNCTION_BEGIN(name) \
.globl name; \
+ .type name,@function; \
name: \
#define V_FUNCTION_END(name) \
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index c3fb9fdf5bd7..a84e75fff1df 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -857,7 +857,7 @@ bool __init dt_cpu_ftrs_init(void *fdt)
using_dt_cpu_ftrs = false;
/* Setup and verify the FDT, if it fails we just bail */
- if (!early_init_dt_verify(fdt))
+ if (!early_init_dt_verify(fdt, __pa(fdt)))
return false;
if (!of_scan_flat_dt(fdt_find_cpu_features, NULL))
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 3ff2da7b120b..1866bac23400 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -80,27 +80,23 @@ static struct cma *fadump_cma;
* But for some reason even if it fails we still have the memory reservation
* with us and we can still continue doing fadump.
*/
-static int __init fadump_cma_init(void)
+void __init fadump_cma_init(void)
{
unsigned long long base, size;
int rc;
- if (!fw_dump.fadump_enabled)
- return 0;
-
+ if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
+ fw_dump.dump_active)
+ return;
/*
* Do not use CMA if user has provided fadump=nocma kernel parameter.
- * Return 1 to continue with fadump old behaviour.
*/
- if (fw_dump.nocma)
- return 1;
+ if (fw_dump.nocma || !fw_dump.boot_memory_size)
+ return;
base = fw_dump.reserve_dump_area_start;
size = fw_dump.boot_memory_size;
- if (!size)
- return 0;
-
rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
if (rc) {
pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
@@ -110,7 +106,7 @@ static int __init fadump_cma_init(void)
* blocked from production system usage. Hence return 1,
* so that we can continue with fadump.
*/
- return 1;
+ return;
}
/*
@@ -127,10 +123,7 @@ static int __init fadump_cma_init(void)
cma_get_size(fadump_cma),
(unsigned long)cma_get_base(fadump_cma) >> 20,
fw_dump.reserve_dump_area_size);
- return 1;
}
-#else
-static int __init fadump_cma_init(void) { return 1; }
#endif /* CONFIG_CMA */
/* Scan the Firmware Assisted dump configuration details. */
@@ -647,8 +640,6 @@ int __init fadump_reserve_mem(void)
pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
(size >> 20), base, (memblock_phys_mem_size() >> 20));
-
- ret = fadump_cma_init();
}
return ret;
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index bf6d8ad3819e..7d5eccf3f80d 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -781,7 +781,7 @@ void __init early_init_devtree(void *params)
DBG(" -> early_init_devtree(%px)\n", params);
/* Too early to BUG_ON(), do it by hand */
- if (!early_init_dt_verify(params))
+ if (!early_init_dt_verify(params, __pa(params)))
panic("BUG: Failed verifying flat device tree, bad version?");
of_scan_flat_dt(early_init_dt_scan_model, NULL);
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 03eaad5949f1..d43db8150767 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -988,9 +988,11 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
/*
- * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
- * be called after initmem_init(), so that pageblock_order is initialised.
+ * Reserve large chunks of memory for use by CMA for fadump, KVM and
+ * hugetlb. These must be called after initmem_init(), so that
+ * pageblock_order is initialised.
*/
+ fadump_cma_init();
kvm_cma_reserve();
gigantic_hugetlb_cma_reserve();
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 394f209536ce..a70fe3969964 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -924,6 +924,7 @@ static int __init disable_hardlockup_detector(void)
hardlockup_detector_disable();
#else
if (firmware_has_feature(FW_FEATURE_LPAR)) {
+ check_kvm_guest();
if (is_kvm_guest())
hardlockup_detector_disable();
}
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index f420df7888a7..7ab4e2fb28b1 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -123,8 +123,6 @@ SECTIONS
*/
*(.sfpr);
*(.text.asan.* .text.tsan.*)
- MEM_KEEP(init.text)
- MEM_KEEP(exit.text)
} :text
. = ALIGN(PAGE_SIZE);
diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index a3de5369d22c..7b71737ae24c 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -916,13 +916,18 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
if (dn) {
u64 val;
- of_property_read_u64(dn, "opal-base-address", &val);
+ ret = of_property_read_u64(dn, "opal-base-address", &val);
+ if (ret)
+ goto out;
+
ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
sizeof(val), false);
if (ret)
goto out;
- of_property_read_u64(dn, "opal-entry-address", &val);
+ ret = of_property_read_u64(dn, "opal-entry-address", &val);
+ if (ret)
+ goto out;
ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val,
sizeof(val), false);
}
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 1bb00c721544..924689fa5efa 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4090,6 +4090,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
}
hvregs.hdec_expiry = time_limit;
+ /*
+ * hvregs has the doorbell status, so zero it here which
+ * enables us to receive doorbells when H_ENTER_NESTED is
+ * in progress for this vCPU
+ */
+
+ if (vcpu->arch.doorbell_request)
+ vcpu->arch.doorbell_request = 0;
+
/*
* When setting DEC, we must always deal with irq_work_raise
* via NMI vs setting DEC. The problem occurs right as we
@@ -4678,7 +4687,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
lpcr |= LPCR_MER;
}
} else if (vcpu->arch.pending_exceptions ||
- vcpu->arch.doorbell_request ||
xive_interrupt_pending(vcpu)) {
vcpu->arch.ret = RESUME_HOST;
goto out;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 377d0b4a05ee..49144129da42 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -32,7 +32,7 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
struct kvmppc_vcore *vc = vcpu->arch.vcore;
hr->pcr = vc->pcr | PCR_MASK;
- hr->dpdes = vc->dpdes;
+ hr->dpdes = vcpu->arch.doorbell_request;
hr->hfscr = vcpu->arch.hfscr;
hr->tb_offset = vc->tb_offset;
hr->dawr0 = vcpu->arch.dawr0;
@@ -105,7 +105,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
- hr->dpdes = vc->dpdes;
+ hr->dpdes = vcpu->arch.doorbell_request;
hr->purr = vcpu->arch.purr;
hr->spurr = vcpu->arch.spurr;
hr->ic = vcpu->arch.ic;
@@ -143,7 +143,7 @@ static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *
struct kvmppc_vcore *vc = vcpu->arch.vcore;
vc->pcr = hr->pcr | PCR_MASK;
- vc->dpdes = hr->dpdes;
+ vcpu->arch.doorbell_request = hr->dpdes;
vcpu->arch.hfscr = hr->hfscr;
vcpu->arch.dawr0 = hr->dawr0;
vcpu->arch.dawrx0 = hr->dawrx0;
@@ -170,7 +170,13 @@ void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
{
struct kvmppc_vcore *vc = vcpu->arch.vcore;
- vc->dpdes = hr->dpdes;
+ /*
+ * This L2 vCPU might have received a doorbell while H_ENTER_NESTED was being handled.
+ * Make sure we preserve the doorbell if it was either:
+ * a) Sent after H_ENTER_NESTED was called on this vCPU (arch.doorbell_request would be 1)
+ * b) Doorbell was not handled and L2 exited for some other reason (hr->dpdes would be 1)
+ */
+ vcpu->arch.doorbell_request = vcpu->arch.doorbell_request | hr->dpdes;
vcpu->arch.hfscr = hr->hfscr;
vcpu->arch.purr = hr->purr;
vcpu->arch.spurr = hr->spurr;
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 6af97dc0f6d5..efbf18078870 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -780,8 +780,8 @@ static nokprobe_inline int emulate_stq(struct pt_regs *regs, unsigned long ea,
#endif /* __powerpc64 */
#ifdef CONFIG_VSX
-void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
- const void *mem, bool rev)
+static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+ const void *mem, bool rev)
{
int size, read_size;
int i, j;
@@ -863,11 +863,9 @@ void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
break;
}
}
-EXPORT_SYMBOL_GPL(emulate_vsx_load);
-NOKPROBE_SYMBOL(emulate_vsx_load);
-void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
- void *mem, bool rev)
+static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+ void *mem, bool rev)
{
int size, write_size;
int i, j;
@@ -955,8 +953,6 @@ void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
break;
}
}
-EXPORT_SYMBOL_GPL(emulate_vsx_store);
-NOKPROBE_SYMBOL(emulate_vsx_store);
static nokprobe_inline int do_vsx_load(struct instruction_op *op,
unsigned long ea, struct pt_regs *regs,
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index b1723094d464..d3e0f5b3ecc7 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -431,10 +431,16 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
/*
* The kernel should never take an execute fault nor should it
* take a page fault to a kernel address or a page fault to a user
- * address outside of dedicated places
+ * address outside of dedicated places.
+ *
+ * Rather than kfence directly reporting false negatives, search whether
+ * the NIP belongs to the fixup table for cases where fault could come
+ * from functions like copy_from_kernel_nofault().
*/
if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address, is_write))) {
- if (kfence_handle_page_fault(address, is_write, regs))
+ if (is_kfence_address((void *)address) &&
+ !search_exception_tables(instruction_pointer(regs)) &&
+ kfence_handle_page_fault(address, is_write, regs))
return 0;
return SIGSEGV;
diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
index 3f1cdccebc9c..ecc04ef8c53e 100644
--- a/arch/powerpc/platforms/pseries/dtl.c
+++ b/arch/powerpc/platforms/pseries/dtl.c
@@ -191,7 +191,7 @@ static int dtl_enable(struct dtl *dtl)
return -EBUSY;
/* ensure there are no other conflicting dtl users */
- if (!read_trylock(&dtl_access_lock))
+ if (!down_read_trylock(&dtl_access_lock))
return -EBUSY;
n_entries = dtl_buf_entries;
@@ -199,7 +199,7 @@ static int dtl_enable(struct dtl *dtl)
if (!buf) {
printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n",
__func__, dtl->cpu);
- read_unlock(&dtl_access_lock);
+ up_read(&dtl_access_lock);
return -ENOMEM;
}
@@ -217,7 +217,7 @@ static int dtl_enable(struct dtl *dtl)
spin_unlock(&dtl->lock);
if (rc) {
- read_unlock(&dtl_access_lock);
+ up_read(&dtl_access_lock);
kmem_cache_free(dtl_cache, buf);
}
@@ -232,7 +232,7 @@ static void dtl_disable(struct dtl *dtl)
dtl->buf = NULL;
dtl->buf_entries = 0;
spin_unlock(&dtl->lock);
- read_unlock(&dtl_access_lock);
+ up_read(&dtl_access_lock);
}
/* file interface */
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index c3585e90c6db..cade33aef414 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -169,7 +169,7 @@ struct vcpu_dispatch_data {
*/
#define NR_CPUS_H NR_CPUS
-DEFINE_RWLOCK(dtl_access_lock);
+DECLARE_RWSEM(dtl_access_lock);
static DEFINE_PER_CPU(struct vcpu_dispatch_data, vcpu_disp_data);
static DEFINE_PER_CPU(u64, dtl_entry_ridx);
static DEFINE_PER_CPU(struct dtl_worker, dtl_workers);
@@ -463,7 +463,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
{
int rc = 0, state;
- if (!write_trylock(&dtl_access_lock)) {
+ if (!down_write_trylock(&dtl_access_lock)) {
rc = -EBUSY;
goto out;
}
@@ -479,7 +479,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
pr_err("vcpudispatch_stats: unable to setup workqueue for DTL processing\n");
free_dtl_buffers(time_limit);
reset_global_dtl_mask();
- write_unlock(&dtl_access_lock);
+ up_write(&dtl_access_lock);
rc = -EINVAL;
goto out;
}
@@ -494,7 +494,7 @@ static void dtl_worker_disable(unsigned long *time_limit)
cpuhp_remove_state(dtl_worker_state);
free_dtl_buffers(time_limit);
reset_global_dtl_mask();
- write_unlock(&dtl_access_lock);
+ up_write(&dtl_access_lock);
}
static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,
diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
index ed492d38f6ad..fe7a43a8a1f4 100644
--- a/arch/powerpc/platforms/pseries/plpks.c
+++ b/arch/powerpc/platforms/pseries/plpks.c
@@ -683,7 +683,7 @@ void __init plpks_early_init_devtree(void)
out:
fdt_nop_property(fdt, chosen_node, "ibm,plpks-pw");
// Since we've cleared the password, we must update the FDT checksum
- early_init_dt_verify(fdt);
+ early_init_dt_verify(fdt, __pa(fdt));
}
static __init int pseries_plpks_init(void)
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index ddadee6621f0..1fa501b7d0c8 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -246,7 +246,7 @@ static void __init init_resources(void)
static void __init parse_dtb(void)
{
/* Early scan of device tree from init memory */
- if (early_init_dt_scan(dtb_early_va)) {
+ if (early_init_dt_scan(dtb_early_va, __pa(dtb_early_va))) {
const char *name = of_flat_dt_get_machine_name();
if (name) {
diff --git a/arch/riscv/kvm/aia_aplic.c b/arch/riscv/kvm/aia_aplic.c
index b467ba5ed910..9d5b04c971c4 100644
--- a/arch/riscv/kvm/aia_aplic.c
+++ b/arch/riscv/kvm/aia_aplic.c
@@ -143,7 +143,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
if (sm == APLIC_SOURCECFG_SM_LEVEL_HIGH ||
sm == APLIC_SOURCECFG_SM_LEVEL_LOW) {
if (!pending)
- goto skip_write_pending;
+ goto noskip_write_pending;
if ((irqd->state & APLIC_IRQ_STATE_INPUT) &&
sm == APLIC_SOURCECFG_SM_LEVEL_LOW)
goto skip_write_pending;
@@ -152,6 +152,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
goto skip_write_pending;
}
+noskip_write_pending:
if (pending)
irqd->state |= APLIC_IRQ_STATE_PENDING;
else
diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h
index 06fbabe2f66c..cb4cc0f59012 100644
--- a/arch/s390/include/asm/set_memory.h
+++ b/arch/s390/include/asm/set_memory.h
@@ -62,5 +62,6 @@ __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);
+bool kernel_page_present(struct page *page);
#endif
diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
index 26c08ee87740..ebad8c8b8c57 100644
--- a/arch/s390/kernel/entry.S
+++ b/arch/s390/kernel/entry.S
@@ -458,9 +458,13 @@ SYM_CODE_START(\name)
SYM_CODE_END(\name)
.endm
+ .section .irqentry.text, "ax"
+
INT_HANDLER ext_int_handler,__LC_EXT_OLD_PSW,do_ext_irq
INT_HANDLER io_int_handler,__LC_IO_OLD_PSW,do_io_irq
+ .section .kprobes.text, "ax"
+
/*
* Load idle PSW.
*/
diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c
index d4b863ed0aa7..cb149a64dba6 100644
--- a/arch/s390/kernel/kprobes.c
+++ b/arch/s390/kernel/kprobes.c
@@ -518,6 +518,12 @@ int __init arch_init_kprobes(void)
return 0;
}
+int __init arch_populate_kprobe_blacklist(void)
+{
+ return kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
+ (unsigned long)__irqentry_text_end);
+}
+
int arch_trampoline_kprobe(struct kprobe *p)
{
return 0;
diff --git a/arch/s390/kernel/syscalls/Makefile b/arch/s390/kernel/syscalls/Makefile
index fb85e797946d..2bd7756288df 100644
--- a/arch/s390/kernel/syscalls/Makefile
+++ b/arch/s390/kernel/syscalls/Makefile
@@ -12,7 +12,7 @@ kapi-hdrs-y := $(kapi)/unistd_nr.h
uapi-hdrs-y := $(uapi)/unistd_32.h
uapi-hdrs-y += $(uapi)/unistd_64.h
-targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
+targets += $(addprefix ../../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
PHONY += kapi uapi
diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
index 441f654d048d..44271835c97e 100644
--- a/arch/s390/mm/pageattr.c
+++ b/arch/s390/mm/pageattr.c
@@ -406,6 +406,21 @@ int set_direct_map_default_noflush(struct page *page)
return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF);
}
+bool kernel_page_present(struct page *page)
+{
+ unsigned long addr;
+ unsigned int cc;
+
+ addr = (unsigned long)page_address(page);
+ asm volatile(
+ " lra %[addr],0(%[addr])\n"
+ " ipm %[cc]\n"
+ : [cc] "=d" (cc), [addr] "+a" (addr)
+ :
+ : "cc");
+ return (cc >> 28) == 0;
+}
+
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
static void ipte_range(pte_t *pte, unsigned long address, int nr)
diff --git a/arch/sh/kernel/cpu/proc.c b/arch/sh/kernel/cpu/proc.c
index a306bcd6b341..5f6d0e827bae 100644
--- a/arch/sh/kernel/cpu/proc.c
+++ b/arch/sh/kernel/cpu/proc.c
@@ -132,7 +132,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
static void *c_start(struct seq_file *m, loff_t *pos)
{
- return *pos < NR_CPUS ? cpu_data + *pos : NULL;
+ return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
}
static void *c_next(struct seq_file *m, void *v, loff_t *pos)
{
diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
index b3da2757faaf..1fb59c69b97c 100644
--- a/arch/sh/kernel/setup.c
+++ b/arch/sh/kernel/setup.c
@@ -260,7 +260,7 @@ void __ref sh_fdt_init(phys_addr_t dt_phys)
dt_virt = phys_to_virt(dt_phys);
#endif
- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
+ if (!dt_virt || !early_init_dt_scan(dt_virt, __pa(dt_virt))) {
pr_crit("Error: invalid device tree blob"
" at physical address %p\n", (void *)dt_phys);
diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
index cabcc501b448..de187e58dc2b 100644
--- a/arch/um/drivers/net_kern.c
+++ b/arch/um/drivers/net_kern.c
@@ -336,7 +336,7 @@ static struct platform_driver uml_net_driver = {
static void net_device_release(struct device *dev)
{
- struct uml_net *device = dev_get_drvdata(dev);
+ struct uml_net *device = container_of(dev, struct uml_net, pdev.dev);
struct net_device *netdev = device->dev;
struct uml_net_private *lp = netdev_priv(netdev);
diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index ef7b4b911a45..7ddda2c0ae43 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -799,7 +799,7 @@ static int ubd_open_dev(struct ubd *ubd_dev)
static void ubd_device_release(struct device *dev)
{
- struct ubd *ubd_dev = dev_get_drvdata(dev);
+ struct ubd *ubd_dev = container_of(dev, struct ubd, pdev.dev);
blk_mq_free_tag_set(&ubd_dev->tag_set);
*ubd_dev = ((struct ubd) DEFAULT_UBD);
diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
index 94a4dfac6c23..2baa8d4a33ed 100644
--- a/arch/um/drivers/vector_kern.c
+++ b/arch/um/drivers/vector_kern.c
@@ -823,7 +823,8 @@ static struct platform_driver uml_net_driver = {
static void vector_device_release(struct device *dev)
{
- struct vector_device *device = dev_get_drvdata(dev);
+ struct vector_device *device =
+ container_of(dev, struct vector_device, pdev.dev);
struct net_device *netdev = device->dev;
list_del(&device->list);
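/*
 * Editor's note -- illustrative sketch only, not part of the patch above.
 * The three UML release callbacks above switch from dev_get_drvdata() to
 * container_of() on the embedded platform device, so they no longer depend
 * on drvdata still being populated when ->release() runs. The standalone
 * example below (hypothetical struct names, userspace C) shows the same
 * container_of() pattern: recovering the enclosing object from a pointer to
 * one of its members.
 */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_device {
	int id;
};

struct fake_uml_net {			/* stands in for struct uml_net */
	const char *name;
	struct fake_device pdev_dev;	/* embedded, like uml_net.pdev.dev */
};

static void fake_release(struct fake_device *dev)
{
	struct fake_uml_net *net =
		container_of(dev, struct fake_uml_net, pdev_dev);

	printf("releasing %s (id %d)\n", net->name, dev->id);
}

int main(void)
{
	struct fake_uml_net net = { .name = "vec0", .pdev_dev = { .id = 7 } };

	fake_release(&net.pdev_dev);
	return 0;
}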
diff --git a/arch/um/kernel/dtb.c b/arch/um/kernel/dtb.c
index 484141b06938..8d78ced9e08f 100644
--- a/arch/um/kernel/dtb.c
+++ b/arch/um/kernel/dtb.c
@@ -16,16 +16,16 @@ void uml_dtb_init(void)
void *area;
area = uml_load_file(dtb, &size);
- if (!area)
- return;
-
- if (!early_init_dt_scan(area)) {
- pr_err("invalid DTB %s\n", dtb);
- memblock_free(area, size);
- return;
+ if (area) {
+ if (!early_init_dt_scan(area, __pa(area))) {
+ pr_err("invalid DTB %s\n", dtb);
+ memblock_free(area, size);
+ return;
+ }
+
+ early_init_fdt_scan_reserved_mem();
}
- early_init_fdt_scan_reserved_mem();
unflatten_device_tree();
}
diff --git a/arch/um/kernel/physmem.c b/arch/um/kernel/physmem.c
index 91485119ae67..4339580f5a4f 100644
--- a/arch/um/kernel/physmem.c
+++ b/arch/um/kernel/physmem.c
@@ -80,10 +80,10 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
unsigned long len, unsigned long long highmem)
{
unsigned long reserve = reserve_end - start;
- long map_size = len - reserve;
+ unsigned long map_size = len - reserve;
int err;
- if(map_size <= 0) {
+ if (len <= reserve) {
os_warn("Too few physical memory! Needed=%lu, given=%lu\n",
reserve, len);
exit(1);
@@ -94,7 +94,7 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
err = os_map_memory((void *) reserve_end, physmem_fd, reserve,
map_size, 1, 1, 1);
if (err < 0) {
- os_warn("setup_physmem - mapping %ld bytes of memory at 0x%p "
+ os_warn("setup_physmem - mapping %lu bytes of memory at 0x%p "
"failed - errno = %d\n", map_size,
(void *) reserve_end, err);
exit(1);
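/*
 * Editor's note -- illustrative sketch only, not part of the patch above.
 * The setup_physmem() hunk replaces "long map_size = len - reserve;
 * if (map_size <= 0)" with an unsigned map_size and an explicit
 * "if (len <= reserve)" check. The standalone example below (userspace C)
 * shows the underlying pitfall: subtracting unsigned values wraps around,
 * so the operands must be compared before the difference is trusted.
 */
#include <stdio.h>

static void check(unsigned long len, unsigned long reserve)
{
	if (len <= reserve) {
		/* len - reserve would wrap to a huge positive value here */
		printf("too little memory: need more than %lu, have %lu\n",
		       reserve, len);
		return;
	}
	printf("map_size = %lu\n", len - reserve);
}

int main(void)
{
	check(100, 200);	/* would wrap without the explicit comparison */
	check(300, 200);	/* map_size = 100 */
	return 0;
}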
diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
index 6daffb9d8a8d..afe67d816146 100644
--- a/arch/um/kernel/process.c
+++ b/arch/um/kernel/process.c
@@ -397,6 +397,6 @@ int elf_core_copy_task_fpregs(struct task_struct *t, elf_fpregset_t *fpu)
{
int cpu = current_thread_info()->cpu;
- return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu);
+ return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu) == 0;
}
diff --git a/arch/um/kernel/sysrq.c b/arch/um/kernel/sysrq.c
index 746715379f12..7e897e44a03d 100644
--- a/arch/um/kernel/sysrq.c
+++ b/arch/um/kernel/sysrq.c
@@ -53,5 +53,5 @@ void show_stack(struct task_struct *task, unsigned long *stack,
}
printk("%sCall Trace:\n", loglvl);
- dump_trace(current, &stackops, (void *)loglvl);
+ dump_trace(task ?: current, &stackops, (void *)loglvl);
}
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 3ff53a2d4ff0..c83582b5a010 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -113,7 +113,8 @@ ifeq ($(CONFIG_X86_32),y)
ifeq ($(CONFIG_STACKPROTECTOR),y)
ifeq ($(CONFIG_SMP),y)
- KBUILD_CFLAGS += -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard
+ KBUILD_CFLAGS += -mstack-protector-guard-reg=fs \
+ -mstack-protector-guard-symbol=__ref_stack_chk_guard
else
KBUILD_CFLAGS += -mstack-protector-guard=global
endif
diff --git a/arch/x86/coco/tdx/tdcall.S b/arch/x86/coco/tdx/tdcall.S
index 2eca5f43734f..56b9cd32895e 100644
--- a/arch/x86/coco/tdx/tdcall.S
+++ b/arch/x86/coco/tdx/tdcall.S
@@ -40,49 +40,35 @@
.section .noinstr.text, "ax"
/*
- * __tdx_module_call() - Used by TDX guests to request services from
- * the TDX module (does not include VMM services) using TDCALL instruction.
+ * __tdcall() - Used by TDX guests to request services from the TDX
+ * module (does not include VMM services) using TDCALL instruction.
*
- * Transforms function call register arguments into the TDCALL register ABI.
- * After TDCALL operation, TDX module output is saved in @out (if it is
- * provided by the user).
+ * __tdcall() function ABI:
*
- *-------------------------------------------------------------------------
- * TDCALL ABI:
- *-------------------------------------------------------------------------
- * Input Registers:
- *
- * RAX - TDCALL Leaf number.
- * RCX,RDX,R8-R9 - TDCALL Leaf specific input registers.
- *
- * Output Registers:
- *
- * RAX - TDCALL instruction error code.
- * RCX,RDX,R8-R11 - TDCALL Leaf specific output registers.
- *
- *-------------------------------------------------------------------------
+ * @fn (RDI) - TDCALL Leaf ID, moved to RAX
+ * @args (RSI) - struct tdx_module_args for input
*
- * __tdx_module_call() function ABI:
+ * Return status of TDCALL via RAX.
+ */
+SYM_FUNC_START(__tdcall)
+ TDX_MODULE_CALL host=0
+SYM_FUNC_END(__tdcall)
+
+/*
+ * __tdcall_ret() - Used by TDX guests to request services from the TDX
+ * module (does not include VMM services) using TDCALL instruction, with
+ * saving output registers to the 'struct tdx_module_args' used as input.
*
- * @fn (RDI) - TDCALL Leaf ID, moved to RAX
- * @rcx (RSI) - Input parameter 1, moved to RCX
- * @rdx (RDX) - Input parameter 2, moved to RDX
- * @r8 (RCX) - Input parameter 3, moved to R8
- * @r9 (R8) - Input parameter 4, moved to R9
+ * __tdcall_ret() function ABI:
*
- * @out (R9) - struct tdx_module_output pointer
- * stored temporarily in R12 (not
- * shared with the TDX module). It
- * can be NULL.
+ * @fn (RDI) - TDCALL Leaf ID, moved to RAX
+ * @args (RSI) - struct tdx_module_args for input and output
*
* Return status of TDCALL via RAX.
*/
-SYM_FUNC_START(__tdx_module_call)
- FRAME_BEGIN
- TDX_MODULE_CALL host=0
- FRAME_END
- RET
-SYM_FUNC_END(__tdx_module_call)
+SYM_FUNC_START(__tdcall_ret)
+ TDX_MODULE_CALL host=0 ret=1
+SYM_FUNC_END(__tdcall_ret)
/*
* TDX_HYPERCALL - Make hypercalls to a TDX VMM using TDVMCALL leaf of TDCALL
diff --git a/arch/x86/coco/tdx/tdx-shared.c b/arch/x86/coco/tdx/tdx-shared.c
index ef20ddc37b58..a7396d0ddef9 100644
--- a/arch/x86/coco/tdx/tdx-shared.c
+++ b/arch/x86/coco/tdx/tdx-shared.c
@@ -5,7 +5,7 @@ static unsigned long try_accept_one(phys_addr_t start, unsigned long len,
enum pg_level pg_level)
{
unsigned long accept_size = page_level_size(pg_level);
- u64 tdcall_rcx;
+ struct tdx_module_args args = {};
u8 page_size;
if (!IS_ALIGNED(start, accept_size))
@@ -34,8 +34,8 @@ static unsigned long try_accept_one(phys_addr_t start, unsigned long len,
return 0;
}
- tdcall_rcx = start | page_size;
- if (__tdx_module_call(TDX_ACCEPT_PAGE, tdcall_rcx, 0, 0, 0, NULL))
+ args.rcx = start | page_size;
+ if (__tdcall(TDG_MEM_PAGE_ACCEPT, &args))
return 0;
return accept_size;
@@ -45,7 +45,7 @@ bool tdx_accept_memory(phys_addr_t start, phys_addr_t end)
{
/*
* For shared->private conversion, accept the page using
- * TDX_ACCEPT_PAGE TDX module call.
+ * TDG_MEM_PAGE_ACCEPT TDX module call.
*/
while (start < end) {
unsigned long len = end - start;
diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 905ac8a3f716..2f67e196a2ea 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -68,13 +68,38 @@ EXPORT_SYMBOL_GPL(tdx_kvm_hypercall);
* should only be used for calls that have no legitimate reason to fail
* or where the kernel can not survive the call failing.
*/
-static inline void tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
- struct tdx_module_output *out)
+static inline void tdcall(u64 fn, struct tdx_module_args *args)
{
- if (__tdx_module_call(fn, rcx, rdx, r8, r9, out))
+ if (__tdcall_ret(fn, args))
panic("TDCALL %lld failed (Buggy TDX module!)\n", fn);
}
+/* Read TD-scoped metadata */
+static inline u64 tdg_vm_rd(u64 field, u64 *value)
+{
+ struct tdx_module_args args = {
+ .rdx = field,
+ };
+ u64 ret;
+
+ ret = __tdcall_ret(TDG_VM_RD, &args);
+ *value = args.r8;
+
+ return ret;
+}
+
+/* Write TD-scoped metadata */
+static inline u64 tdg_vm_wr(u64 field, u64 value, u64 mask)
+{
+ struct tdx_module_args args = {
+ .rdx = field,
+ .r8 = value,
+ .r9 = mask,
+ };
+
+ return __tdcall(TDG_VM_WR, &args);
+}
+
/**
* tdx_mcall_get_report0() - Wrapper to get TDREPORT0 (a.k.a. TDREPORT
* subtype 0) using TDG.MR.REPORT TDCALL.
@@ -91,11 +116,14 @@ static inline void tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
*/
int tdx_mcall_get_report0(u8 *reportdata, u8 *tdreport)
{
+ struct tdx_module_args args = {
+ .rcx = virt_to_phys(tdreport),
+ .rdx = virt_to_phys(reportdata),
+ .r8 = TDREPORT_SUBTYPE_0,
+ };
u64 ret;
- ret = __tdx_module_call(TDX_GET_REPORT, virt_to_phys(tdreport),
- virt_to_phys(reportdata), TDREPORT_SUBTYPE_0,
- 0, NULL);
+ ret = __tdcall(TDG_MR_REPORT, &args);
if (ret) {
if (TDCALL_RETURN_CODE(ret) == TDCALL_INVALID_OPERAND)
return -EINVAL;
@@ -141,9 +169,63 @@ static void __noreturn tdx_panic(const char *msg)
__tdx_hypercall(&args);
}
-static void tdx_parse_tdinfo(u64 *cc_mask)
+/*
+ * The kernel cannot handle #VEs when accessing normal kernel memory. Ensure
+ * that no #VE will be delivered for accesses to TD-private memory.
+ *
+ * TDX 1.0 does not allow the guest to disable SEPT #VE on its own. The VMM
+ * controls if the guest will receive such #VE with TD attribute
+ * ATTR_SEPT_VE_DISABLE.
+ *
+ * Newer TDX modules allow the guest to control if it wants to receive SEPT
+ * violation #VEs.
+ *
+ * Check if the feature is available and disable SEPT #VE if possible.
+ *
+ * If the TD is allowed to disable/enable SEPT #VEs, the ATTR_SEPT_VE_DISABLE
+ * attribute is no longer reliable. It reflects the initial state of the
+ * control for the TD, but it will not be updated if someone (e.g. bootloader)
+ * changes it before the kernel starts. Kernel must check TDCS_TD_CTLS bit to
+ * determine if SEPT #VEs are enabled or disabled.
+ */
+static void disable_sept_ve(u64 td_attr)
+{
+ const char *msg = "TD misconfiguration: SEPT #VE has to be disabled";
+ bool debug = td_attr & ATTR_DEBUG;
+ u64 config, controls;
+
+ /* Is this TD allowed to disable SEPT #VE */
+ tdg_vm_rd(TDCS_CONFIG_FLAGS, &config);
+ if (!(config & TDCS_CONFIG_FLEXIBLE_PENDING_VE)) {
+ /* No SEPT #VE controls for the guest: check the attribute */
+ if (td_attr & ATTR_SEPT_VE_DISABLE)
+ return;
+
+ /* Relax SEPT_VE_DISABLE check for debug TD for backtraces */
+ if (debug)
+ pr_warn("%s\n", msg);
+ else
+ tdx_panic(msg);
+ return;
+ }
+
+ /* Check if SEPT #VE has been disabled before us */
+ tdg_vm_rd(TDCS_TD_CTLS, &controls);
+ if (controls & TD_CTLS_PENDING_VE_DISABLE)
+ return;
+
+ /* Keep #VEs enabled for splats in debugging environments */
+ if (debug)
+ return;
+
+ /* Disable SEPT #VEs */
+ tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_PENDING_VE_DISABLE,
+ TD_CTLS_PENDING_VE_DISABLE);
+}
+
+static void tdx_setup(u64 *cc_mask)
{
- struct tdx_module_output out;
+ struct tdx_module_args args = {};
unsigned int gpa_width;
u64 td_attr;
@@ -154,7 +236,7 @@ static void tdx_parse_tdinfo(u64 *cc_mask)
* Guest-Host-Communication Interface (GHCI), section 2.4.2 TDCALL
* [TDG.VP.INFO].
*/
- tdx_module_call(TDX_GET_INFO, 0, 0, 0, 0, &out);
+ tdcall(TDG_VP_INFO, &args);
/*
* The highest bit of a guest physical address is the "sharing" bit.
@@ -163,24 +245,15 @@ static void tdx_parse_tdinfo(u64 *cc_mask)
* The GPA width that comes out of this call is critical. TDX guests
* can not meaningfully run without it.
*/
- gpa_width = out.rcx & GENMASK(5, 0);
+ gpa_width = args.rcx & GENMASK(5, 0);
*cc_mask = BIT_ULL(gpa_width - 1);
- /*
- * The kernel can not handle #VE's when accessing normal kernel
- * memory. Ensure that no #VE will be delivered for accesses to
- * TD-private memory. Only VMM-shared memory (MMIO) will #VE.
- */
- td_attr = out.rdx;
- if (!(td_attr & ATTR_SEPT_VE_DISABLE)) {
- const char *msg = "TD misconfiguration: SEPT_VE_DISABLE attribute must be set.";
+ td_attr = args.rdx;
- /* Relax SEPT_VE_DISABLE check for debug TD. */
- if (td_attr & ATTR_DEBUG)
- pr_warn("%s\n", msg);
- else
- tdx_panic(msg);
- }
+ /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
+ tdg_vm_wr(TDCS_NOTIFY_ENABLES, 0, -1ULL);
+
+ disable_sept_ve(td_attr);
}
/*
@@ -583,7 +656,7 @@ __init bool tdx_early_handle_ve(struct pt_regs *regs)
void tdx_get_ve_info(struct ve_info *ve)
{
- struct tdx_module_output out;
+ struct tdx_module_args args = {};
/*
* Called during #VE handling to retrieve the #VE info from the
@@ -600,15 +673,15 @@ void tdx_get_ve_info(struct ve_info *ve)
* Note, the TDX module treats virtual NMIs as inhibited if the #VE
* valid flag is set. It means that NMI=>#VE will not result in a #DF.
*/
- tdx_module_call(TDX_GET_VEINFO, 0, 0, 0, 0, &out);
+ tdcall(TDG_VP_VEINFO_GET, &args);
/* Transfer the output parameters */
- ve->exit_reason = out.rcx;
- ve->exit_qual = out.rdx;
- ve->gla = out.r8;
- ve->gpa = out.r9;
- ve->instr_len = lower_32_bits(out.r10);
- ve->instr_info = upper_32_bits(out.r10);
+ ve->exit_reason = args.rcx;
+ ve->exit_qual = args.rdx;
+ ve->gla = args.r8;
+ ve->gpa = args.r9;
+ ve->instr_len = lower_32_bits(args.r10);
+ ve->instr_info = upper_32_bits(args.r10);
}
/*
@@ -776,11 +849,11 @@ void __init tdx_early_init(void)
setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);
cc_vendor = CC_VENDOR_INTEL;
- tdx_parse_tdinfo(&cc_mask);
- cc_set_mask(cc_mask);
- /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
- tdx_module_call(TDX_WR, 0, TDCS_NOTIFY_ENABLES, 0, -1ULL, NULL);
+ /* Configure the TD */
+ tdx_setup(&cc_mask);
+
+ cc_set_mask(cc_mask);
/*
* All bits above GPA width are reserved and kernel treats shared bit
diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
index ad7f4c891625..2de859173940 100644
--- a/arch/x86/crypto/aegis128-aesni-asm.S
+++ b/arch/x86/crypto/aegis128-aesni-asm.S
@@ -21,7 +21,7 @@
#define T1 %xmm7
#define STATEP %rdi
-#define LEN %rsi
+#define LEN %esi
#define SRC %rdx
#define DST %rcx
@@ -76,32 +76,32 @@ SYM_FUNC_START_LOCAL(__load_partial)
xor %r9d, %r9d
pxor MSG, MSG
- mov LEN, %r8
+ mov LEN, %r8d
and $0x1, %r8
jz .Lld_partial_1
- mov LEN, %r8
+ mov LEN, %r8d
and $0x1E, %r8
add SRC, %r8
mov (%r8), %r9b
.Lld_partial_1:
- mov LEN, %r8
+ mov LEN, %r8d
and $0x2, %r8
jz .Lld_partial_2
- mov LEN, %r8
+ mov LEN, %r8d
and $0x1C, %r8
add SRC, %r8
shl $0x10, %r9
mov (%r8), %r9w
.Lld_partial_2:
- mov LEN, %r8
+ mov LEN, %r8d
and $0x4, %r8
jz .Lld_partial_4
- mov LEN, %r8
+ mov LEN, %r8d
and $0x18, %r8
add SRC, %r8
shl $32, %r9
@@ -111,11 +111,11 @@ SYM_FUNC_START_LOCAL(__load_partial)
.Lld_partial_4:
movq %r9, MSG
- mov LEN, %r8
+ mov LEN, %r8d
and $0x8, %r8
jz .Lld_partial_8
- mov LEN, %r8
+ mov LEN, %r8d
and $0x10, %r8
add SRC, %r8
pslldq $8, MSG
@@ -139,7 +139,7 @@ SYM_FUNC_END(__load_partial)
* %r10
*/
SYM_FUNC_START_LOCAL(__store_partial)
- mov LEN, %r8
+ mov LEN, %r8d
mov DST, %r9
movq T0, %r10
@@ -677,7 +677,7 @@ SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec_tail)
call __store_partial
/* mask with byte count: */
- movq LEN, T0
+ movd LEN, T0
punpcklbw T0, T0
punpcklbw T0, T0
punpcklbw T0, T0
@@ -702,7 +702,8 @@ SYM_FUNC_END(crypto_aegis128_aesni_dec_tail)
/*
* void crypto_aegis128_aesni_final(void *state, void *tag_xor,
- * u64 assoclen, u64 cryptlen);
+ * unsigned int assoclen,
+ * unsigned int cryptlen);
*/
SYM_FUNC_START(crypto_aegis128_aesni_final)
FRAME_BEGIN
@@ -715,8 +716,8 @@ SYM_FUNC_START(crypto_aegis128_aesni_final)
movdqu 0x40(STATEP), STATE4
/* prepare length block: */
- movq %rdx, MSG
- movq %rcx, T0
+ movd %edx, MSG
+ movd %ecx, T0
pslldq $8, T0
pxor T0, MSG
psllq $3, MSG /* multiply by 8 (to get bit count) */
diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
index 34eca8015b64..2143358d0c4c 100644
--- a/arch/x86/entry/entry.S
+++ b/arch/x86/entry/entry.S
@@ -48,3 +48,18 @@ EXPORT_SYMBOL_GPL(mds_verw_sel);
.popsection
+#ifndef CONFIG_X86_64
+/*
+ * Clang's implementation of TLS stack cookies requires the variable in
+ * question to be a TLS variable. If the variable happens to be defined as an
+ * ordinary variable with external linkage in the same compilation unit (which
+ * amounts to the whole of vmlinux with LTO enabled), Clang will drop the
+ * segment register prefix from the references, resulting in broken code. Work
+ * around this by avoiding the symbol used in -mstack-protector-guard-symbol=
+ * entirely in the C code, and use an alias emitted by the linker script
+ * instead.
+ */
+#ifdef CONFIG_STACKPROTECTOR
+EXPORT_SYMBOL(__ref_stack_chk_guard);
+#endif
+#endif
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 688550e336ce..37c8badd2701 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5559,8 +5559,22 @@ default_is_visible(struct kobject *kobj, struct attribute *attr, int i)
return attr->mode;
}
+static umode_t
+td_is_visible(struct kobject *kobj, struct attribute *attr, int i)
+{
+ /*
+ * Hide the perf metrics topdown events
+ * if the feature is not enumerated.
+ */
+ if (x86_pmu.num_topdown_events)
+ return x86_pmu.intel_cap.perf_metrics ? attr->mode : 0;
+
+ return attr->mode;
+}
+
static struct attribute_group group_events_td = {
.name = "events",
+ .is_visible = td_is_visible,
};
static struct attribute_group group_events_mem = {
@@ -5762,9 +5776,27 @@ static umode_t hybrid_format_is_visible(struct kobject *kobj,
return (cpu >= 0) && (pmu->cpu_type & pmu_attr->pmu_type) ? attr->mode : 0;
}
+static umode_t hybrid_td_is_visible(struct kobject *kobj,
+ struct attribute *attr, int i)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct x86_hybrid_pmu *pmu =
+ container_of(dev_get_drvdata(dev), struct x86_hybrid_pmu, pmu);
+
+ if (!is_attr_for_this_pmu(kobj, attr))
+ return 0;
+
+
+ /* Only the big core supports perf metrics */
+ if (pmu->cpu_type == hybrid_big)
+ return pmu->intel_cap.perf_metrics ? attr->mode : 0;
+
+ return attr->mode;
+}
+
static struct attribute_group hybrid_group_events_td = {
.name = "events",
- .is_visible = hybrid_events_is_visible,
+ .is_visible = hybrid_td_is_visible,
};
static struct attribute_group hybrid_group_events_mem = {
diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
index 4110246aba12..7ee8dc80a359 100644
--- a/arch/x86/events/intel/pt.c
+++ b/arch/x86/events/intel/pt.c
@@ -827,11 +827,13 @@ static void pt_buffer_advance(struct pt_buffer *buf)
buf->cur_idx++;
if (buf->cur_idx == buf->cur->last) {
- if (buf->cur == buf->last)
+ if (buf->cur == buf->last) {
buf->cur = buf->first;
- else
+ buf->wrapped = true;
+ } else {
buf->cur = list_entry(buf->cur->list.next, struct topa,
list);
+ }
buf->cur_idx = 0;
}
}
@@ -845,8 +847,11 @@ static void pt_buffer_advance(struct pt_buffer *buf)
static void pt_update_head(struct pt *pt)
{
struct pt_buffer *buf = perf_get_aux(&pt->handle);
+ bool wrapped = buf->wrapped;
u64 topa_idx, base, old;
+ buf->wrapped = false;
+
if (buf->single) {
local_set(&buf->data_size, buf->output_off);
return;
@@ -864,7 +869,7 @@ static void pt_update_head(struct pt *pt)
} else {
old = (local64_xchg(&buf->head, base) &
((buf->nr_pages << PAGE_SHIFT) - 1));
- if (base < old)
+ if (base < old || (base == old && wrapped))
base += buf->nr_pages << PAGE_SHIFT;
local_add(base - old, &buf->data_size);
diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
index f5e46c04c145..a1b6c04b7f68 100644
--- a/arch/x86/events/intel/pt.h
+++ b/arch/x86/events/intel/pt.h
@@ -65,6 +65,7 @@ struct pt_pmu {
* @head: logical write offset inside the buffer
* @snapshot: if this is for a snapshot/overwrite counter
* @single: use Single Range Output instead of ToPA
+ * @wrapped: buffer advance wrapped back to the first topa table
* @stop_pos: STOP topa entry index
* @intr_pos: INT topa entry index
* @stop_te: STOP topa entry pointer
@@ -82,6 +83,7 @@ struct pt_buffer {
local64_t head;
bool snapshot;
bool single;
+ bool wrapped;
long stop_pos, intr_pos;
struct topa_entry *stop_te, *intr_te;
void **data_pages;
diff --git a/arch/x86/include/asm/amd_nb.h b/arch/x86/include/asm/amd_nb.h
index ed0eaf65c437..c8cdc69aae09 100644
--- a/arch/x86/include/asm/amd_nb.h
+++ b/arch/x86/include/asm/amd_nb.h
@@ -116,7 +116,10 @@ static inline bool amd_gart_present(void)
#define amd_nb_num(x) 0
#define amd_nb_has_feature(x) false
-#define node_to_amd_nb(x) NULL
+static inline struct amd_northbridge *node_to_amd_nb(int node)
+{
+ return NULL;
+}
#define amd_gart_present(x) false
#endif
diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
index 0e82074517f6..768076e68668 100644
--- a/arch/x86/include/asm/asm-prototypes.h
+++ b/arch/x86/include/asm/asm-prototypes.h
@@ -19,3 +19,6 @@
extern void cmpxchg8b_emu(void);
#endif
+#if defined(__GENKSYMS__) && defined(CONFIG_STACKPROTECTOR)
+extern unsigned long __ref_stack_chk_guard;
+#endif
diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
index 7513b3bb69b7..aed99fb099d9 100644
--- a/arch/x86/include/asm/shared/tdx.h
+++ b/arch/x86/include/asm/shared/tdx.h
@@ -11,15 +11,24 @@
#define TDX_IDENT "IntelTDX "
/* TDX module Call Leaf IDs */
-#define TDX_GET_INFO 1
-#define TDX_GET_VEINFO 3
-#define TDX_GET_REPORT 4
-#define TDX_ACCEPT_PAGE 6
-#define TDX_WR 8
-
-/* TDCS fields. To be used by TDG.VM.WR and TDG.VM.RD module calls */
+#define TDG_VP_INFO 1
+#define TDG_VP_VEINFO_GET 3
+#define TDG_MR_REPORT 4
+#define TDG_MEM_PAGE_ACCEPT 6
+#define TDG_VM_RD 7
+#define TDG_VM_WR 8
+
+/* TDX TD-Scope Metadata. To be used by TDG.VM.WR and TDG.VM.RD */
+#define TDCS_CONFIG_FLAGS 0x1110000300000016
+#define TDCS_TD_CTLS 0x1110000300000017
#define TDCS_NOTIFY_ENABLES 0x9100000000000010
+/* TDCS_CONFIG_FLAGS bits */
+#define TDCS_CONFIG_FLEXIBLE_PENDING_VE BIT_ULL(1)
+
+/* TDCS_TD_CTLS bits */
+#define TD_CTLS_PENDING_VE_DISABLE BIT_ULL(0)
+
/* TDX hypercall Leaf IDs */
#define TDVMCALL_MAP_GPA 0x10001
#define TDVMCALL_REPORT_FATAL_ERROR 0x10003
@@ -74,11 +83,11 @@ static inline u64 _tdx_hypercall(u64 fn, u64 r12, u64 r13, u64 r14, u64 r15)
void __tdx_hypercall_failed(void);
/*
- * Used in __tdx_module_call() to gather the output registers' values of the
+ * Used in __tdcall*() to gather the input/output registers' values of the
* TDCALL instruction when requesting services from the TDX module. This is a
* software only structure and not part of the TDX module/VMM ABI
*/
-struct tdx_module_output {
+struct tdx_module_args {
u64 rcx;
u64 rdx;
u64 r8;
@@ -88,8 +97,8 @@ struct tdx_module_output {
};
/* Used to communicate with the TDX module */
-u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
- struct tdx_module_output *out);
+u64 __tdcall(u64 fn, struct tdx_module_args *args);
+u64 __tdcall_ret(u64 fn, struct tdx_module_args *args);
bool tdx_accept_memory(phys_addr_t start, phys_addr_t end);
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index dc3576303f1a..50383bc46dd7 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -68,12 +68,12 @@ static void __used common(void)
#endif
BLANK();
- OFFSET(TDX_MODULE_rcx, tdx_module_output, rcx);
- OFFSET(TDX_MODULE_rdx, tdx_module_output, rdx);
- OFFSET(TDX_MODULE_r8, tdx_module_output, r8);
- OFFSET(TDX_MODULE_r9, tdx_module_output, r9);
- OFFSET(TDX_MODULE_r10, tdx_module_output, r10);
- OFFSET(TDX_MODULE_r11, tdx_module_output, r11);
+ OFFSET(TDX_MODULE_rcx, tdx_module_args, rcx);
+ OFFSET(TDX_MODULE_rdx, tdx_module_args, rdx);
+ OFFSET(TDX_MODULE_r8, tdx_module_args, r8);
+ OFFSET(TDX_MODULE_r9, tdx_module_args, r9);
+ OFFSET(TDX_MODULE_r10, tdx_module_args, r10);
+ OFFSET(TDX_MODULE_r11, tdx_module_args, r11);
BLANK();
OFFSET(TDX_HYPERCALL_r8, tdx_hypercall_args, r8);
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 7a1e58fb43a0..852cc2ab4df9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2159,8 +2159,10 @@ void syscall_init(void)
#ifdef CONFIG_STACKPROTECTOR
DEFINE_PER_CPU(unsigned long, __stack_chk_guard);
+#ifndef CONFIG_SMP
EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);
#endif
+#endif
#endif /* CONFIG_X86_64 */
diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
index c13c9cb40b9b..37ca25d82bbc 100644
--- a/arch/x86/kernel/devicetree.c
+++ b/arch/x86/kernel/devicetree.c
@@ -283,22 +283,24 @@ static void __init x86_flattree_get_config(void)
u32 size, map_len;
void *dt;
- if (!initial_dtb)
- return;
-
- map_len = max(PAGE_SIZE - (initial_dtb & ~PAGE_MASK), (u64)128);
+ if (initial_dtb) {
+ map_len = max(PAGE_SIZE - (initial_dtb & ~PAGE_MASK), (u64)128);
+
+ dt = early_memremap(initial_dtb, map_len);
+ size = fdt_totalsize(dt);
+ if (map_len < size) {
+ early_memunmap(dt, map_len);
+ dt = early_memremap(initial_dtb, size);
+ map_len = size;
+ }
- dt = early_memremap(initial_dtb, map_len);
- size = fdt_totalsize(dt);
- if (map_len < size) {
- early_memunmap(dt, map_len);
- dt = early_memremap(initial_dtb, size);
- map_len = size;
+ early_init_dt_verify(dt, __pa(dt));
}
- early_init_dt_verify(dt);
unflatten_and_copy_device_tree();
- early_memunmap(dt, map_len);
+
+ if (initial_dtb)
+ early_memunmap(dt, map_len);
}
#else
static inline void x86_flattree_get_config(void) { }
diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index 7e574cf3bf8a..7784076819de 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -723,7 +723,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
state->sp = task->thread.sp + sizeof(*frame);
state->bp = READ_ONCE_NOCHECK(frame->bp);
state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
- state->signal = (void *)state->ip == ret_from_fork;
+ state->signal = (void *)state->ip == ret_from_fork_asm;
}
if (get_stack_info((unsigned long *)state->sp, state->task,
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 54a5596adaa6..60eb8baa44d7 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -496,6 +496,9 @@ SECTIONS
ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) detected!")
}
+/* needed for Clang - see arch/x86/entry/entry.S */
+PROVIDE(__ref_stack_chk_guard = __stack_chk_guard);
+
/*
* The ASSERT() sink to . is intentional, for binutils 2.14 compatibility:
*/
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4a599130e9c9..b4c1119cc48b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -206,12 +206,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
/*
- * Optimization: for pte sync, if spte was writable the hash
- * lookup is unnecessary (and expensive). Write protection
- * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
- * Same reasoning can be applied to dirty page accounting.
+ * When overwriting an existing leaf SPTE, and the old SPTE was
+ * writable, skip trying to unsync shadow pages as any relevant
+ * shadow pages must already be unsync, i.e. the hash lookup is
+ * unnecessary (and expensive).
+ *
+ * The same reasoning applies to dirty page/folio accounting;
+ * KVM will mark the folio dirty using the old SPTE, thus
+ * there's no need to immediately mark the new SPTE as dirty.
+ *
+ * Note, both cases rely on KVM not changing PFNs without first
+ * zapping the old SPTE, which is guaranteed by both the shadow
+ * MMU and the TDP MMU.
*/
- if (is_writable_pte(old_spte))
+ if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
goto out;
/*
diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
index c4365a05ab83..008a80552224 100644
--- a/arch/x86/platform/pvh/head.S
+++ b/arch/x86/platform/pvh/head.S
@@ -100,7 +100,27 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
xor %edx, %edx
wrmsr
- call xen_prepare_pvh
+ /*
+ * Calculate load offset and store in phys_base. __pa() needs
+ * phys_base set to calculate the hypercall page in xen_pvh_init().
+ */
+ movq %rbp, %rbx
+ subq $_pa(pvh_start_xen), %rbx
+ movq %rbx, phys_base(%rip)
+
+ /* Call xen_prepare_pvh() via the kernel virtual mapping */
+ leaq xen_prepare_pvh(%rip), %rax
+ subq phys_base(%rip), %rax
+ addq $__START_KERNEL_map, %rax
+ ANNOTATE_RETPOLINE_SAFE
+ call *%rax
+
+ /*
+ * Clear phys_base. __startup_64 will *add* to its value,
+ * so reset to 0.
+ */
+ xor %rbx, %rbx
+ movq %rbx, phys_base(%rip)
/* startup_64 expects boot_params in %rsi. */
mov $_pa(pvh_bootparams), %rsi
diff --git a/arch/x86/virt/vmx/tdx/tdxcall.S b/arch/x86/virt/vmx/tdx/tdxcall.S
index 49a54356ae99..e9e19e7d77f8 100644
--- a/arch/x86/virt/vmx/tdx/tdxcall.S
+++ b/arch/x86/virt/vmx/tdx/tdxcall.S
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <asm/asm-offsets.h>
+#include <asm/frame.h>
#include <asm/tdx.h>
/*
@@ -16,35 +17,37 @@
* TDX module and hypercalls to the VMM.
* SEAMCALL - used by TDX hosts to make requests to the
* TDX module.
+ *
+ *-------------------------------------------------------------------------
+ * TDCALL/SEAMCALL ABI:
+ *-------------------------------------------------------------------------
+ * Input Registers:
+ *
+ * RAX - TDCALL/SEAMCALL Leaf number.
+ * RCX,RDX,R8-R11 - TDCALL/SEAMCALL Leaf specific input registers.
+ *
+ * Output Registers:
+ *
+ * RAX - TDCALL/SEAMCALL instruction error code.
+ * RCX,RDX,R8-R11 - TDCALL/SEAMCALL Leaf specific output registers.
+ *
+ *-------------------------------------------------------------------------
*/
-.macro TDX_MODULE_CALL host:req
- /*
- * R12 will be used as temporary storage for struct tdx_module_output
- * pointer. Since R12-R15 registers are not used by TDCALL/SEAMCALL
- * services supported by this function, it can be reused.
- */
-
- /* Callee saved, so preserve it */
- push %r12
-
- /*
- * Push output pointer to stack.
- * After the operation, it will be fetched into R12 register.
- */
- push %r9
+.macro TDX_MODULE_CALL host:req ret=0
+ FRAME_BEGIN
- /* Mangle function call ABI into TDCALL/SEAMCALL ABI: */
/* Move Leaf ID to RAX */
mov %rdi, %rax
- /* Move input 4 to R9 */
- mov %r8, %r9
- /* Move input 3 to R8 */
- mov %rcx, %r8
- /* Move input 1 to RCX */
- mov %rsi, %rcx
- /* Leave input param 2 in RDX */
- .if \host
+ /* Move other input regs from 'struct tdx_module_args' */
+ movq TDX_MODULE_rcx(%rsi), %rcx
+ movq TDX_MODULE_rdx(%rsi), %rdx
+ movq TDX_MODULE_r8(%rsi), %r8
+ movq TDX_MODULE_r9(%rsi), %r9
+ movq TDX_MODULE_r10(%rsi), %r10
+ movq TDX_MODULE_r11(%rsi), %r11
+
+.if \host
seamcall
/*
* SEAMCALL instruction is essentially a VMExit from VMX root
@@ -57,40 +60,31 @@
* This value will never be used as actual SEAMCALL error code as
* it is from the Reserved status code class.
*/
- jnc .Lno_vmfailinvalid
- mov $TDX_SEAMCALL_VMFAILINVALID, %rax
-.Lno_vmfailinvalid:
-
- .else
+ jc .Lseamcall_vmfailinvalid\@
+.else
tdcall
- .endif
+.endif
- /*
- * Fetch output pointer from stack to R12 (It is used
- * as temporary storage)
- */
- pop %r12
+.if \ret
+ /* Copy output registers to the structure */
+ movq %rcx, TDX_MODULE_rcx(%rsi)
+ movq %rdx, TDX_MODULE_rdx(%rsi)
+ movq %r8, TDX_MODULE_r8(%rsi)
+ movq %r9, TDX_MODULE_r9(%rsi)
+ movq %r10, TDX_MODULE_r10(%rsi)
+ movq %r11, TDX_MODULE_r11(%rsi)
+.endif
- /*
- * Since this macro can be invoked with NULL as an output pointer,
- * check if caller provided an output struct before storing output
- * registers.
- *
- * Update output registers, even if the call failed (RAX != 0).
- * Other registers may contain details of the failure.
- */
- test %r12, %r12
- jz .Lno_output_struct
+.if \host
+.Lout\@:
+.endif
+ FRAME_END
+ RET
- /* Copy result registers to output struct: */
- movq %rcx, TDX_MODULE_rcx(%r12)
- movq %rdx, TDX_MODULE_rdx(%r12)
- movq %r8, TDX_MODULE_r8(%r12)
- movq %r9, TDX_MODULE_r9(%r12)
- movq %r10, TDX_MODULE_r10(%r12)
- movq %r11, TDX_MODULE_r11(%r12)
+.if \host
+.Lseamcall_vmfailinvalid\@:
+ mov $TDX_SEAMCALL_VMFAILINVALID, %rax
+ jmp .Lout\@
+.endif /* \host */
-.Lno_output_struct:
- /* Restore the state of R12 register */
- pop %r12
.endm
diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
index 52d6e4870a04..124e84fd9a29 100644
--- a/arch/xtensa/kernel/setup.c
+++ b/arch/xtensa/kernel/setup.c
@@ -228,7 +228,7 @@ static int __init xtensa_dt_io_area(unsigned long node, const char *uname,
void __init early_init_devtree(void *params)
{
- early_init_dt_scan(params);
+ early_init_dt_scan(params, __pa(params));
of_scan_flat_dt(xtensa_dt_io_area, NULL);
if (!command_line[0])
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 7e0dcded5713..dd8ca3f7ba60 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -582,23 +582,31 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
#define BFQ_LIMIT_INLINE_DEPTH 16
#ifdef CONFIG_BFQ_GROUP_IOSCHED
-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+static bool bfqq_request_over_limit(struct bfq_data *bfqd,
+ struct bfq_io_cq *bic, blk_opf_t opf,
+ unsigned int act_idx, int limit)
{
- struct bfq_data *bfqd = bfqq->bfqd;
- struct bfq_entity *entity = &bfqq->entity;
struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
struct bfq_entity **entities = inline_entities;
- int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
- int class_idx = bfqq->ioprio_class - 1;
+ int alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
struct bfq_sched_data *sched_data;
+ struct bfq_entity *entity;
+ struct bfq_queue *bfqq;
unsigned long wsum;
bool ret = false;
-
- if (!entity->on_st_or_in_serv)
- return false;
+ int depth;
+ int level;
retry:
spin_lock_irq(&bfqd->lock);
+ bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
+ if (!bfqq)
+ goto out;
+
+ entity = &bfqq->entity;
+ if (!entity->on_st_or_in_serv)
+ goto out;
+
/* +1 for bfqq entity, root cgroup not included */
depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
if (depth > alloc_depth) {
@@ -643,7 +651,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
* class.
*/
wsum = 0;
- for (i = 0; i <= class_idx; i++) {
+ for (i = 0; i <= bfqq->ioprio_class - 1; i++) {
wsum = wsum * IOPRIO_BE_NR +
sched_data->service_tree[i].wsum;
}
@@ -666,7 +674,9 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
return ret;
}
#else
-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+static bool bfqq_request_over_limit(struct bfq_data *bfqd,
+ struct bfq_io_cq *bic, blk_opf_t opf,
+ unsigned int act_idx, int limit)
{
return false;
}
@@ -704,8 +714,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
}
for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
- struct bfq_queue *bfqq =
- bic_to_bfqq(bic, op_is_sync(opf), act_idx);
+ /* Fast path to check if bfqq is already allocated. */
+ if (!bic_to_bfqq(bic, op_is_sync(opf), act_idx))
+ continue;
/*
* Does queue (or any parent entity) exceed number of
@@ -713,7 +724,7 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
* limit depth so that it cannot consume more
* available requests and thus starve other entities.
*/
- if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
+ if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
depth = 1;
break;
}
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 07bf758c523a..889ac59759a2 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -256,6 +256,14 @@ static bool bvec_split_segs(const struct queue_limits *lim,
return len > 0 || bv->bv_len > max_len;
}
+static unsigned int bio_split_alignment(struct bio *bio,
+ const struct queue_limits *lim)
+{
+ if (op_is_write(bio_op(bio)) && lim->zone_write_granularity)
+ return lim->zone_write_granularity;
+ return lim->logical_block_size;
+}
+
/**
* bio_split_rw - split a bio in two bios
* @bio: [in] bio to be split
@@ -326,7 +334,7 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
* split size so that each bio is properly block size aligned, even if
* we do not use the full hardware limits.
*/
- bytes = ALIGN_DOWN(bytes, lim->logical_block_size);
+ bytes = ALIGN_DOWN(bytes, bio_split_alignment(bio, lim));
/*
* Bio splitting may cause subtle trouble such as hang when doing sync
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 733d72f4d1cc..6c71add013bf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -283,8 +283,9 @@ void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
if (!blk_queue_skip_tagset_quiesce(q))
blk_mq_quiesce_queue_nowait(q);
}
- blk_mq_wait_quiesce_done(set);
mutex_unlock(&set->tag_list_lock);
+
+ blk_mq_wait_quiesce_done(set);
}
EXPORT_SYMBOL_GPL(blk_mq_quiesce_tagset);
@@ -2252,6 +2253,24 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
}
EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
+static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
+{
+ bool need_run;
+
+ /*
+ * When queue is quiesced, we may be switching io scheduler, or
+ * updating nr_hw_queues, or other things, and we can't run queue
+ * any more, even blk_mq_hctx_has_pending() can't be called safely.
+ *
+ * And queue will be rerun in blk_mq_unquiesce_queue() if it is
+ * quiesced.
+ */
+ __blk_mq_run_dispatch_ops(hctx->queue, false,
+ need_run = !blk_queue_quiesced(hctx->queue) &&
+ blk_mq_hctx_has_pending(hctx));
+ return need_run;
+}
+
/**
* blk_mq_run_hw_queue - Start to run a hardware queue.
* @hctx: Pointer to the hardware queue to run.
@@ -2272,20 +2291,23 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
- /*
- * When queue is quiesced, we may be switching io scheduler, or
- * updating nr_hw_queues, or other things, and we can't run queue
- * any more, even __blk_mq_hctx_has_pending() can't be called safely.
- *
- * And queue will be rerun in blk_mq_unquiesce_queue() if it is
- * quiesced.
- */
- __blk_mq_run_dispatch_ops(hctx->queue, false,
- need_run = !blk_queue_quiesced(hctx->queue) &&
- blk_mq_hctx_has_pending(hctx));
+ need_run = blk_mq_hw_queue_need_run(hctx);
+ if (!need_run) {
+ unsigned long flags;
- if (!need_run)
- return;
+ /*
+ * Synchronize with blk_mq_unquiesce_queue(), because we check
+ * if hw queue is quiesced locklessly above, we need the use
+ * ->queue_lock to make sure we see the up-to-date status to
+ * not miss rerunning the hw queue.
+ */
+ spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+ need_run = blk_mq_hw_queue_need_run(hctx);
+ spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
+
+ if (!need_run)
+ return;
+ }
if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
blk_mq_delay_run_hw_queue(hctx, 0);
@@ -2442,6 +2464,12 @@ void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
return;
clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
+ /*
+ * Pairs with the smp_mb() in blk_mq_hctx_stopped() to order the
+ * clearing of BLK_MQ_S_STOPPED above and the checking of dispatch
+ * list in the subsequent routine.
+ */
+ smp_mb__after_atomic();
blk_mq_run_hw_queue(hctx, async);
}
EXPORT_SYMBOL_GPL(blk_mq_start_stopped_hw_queue);
@@ -2668,6 +2696,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
blk_mq_insert_request(rq, 0);
+ blk_mq_run_hw_queue(hctx, false);
return;
}
@@ -2698,6 +2727,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
blk_mq_insert_request(rq, 0);
+ blk_mq_run_hw_queue(hctx, false);
return BLK_STS_OK;
}
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 1743857e0b01..cf9f21772ddc 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -228,6 +228,19 @@ static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data
static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
{
+ /* Fast path: hardware queue is not stopped most of the time. */
+ if (likely(!test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
+ return false;
+
+ /*
+ * This barrier is used to order adding of dispatch list before and
+ * the test of BLK_MQ_S_STOPPED below. Pairs with the memory barrier
+ * in blk_mq_start_stopped_hw_queue() so that dispatch code could
+ * either see BLK_MQ_S_STOPPED is cleared or dispatch list is not
+ * empty to avoid missing dispatching requests.
+ */
+ smp_mb();
+
return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
}
diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index d0d954fe9d54..7fc79e7dce44 100644
--- a/crypto/pcrypt.c
+++ b/crypto/pcrypt.c
@@ -117,8 +117,10 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
if (!err)
return -EINPROGRESS;
- if (err == -EBUSY)
- return -EAGAIN;
+ if (err == -EBUSY) {
+ /* try non-parallel mode */
+ return crypto_aead_encrypt(creq);
+ }
return err;
}
@@ -166,8 +168,10 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
if (!err)
return -EINPROGRESS;
- if (err == -EBUSY)
- return -EAGAIN;
+ if (err == -EBUSY) {
+ /* try non-parallel mode */
+ return crypto_aead_decrypt(creq);
+ }
return err;
}
diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
index c0e77c1c8e09..eb6c2d360387 100644
--- a/drivers/acpi/arm64/gtdt.c
+++ b/drivers/acpi/arm64/gtdt.c
@@ -283,7 +283,7 @@ static int __init gtdt_parse_timer_block(struct acpi_gtdt_timer_block *block,
if (frame->virt_irq > 0)
acpi_unregister_gsi(gtdt_frame->virtual_timer_interrupt);
frame->virt_irq = 0;
- } while (i-- >= 0 && gtdt_frame--);
+ } while (i-- > 0 && gtdt_frame--);
return -EINVAL;
}
diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
index 26d1beec9913..ed02a2a9970a 100644
--- a/drivers/acpi/cppc_acpi.c
+++ b/drivers/acpi/cppc_acpi.c
@@ -1142,7 +1142,6 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
return -EFAULT;
}
val = MASK_VAL_WRITE(reg, prev_val, val);
- val |= prev_val;
}
switch (size) {
diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
index 0b18c6b46e65..f3133ba831c5 100644
--- a/drivers/base/firmware_loader/main.c
+++ b/drivers/base/firmware_loader/main.c
@@ -824,19 +824,18 @@ static void fw_log_firmware_info(const struct firmware *fw, const char *name, st
shash->tfm = alg;
if (crypto_shash_digest(shash, fw->data, fw->size, sha256buf) < 0)
- goto out_shash;
+ goto out_free;
for (int i = 0; i < SHA256_DIGEST_SIZE; i++)
sprintf(&outbuf[i * 2], "%02x", sha256buf[i]);
outbuf[SHA256_BLOCK_SIZE] = 0;
dev_dbg(device, "Loaded FW: %s, sha256: %s\n", name, outbuf);
-out_shash:
- crypto_free_shash(alg);
out_free:
kfree(shash);
kfree(outbuf);
kfree(sha256buf);
+ crypto_free_shash(alg);
}
#else
static void fw_log_firmware_info(const struct firmware *fw, const char *name,
diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
index 45fd13ef13fc..dceab5d013de 100644
--- a/drivers/base/regmap/regmap-irq.c
+++ b/drivers/base/regmap/regmap-irq.c
@@ -514,12 +514,16 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
return IRQ_NONE;
}
+static struct lock_class_key regmap_irq_lock_class;
+static struct lock_class_key regmap_irq_request_class;
+
static int regmap_irq_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
struct regmap_irq_chip_data *data = h->host_data;
irq_set_chip_data(virq, data);
+ irq_set_lockdep_class(virq, &regmap_irq_lock_class, &regmap_irq_request_class);
irq_set_chip(virq, &data->irq_chip);
irq_set_nested_thread(virq, 1);
irq_set_parent(virq, data->irq);
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 970bd6ff38c4..d816d1512531 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -310,8 +310,40 @@ __setup("ramdisk_size=", ramdisk_size);
* (should share code eventually).
*/
static LIST_HEAD(brd_devices);
+static DEFINE_MUTEX(brd_devices_mutex);
static struct dentry *brd_debugfs_dir;
+static struct brd_device *brd_find_or_alloc_device(int i)
+{
+ struct brd_device *brd;
+
+ mutex_lock(&brd_devices_mutex);
+ list_for_each_entry(brd, &brd_devices, brd_list) {
+ if (brd->brd_number == i) {
+ mutex_unlock(&brd_devices_mutex);
+ return ERR_PTR(-EEXIST);
+ }
+ }
+
+ brd = kzalloc(sizeof(*brd), GFP_KERNEL);
+ if (!brd) {
+ mutex_unlock(&brd_devices_mutex);
+ return ERR_PTR(-ENOMEM);
+ }
+ brd->brd_number = i;
+ list_add_tail(&brd->brd_list, &brd_devices);
+ mutex_unlock(&brd_devices_mutex);
+ return brd;
+}
+
+static void brd_free_device(struct brd_device *brd)
+{
+ mutex_lock(&brd_devices_mutex);
+ list_del(&brd->brd_list);
+ mutex_unlock(&brd_devices_mutex);
+ kfree(brd);
+}
+
static int brd_alloc(int i)
{
struct brd_device *brd;
@@ -319,14 +351,9 @@ static int brd_alloc(int i)
char buf[DISK_NAME_LEN];
int err = -ENOMEM;
- list_for_each_entry(brd, &brd_devices, brd_list)
- if (brd->brd_number == i)
- return -EEXIST;
- brd = kzalloc(sizeof(*brd), GFP_KERNEL);
- if (!brd)
- return -ENOMEM;
- brd->brd_number = i;
- list_add_tail(&brd->brd_list, &brd_devices);
+ brd = brd_find_or_alloc_device(i);
+ if (IS_ERR(brd))
+ return PTR_ERR(brd);
xa_init(&brd->brd_pages);
@@ -369,8 +396,7 @@ static int brd_alloc(int i)
out_cleanup_disk:
put_disk(disk);
out_free_dev:
- list_del(&brd->brd_list);
- kfree(brd);
+ brd_free_device(brd);
return err;
}
@@ -389,8 +415,7 @@ static void brd_cleanup(void)
del_gendisk(brd->brd_disk);
put_disk(brd->brd_disk);
brd_free_pages(brd);
- list_del(&brd->brd_list);
- kfree(brd);
+ brd_free_device(brd);
}
}
@@ -417,16 +442,6 @@ static int __init brd_init(void)
{
int err, i;
- brd_check_and_reset_par();
-
- brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
-
- for (i = 0; i < rd_nr; i++) {
- err = brd_alloc(i);
- if (err)
- goto out_free;
- }
-
/*
* brd module now has a feature to instantiate underlying device
* structure on-demand, provided that there is an access dev node.
@@ -442,11 +457,18 @@ static int __init brd_init(void)
* dynamically.
*/
+ brd_check_and_reset_par();
+
+ brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
+
if (__register_blkdev(RAMDISK_MAJOR, "ramdisk", brd_probe)) {
err = -EIO;
goto out_free;
}
+ for (i = 0; i < rd_nr; i++)
+ brd_alloc(i);
+
pr_info("brd: module loaded\n");
return 0;
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index f31607a24f57..1105e8adf7f9 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -713,12 +713,21 @@ static inline char *ublk_queue_cmd_buf(struct ublk_device *ub, int q_id)
return ublk_get_queue(ub, q_id)->io_cmd_buf;
}
+static inline int __ublk_queue_cmd_buf_size(int depth)
+{
+ return round_up(depth * sizeof(struct ublksrv_io_desc), PAGE_SIZE);
+}
+
static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
{
struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
- return round_up(ubq->q_depth * sizeof(struct ublksrv_io_desc),
- PAGE_SIZE);
+ return __ublk_queue_cmd_buf_size(ubq->q_depth);
+}
+
+static int ublk_max_cmd_buf_size(void)
+{
+ return __ublk_queue_cmd_buf_size(UBLK_MAX_QUEUE_DEPTH);
}
static inline bool ublk_queue_can_use_recovery_reissue(
@@ -1387,7 +1396,7 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct ublk_device *ub = filp->private_data;
size_t sz = vma->vm_end - vma->vm_start;
- unsigned max_sz = UBLK_MAX_QUEUE_DEPTH * sizeof(struct ublksrv_io_desc);
+ unsigned max_sz = ublk_max_cmd_buf_size();
unsigned long pfn, end, phys_off = vma->vm_pgoff << PAGE_SHIFT;
int q_id, ret = 0;
@@ -2904,7 +2913,7 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
ret = ublk_ctrl_end_recovery(ub, cmd);
break;
default:
- ret = -ENOTSUPP;
+ ret = -EOPNOTSUPP;
break;
}
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 41b2fd7e1b9e..997106fe73e4 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -475,18 +475,18 @@ static bool virtblk_prep_rq_batch(struct request *req)
return virtblk_prep_rq(req->mq_hctx, vblk, req, vbr) == BLK_STS_OK;
}
-static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
struct request **rqlist)
{
+ struct request *req;
unsigned long flags;
- int err;
bool kick;
spin_lock_irqsave(&vq->lock, flags);
- while (!rq_list_empty(*rqlist)) {
- struct request *req = rq_list_pop(rqlist);
+ while ((req = rq_list_pop(rqlist))) {
struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
+ int err;
err = virtblk_add_req(vq->vq, vbr);
if (err) {
@@ -499,37 +499,33 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
kick = virtqueue_kick_prepare(vq->vq);
spin_unlock_irqrestore(&vq->lock, flags);
- return kick;
+ if (kick)
+ virtqueue_notify(vq->vq);
}
static void virtio_queue_rqs(struct request **rqlist)
{
- struct request *req, *next, *prev = NULL;
+ struct request *submit_list = NULL;
struct request *requeue_list = NULL;
+ struct request **requeue_lastp = &requeue_list;
+ struct virtio_blk_vq *vq = NULL;
+ struct request *req;
- rq_list_for_each_safe(rqlist, req, next) {
- struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx);
- bool kick;
-
- if (!virtblk_prep_rq_batch(req)) {
- rq_list_move(rqlist, &requeue_list, req, prev);
- req = prev;
- if (!req)
- continue;
- }
+ while ((req = rq_list_pop(rqlist))) {
+ struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
- if (!next || req->mq_hctx != next->mq_hctx) {
- req->rq_next = NULL;
- kick = virtblk_add_req_batch(vq, rqlist);
- if (kick)
- virtqueue_notify(vq->vq);
+ if (vq && vq != this_vq)
+ virtblk_add_req_batch(vq, &submit_list);
+ vq = this_vq;
- *rqlist = next;
- prev = NULL;
- } else
- prev = req;
+ if (virtblk_prep_rq_batch(req))
+ rq_list_add(&submit_list, req); /* reverse order */
+ else
+ rq_list_add_tail(&requeue_lastp, req);
}
+ if (vq)
+ virtblk_add_req_batch(vq, &submit_list);
*rqlist = requeue_list;
}
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 606f388c7a57..c29c471b6a18 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1600,6 +1600,13 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
if (ret)
return ret;
+ /*
+ * We touched this entry so mark it as non-IDLE. This makes sure that
+ * we don't preserve IDLE flag and don't incorrectly pick this entry
+ * for different post-processing type (e.g. writeback).
+ */
+ zram_clear_flag(zram, index, ZRAM_IDLE);
+
class_index_old = zs_lookup_class_index(zram->mem_pool, comp_len_old);
/*
* Iterate the secondary comp algorithms list (in order of priority)
diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 42b1062e33cd..78999f7f248c 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -519,10 +519,6 @@ static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct tpm_chip *chip = container_of(rng, struct tpm_chip, hwrng);
- /* Give back zero bytes, as TPM chip has not yet fully resumed: */
- if (chip->flags & TPM_CHIP_FLAG_SUSPENDED)
- return 0;
-
return tpm_get_random(chip, data, max);
}
diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 66b16d26eecc..c8ea52dfa556 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -394,6 +394,13 @@ int tpm_pm_suspend(struct device *dev)
if (!chip)
return -ENODEV;
+ rc = tpm_try_get_ops(chip);
+ if (rc) {
+ /* Can be safely set out of locks, as no action can race: */
+ chip->flags |= TPM_CHIP_FLAG_SUSPENDED;
+ goto out;
+ }
+
if (chip->flags & TPM_CHIP_FLAG_ALWAYS_POWERED)
goto suspended;
@@ -401,19 +408,18 @@ int tpm_pm_suspend(struct device *dev)
!pm_suspend_via_firmware())
goto suspended;
- rc = tpm_try_get_ops(chip);
- if (!rc) {
- if (chip->flags & TPM_CHIP_FLAG_TPM2)
- tpm2_shutdown(chip, TPM2_SU_STATE);
- else
- rc = tpm1_pm_suspend(chip, tpm_suspend_pcr);
-
- tpm_put_ops(chip);
+ if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+ tpm2_shutdown(chip, TPM2_SU_STATE);
+ goto suspended;
}
+ rc = tpm1_pm_suspend(chip, tpm_suspend_pcr);
+
suspended:
chip->flags |= TPM_CHIP_FLAG_SUSPENDED;
+ tpm_put_ops(chip);
+out:
if (rc)
dev_err(dev, "Ignoring error %d while suspending\n", rc);
return 0;
@@ -462,11 +468,18 @@ int tpm_get_random(struct tpm_chip *chip, u8 *out, size_t max)
if (!chip)
return -ENODEV;
+ /* Give back zero bytes, as TPM chip has not yet fully resumed: */
+ if (chip->flags & TPM_CHIP_FLAG_SUSPENDED) {
+ rc = 0;
+ goto out;
+ }
+
if (chip->flags & TPM_CHIP_FLAG_TPM2)
rc = tpm2_get_random(chip, out, max);
else
rc = tpm1_get_random(chip, out, max);
+out:
tpm_put_ops(chip);
return rc;
}
diff --git a/drivers/clk/clk-apple-nco.c b/drivers/clk/clk-apple-nco.c
index 39472a51530a..457a48d48941 100644
--- a/drivers/clk/clk-apple-nco.c
+++ b/drivers/clk/clk-apple-nco.c
@@ -297,6 +297,9 @@ static int applnco_probe(struct platform_device *pdev)
memset(&init, 0, sizeof(init));
init.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
"%s-%d", np->name, i);
+ if (!init.name)
+ return -ENOMEM;
+
init.ops = &applnco_ops;
init.parent_data = &pdata;
init.num_parents = 1;
diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
index bf4d8ddc93ae..934e53a96ddd 100644
--- a/drivers/clk/clk-axi-clkgen.c
+++ b/drivers/clk/clk-axi-clkgen.c
@@ -7,6 +7,7 @@
*/
#include <linux/platform_device.h>
+#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/slab.h>
#include <linux/io.h>
@@ -512,6 +513,7 @@ static int axi_clkgen_probe(struct platform_device *pdev)
struct clk_init_data init;
const char *parent_names[2];
const char *clk_name;
+ struct clk *axi_clk;
unsigned int i;
int ret;
@@ -528,8 +530,24 @@ static int axi_clkgen_probe(struct platform_device *pdev)
return PTR_ERR(axi_clkgen->base);
init.num_parents = of_clk_get_parent_count(pdev->dev.of_node);
- if (init.num_parents < 1 || init.num_parents > 2)
- return -EINVAL;
+
+ axi_clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk");
+ if (!IS_ERR(axi_clk)) {
+ if (init.num_parents < 2 || init.num_parents > 3)
+ return -EINVAL;
+
+ init.num_parents -= 1;
+ } else {
+ /*
+ * Legacy... So that old DTs which do not have clock-names still
+ * work. In this case we don't explicitly enable the AXI bus
+ * clock.
+ */
+ if (PTR_ERR(axi_clk) != -ENOENT)
+ return PTR_ERR(axi_clk);
+ if (init.num_parents < 1 || init.num_parents > 2)
+ return -EINVAL;
+ }
for (i = 0; i < init.num_parents; i++) {
parent_names[i] = of_clk_get_parent_name(pdev->dev.of_node, i);
diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
index 1becba2b62d0..b12b00a2f07f 100644
--- a/drivers/clk/imx/clk-fracn-gppll.c
+++ b/drivers/clk/imx/clk-fracn-gppll.c
@@ -252,9 +252,11 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
pll_div = FIELD_PREP(PLL_RDIV_MASK, rate->rdiv) | rate->odiv |
FIELD_PREP(PLL_MFI_MASK, rate->mfi);
writel_relaxed(pll_div, pll->base + PLL_DIV);
+ readl(pll->base + PLL_DIV);
if (pll->flags & CLK_FRACN_GPPLL_FRACN) {
writel_relaxed(rate->mfd, pll->base + PLL_DENOMINATOR);
writel_relaxed(FIELD_PREP(PLL_MFN_MASK, rate->mfn), pll->base + PLL_NUMERATOR);
+ readl(pll->base + PLL_NUMERATOR);
}
/* Wait for 5us according to fracn mode pll doc */
@@ -263,6 +265,7 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
/* Enable Powerup */
tmp |= POWERUP_MASK;
writel_relaxed(tmp, pll->base + PLL_CTRL);
+ readl(pll->base + PLL_CTRL);
/* Wait Lock */
ret = clk_fracn_gppll_wait_lock(pll);
@@ -300,14 +303,15 @@ static int clk_fracn_gppll_prepare(struct clk_hw *hw)
val |= POWERUP_MASK;
writel_relaxed(val, pll->base + PLL_CTRL);
-
- val |= CLKMUX_EN;
- writel_relaxed(val, pll->base + PLL_CTRL);
+ readl(pll->base + PLL_CTRL);
ret = clk_fracn_gppll_wait_lock(pll);
if (ret)
return ret;
+ val |= CLKMUX_EN;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+
val &= ~CLKMUX_BYPASS;
writel_relaxed(val, pll->base + PLL_CTRL);
diff --git a/drivers/clk/imx/clk-imx8-acm.c b/drivers/clk/imx/clk-imx8-acm.c
index 1c95ae905eec..b9ddb74b86f7 100644
--- a/drivers/clk/imx/clk-imx8-acm.c
+++ b/drivers/clk/imx/clk-imx8-acm.c
@@ -289,9 +289,9 @@ static int clk_imx_acm_attach_pm_domains(struct device *dev,
DL_FLAG_STATELESS |
DL_FLAG_PM_RUNTIME |
DL_FLAG_RPM_ACTIVE);
- if (IS_ERR(dev_pm->pd_dev_link[i])) {
+ if (!dev_pm->pd_dev_link[i]) {
dev_pm_domain_detach(dev_pm->pd_dev[i], false);
- ret = PTR_ERR(dev_pm->pd_dev_link[i]);
+ ret = -EINVAL;
goto detach_pm;
}
}
diff --git a/drivers/clk/imx/clk-lpcg-scu.c b/drivers/clk/imx/clk-lpcg-scu.c
index dd5abd09f3e2..620afdf8dc03 100644
--- a/drivers/clk/imx/clk-lpcg-scu.c
+++ b/drivers/clk/imx/clk-lpcg-scu.c
@@ -6,10 +6,12 @@
#include <linux/bits.h>
#include <linux/clk-provider.h>
+#include <linux/delay.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
+#include <linux/units.h>
#include "clk-scu.h"
@@ -41,6 +43,29 @@ struct clk_lpcg_scu {
#define to_clk_lpcg_scu(_hw) container_of(_hw, struct clk_lpcg_scu, hw)
+/* e10858 -LPCG clock gating register synchronization errata */
+static void lpcg_e10858_writel(unsigned long rate, void __iomem *reg, u32 val)
+{
+ writel(val, reg);
+
+ if (rate >= 24 * HZ_PER_MHZ || rate == 0) {
+ /*
+ * The time taken to access the LPCG registers from the AP core
+ * through the interconnect is longer than the minimum delay
+ * of 4 clock cycles required by the errata.
+ * Adding a readl will provide sufficient delay to prevent
+ * back-to-back writes.
+ */
+ readl(reg);
+ } else {
+ /*
+ * For clocks running below 24MHz, wait a minimum of
+ * 4 clock cycles.
+ */
+ ndelay(4 * (DIV_ROUND_UP(1000 * HZ_PER_MHZ, rate)));
+ }
+}
+
static int clk_lpcg_scu_enable(struct clk_hw *hw)
{
struct clk_lpcg_scu *clk = to_clk_lpcg_scu(hw);
@@ -57,7 +82,8 @@ static int clk_lpcg_scu_enable(struct clk_hw *hw)
val |= CLK_GATE_SCU_LPCG_HW_SEL;
reg |= val << clk->bit_idx;
- writel(reg, clk->reg);
+
+ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
@@ -74,7 +100,7 @@ static void clk_lpcg_scu_disable(struct clk_hw *hw)
reg = readl_relaxed(clk->reg);
reg &= ~(CLK_GATE_SCU_LPCG_MASK << clk->bit_idx);
- writel(reg, clk->reg);
+ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
}
@@ -145,13 +171,8 @@ static int __maybe_unused imx_clk_lpcg_scu_resume(struct device *dev)
{
struct clk_lpcg_scu *clk = dev_get_drvdata(dev);
- /*
- * FIXME: Sometimes writes don't work unless the CPU issues
- * them twice
- */
-
- writel(clk->state, clk->reg);
writel(clk->state, clk->reg);
+ lpcg_e10858_writel(0, clk->reg, clk->state);
dev_dbg(dev, "restore lpcg state 0x%x\n", clk->state);
return 0;
diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
index cd83c52e9952..564f549ec204 100644
--- a/drivers/clk/imx/clk-scu.c
+++ b/drivers/clk/imx/clk-scu.c
@@ -594,7 +594,7 @@ static int __maybe_unused imx_clk_scu_suspend(struct device *dev)
clk->rate = clk_scu_recalc_rate(&clk->hw, 0);
else
clk->rate = clk_hw_get_rate(&clk->hw);
- clk->is_enabled = clk_hw_is_enabled(&clk->hw);
+ clk->is_enabled = clk_hw_is_prepared(&clk->hw);
if (clk->parent)
dev_dbg(dev, "save parent %s idx %u\n", clk_hw_get_name(clk->parent),
diff --git a/drivers/clk/mediatek/Kconfig b/drivers/clk/mediatek/Kconfig
index 48b42d11111c..8ad02c1f035b 100644
--- a/drivers/clk/mediatek/Kconfig
+++ b/drivers/clk/mediatek/Kconfig
@@ -878,13 +878,6 @@ config COMMON_CLK_MT8195_APUSYS
help
This driver supports MediaTek MT8195 AI Processor Unit System clocks.
-config COMMON_CLK_MT8195_AUDSYS
- tristate "Clock driver for MediaTek MT8195 audsys"
- depends on COMMON_CLK_MT8195
- default COMMON_CLK_MT8195
- help
- This driver supports MediaTek MT8195 audsys clocks.
-
config COMMON_CLK_MT8195_IMP_IIC_WRAP
tristate "Clock driver for MediaTek MT8195 imp_iic_wrap"
depends on COMMON_CLK_MT8195
@@ -899,14 +892,6 @@ config COMMON_CLK_MT8195_MFGCFG
help
This driver supports MediaTek MT8195 mfgcfg clocks.
-config COMMON_CLK_MT8195_MSDC
- tristate "Clock driver for MediaTek MT8195 msdc"
- depends on COMMON_CLK_MT8195
- default COMMON_CLK_MT8195
- help
- This driver supports MediaTek MT8195 MMC and SD Controller's
- msdc and msdc_top clocks.
-
config COMMON_CLK_MT8195_SCP_ADSP
tristate "Clock driver for MediaTek MT8195 scp_adsp"
depends on COMMON_CLK_MT8195
diff --git a/drivers/clk/qcom/gcc-qcs404.c b/drivers/clk/qcom/gcc-qcs404.c
index a39c4990b29d..7977b981b8e5 100644
--- a/drivers/clk/qcom/gcc-qcs404.c
+++ b/drivers/clk/qcom/gcc-qcs404.c
@@ -131,6 +131,7 @@ static struct clk_alpha_pll gpll1_out_main = {
/* 930MHz configuration */
static const struct alpha_pll_config gpll3_config = {
.l = 48,
+ .alpha_hi = 0x70,
.alpha = 0x0,
.alpha_en_mask = BIT(24),
.post_div_mask = 0xf << 8,
diff --git a/drivers/clk/ralink/clk-mtmips.c b/drivers/clk/ralink/clk-mtmips.c
index 50a443bf79ec..76285fbbdeaa 100644
--- a/drivers/clk/ralink/clk-mtmips.c
+++ b/drivers/clk/ralink/clk-mtmips.c
@@ -263,8 +263,9 @@ static int mtmips_register_pherip_clocks(struct device_node *np,
.rate = _rate \
}
-static struct mtmips_clk_fixed rt305x_fixed_clocks[] = {
- CLK_FIXED("xtal", NULL, 40000000)
+static struct mtmips_clk_fixed rt3883_fixed_clocks[] = {
+ CLK_FIXED("xtal", NULL, 40000000),
+ CLK_FIXED("periph", "xtal", 40000000)
};
static struct mtmips_clk_fixed rt3352_fixed_clocks[] = {
@@ -366,6 +367,12 @@ static inline struct mtmips_clk *to_mtmips_clk(struct clk_hw *hw)
return container_of(hw, struct mtmips_clk, hw);
}
+static unsigned long rt2880_xtal_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ return 40000000;
+}
+
static unsigned long rt5350_xtal_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
{
@@ -677,10 +684,12 @@ static unsigned long mt76x8_cpu_recalc_rate(struct clk_hw *hw,
}
static struct mtmips_clk rt2880_clks_base[] = {
+ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
{ CLK_BASE("cpu", "xtal", rt2880_cpu_recalc_rate) }
};
static struct mtmips_clk rt305x_clks_base[] = {
+ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
{ CLK_BASE("cpu", "xtal", rt305x_cpu_recalc_rate) }
};
@@ -690,6 +699,7 @@ static struct mtmips_clk rt3352_clks_base[] = {
};
static struct mtmips_clk rt3883_clks_base[] = {
+ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
{ CLK_BASE("cpu", "xtal", rt3883_cpu_recalc_rate) },
{ CLK_BASE("bus", "cpu", rt3883_bus_recalc_rate) }
};
@@ -746,8 +756,8 @@ static int mtmips_register_clocks(struct device_node *np,
static const struct mtmips_clk_data rt2880_clk_data = {
.clk_base = rt2880_clks_base,
.num_clk_base = ARRAY_SIZE(rt2880_clks_base),
- .clk_fixed = rt305x_fixed_clocks,
- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
+ .clk_fixed = NULL,
+ .num_clk_fixed = 0,
.clk_factor = rt2880_factor_clocks,
.num_clk_factor = ARRAY_SIZE(rt2880_factor_clocks),
.clk_periph = rt2880_pherip_clks,
@@ -757,8 +767,8 @@ static const struct mtmips_clk_data rt2880_clk_data = {
static const struct mtmips_clk_data rt305x_clk_data = {
.clk_base = rt305x_clks_base,
.num_clk_base = ARRAY_SIZE(rt305x_clks_base),
- .clk_fixed = rt305x_fixed_clocks,
- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
+ .clk_fixed = NULL,
+ .num_clk_fixed = 0,
.clk_factor = rt305x_factor_clocks,
.num_clk_factor = ARRAY_SIZE(rt305x_factor_clocks),
.clk_periph = rt305x_pherip_clks,
@@ -779,8 +789,8 @@ static const struct mtmips_clk_data rt3352_clk_data = {
static const struct mtmips_clk_data rt3883_clk_data = {
.clk_base = rt3883_clks_base,
.num_clk_base = ARRAY_SIZE(rt3883_clks_base),
- .clk_fixed = rt305x_fixed_clocks,
- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
+ .clk_fixed = rt3883_fixed_clocks,
+ .num_clk_fixed = ARRAY_SIZE(rt3883_fixed_clocks),
.clk_factor = NULL,
.num_clk_factor = 0,
.clk_periph = rt5350_pherip_clks,
diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
index 75f9eca020ce..f8dbb092b9f1 100644
--- a/drivers/clk/renesas/rzg2l-cpg.c
+++ b/drivers/clk/renesas/rzg2l-cpg.c
@@ -285,7 +285,7 @@ static unsigned long
rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
unsigned long rate)
{
- unsigned long foutpostdiv_rate;
+ unsigned long foutpostdiv_rate, foutvco_rate;
params->pl5_intin = rate / MEGA;
params->pl5_fracin = div_u64(((u64)rate % MEGA) << 24, MEGA);
@@ -294,10 +294,11 @@ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
params->pl5_postdiv2 = 1;
params->pl5_spread = 0x16;
- foutpostdiv_rate =
- EXTAL_FREQ_IN_MEGA_HZ * MEGA / params->pl5_refdiv *
- ((((params->pl5_intin << 24) + params->pl5_fracin)) >> 24) /
- (params->pl5_postdiv1 * params->pl5_postdiv2);
+ foutvco_rate = div_u64(mul_u32_u32(EXTAL_FREQ_IN_MEGA_HZ * MEGA,
+ (params->pl5_intin << 24) + params->pl5_fracin),
+ params->pl5_refdiv) >> 24;
+ foutpostdiv_rate = DIV_ROUND_CLOSEST_ULL(foutvco_rate,
+ params->pl5_postdiv1 * params->pl5_postdiv2);
return foutpostdiv_rate;
}
diff --git a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
index 48a8fb2c43b7..f95c3615ca77 100644
--- a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+++ b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
@@ -1371,7 +1371,7 @@ static int sun20i_d1_ccu_probe(struct platform_device *pdev)
/* Enforce m1 = 0, m0 = 0 for PLL_AUDIO0 */
val = readl(reg + SUN20I_D1_PLL_AUDIO0_REG);
- val &= ~BIT(1) | BIT(0);
+ val &= ~(BIT(1) | BIT(0));
writel(val, reg + SUN20I_D1_PLL_AUDIO0_REG);
/* Force fanout-27M factor N to 0. */
diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
index 0ba0dc4ecf06..8208a3d89563 100644
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -390,7 +390,8 @@ config ARM_GT_INITIAL_PRESCALER_VAL
This affects CPU_FREQ max delta from the initial frequency.
config ARM_TIMER_SP804
- bool "Support for Dual Timer SP804 module" if COMPILE_TEST
+ bool "Support for Dual Timer SP804 module"
+ depends on ARM || ARM64 || COMPILE_TEST
depends on GENERIC_SCHED_CLOCK && HAVE_CLK
select CLKSRC_MMIO
select TIMER_OF if OF
diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
index c2dcd8d68e45..d1c144d6f328 100644
--- a/drivers/clocksource/timer-ti-dm-systimer.c
+++ b/drivers/clocksource/timer-ti-dm-systimer.c
@@ -686,9 +686,9 @@ subsys_initcall(dmtimer_percpu_timer_startup);
static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
{
- struct device_node *arm_timer;
+ struct device_node *arm_timer __free(device_node) =
+ of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
- arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
if (of_device_is_available(arm_timer)) {
pr_warn_once("ARM architected timer wrap issue i940 detected\n");
return 0;
diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
index 1548dea15df1..81763e3f9484 100644
--- a/drivers/comedi/comedi_fops.c
+++ b/drivers/comedi/comedi_fops.c
@@ -2407,6 +2407,18 @@ static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
start += PAGE_SIZE;
}
+
+#ifdef CONFIG_MMU
+ /*
+ * Leaving behind a partial mapping of a buffer we're about to
+ * drop is unsafe, see remap_pfn_range_notrack().
+ * We need to zap the range here ourselves instead of relying
+ * on the automatic zapping in remap_pfn_range() because we call
+ * remap_pfn_range() in a loop.
+ */
+ if (retval)
+ zap_vma_ptes(vma, vma->vm_start, size);
+#endif
}
if (retval == 0) {
diff --git a/drivers/counter/stm32-timer-cnt.c b/drivers/counter/stm32-timer-cnt.c
index 6206d2dc3d47..36d7f0d05b5f 100644
--- a/drivers/counter/stm32-timer-cnt.c
+++ b/drivers/counter/stm32-timer-cnt.c
@@ -195,11 +195,17 @@ static int stm32_count_enable_write(struct counter_device *counter,
{
struct stm32_timer_cnt *const priv = counter_priv(counter);
u32 cr1;
+ int ret;
if (enable) {
regmap_read(priv->regmap, TIM_CR1, &cr1);
- if (!(cr1 & TIM_CR1_CEN))
- clk_enable(priv->clk);
+ if (!(cr1 & TIM_CR1_CEN)) {
+ ret = clk_enable(priv->clk);
+ if (ret) {
+ dev_err(counter->parent, "Cannot enable clock %d\n", ret);
+ return ret;
+ }
+ }
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN,
TIM_CR1_CEN);
@@ -383,7 +389,11 @@ static int __maybe_unused stm32_timer_cnt_resume(struct device *dev)
return ret;
if (priv->enabled) {
- clk_enable(priv->clk);
+ ret = clk_enable(priv->clk);
+ if (ret) {
+ dev_err(dev, "Cannot enable clock %d\n", ret);
+ return ret;
+ }
/* Restore registers that may have been lost */
regmap_write(priv->regmap, TIM_SMCR, priv->bak.smcr);
diff --git a/drivers/counter/ti-ecap-capture.c b/drivers/counter/ti-ecap-capture.c
index fb1cb1774674..b84e368a413f 100644
--- a/drivers/counter/ti-ecap-capture.c
+++ b/drivers/counter/ti-ecap-capture.c
@@ -576,8 +576,13 @@ static int ecap_cnt_resume(struct device *dev)
{
struct counter_device *counter_dev = dev_get_drvdata(dev);
struct ecap_cnt_dev *ecap_dev = counter_priv(counter_dev);
+ int ret;
- clk_enable(ecap_dev->clk);
+ ret = clk_enable(ecap_dev->clk);
+ if (ret) {
+ dev_err(dev, "Cannot enable clock %d\n", ret);
+ return ret;
+ }
ecap_cnt_capture_set_evmode(counter_dev, ecap_dev->pm_ctx.ev_mode);
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 8c16d67b98bf..cdead37d0823 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -1383,7 +1383,7 @@ static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
value = READ_ONCE(cpudata->cppc_req_cached);
if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
- min_perf = max_perf;
+ min_perf = min(cpudata->nominal_perf, max_perf);
/* Initial min/max values for CPPC Performance Controls Register */
value &= ~AMD_CPPC_MIN_PERF(~0L);
diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 15f1d41920a3..c8447ecad797 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -118,6 +118,9 @@ static void cppc_scale_freq_workfn(struct kthread_work *work)
perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs,
&fb_ctrs);
+ if (!perf)
+ return;
+
cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
perf <<= SCHED_CAPACITY_SHIFT;
@@ -425,6 +428,9 @@ static int cppc_get_cpu_power(struct device *cpu_dev,
struct cppc_cpudata *cpu_data;
policy = cpufreq_cpu_get_raw(cpu_dev->id);
+ if (!policy)
+ return -EINVAL;
+
cpu_data = policy->driver_data;
perf_caps = &cpu_data->perf_caps;
max_cap = arch_scale_cpu_capacity(cpu_dev->id);
@@ -492,6 +498,9 @@ static int cppc_get_cpu_cost(struct device *cpu_dev, unsigned long KHz,
int step;
policy = cpufreq_cpu_get_raw(cpu_dev->id);
+ if (!policy)
+ return -EINVAL;
+
cpu_data = policy->driver_data;
perf_caps = &cpu_data->perf_caps;
max_cap = arch_scale_cpu_capacity(cpu_dev->id);
@@ -730,13 +739,31 @@ static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
delta_delivered = get_delta(fb_ctrs_t1->delivered,
fb_ctrs_t0->delivered);
- /* Check to avoid divide-by zero and invalid delivered_perf */
+ /*
+ * Avoid divide-by zero and unchanged feedback counters.
+ * Leave it for callers to handle.
+ */
if (!delta_reference || !delta_delivered)
- return cpu_data->perf_ctrls.desired_perf;
+ return 0;
return (reference_perf * delta_delivered) / delta_reference;
}
+static int cppc_get_perf_ctrs_sample(int cpu,
+ struct cppc_perf_fb_ctrs *fb_ctrs_t0,
+ struct cppc_perf_fb_ctrs *fb_ctrs_t1)
+{
+ int ret;
+
+ ret = cppc_get_perf_ctrs(cpu, fb_ctrs_t0);
+ if (ret)
+ return ret;
+
+ udelay(2); /* 2usec delay between sampling */
+
+ return cppc_get_perf_ctrs(cpu, fb_ctrs_t1);
+}
+
static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
{
struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
@@ -752,18 +779,32 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
cpufreq_cpu_put(policy);
- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
- if (ret)
- return 0;
-
- udelay(2); /* 2usec delay between sampling */
-
- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
- if (ret)
- return 0;
+ ret = cppc_get_perf_ctrs_sample(cpu, &fb_ctrs_t0, &fb_ctrs_t1);
+ if (ret) {
+ if (ret == -EFAULT)
+ /* Any of the associated CPPC regs is 0. */
+ goto out_invalid_counters;
+ else
+ return 0;
+ }
delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
&fb_ctrs_t1);
+ if (!delivered_perf)
+ goto out_invalid_counters;
+
+ return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
+
+out_invalid_counters:
+ /*
+ * Feedback counters could be unchanged or 0 when a cpu enters a
+ * low-power idle state, e.g. clock-gated or power-gated.
+ * Use desired perf for reflecting frequency. Get the latest register
+ * value first as some platforms may update the actual delivered perf
+ * there; if failed, resort to the cached desired perf.
+ */
+ if (cppc_get_desired_perf(cpu, &delivered_perf))
+ delivered_perf = cpu_data->perf_ctrls.desired_perf;
return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
}
diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
index afc59b292153..63cae4037deb 100644
--- a/drivers/cpufreq/loongson2_cpufreq.c
+++ b/drivers/cpufreq/loongson2_cpufreq.c
@@ -154,7 +154,9 @@ static int __init cpufreq_init(void)
ret = cpufreq_register_driver(&loongson2_cpufreq_driver);
- if (!ret && !nowait) {
+ if (ret) {
+ platform_driver_unregister(&platform_driver);
+ } else if (!nowait) {
saved_cpu_wait = cpu_wait;
cpu_wait = loongson2_cpu_wait;
}
diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c
index 8d097dcddda4..55353fe7b9e7 100644
--- a/drivers/cpufreq/mediatek-cpufreq-hw.c
+++ b/drivers/cpufreq/mediatek-cpufreq-hw.c
@@ -62,7 +62,7 @@ mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *uW,
policy = cpufreq_cpu_get_raw(cpu_dev->id);
if (!policy)
- return 0;
+ return -EINVAL;
data = policy->driver_data;
diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
index 689be70d69c1..1d1ff3b1b0d5 100644
--- a/drivers/crypto/bcm/cipher.c
+++ b/drivers/crypto/bcm/cipher.c
@@ -2415,6 +2415,7 @@ static int ahash_hmac_setkey(struct crypto_ahash *ahash, const u8 *key,
static int ahash_hmac_init(struct ahash_request *req)
{
+ int ret;
struct iproc_reqctx_s *rctx = ahash_request_ctx(req);
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct iproc_ctx_s *ctx = crypto_ahash_ctx(tfm);
@@ -2424,7 +2425,9 @@ static int ahash_hmac_init(struct ahash_request *req)
flow_log("ahash_hmac_init()\n");
/* init the context as a hash */
- ahash_init(req);
+ ret = ahash_init(req);
+ if (ret)
+ return ret;
if (!spu_no_incr_hash(ctx)) {
/* SPU-M can do incr hashing but needs sw for outer HMAC */
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 887a5f2fb927..cb001aa1de66 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -984,7 +984,7 @@ static int caam_rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
return -ENOMEM;
}
-static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+static int caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
struct rsa_key *raw_key)
{
struct caam_rsa_key *rsa_key = &ctx->key;
@@ -994,7 +994,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
rsa_key->p = caam_read_raw_data(raw_key->p, &p_sz);
if (!rsa_key->p)
- return;
+ return -ENOMEM;
rsa_key->p_sz = p_sz;
rsa_key->q = caam_read_raw_data(raw_key->q, &q_sz);
@@ -1029,7 +1029,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
rsa_key->priv_form = FORM3;
- return;
+ return 0;
free_dq:
kfree_sensitive(rsa_key->dq);
@@ -1043,6 +1043,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
kfree_sensitive(rsa_key->q);
free_p:
kfree_sensitive(rsa_key->p);
+ return -ENOMEM;
}
static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
@@ -1088,7 +1089,9 @@ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
rsa_key->e_sz = raw_key.e_sz;
rsa_key->n_sz = raw_key.n_sz;
- caam_rsa_set_priv_key_form(ctx, &raw_key);
+ ret = caam_rsa_set_priv_key_form(ctx, &raw_key);
+ if (ret)
+ goto err;
return 0;
diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
index 46a083849a8e..7a3a104557f0 100644
--- a/drivers/crypto/caam/qi.c
+++ b/drivers/crypto/caam/qi.c
@@ -772,7 +772,7 @@ int caam_qi_init(struct platform_device *caam_pdev)
caam_debugfs_qi_init(ctrlpriv);
- err = devm_add_action_or_reset(qidev, caam_qi_shutdown, ctrlpriv);
+ err = devm_add_action_or_reset(qidev, caam_qi_shutdown, qidev);
if (err)
return err;
diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c
index 6872ac344001..54de869e5374 100644
--- a/drivers/crypto/cavium/cpt/cptpf_main.c
+++ b/drivers/crypto/cavium/cpt/cptpf_main.c
@@ -44,7 +44,7 @@ static void cpt_disable_cores(struct cpt_device *cpt, u64 coremask,
dev_err(dev, "Cores still busy %llx", coremask);
grp = cpt_read_csr64(cpt->reg_base,
CPTX_PF_EXEC_BUSY(0));
- if (timeout--)
+ if (!timeout--)
break;
udelay(CSR_DELAY);
@@ -302,6 +302,8 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
ret = do_cpt_init(cpt, mcode);
if (ret) {
+ dma_free_coherent(&cpt->pdev->dev, mcode->code_size,
+ mcode->code, mcode->phys_base);
dev_err(dev, "do_cpt_init failed with ret: %d\n", ret);
goto fw_release;
}
@@ -394,7 +396,7 @@ static void cpt_disable_all_cores(struct cpt_device *cpt)
dev_err(dev, "Cores still busy");
grp = cpt_read_csr64(cpt->reg_base,
CPTX_PF_EXEC_BUSY(0));
- if (timeout--)
+ if (!timeout--)
break;
udelay(CSR_DELAY);
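
Both cavium/cpt hunks above fix the same inverted countdown test: with "if (timeout--)"
the busy-wait loop breaks out after the very first poll instead of when the retry budget
is exhausted. A minimal standalone sketch of the intended pattern; poll_busy() and the
budget are stand-ins, not the driver's code.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for reading a hardware "cores busy" register. */
static bool poll_busy(void)
{
    static int calls;
    return ++calls < 5;    /* pretend the cores go idle after 4 polls */
}

/* Poll until idle or until the retry budget is exhausted. */
static int wait_for_idle(int timeout)
{
    while (poll_busy()) {
        if (!timeout--)    /* stop only when the budget hits zero */
            return -1;     /* timed out, still busy */
        /* udelay(CSR_DELAY) would go here in the driver */
    }
    return 0;
}

int main(void)
{
    printf("wait_for_idle: %d\n", wait_for_idle(100));
    return 0;
}
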
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
index 3463f5ee83c0..762a2a54ca82 100644
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
@@ -1280,11 +1280,15 @@ static u32 hpre_get_hw_err_status(struct hisi_qm *qm)
static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{
- u32 nfe;
-
writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT);
- nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
- writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB);
+}
+
+static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type)
+{
+ u32 nfe_mask;
+
+ nfe_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
+ writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB);
}
static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
@@ -1298,6 +1302,27 @@ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
}
+static enum acc_err_result hpre_get_err_result(struct hisi_qm *qm)
+{
+ u32 err_status;
+
+ err_status = hpre_get_hw_err_status(qm);
+ if (err_status) {
+ if (err_status & qm->err_info.ecc_2bits_mask)
+ qm->err_status.is_dev_ecc_mbit = true;
+ hpre_log_hw_error(qm, err_status);
+
+ if (err_status & qm->err_info.dev_reset_mask) {
+ /* Disable the same error reporting until device is recovered. */
+ hpre_disable_error_report(qm, err_status);
+ return ACC_ERR_NEED_RESET;
+ }
+ hpre_clear_hw_err_status(qm, err_status);
+ }
+
+ return ACC_ERR_RECOVERED;
+}
+
static void hpre_err_info_init(struct hisi_qm *qm)
{
struct hisi_qm_err_info *err_info = &qm->err_info;
@@ -1324,12 +1349,12 @@ static const struct hisi_qm_err_ini hpre_err_ini = {
.hw_err_disable = hpre_hw_error_disable,
.get_dev_hw_err_status = hpre_get_hw_err_status,
.clear_dev_hw_err_status = hpre_clear_hw_err_status,
- .log_dev_hw_err = hpre_log_hw_error,
.open_axi_master_ooo = hpre_open_axi_master_ooo,
.open_sva_prefetch = hpre_open_sva_prefetch,
.close_sva_prefetch = hpre_close_sva_prefetch,
.show_last_dfx_regs = hpre_show_last_dfx_regs,
.err_info_init = hpre_err_info_init,
+ .get_err_result = hpre_get_err_result,
};
static int hpre_pf_probe_init(struct hpre *hpre)
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index 1b00edbbfe26..7921409791fb 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -272,12 +272,6 @@ enum vft_type {
SHAPER_VFT,
};
-enum acc_err_result {
- ACC_ERR_NONE,
- ACC_ERR_NEED_RESET,
- ACC_ERR_RECOVERED,
-};
-
enum qm_alg_type {
ALG_TYPE_0,
ALG_TYPE_1,
@@ -1489,22 +1483,25 @@ static void qm_log_hw_error(struct hisi_qm *qm, u32 error_status)
static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm)
{
- u32 error_status, tmp;
-
- /* read err sts */
- tmp = readl(qm->io_base + QM_ABNORMAL_INT_STATUS);
- error_status = qm->error_mask & tmp;
+ u32 error_status;
- if (error_status) {
+ error_status = qm_get_hw_error_status(qm);
+ if (error_status & qm->error_mask) {
if (error_status & QM_ECC_MBIT)
qm->err_status.is_qm_ecc_mbit = true;
qm_log_hw_error(qm, error_status);
- if (error_status & qm->err_info.qm_reset_mask)
+ if (error_status & qm->err_info.qm_reset_mask) {
+ /* Disable the same error reporting until device is recovered. */
+ writel(qm->err_info.nfe & (~error_status),
+ qm->io_base + QM_RAS_NFE_ENABLE);
return ACC_ERR_NEED_RESET;
+ }
+ /* Clear error source if not need reset. */
writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE);
writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE);
+ writel(qm->err_info.ce, qm->io_base + QM_RAS_CE_ENABLE);
}
return ACC_ERR_RECOVERED;
@@ -3957,30 +3954,12 @@ EXPORT_SYMBOL_GPL(hisi_qm_sriov_configure);
static enum acc_err_result qm_dev_err_handle(struct hisi_qm *qm)
{
- u32 err_sts;
-
- if (!qm->err_ini->get_dev_hw_err_status) {
- dev_err(&qm->pdev->dev, "Device doesn't support get hw error status!\n");
+ if (!qm->err_ini->get_err_result) {
+ dev_err(&qm->pdev->dev, "Device doesn't support reset!\n");
return ACC_ERR_NONE;
}
- /* get device hardware error status */
- err_sts = qm->err_ini->get_dev_hw_err_status(qm);
- if (err_sts) {
- if (err_sts & qm->err_info.ecc_2bits_mask)
- qm->err_status.is_dev_ecc_mbit = true;
-
- if (qm->err_ini->log_dev_hw_err)
- qm->err_ini->log_dev_hw_err(qm, err_sts);
-
- if (err_sts & qm->err_info.dev_reset_mask)
- return ACC_ERR_NEED_RESET;
-
- if (qm->err_ini->clear_dev_hw_err_status)
- qm->err_ini->clear_dev_hw_err_status(qm, err_sts);
- }
-
- return ACC_ERR_RECOVERED;
+ return qm->err_ini->get_err_result(qm);
}
static enum acc_err_result qm_process_dev_error(struct hisi_qm *qm)
diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
index cf7b6a37e7df..6aaaaf784ddc 100644
--- a/drivers/crypto/hisilicon/sec2/sec_main.c
+++ b/drivers/crypto/hisilicon/sec2/sec_main.c
@@ -1006,11 +1006,15 @@ static u32 sec_get_hw_err_status(struct hisi_qm *qm)
static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{
- u32 nfe;
-
writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE);
- nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
- writel(nfe, qm->io_base + SEC_RAS_NFE_REG);
+}
+
+static void sec_disable_error_report(struct hisi_qm *qm, u32 err_type)
+{
+ u32 nfe_mask;
+
+ nfe_mask = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
+ writel(nfe_mask & (~err_type), qm->io_base + SEC_RAS_NFE_REG);
}
static void sec_open_axi_master_ooo(struct hisi_qm *qm)
@@ -1022,6 +1026,27 @@ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
writel(val | SEC_AXI_SHUTDOWN_ENABLE, qm->io_base + SEC_CONTROL_REG);
}
+static enum acc_err_result sec_get_err_result(struct hisi_qm *qm)
+{
+ u32 err_status;
+
+ err_status = sec_get_hw_err_status(qm);
+ if (err_status) {
+ if (err_status & qm->err_info.ecc_2bits_mask)
+ qm->err_status.is_dev_ecc_mbit = true;
+ sec_log_hw_error(qm, err_status);
+
+ if (err_status & qm->err_info.dev_reset_mask) {
+ /* Disable the same error reporting until device is recovered. */
+ sec_disable_error_report(qm, err_status);
+ return ACC_ERR_NEED_RESET;
+ }
+ sec_clear_hw_err_status(qm, err_status);
+ }
+
+ return ACC_ERR_RECOVERED;
+}
+
static void sec_err_info_init(struct hisi_qm *qm)
{
struct hisi_qm_err_info *err_info = &qm->err_info;
@@ -1048,12 +1073,12 @@ static const struct hisi_qm_err_ini sec_err_ini = {
.hw_err_disable = sec_hw_error_disable,
.get_dev_hw_err_status = sec_get_hw_err_status,
.clear_dev_hw_err_status = sec_clear_hw_err_status,
- .log_dev_hw_err = sec_log_hw_error,
.open_axi_master_ooo = sec_open_axi_master_ooo,
.open_sva_prefetch = sec_open_sva_prefetch,
.close_sva_prefetch = sec_close_sva_prefetch,
.show_last_dfx_regs = sec_show_last_dfx_regs,
.err_info_init = sec_err_info_init,
+ .get_err_result = sec_get_err_result,
};
static int sec_pf_probe_init(struct sec_dev *sec)
diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
index 9d47b3675da7..66e553115adf 100644
--- a/drivers/crypto/hisilicon/zip/zip_main.c
+++ b/drivers/crypto/hisilicon/zip/zip_main.c
@@ -1068,11 +1068,15 @@ static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
{
- u32 nfe;
-
writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE);
- nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
- writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
+}
+
+static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type)
+{
+ u32 nfe_mask;
+
+ nfe_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
+ writel(nfe_mask & (~err_type), qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
}
static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
@@ -1102,6 +1106,27 @@ static void hisi_zip_close_axi_master_ooo(struct hisi_qm *qm)
qm->io_base + HZIP_CORE_INT_SET);
}
+static enum acc_err_result hisi_zip_get_err_result(struct hisi_qm *qm)
+{
+ u32 err_status;
+
+ err_status = hisi_zip_get_hw_err_status(qm);
+ if (err_status) {
+ if (err_status & qm->err_info.ecc_2bits_mask)
+ qm->err_status.is_dev_ecc_mbit = true;
+ hisi_zip_log_hw_error(qm, err_status);
+
+ if (err_status & qm->err_info.dev_reset_mask) {
+ /* Disable the same error reporting until device is recovered. */
+ hisi_zip_disable_error_report(qm, err_status);
+ return ACC_ERR_NEED_RESET;
+ }
+ hisi_zip_clear_hw_err_status(qm, err_status);
+ }
+
+ return ACC_ERR_RECOVERED;
+}
+
static void hisi_zip_err_info_init(struct hisi_qm *qm)
{
struct hisi_qm_err_info *err_info = &qm->err_info;
@@ -1129,13 +1154,13 @@ static const struct hisi_qm_err_ini hisi_zip_err_ini = {
.hw_err_disable = hisi_zip_hw_error_disable,
.get_dev_hw_err_status = hisi_zip_get_hw_err_status,
.clear_dev_hw_err_status = hisi_zip_clear_hw_err_status,
- .log_dev_hw_err = hisi_zip_log_hw_error,
.open_axi_master_ooo = hisi_zip_open_axi_master_ooo,
.close_axi_master_ooo = hisi_zip_close_axi_master_ooo,
.open_sva_prefetch = hisi_zip_open_sva_prefetch,
.close_sva_prefetch = hisi_zip_close_sva_prefetch,
.show_last_dfx_regs = hisi_zip_show_last_dfx_regs,
.err_info_init = hisi_zip_err_info_init,
+ .get_err_result = hisi_zip_get_err_result,
};
static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
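
The hpre, sec and zip hunks above all move the same decision into a new get_err_result()
callback: if any pending error bit is reset-worthy, reporting of exactly those bits is
disabled until the device has been recovered, otherwise the status is simply cleared.
Below is a simplified standalone model of that shape; the fake registers, names and
values are invented and do not correspond to the hisilicon hardware.

#include <stdint.h>
#include <stdio.h>

/* Fake device state standing in for the memory-mapped RAS registers. */
struct fake_dev {
    uint32_t err_status;    /* pending error bits */
    uint32_t nfe_enable;    /* which errors are reported as NFE */
    uint32_t reset_mask;    /* errors that require a device reset */
};

enum err_result { ERR_RECOVERED, ERR_NEED_RESET };

/*
 * Shared shape of the get_err_result() callbacks: reset-worthy errors are
 * masked out of the enable register until recovery, everything else is
 * acknowledged and keeps being reported.
 */
static enum err_result get_err_result(struct fake_dev *dev)
{
    uint32_t err = dev->err_status;

    if (!err)
        return ERR_RECOVERED;

    if (err & dev->reset_mask) {
        /* stop re-reporting the same error until the reset is done */
        dev->nfe_enable &= ~err;
        return ERR_NEED_RESET;
    }

    dev->err_status = 0;    /* clear the source, keep reporting enabled */
    return ERR_RECOVERED;
}

int main(void)
{
    struct fake_dev dev = { .err_status = 0x4, .nfe_enable = 0xff,
                            .reset_mask = 0x4 };

    printf("result=%d nfe=%#x\n", get_err_result(&dev),
           (unsigned int)dev.nfe_enable);
    return 0;
}
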
diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
index e17577b785c3..f44c08f5f5ec 100644
--- a/drivers/crypto/inside-secure/safexcel_hash.c
+++ b/drivers/crypto/inside-secure/safexcel_hash.c
@@ -2093,7 +2093,7 @@ static int safexcel_xcbcmac_cra_init(struct crypto_tfm *tfm)
safexcel_ahash_cra_init(tfm);
ctx->aes = kmalloc(sizeof(*ctx->aes), GFP_KERNEL);
- return PTR_ERR_OR_ZERO(ctx->aes);
+ return ctx->aes == NULL ? -ENOMEM : 0;
}
static void safexcel_xcbcmac_cra_exit(struct crypto_tfm *tfm)
diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
index 615af0883207..403f07371445 100644
--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
@@ -473,7 +473,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
else
id = -EINVAL;
- if (id < 0 || id > num_objs)
+ if (id < 0 || id >= num_objs)
return NULL;
return fw_objs[id];
diff --git a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
index 04845f8d72be..056fc59b5ae6 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
@@ -19,18 +19,13 @@
void adf_dbgfs_init(struct adf_accel_dev *accel_dev)
{
char name[ADF_DEVICE_NAME_LENGTH];
- void *ret;
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
accel_dev->hw_device->dev_class->name,
pci_name(accel_dev->accel_pci_dev.pci_dev));
- ret = debugfs_create_dir(name, NULL);
- if (IS_ERR_OR_NULL(ret))
- return;
-
- accel_dev->debugfs_dir = ret;
+ accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
adf_cfg_dev_dbgfs_add(accel_dev);
}
@@ -56,9 +51,6 @@ EXPORT_SYMBOL_GPL(adf_dbgfs_exit);
*/
void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
{
- if (!accel_dev->debugfs_dir)
- return;
-
if (!accel_dev->is_vf) {
adf_fw_counters_dbgfs_add(accel_dev);
adf_heartbeat_dbgfs_add(accel_dev);
@@ -71,9 +63,6 @@ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
*/
void adf_dbgfs_rm(struct adf_accel_dev *accel_dev)
{
- if (!accel_dev->debugfs_dir)
- return;
-
if (!accel_dev->is_vf) {
adf_heartbeat_dbgfs_rm(accel_dev);
adf_fw_counters_dbgfs_rm(accel_dev);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
index da6956699246..dd9a31c20bc9 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
@@ -90,10 +90,6 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev)
hw_data->get_arb_info(&info);
- /* Reset arbiter configuration */
- for (i = 0; i < ADF_ARB_NUM; i++)
- WRITE_CSR_ARB_SARCONFIG(csr, arb_off, i, 0);
-
/* Unmap worker threads to service arbiters */
for (i = 0; i < hw_data->num_engines; i++)
WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, 0);
diff --git a/drivers/dax/pmem/Makefile b/drivers/dax/pmem/Makefile
deleted file mode 100644
index 191c31f0d4f0..000000000000
--- a/drivers/dax/pmem/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
-
-dax_pmem-y := pmem.o
-dax_pmem_core-y := core.o
-dax_pmem_compat-y := compat.o
diff --git a/drivers/dax/pmem/pmem.c b/drivers/dax/pmem/pmem.c
deleted file mode 100644
index dfe91a2990fe..000000000000
--- a/drivers/dax/pmem/pmem.c
+++ /dev/null
@@ -1,10 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
-#include <linux/percpu-refcount.h>
-#include <linux/memremap.h>
-#include <linux/module.h>
-#include <linux/pfn_t.h>
-#include <linux/nd.h>
-#include "../bus.h"
-
-
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index c40645999648..820c993c8659 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -35,12 +35,13 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
struct vm_area_struct *vma = vmf->vma;
struct udmabuf *ubuf = vma->vm_private_data;
pgoff_t pgoff = vmf->pgoff;
+ unsigned long pfn;
if (pgoff >= ubuf->pagecount)
return VM_FAULT_SIGBUS;
- vmf->page = ubuf->pages[pgoff];
- get_page(vmf->page);
- return 0;
+
+ pfn = page_to_pfn(ubuf->pages[pgoff]);
+ return vmf_insert_pfn(vma, vmf->address, pfn);
}
static const struct vm_operations_struct udmabuf_vm_ops = {
@@ -56,6 +57,7 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
vma->vm_ops = &udmabuf_vm_ops;
vma->vm_private_data = ubuf;
+ vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
return 0;
}
diff --git a/drivers/edac/bluefield_edac.c b/drivers/edac/bluefield_edac.c
index e4736eb37bfb..0ef048982768 100644
--- a/drivers/edac/bluefield_edac.c
+++ b/drivers/edac/bluefield_edac.c
@@ -180,7 +180,7 @@ static void bluefield_edac_check(struct mem_ctl_info *mci)
static void bluefield_edac_init_dimms(struct mem_ctl_info *mci)
{
struct bluefield_edac_priv *priv = mci->pvt_info;
- int mem_ctrl_idx = mci->mc_idx;
+ u64 mem_ctrl_idx = mci->mc_idx;
struct dimm_info *dimm;
u64 smc_info, smc_arg;
int is_empty = 1, i;
diff --git a/drivers/edac/fsl_ddr_edac.c b/drivers/edac/fsl_ddr_edac.c
index b81757555a8a..7809427c2dbe 100644
--- a/drivers/edac/fsl_ddr_edac.c
+++ b/drivers/edac/fsl_ddr_edac.c
@@ -328,21 +328,25 @@ static void fsl_mc_check(struct mem_ctl_info *mci)
* TODO: Add support for 32-bit wide buses
*/
if ((err_detect & DDR_EDE_SBE) && (bus_width == 64)) {
+ u64 cap = (u64)cap_high << 32 | cap_low;
+ u32 s = syndrome;
+
sbe_ecc_decode(cap_high, cap_low, syndrome,
&bad_data_bit, &bad_ecc_bit);
- if (bad_data_bit != -1)
- fsl_mc_printk(mci, KERN_ERR,
- "Faulty Data bit: %d\n", bad_data_bit);
- if (bad_ecc_bit != -1)
- fsl_mc_printk(mci, KERN_ERR,
- "Faulty ECC bit: %d\n", bad_ecc_bit);
+ if (bad_data_bit >= 0) {
+ fsl_mc_printk(mci, KERN_ERR, "Faulty Data bit: %d\n", bad_data_bit);
+ cap ^= 1ULL << bad_data_bit;
+ }
+
+ if (bad_ecc_bit >= 0) {
+ fsl_mc_printk(mci, KERN_ERR, "Faulty ECC bit: %d\n", bad_ecc_bit);
+ s ^= 1 << bad_ecc_bit;
+ }
fsl_mc_printk(mci, KERN_ERR,
"Expected Data / ECC:\t%#8.8x_%08x / %#2.2x\n",
- cap_high ^ (1 << (bad_data_bit - 32)),
- cap_low ^ (1 << bad_data_bit),
- syndrome ^ (1 << bad_ecc_bit));
+ upper_32_bits(cap), lower_32_bits(cap), s);
}
fsl_mc_printk(mci, KERN_ERR,
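
The fsl_ddr_edac hunk above derives the expected data/ECC by flipping the single
reported faulty bit in the combined 64-bit capture (or in the syndrome), instead of the
old per-half shifts that could shift by a negative or out-of-range amount. A small
standalone illustration with made-up capture values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t cap_high = 0xdeadbeef, cap_low = 0x00000010;
    uint32_t syndrome = 0x21;
    int bad_data_bit = 4;    /* as reported by the ECC decoder, say */
    int bad_ecc_bit = -1;    /* -1 means "no faulty ECC bit" */

    /* Combine the two 32-bit capture registers into one 64-bit value. */
    uint64_t cap = (uint64_t)cap_high << 32 | cap_low;
    uint32_t s = syndrome;

    /* Flipping the reported faulty bit yields the expected value. */
    if (bad_data_bit >= 0)
        cap ^= 1ULL << bad_data_bit;
    if (bad_ecc_bit >= 0)
        s ^= 1U << bad_ecc_bit;

    printf("Expected Data / ECC: %#8.8x_%08x / %#2.2x\n",
           (unsigned int)(cap >> 32), (unsigned int)cap, (unsigned int)s);
    return 0;
}
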
diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
index 2b83d6de9352..535f058b48ee 100644
--- a/drivers/edac/i10nm_base.c
+++ b/drivers/edac/i10nm_base.c
@@ -1088,6 +1088,7 @@ static int __init i10nm_init(void)
return -ENODEV;
cfg = (struct res_config *)id->driver_data;
+ skx_set_res_cfg(cfg);
res_cfg = cfg;
rc = skx_get_hi_lo(0x09a2, off, &tolm, &tohm);
diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
index a0edb61a5a01..0b408299699a 100644
--- a/drivers/edac/igen6_edac.c
+++ b/drivers/edac/igen6_edac.c
@@ -1075,6 +1075,7 @@ static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev)
imc->mci = mci;
return 0;
fail3:
+ mci->pvt_info = NULL;
kfree(mci->ctl_name);
fail2:
edac_mc_free(mci);
@@ -1099,6 +1100,7 @@ static void igen6_unregister_mcis(void)
edac_mc_del_mc(mci->pdev);
kfree(mci->ctl_name);
+ mci->pvt_info = NULL;
edac_mc_free(mci);
iounmap(imc->window);
}
diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
index 8d18099fd528..0b8aaf5f77d9 100644
--- a/drivers/edac/skx_common.c
+++ b/drivers/edac/skx_common.c
@@ -47,6 +47,7 @@ static skx_show_retry_log_f skx_show_retry_rd_err_log;
static u64 skx_tolm, skx_tohm;
static LIST_HEAD(dev_edac_list);
static bool skx_mem_cfg_2lm;
+static struct res_config *skx_res_cfg;
int skx_adxl_get(void)
{
@@ -119,7 +120,7 @@ void skx_adxl_put(void)
}
EXPORT_SYMBOL_GPL(skx_adxl_put);
-static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_mem)
+static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
{
struct skx_dev *d;
int i, len = 0;
@@ -135,8 +136,24 @@ static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_me
return false;
}
+ /*
+ * GNR with a Flat2LM memory configuration may mistakenly classify
+ * a near-memory error(DDR5) as a far-memory error(CXL), resulting
+ * in the incorrect selection of decoded ADXL components.
+ * To address this, prefetch the decoded far-memory controller ID
+ * and adjust the error source to near-memory if the far-memory
+ * controller ID is invalid.
+ */
+ if (skx_res_cfg && skx_res_cfg->type == GNR && err_src == ERR_SRC_2LM_FM) {
+ res->imc = (int)adxl_values[component_indices[INDEX_MEMCTRL]];
+ if (res->imc == -1) {
+ err_src = ERR_SRC_2LM_NM;
+ edac_dbg(0, "Adjust the error source to near-memory.\n");
+ }
+ }
+
res->socket = (int)adxl_values[component_indices[INDEX_SOCKET]];
- if (error_in_1st_level_mem) {
+ if (err_src == ERR_SRC_2LM_NM) {
res->imc = (adxl_nm_bitmap & BIT_NM_MEMCTRL) ?
(int)adxl_values[component_indices[INDEX_NM_MEMCTRL]] : -1;
res->channel = (adxl_nm_bitmap & BIT_NM_CHANNEL) ?
@@ -191,6 +208,12 @@ void skx_set_mem_cfg(bool mem_cfg_2lm)
}
EXPORT_SYMBOL_GPL(skx_set_mem_cfg);
+void skx_set_res_cfg(struct res_config *cfg)
+{
+ skx_res_cfg = cfg;
+}
+EXPORT_SYMBOL_GPL(skx_set_res_cfg);
+
void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log)
{
driver_decode = decode;
@@ -620,31 +643,27 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
optype, skx_msg);
}
-static bool skx_error_in_1st_level_mem(const struct mce *m)
+static enum error_source skx_error_source(const struct mce *m)
{
- u32 errcode;
+ u32 errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
- if (!skx_mem_cfg_2lm)
- return false;
-
- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
-
- return errcode == MCACOD_EXT_MEM_ERR;
-}
+ if (errcode != MCACOD_MEM_CTL_ERR && errcode != MCACOD_EXT_MEM_ERR)
+ return ERR_SRC_NOT_MEMORY;
-static bool skx_error_in_mem(const struct mce *m)
-{
- u32 errcode;
+ if (!skx_mem_cfg_2lm)
+ return ERR_SRC_1LM;
- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
+ if (errcode == MCACOD_EXT_MEM_ERR)
+ return ERR_SRC_2LM_NM;
- return (errcode == MCACOD_MEM_CTL_ERR || errcode == MCACOD_EXT_MEM_ERR);
+ return ERR_SRC_2LM_FM;
}
int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
void *data)
{
struct mce *mce = (struct mce *)data;
+ enum error_source err_src;
struct decoded_addr res;
struct mem_ctl_info *mci;
char *type;
@@ -652,8 +671,10 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
if (mce->kflags & MCE_HANDLED_CEC)
return NOTIFY_DONE;
+ err_src = skx_error_source(mce);
+
/* Ignore unless this is memory related with an address */
- if (!skx_error_in_mem(mce) || !(mce->status & MCI_STATUS_ADDRV))
+ if (err_src == ERR_SRC_NOT_MEMORY || !(mce->status & MCI_STATUS_ADDRV))
return NOTIFY_DONE;
memset(&res, 0, sizeof(res));
@@ -667,7 +688,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
/* Try driver decoder first */
if (!(driver_decode && driver_decode(&res))) {
/* Then try firmware decoder (ACPI DSM methods) */
- if (!(adxl_component_count && skx_adxl_decode(&res, skx_error_in_1st_level_mem(mce))))
+ if (!(adxl_component_count && skx_adxl_decode(&res, err_src)))
return NOTIFY_DONE;
}
diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
index 11faf1db4fa4..e7f18ada1668 100644
--- a/drivers/edac/skx_common.h
+++ b/drivers/edac/skx_common.h
@@ -147,6 +147,13 @@ enum {
INDEX_MAX
};
+enum error_source {
+ ERR_SRC_1LM,
+ ERR_SRC_2LM_NM,
+ ERR_SRC_2LM_FM,
+ ERR_SRC_NOT_MEMORY,
+};
+
#define BIT_NM_MEMCTRL BIT_ULL(INDEX_NM_MEMCTRL)
#define BIT_NM_CHANNEL BIT_ULL(INDEX_NM_CHANNEL)
#define BIT_NM_DIMM BIT_ULL(INDEX_NM_DIMM)
@@ -235,6 +242,7 @@ int skx_adxl_get(void);
void skx_adxl_put(void);
void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
void skx_set_mem_cfg(bool mem_cfg_2lm);
+void skx_set_res_cfg(struct res_config *cfg);
int skx_get_src_id(struct skx_dev *d, int off, u8 *id);
int skx_get_node_id(struct skx_dev *d, u8 *id);
diff --git a/drivers/firmware/arm_scmi/common.h b/drivers/firmware/arm_scmi/common.h
index 00b165d1f502..039f686f4580 100644
--- a/drivers/firmware/arm_scmi/common.h
+++ b/drivers/firmware/arm_scmi/common.h
@@ -163,6 +163,7 @@ void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
* used to initialize this channel
* @dev: Reference to device in the SCMI hierarchy corresponding to this
* channel
+ * @is_p2a: A flag to identify a channel as P2A (RX)
* @rx_timeout_ms: The configured RX timeout in milliseconds.
* @handle: Pointer to SCMI entity handle
* @no_completion_irq: Flag to indicate that this channel has no completion
@@ -174,6 +175,7 @@ void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
struct scmi_chan_info {
int id;
struct device *dev;
+ bool is_p2a;
unsigned int rx_timeout_ms;
struct scmi_handle *handle;
bool no_completion_irq;
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index 3962683e2af9..efa9698c876a 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -855,6 +855,11 @@ static inline void scmi_xfer_command_release(struct scmi_info *info,
static inline void scmi_clear_channel(struct scmi_info *info,
struct scmi_chan_info *cinfo)
{
+ if (!cinfo->is_p2a) {
+ dev_warn(cinfo->dev, "Invalid clear on A2P channel !\n");
+ return;
+ }
+
if (info->desc->ops->clear_channel)
info->desc->ops->clear_channel(cinfo);
}
@@ -2319,6 +2324,7 @@ static int scmi_chan_setup(struct scmi_info *info, struct device_node *of_node,
if (!cinfo)
return -ENOMEM;
+ cinfo->is_p2a = !tx;
cinfo->rx_timeout_ms = info->desc->max_rx_timeout_ms;
/* Create a unique name for this transport device */
diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
index 435d0e2658a4..3de25e9d18ef 100644
--- a/drivers/firmware/arm_scpi.c
+++ b/drivers/firmware/arm_scpi.c
@@ -627,6 +627,9 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
if (ret)
return ERR_PTR(ret);
+ if (!buf.opp_count)
+ return ERR_PTR(-ENOENT);
+
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return ERR_PTR(-ENOMEM);
diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
index f9c1e8a2bd1d..ec01b7d3b6d4 100644
--- a/drivers/firmware/efi/libstub/efi-stub.c
+++ b/drivers/firmware/efi/libstub/efi-stub.c
@@ -129,7 +129,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) ||
IS_ENABLED(CONFIG_CMDLINE_FORCE) ||
- cmdline_size == 0) {
+ cmdline[0] == 0) {
status = efi_parse_options(CONFIG_CMDLINE);
if (status != EFI_SUCCESS) {
efi_err("Failed to parse options\n");
@@ -149,7 +149,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
return EFI_SUCCESS;
fail_free_cmdline:
- efi_bs_call(free_pool, cmdline_ptr);
+ efi_bs_call(free_pool, cmdline);
return status;
}
diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
index e8d69bd548f3..9c3613e6af15 100644
--- a/drivers/firmware/efi/tpm.c
+++ b/drivers/firmware/efi/tpm.c
@@ -40,7 +40,8 @@ int __init efi_tpm_eventlog_init(void)
{
struct linux_efi_tpm_eventlog *log_tbl;
struct efi_tcg2_final_events_table *final_tbl;
- int tbl_size;
+ unsigned int tbl_size;
+ int final_tbl_size;
int ret = 0;
if (efi.tpm_log == EFI_INVALID_TABLE_ADDR) {
@@ -80,26 +81,26 @@ int __init efi_tpm_eventlog_init(void)
goto out;
}
- tbl_size = 0;
+ final_tbl_size = 0;
if (final_tbl->nr_events != 0) {
void *events = (void *)efi.tpm_final_log
+ sizeof(final_tbl->version)
+ sizeof(final_tbl->nr_events);
- tbl_size = tpm2_calc_event_log_size(events,
- final_tbl->nr_events,
- log_tbl->log);
+ final_tbl_size = tpm2_calc_event_log_size(events,
+ final_tbl->nr_events,
+ log_tbl->log);
}
- if (tbl_size < 0) {
+ if (final_tbl_size < 0) {
pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n");
ret = -EINVAL;
goto out_calc;
}
memblock_reserve(efi.tpm_final_log,
- tbl_size + sizeof(*final_tbl));
- efi_tpm_final_log_size = tbl_size;
+ final_tbl_size + sizeof(*final_tbl));
+ efi_tpm_final_log_size = final_tbl_size;
out_calc:
early_memunmap(final_tbl, sizeof(*final_tbl));
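
The efi/tpm hunk above gives the two sizes separate variables: the main log size becomes
unsigned while final_tbl_size stays signed, so the negative error return of
tpm2_calc_event_log_size() remains detectable. The generic pitfall such a split guards
against can be shown in isolation; the sketch below is unrelated to the EFI code itself.

#include <stdio.h>

static int calc_size(int fail)
{
    return fail ? -1 : 128;    /* stand-in for a size-or-negative-error return */
}

int main(void)
{
    unsigned int usize = calc_size(1);    /* -1 wraps to a huge positive value */
    int ssize = calc_size(1);

    if (usize < 0)    /* always false: an unsigned value is never negative */
        printf("unsigned: error detected\n");
    else
        printf("unsigned: error missed, size=%u\n", usize);

    if (ssize < 0)    /* works as intended with a signed variable */
        printf("signed: error detected\n");
    return 0;
}
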
diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
index 96ea1fa76d35..854d488e025e 100644
--- a/drivers/firmware/google/gsmi.c
+++ b/drivers/firmware/google/gsmi.c
@@ -918,7 +918,8 @@ static __init int gsmi_init(void)
gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info);
if (IS_ERR(gsmi_dev.pdev)) {
printk(KERN_ERR "gsmi: unable to register platform device\n");
- return PTR_ERR(gsmi_dev.pdev);
+ ret = PTR_ERR(gsmi_dev.pdev);
+ goto out_unregister;
}
/* SMI access needs to be serialized */
@@ -1056,10 +1057,11 @@ static __init int gsmi_init(void)
gsmi_buf_free(gsmi_dev.name_buf);
kmem_cache_destroy(gsmi_dev.mem_pool);
platform_device_unregister(gsmi_dev.pdev);
- pr_info("gsmi: failed to load: %d\n", ret);
+out_unregister:
#ifdef CONFIG_PM
platform_driver_unregister(&gsmi_driver_info);
#endif
+ pr_info("gsmi: failed to load: %d\n", ret);
return ret;
}
diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
index 5170fe7599cd..d5909a4f0433 100644
--- a/drivers/gpio/gpio-exar.c
+++ b/drivers/gpio/gpio-exar.c
@@ -99,11 +99,13 @@ static void exar_set_value(struct gpio_chip *chip, unsigned int offset,
struct exar_gpio_chip *exar_gpio = gpiochip_get_data(chip);
unsigned int addr = exar_offset_to_lvl_addr(exar_gpio, offset);
unsigned int bit = exar_offset_to_bit(exar_gpio, offset);
+ unsigned int bit_value = value ? BIT(bit) : 0;
- if (value)
- regmap_set_bits(exar_gpio->regmap, addr, BIT(bit));
- else
- regmap_clear_bits(exar_gpio->regmap, addr, BIT(bit));
+ /*
+ * regmap_write_bits() forces value to be written when an external
+ * pull up/down might otherwise indicate value was already set.
+ */
+ regmap_write_bits(exar_gpio->regmap, addr, BIT(bit), bit_value);
}
static int exar_direction_output(struct gpio_chip *chip, unsigned int offset,
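
As the gpio-exar hunk's own comment says, the point of switching to a forced write is
that a read-modify-write helper which skips "unchanged" values can be fooled when the
read reflects an external pull up/down rather than the output latch. A toy standalone
model of that difference; the two helpers and the fake register below are invented for
illustration and are not the regmap implementation.

#include <stdint.h>
#include <stdio.h>

/* Toy register: what a read returns may differ from what was last latched
 * (think of an external pull-up driving the pin while the latch is still 0). */
static uint32_t latch;             /* what the device actually latched */
static uint32_t readback = 0x1;    /* what a read currently returns */

static void bus_write(uint32_t val)
{
    latch = val;
    readback = val;    /* once driven, the read follows the latch */
}

/* update-style helper: skips the write if the read already "matches". */
static void toy_update_bits(uint32_t mask, uint32_t val)
{
    if ((readback & mask) == (val & mask))
        return;    /* skipped: the latch is never updated */
    bus_write((readback & ~mask) | (val & mask));
}

/* forced-write helper: always performs the write. */
static void toy_write_bits(uint32_t mask, uint32_t val)
{
    bus_write((readback & ~mask) | (val & mask));
}

int main(void)
{
    toy_update_bits(0x1, 0x1);    /* looks already set -> latch stays 0 */
    printf("after update-style write: latch=%u\n", latch);
    toy_write_bits(0x1, 0x1);     /* forced write actually sets the latch */
    printf("after forced write:       latch=%u\n", latch);
    return 0;
}
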
diff --git a/drivers/gpio/gpio-zevio.c b/drivers/gpio/gpio-zevio.c
index 2de61337ad3b..d7230fd83f5d 100644
--- a/drivers/gpio/gpio-zevio.c
+++ b/drivers/gpio/gpio-zevio.c
@@ -11,6 +11,7 @@
#include <linux/io.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
@@ -169,6 +170,7 @@ static const struct gpio_chip zevio_gpio_chip = {
/* Initialization */
static int zevio_gpio_probe(struct platform_device *pdev)
{
+ struct device *dev = &pdev->dev;
struct zevio_gpio *controller;
int status, i;
@@ -180,6 +182,10 @@ static int zevio_gpio_probe(struct platform_device *pdev)
controller->chip = zevio_gpio_chip;
controller->chip.parent = &pdev->dev;
+ controller->chip.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", dev_fwnode(dev));
+ if (!controller->chip.label)
+ return -ENOMEM;
+
controller->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(controller->regs))
return dev_err_probe(&pdev->dev, PTR_ERR(controller->regs),
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 9c99d69b4b08..cd2d99e00b5d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -4020,8 +4020,8 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
int idx;
bool px;
- amdgpu_fence_driver_sw_fini(adev);
amdgpu_device_ip_fini(adev);
+ amdgpu_fence_driver_sw_fini(adev);
amdgpu_ucode_release(&adev->firmware.gpu_info_fw);
adev->accel_working = false;
dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 88a3aa36b41d..8e91355ad42c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -214,15 +214,15 @@ int amdgpu_vce_sw_fini(struct amdgpu_device *adev)
drm_sched_entity_destroy(&adev->vce.entity);
- amdgpu_bo_free_kernel(&adev->vce.vcpu_bo, &adev->vce.gpu_addr,
- (void **)&adev->vce.cpu_addr);
-
for (i = 0; i < adev->vce.num_rings; i++)
amdgpu_ring_fini(&adev->vce.ring[i]);
amdgpu_ucode_release(&adev->vce.fw);
mutex_destroy(&adev->vce.idle_mutex);
+ amdgpu_bo_free_kernel(&adev->vce.vcpu_bo, &adev->vce.gpu_addr,
+ (void **)&adev->vce.cpu_addr);
+
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
index 1bea629c49ca..68d13c4fac8f 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
@@ -123,7 +123,7 @@ static bool kq_initialize(struct kernel_queue *kq, struct kfd_node *dev,
memset(kq->pq_kernel_addr, 0, queue_size);
memset(kq->rptr_kernel, 0, sizeof(*kq->rptr_kernel));
- memset(kq->wptr_kernel, 0, sizeof(*kq->wptr_kernel));
+ memset(kq->wptr_kernel, 0, dev->kfd->device_info.doorbell_size);
prop.queue_size = queue_size;
prop.is_interop = false;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index 6c90231e0aec..fd640a061c96 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -312,8 +312,8 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
attr_sdma);
struct kfd_sdma_activity_handler_workarea sdma_activity_work_handler;
- INIT_WORK(&sdma_activity_work_handler.sdma_activity_work,
- kfd_sdma_activity_worker);
+ INIT_WORK_ONSTACK(&sdma_activity_work_handler.sdma_activity_work,
+ kfd_sdma_activity_worker);
sdma_activity_work_handler.pdd = pdd;
sdma_activity_work_handler.sdma_activity_counter = 0;
@@ -321,6 +321,7 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
schedule_work(&sdma_activity_work_handler.sdma_activity_work);
flush_work(&sdma_activity_work_handler.sdma_activity_work);
+ destroy_work_on_stack(&sdma_activity_work_handler.sdma_activity_work);
return snprintf(buffer, PAGE_SIZE, "%llu\n",
(sdma_activity_work_handler.sdma_activity_counter)/
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index d390e3d62e56..9ec9792f115a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -179,6 +179,8 @@ amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
dc_sink_release(dc_sink);
aconnector->dc_sink = NULL;
aconnector->edid = NULL;
+ aconnector->dsc_aux = NULL;
+ port->passthrough_aux = NULL;
}
aconnector->mst_status = MST_STATUS_DEFAULT;
@@ -487,6 +489,8 @@ dm_dp_mst_detect(struct drm_connector *connector,
dc_sink_release(aconnector->dc_sink);
aconnector->dc_sink = NULL;
aconnector->edid = NULL;
+ aconnector->dsc_aux = NULL;
+ port->passthrough_aux = NULL;
amdgpu_dm_set_mst_status(&aconnector->mst_status,
MST_REMOTE_EDID | MST_ALLOCATE_NEW_PAYLOAD | MST_CLEAR_ALLOCATED_PAYLOAD,
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
index 3271c8c7905d..4e036356b6a8 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
@@ -560,11 +560,19 @@ void dcn3_clk_mgr_construct(
dce_clock_read_ss_info(clk_mgr);
clk_mgr->base.bw_params = kzalloc(sizeof(*clk_mgr->base.bw_params), GFP_KERNEL);
+ if (!clk_mgr->base.bw_params) {
+ BREAK_TO_DEBUGGER();
+ return;
+ }
/* need physical address of table to give to PMFW */
clk_mgr->wm_range_table = dm_helpers_allocate_gpu_mem(clk_mgr->base.ctx,
DC_MEM_ALLOC_TYPE_GART, sizeof(WatermarksExternal_t),
&clk_mgr->wm_range_table_addr);
+ if (!clk_mgr->wm_range_table) {
+ BREAK_TO_DEBUGGER();
+ return;
+ }
}
void dcn3_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index 2428a4763b85..1c5ae4d62e37 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -1022,11 +1022,19 @@ void dcn32_clk_mgr_construct(
clk_mgr->smu_present = false;
clk_mgr->base.bw_params = kzalloc(sizeof(*clk_mgr->base.bw_params), GFP_KERNEL);
+ if (!clk_mgr->base.bw_params) {
+ BREAK_TO_DEBUGGER();
+ return;
+ }
/* need physical address of table to give to PMFW */
clk_mgr->wm_range_table = dm_helpers_allocate_gpu_mem(clk_mgr->base.ctx,
DC_MEM_ALLOC_TYPE_GART, sizeof(WatermarksExternal_t),
&clk_mgr->wm_range_table_addr);
+ if (!clk_mgr->wm_range_table) {
+ BREAK_TO_DEBUGGER();
+ return;
+ }
}
void dcn32_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
index f99ec1b0efaf..2eae1fd95fd0 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
@@ -727,6 +727,9 @@ void hwss_setup_dpp(union block_sequence_params *params)
struct dpp *dpp = pipe_ctx->plane_res.dpp;
struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+ if (!plane_state)
+ return;
+
if (dpp && dpp->funcs->dpp_setup) {
// program the input csc
dpp->funcs->dpp_setup(dpp,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index 12af2859002f..a825fd6c7fa6 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -880,7 +880,8 @@ bool dcn20_set_output_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
/*
* if above if is not executed then 'params' equal to 0 and set in bypass
*/
- mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
+ if (mpc->funcs->set_output_gamma)
+ mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
return true;
}
@@ -1732,17 +1733,26 @@ static void dcn20_program_pipe(
dc->res_pool->hubbub->funcs->program_det_size(
dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->det_buffer_size_kb);
- if (pipe_ctx->update_flags.raw || pipe_ctx->plane_state->update_flags.raw || pipe_ctx->stream->update_flags.raw)
+ if (pipe_ctx->update_flags.raw ||
+ (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
+ pipe_ctx->stream->update_flags.raw)
dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
- if (pipe_ctx->update_flags.bits.enable
- || pipe_ctx->plane_state->update_flags.bits.hdr_mult)
+ if (pipe_ctx->update_flags.bits.enable ||
+ (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.bits.hdr_mult))
hws->funcs.set_hdr_multiplier(pipe_ctx);
- if (pipe_ctx->update_flags.bits.enable ||
- pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change ||
- pipe_ctx->plane_state->update_flags.bits.gamma_change ||
- pipe_ctx->plane_state->update_flags.bits.lut_3d)
+ if ((pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.bits.hdr_mult) ||
+ pipe_ctx->update_flags.bits.enable)
+ hws->funcs.set_hdr_multiplier(pipe_ctx);
+
+ if ((pipe_ctx->plane_state &&
+ pipe_ctx->plane_state->update_flags.bits.in_transfer_func_change) ||
+ (pipe_ctx->plane_state &&
+ pipe_ctx->plane_state->update_flags.bits.gamma_change) ||
+ (pipe_ctx->plane_state &&
+ pipe_ctx->plane_state->update_flags.bits.lut_3d) ||
+ pipe_ctx->update_flags.bits.enable)
hws->funcs.set_input_transfer_func(dc, pipe_ctx, pipe_ctx->plane_state);
/* dcn10_translate_regamma_to_hw_format takes 750us to finish
@@ -1752,7 +1762,8 @@ static void dcn20_program_pipe(
if (pipe_ctx->update_flags.bits.enable ||
pipe_ctx->update_flags.bits.plane_changed ||
pipe_ctx->stream->update_flags.bits.out_tf ||
- pipe_ctx->plane_state->update_flags.bits.output_tf_change)
+ (pipe_ctx->plane_state &&
+ pipe_ctx->plane_state->update_flags.bits.output_tf_change))
hws->funcs.set_output_transfer_func(dc, pipe_ctx, pipe_ctx->stream);
/* If the pipe has been enabled or has a different opp, we
@@ -1776,7 +1787,7 @@ static void dcn20_program_pipe(
}
/* Set ABM pipe after other pipe configurations done */
- if (pipe_ctx->plane_state->visible) {
+ if ((pipe_ctx->plane_state && pipe_ctx->plane_state->visible)) {
if (pipe_ctx->stream_res.abm) {
dc->hwss.set_pipe(pipe_ctx);
pipe_ctx->stream_res.abm->funcs->set_abm_level(pipe_ctx->stream_res.abm,
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
index ba4a1e7f196d..b8653bdfc40f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
@@ -440,7 +440,7 @@ void dcn30_init_hw(struct dc *dc)
int edp_num;
uint32_t backlight = MAX_BACKLIGHT_LEVEL;
- if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
+ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->init_clocks)
dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
// Initialize the dccg
@@ -599,11 +599,12 @@ void dcn30_init_hw(struct dc *dc)
if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
- if (dc->clk_mgr->funcs->notify_wm_ranges)
+ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->notify_wm_ranges)
dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
//if softmax is enabled then hardmax will be set by a different call
- if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->set_hard_max_memclk &&
+ !dc->clk_mgr->dc_mode_softmax_enabled)
dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
index 88c0b24a3249..de83acd12250 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
@@ -2045,6 +2045,9 @@ bool dcn30_validate_bandwidth(struct dc *dc,
BW_VAL_TRACE_COUNT();
+ if (!pipes)
+ goto validate_fail;
+
DC_FP_START();
out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate, true);
DC_FP_END();
diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
index 82de4fe2637f..84e3df49be2f 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
@@ -1308,6 +1308,8 @@ static struct hpo_dp_link_encoder *dcn31_hpo_dp_link_encoder_create(
/* allocate HPO link encoder */
hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+ if (!hpo_dp_enc31)
+ return NULL; /* out of memory */
hpo_dp_link_encoder31_construct(hpo_dp_enc31, ctx, inst,
&hpo_dp_link_enc_regs[inst],
@@ -1764,6 +1766,9 @@ bool dcn31_validate_bandwidth(struct dc *dc,
BW_VAL_TRACE_COUNT();
+ if (!pipes)
+ goto validate_fail;
+
DC_FP_START();
out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate, true);
DC_FP_END();
diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
index 3e65e683db0a..6e52851bc031 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
@@ -1381,6 +1381,8 @@ static struct hpo_dp_link_encoder *dcn31_hpo_dp_link_encoder_create(
/* allocate HPO link encoder */
hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+ if (!hpo_dp_enc31)
+ return NULL; /* out of memory */
hpo_dp_link_encoder31_construct(hpo_dp_enc31, ctx, inst,
&hpo_dp_link_enc_regs[inst],
@@ -1741,6 +1743,9 @@ bool dcn314_validate_bandwidth(struct dc *dc,
BW_VAL_TRACE_COUNT();
+ if (!pipes)
+ goto validate_fail;
+
if (filter_modes_for_single_channel_workaround(dc, context))
goto validate_fail;
diff --git a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c
index 127487ea3d7d..3f3b555b4523 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn315/dcn315_resource.c
@@ -1308,6 +1308,8 @@ static struct hpo_dp_link_encoder *dcn31_hpo_dp_link_encoder_create(
/* allocate HPO link encoder */
hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+ if (!hpo_dp_enc31)
+ return NULL; /* out of memory */
hpo_dp_link_encoder31_construct(hpo_dp_enc31, ctx, inst,
&hpo_dp_link_enc_regs[inst],
diff --git a/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c b/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c
index 5fe2c61527df..37b7973fc949 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn316/dcn316_resource.c
@@ -1305,6 +1305,8 @@ static struct hpo_dp_link_encoder *dcn31_hpo_dp_link_encoder_create(
/* allocate HPO link encoder */
hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+ if (!hpo_dp_enc31)
+ return NULL; /* out of memory */
hpo_dp_link_encoder31_construct(hpo_dp_enc31, ctx, inst,
&hpo_dp_link_enc_regs[inst],
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
index 650e1598bddc..1a24fc8b5367 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_hwseq.c
@@ -587,7 +587,9 @@ bool dcn32_set_output_transfer_func(struct dc *dc,
}
}
- mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
+ if (mpc->funcs->set_output_gamma)
+ mpc->funcs->set_output_gamma(mpc, mpcc_id, params);
+
return ret;
}
@@ -771,7 +773,7 @@ void dcn32_init_hw(struct dc *dc)
int edp_num;
uint32_t backlight = MAX_BACKLIGHT_LEVEL;
- if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
+ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->init_clocks)
dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
// Initialize the dccg
@@ -948,10 +950,11 @@ void dcn32_init_hw(struct dc *dc)
if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
- if (dc->clk_mgr->funcs->notify_wm_ranges)
+ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->notify_wm_ranges)
dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
- if (dc->clk_mgr->funcs->set_hard_max_memclk && !dc->clk_mgr->dc_mode_softmax_enabled)
+ if (dc->clk_mgr && dc->clk_mgr->funcs && dc->clk_mgr->funcs->set_hard_max_memclk &&
+ !dc->clk_mgr->dc_mode_softmax_enabled)
dc->clk_mgr->funcs->set_hard_max_memclk(dc->clk_mgr);
if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
index f9d601c8c721..f98f35ac68c0 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_resource.c
@@ -1299,6 +1299,8 @@ static struct hpo_dp_link_encoder *dcn32_hpo_dp_link_encoder_create(
/* allocate HPO link encoder */
hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+ if (!hpo_dp_enc31)
+ return NULL; /* out of memory */
#undef REG_STRUCT
#define REG_STRUCT hpo_dp_link_enc_regs
@@ -1786,6 +1788,9 @@ void dcn32_add_phantom_pipes(struct dc *dc, struct dc_state *context,
// be a valid candidate for SubVP (i.e. has a plane, stream, doesn't
// already have phantom pipe assigned, etc.) by previous checks.
phantom_stream = dcn32_enable_phantom_stream(dc, context, pipes, pipe_cnt, index);
+ if (!phantom_stream)
+ return;
+
dcn32_enable_phantom_plane(dc, context, phantom_stream, index);
for (i = 0; i < dc->res_pool->pipe_count; i++) {
@@ -1842,6 +1847,9 @@ bool dcn32_validate_bandwidth(struct dc *dc,
BW_VAL_TRACE_COUNT();
+ if (!pipes)
+ goto validate_fail;
+
DC_FP_START();
out = dcn32_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
DC_FP_END();
diff --git a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
index aa4c64eec7b3..4289cd1643ec 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn321/dcn321_resource.c
@@ -1285,6 +1285,8 @@ static struct hpo_dp_link_encoder *dcn321_hpo_dp_link_encoder_create(
/* allocate HPO link encoder */
hpo_dp_enc31 = kzalloc(sizeof(struct dcn31_hpo_dp_link_encoder), GFP_KERNEL);
+ if (!hpo_dp_enc31)
+ return NULL; /* out of memory */
#undef REG_STRUCT
#define REG_STRUCT hpo_dp_link_enc_regs
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
index 548cdef8a8ad..543ce9a08cfd 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_rq_dlg_calc_20.c
@@ -78,7 +78,7 @@ static void calculate_ttu_cursor(struct display_mode_lib *mode_lib,
static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
{
- unsigned int ret_val = 0;
+ unsigned int ret_val = 1;
if (source_format == dm_444_16) {
if (!is_chroma)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
index 3d82cbef1274..ac6357c089e7 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
@@ -932,8 +932,9 @@ static bool subvp_drr_schedulable(struct dc *dc, struct dc_state *context)
* for VBLANK: (VACTIVE region of the SubVP pipe can fit the MALL prefetch, VBLANK frame time,
* and the max of (VBLANK blanking time, MALL region)).
*/
- if (stretched_drr_us < (1 / (double)drr_timing->min_refresh_in_uhz) * 1000000 * 1000000 &&
- subvp_active_us - prefetch_us - stretched_drr_us - max_vblank_mallregion > 0)
+ if (drr_timing &&
+ stretched_drr_us < (1 / (double)drr_timing->min_refresh_in_uhz) * 1000000 * 1000000 &&
+ subvp_active_us - prefetch_us - stretched_drr_us - max_vblank_mallregion > 0)
schedulable = true;
return schedulable;
@@ -995,7 +996,7 @@ static bool subvp_vblank_schedulable(struct dc *dc, struct dc_state *context)
if (!subvp_pipe && pipe->stream->mall_stream_config.type == SUBVP_MAIN)
subvp_pipe = pipe;
}
- if (found) {
+ if (found && subvp_pipe) {
main_timing = &subvp_pipe->stream->timing;
phantom_timing = &subvp_pipe->stream->mall_stream_config.paired_stream->timing;
vblank_timing = &context->res_ctx.pipe_ctx[vblank_index].stream->timing;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
index 3df559c591f8..70df992f859d 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dml1_display_rq_dlg_calc.c
@@ -39,7 +39,7 @@
static unsigned int get_bytes_per_element(enum source_format_class source_format, bool is_chroma)
{
- unsigned int ret_val = 0;
+ unsigned int ret_val = 1;
if (source_format == dm_444_16) {
if (!is_chroma)
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
index 51ae41cb43ea..e1521d3a5e0c 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
@@ -1725,6 +1725,8 @@ static ssize_t smu_v13_0_7_get_gpu_metrics(struct smu_context *smu,
gpu_metrics->average_dclk1_frequency = metrics->AverageDclk1Frequency;
gpu_metrics->current_gfxclk = metrics->CurrClock[PPCLK_GFXCLK];
+ gpu_metrics->current_socclk = metrics->CurrClock[PPCLK_SOCCLK];
+ gpu_metrics->current_uclk = metrics->CurrClock[PPCLK_UCLK];
gpu_metrics->current_vclk0 = metrics->CurrClock[PPCLK_VCLK_0];
gpu_metrics->current_dclk0 = metrics->CurrClock[PPCLK_DCLK_0];
gpu_metrics->current_vclk1 = metrics->CurrClock[PPCLK_VCLK_1];
diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
index c1191ef5e8e6..412c6575e87b 100644
--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
+++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
@@ -2573,6 +2573,8 @@ static int __maybe_unused anx7625_runtime_pm_suspend(struct device *dev)
mutex_lock(&ctx->lock);
anx7625_stop_dp_work(ctx);
+ if (!ctx->pdata.panel_bridge)
+ anx7625_remove_edid(ctx);
anx7625_power_standby(ctx);
mutex_unlock(&ctx->lock);
diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
index 4ad527fe04f2..93eb8fba23d4 100644
--- a/drivers/gpu/drm/bridge/ite-it6505.c
+++ b/drivers/gpu/drm/bridge/ite-it6505.c
@@ -3104,6 +3104,8 @@ static __maybe_unused int it6505_bridge_suspend(struct device *dev)
{
struct it6505 *it6505 = dev_get_drvdata(dev);
+ it6505_remove_edid(it6505);
+
return it6505_poweroff(it6505);
}
diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
index 7fd4a5fe03ed..6a3f29390313 100644
--- a/drivers/gpu/drm/bridge/tc358767.c
+++ b/drivers/gpu/drm/bridge/tc358767.c
@@ -1579,6 +1579,13 @@ static struct edid *tc_get_edid(struct drm_bridge *bridge,
struct drm_connector *connector)
{
struct tc_data *tc = bridge_to_tc(bridge);
+ int ret;
+
+ ret = tc_get_display_props(tc);
+ if (ret < 0) {
+ dev_err(tc->dev, "failed to read display props: %d\n", ret);
+ return 0;
+ }
return drm_get_edid(connector, &tc->aux.ddc);
}
diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index 48af0e2960a2..1d22dba69b27 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -149,7 +149,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
*/
struct drm_file *drm_file_alloc(struct drm_minor *minor)
{
- static atomic64_t ident = ATOMIC_INIT(0);
+ static atomic64_t ident = ATOMIC64_INIT(0);
struct drm_device *dev = minor->dev;
struct drm_file *file;
int ret;
diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 8257f9d4f619..22a373eaffef 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -151,7 +151,7 @@ static void show_leaks(struct drm_mm *mm) { }
INTERVAL_TREE_DEFINE(struct drm_mm_node, rb,
u64, __subtree_last,
- START, LAST, static inline, drm_mm_interval_tree)
+ START, LAST, static inline __maybe_unused, drm_mm_interval_tree)
struct drm_mm_node *
__drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
index 5b2506c65e95..259a0c765baf 100644
--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
+++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
@@ -403,7 +403,6 @@ static const struct dmi_system_id orientation_data[] = {
}, { /* Lenovo Yoga Tab 3 X90F */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
- DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
},
.driver_data = (void *)&lcd1600x2560_rightside_up,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
index 384df1659be6..b13a17276d07 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
@@ -482,7 +482,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
} else {
CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE,
VIVS_GL_FLUSH_CACHE_DEPTH |
- VIVS_GL_FLUSH_CACHE_COLOR);
+ VIVS_GL_FLUSH_CACHE_COLOR |
+ VIVS_GL_FLUSH_CACHE_SHADER_L1);
if (has_blt) {
CMD_LOAD_STATE(buffer, VIVS_BLT_ENABLE, 0x1);
CMD_LOAD_STATE(buffer, VIVS_BLT_SET_COMMAND, 0x1);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
index f9bc837e22bd..85d0695e94a5 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
@@ -527,6 +527,16 @@ static int etnaviv_bind(struct device *dev)
priv->num_gpus = 0;
priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+ /*
+ * If the GPU is part of a system with DMA addressing limitations,
+ * request pages for our SHM backend buffers from the DMA32 zone to
+ * hopefully avoid performance killing SWIOTLB bounce buffering.
+ */
+ if (dma_addressing_limited(dev)) {
+ priv->shm_gfp_mask |= GFP_DMA32;
+ priv->shm_gfp_mask &= ~__GFP_HIGHMEM;
+ }
+
priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev);
if (IS_ERR(priv->cmdbuf_suballoc)) {
dev_err(drm->dev, "Failed to create cmdbuf suballocator\n");
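
This hunk moves the DMA32 fallback from GPU init (removed in the following etnaviv_gpu.c hunk) into etnaviv_bind() and additionally clears __GFP_HIGHMEM, since the two zone modifiers are not meant to be combined. A minimal sketch of the pattern, assuming a driver-private allocation mask:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

struct example_priv {
	gfp_t shm_gfp_mask;	/* hypothetical per-driver allocation mask */
};

static void example_pick_shmem_gfp(struct device *dev, struct example_priv *priv)
{
	priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;

	/*
	 * A device that cannot address all of memory gets its buffers from
	 * the DMA32 zone so they do not go through SWIOTLB bouncing.
	 */
	if (dma_addressing_limited(dev)) {
		priv->shm_gfp_mask |= GFP_DMA32;
		priv->shm_gfp_mask &= ~__GFP_HIGHMEM;
	}
}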
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
index 371e1f2733f6..ad543a7cbf07 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -820,14 +820,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
if (ret)
goto fail;
- /*
- * If the GPU is part of a system with DMA addressing limitations,
- * request pages for our SHM backend buffers from the DMA32 zone to
- * hopefully avoid performance killing SWIOTLB bounce buffering.
- */
- if (dma_addressing_limited(gpu->dev))
- priv->shm_gfp_mask |= GFP_DMA32;
-
/* Create buffer: */
ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer,
PAGE_SIZE);
@@ -1308,6 +1300,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
{
u32 val;
+ mutex_lock(&gpu->lock);
+
/* disable clock gating */
val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
val &= ~VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
@@ -1319,6 +1313,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, val);
sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_PRE);
+
+ mutex_unlock(&gpu->lock);
}
static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
@@ -1328,13 +1324,9 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
unsigned int i;
u32 val;
- sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
-
- for (i = 0; i < submit->nr_pmrs; i++) {
- const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
+ mutex_lock(&gpu->lock);
- *pmr->bo_vma = pmr->sequence;
- }
+ sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
/* disable debug register */
val = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
@@ -1345,6 +1337,14 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
val |= VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, val);
+
+ mutex_unlock(&gpu->lock);
+
+ for (i = 0; i < submit->nr_pmrs; i++) {
+ const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
+
+ *pmr->bo_vma = pmr->sequence;
+ }
}
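
The reworked perfmon hooks above take gpu->lock around the register accesses and only copy the sampled sequence numbers into the buffer objects after dropping it. A schematic sketch of that ordering, with hypothetical, reduced types:

#include <linux/mutex.h>
#include <linux/types.h>

struct example_pmr { u32 sequence; u32 *bo_vma; };
struct example_submit { unsigned int nr_pmrs; struct example_pmr *pmrs; };
struct example_gpu { struct mutex lock; };

static void example_sample_counters(struct example_gpu *gpu)
{
	/* stands in for the register reads/writes done under the lock */
}

static void example_perfmon_post(struct example_gpu *gpu,
				 struct example_submit *submit)
{
	unsigned int i;

	mutex_lock(&gpu->lock);
	example_sample_counters(gpu);
	mutex_unlock(&gpu->lock);

	/* Publish the results to the mapped BOs outside the lock. */
	for (i = 0; i < submit->nr_pmrs; i++)
		*submit->pmrs[i].bo_vma = submit->pmrs[i].sequence;
}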
diff --git a/drivers/gpu/drm/fsl-dcu/Kconfig b/drivers/gpu/drm/fsl-dcu/Kconfig
index 5ca71ef87325..c9ee98693b48 100644
--- a/drivers/gpu/drm/fsl-dcu/Kconfig
+++ b/drivers/gpu/drm/fsl-dcu/Kconfig
@@ -8,6 +8,7 @@ config DRM_FSL_DCU
select DRM_PANEL
select REGMAP_MMIO
select VIDEOMODE_HELPERS
+ select MFD_SYSCON if SOC_LS1021A
help
Choose this option if you have an Freescale DCU chipset.
If M is selected the module will be called fsl-dcu-drm.
diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
index a395f93449f3..a23f3f5c5530 100644
--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
@@ -100,6 +100,7 @@ static void fsl_dcu_irq_uninstall(struct drm_device *dev)
static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
{
struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
+ struct regmap *scfg;
int ret;
ret = fsl_dcu_drm_modeset_init(fsl_dev);
@@ -108,6 +109,20 @@ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
return ret;
}
+ scfg = syscon_regmap_lookup_by_compatible("fsl,ls1021a-scfg");
+ if (PTR_ERR(scfg) != -ENODEV) {
+ /*
+ * For simplicity, enable the PIXCLK unconditionally,
+ * resulting in increased power consumption. Disabling
+ * the clock in PM or on unload could be implemented as
+ * a future improvement.
+ */
+ ret = regmap_update_bits(scfg, SCFG_PIXCLKCR, SCFG_PIXCLKCR_PXCEN,
+ SCFG_PIXCLKCR_PXCEN);
+ if (ret < 0)
+ return dev_err_probe(dev->dev, ret, "failed to enable pixclk\n");
+ }
+
ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
if (ret < 0) {
dev_err(dev->dev, "failed to initialize vblank\n");
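
The new block enables the LS1021A pixel clock through the SCFG syscon, using the register offset and bit added to the header below. A minimal sketch of the lookup-and-set pattern with a slightly stricter error check, names hypothetical:

#include <linux/bits.h>
#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>

#define EXAMPLE_SCFG_PIXCLKCR		0x28
#define EXAMPLE_SCFG_PIXCLKCR_PXCEN	BIT(31)

static int example_enable_pixclk(void)
{
	struct regmap *scfg;

	scfg = syscon_regmap_lookup_by_compatible("fsl,ls1021a-scfg");
	if (IS_ERR(scfg))
		/* -ENODEV just means this SoC has no SCFG node. */
		return PTR_ERR(scfg) == -ENODEV ? 0 : PTR_ERR(scfg);

	return regmap_update_bits(scfg, EXAMPLE_SCFG_PIXCLKCR,
				  EXAMPLE_SCFG_PIXCLKCR_PXCEN,
				  EXAMPLE_SCFG_PIXCLKCR_PXCEN);
}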
diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
index e2049a0e8a92..566396013c04 100644
--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
+++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
@@ -160,6 +160,9 @@
#define FSL_DCU_ARGB4444 12
#define FSL_DCU_YUV422 14
+#define SCFG_PIXCLKCR 0x28
+#define SCFG_PIXCLKCR_PXCEN BIT(31)
+
#define VF610_LAYER_REG_NUM 9
#define LS1021A_LAYER_REG_NUM 10
diff --git a/drivers/gpu/drm/imx/dcss/dcss-crtc.c b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
index 31267c00782f..af91e45b5d13 100644
--- a/drivers/gpu/drm/imx/dcss/dcss-crtc.c
+++ b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
@@ -206,15 +206,13 @@ int dcss_crtc_init(struct dcss_crtc *crtc, struct drm_device *drm)
if (crtc->irq < 0)
return crtc->irq;
- ret = request_irq(crtc->irq, dcss_crtc_irq_handler,
- 0, "dcss_drm", crtc);
+ ret = request_irq(crtc->irq, dcss_crtc_irq_handler, IRQF_NO_AUTOEN,
+ "dcss_drm", crtc);
if (ret) {
dev_err(dcss->dev, "irq request failed with %d.\n", ret);
return ret;
}
- disable_irq(crtc->irq);
-
return 0;
}
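
This hunk, like the ipuv3 and a6xx GMU hunks further down, replaces the request_irq() + disable_irq() pair with IRQF_NO_AUTOEN, which registers the handler without enabling the line and so closes the window in which the interrupt could fire before the driver is ready. A minimal sketch, with hypothetical names:

#include <linux/interrupt.h>

static irqreturn_t example_irq_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_setup_irq(int irq, void *ctx)
{
	int ret;

	/* The line stays masked until we explicitly enable it. */
	ret = request_irq(irq, example_irq_handler, IRQF_NO_AUTOEN,
			  "example", ctx);
	if (ret)
		return ret;

	/* ... once the hardware is fully initialized: */
	enable_irq(irq);
	return 0;
}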
diff --git a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
index 89585b31b985..5f423a2e0ede 100644
--- a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
+++ b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
@@ -410,14 +410,12 @@ static int ipu_drm_bind(struct device *dev, struct device *master, void *data)
}
ipu_crtc->irq = ipu_plane_irq(ipu_crtc->plane[0]);
- ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0,
- "imx_drm", ipu_crtc);
+ ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler,
+ IRQF_NO_AUTOEN, "imx_drm", ipu_crtc);
if (ret < 0) {
dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret);
return ret;
}
- /* Only enable IRQ when we actually need it to trigger work. */
- disable_irq(ipu_crtc->irq);
return 0;
}
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index ffe016d6cbcf..600f4ccc90d3 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -378,8 +378,10 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
if (all_drm_priv[cnt] && all_drm_priv[cnt]->mtk_drm_bound)
cnt++;
- if (cnt == MAX_CRTC)
+ if (cnt == MAX_CRTC) {
+ of_node_put(node);
break;
+ }
}
if (drm_priv->data->mmsys_dev_num == cnt) {
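
The added of_node_put() matters because for_each_child_of_node() only drops its reference on the current node when the loop advances; breaking out early would otherwise leak it. A minimal sketch of the same idiom, with hypothetical names:

#include <linux/of.h>

static unsigned int example_count_children(struct device_node *parent,
					   unsigned int max)
{
	struct device_node *child;
	unsigned int cnt = 0;

	for_each_child_of_node(parent, child) {
		if (++cnt == max) {
			/* Early exit: drop the reference the iterator holds. */
			of_node_put(child);
			break;
		}
	}

	return cnt;
}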
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 7923129363b0..c9edaa6d7636 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1432,15 +1432,13 @@ static int a6xx_gmu_get_irq(struct a6xx_gmu *gmu, struct platform_device *pdev,
irq = platform_get_irq_byname(pdev, name);
- ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH, name, gmu);
+ ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH | IRQF_NO_AUTOEN, name, gmu);
if (ret) {
DRM_DEV_ERROR(&pdev->dev, "Unable to get interrupt %s %d\n",
name, ret);
return ret;
}
- disable_irq(irq);
-
return irq;
}
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
index 43c47a19cd94..a857ce8e385f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
@@ -157,18 +157,6 @@ static const struct dpu_lm_cfg msm8998_lm[] = {
.sblk = &msm8998_lm_sblk,
.lm_pair = LM_5,
.pingpong = PINGPONG_2,
- }, {
- .name = "lm_3", .id = LM_3,
- .base = 0x47000, .len = 0x320,
- .features = MIXER_MSM8998_MASK,
- .sblk = &msm8998_lm_sblk,
- .pingpong = PINGPONG_NONE,
- }, {
- .name = "lm_4", .id = LM_4,
- .base = 0x48000, .len = 0x320,
- .features = MIXER_MSM8998_MASK,
- .sblk = &msm8998_lm_sblk,
- .pingpong = PINGPONG_NONE,
}, {
.name = "lm_5", .id = LM_5,
.base = 0x49000, .len = 0x320,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
index 88a5177dfdb7..3749c014870d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
@@ -156,19 +156,6 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
.lm_pair = LM_5,
.pingpong = PINGPONG_2,
.dspp = DSPP_2,
- }, {
- .name = "lm_3", .id = LM_3,
- .base = 0x0, .len = 0x320,
- .features = MIXER_SDM845_MASK,
- .sblk = &sdm845_lm_sblk,
- .pingpong = PINGPONG_NONE,
- .dspp = DSPP_3,
- }, {
- .name = "lm_4", .id = LM_4,
- .base = 0x0, .len = 0x320,
- .features = MIXER_SDM845_MASK,
- .sblk = &sdm845_lm_sblk,
- .pingpong = PINGPONG_NONE,
}, {
.name = "lm_5", .id = LM_5,
.base = 0x49000, .len = 0x320,
@@ -176,6 +163,7 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
.sblk = &sdm845_lm_sblk,
.lm_pair = LM_2,
.pingpong = PINGPONG_3,
+ .dspp = DSPP_3,
},
};
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
index 68fae048a9a8..260accc151d4 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
@@ -80,7 +80,7 @@ static u64 _dpu_core_perf_calc_clk(const struct dpu_perf_cfg *perf_cfg,
mode = &state->adjusted_mode;
- crtc_clk = mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
+ crtc_clk = (u64)mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
drm_atomic_crtc_for_each_plane(plane, crtc) {
pstate = to_dpu_plane_state(plane->state);
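
The cast makes the vtotal * hdisplay * vrefresh product be computed in 64-bit arithmetic; done in plain int it can overflow for large modes (for example 7680 x 4400 at 120 Hz is roughly 4.06e9, beyond signed 32-bit range). A minimal sketch:

#include <linux/types.h>

static u64 example_crtc_clk(int vtotal, int hdisplay, int vrefresh)
{
	/*
	 * Casting the first factor widens the whole multiplication to u64;
	 * without it the product is computed (and may wrap) in 32 bits.
	 */
	return (u64)vtotal * hdisplay * vrefresh;
}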
diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
index ea70c1c32d94..6970b0f7f457 100644
--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
@@ -140,6 +140,7 @@ void msm_devfreq_init(struct msm_gpu *gpu)
{
struct msm_gpu_devfreq *df = &gpu->devfreq;
struct msm_drm_private *priv = gpu->dev->dev_private;
+ int ret;
/* We need target support to do devfreq */
if (!gpu->funcs->gpu_busy)
@@ -156,8 +157,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
mutex_init(&df->lock);
- dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
- DEV_PM_QOS_MIN_FREQUENCY, 0);
+ ret = dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
+ DEV_PM_QOS_MIN_FREQUENCY, 0);
+ if (ret < 0) {
+ DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize QoS\n");
+ return;
+ }
msm_devfreq_profile.initial_freq = gpu->fast_rate;
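
The devfreq init now checks the return value of dev_pm_qos_add_request() and bails out instead of later operating on a request that was never added. A minimal sketch of the pattern, with hypothetical names:

#include <linux/device.h>
#include <linux/pm_qos.h>

static int example_add_boost_request(struct device *dev,
				     struct dev_pm_qos_request *req)
{
	int ret;

	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_MIN_FREQUENCY, 0);
	if (ret < 0) {
		dev_err(dev, "couldn't add min-frequency QoS request: %d\n", ret);
		return ret;
	}

	return 0;
}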
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
index 3648868bb9fc..cd533d16b966 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
@@ -443,6 +443,7 @@ gf100_gr_chan_new(struct nvkm_gr *base, struct nvkm_chan *fifoch,
ret = gf100_grctx_generate(gr, chan, fifoch->inst);
if (ret) {
nvkm_error(&base->engine.subdev, "failed to construct context\n");
+ mutex_unlock(&gr->fecs.mutex);
return ret;
}
}
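
The gf100 fix releases the FECS mutex on the context-construction error path, which was previously left held. The usual way to keep such paths in sync is a single unlock point reached via goto; a minimal sketch with hypothetical names:

#include <linux/mutex.h>

struct example_gr { struct mutex fecs_mutex; };

static int example_do_generate(struct example_gr *gr) { return 0; }
static int example_finish(struct example_gr *gr) { return 0; }

static int example_generate_context(struct example_gr *gr)
{
	int ret;

	mutex_lock(&gr->fecs_mutex);

	ret = example_do_generate(gr);
	if (ret)
		goto out_unlock;	/* error paths still drop the lock */

	ret = example_finish(gr);

out_unlock:
	mutex_unlock(&gr->fecs_mutex);
	return ret;
}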
diff --git a/drivers/gpu/drm/omapdrm/dss/base.c b/drivers/gpu/drm/omapdrm/dss/base.c
index 050ca7eafac5..556e0f9026be 100644
--- a/drivers/gpu/drm/omapdrm/dss/base.c
+++ b/drivers/gpu/drm/omapdrm/dss/base.c
@@ -139,21 +139,13 @@ static bool omapdss_device_is_connected(struct omap_dss_device *dssdev)
}
int omapdss_device_connect(struct dss_device *dss,
- struct omap_dss_device *src,
struct omap_dss_device *dst)
{
- dev_dbg(&dss->pdev->dev, "connect(%s, %s)\n",
- src ? dev_name(src->dev) : "NULL",
+ dev_dbg(&dss->pdev->dev, "connect(%s)\n",
dst ? dev_name(dst->dev) : "NULL");
- if (!dst) {
- /*
- * The destination is NULL when the source is connected to a
- * bridge instead of a DSS device. Stop here, we will attach
- * the bridge later when we will have a DRM encoder.
- */
- return src && src->bridge ? 0 : -EINVAL;
- }
+ if (!dst)
+ return -EINVAL;
if (omapdss_device_is_connected(dst))
return -EBUSY;
@@ -163,19 +155,14 @@ int omapdss_device_connect(struct dss_device *dss,
return 0;
}
-void omapdss_device_disconnect(struct omap_dss_device *src,
+void omapdss_device_disconnect(struct dss_device *dss,
struct omap_dss_device *dst)
{
- struct dss_device *dss = src ? src->dss : dst->dss;
-
- dev_dbg(&dss->pdev->dev, "disconnect(%s, %s)\n",
- src ? dev_name(src->dev) : "NULL",
+ dev_dbg(&dss->pdev->dev, "disconnect(%s)\n",
dst ? dev_name(dst->dev) : "NULL");
- if (!dst) {
- WARN_ON(!src->bridge);
+ if (WARN_ON(!dst))
return;
- }
if (!dst->id && !omapdss_device_is_connected(dst)) {
WARN_ON(1);
diff --git a/drivers/gpu/drm/omapdrm/dss/omapdss.h b/drivers/gpu/drm/omapdrm/dss/omapdss.h
index 040d5a3e33d6..4c22c09c93d5 100644
--- a/drivers/gpu/drm/omapdrm/dss/omapdss.h
+++ b/drivers/gpu/drm/omapdrm/dss/omapdss.h
@@ -242,9 +242,8 @@ struct omap_dss_device *omapdss_device_get(struct omap_dss_device *dssdev);
void omapdss_device_put(struct omap_dss_device *dssdev);
struct omap_dss_device *omapdss_find_device_by_node(struct device_node *node);
int omapdss_device_connect(struct dss_device *dss,
- struct omap_dss_device *src,
struct omap_dss_device *dst);
-void omapdss_device_disconnect(struct omap_dss_device *src,
+void omapdss_device_disconnect(struct dss_device *dss,
struct omap_dss_device *dst);
int omap_dss_get_num_overlay_managers(void);
diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
index 21996b713d1c..13790d3ac3b6 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.c
+++ b/drivers/gpu/drm/omapdrm/omap_drv.c
@@ -307,7 +307,7 @@ static void omap_disconnect_pipelines(struct drm_device *ddev)
for (i = 0; i < priv->num_pipes; i++) {
struct omap_drm_pipeline *pipe = &priv->pipes[i];
- omapdss_device_disconnect(NULL, pipe->output);
+ omapdss_device_disconnect(priv->dss, pipe->output);
omapdss_device_put(pipe->output);
pipe->output = NULL;
@@ -325,7 +325,7 @@ static int omap_connect_pipelines(struct drm_device *ddev)
int r;
for_each_dss_output(output) {
- r = omapdss_device_connect(priv->dss, NULL, output);
+ r = omapdss_device_connect(priv->dss, output);
if (r == -EPROBE_DEFER) {
omapdss_device_put(output);
return r;
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index c48fa531ca32..68117eed702b 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -1395,8 +1395,6 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
omap_obj = to_omap_bo(obj);
- mutex_lock(&omap_obj->lock);
-
omap_obj->sgt = sgt;
if (sgt->orig_nents == 1) {
@@ -1411,21 +1409,17 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
if (!pages) {
omap_gem_free_object(obj);
- obj = ERR_PTR(-ENOMEM);
- goto done;
+ return ERR_PTR(-ENOMEM);
}
omap_obj->pages = pages;
ret = drm_prime_sg_to_page_array(sgt, pages, npages);
if (ret) {
omap_gem_free_object(obj);
- obj = ERR_PTR(-ENOMEM);
- goto done;
+ return ERR_PTR(-ENOMEM);
}
}
-done:
- mutex_unlock(&omap_obj->lock);
return obj;
}
diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
index c067ff550692..164c4690caca 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
@@ -157,7 +157,6 @@ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev)
struct panfrost_model {
const char *name;
u32 id;
- u32 id_mask;
u64 features;
u64 issues;
struct {
diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
index 4aca09cab4b8..7ea76fdd714a 100644
--- a/drivers/gpu/drm/radeon/atombios_encoders.c
+++ b/drivers/gpu/drm/radeon/atombios_encoders.c
@@ -2178,7 +2178,7 @@ int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
void
radeon_atom_encoder_init(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_encoder *encoder;
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 10be30366c2b..341441b24183 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -7585,7 +7585,7 @@ int cik_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[0]) {
- drm_handle_vblank(rdev->ddev, 0);
+ drm_handle_vblank(rdev_to_drm(rdev), 0);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -7615,7 +7615,7 @@ int cik_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[1]) {
- drm_handle_vblank(rdev->ddev, 1);
+ drm_handle_vblank(rdev_to_drm(rdev), 1);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -7645,7 +7645,7 @@ int cik_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[2]) {
- drm_handle_vblank(rdev->ddev, 2);
+ drm_handle_vblank(rdev_to_drm(rdev), 2);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -7675,7 +7675,7 @@ int cik_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[3]) {
- drm_handle_vblank(rdev->ddev, 3);
+ drm_handle_vblank(rdev_to_drm(rdev), 3);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -7705,7 +7705,7 @@ int cik_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[4]) {
- drm_handle_vblank(rdev->ddev, 4);
+ drm_handle_vblank(rdev_to_drm(rdev), 4);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -7735,7 +7735,7 @@ int cik_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[5]) {
- drm_handle_vblank(rdev->ddev, 5);
+ drm_handle_vblank(rdev_to_drm(rdev), 5);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -8581,7 +8581,7 @@ int cik_init(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* Fence driver */
radeon_fence_driver_init(rdev);
diff --git a/drivers/gpu/drm/radeon/dce6_afmt.c b/drivers/gpu/drm/radeon/dce6_afmt.c
index 4a1d5447eac1..4419a0e85f69 100644
--- a/drivers/gpu/drm/radeon/dce6_afmt.c
+++ b/drivers/gpu/drm/radeon/dce6_afmt.c
@@ -90,7 +90,7 @@ struct r600_audio_pin *dce6_audio_get_pin(struct radeon_device *rdev)
pin = &rdev->audio.pin[i];
pin_count = 0;
- list_for_each_entry(encoder, &rdev->ddev->mode_config.encoder_list, head) {
+ list_for_each_entry(encoder, &rdev_to_drm(rdev)->mode_config.encoder_list, head) {
if (radeon_encoder_is_digital(encoder)) {
radeon_encoder = to_radeon_encoder(encoder);
dig = radeon_encoder->enc_priv;
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index f0ae087be914..a7f9fc2b5239 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1672,7 +1672,7 @@ void evergreen_pm_misc(struct radeon_device *rdev)
*/
void evergreen_pm_prepare(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 tmp;
@@ -1697,7 +1697,7 @@ void evergreen_pm_prepare(struct radeon_device *rdev)
*/
void evergreen_pm_finish(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 tmp;
@@ -1762,7 +1762,7 @@ void evergreen_hpd_set_polarity(struct radeon_device *rdev,
*/
void evergreen_hpd_init(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned enabled = 0;
u32 tmp = DC_HPDx_CONNECTION_TIMER(0x9c4) |
@@ -1803,7 +1803,7 @@ void evergreen_hpd_init(struct radeon_device *rdev)
*/
void evergreen_hpd_fini(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned disabled = 0;
@@ -4756,7 +4756,7 @@ int evergreen_irq_process(struct radeon_device *rdev)
event_name = "vblank";
if (rdev->irq.crtc_vblank_int[crtc_idx]) {
- drm_handle_vblank(rdev->ddev, crtc_idx);
+ drm_handle_vblank(rdev_to_drm(rdev), crtc_idx);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -5214,7 +5214,7 @@ int evergreen_init(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* Fence driver */
radeon_fence_driver_init(rdev);
/* initialize AGP */
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index 3e48cbb522a1..4cd89fd6e9a2 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -2373,7 +2373,7 @@ int cayman_init(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* Fence driver */
radeon_fence_driver_init(rdev);
/* initialize memory controller */
diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index b63b6b4e9b28..54cbfac3605f 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -458,7 +458,7 @@ void r100_pm_misc(struct radeon_device *rdev)
*/
void r100_pm_prepare(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 tmp;
@@ -489,7 +489,7 @@ void r100_pm_prepare(struct radeon_device *rdev)
*/
void r100_pm_finish(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 tmp;
@@ -602,7 +602,7 @@ void r100_hpd_set_polarity(struct radeon_device *rdev,
*/
void r100_hpd_init(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned enable = 0;
@@ -625,7 +625,7 @@ void r100_hpd_init(struct radeon_device *rdev)
*/
void r100_hpd_fini(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned disable = 0;
@@ -797,7 +797,7 @@ int r100_irq_process(struct radeon_device *rdev)
/* Vertical blank interrupts */
if (status & RADEON_CRTC_VBLANK_STAT) {
if (rdev->irq.crtc_vblank_int[0]) {
- drm_handle_vblank(rdev->ddev, 0);
+ drm_handle_vblank(rdev_to_drm(rdev), 0);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -806,7 +806,7 @@ int r100_irq_process(struct radeon_device *rdev)
}
if (status & RADEON_CRTC2_VBLANK_STAT) {
if (rdev->irq.crtc_vblank_int[1]) {
- drm_handle_vblank(rdev->ddev, 1);
+ drm_handle_vblank(rdev_to_drm(rdev), 1);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -1490,7 +1490,7 @@ int r100_cs_packet_parse_vline(struct radeon_cs_parser *p)
header = radeon_get_ib_value(p, h_idx);
crtc_id = radeon_get_ib_value(p, h_idx + 5);
reg = R100_CP_PACKET0_GET_REG(header);
- crtc = drm_crtc_find(p->rdev->ddev, p->filp, crtc_id);
+ crtc = drm_crtc_find(rdev_to_drm(p->rdev), p->filp, crtc_id);
if (!crtc) {
DRM_ERROR("cannot find crtc %d\n", crtc_id);
return -ENOENT;
@@ -3078,7 +3078,7 @@ DEFINE_SHOW_ATTRIBUTE(r100_debugfs_mc_info);
void r100_debugfs_rbbm_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("r100_rbbm_info", 0444, root, rdev,
&r100_debugfs_rbbm_info_fops);
@@ -3088,7 +3088,7 @@ void r100_debugfs_rbbm_init(struct radeon_device *rdev)
void r100_debugfs_cp_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("r100_cp_ring_info", 0444, root, rdev,
&r100_debugfs_cp_ring_info_fops);
@@ -3100,7 +3100,7 @@ void r100_debugfs_cp_init(struct radeon_device *rdev)
void r100_debugfs_mc_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("r100_mc_info", 0444, root, rdev,
&r100_debugfs_mc_info_fops);
@@ -3966,7 +3966,7 @@ int r100_resume(struct radeon_device *rdev)
RREG32(R_0007C0_CP_STAT));
}
/* post */
- radeon_combios_asic_init(rdev->ddev);
+ radeon_combios_asic_init(rdev_to_drm(rdev));
/* Resume clock after posting */
r100_clock_startup(rdev);
/* Initialize surface registers */
@@ -4075,7 +4075,7 @@ int r100_init(struct radeon_device *rdev)
/* Set asic errata */
r100_errata(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize AGP */
if (rdev->flags & RADEON_IS_AGP) {
r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index 25201b9a5aae..430a4263ccf7 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -615,7 +615,7 @@ DEFINE_SHOW_ATTRIBUTE(rv370_debugfs_pcie_gart_info);
static void rv370_debugfs_pcie_gart_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("rv370_pcie_gart_info", 0444, root, rdev,
&rv370_debugfs_pcie_gart_info_fops);
@@ -1451,7 +1451,7 @@ int r300_resume(struct radeon_device *rdev)
RREG32(R_0007C0_CP_STAT));
}
/* post */
- radeon_combios_asic_init(rdev->ddev);
+ radeon_combios_asic_init(rdev_to_drm(rdev));
/* Resume clock after posting */
r300_clock_startup(rdev);
/* Initialize surface registers */
@@ -1537,7 +1537,7 @@ int r300_init(struct radeon_device *rdev)
/* Set asic errata */
r300_errata(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize AGP */
if (rdev->flags & RADEON_IS_AGP) {
r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/r420.c b/drivers/gpu/drm/radeon/r420.c
index eae8a6389f5e..b3a747a8f17d 100644
--- a/drivers/gpu/drm/radeon/r420.c
+++ b/drivers/gpu/drm/radeon/r420.c
@@ -321,7 +321,7 @@ int r420_resume(struct radeon_device *rdev)
if (rdev->is_atom_bios) {
atom_asic_init(rdev->mode_info.atom_context);
} else {
- radeon_combios_asic_init(rdev->ddev);
+ radeon_combios_asic_init(rdev_to_drm(rdev));
}
/* Resume clock after posting */
r420_clock_resume(rdev);
@@ -413,7 +413,7 @@ int r420_init(struct radeon_device *rdev)
return -EINVAL;
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize AGP */
if (rdev->flags & RADEON_IS_AGP) {
r = radeon_agp_init(rdev);
@@ -492,7 +492,7 @@ DEFINE_SHOW_ATTRIBUTE(r420_debugfs_pipes_info);
void r420_debugfs_pipes_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("r420_pipes_info", 0444, root, rdev,
&r420_debugfs_pipes_info_fops);
diff --git a/drivers/gpu/drm/radeon/r520.c b/drivers/gpu/drm/radeon/r520.c
index 6cbcaa845192..08e127b3249a 100644
--- a/drivers/gpu/drm/radeon/r520.c
+++ b/drivers/gpu/drm/radeon/r520.c
@@ -287,7 +287,7 @@ int r520_init(struct radeon_device *rdev)
atom_asic_init(rdev->mode_info.atom_context);
}
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize AGP */
if (rdev->flags & RADEON_IS_AGP) {
r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index a17b95eec65f..98d075c540e5 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -950,7 +950,7 @@ void r600_hpd_set_polarity(struct radeon_device *rdev,
void r600_hpd_init(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned enable = 0;
@@ -1017,7 +1017,7 @@ void r600_hpd_init(struct radeon_device *rdev)
void r600_hpd_fini(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned disable = 0;
@@ -3280,7 +3280,7 @@ int r600_init(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* Fence driver */
radeon_fence_driver_init(rdev);
if (rdev->flags & RADEON_IS_AGP) {
@@ -4136,7 +4136,7 @@ int r600_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[0]) {
- drm_handle_vblank(rdev->ddev, 0);
+ drm_handle_vblank(rdev_to_drm(rdev), 0);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -4166,7 +4166,7 @@ int r600_irq_process(struct radeon_device *rdev)
DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n");
if (rdev->irq.crtc_vblank_int[1]) {
- drm_handle_vblank(rdev->ddev, 1);
+ drm_handle_vblank(rdev_to_drm(rdev), 1);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -4358,7 +4358,7 @@ DEFINE_SHOW_ATTRIBUTE(r600_debugfs_mc_info);
static void r600_debugfs_mc_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("r600_mc_info", 0444, root, rdev,
&r600_debugfs_mc_info_fops);
diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
index 6cf54a747749..1b2d31c4d77c 100644
--- a/drivers/gpu/drm/radeon/r600_cs.c
+++ b/drivers/gpu/drm/radeon/r600_cs.c
@@ -884,7 +884,7 @@ int r600_cs_common_vline_parse(struct radeon_cs_parser *p,
crtc_id = radeon_get_ib_value(p, h_idx + 2 + 7 + 1);
reg = R600_CP_PACKET0_GET_REG(header);
- crtc = drm_crtc_find(p->rdev->ddev, p->filp, crtc_id);
+ crtc = drm_crtc_find(rdev_to_drm(p->rdev), p->filp, crtc_id);
if (!crtc) {
DRM_ERROR("cannot find crtc %d\n", crtc_id);
return -ENOENT;
diff --git a/drivers/gpu/drm/radeon/r600_dpm.c b/drivers/gpu/drm/radeon/r600_dpm.c
index 9d2bcb9551e6..157107cf1bfb 100644
--- a/drivers/gpu/drm/radeon/r600_dpm.c
+++ b/drivers/gpu/drm/radeon/r600_dpm.c
@@ -155,7 +155,7 @@ void r600_dpm_print_ps_status(struct radeon_device *rdev,
u32 r600_dpm_get_vblank_time(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 vblank_in_pixels;
@@ -182,7 +182,7 @@ u32 r600_dpm_get_vblank_time(struct radeon_device *rdev)
u32 r600_dpm_get_vrefresh(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 vrefresh = 0;
diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
index f3551ebaa2f0..661f374f5f27 100644
--- a/drivers/gpu/drm/radeon/r600_hdmi.c
+++ b/drivers/gpu/drm/radeon/r600_hdmi.c
@@ -116,7 +116,7 @@ void r600_audio_update_hdmi(struct work_struct *work)
{
struct radeon_device *rdev = container_of(work, struct radeon_device,
audio_work);
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct r600_audio_pin audio_status = r600_audio_status(rdev);
struct drm_encoder *encoder;
bool changed = false;
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 426a49851e34..e0a02b357ce7 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -2478,6 +2478,11 @@ void r100_io_wreg(struct radeon_device *rdev, u32 reg, u32 v);
u32 cik_mm_rdoorbell(struct radeon_device *rdev, u32 index);
void cik_mm_wdoorbell(struct radeon_device *rdev, u32 index, u32 v);
+static inline struct drm_device *rdev_to_drm(struct radeon_device *rdev)
+{
+ return rdev->ddev;
+}
+
/*
* Cast helper
*/
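
Most of the radeon churn in this release mechanically converts rdev->ddev dereferences to the rdev_to_drm() helper introduced here; routing every access through a static-inline accessor lets the way the DRM device is stored be changed later in a single place. A minimal sketch of the idiom, with hypothetical types:

struct example_drm_device { int dummy; };

struct example_device {
	struct example_drm_device *ddev;	/* today a plain pointer */
};

/*
 * Callers use the accessor rather than touching ->ddev directly, so the
 * representation (pointer vs. embedded structure) can change in one spot.
 */
static inline struct example_drm_device *
example_to_drm(struct example_device *edev)
{
	return edev->ddev;
}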
diff --git a/drivers/gpu/drm/radeon/radeon_acpi.c b/drivers/gpu/drm/radeon/radeon_acpi.c
index 603a78e41ba5..22ce61bdfc06 100644
--- a/drivers/gpu/drm/radeon/radeon_acpi.c
+++ b/drivers/gpu/drm/radeon/radeon_acpi.c
@@ -405,11 +405,11 @@ static int radeon_atif_handler(struct radeon_device *rdev,
if (req.pending & ATIF_DGPU_DISPLAY_EVENT) {
if ((rdev->flags & RADEON_IS_PX) &&
radeon_atpx_dgpu_req_power_for_displays()) {
- pm_runtime_get_sync(rdev->ddev->dev);
+ pm_runtime_get_sync(rdev_to_drm(rdev)->dev);
/* Just fire off a uevent and let userspace tell us what to do */
- drm_helper_hpd_irq_event(rdev->ddev);
- pm_runtime_mark_last_busy(rdev->ddev->dev);
- pm_runtime_put_autosuspend(rdev->ddev->dev);
+ drm_helper_hpd_irq_event(rdev_to_drm(rdev));
+ pm_runtime_mark_last_busy(rdev_to_drm(rdev)->dev);
+ pm_runtime_put_autosuspend(rdev_to_drm(rdev)->dev);
}
}
/* TODO: check other events */
@@ -736,7 +736,7 @@ int radeon_acpi_init(struct radeon_device *rdev)
struct radeon_encoder *target = NULL;
/* Find the encoder controlling the brightness */
- list_for_each_entry(tmp, &rdev->ddev->mode_config.encoder_list,
+ list_for_each_entry(tmp, &rdev_to_drm(rdev)->mode_config.encoder_list,
head) {
struct radeon_encoder *enc = to_radeon_encoder(tmp);
diff --git a/drivers/gpu/drm/radeon/radeon_agp.c b/drivers/gpu/drm/radeon/radeon_agp.c
index a3d749e350f9..89d7b0e9e79f 100644
--- a/drivers/gpu/drm/radeon/radeon_agp.c
+++ b/drivers/gpu/drm/radeon/radeon_agp.c
@@ -161,7 +161,7 @@ struct radeon_agp_head *radeon_agp_head_init(struct drm_device *dev)
static int radeon_agp_head_acquire(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct pci_dev *pdev = to_pci_dev(dev->dev);
if (!rdev->agp)
diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
index 53c7273eb6a5..c025ce6eb316 100644
--- a/drivers/gpu/drm/radeon/radeon_atombios.c
+++ b/drivers/gpu/drm/radeon/radeon_atombios.c
@@ -186,7 +186,7 @@ void radeon_atombios_i2c_init(struct radeon_device *rdev)
if (i2c.valid) {
sprintf(stmp, "0x%x", i2c.i2c_id);
- rdev->i2c_bus[i] = radeon_i2c_create(rdev->ddev, &i2c, stmp);
+ rdev->i2c_bus[i] = radeon_i2c_create(rdev_to_drm(rdev), &i2c, stmp);
}
gpio = (ATOM_GPIO_I2C_ASSIGMENT *)
((u8 *)gpio + sizeof(ATOM_GPIO_I2C_ASSIGMENT));
diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
index d6ccaf24ee0c..fc22fe709b9c 100644
--- a/drivers/gpu/drm/radeon/radeon_audio.c
+++ b/drivers/gpu/drm/radeon/radeon_audio.c
@@ -195,7 +195,7 @@ static void radeon_audio_enable(struct radeon_device *rdev,
return;
if (rdev->mode_info.mode_config_initialized) {
- list_for_each_entry(encoder, &rdev->ddev->mode_config.encoder_list, head) {
+ list_for_each_entry(encoder, &rdev_to_drm(rdev)->mode_config.encoder_list, head) {
if (radeon_encoder_is_digital(encoder)) {
radeon_encoder = to_radeon_encoder(encoder);
dig = radeon_encoder->enc_priv;
@@ -758,16 +758,20 @@ static int radeon_audio_component_get_eld(struct device *kdev, int port,
if (!rdev->audio.enabled || !rdev->mode_info.mode_config_initialized)
return 0;
- list_for_each_entry(encoder, &rdev->ddev->mode_config.encoder_list, head) {
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ const struct drm_connector_helper_funcs *connector_funcs =
+ connector->helper_private;
+ encoder = connector_funcs->best_encoder(connector);
+
+ if (!encoder)
+ continue;
+
if (!radeon_encoder_is_digital(encoder))
continue;
radeon_encoder = to_radeon_encoder(encoder);
dig = radeon_encoder->enc_priv;
if (!dig->pin || dig->pin->id != port)
continue;
- connector = radeon_get_connector_for_encoder(encoder);
- if (!connector)
- continue;
*enabled = true;
ret = drm_eld_size(connector->eld);
memcpy(buf, connector->eld, min(max_bytes, ret));
diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c
index 2620efc7c675..a30f36d098a8 100644
--- a/drivers/gpu/drm/radeon/radeon_combios.c
+++ b/drivers/gpu/drm/radeon/radeon_combios.c
@@ -371,7 +371,7 @@ bool radeon_combios_check_hardcoded_edid(struct radeon_device *rdev)
int edid_info, size;
struct edid *edid;
unsigned char *raw;
- edid_info = combios_get_table_offset(rdev->ddev, COMBIOS_HARDCODED_EDID_TABLE);
+ edid_info = combios_get_table_offset(rdev_to_drm(rdev), COMBIOS_HARDCODED_EDID_TABLE);
if (!edid_info)
return false;
@@ -641,7 +641,7 @@ static struct radeon_i2c_bus_rec combios_setup_i2c_bus(struct radeon_device *rde
static struct radeon_i2c_bus_rec radeon_combios_get_i2c_info_from_table(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct radeon_i2c_bus_rec i2c;
u16 offset;
u8 id, blocks, clk, data;
@@ -669,7 +669,7 @@ static struct radeon_i2c_bus_rec radeon_combios_get_i2c_info_from_table(struct r
void radeon_combios_i2c_init(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct radeon_i2c_bus_rec i2c;
/* actual hw pads
@@ -811,7 +811,7 @@ bool radeon_combios_get_clock_info(struct drm_device *dev)
bool radeon_combios_sideport_present(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
u16 igp_info;
/* sideport is AMD only */
@@ -914,7 +914,7 @@ struct radeon_encoder_primary_dac *radeon_combios_get_primary_dac_info(struct
enum radeon_tv_std
radeon_combios_get_tv_info(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
uint16_t tv_info;
enum radeon_tv_std tv_std = TV_STD_NTSC;
@@ -2636,7 +2636,7 @@ static const char *thermal_controller_names[] = {
void radeon_combios_get_power_modes(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
u16 offset, misc, misc2 = 0;
u8 rev, tmp;
int state_index = 0;
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index afbb3a80c0c6..32851632643d 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -760,7 +760,7 @@ bool radeon_boot_test_post_card(struct radeon_device *rdev)
if (rdev->is_atom_bios)
atom_asic_init(rdev->mode_info.atom_context);
else
- radeon_combios_asic_init(rdev->ddev);
+ radeon_combios_asic_init(rdev_to_drm(rdev));
return true;
} else {
dev_err(rdev->dev, "Card not posted and no BIOS - ignoring\n");
@@ -980,7 +980,7 @@ int radeon_atombios_init(struct radeon_device *rdev)
return -ENOMEM;
rdev->mode_info.atom_card_info = atom_card_info;
- atom_card_info->dev = rdev->ddev;
+ atom_card_info->dev = rdev_to_drm(rdev);
atom_card_info->reg_read = cail_reg_read;
atom_card_info->reg_write = cail_reg_write;
/* needed for iio ops */
@@ -1005,7 +1005,7 @@ int radeon_atombios_init(struct radeon_device *rdev)
mutex_init(&rdev->mode_info.atom_context->mutex);
mutex_init(&rdev->mode_info.atom_context->scratch_mutex);
- radeon_atom_initialize_bios_scratch_regs(rdev->ddev);
+ radeon_atom_initialize_bios_scratch_regs(rdev_to_drm(rdev));
atom_allocate_fb_scratch(rdev->mode_info.atom_context);
return 0;
}
@@ -1049,7 +1049,7 @@ void radeon_atombios_fini(struct radeon_device *rdev)
*/
int radeon_combios_init(struct radeon_device *rdev)
{
- radeon_combios_initialize_bios_scratch_regs(rdev->ddev);
+ radeon_combios_initialize_bios_scratch_regs(rdev_to_drm(rdev));
return 0;
}
@@ -1847,7 +1847,7 @@ int radeon_gpu_reset(struct radeon_device *rdev)
downgrade_write(&rdev->exclusive_lock);
- drm_helper_resume_force_mode(rdev->ddev);
+ drm_helper_resume_force_mode(rdev_to_drm(rdev));
/* set the power state here in case we are a PX system or headless */
if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled)
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index 5f1d24d3120c..8a8ffc5fc804 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -302,13 +302,13 @@ void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id)
if ((radeon_use_pflipirq == 2) && ASIC_IS_DCE4(rdev))
return;
- spin_lock_irqsave(&rdev->ddev->event_lock, flags);
+ spin_lock_irqsave(&rdev_to_drm(rdev)->event_lock, flags);
if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) {
DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != "
"RADEON_FLIP_SUBMITTED(%d)\n",
radeon_crtc->flip_status,
RADEON_FLIP_SUBMITTED);
- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
+ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
return;
}
@@ -334,7 +334,7 @@ void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id)
*/
if (update_pending &&
(DRM_SCANOUTPOS_VALID &
- radeon_get_crtc_scanoutpos(rdev->ddev, crtc_id,
+ radeon_get_crtc_scanoutpos(rdev_to_drm(rdev), crtc_id,
GET_DISTANCE_TO_VBLANKSTART,
&vpos, &hpos, NULL, NULL,
&rdev->mode_info.crtcs[crtc_id]->base.hwmode)) &&
@@ -347,7 +347,7 @@ void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id)
*/
update_pending = 0;
}
- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
+ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
if (!update_pending)
radeon_crtc_handle_flip(rdev, crtc_id);
}
@@ -370,14 +370,14 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id)
if (radeon_crtc == NULL)
return;
- spin_lock_irqsave(&rdev->ddev->event_lock, flags);
+ spin_lock_irqsave(&rdev_to_drm(rdev)->event_lock, flags);
work = radeon_crtc->flip_work;
if (radeon_crtc->flip_status != RADEON_FLIP_SUBMITTED) {
DRM_DEBUG_DRIVER("radeon_crtc->flip_status = %d != "
"RADEON_FLIP_SUBMITTED(%d)\n",
radeon_crtc->flip_status,
RADEON_FLIP_SUBMITTED);
- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
+ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
return;
}
@@ -389,7 +389,7 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id)
if (work->event)
drm_crtc_send_vblank_event(&radeon_crtc->base, work->event);
- spin_unlock_irqrestore(&rdev->ddev->event_lock, flags);
+ spin_unlock_irqrestore(&rdev_to_drm(rdev)->event_lock, flags);
drm_crtc_vblank_put(&radeon_crtc->base);
radeon_irq_kms_pflip_irq_put(rdev, work->crtc_id);
@@ -408,7 +408,7 @@ static void radeon_flip_work_func(struct work_struct *__work)
struct radeon_flip_work *work =
container_of(__work, struct radeon_flip_work, flip_work);
struct radeon_device *rdev = work->rdev;
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct radeon_crtc *radeon_crtc = rdev->mode_info.crtcs[work->crtc_id];
struct drm_crtc *crtc = &radeon_crtc->base;
@@ -1401,7 +1401,7 @@ static int radeon_modeset_create_props(struct radeon_device *rdev)
if (rdev->is_atom_bios) {
rdev->mode_info.coherent_mode_property =
- drm_property_create_range(rdev->ddev, 0 , "coherent", 0, 1);
+ drm_property_create_range(rdev_to_drm(rdev), 0, "coherent", 0, 1);
if (!rdev->mode_info.coherent_mode_property)
return -ENOMEM;
}
@@ -1409,57 +1409,57 @@ static int radeon_modeset_create_props(struct radeon_device *rdev)
if (!ASIC_IS_AVIVO(rdev)) {
sz = ARRAY_SIZE(radeon_tmds_pll_enum_list);
rdev->mode_info.tmds_pll_property =
- drm_property_create_enum(rdev->ddev, 0,
+ drm_property_create_enum(rdev_to_drm(rdev), 0,
"tmds_pll",
radeon_tmds_pll_enum_list, sz);
}
rdev->mode_info.load_detect_property =
- drm_property_create_range(rdev->ddev, 0, "load detection", 0, 1);
+ drm_property_create_range(rdev_to_drm(rdev), 0, "load detection", 0, 1);
if (!rdev->mode_info.load_detect_property)
return -ENOMEM;
- drm_mode_create_scaling_mode_property(rdev->ddev);
+ drm_mode_create_scaling_mode_property(rdev_to_drm(rdev));
sz = ARRAY_SIZE(radeon_tv_std_enum_list);
rdev->mode_info.tv_std_property =
- drm_property_create_enum(rdev->ddev, 0,
+ drm_property_create_enum(rdev_to_drm(rdev), 0,
"tv standard",
radeon_tv_std_enum_list, sz);
sz = ARRAY_SIZE(radeon_underscan_enum_list);
rdev->mode_info.underscan_property =
- drm_property_create_enum(rdev->ddev, 0,
+ drm_property_create_enum(rdev_to_drm(rdev), 0,
"underscan",
radeon_underscan_enum_list, sz);
rdev->mode_info.underscan_hborder_property =
- drm_property_create_range(rdev->ddev, 0,
+ drm_property_create_range(rdev_to_drm(rdev), 0,
"underscan hborder", 0, 128);
if (!rdev->mode_info.underscan_hborder_property)
return -ENOMEM;
rdev->mode_info.underscan_vborder_property =
- drm_property_create_range(rdev->ddev, 0,
+ drm_property_create_range(rdev_to_drm(rdev), 0,
"underscan vborder", 0, 128);
if (!rdev->mode_info.underscan_vborder_property)
return -ENOMEM;
sz = ARRAY_SIZE(radeon_audio_enum_list);
rdev->mode_info.audio_property =
- drm_property_create_enum(rdev->ddev, 0,
+ drm_property_create_enum(rdev_to_drm(rdev), 0,
"audio",
radeon_audio_enum_list, sz);
sz = ARRAY_SIZE(radeon_dither_enum_list);
rdev->mode_info.dither_property =
- drm_property_create_enum(rdev->ddev, 0,
+ drm_property_create_enum(rdev_to_drm(rdev), 0,
"dither",
radeon_dither_enum_list, sz);
sz = ARRAY_SIZE(radeon_output_csc_enum_list);
rdev->mode_info.output_csc_property =
- drm_property_create_enum(rdev->ddev, 0,
+ drm_property_create_enum(rdev_to_drm(rdev), 0,
"output_csc",
radeon_output_csc_enum_list, sz);
@@ -1578,29 +1578,29 @@ int radeon_modeset_init(struct radeon_device *rdev)
int i;
int ret;
- drm_mode_config_init(rdev->ddev);
+ drm_mode_config_init(rdev_to_drm(rdev));
rdev->mode_info.mode_config_initialized = true;
- rdev->ddev->mode_config.funcs = &radeon_mode_funcs;
+ rdev_to_drm(rdev)->mode_config.funcs = &radeon_mode_funcs;
if (radeon_use_pflipirq == 2 && rdev->family >= CHIP_R600)
- rdev->ddev->mode_config.async_page_flip = true;
+ rdev_to_drm(rdev)->mode_config.async_page_flip = true;
if (ASIC_IS_DCE5(rdev)) {
- rdev->ddev->mode_config.max_width = 16384;
- rdev->ddev->mode_config.max_height = 16384;
+ rdev_to_drm(rdev)->mode_config.max_width = 16384;
+ rdev_to_drm(rdev)->mode_config.max_height = 16384;
} else if (ASIC_IS_AVIVO(rdev)) {
- rdev->ddev->mode_config.max_width = 8192;
- rdev->ddev->mode_config.max_height = 8192;
+ rdev_to_drm(rdev)->mode_config.max_width = 8192;
+ rdev_to_drm(rdev)->mode_config.max_height = 8192;
} else {
- rdev->ddev->mode_config.max_width = 4096;
- rdev->ddev->mode_config.max_height = 4096;
+ rdev_to_drm(rdev)->mode_config.max_width = 4096;
+ rdev_to_drm(rdev)->mode_config.max_height = 4096;
}
- rdev->ddev->mode_config.preferred_depth = 24;
- rdev->ddev->mode_config.prefer_shadow = 1;
+ rdev_to_drm(rdev)->mode_config.preferred_depth = 24;
+ rdev_to_drm(rdev)->mode_config.prefer_shadow = 1;
- rdev->ddev->mode_config.fb_modifiers_not_supported = true;
+ rdev_to_drm(rdev)->mode_config.fb_modifiers_not_supported = true;
ret = radeon_modeset_create_props(rdev);
if (ret) {
@@ -1618,11 +1618,11 @@ int radeon_modeset_init(struct radeon_device *rdev)
/* allocate crtcs */
for (i = 0; i < rdev->num_crtc; i++) {
- radeon_crtc_init(rdev->ddev, i);
+ radeon_crtc_init(rdev_to_drm(rdev), i);
}
/* okay we should have all the bios connectors */
- ret = radeon_setup_enc_conn(rdev->ddev);
+ ret = radeon_setup_enc_conn(rdev_to_drm(rdev));
if (!ret) {
return ret;
}
@@ -1639,7 +1639,7 @@ int radeon_modeset_init(struct radeon_device *rdev)
/* setup afmt */
radeon_afmt_init(rdev);
- drm_kms_helper_poll_init(rdev->ddev);
+ drm_kms_helper_poll_init(rdev_to_drm(rdev));
/* do pm late init */
ret = radeon_pm_late_init(rdev);
@@ -1650,11 +1650,11 @@ int radeon_modeset_init(struct radeon_device *rdev)
void radeon_modeset_fini(struct radeon_device *rdev)
{
if (rdev->mode_info.mode_config_initialized) {
- drm_kms_helper_poll_fini(rdev->ddev);
+ drm_kms_helper_poll_fini(rdev_to_drm(rdev));
radeon_hpd_fini(rdev);
- drm_helper_force_disable_all(rdev->ddev);
+ drm_helper_force_disable_all(rdev_to_drm(rdev));
radeon_afmt_fini(rdev);
- drm_mode_config_cleanup(rdev->ddev);
+ drm_mode_config_cleanup(rdev_to_drm(rdev));
rdev->mode_info.mode_config_initialized = false;
}
diff --git a/drivers/gpu/drm/radeon/radeon_fbdev.c b/drivers/gpu/drm/radeon/radeon_fbdev.c
index 02bf25759059..fb70de29545c 100644
--- a/drivers/gpu/drm/radeon/radeon_fbdev.c
+++ b/drivers/gpu/drm/radeon/radeon_fbdev.c
@@ -67,7 +67,7 @@ static int radeon_fbdev_create_pinned_object(struct drm_fb_helper *fb_helper,
int height = mode_cmd->height;
u32 cpp;
- info = drm_get_format_info(rdev->ddev, mode_cmd);
+ info = drm_get_format_info(rdev_to_drm(rdev), mode_cmd);
cpp = info->cpp[0];
/* need to align pitch with crtc limits */
@@ -148,15 +148,15 @@ static int radeon_fbdev_fb_open(struct fb_info *info, int user)
struct radeon_device *rdev = fb_helper->dev->dev_private;
int ret;
- ret = pm_runtime_get_sync(rdev->ddev->dev);
+ ret = pm_runtime_get_sync(rdev_to_drm(rdev)->dev);
if (ret < 0 && ret != -EACCES)
goto err_pm_runtime_mark_last_busy;
return 0;
err_pm_runtime_mark_last_busy:
- pm_runtime_mark_last_busy(rdev->ddev->dev);
- pm_runtime_put_autosuspend(rdev->ddev->dev);
+ pm_runtime_mark_last_busy(rdev_to_drm(rdev)->dev);
+ pm_runtime_put_autosuspend(rdev_to_drm(rdev)->dev);
return ret;
}
@@ -165,8 +165,8 @@ static int radeon_fbdev_fb_release(struct fb_info *info, int user)
struct drm_fb_helper *fb_helper = info->par;
struct radeon_device *rdev = fb_helper->dev->dev_private;
- pm_runtime_mark_last_busy(rdev->ddev->dev);
- pm_runtime_put_autosuspend(rdev->ddev->dev);
+ pm_runtime_mark_last_busy(rdev_to_drm(rdev)->dev);
+ pm_runtime_put_autosuspend(rdev_to_drm(rdev)->dev);
return 0;
}
@@ -236,7 +236,7 @@ static int radeon_fbdev_fb_helper_fb_probe(struct drm_fb_helper *fb_helper,
ret = -ENOMEM;
goto err_radeon_fbdev_destroy_pinned_object;
}
- ret = radeon_framebuffer_init(rdev->ddev, fb, &mode_cmd, gobj);
+ ret = radeon_framebuffer_init(rdev_to_drm(rdev), fb, &mode_cmd, gobj);
if (ret) {
DRM_ERROR("failed to initialize framebuffer %d\n", ret);
goto err_kfree;
@@ -374,12 +374,12 @@ void radeon_fbdev_setup(struct radeon_device *rdev)
fb_helper = kzalloc(sizeof(*fb_helper), GFP_KERNEL);
if (!fb_helper)
return;
- drm_fb_helper_prepare(rdev->ddev, fb_helper, bpp_sel, &radeon_fbdev_fb_helper_funcs);
+ drm_fb_helper_prepare(rdev_to_drm(rdev), fb_helper, bpp_sel, &radeon_fbdev_fb_helper_funcs);
- ret = drm_client_init(rdev->ddev, &fb_helper->client, "radeon-fbdev",
+ ret = drm_client_init(rdev_to_drm(rdev), &fb_helper->client, "radeon-fbdev",
&radeon_fbdev_client_funcs);
if (ret) {
- drm_err(rdev->ddev, "Failed to register client: %d\n", ret);
+ drm_err(rdev_to_drm(rdev), "Failed to register client: %d\n", ret);
goto err_drm_client_init;
}
@@ -394,13 +394,13 @@ void radeon_fbdev_setup(struct radeon_device *rdev)
void radeon_fbdev_set_suspend(struct radeon_device *rdev, int state)
{
- if (rdev->ddev->fb_helper)
- drm_fb_helper_set_suspend(rdev->ddev->fb_helper, state);
+ if (rdev_to_drm(rdev)->fb_helper)
+ drm_fb_helper_set_suspend(rdev_to_drm(rdev)->fb_helper, state);
}
bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj)
{
- struct drm_fb_helper *fb_helper = rdev->ddev->fb_helper;
+ struct drm_fb_helper *fb_helper = rdev_to_drm(rdev)->fb_helper;
struct drm_gem_object *gobj;
if (!fb_helper)
diff --git a/drivers/gpu/drm/radeon/radeon_fence.c b/drivers/gpu/drm/radeon/radeon_fence.c
index 2749dde5838f..6d5e828fa39e 100644
--- a/drivers/gpu/drm/radeon/radeon_fence.c
+++ b/drivers/gpu/drm/radeon/radeon_fence.c
@@ -151,7 +151,7 @@ int radeon_fence_emit(struct radeon_device *rdev,
rdev->fence_context + ring,
seq);
radeon_fence_ring_emit(rdev, ring, *fence);
- trace_radeon_fence_emit(rdev->ddev, ring, (*fence)->seq);
+ trace_radeon_fence_emit(rdev_to_drm(rdev), ring, (*fence)->seq);
radeon_fence_schedule_check(rdev, ring);
return 0;
}
@@ -492,7 +492,7 @@ static long radeon_fence_wait_seq_timeout(struct radeon_device *rdev,
if (!target_seq[i])
continue;
- trace_radeon_fence_wait_begin(rdev->ddev, i, target_seq[i]);
+ trace_radeon_fence_wait_begin(rdev_to_drm(rdev), i, target_seq[i]);
radeon_irq_kms_sw_irq_get(rdev, i);
}
@@ -514,7 +514,7 @@ static long radeon_fence_wait_seq_timeout(struct radeon_device *rdev,
continue;
radeon_irq_kms_sw_irq_put(rdev, i);
- trace_radeon_fence_wait_end(rdev->ddev, i, target_seq[i]);
+ trace_radeon_fence_wait_end(rdev_to_drm(rdev), i, target_seq[i]);
}
return r;
@@ -1004,7 +1004,7 @@ DEFINE_DEBUGFS_ATTRIBUTE(radeon_debugfs_gpu_reset_fops,
void radeon_debugfs_fence_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("radeon_gpu_reset", 0444, root, rdev,
&radeon_debugfs_gpu_reset_fops);
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 27225d1fe8d2..96934fee7e94 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -898,7 +898,7 @@ DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_gem_info);
void radeon_gem_debugfs_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("radeon_gem_info", 0444, root, rdev,
&radeon_debugfs_gem_info_fops);
diff --git a/drivers/gpu/drm/radeon/radeon_i2c.c b/drivers/gpu/drm/radeon/radeon_i2c.c
index 314d066e68e9..e7b2e9370729 100644
--- a/drivers/gpu/drm/radeon/radeon_i2c.c
+++ b/drivers/gpu/drm/radeon/radeon_i2c.c
@@ -1012,7 +1012,7 @@ void radeon_i2c_add(struct radeon_device *rdev,
struct radeon_i2c_bus_rec *rec,
const char *name)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
int i;
for (i = 0; i < RADEON_MAX_I2C_BUS; i++) {
diff --git a/drivers/gpu/drm/radeon/radeon_ib.c b/drivers/gpu/drm/radeon/radeon_ib.c
index fb9ecf5dbe2b..560ce90f4eb1 100644
--- a/drivers/gpu/drm/radeon/radeon_ib.c
+++ b/drivers/gpu/drm/radeon/radeon_ib.c
@@ -307,7 +307,7 @@ DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_sa_info);
static void radeon_debugfs_sa_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("radeon_sa_info", 0444, root, rdev,
&radeon_debugfs_sa_info_fops);
diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
index c4dda908666c..9961251b44ba 100644
--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
@@ -80,7 +80,7 @@ static void radeon_hotplug_work_func(struct work_struct *work)
{
struct radeon_device *rdev = container_of(work, struct radeon_device,
hotplug_work.work);
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_mode_config *mode_config = &dev->mode_config;
struct drm_connector *connector;
@@ -101,7 +101,7 @@ static void radeon_dp_work_func(struct work_struct *work)
{
struct radeon_device *rdev = container_of(work, struct radeon_device,
dp_work);
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_mode_config *mode_config = &dev->mode_config;
struct drm_connector *connector;
@@ -197,7 +197,7 @@ static void radeon_driver_irq_uninstall_kms(struct drm_device *dev)
static int radeon_irq_install(struct radeon_device *rdev, int irq)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
int ret;
if (irq == IRQ_NOTCONNECTED)
@@ -218,7 +218,7 @@ static int radeon_irq_install(struct radeon_device *rdev, int irq)
static void radeon_irq_uninstall(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct pci_dev *pdev = to_pci_dev(dev->dev);
radeon_driver_irq_uninstall_kms(dev);
@@ -322,9 +322,9 @@ int radeon_irq_kms_init(struct radeon_device *rdev)
spin_lock_init(&rdev->irq.lock);
/* Disable vblank irqs aggressively for power-saving */
- rdev->ddev->vblank_disable_immediate = true;
+ rdev_to_drm(rdev)->vblank_disable_immediate = true;
- r = drm_vblank_init(rdev->ddev, rdev->num_crtc);
+ r = drm_vblank_init(rdev_to_drm(rdev), rdev->num_crtc);
if (r) {
return r;
}
diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 10c0fbd9d2b4..6f3c9a20a2de 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -152,7 +152,7 @@ int radeon_bo_create(struct radeon_device *rdev,
bo = kzalloc(sizeof(struct radeon_bo), GFP_KERNEL);
if (bo == NULL)
return -ENOMEM;
- drm_gem_private_object_init(rdev->ddev, &bo->tbo.base, size);
+ drm_gem_private_object_init(rdev_to_drm(rdev), &bo->tbo.base, size);
bo->rdev = rdev;
bo->surface_reg = -1;
INIT_LIST_HEAD(&bo->list);
diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
index b73fd9ab0252..66fe9fb92045 100644
--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -281,7 +281,7 @@ static void radeon_pm_set_clocks(struct radeon_device *rdev)
if (rdev->irq.installed) {
i = 0;
- drm_for_each_crtc(crtc, rdev->ddev) {
+ drm_for_each_crtc(crtc, rdev_to_drm(rdev)) {
if (rdev->pm.active_crtcs & (1 << i)) {
/* This can fail if a modeset is in progress */
if (drm_crtc_vblank_get(crtc) == 0)
@@ -298,7 +298,7 @@ static void radeon_pm_set_clocks(struct radeon_device *rdev)
if (rdev->irq.installed) {
i = 0;
- drm_for_each_crtc(crtc, rdev->ddev) {
+ drm_for_each_crtc(crtc, rdev_to_drm(rdev)) {
if (rdev->pm.req_vblank & (1 << i)) {
rdev->pm.req_vblank &= ~(1 << i);
drm_crtc_vblank_put(crtc);
@@ -670,7 +670,7 @@ static ssize_t radeon_hwmon_show_temp(struct device *dev,
char *buf)
{
struct radeon_device *rdev = dev_get_drvdata(dev);
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
int temp;
/* Can't get temperature when the card is off */
@@ -714,7 +714,7 @@ static ssize_t radeon_hwmon_show_sclk(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct radeon_device *rdev = dev_get_drvdata(dev);
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
u32 sclk = 0;
/* Can't get clock frequency when the card is off */
@@ -739,7 +739,7 @@ static ssize_t radeon_hwmon_show_vddc(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct radeon_device *rdev = dev_get_drvdata(dev);
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
u16 vddc = 0;
/* Can't get vddc when the card is off */
@@ -1691,7 +1691,7 @@ void radeon_pm_fini(struct radeon_device *rdev)
static void radeon_pm_compute_clocks_old(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
@@ -1764,7 +1764,7 @@ static void radeon_pm_compute_clocks_old(struct radeon_device *rdev)
static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
struct radeon_connector *radeon_connector;
@@ -1825,7 +1825,7 @@ static bool radeon_pm_in_vbl(struct radeon_device *rdev)
*/
for (crtc = 0; (crtc < rdev->num_crtc) && in_vbl; crtc++) {
if (rdev->pm.active_crtcs & (1 << crtc)) {
- vbl_status = radeon_get_crtc_scanoutpos(rdev->ddev,
+ vbl_status = radeon_get_crtc_scanoutpos(rdev_to_drm(rdev),
crtc,
USE_REAL_VBLANKSTART,
&vpos, &hpos, NULL, NULL,
@@ -1917,7 +1917,7 @@ static void radeon_dynpm_idle_work_handler(struct work_struct *work)
static int radeon_debugfs_pm_info_show(struct seq_file *m, void *unused)
{
struct radeon_device *rdev = m->private;
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
if ((rdev->flags & RADEON_IS_PX) &&
(ddev->switch_power_state != DRM_SWITCH_POWER_ON)) {
@@ -1954,7 +1954,7 @@ DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_pm_info);
static void radeon_debugfs_pm_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("radeon_pm_info", 0444, root, rdev,
&radeon_debugfs_pm_info_fops);
diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
index e6534fa9f1fb..8626171e9a6d 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -548,7 +548,7 @@ static void radeon_debugfs_ring_init(struct radeon_device *rdev, struct radeon_r
{
#if defined(CONFIG_DEBUG_FS)
const char *ring_name = radeon_debugfs_ring_idx_to_name(ring->idx);
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
if (ring_name)
debugfs_create_file(ring_name, 0444, root, ring,
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 4eb83ccc4906..065a09e7997c 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -689,8 +689,8 @@ int radeon_ttm_init(struct radeon_device *rdev)
/* No others user of address space so set it to 0 */
r = ttm_device_init(&rdev->mman.bdev, &radeon_bo_driver, rdev->dev,
- rdev->ddev->anon_inode->i_mapping,
- rdev->ddev->vma_offset_manager,
+ rdev_to_drm(rdev)->anon_inode->i_mapping,
+ rdev_to_drm(rdev)->vma_offset_manager,
rdev->need_swiotlb,
dma_addressing_limited(&rdev->pdev->dev));
if (r) {
@@ -897,7 +897,7 @@ static const struct file_operations radeon_ttm_gtt_fops = {
static void radeon_ttm_debugfs_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct drm_minor *minor = rdev->ddev->primary;
+ struct drm_minor *minor = rdev_to_drm(rdev)->primary;
struct dentry *root = minor->debugfs_root;
debugfs_create_file("radeon_vram", 0444, root, rdev,
diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c
index 922a29e58880..4f93fe468ec7 100644
--- a/drivers/gpu/drm/radeon/rs400.c
+++ b/drivers/gpu/drm/radeon/rs400.c
@@ -378,7 +378,7 @@ DEFINE_SHOW_ATTRIBUTE(rs400_debugfs_gart_info);
static void rs400_debugfs_pcie_gart_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("rs400_gart_info", 0444, root, rdev,
&rs400_debugfs_gart_info_fops);
@@ -473,7 +473,7 @@ int rs400_resume(struct radeon_device *rdev)
RREG32(R_0007C0_CP_STAT));
}
/* post */
- radeon_combios_asic_init(rdev->ddev);
+ radeon_combios_asic_init(rdev_to_drm(rdev));
/* Resume clock after posting */
r300_clock_startup(rdev);
/* Initialize surface registers */
@@ -551,7 +551,7 @@ int rs400_init(struct radeon_device *rdev)
return -EINVAL;
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize memory controller */
rs400_mc_init(rdev);
/* Fence driver */
diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c
index 8cf87a0a2b2a..fa4cc2a185dd 100644
--- a/drivers/gpu/drm/radeon/rs600.c
+++ b/drivers/gpu/drm/radeon/rs600.c
@@ -322,7 +322,7 @@ void rs600_pm_misc(struct radeon_device *rdev)
void rs600_pm_prepare(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 tmp;
@@ -340,7 +340,7 @@ void rs600_pm_prepare(struct radeon_device *rdev)
void rs600_pm_finish(struct radeon_device *rdev)
{
- struct drm_device *ddev = rdev->ddev;
+ struct drm_device *ddev = rdev_to_drm(rdev);
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
u32 tmp;
@@ -409,7 +409,7 @@ void rs600_hpd_set_polarity(struct radeon_device *rdev,
void rs600_hpd_init(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned enable = 0;
@@ -436,7 +436,7 @@ void rs600_hpd_init(struct radeon_device *rdev)
void rs600_hpd_fini(struct radeon_device *rdev)
{
- struct drm_device *dev = rdev->ddev;
+ struct drm_device *dev = rdev_to_drm(rdev);
struct drm_connector *connector;
unsigned disable = 0;
@@ -798,7 +798,7 @@ int rs600_irq_process(struct radeon_device *rdev)
/* Vertical blank interrupts */
if (G_007EDC_LB_D1_VBLANK_INTERRUPT(rdev->irq.stat_regs.r500.disp_int)) {
if (rdev->irq.crtc_vblank_int[0]) {
- drm_handle_vblank(rdev->ddev, 0);
+ drm_handle_vblank(rdev_to_drm(rdev), 0);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -807,7 +807,7 @@ int rs600_irq_process(struct radeon_device *rdev)
}
if (G_007EDC_LB_D2_VBLANK_INTERRUPT(rdev->irq.stat_regs.r500.disp_int)) {
if (rdev->irq.crtc_vblank_int[1]) {
- drm_handle_vblank(rdev->ddev, 1);
+ drm_handle_vblank(rdev_to_drm(rdev), 1);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -1134,7 +1134,7 @@ int rs600_init(struct radeon_device *rdev)
return -EINVAL;
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize memory controller */
rs600_mc_init(rdev);
r100_debugfs_rbbm_init(rdev);
diff --git a/drivers/gpu/drm/radeon/rs690.c b/drivers/gpu/drm/radeon/rs690.c
index 14fb0819b8c1..016eb4992803 100644
--- a/drivers/gpu/drm/radeon/rs690.c
+++ b/drivers/gpu/drm/radeon/rs690.c
@@ -845,7 +845,7 @@ int rs690_init(struct radeon_device *rdev)
return -EINVAL;
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize memory controller */
rs690_mc_init(rdev);
rv515_debugfs(rdev);
diff --git a/drivers/gpu/drm/radeon/rv515.c b/drivers/gpu/drm/radeon/rv515.c
index 76260fdfbaa7..19a26d85e029 100644
--- a/drivers/gpu/drm/radeon/rv515.c
+++ b/drivers/gpu/drm/radeon/rv515.c
@@ -255,7 +255,7 @@ DEFINE_SHOW_ATTRIBUTE(rv515_debugfs_ga_info);
void rv515_debugfs(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
- struct dentry *root = rdev->ddev->primary->debugfs_root;
+ struct dentry *root = rdev_to_drm(rdev)->primary->debugfs_root;
debugfs_create_file("rv515_pipes_info", 0444, root, rdev,
&rv515_debugfs_pipes_info_fops);
@@ -636,7 +636,7 @@ int rv515_init(struct radeon_device *rdev)
if (radeon_boot_test_post_card(rdev) == false)
return -EINVAL;
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* initialize AGP */
if (rdev->flags & RADEON_IS_AGP) {
r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
index 9ce12fa3c356..7d4b0bf59109 100644
--- a/drivers/gpu/drm/radeon/rv770.c
+++ b/drivers/gpu/drm/radeon/rv770.c
@@ -1935,7 +1935,7 @@ int rv770_init(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* Fence driver */
radeon_fence_driver_init(rdev);
/* initialize AGP */
diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
index 85e9cba49cec..312fe76944a9 100644
--- a/drivers/gpu/drm/radeon/si.c
+++ b/drivers/gpu/drm/radeon/si.c
@@ -6296,7 +6296,7 @@ int si_irq_process(struct radeon_device *rdev)
event_name = "vblank";
if (rdev->irq.crtc_vblank_int[crtc_idx]) {
- drm_handle_vblank(rdev->ddev, crtc_idx);
+ drm_handle_vblank(rdev_to_drm(rdev), crtc_idx);
rdev->pm.vblank_sync = true;
wake_up(&rdev->irq.vblank_queue);
}
@@ -6858,7 +6858,7 @@ int si_init(struct radeon_device *rdev)
/* Initialize surface registers */
radeon_surface_init(rdev);
/* Initialize clocks */
- radeon_get_clock_info(rdev->ddev);
+ radeon_get_clock_info(rdev_to_drm(rdev));
/* Fence driver */
radeon_fence_driver_init(rdev);
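
The radeon hunks above replace every direct rdev->ddev dereference with the rdev_to_drm() helper. The helper's definition is not part of this diff, so the sketch below only illustrates the general benefit of funnelling such accesses through one accessor: the relationship between the two structures can later be reworked in a single place. All names and the layout here are invented for the sketch and do not mirror the real radeon/drm structures.

/*
 * Minimal sketch of the accessor-helper pattern: callers never touch
 * the layout directly, so changing how the core device is reached
 * (pointer member, embedded member, ...) only means editing the one
 * helper.  Invented structures, not the real radeon code.
 */
#include <stdio.h>

struct core_dev {
	const char *name;
};

struct driver_dev {
	struct core_dev core;   /* embedded here; an older layout might keep a pointer */
	int chip_id;
};

/* The one place that knows how to get from the driver to the core device. */
static inline struct core_dev *drv_to_core(struct driver_dev *drv)
{
	return &drv->core;
}

static void print_name(struct driver_dev *drv)
{
	printf("core device: %s (chip %d)\n", drv_to_core(drv)->name, drv->chip_id);
}

int main(void)
{
	struct driver_dev drv = { .core = { .name = "demo" }, .chip_id = 42 };

	print_name(&drv);
	return 0;
}
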
diff --git a/drivers/gpu/drm/sti/sti_cursor.c b/drivers/gpu/drm/sti/sti_cursor.c
index db0a1eb53532..c59fcb4dca32 100644
--- a/drivers/gpu/drm/sti/sti_cursor.c
+++ b/drivers/gpu/drm/sti/sti_cursor.c
@@ -200,6 +200,9 @@ static int sti_cursor_atomic_check(struct drm_plane *drm_plane,
return 0;
crtc_state = drm_atomic_get_crtc_state(state, crtc);
+ if (IS_ERR(crtc_state))
+ return PTR_ERR(crtc_state);
+
mode = &crtc_state->mode;
dst_x = new_plane_state->crtc_x;
dst_y = new_plane_state->crtc_y;
diff --git a/drivers/gpu/drm/sti/sti_gdp.c b/drivers/gpu/drm/sti/sti_gdp.c
index 43c72c2604a0..f046f5f7ad25 100644
--- a/drivers/gpu/drm/sti/sti_gdp.c
+++ b/drivers/gpu/drm/sti/sti_gdp.c
@@ -638,6 +638,9 @@ static int sti_gdp_atomic_check(struct drm_plane *drm_plane,
mixer = to_sti_mixer(crtc);
crtc_state = drm_atomic_get_crtc_state(state, crtc);
+ if (IS_ERR(crtc_state))
+ return PTR_ERR(crtc_state);
+
mode = &crtc_state->mode;
dst_x = new_plane_state->crtc_x;
dst_y = new_plane_state->crtc_y;
diff --git a/drivers/gpu/drm/sti/sti_hqvdp.c b/drivers/gpu/drm/sti/sti_hqvdp.c
index 0fb48ac044d8..abab92df78bd 100644
--- a/drivers/gpu/drm/sti/sti_hqvdp.c
+++ b/drivers/gpu/drm/sti/sti_hqvdp.c
@@ -1037,6 +1037,9 @@ static int sti_hqvdp_atomic_check(struct drm_plane *drm_plane,
return 0;
crtc_state = drm_atomic_get_crtc_state(state, crtc);
+ if (IS_ERR(crtc_state))
+ return PTR_ERR(crtc_state);
+
mode = &crtc_state->mode;
dst_x = new_plane_state->crtc_x;
dst_y = new_plane_state->crtc_y;
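
The three sti hunks above add IS_ERR() checks on the pointer returned by drm_atomic_get_crtc_state(), which reports failure through the pointer value itself rather than through NULL. Below is a minimal user-space sketch of that error-pointer idiom; err_ptr()/is_err()/ptr_err() are simplified stand-ins for the kernel helpers and get_state() is an invented function, so this illustrates the pattern rather than the kernel implementation.

/*
 * Error-pointer sketch: the top MAX_ERRNO addresses are treated as
 * encoded -errno values, so a single return value can carry either a
 * valid object or an error code.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO 4095

static inline void *err_ptr(long error)      /* encode -errno in a pointer */
{
	return (void *)error;
}

static inline int is_err(const void *ptr)    /* top 4095 addresses are errors */
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static inline long ptr_err(const void *ptr)  /* decode the -errno value */
{
	return (long)ptr;
}

struct state { int mode; };

static struct state *get_state(int fail)
{
	if (fail)
		return err_ptr(-ENOMEM);     /* error path: nothing allocated */

	struct state *s = malloc(sizeof(*s));
	if (!s)
		return err_ptr(-ENOMEM);
	s->mode = 42;
	return s;
}

int main(void)
{
	struct state *s = get_state(1);
	if (is_err(s))                       /* check before dereferencing */
		printf("get_state failed: %ld\n", ptr_err(s));

	s = get_state(0);
	if (is_err(s))
		return (int)-ptr_err(s);
	printf("mode = %d\n", s->mode);
	free(s);
	return 0;
}
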
diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c
index 5a453532901f..166d4a88daee 100644
--- a/drivers/gpu/drm/v3d/v3d_mmu.c
+++ b/drivers/gpu/drm/v3d/v3d_mmu.c
@@ -34,32 +34,23 @@ static int v3d_mmu_flush_all(struct v3d_dev *v3d)
{
int ret;
- /* Make sure that another flush isn't already running when we
- * start this one.
- */
- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
- V3D_MMU_CTL_TLB_CLEARING), 100);
- if (ret)
- dev_err(v3d->drm.dev, "TLB clear wait idle pre-wait failed\n");
-
- V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
- V3D_MMU_CTL_TLB_CLEAR);
-
- V3D_WRITE(V3D_MMUC_CONTROL,
- V3D_MMUC_CONTROL_FLUSH |
+ V3D_WRITE(V3D_MMUC_CONTROL, V3D_MMUC_CONTROL_FLUSH |
V3D_MMUC_CONTROL_ENABLE);
- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
- V3D_MMU_CTL_TLB_CLEARING), 100);
+ ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
+ V3D_MMUC_CONTROL_FLUSHING), 100);
if (ret) {
- dev_err(v3d->drm.dev, "TLB clear wait idle failed\n");
+ dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
return ret;
}
- ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
- V3D_MMUC_CONTROL_FLUSHING), 100);
+ V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
+ V3D_MMU_CTL_TLB_CLEAR);
+
+ ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
+ V3D_MMU_CTL_TLB_CLEARING), 100);
if (ret)
- dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
+ dev_err(v3d->drm.dev, "MMU TLB clear wait idle failed\n");
return ret;
}
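
The reworked v3d_mmu_flush_all() above starts the MMUC flush, waits for its own FLUSHING bit to clear, and only then issues the TLB clear and waits for TLB_CLEARING, so each wait matches the operation just started. Below is a minimal user-space sketch of that poll-until-idle-with-timeout shape; the status "register" is a plain variable and the helper is only a rough stand-in for the driver's wait_for(), not real MMIO access.

/*
 * Poll a busy bit until it clears or a deadline passes.  A fake status
 * variable stands in for the hardware register.
 */
#include <stdio.h>
#include <time.h>

static volatile unsigned int fake_status;   /* stands in for a status register */
#define STATUS_BUSY 0x1u

static unsigned long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (unsigned long)ts.tv_sec * 1000ul + ts.tv_nsec / 1000000ul;
}

/* Returns 0 when the busy bit cleared, -1 on timeout. */
static int wait_until_idle(unsigned int busy_bit, unsigned long timeout_ms)
{
	unsigned long deadline = now_ms() + timeout_ms;

	while (fake_status & busy_bit) {
		if (now_ms() > deadline)
			return -1;
		/* a real driver would cpu_relax()/usleep_range() here */
	}
	return 0;
}

int main(void)
{
	fake_status = 0;                    /* operation already complete */
	printf("idle wait: %d\n", wait_until_idle(STATUS_BUSY, 100));

	fake_status = STATUS_BUSY;          /* never completes, so it times out */
	printf("timeout wait: %d\n", wait_until_idle(STATUS_BUSY, 10));
	return 0;
}
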
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index bf66499765fb..ac4ad95b3643 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -314,6 +314,7 @@ struct vc4_hvs {
struct platform_device *pdev;
void __iomem *regs;
u32 __iomem *dlist;
+ unsigned int dlist_mem_size;
struct clk *core_clk;
diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
index c6e986f71a26..d4487f4cb303 100644
--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
+++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
@@ -179,6 +179,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
if (!drm_dev_enter(drm, &idx))
return -ENODEV;
+ WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
+
drm_print_regset32(&p, &vc4_hdmi->hdmi_regset);
drm_print_regset32(&p, &vc4_hdmi->hd_regset);
drm_print_regset32(&p, &vc4_hdmi->cec_regset);
@@ -188,6 +190,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
drm_print_regset32(&p, &vc4_hdmi->ram_regset);
drm_print_regset32(&p, &vc4_hdmi->rm_regset);
+ pm_runtime_put(&vc4_hdmi->pdev->dev);
+
drm_dev_exit(idx);
return 0;
diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
index 04af672caacb..008352166579 100644
--- a/drivers/gpu/drm/vc4/vc4_hvs.c
+++ b/drivers/gpu/drm/vc4/vc4_hvs.c
@@ -110,7 +110,8 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct vc4_hvs *hvs = vc4->hvs;
struct drm_printer p = drm_seq_file_printer(m);
- unsigned int next_entry_start = 0;
+ unsigned int dlist_mem_size = hvs->dlist_mem_size;
+ unsigned int next_entry_start;
unsigned int i, j;
u32 dlist_word, dispstat;
@@ -124,8 +125,9 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
}
drm_printf(&p, "HVS chan %u:\n", i);
+ next_entry_start = 0;
- for (j = HVS_READ(SCALER_DISPLISTX(i)); j < 256; j++) {
+ for (j = HVS_READ(SCALER_DISPLISTX(i)); j < dlist_mem_size; j++) {
dlist_word = readl((u32 __iomem *)vc4->hvs->dlist + j);
drm_printf(&p, "dlist: %02d: 0x%08x\n", j,
dlist_word);
@@ -222,6 +224,9 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
if (!drm_dev_enter(drm, &idx))
return;
+ if (hvs->vc4->is_vc5)
+ return;
+
/* The LUT memory is laid out with each HVS channel in order,
* each of which takes 256 writes for R, 256 for G, then 256
* for B.
@@ -415,13 +420,11 @@ void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int chan)
if (!drm_dev_enter(drm, &idx))
return;
- if (HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE)
+ if (!(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE))
goto out;
- HVS_WRITE(SCALER_DISPCTRLX(chan),
- HVS_READ(SCALER_DISPCTRLX(chan)) | SCALER_DISPCTRLX_RESET);
- HVS_WRITE(SCALER_DISPCTRLX(chan),
- HVS_READ(SCALER_DISPCTRLX(chan)) & ~SCALER_DISPCTRLX_ENABLE);
+ HVS_WRITE(SCALER_DISPCTRLX(chan), SCALER_DISPCTRLX_RESET);
+ HVS_WRITE(SCALER_DISPCTRLX(chan), 0);
/* Once we leave, the scaler should be disabled and its fifo empty. */
WARN_ON_ONCE(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_RESET);
@@ -580,7 +583,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
}
if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)
- return;
+ goto exit;
if (debug_dump_regs) {
DRM_INFO("CRTC %d HVS before:\n", drm_crtc_index(crtc));
@@ -663,6 +666,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
vc4_hvs_dump_state(hvs);
}
+exit:
drm_dev_exit(idx);
}
@@ -800,9 +804,10 @@ struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, struct platform_device *pde
* our 16K), since we don't want to scramble the screen when
* transitioning from the firmware's boot setup to runtime.
*/
+ hvs->dlist_mem_size = (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END;
drm_mm_init(&hvs->dlist_mm,
HVS_BOOTLOADER_DLIST_END,
- (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END);
+ hvs->dlist_mem_size);
/* Set up the HVS LBM memory manager. We could have some more
* complicated data structure that allowed reuse of LBM areas
diff --git a/drivers/gpu/drm/vkms/vkms_output.c b/drivers/gpu/drm/vkms/vkms_output.c
index 5ce70dd946aa..24589b947dea 100644
--- a/drivers/gpu/drm/vkms/vkms_output.c
+++ b/drivers/gpu/drm/vkms/vkms_output.c
@@ -84,7 +84,7 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
DRM_MODE_CONNECTOR_VIRTUAL);
if (ret) {
DRM_ERROR("Failed to init connector\n");
- goto err_connector;
+ return ret;
}
drm_connector_helper_add(connector, &vkms_conn_helper_funcs);
@@ -119,8 +119,5 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
err_encoder:
drm_connector_cleanup(connector);
-err_connector:
- drm_crtc_cleanup(crtc);
-
return ret;
}
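
The vkms change above makes the connector-init failure path return directly instead of jumping to a label that also tore down the CRTC, keeping the unwind limited to what this function actually set up. A minimal sketch of that goto-unwind idiom follows, with invented setup_a/setup_b/setup_c stages rather than real DRM calls: each error label undoes only the steps that already succeeded, in reverse order.

#include <stdio.h>

static int  setup_a(void)        { puts("a up");   return 0; }
static void teardown_a(void)     { puts("a down"); }
static int  setup_b(int fail)    { if (fail) return -1; puts("b up"); return 0; }
static void teardown_b(void)     { puts("b down"); }
static int  setup_c(int fail)    { if (fail) return -1; puts("c up"); return 0; }

static int bring_up(int fail_b, int fail_c)
{
	int ret;

	ret = setup_a();
	if (ret)
		return ret;             /* nothing to undo yet */

	ret = setup_b(fail_b);
	if (ret)
		goto err_a;             /* only a succeeded so far */

	ret = setup_c(fail_c);
	if (ret)
		goto err_b;             /* undo b, then a */

	return 0;

err_b:
	teardown_b();
err_a:
	teardown_a();
	return ret;
}

int main(void)
{
	printf("fail in c -> %d\n\n", bring_up(0, 1));
	printf("fail in b -> %d\n",   bring_up(1, 0));
	return 0;
}
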
diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c
index 44d4a510ad7d..079bd97da4fa 100644
--- a/drivers/gpu/drm/xlnx/zynqmp_kms.c
+++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c
@@ -506,12 +506,12 @@ int zynqmp_dpsub_drm_init(struct zynqmp_dpsub *dpsub)
if (ret)
return ret;
- drm_kms_helper_poll_init(drm);
-
ret = zynqmp_dpsub_kms_init(dpsub);
if (ret < 0)
goto err_poll_fini;
+ drm_kms_helper_poll_init(drm);
+
/* Reset all components and register the DRM device. */
drm_mode_config_reset(drm);
@@ -533,7 +533,7 @@ void zynqmp_dpsub_drm_cleanup(struct zynqmp_dpsub *dpsub)
{
struct drm_device *drm = &dpsub->drm->dev;
- drm_dev_unregister(drm);
+ drm_dev_unplug(drm);
drm_atomic_helper_shutdown(drm);
drm_encoder_cleanup(&dpsub->drm->encoder);
drm_kms_helper_poll_fini(drm);
diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
index f33485d83d24..0fb210e40a41 100644
--- a/drivers/hid/hid-hyperv.c
+++ b/drivers/hid/hid-hyperv.c
@@ -422,6 +422,25 @@ static int mousevsc_hid_raw_request(struct hid_device *hid,
return 0;
}
+static int mousevsc_hid_probe(struct hid_device *hid_dev, const struct hid_device_id *id)
+{
+ int ret;
+
+ ret = hid_parse(hid_dev);
+ if (ret) {
+ hid_err(hid_dev, "parse failed\n");
+ return ret;
+ }
+
+ ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
+ if (ret) {
+ hid_err(hid_dev, "hw start failed\n");
+ return ret;
+ }
+
+ return 0;
+}
+
static const struct hid_ll_driver mousevsc_ll_driver = {
.parse = mousevsc_hid_parse,
.open = mousevsc_hid_open,
@@ -431,7 +450,16 @@ static const struct hid_ll_driver mousevsc_ll_driver = {
.raw_request = mousevsc_hid_raw_request,
};
-static struct hid_driver mousevsc_hid_driver;
+static const struct hid_device_id mousevsc_devices[] = {
+ { HID_DEVICE(BUS_VIRTUAL, HID_GROUP_ANY, 0x045E, 0x0621) },
+ { }
+};
+
+static struct hid_driver mousevsc_hid_driver = {
+ .name = "hid-hyperv",
+ .id_table = mousevsc_devices,
+ .probe = mousevsc_hid_probe,
+};
static int mousevsc_probe(struct hv_device *device,
const struct hv_vmbus_device_id *dev_id)
@@ -473,7 +501,6 @@ static int mousevsc_probe(struct hv_device *device,
}
hid_dev->ll_driver = &mousevsc_ll_driver;
- hid_dev->driver = &mousevsc_hid_driver;
hid_dev->bus = BUS_VIRTUAL;
hid_dev->vendor = input_dev->hid_dev_info.vendor;
hid_dev->product = input_dev->hid_dev_info.product;
@@ -488,20 +515,6 @@ static int mousevsc_probe(struct hv_device *device,
if (ret)
goto probe_err2;
-
- ret = hid_parse(hid_dev);
- if (ret) {
- hid_err(hid_dev, "parse failed\n");
- goto probe_err2;
- }
-
- ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
-
- if (ret) {
- hid_err(hid_dev, "hw start failed\n");
- goto probe_err2;
- }
-
device_init_wakeup(&device->device, true);
input_dev->connected = true;
@@ -579,12 +592,23 @@ static struct hv_driver mousevsc_drv = {
static int __init mousevsc_init(void)
{
- return vmbus_driver_register(&mousevsc_drv);
+ int ret;
+
+ ret = hid_register_driver(&mousevsc_hid_driver);
+ if (ret)
+ return ret;
+
+ ret = vmbus_driver_register(&mousevsc_drv);
+ if (ret)
+ hid_unregister_driver(&mousevsc_hid_driver);
+
+ return ret;
}
static void __exit mousevsc_exit(void)
{
vmbus_driver_unregister(&mousevsc_drv);
+ hid_unregister_driver(&mousevsc_hid_driver);
}
MODULE_LICENSE("GPL");
diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
index 18b5cd0234d2..33466c71c9da 100644
--- a/drivers/hid/wacom_wac.c
+++ b/drivers/hid/wacom_wac.c
@@ -1399,9 +1399,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
rotation -= 1800;
input_report_abs(pen_input, ABS_TILT_X,
- (char)frame[7]);
+ (signed char)frame[7]);
input_report_abs(pen_input, ABS_TILT_Y,
- (char)frame[8]);
+ (signed char)frame[8]);
input_report_abs(pen_input, ABS_Z, rotation);
input_report_abs(pen_input, ABS_WHEEL,
get_unaligned_le16(&frame[11]));
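
The wacom change above casts the raw tilt bytes through signed char instead of plain char, whose signedness is implementation-defined. A small standalone demo of the difference; the raw byte value is invented for the demo.

/*
 * On x86 plain char is signed, so both lines print -8; on arm and
 * arm64 plain char is unsigned, and the first line prints 248.
 * (signed char) always sign-extends, which is what a signed tilt
 * value needs.
 */
#include <stdio.h>

int main(void)
{
	unsigned char raw = 0xF8;         /* a tilt of -8 on the wire */

	int as_plain  = (char)raw;        /* implementation-defined sign */
	int as_signed = (signed char)raw; /* always -8 */

	printf("plain char : %d\n", as_plain);
	printf("signed char: %d\n", as_signed);
	return 0;
}
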
diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
index 8da7aa1614d7..16f6b7ba2a5d 100644
--- a/drivers/hwmon/nct6775-core.c
+++ b/drivers/hwmon/nct6775-core.c
@@ -2878,8 +2878,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr,
if (err < 0)
return err;
- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0,
- data->target_temp_mask);
+ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->target_temp_mask * 1000), 1000);
mutex_lock(&data->update_lock);
data->target_temp[nr] = val;
@@ -2959,7 +2958,7 @@ store_temp_tolerance(struct device *dev, struct device_attribute *attr,
return err;
/* Limit tolerance as needed */
- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, data->tolerance_mask);
+ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->tolerance_mask * 1000), 1000);
mutex_lock(&data->update_lock);
data->temp_tolerance[index][nr] = val;
@@ -3085,7 +3084,7 @@ store_weight_temp(struct device *dev, struct device_attribute *attr,
if (err < 0)
return err;
- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
+ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
mutex_lock(&data->update_lock);
data->weight_temp[index][nr] = val;
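
The three nct6775 changes above clamp the user-supplied millidegree value to the representable range before the rounding division, rather than after it, so an extreme sysfs input can no longer overflow inside the rounding step. A standalone demo of why the order matters follows; it uses unsigned arithmetic so the wraparound is well defined for the demo (in the driver the hazard is in signed arithmetic, which is undefined behaviour), and the limit values are invented.

#include <limits.h>
#include <stdio.h>

/* Same shape as the kernel's DIV_ROUND_CLOSEST() for non-negative values. */
static unsigned long round_closest(unsigned long v, unsigned long d)
{
	return (v + d / 2) / d;           /* the +d/2 is where huge inputs wrap */
}

static unsigned long clamp_ul(unsigned long v, unsigned long lo, unsigned long hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned long user_val = ULONG_MAX - 100; /* absurd sysfs input, millidegrees */
	unsigned long max_reg = 255;              /* register holds whole degrees */

	/* old order: divide first; the +500 wraps and the result collapses to 0 */
	unsigned long old_way = clamp_ul(round_closest(user_val, 1000), 0, max_reg);

	/* new order: clamp to the representable range first, then round */
	unsigned long new_way = round_closest(clamp_ul(user_val, 0, max_reg * 1000), 1000);

	printf("divide then clamp: %lu\n", old_way);   /* 0   */
	printf("clamp then divide: %lu\n", new_way);   /* 255 */
	return 0;
}
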
diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
index 728c07c42651..019c5982ba56 100644
--- a/drivers/hwmon/pmbus/pmbus_core.c
+++ b/drivers/hwmon/pmbus/pmbus_core.c
@@ -3199,7 +3199,17 @@ static int pmbus_regulator_notify(struct pmbus_data *data, int page, int event)
static int pmbus_write_smbalert_mask(struct i2c_client *client, u8 page, u8 reg, u8 val)
{
- return pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
+ int ret;
+
+ ret = _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
+
+ /*
+ * Clear fault systematically in case writing PMBUS_SMBALERT_MASK
+ * is not supported by the chip.
+ */
+ pmbus_clear_fault_page(client, page);
+
+ return ret;
}
static irqreturn_t pmbus_fault_handler(int irq, void *pdata)
diff --git a/drivers/hwmon/tps23861.c b/drivers/hwmon/tps23861.c
index d33ecbac00d6..cea34fb9ba58 100644
--- a/drivers/hwmon/tps23861.c
+++ b/drivers/hwmon/tps23861.c
@@ -132,7 +132,7 @@ static int tps23861_read_temp(struct tps23861_data *data, long *val)
if (err < 0)
return err;
- *val = (regval * TEMPERATURE_LSB) - 20000;
+ *val = ((long)regval * TEMPERATURE_LSB) - 20000;
return 0;
}
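
The tps23861 change above promotes the register value to long before multiplying and subtracting the 20000 offset, keeping the whole expression in signed arithmetic. Below is a small demo of the kind of wraparound an all-unsigned version of such an expression can produce on a 64-bit build; the LSB and raw reading are invented and are not the chip's real scaling.

#include <stdio.h>

int main(void)
{
	unsigned int regval = 10;        /* small raw reading */
	unsigned int lsb = 1000;         /* hypothetical millidegrees per LSB */

	/* unsigned math: 10000 - 20000 wraps to a huge positive value */
	long wrong = (regval * lsb) - 20000;

	/* signed math: goes properly negative */
	long right = ((long)regval * lsb) - 20000;

	printf("unsigned math: %ld\n", wrong);
	printf("signed math  : %ld\n", right);
	return 0;
}
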
diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
index 678b30e90492..5d4f04a3c6d3 100644
--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
+++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
@@ -99,6 +99,7 @@ struct lpi2c_imx_struct {
__u8 *rx_buf;
__u8 *tx_buf;
struct completion complete;
+ unsigned long rate_per;
unsigned int msglen;
unsigned int delivered;
unsigned int block_data;
@@ -207,9 +208,7 @@ static int lpi2c_imx_config(struct lpi2c_imx_struct *lpi2c_imx)
lpi2c_imx_set_mode(lpi2c_imx);
- clk_rate = clk_get_rate(lpi2c_imx->clks[0].clk);
- if (!clk_rate)
- return -EINVAL;
+ clk_rate = lpi2c_imx->rate_per;
if (lpi2c_imx->mode == HS || lpi2c_imx->mode == ULTRA_FAST)
filt = 0;
@@ -590,6 +589,11 @@ static int lpi2c_imx_probe(struct platform_device *pdev)
if (ret)
return ret;
+ lpi2c_imx->rate_per = clk_get_rate(lpi2c_imx->clks[0].clk);
+ if (!lpi2c_imx->rate_per)
+ return dev_err_probe(&pdev->dev, -EINVAL,
+ "can't get I2C peripheral clock rate\n");
+
pm_runtime_set_autosuspend_delay(&pdev->dev, I2C_PM_TIMEOUT);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_get_noresume(&pdev->dev);
diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
index 0e9ff5500a77..70d120dfb090 100644
--- a/drivers/i3c/master.c
+++ b/drivers/i3c/master.c
@@ -1293,7 +1293,7 @@ static void i3c_master_put_i3c_addrs(struct i3c_dev_desc *dev)
I3C_ADDR_SLOT_FREE);
if (dev->boardinfo && dev->boardinfo->init_dyn_addr)
- i3c_bus_set_addr_slot_status(&master->bus, dev->info.dyn_addr,
+ i3c_bus_set_addr_slot_status(&master->bus, dev->boardinfo->init_dyn_addr,
I3C_ADDR_SLOT_FREE);
}
diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
index f344f8733f83..dca266d9dd12 100644
--- a/drivers/i3c/master/svc-i3c-master.c
+++ b/drivers/i3c/master/svc-i3c-master.c
@@ -1684,8 +1684,8 @@ static int svc_i3c_master_probe(struct platform_device *pdev)
rpm_disable:
pm_runtime_dont_use_autosuspend(&pdev->dev);
pm_runtime_put_noidle(&pdev->dev);
- pm_runtime_set_suspended(&pdev->dev);
pm_runtime_disable(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
err_disable_clks:
svc_i3c_master_unprepare_clks(master);
diff --git a/drivers/iio/accel/kionix-kx022a.c b/drivers/iio/accel/kionix-kx022a.c
index 971fc60efef0..220465d9c713 100644
--- a/drivers/iio/accel/kionix-kx022a.c
+++ b/drivers/iio/accel/kionix-kx022a.c
@@ -475,7 +475,7 @@ static int kx022a_get_axis(struct kx022a_data *data,
if (ret)
return ret;
- *val = le16_to_cpu(data->buffer[0]);
+ *val = (s16)le16_to_cpu(data->buffer[0]);
return IIO_VAL_INT;
}
diff --git a/drivers/iio/adc/ad7780.c b/drivers/iio/adc/ad7780.c
index a813fe04787c..97b6774e7f58 100644
--- a/drivers/iio/adc/ad7780.c
+++ b/drivers/iio/adc/ad7780.c
@@ -152,7 +152,7 @@ static int ad7780_write_raw(struct iio_dev *indio_dev,
switch (m) {
case IIO_CHAN_INFO_SCALE:
- if (val != 0)
+ if (val != 0 || val2 == 0)
return -EINVAL;
vref = st->int_vref_mv * 1000000LL;
diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
index 9d6bf6d0927a..709ce2a50097 100644
--- a/drivers/iio/adc/ad7923.c
+++ b/drivers/iio/adc/ad7923.c
@@ -48,7 +48,7 @@
struct ad7923_state {
struct spi_device *spi;
- struct spi_transfer ring_xfer[5];
+ struct spi_transfer ring_xfer[9];
struct spi_transfer scan_single_xfer[2];
struct spi_message ring_msg;
struct spi_message scan_single_msg;
@@ -64,7 +64,7 @@ struct ad7923_state {
* Length = 8 channels + 4 extra for 8 byte timestamp
*/
__be16 rx_buf[12] __aligned(IIO_DMA_MINALIGN);
- __be16 tx_buf[4];
+ __be16 tx_buf[8];
};
struct ad7923_chip_info {
diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
index 5f131bc1a01e..291c0fc332c9 100644
--- a/drivers/iio/industrialio-gts-helper.c
+++ b/drivers/iio/industrialio-gts-helper.c
@@ -167,7 +167,7 @@ static int iio_gts_gain_cmp(const void *a, const void *b)
static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales)
{
- int ret, i, j, new_idx, time_idx;
+ int i, j, new_idx, time_idx, ret = 0;
int *all_gains;
size_t gain_bytes;
@@ -205,7 +205,7 @@ static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales)
memcpy(all_gains, gains[time_idx], gain_bytes);
new_idx = gts->num_hwgain;
- while (time_idx--) {
+ while (time_idx-- > 0) {
for (j = 0; j < gts->num_hwgain; j++) {
int candidate = gains[time_idx][j];
int chk;
diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
index 80e1c45485c9..079e30c522bb 100644
--- a/drivers/iio/inkern.c
+++ b/drivers/iio/inkern.c
@@ -277,7 +277,7 @@ struct iio_channel *fwnode_iio_channel_get_by_name(struct fwnode_handle *fwnode,
return ERR_PTR(-ENODEV);
}
- chan = __fwnode_iio_channel_get_by_name(fwnode, name);
+ chan = __fwnode_iio_channel_get_by_name(parent, name);
if (!IS_ERR(chan) || PTR_ERR(chan) != -ENODEV) {
fwnode_handle_put(parent);
return chan;
diff --git a/drivers/iio/light/al3010.c b/drivers/iio/light/al3010.c
index 8f0119f392b7..7d4053bfceea 100644
--- a/drivers/iio/light/al3010.c
+++ b/drivers/iio/light/al3010.c
@@ -87,7 +87,12 @@ static int al3010_init(struct al3010_data *data)
int ret;
ret = al3010_set_pwr(data->client, true);
+ if (ret < 0)
+ return ret;
+ ret = devm_add_action_or_reset(&data->client->dev,
+ al3010_set_pwr_off,
+ data);
if (ret < 0)
return ret;
@@ -190,12 +195,6 @@ static int al3010_probe(struct i2c_client *client)
return ret;
}
- ret = devm_add_action_or_reset(&client->dev,
- al3010_set_pwr_off,
- data);
- if (ret < 0)
- return ret;
-
return devm_iio_device_register(&client->dev, indio_dev);
}
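
The al3010 change above registers the power-off undo action inside al3010_init(), immediately after powering the sensor on, so any later init or probe failure still turns the device off. A minimal sketch of that register-the-undo-right-after-acquire pattern follows; the tiny cleanup stack stands in for devm_add_action_or_reset() and the power toggles are just prints.

#include <stdio.h>

typedef void (*cleanup_fn)(void);

static cleanup_fn cleanups[8];
static int n_cleanups;

static int add_cleanup(cleanup_fn fn)
{
	if (n_cleanups == 8) {
		fn();                    /* like devm: run the undo if it can't be recorded */
		return -1;
	}
	cleanups[n_cleanups++] = fn;
	return 0;
}

static void run_cleanups(void)
{
	while (n_cleanups)
		cleanups[--n_cleanups]();    /* reverse order */
}

static void power_off(void) { puts("power off"); }

static int probe(int fail_later)
{
	puts("power on");
	if (add_cleanup(power_off))          /* undo registered right away */
		return -1;

	if (fail_later) {                    /* any later probe step failing ... */
		run_cleanups();              /* ... still powers the device off */
		return -1;
	}
	puts("probe ok");
	return 0;
}

int main(void)
{
	probe(1);
	run_cleanups();                      /* nothing left to undo */
	probe(0);
	run_cleanups();                      /* normal teardown powers it off */
	return 0;
}
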
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index f20da108fb29..df5897260601 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -3559,7 +3559,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
wc->byte_len = orig_cqe->length;
wc->qp = &gsi_qp->ib_qp;
- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata));
+ wc->ex.imm_data = cpu_to_be32(orig_cqe->immdata);
wc->src_qp = orig_cqe->src_qp;
memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
@@ -3704,7 +3704,10 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
(unsigned long)(cqe->qp_handle),
struct bnxt_re_qp, qplib_qp);
wc->qp = &qp->ib_qp;
- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata));
+ if (cqe->flags & CQ_RES_RC_FLAGS_IMM)
+ wc->ex.imm_data = cpu_to_be32(cqe->immdata);
+ else
+ wc->ex.invalidate_rkey = cqe->invrkey;
wc->src_qp = cqe->src_qp;
memcpy(wc->smac, cqe->smac, ETH_ALEN);
wc->port_num = 1;
diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
index 56ddff96b508..5d4c49089a20 100644
--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
@@ -389,7 +389,7 @@ struct bnxt_qplib_cqe {
u16 cfa_meta;
u64 wr_id;
union {
- __le32 immdata;
+ u32 immdata;
u32 invrkey;
};
u64 qp_handle;
diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
index ff177466de9b..9b91731a6207 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
@@ -180,8 +180,8 @@ static void free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_CQC,
hr_cq->cqn);
if (ret)
- dev_err(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", ret,
- hr_cq->cqn);
+ dev_err_ratelimited(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n",
+ ret, hr_cq->cqn);
xa_erase_irq(&cq_table->array, hr_cq->cqn);
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index cd593d651e4c..21ef00fdb656 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -1236,6 +1236,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn);
void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type);
void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp);
void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type);
+void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn);
void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type);
void hns_roce_handle_device_err(struct hns_roce_dev *hr_dev);
int hns_roce_init(struct hns_roce_dev *hr_dev);
diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
index 7ebf80504fd1..0ab514c49d5e 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
@@ -337,7 +337,7 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_mhop *mhop,
struct hns_roce_hem_index *index)
{
- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct device *dev = hr_dev->dev;
unsigned long mhop_obj = obj;
u32 l0_idx, l1_idx, l2_idx;
u32 chunk_ba_num;
@@ -368,14 +368,14 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
index->buf = l0_idx;
break;
default:
- ibdev_err(ibdev, "table %u not support mhop.hop_num = %u!\n",
- table->type, mhop->hop_num);
+ dev_err(dev, "table %u not support mhop.hop_num = %u!\n",
+ table->type, mhop->hop_num);
return -EINVAL;
}
if (unlikely(index->buf >= table->num_hem)) {
- ibdev_err(ibdev, "table %u exceed hem limt idx %llu, max %lu!\n",
- table->type, index->buf, table->num_hem);
+ dev_err(dev, "table %u exceed hem limt idx %llu, max %lu!\n",
+ table->type, index->buf, table->num_hem);
return -EINVAL;
}
@@ -487,14 +487,14 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_mhop *mhop,
struct hns_roce_hem_index *index)
{
- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct device *dev = hr_dev->dev;
u32 step_idx;
int ret = 0;
if (index->inited & HEM_INDEX_L0) {
ret = hr_dev->hw->set_hem(hr_dev, table, obj, 0);
if (ret) {
- ibdev_err(ibdev, "set HEM step 0 failed!\n");
+ dev_err(dev, "set HEM step 0 failed!\n");
goto out;
}
}
@@ -502,7 +502,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
if (index->inited & HEM_INDEX_L1) {
ret = hr_dev->hw->set_hem(hr_dev, table, obj, 1);
if (ret) {
- ibdev_err(ibdev, "set HEM step 1 failed!\n");
+ dev_err(dev, "set HEM step 1 failed!\n");
goto out;
}
}
@@ -514,7 +514,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
step_idx = mhop->hop_num;
ret = hr_dev->hw->set_hem(hr_dev, table, obj, step_idx);
if (ret)
- ibdev_err(ibdev, "set HEM step last failed!\n");
+ dev_err(dev, "set HEM step last failed!\n");
}
out:
return ret;
@@ -524,14 +524,14 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_table *table,
unsigned long obj)
{
- struct ib_device *ibdev = &hr_dev->ib_dev;
struct hns_roce_hem_index index = {};
struct hns_roce_hem_mhop mhop = {};
+ struct device *dev = hr_dev->dev;
int ret;
ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
if (ret) {
- ibdev_err(ibdev, "calc hem config failed!\n");
+ dev_err(dev, "calc hem config failed!\n");
return ret;
}
@@ -543,7 +543,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
ret = alloc_mhop_hem(hr_dev, table, &mhop, &index);
if (ret) {
- ibdev_err(ibdev, "alloc mhop hem failed!\n");
+ dev_err(dev, "alloc mhop hem failed!\n");
goto out;
}
@@ -551,7 +551,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
if (table->type < HEM_TYPE_MTT) {
ret = set_mhop_hem(hr_dev, table, obj, &mhop, &index);
if (ret) {
- ibdev_err(ibdev, "set HEM address to HW failed!\n");
+ dev_err(dev, "set HEM address to HW failed!\n");
goto err_alloc;
}
}
@@ -615,7 +615,7 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
struct hns_roce_hem_mhop *mhop,
struct hns_roce_hem_index *index)
{
- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct device *dev = hr_dev->dev;
u32 hop_num = mhop->hop_num;
u32 chunk_ba_num;
u32 step_idx;
@@ -645,21 +645,21 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
ret = hr_dev->hw->clear_hem(hr_dev, table, obj, step_idx);
if (ret)
- ibdev_warn(ibdev, "failed to clear hop%u HEM, ret = %d.\n",
- hop_num, ret);
+ dev_warn(dev, "failed to clear hop%u HEM, ret = %d.\n",
+ hop_num, ret);
if (index->inited & HEM_INDEX_L1) {
ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 1);
if (ret)
- ibdev_warn(ibdev, "failed to clear HEM step 1, ret = %d.\n",
- ret);
+ dev_warn(dev, "failed to clear HEM step 1, ret = %d.\n",
+ ret);
}
if (index->inited & HEM_INDEX_L0) {
ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 0);
if (ret)
- ibdev_warn(ibdev, "failed to clear HEM step 0, ret = %d.\n",
- ret);
+ dev_warn(dev, "failed to clear HEM step 0, ret = %d.\n",
+ ret);
}
}
}
@@ -669,14 +669,14 @@ static void hns_roce_table_mhop_put(struct hns_roce_dev *hr_dev,
unsigned long obj,
int check_refcount)
{
- struct ib_device *ibdev = &hr_dev->ib_dev;
struct hns_roce_hem_index index = {};
struct hns_roce_hem_mhop mhop = {};
+ struct device *dev = hr_dev->dev;
int ret;
ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
if (ret) {
- ibdev_err(ibdev, "calc hem config failed!\n");
+ dev_err(dev, "calc hem config failed!\n");
return;
}
@@ -712,8 +712,8 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
ret = hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
if (ret)
- dev_warn(dev, "failed to clear HEM base address, ret = %d.\n",
- ret);
+ dev_warn_ratelimited(dev, "failed to clear HEM base address, ret = %d.\n",
+ ret);
hns_roce_free_hem(hr_dev, table->hem[i]);
table->hem[i] = NULL;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index 8066750afab9..2824d390ec31 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -372,19 +372,12 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
static int check_send_valid(struct hns_roce_dev *hr_dev,
struct hns_roce_qp *hr_qp)
{
- struct ib_device *ibdev = &hr_dev->ib_dev;
-
if (unlikely(hr_qp->state == IB_QPS_RESET ||
hr_qp->state == IB_QPS_INIT ||
- hr_qp->state == IB_QPS_RTR)) {
- ibdev_err(ibdev, "failed to post WQE, QP state %u!\n",
- hr_qp->state);
+ hr_qp->state == IB_QPS_RTR))
return -EINVAL;
- } else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) {
- ibdev_err(ibdev, "failed to post WQE, dev state %d!\n",
- hr_dev->state);
+ else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN))
return -EIO;
- }
return 0;
}
@@ -585,7 +578,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
if (WARN_ON(ret))
return ret;
- hr_reg_write(rc_sq_wqe, RC_SEND_WQE_FENCE,
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO,
(wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SE,
@@ -2737,8 +2730,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT,
IB_QPS_INIT, NULL);
if (ret) {
- ibdev_err(ibdev, "failed to modify qp to init, ret = %d.\n",
- ret);
+ ibdev_err_ratelimited(ibdev, "failed to modify qp to init, ret = %d.\n",
+ ret);
return ret;
}
@@ -3384,8 +3377,8 @@ static int free_mr_post_send_lp_wqe(struct hns_roce_qp *hr_qp)
ret = hns_roce_v2_post_send(&hr_qp->ibqp, send_wr, &bad_wr);
if (ret) {
- ibdev_err(ibdev, "failed to post wqe for free mr, ret = %d.\n",
- ret);
+ ibdev_err_ratelimited(ibdev, "failed to post wqe for free mr, ret = %d.\n",
+ ret);
return ret;
}
@@ -3424,9 +3417,9 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
ret = free_mr_post_send_lp_wqe(hr_qp);
if (ret) {
- ibdev_err(ibdev,
- "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
- hr_qp->qpn, ret);
+ ibdev_err_ratelimited(ibdev,
+ "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
+ hr_qp->qpn, ret);
break;
}
@@ -3437,16 +3430,16 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
while (cqe_cnt) {
npolled = hns_roce_v2_poll_cq(&free_mr->rsv_cq->ib_cq, cqe_cnt, wc);
if (npolled < 0) {
- ibdev_err(ibdev,
- "failed to poll cqe for free mr, remain %d cqe.\n",
- cqe_cnt);
+ ibdev_err_ratelimited(ibdev,
+ "failed to poll cqe for free mr, remain %d cqe.\n",
+ cqe_cnt);
goto out;
}
if (time_after(jiffies, end)) {
- ibdev_err(ibdev,
- "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
- cqe_cnt);
+ ibdev_err_ratelimited(ibdev,
+ "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
+ cqe_cnt);
goto out;
}
cqe_cnt -= npolled;
@@ -4986,10 +4979,8 @@ static int hns_roce_v2_set_abs_fields(struct ib_qp *ibqp,
struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
int ret = 0;
- if (!check_qp_state(cur_state, new_state)) {
- ibdev_err(&hr_dev->ib_dev, "Illegal state for QP!\n");
+ if (!check_qp_state(cur_state, new_state))
return -EINVAL;
- }
if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
memset(qpc_mask, 0, hr_dev->caps.qpc_sz);
@@ -5251,7 +5242,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
/* SW pass context to HW */
ret = hns_roce_v2_qp_modify(hr_dev, context, qpc_mask, hr_qp);
if (ret) {
- ibdev_err(ibdev, "failed to modify QP, ret = %d.\n", ret);
+ ibdev_err_ratelimited(ibdev, "failed to modify QP, ret = %d.\n", ret);
goto out;
}
@@ -5341,7 +5332,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
ret = hns_roce_v2_query_qpc(hr_dev, hr_qp->qpn, &context);
if (ret) {
- ibdev_err(ibdev, "failed to query QPC, ret = %d.\n", ret);
+ ibdev_err_ratelimited(ibdev,
+ "failed to query QPC, ret = %d.\n",
+ ret);
ret = -EINVAL;
goto out;
}
@@ -5349,7 +5342,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
state = hr_reg_read(&context, QPC_QP_ST);
tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state);
if (tmp_qp_state == -1) {
- ibdev_err(ibdev, "Illegal ib_qp_state\n");
+ ibdev_err_ratelimited(ibdev, "Illegal ib_qp_state\n");
ret = -EINVAL;
goto out;
}
@@ -5442,9 +5435,9 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0,
hr_qp->state, IB_QPS_RESET, udata);
if (ret)
- ibdev_err(ibdev,
- "failed to modify QP to RST, ret = %d.\n",
- ret);
+ ibdev_err_ratelimited(ibdev,
+ "failed to modify QP to RST, ret = %d.\n",
+ ret);
}
send_cq = hr_qp->ibqp.send_cq ? to_hr_cq(hr_qp->ibqp.send_cq) : NULL;
@@ -5480,9 +5473,9 @@ int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
if (ret)
- ibdev_err(&hr_dev->ib_dev,
- "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
- hr_qp->qpn, ret);
+ ibdev_err_ratelimited(&hr_dev->ib_dev,
+ "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
+ hr_qp->qpn, ret);
hns_roce_qp_destroy(hr_dev, hr_qp, udata);
@@ -5755,9 +5748,9 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
HNS_ROCE_CMD_MODIFY_CQC, hr_cq->cqn);
hns_roce_free_cmd_mailbox(hr_dev, mailbox);
if (ret)
- ibdev_err(&hr_dev->ib_dev,
- "failed to process cmd when modifying CQ, ret = %d.\n",
- ret);
+ ibdev_err_ratelimited(&hr_dev->ib_dev,
+ "failed to process cmd when modifying CQ, ret = %d.\n",
+ ret);
return ret;
}
@@ -5777,9 +5770,9 @@ static int hns_roce_v2_query_cqc(struct hns_roce_dev *hr_dev, u32 cqn,
ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma,
HNS_ROCE_CMD_QUERY_CQC, cqn);
if (ret) {
- ibdev_err(&hr_dev->ib_dev,
- "failed to process cmd when querying CQ, ret = %d.\n",
- ret);
+ ibdev_err_ratelimited(&hr_dev->ib_dev,
+ "failed to process cmd when querying CQ, ret = %d.\n",
+ ret);
goto err_mailbox;
}
@@ -5820,11 +5813,10 @@ static int hns_roce_v2_query_mpt(struct hns_roce_dev *hr_dev, u32 key,
return ret;
}
-static void hns_roce_irq_work_handle(struct work_struct *work)
+static void dump_aeqe_log(struct hns_roce_work *irq_work)
{
- struct hns_roce_work *irq_work =
- container_of(work, struct hns_roce_work, work);
- struct ib_device *ibdev = &irq_work->hr_dev->ib_dev;
+ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
+ struct ib_device *ibdev = &hr_dev->ib_dev;
switch (irq_work->event_type) {
case HNS_ROCE_EVENT_TYPE_PATH_MIG:
@@ -5868,6 +5860,8 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
ibdev_warn(ibdev, "DB overflow.\n");
break;
+ case HNS_ROCE_EVENT_TYPE_MB:
+ break;
case HNS_ROCE_EVENT_TYPE_FLR:
ibdev_warn(ibdev, "function level reset.\n");
break;
@@ -5877,10 +5871,48 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
ibdev_err(ibdev, "invalid xrceth error.\n");
break;
+ default:
+ ibdev_info(ibdev, "Undefined event %d.\n",
+ irq_work->event_type);
+ break;
+ }
+}
+
+static void hns_roce_irq_work_handle(struct work_struct *work)
+{
+ struct hns_roce_work *irq_work =
+ container_of(work, struct hns_roce_work, work);
+ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
+ int event_type = irq_work->event_type;
+ u32 queue_num = irq_work->queue_num;
+
+ switch (event_type) {
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
+ case HNS_ROCE_EVENT_TYPE_COMM_EST:
+ case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
+ case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
+ hns_roce_qp_event(hr_dev, queue_num, event_type);
+ break;
+ case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
+ case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
+ hns_roce_srq_event(hr_dev, queue_num, event_type);
+ break;
+ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+ hns_roce_cq_event(hr_dev, queue_num, event_type);
+ break;
default:
break;
}
+ dump_aeqe_log(irq_work);
+
kfree(irq_work);
}
@@ -5940,14 +5972,14 @@ static struct hns_roce_aeqe *next_aeqe_sw_v2(struct hns_roce_eq *eq)
static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
struct hns_roce_eq *eq)
{
- struct device *dev = hr_dev->dev;
struct hns_roce_aeqe *aeqe = next_aeqe_sw_v2(eq);
irqreturn_t aeqe_found = IRQ_NONE;
+ int num_aeqes = 0;
int event_type;
u32 queue_num;
int sub_type;
- while (aeqe) {
+ while (aeqe && num_aeqes < HNS_AEQ_POLLING_BUDGET) {
/* Make sure we read AEQ entry after we have checked the
* ownership bit
*/
@@ -5958,25 +5990,12 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
queue_num = hr_reg_read(aeqe, AEQE_EVENT_QUEUE_NUM);
switch (event_type) {
- case HNS_ROCE_EVENT_TYPE_PATH_MIG:
- case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
- case HNS_ROCE_EVENT_TYPE_COMM_EST:
- case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
- case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
- hns_roce_qp_event(hr_dev, queue_num, event_type);
- break;
- case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
- case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
- hns_roce_srq_event(hr_dev, queue_num, event_type);
- break;
- case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
- case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
- hns_roce_cq_event(hr_dev, queue_num, event_type);
+ hns_roce_flush_cqe(hr_dev, queue_num);
break;
case HNS_ROCE_EVENT_TYPE_MB:
hns_roce_cmd_event(hr_dev,
@@ -5984,12 +6003,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
aeqe->event.cmd.status,
le64_to_cpu(aeqe->event.cmd.out_param));
break;
- case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
- case HNS_ROCE_EVENT_TYPE_FLR:
- break;
default:
- dev_err(dev, "unhandled event %d on EQ %d at idx %u.\n",
- event_type, eq->eqn, eq->cons_index);
break;
}
@@ -6001,6 +6015,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
hns_roce_v2_init_irq_work(hr_dev, eq, queue_num);
aeqe = next_aeqe_sw_v2(eq);
+ ++num_aeqes;
}
update_eq_db(eq);
@@ -6530,6 +6545,9 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
int ret;
int i;
+ if (hr_dev->caps.aeqe_depth < HNS_AEQ_POLLING_BUDGET)
+ return -EINVAL;
+
other_num = hr_dev->caps.num_other_vectors;
comp_num = hr_dev->caps.num_comp_vectors;
aeq_num = hr_dev->caps.num_aeq_vectors;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
index cd97cbee682a..b8e17721f6fd 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
@@ -85,6 +85,11 @@
#define HNS_ROCE_V2_TABLE_CHUNK_SIZE (1 << 18)
+/* budget must be smaller than aeqe_depth to guarantee that we update
+ * the ci before we polled all the entries in the EQ.
+ */
+#define HNS_AEQ_POLLING_BUDGET 64
+
enum {
HNS_ROCE_CMD_FLAG_IN = BIT(0),
HNS_ROCE_CMD_FLAG_OUT = BIT(1),
@@ -894,6 +899,7 @@ struct hns_roce_v2_rc_send_wqe {
#define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7)
#define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8)
#define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9)
+#define RC_SEND_WQE_SO RC_SEND_WQE_FIELD_LOC(10, 10)
#define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11)
#define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12)
#define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15)
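
The HNS_AEQ_POLLING_BUDGET added above bounds how many asynchronous event queue entries one interrupt pass may consume, and its comment notes the budget must stay below the queue depth. A minimal sketch of such bounded-budget polling over a software ring follows; the array ring, sizes, and missing hardware doorbell are all simplifications, not the real AEQ.

#include <stdio.h>

#define RING_SIZE 16
#define BUDGET     4             /* must stay below RING_SIZE, like the AEQ budget */

static int ring[RING_SIZE];
static unsigned int prod, cons;  /* free-running producer/consumer indices */

static void produce(int v)
{
	ring[prod++ % RING_SIZE] = v;
}

/* Returns how many entries were handled this pass. */
static int poll_ring(void)
{
	int handled = 0;

	while (cons != prod && handled < BUDGET) {
		printf("handled event %d\n", ring[cons++ % RING_SIZE]);
		handled++;
	}
	/* a driver would write `cons` back to the hardware doorbell here */
	return handled;
}

int main(void)
{
	for (int i = 0; i < 10; i++)
		produce(i);

	/* three passes of at most BUDGET events each: 4 + 4 + 2 */
	printf("pass 1: %d\n", poll_ring());
	printf("pass 2: %d\n", poll_ring());
	printf("pass 3: %d\n", poll_ring());
	return 0;
}
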
diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index 980261969b0c..7f29a55d378f 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -130,8 +130,8 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr
key_to_hw_index(mr->key) &
(hr_dev->caps.num_mtpts - 1));
if (ret)
- ibdev_warn(ibdev, "failed to destroy mpt, ret = %d.\n",
- ret);
+ ibdev_warn_ratelimited(ibdev, "failed to destroy mpt, ret = %d.\n",
+ ret);
}
free_mr_pbl(hr_dev, mr);
@@ -415,15 +415,16 @@ static int hns_roce_set_page(struct ib_mr *ibmr, u64 addr)
}
int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
- unsigned int *sg_offset)
+ unsigned int *sg_offset_p)
{
+ unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
struct hns_roce_dev *hr_dev = to_hr_dev(ibmr->device);
struct ib_device *ibdev = &hr_dev->ib_dev;
struct hns_roce_mr *mr = to_hr_mr(ibmr);
struct hns_roce_mtr *mtr = &mr->pbl_mtr;
int ret, sg_num = 0;
- if (!IS_ALIGNED(*sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
+ if (!IS_ALIGNED(sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
ibmr->page_size < HNS_HW_PAGE_SIZE ||
ibmr->page_size > HNS_HW_MAX_PAGE_SIZE)
return sg_num;
@@ -434,7 +435,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
if (!mr->page_list)
return sg_num;
- sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
+ sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset_p, hns_roce_set_page);
if (sg_num < 1) {
ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, sg_num);
diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
index 04063cfacae5..88a4777d29f8 100644
--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
+++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -39,6 +39,25 @@
#include "hns_roce_device.h"
#include "hns_roce_hem.h"
+static struct hns_roce_qp *hns_roce_qp_lookup(struct hns_roce_dev *hr_dev,
+ u32 qpn)
+{
+ struct device *dev = hr_dev->dev;
+ struct hns_roce_qp *qp;
+ unsigned long flags;
+
+ xa_lock_irqsave(&hr_dev->qp_table_xa, flags);
+ qp = __hns_roce_qp_lookup(hr_dev, qpn);
+ if (qp)
+ refcount_inc(&qp->refcount);
+ xa_unlock_irqrestore(&hr_dev->qp_table_xa, flags);
+
+ if (!qp)
+ dev_warn(dev, "async event for bogus QP %08x\n", qpn);
+
+ return qp;
+}
+
static void flush_work_handle(struct work_struct *work)
{
struct hns_roce_work *flush_work = container_of(work,
@@ -95,31 +114,28 @@ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
{
- struct device *dev = hr_dev->dev;
struct hns_roce_qp *qp;
- xa_lock(&hr_dev->qp_table_xa);
- qp = __hns_roce_qp_lookup(hr_dev, qpn);
- if (qp)
- refcount_inc(&qp->refcount);
- xa_unlock(&hr_dev->qp_table_xa);
-
- if (!qp) {
- dev_warn(dev, "async event for bogus QP %08x\n", qpn);
+ qp = hns_roce_qp_lookup(hr_dev, qpn);
+ if (!qp)
return;
- }
- if (event_type == HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR ||
- event_type == HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR ||
- event_type == HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR ||
- event_type == HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION ||
- event_type == HNS_ROCE_EVENT_TYPE_INVALID_XRCETH) {
- qp->state = IB_QPS_ERR;
+ qp->event(qp, (enum hns_roce_event)event_type);
- flush_cqe(hr_dev, qp);
- }
+ if (refcount_dec_and_test(&qp->refcount))
+ complete(&qp->free);
+}
- qp->event(qp, (enum hns_roce_event)event_type);
+void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn)
+{
+ struct hns_roce_qp *qp;
+
+ qp = hns_roce_qp_lookup(hr_dev, qpn);
+ if (!qp)
+ return;
+
+ qp->state = IB_QPS_ERR;
+ flush_cqe(hr_dev, qp);
if (refcount_dec_and_test(&qp->refcount))
complete(&qp->free);
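
The hns_roce_qp.c refactor above moves the locked lookup plus refcount_inc() into one helper, so both the event path and the new flush path take a reference under the table lock and drop it when they are done. A minimal single-threaded sketch of that lookup-with-reference pattern follows; the one-slot "table", plain integer refcount, and pthread mutex are simplifications of the real xarray, refcount_t, and irq-safe locking.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
	int id;
	int refcount;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct obj *table_slot;           /* stands in for the qp table */

static struct obj *obj_lookup(int id)
{
	struct obj *o;

	pthread_mutex_lock(&table_lock);
	o = (table_slot && table_slot->id == id) ? table_slot : NULL;
	if (o)
		o->refcount++;           /* take the reference under the lock */
	pthread_mutex_unlock(&table_lock);

	if (!o)
		fprintf(stderr, "event for unknown object %d\n", id);
	return o;
}

static void obj_put(struct obj *o)
{
	if (--o->refcount == 0) {        /* last user frees the object */
		printf("freeing object %d\n", o->id);
		free(o);
	}
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	o->id = 7;
	o->refcount = 1;                 /* the table's own reference */
	table_slot = o;

	struct obj *found = obj_lookup(7);
	if (found) {
		printf("handling event on object %d\n", found->id);
		obj_put(found);          /* drop the lookup reference */
	}

	table_slot = NULL;               /* object removed from the table */
	obj_put(o);                      /* drop the table reference: now freed */
	return 0;
}
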
diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
index 727f92650071..652508b660a0 100644
--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
+++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
@@ -150,8 +150,8 @@ static void free_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_SRQ,
srq->srqn);
if (ret)
- dev_err(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
- ret, srq->srqn);
+ dev_err_ratelimited(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
+ ret, srq->srqn);
xa_erase_irq(&srq_table->xa, srq->srqn);
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 296af7a5c279..c510484e024b 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2796,37 +2796,72 @@ static u8 mlx5_get_umr_fence(u8 umr_fence_cap)
}
}
-static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+int mlx5_ib_dev_res_cq_init(struct mlx5_ib_dev *dev)
{
struct mlx5_ib_resources *devr = &dev->devr;
- struct ib_srq_init_attr attr;
- struct ib_device *ibdev;
struct ib_cq_init_attr cq_attr = {.cqe = 1};
- int port;
+ struct ib_device *ibdev;
+ struct ib_pd *pd;
+ struct ib_cq *cq;
int ret = 0;
- ibdev = &dev->ib_dev;
- if (!MLX5_CAP_GEN(dev->mdev, xrc))
- return -EOPNOTSUPP;
+ /*
+ * devr->c0 is set once, never changed until device unload.
+ * Avoid taking the mutex if initialization is already done.
+ */
+ if (devr->c0)
+ return 0;
- devr->p0 = ib_alloc_pd(ibdev, 0);
- if (IS_ERR(devr->p0))
- return PTR_ERR(devr->p0);
+ mutex_lock(&devr->cq_lock);
+ if (devr->c0)
+ goto unlock;
- devr->c0 = ib_create_cq(ibdev, NULL, NULL, NULL, &cq_attr);
- if (IS_ERR(devr->c0)) {
- ret = PTR_ERR(devr->c0);
- goto error1;
+ ibdev = &dev->ib_dev;
+ pd = ib_alloc_pd(ibdev, 0);
+ if (IS_ERR(pd)) {
+ ret = PTR_ERR(pd);
+ mlx5_ib_err(dev, "Couldn't allocate PD for res init, err=%d\n", ret);
+ goto unlock;
}
- ret = mlx5_cmd_xrcd_alloc(dev->mdev, &devr->xrcdn0, 0);
- if (ret)
- goto error2;
+ cq = ib_create_cq(ibdev, NULL, NULL, NULL, &cq_attr);
+ if (IS_ERR(cq)) {
+ ret = PTR_ERR(cq);
+ mlx5_ib_err(dev, "Couldn't create CQ for res init, err=%d\n", ret);
+ ib_dealloc_pd(pd);
+ goto unlock;
+ }
- ret = mlx5_cmd_xrcd_alloc(dev->mdev, &devr->xrcdn1, 0);
+ devr->p0 = pd;
+ devr->c0 = cq;
+
+unlock:
+ mutex_unlock(&devr->cq_lock);
+ return ret;
+}
+
+int mlx5_ib_dev_res_srq_init(struct mlx5_ib_dev *dev)
+{
+ struct mlx5_ib_resources *devr = &dev->devr;
+ struct ib_srq_init_attr attr;
+ struct ib_srq *s0, *s1;
+ int ret = 0;
+
+ /*
+ * devr->s1 is set once, never changed until device unload.
+ * Avoid taking the mutex if initialization is already done.
+ */
+ if (devr->s1)
+ return 0;
+
+ mutex_lock(&devr->srq_lock);
+ if (devr->s1)
+ goto unlock;
+
+ ret = mlx5_ib_dev_res_cq_init(dev);
if (ret)
- goto error3;
+ goto unlock;
memset(&attr, 0, sizeof(attr));
attr.attr.max_sge = 1;
@@ -2834,10 +2869,11 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
attr.srq_type = IB_SRQT_XRC;
attr.ext.cq = devr->c0;
- devr->s0 = ib_create_srq(devr->p0, &attr);
- if (IS_ERR(devr->s0)) {
- ret = PTR_ERR(devr->s0);
- goto err_create;
+ s0 = ib_create_srq(devr->p0, &attr);
+ if (IS_ERR(s0)) {
+ ret = PTR_ERR(s0);
+ mlx5_ib_err(dev, "Couldn't create SRQ 0 for res init, err=%d\n", ret);
+ goto unlock;
}
memset(&attr, 0, sizeof(attr));
@@ -2845,51 +2881,63 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
attr.attr.max_wr = 1;
attr.srq_type = IB_SRQT_BASIC;
- devr->s1 = ib_create_srq(devr->p0, &attr);
- if (IS_ERR(devr->s1)) {
- ret = PTR_ERR(devr->s1);
- goto error6;
+ s1 = ib_create_srq(devr->p0, &attr);
+ if (IS_ERR(s1)) {
+ ret = PTR_ERR(s1);
+ mlx5_ib_err(dev, "Couldn't create SRQ 1 for res init, err=%d\n", ret);
+ ib_destroy_srq(s0);
}
- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
- INIT_WORK(&devr->ports[port].pkey_change_work,
- pkey_change_handler);
-
- return 0;
+ devr->s0 = s0;
+ devr->s1 = s1;
-error6:
- ib_destroy_srq(devr->s0);
-err_create:
- mlx5_cmd_xrcd_dealloc(dev->mdev, devr->xrcdn1, 0);
-error3:
- mlx5_cmd_xrcd_dealloc(dev->mdev, devr->xrcdn0, 0);
-error2:
- ib_destroy_cq(devr->c0);
-error1:
- ib_dealloc_pd(devr->p0);
+unlock:
+ mutex_unlock(&devr->srq_lock);
return ret;
}
-static void mlx5_ib_dev_res_cleanup(struct mlx5_ib_dev *dev)
+static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
{
struct mlx5_ib_resources *devr = &dev->devr;
- int port;
+ int ret;
- /*
- * Make sure no change P_Key work items are still executing.
- *
- * At this stage, the mlx5_ib_event should be unregistered
- * and it ensures that no new works are added.
- */
- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
- cancel_work_sync(&devr->ports[port].pkey_change_work);
+ if (!MLX5_CAP_GEN(dev->mdev, xrc))
+ return -EOPNOTSUPP;
+
+ ret = mlx5_cmd_xrcd_alloc(dev->mdev, &devr->xrcdn0, 0);
+ if (ret)
+ return ret;
+
+ ret = mlx5_cmd_xrcd_alloc(dev->mdev, &devr->xrcdn1, 0);
+ if (ret) {
+ mlx5_cmd_xrcd_dealloc(dev->mdev, devr->xrcdn0, 0);
+ return ret;
+ }
+
+ mutex_init(&devr->cq_lock);
+ mutex_init(&devr->srq_lock);
- ib_destroy_srq(devr->s1);
- ib_destroy_srq(devr->s0);
+ return 0;
+}
+
+static void mlx5_ib_dev_res_cleanup(struct mlx5_ib_dev *dev)
+{
+ struct mlx5_ib_resources *devr = &dev->devr;
+
+ /* After s0/s1 init, they are not unset during the device lifetime. */
+ if (devr->s1) {
+ ib_destroy_srq(devr->s1);
+ ib_destroy_srq(devr->s0);
+ }
mlx5_cmd_xrcd_dealloc(dev->mdev, devr->xrcdn1, 0);
mlx5_cmd_xrcd_dealloc(dev->mdev, devr->xrcdn0, 0);
- ib_destroy_cq(devr->c0);
- ib_dealloc_pd(devr->p0);
+ /* After p0/c0 init, they are not unset during the device lifetime. */
+ if (devr->c0) {
+ ib_destroy_cq(devr->c0);
+ ib_dealloc_pd(devr->p0);
+ }
+ mutex_destroy(&devr->cq_lock);
+ mutex_destroy(&devr->srq_lock);
}
static u32 get_core_cap_flags(struct ib_device *ibdev,
@@ -4138,6 +4186,13 @@ static void mlx5_ib_stage_delay_drop_cleanup(struct mlx5_ib_dev *dev)
static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
{
+ struct mlx5_ib_resources *devr = &dev->devr;
+ int port;
+
+ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+ INIT_WORK(&devr->ports[port].pkey_change_work,
+ pkey_change_handler);
+
dev->mdev_events.notifier_call = mlx5_ib_event;
mlx5_notifier_register(dev->mdev, &dev->mdev_events);
@@ -4148,8 +4203,14 @@ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
static void mlx5_ib_stage_dev_notifier_cleanup(struct mlx5_ib_dev *dev)
{
+ struct mlx5_ib_resources *devr = &dev->devr;
+ int port;
+
mlx5r_macsec_event_unregister(dev);
mlx5_notifier_unregister(dev->mdev, &dev->mdev_events);
+
+ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+ cancel_work_sync(&devr->ports[port].pkey_change_work);
}
void __mlx5_ib_remove(struct mlx5_ib_dev *dev,
@@ -4223,9 +4284,6 @@ static const struct mlx5_ib_profile pf_profile = {
STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
mlx5_ib_dev_res_init,
mlx5_ib_dev_res_cleanup),
- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
- mlx5_ib_stage_dev_notifier_init,
- mlx5_ib_stage_dev_notifier_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_ODP,
mlx5_ib_odp_init_one,
mlx5_ib_odp_cleanup_one),
@@ -4250,6 +4308,9 @@ static const struct mlx5_ib_profile pf_profile = {
STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
mlx5_ib_stage_ib_reg_init,
mlx5_ib_stage_ib_reg_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ mlx5_ib_stage_dev_notifier_init,
+ mlx5_ib_stage_dev_notifier_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
mlx5_ib_stage_post_ib_reg_umr_init,
NULL),
@@ -4286,9 +4347,6 @@ const struct mlx5_ib_profile raw_eth_profile = {
STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
mlx5_ib_dev_res_init,
mlx5_ib_dev_res_cleanup),
- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
- mlx5_ib_stage_dev_notifier_init,
- mlx5_ib_stage_dev_notifier_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_COUNTERS,
mlx5_ib_counters_init,
mlx5_ib_counters_cleanup),
@@ -4310,6 +4368,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
mlx5_ib_stage_ib_reg_init,
mlx5_ib_stage_ib_reg_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ mlx5_ib_stage_dev_notifier_init,
+ mlx5_ib_stage_dev_notifier_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
mlx5_ib_stage_post_ib_reg_umr_init,
NULL),
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 43a963e205eb..94678e5c59dd 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -820,11 +820,13 @@ struct mlx5_ib_port_resources {
struct mlx5_ib_resources {
struct ib_cq *c0;
+ struct mutex cq_lock;
u32 xrcdn0;
u32 xrcdn1;
struct ib_pd *p0;
struct ib_srq *s0;
struct ib_srq *s1;
+ struct mutex srq_lock;
struct mlx5_ib_port_resources ports[2];
};
@@ -952,7 +954,6 @@ enum mlx5_ib_stages {
MLX5_IB_STAGE_QP,
MLX5_IB_STAGE_SRQ,
MLX5_IB_STAGE_DEVICE_RESOURCES,
- MLX5_IB_STAGE_DEVICE_NOTIFIER,
MLX5_IB_STAGE_ODP,
MLX5_IB_STAGE_COUNTERS,
MLX5_IB_STAGE_CONG_DEBUGFS,
@@ -961,6 +962,7 @@ enum mlx5_ib_stages {
MLX5_IB_STAGE_PRE_IB_REG_UMR,
MLX5_IB_STAGE_WHITELIST_UID,
MLX5_IB_STAGE_IB_REG,
+ MLX5_IB_STAGE_DEVICE_NOTIFIER,
MLX5_IB_STAGE_POST_IB_REG_UMR,
MLX5_IB_STAGE_DELAY_DROP,
MLX5_IB_STAGE_RESTRACK,
@@ -1270,6 +1272,8 @@ to_mmmap(struct rdma_user_mmap_entry *rdma_entry)
struct mlx5_user_mmap_entry, rdma_entry);
}
+int mlx5_ib_dev_res_cq_init(struct mlx5_ib_dev *dev);
+int mlx5_ib_dev_res_srq_init(struct mlx5_ib_dev *dev);
int mlx5_ib_db_map_user(struct mlx5_ib_ucontext *context, unsigned long virt,
struct mlx5_db *db);
void mlx5_ib_db_unmap_user(struct mlx5_ib_ucontext *context, struct mlx5_db *db);
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 93d9b15cbbb9..71a856409cee 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3247,6 +3247,10 @@ int mlx5_ib_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr,
enum ib_qp_type type;
int err;
+ err = mlx5_ib_dev_res_srq_init(dev);
+ if (err)
+ return err;
+
err = check_qp_type(dev, attr, &type);
if (err)
return err;
diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
index 84be0c3d5699..bcb6b324af50 100644
--- a/drivers/infiniband/hw/mlx5/srq.c
+++ b/drivers/infiniband/hw/mlx5/srq.c
@@ -216,6 +216,10 @@ int mlx5_ib_create_srq(struct ib_srq *ib_srq,
return -EINVAL;
}
+ err = mlx5_ib_dev_res_cq_init(dev);
+ if (err)
+ return err;
+
mutex_init(&srq->mutex);
spin_lock_init(&srq->lock);
srq->msrq.max = roundup_pow_of_two(init_attr->attr.max_wr + 1);
diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
index 28e379c108bc..3767d7fc0aac 100644
--- a/drivers/infiniband/sw/rxe/rxe_qp.c
+++ b/drivers/infiniband/sw/rxe/rxe_qp.c
@@ -781,6 +781,7 @@ int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask)
* Yield the processor
*/
spin_lock_irqsave(&qp->state_lock, flags);
+ attr->cur_qp_state = qp_state(qp);
if (qp->attr.sq_draining) {
spin_unlock_irqrestore(&qp->state_lock, flags);
cond_resched();
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 7a36080d2bae..7ff152ffe15b 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -693,10 +693,12 @@ int rxe_requester(struct rxe_qp *qp)
if (unlikely(qp_state(qp) == IB_QPS_ERR)) {
wqe = __req_next_wqe(qp);
spin_unlock_irqrestore(&qp->state_lock, flags);
- if (wqe)
+ if (wqe) {
+ wqe->status = IB_WC_WR_FLUSH_ERR;
goto err;
- else
+ } else {
goto exit;
+ }
}
if (unlikely(qp_state(qp) == IB_QPS_RESET)) {
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 3a7c647d3aff..d6381c00bb8d 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -815,14 +815,15 @@ static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn,
while (1) {
offset = pfn_level_offset(pfn, level);
pte = &parent[offset];
- if (!pte || (dma_pte_superpage(pte) || !dma_pte_present(pte))) {
- pr_info("PTE not present at level %d\n", level);
- break;
- }
pr_info("pte level: %d, pte value: 0x%016llx\n", level, pte->val);
- if (level == 1)
+ if (!dma_pte_present(pte)) {
+ pr_info("page table not present at level %d\n", level - 1);
+ break;
+ }
+
+ if (level == 1 || dma_pte_superpage(pte))
break;
parent = phys_to_virt(dma_pte_addr(pte));
@@ -845,11 +846,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
pr_info("Dump %s table entries for IOVA 0x%llx\n", iommu->name, addr);
/* root entry dump */
- rt_entry = &iommu->root_entry[bus];
- if (!rt_entry) {
- pr_info("root table entry is not present\n");
+ if (!iommu->root_entry) {
+ pr_info("root table is not present\n");
return;
}
+ rt_entry = &iommu->root_entry[bus];
if (sm_supported(iommu))
pr_info("scalable mode root entry: hi 0x%016llx, low 0x%016llx\n",
@@ -860,7 +861,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
/* context entry dump */
ctx_entry = iommu_context_addr(iommu, bus, devfn, 0);
if (!ctx_entry) {
- pr_info("context table entry is not present\n");
+ pr_info("context table is not present\n");
return;
}
@@ -869,17 +870,23 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
/* legacy mode does not require PASID entries */
if (!sm_supported(iommu)) {
+ if (!context_present(ctx_entry)) {
+ pr_info("legacy mode page table is not present\n");
+ return;
+ }
level = agaw_to_level(ctx_entry->hi & 7);
pgtable = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
goto pgtable_walk;
}
- /* get the pointer to pasid directory entry */
- dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
- if (!dir) {
- pr_info("pasid directory entry is not present\n");
+ if (!context_present(ctx_entry)) {
+ pr_info("pasid directory table is not present\n");
return;
}
+
+ /* get the pointer to pasid directory entry */
+ dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
+
/* For request-without-pasid, get the pasid from context entry */
if (intel_iommu_sm && pasid == IOMMU_PASID_INVALID)
pasid = IOMMU_NO_PASID;
@@ -891,7 +898,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
/* get the pointer to the pasid table entry */
entries = get_pasid_table_from_pde(pde);
if (!entries) {
- pr_info("pasid table entry is not present\n");
+ pr_info("pasid table is not present\n");
return;
}
index = pasid & PASID_PTE_MASK;
@@ -899,6 +906,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
for (i = 0; i < ARRAY_SIZE(pte->val); i++)
pr_info("pasid table entry[%d]: 0x%016llx\n", i, pte->val[i]);
+ if (!pasid_pte_is_present(pte)) {
+ pr_info("scalable mode page table is not present\n");
+ return;
+ }
+
if (pasid_pte_get_pgtt(pte) == PASID_ENTRY_PGTT_FL_ONLY) {
level = pte->val[2] & BIT_ULL(2) ? 5 : 4;
pgtable = phys_to_virt(pte->val[2] & VTD_PAGE_MASK);
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 934dc97f5df9..bc758ab70f49 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -180,6 +180,18 @@ static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
}
+/*
+ * Convert an index returned by ARM_LPAE_PGD_IDX(), which can point into
+ * a concatenated PGD, into the maximum number of entries that can be
+ * mapped in the same table page.
+ */
+static inline int arm_lpae_max_entries(int i, struct arm_lpae_io_pgtable *data)
+{
+ int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data);
+
+ return ptes_per_table - (i & (ptes_per_table - 1));
+}
+
static bool selftest_running = false;
static dma_addr_t __arm_lpae_dma_addr(void *pages)
@@ -357,7 +369,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
/* If we can install a leaf entry at this level, then do so */
if (size == block_size) {
- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
+ max_entries = arm_lpae_max_entries(map_idx_start, data);
num_entries = min_t(int, pgcount, max_entries);
ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
if (!ret)
@@ -557,7 +569,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
if (size == split_sz) {
unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
- max_entries = ptes_per_table - unmap_idx_start;
+ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
num_entries = min_t(int, pgcount, max_entries);
}
@@ -615,7 +627,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
/* If the size matches this level, we're in the right place */
if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start;
+ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
num_entries = min_t(int, pgcount, max_entries);
while (i < num_entries) {
diff --git a/drivers/leds/flash/leds-mt6360.c b/drivers/leds/flash/leds-mt6360.c
index fdf0812774ce..7abd535dce31 100644
--- a/drivers/leds/flash/leds-mt6360.c
+++ b/drivers/leds/flash/leds-mt6360.c
@@ -774,7 +774,6 @@ static void mt6360_v4l2_flash_release(struct mt6360_priv *priv)
static int mt6360_led_probe(struct platform_device *pdev)
{
struct mt6360_priv *priv;
- struct fwnode_handle *child;
size_t count;
int i = 0, ret;
@@ -801,7 +800,7 @@ static int mt6360_led_probe(struct platform_device *pdev)
return -ENODEV;
}
- device_for_each_child_node(&pdev->dev, child) {
+ device_for_each_child_node_scoped(&pdev->dev, child) {
struct mt6360_led *led = priv->leds + i;
struct led_init_data init_data = { .fwnode = child, };
u32 reg, led_color;
diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
index 77bb26906ea6..84028800aed3 100644
--- a/drivers/leds/leds-lp55xx-common.c
+++ b/drivers/leds/leds-lp55xx-common.c
@@ -580,9 +580,6 @@ static int lp55xx_parse_common_child(struct device_node *np,
if (ret)
return ret;
- if (*chan_nr < 0 || *chan_nr > cfg->max_channel)
- return -EINVAL;
-
return 0;
}
diff --git a/drivers/mailbox/arm_mhuv2.c b/drivers/mailbox/arm_mhuv2.c
index 0ec21dcdbde7..cff7c343ee08 100644
--- a/drivers/mailbox/arm_mhuv2.c
+++ b/drivers/mailbox/arm_mhuv2.c
@@ -500,7 +500,7 @@ static const struct mhuv2_protocol_ops mhuv2_data_transfer_ops = {
static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
{
struct mbox_chan *chans = mhu->mbox.chans;
- int channel = 0, i, offset = 0, windows, protocol, ch_wn;
+ int channel = 0, i, j, offset = 0, windows, protocol, ch_wn;
u32 stat;
for (i = 0; i < MHUV2_CMB_INT_ST_REG_CNT; i++) {
@@ -510,9 +510,9 @@ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
ch_wn = i * MHUV2_STAT_BITS + __builtin_ctz(stat);
- for (i = 0; i < mhu->length; i += 2) {
- protocol = mhu->protocols[i];
- windows = mhu->protocols[i + 1];
+ for (j = 0; j < mhu->length; j += 2) {
+ protocol = mhu->protocols[j];
+ windows = mhu->protocols[j + 1];
if (ch_wn >= offset + windows) {
if (protocol == DOORBELL)
diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
index 4d62b07c1411..d5f5606585f4 100644
--- a/drivers/mailbox/mtk-cmdq-mailbox.c
+++ b/drivers/mailbox/mtk-cmdq-mailbox.c
@@ -623,12 +623,6 @@ static int cmdq_probe(struct platform_device *pdev)
cmdq->mbox.chans[i].con_priv = (void *)&cmdq->thread[i];
}
- err = devm_mbox_controller_register(dev, &cmdq->mbox);
- if (err < 0) {
- dev_err(dev, "failed to register mailbox: %d\n", err);
- return err;
- }
-
platform_set_drvdata(pdev, cmdq);
WARN_ON(clk_bulk_prepare(cmdq->pdata->gce_num, cmdq->clocks));
@@ -642,6 +636,12 @@ static int cmdq_probe(struct platform_device *pdev)
return err;
}
+ err = devm_mbox_controller_register(dev, &cmdq->mbox);
+ if (err < 0) {
+ dev_err(dev, "failed to register mailbox: %d\n", err);
+ return err;
+ }
+
return 0;
}
diff --git a/drivers/md/bcache/closure.c b/drivers/md/bcache/closure.c
index d8d9394a6beb..18f21d4e9aaa 100644
--- a/drivers/md/bcache/closure.c
+++ b/drivers/md/bcache/closure.c
@@ -17,10 +17,16 @@ static inline void closure_put_after_sub(struct closure *cl, int flags)
{
int r = flags & CLOSURE_REMAINING_MASK;
- BUG_ON(flags & CLOSURE_GUARD_MASK);
- BUG_ON(!r && (flags & ~CLOSURE_DESTRUCTOR));
+ if (WARN(flags & CLOSURE_GUARD_MASK,
+ "closure has guard bits set: %x (%u)",
+ flags & CLOSURE_GUARD_MASK, (unsigned) __fls(r)))
+ r &= ~CLOSURE_GUARD_MASK;
if (!r) {
+ WARN(flags & ~CLOSURE_DESTRUCTOR,
+ "closure ref hit 0 with incorrect flags set: %x (%u)",
+ flags & ~CLOSURE_DESTRUCTOR, (unsigned) __fls(flags));
+
if (cl->fn && !(flags & CLOSURE_DESTRUCTOR)) {
atomic_set(&cl->remaining,
CLOSURE_REMAINING_INITIALIZER);
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index f1345781c861..30ddfb21f658 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -2444,7 +2444,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
int r;
unsigned int num_locks;
struct dm_bufio_client *c;
- char slab_name[27];
+ char slab_name[64];
+ static atomic_t seqno = ATOMIC_INIT(0);
if (!block_size || block_size & ((1 << SECTOR_SHIFT) - 1)) {
DMERR("%s: block size not specified or is not multiple of 512b", __func__);
@@ -2495,7 +2496,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
(block_size < PAGE_SIZE || !is_power_of_2(block_size))) {
unsigned int align = min(1U << __ffs(block_size), (unsigned int)PAGE_SIZE);
- snprintf(slab_name, sizeof(slab_name), "dm_bufio_cache-%u", block_size);
+ snprintf(slab_name, sizeof(slab_name), "dm_bufio_cache-%u-%u",
+ block_size, atomic_inc_return(&seqno));
c->slab_cache = kmem_cache_create(slab_name, block_size, align,
SLAB_RECLAIM_ACCOUNT, NULL);
if (!c->slab_cache) {
@@ -2504,9 +2506,11 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
}
}
if (aux_size)
- snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u", aux_size);
+ snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u-%u",
+ aux_size, atomic_inc_return(&seqno));
else
- snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer");
+ snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u",
+ atomic_inc_return(&seqno));
c->slab_buffer = kmem_cache_create(slab_name, sizeof(struct dm_buffer) + aux_size,
0, SLAB_RECLAIM_ACCOUNT, NULL);
if (!c->slab_buffer) {
diff --git a/drivers/md/dm-cache-background-tracker.c b/drivers/md/dm-cache-background-tracker.c
index 9c5308298cf1..f3051bd7d2df 100644
--- a/drivers/md/dm-cache-background-tracker.c
+++ b/drivers/md/dm-cache-background-tracker.c
@@ -11,12 +11,6 @@
#define DM_MSG_PREFIX "dm-background-tracker"
-struct bt_work {
- struct list_head list;
- struct rb_node node;
- struct policy_work work;
-};
-
struct background_tracker {
unsigned int max_work;
atomic_t pending_promotes;
@@ -26,10 +20,10 @@ struct background_tracker {
struct list_head issued;
struct list_head queued;
struct rb_root pending;
-
- struct kmem_cache *work_cache;
};
+struct kmem_cache *btracker_work_cache = NULL;
+
struct background_tracker *btracker_create(unsigned int max_work)
{
struct background_tracker *b = kmalloc(sizeof(*b), GFP_KERNEL);
@@ -48,12 +42,6 @@ struct background_tracker *btracker_create(unsigned int max_work)
INIT_LIST_HEAD(&b->queued);
b->pending = RB_ROOT;
- b->work_cache = KMEM_CACHE(bt_work, 0);
- if (!b->work_cache) {
- DMERR("couldn't create mempool for background work items");
- kfree(b);
- b = NULL;
- }
return b;
}
@@ -66,10 +54,9 @@ void btracker_destroy(struct background_tracker *b)
BUG_ON(!list_empty(&b->issued));
list_for_each_entry_safe (w, tmp, &b->queued, list) {
list_del(&w->list);
- kmem_cache_free(b->work_cache, w);
+ kmem_cache_free(btracker_work_cache, w);
}
- kmem_cache_destroy(b->work_cache);
kfree(b);
}
EXPORT_SYMBOL_GPL(btracker_destroy);
@@ -180,7 +167,7 @@ static struct bt_work *alloc_work(struct background_tracker *b)
if (max_work_reached(b))
return NULL;
- return kmem_cache_alloc(b->work_cache, GFP_NOWAIT);
+ return kmem_cache_alloc(btracker_work_cache, GFP_NOWAIT);
}
int btracker_queue(struct background_tracker *b,
@@ -203,7 +190,7 @@ int btracker_queue(struct background_tracker *b,
* There was a race, we'll just ignore this second
* bit of work for the same oblock.
*/
- kmem_cache_free(b->work_cache, w);
+ kmem_cache_free(btracker_work_cache, w);
return -EINVAL;
}
@@ -244,7 +231,7 @@ void btracker_complete(struct background_tracker *b,
update_stats(b, &w->work, -1);
rb_erase(&w->node, &b->pending);
list_del(&w->list);
- kmem_cache_free(b->work_cache, w);
+ kmem_cache_free(btracker_work_cache, w);
}
EXPORT_SYMBOL_GPL(btracker_complete);
diff --git a/drivers/md/dm-cache-background-tracker.h b/drivers/md/dm-cache-background-tracker.h
index 5b8f5c667b81..09c8fc59f7bb 100644
--- a/drivers/md/dm-cache-background-tracker.h
+++ b/drivers/md/dm-cache-background-tracker.h
@@ -26,6 +26,14 @@
* protected with a spinlock.
*/
+struct bt_work {
+ struct list_head list;
+ struct rb_node node;
+ struct policy_work work;
+};
+
+extern struct kmem_cache *btracker_work_cache;
+
struct background_work;
struct background_tracker;
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index fb809b46d6aa..c5851c9f7ec0 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -10,6 +10,7 @@
#include "dm-bio-record.h"
#include "dm-cache-metadata.h"
#include "dm-io-tracker.h"
+#include "dm-cache-background-tracker.h"
#include <linux/dm-io.h>
#include <linux/dm-kcopyd.h>
@@ -2267,7 +2268,7 @@ static int parse_cache_args(struct cache_args *ca, int argc, char **argv,
/*----------------------------------------------------------------*/
-static struct kmem_cache *migration_cache;
+static struct kmem_cache *migration_cache = NULL;
#define NOT_CORE_OPTION 1
@@ -3455,22 +3456,36 @@ static int __init dm_cache_init(void)
int r;
migration_cache = KMEM_CACHE(dm_cache_migration, 0);
- if (!migration_cache)
- return -ENOMEM;
+ if (!migration_cache) {
+ r = -ENOMEM;
+ goto err;
+ }
+
+ btracker_work_cache = kmem_cache_create("dm_cache_bt_work",
+ sizeof(struct bt_work), __alignof__(struct bt_work), 0, NULL);
+ if (!btracker_work_cache) {
+ r = -ENOMEM;
+ goto err;
+ }
r = dm_register_target(&cache_target);
if (r) {
- kmem_cache_destroy(migration_cache);
- return r;
+ goto err;
}
return 0;
+
+err:
+ kmem_cache_destroy(migration_cache);
+ kmem_cache_destroy(btracker_work_cache);
+ return r;
}
static void __exit dm_cache_exit(void)
{
dm_unregister_target(&cache_target);
kmem_cache_destroy(migration_cache);
+ kmem_cache_destroy(btracker_work_cache);
}
module_init(dm_cache_init);
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 07c7f9795b10..032cefe3e351 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2486,6 +2486,7 @@ static void pool_work_wait(struct pool_work *pw, struct pool *pool,
init_completion(&pw->complete);
queue_work(pool->wq, &pw->worker);
wait_for_completion(&pw->complete);
+ destroy_work_on_stack(&pw->worker);
}
/*----------------------------------------------------------------*/
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index be65472d8f8b..ba63076cd8f2 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1089,6 +1089,7 @@ void md_bitmap_unplug_async(struct bitmap *bitmap)
queue_work(md_bitmap_wq, &unplug_work.work);
wait_for_completion(&done);
+ destroy_work_on_stack(&unplug_work.work);
}
EXPORT_SYMBOL(md_bitmap_unplug_async);
diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
index 591d1a43d035..101736b80df8 100644
--- a/drivers/md/persistent-data/dm-space-map-common.c
+++ b/drivers/md/persistent-data/dm-space-map-common.c
@@ -51,7 +51,7 @@ static int index_check(struct dm_block_validator *v,
block_size - sizeof(__le32),
INDEX_CSUM_XOR));
if (csum_disk != mi_le->csum) {
- DMERR_LIMIT("i%s failed: csum %u != wanted %u", __func__,
+ DMERR_LIMIT("%s failed: csum %u != wanted %u", __func__,
le32_to_cpu(csum_disk), le32_to_cpu(mi_le->csum));
return -EILSEQ;
}
diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c
index a5ebce57f35e..d49facd92cc0 100644
--- a/drivers/media/dvb-frontends/ts2020.c
+++ b/drivers/media/dvb-frontends/ts2020.c
@@ -553,13 +553,19 @@ static void ts2020_regmap_unlock(void *__dev)
static int ts2020_probe(struct i2c_client *client)
{
struct ts2020_config *pdata = client->dev.platform_data;
- struct dvb_frontend *fe = pdata->fe;
+ struct dvb_frontend *fe;
struct ts2020_priv *dev;
int ret;
u8 u8tmp;
unsigned int utmp;
char *chip_str;
+ if (!pdata) {
+ dev_err(&client->dev, "platform data is mandatory\n");
+ return -EINVAL;
+ }
+
+ fe = pdata->fe;
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev) {
ret = -ENOMEM;
diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
index d1609bd8f048..3bd31b587023 100644
--- a/drivers/media/i2c/adv7604.c
+++ b/drivers/media/i2c/adv7604.c
@@ -1405,12 +1405,13 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
(stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
(stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
- false, timings))
+ false, adv76xx_get_dv_timings_cap(sd, -1), timings))
return 0;
if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
(stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
(stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
- false, state->aspect_ratio, timings))
+ false, state->aspect_ratio,
+ adv76xx_get_dv_timings_cap(sd, -1), timings))
return 0;
v4l2_dbg(2, debug, sd,
diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
index c1664a3620c8..cb90178ce4b1 100644
--- a/drivers/media/i2c/adv7842.c
+++ b/drivers/media/i2c/adv7842.c
@@ -1431,14 +1431,15 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
}
if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
- false, timings))
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+ false, adv7842_get_dv_timings_cap(sd), timings))
return 0;
if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
- false, state->aspect_ratio, timings))
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+ false, state->aspect_ratio,
+ adv7842_get_dv_timings_cap(sd), timings))
return 0;
v4l2_dbg(2, debug, sd,
diff --git a/drivers/media/i2c/ds90ub960.c b/drivers/media/i2c/ds90ub960.c
index 8ba5750f5a23..7f30e8923633 100644
--- a/drivers/media/i2c/ds90ub960.c
+++ b/drivers/media/i2c/ds90ub960.c
@@ -1286,7 +1286,7 @@ static int ub960_rxport_get_strobe_pos(struct ub960_data *priv,
clk_delay += v & UB960_IR_RX_ANA_STROBE_SET_CLK_DELAY_MASK;
- ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
+ ret = ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
if (ret)
return ret;
diff --git a/drivers/media/i2c/dw9768.c b/drivers/media/i2c/dw9768.c
index daabbece8c7e..682a1aa7febd 100644
--- a/drivers/media/i2c/dw9768.c
+++ b/drivers/media/i2c/dw9768.c
@@ -476,10 +476,9 @@ static int dw9768_probe(struct i2c_client *client)
* to be powered on in an ACPI system. Similarly for power off in
* remove.
*/
- pm_runtime_enable(dev);
full_power = (is_acpi_node(dev_fwnode(dev)) &&
acpi_dev_state_d0(dev)) ||
- (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev));
+ (is_of_node(dev_fwnode(dev)) && !IS_ENABLED(CONFIG_PM));
if (full_power) {
ret = dw9768_runtime_resume(dev);
if (ret < 0) {
@@ -489,6 +488,7 @@ static int dw9768_probe(struct i2c_client *client)
pm_runtime_set_active(dev);
}
+ pm_runtime_enable(dev);
ret = v4l2_async_register_subdev(&dw9768->sd);
if (ret < 0) {
dev_err(dev, "failed to register V4L2 subdev: %d", ret);
@@ -500,12 +500,12 @@ static int dw9768_probe(struct i2c_client *client)
return 0;
err_power_off:
+ pm_runtime_disable(dev);
if (full_power) {
dw9768_runtime_suspend(dev);
pm_runtime_set_suspended(dev);
}
err_clean_entity:
- pm_runtime_disable(dev);
media_entity_cleanup(&dw9768->sd.entity);
err_free_handler:
v4l2_ctrl_handler_free(&dw9768->ctrls);
@@ -522,12 +522,12 @@ static void dw9768_remove(struct i2c_client *client)
v4l2_async_unregister_subdev(&dw9768->sd);
v4l2_ctrl_handler_free(&dw9768->ctrls);
media_entity_cleanup(&dw9768->sd.entity);
+ pm_runtime_disable(dev);
if ((is_acpi_node(dev_fwnode(dev)) && acpi_dev_state_d0(dev)) ||
- (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev))) {
+ (is_of_node(dev_fwnode(dev)) && !IS_ENABLED(CONFIG_PM))) {
dw9768_runtime_suspend(dev);
pm_runtime_set_suspended(dev);
}
- pm_runtime_disable(dev);
}
static const struct of_device_id dw9768_of_table[] = {
diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
index 558152575d10..c81dd4183404 100644
--- a/drivers/media/i2c/tc358743.c
+++ b/drivers/media/i2c/tc358743.c
@@ -2159,8 +2159,10 @@ static int tc358743_probe(struct i2c_client *client)
err_work_queues:
cec_unregister_adapter(state->cec_adap);
- if (!state->i2c_client->irq)
+ if (!state->i2c_client->irq) {
+ del_timer(&state->timer);
flush_work(&state->work_i2c_poll);
+ }
cancel_delayed_work(&state->delayed_work_enable_hotplug);
mutex_destroy(&state->confctl_mutex);
err_hdl:
diff --git a/drivers/media/platform/allegro-dvt/allegro-core.c b/drivers/media/platform/allegro-dvt/allegro-core.c
index da61f9beb6b4..7dffea2ad88a 100644
--- a/drivers/media/platform/allegro-dvt/allegro-core.c
+++ b/drivers/media/platform/allegro-dvt/allegro-core.c
@@ -1509,8 +1509,10 @@ static int allocate_buffers_internal(struct allegro_channel *channel,
INIT_LIST_HEAD(&buffer->head);
err = allegro_alloc_buffer(dev, buffer, size);
- if (err)
+ if (err) {
+ kfree(buffer);
goto err;
+ }
list_add(&buffer->head, list);
}
diff --git a/drivers/media/platform/amphion/vpu_drv.c b/drivers/media/platform/amphion/vpu_drv.c
index 2bf70aafd2ba..51d5234869f5 100644
--- a/drivers/media/platform/amphion/vpu_drv.c
+++ b/drivers/media/platform/amphion/vpu_drv.c
@@ -151,8 +151,8 @@ static int vpu_probe(struct platform_device *pdev)
media_device_cleanup(&vpu->mdev);
v4l2_device_unregister(&vpu->v4l2_dev);
err_vpu_deinit:
- pm_runtime_set_suspended(dev);
pm_runtime_disable(dev);
+ pm_runtime_set_suspended(dev);
return ret;
}
diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
index d7e0de49b3dc..61d27b63b99d 100644
--- a/drivers/media/platform/amphion/vpu_v4l2.c
+++ b/drivers/media/platform/amphion/vpu_v4l2.c
@@ -825,6 +825,7 @@ int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
vfd->fops = vdec_get_fops();
vfd->ioctl_ops = vdec_get_ioctl_ops();
}
+ video_set_drvdata(vfd, vpu);
ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
if (ret) {
@@ -832,7 +833,6 @@ int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
v4l2_m2m_release(func->m2m_dev);
return ret;
}
- video_set_drvdata(vfd, vpu);
func->vfd = vfd;
ret = v4l2_m2m_register_media_controller(func->m2m_dev, func->vfd, func->function);
diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
index c3456c700c07..4c7b46f5a7dd 100644
--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
@@ -1294,6 +1294,11 @@ static int mtk_jpeg_single_core_init(struct platform_device *pdev,
return 0;
}
+static void mtk_jpeg_destroy_workqueue(void *data)
+{
+ destroy_workqueue(data);
+}
+
static int mtk_jpeg_probe(struct platform_device *pdev)
{
struct mtk_jpeg_dev *jpeg;
@@ -1338,6 +1343,11 @@ static int mtk_jpeg_probe(struct platform_device *pdev)
| WQ_FREEZABLE);
if (!jpeg->workqueue)
return -EINVAL;
+ ret = devm_add_action_or_reset(&pdev->dev,
+ mtk_jpeg_destroy_workqueue,
+ jpeg->workqueue);
+ if (ret)
+ return ret;
}
ret = v4l2_device_register(&pdev->dev, &jpeg->v4l2_dev);
diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
index 4a6ee211e18f..2c5d74939d0a 100644
--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
+++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
@@ -578,11 +578,6 @@ static int mtk_jpegdec_hw_init_irq(struct mtk_jpegdec_comp_dev *dev)
return 0;
}
-static void mtk_jpegdec_destroy_workqueue(void *data)
-{
- destroy_workqueue(data);
-}
-
static int mtk_jpegdec_hw_probe(struct platform_device *pdev)
{
struct mtk_jpegdec_clk *jpegdec_clk;
@@ -606,12 +601,6 @@ static int mtk_jpegdec_hw_probe(struct platform_device *pdev)
dev->plat_dev = pdev;
dev->dev = &pdev->dev;
- ret = devm_add_action_or_reset(&pdev->dev,
- mtk_jpegdec_destroy_workqueue,
- master_dev->workqueue);
- if (ret)
- return ret;
-
spin_lock_init(&dev->hw_lock);
dev->hw_state = MTK_JPEG_HW_IDLE;
diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
index 2007152cd7a4..e8dcd44f6e46 100644
--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
@@ -2674,6 +2674,8 @@ static void mxc_jpeg_detach_pm_domains(struct mxc_jpeg_dev *jpeg)
int i;
for (i = 0; i < jpeg->num_domains; i++) {
+ if (jpeg->pd_dev[i] && !pm_runtime_suspended(jpeg->pd_dev[i]))
+ pm_runtime_force_suspend(jpeg->pd_dev[i]);
if (jpeg->pd_link[i] && !IS_ERR(jpeg->pd_link[i]))
device_link_del(jpeg->pd_link[i]);
if (jpeg->pd_dev[i] && !IS_ERR(jpeg->pd_dev[i]))
@@ -2837,6 +2839,7 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
jpeg->dec_vdev->vfl_dir = VFL_DIR_M2M;
jpeg->dec_vdev->device_caps = V4L2_CAP_STREAMING |
V4L2_CAP_VIDEO_M2M_MPLANE;
+ video_set_drvdata(jpeg->dec_vdev, jpeg);
if (mode == MXC_JPEG_ENCODE) {
v4l2_disable_ioctl(jpeg->dec_vdev, VIDIOC_DECODER_CMD);
v4l2_disable_ioctl(jpeg->dec_vdev, VIDIOC_TRY_DECODER_CMD);
@@ -2849,7 +2852,6 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
dev_err(dev, "failed to register video device\n");
goto err_vdev_register;
}
- video_set_drvdata(jpeg->dec_vdev, jpeg);
if (mode == MXC_JPEG_ENCODE)
v4l2_info(&jpeg->v4l2_dev,
"encoder device registered as /dev/video%d (%d,%d)\n",
diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
index 0fc9414f8f18..b570eb8c3756 100644
--- a/drivers/media/platform/qcom/venus/core.c
+++ b/drivers/media/platform/qcom/venus/core.c
@@ -406,8 +406,8 @@ static int venus_probe(struct platform_device *pdev)
of_platform_depopulate(dev);
err_runtime_disable:
pm_runtime_put_noidle(dev);
- pm_runtime_set_suspended(dev);
pm_runtime_disable(dev);
+ pm_runtime_set_suspended(dev);
hfi_destroy(core);
err_core_deinit:
hfi_core_deinit(core, false);
diff --git a/drivers/media/platform/samsung/exynos4-is/media-dev.h b/drivers/media/platform/samsung/exynos4-is/media-dev.h
index 786264cf79dc..a50e58ab7ef7 100644
--- a/drivers/media/platform/samsung/exynos4-is/media-dev.h
+++ b/drivers/media/platform/samsung/exynos4-is/media-dev.h
@@ -178,8 +178,9 @@ int fimc_md_set_camclk(struct v4l2_subdev *sd, bool on);
#ifdef CONFIG_OF
static inline bool fimc_md_is_isp_available(struct device_node *node)
{
- node = of_get_child_by_name(node, FIMC_IS_OF_NODE_NAME);
- return node ? of_device_is_available(node) : false;
+ struct device_node *child __free(device_node) =
+ of_get_child_by_name(node, FIMC_IS_OF_NODE_NAME);
+ return child ? of_device_is_available(child) : false;
}
#else
#define fimc_md_is_isp_available(node) (false)
diff --git a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
index cc4483857489..ff78b3172829 100644
--- a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
+++ b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
@@ -161,8 +161,7 @@ static int rockchip_vpu981_av1_dec_frame_ref(struct hantro_ctx *ctx,
av1_dec->frame_refs[i].timestamp = timestamp;
av1_dec->frame_refs[i].frame_type = frame->frame_type;
av1_dec->frame_refs[i].order_hint = frame->order_hint;
- if (!av1_dec->frame_refs[i].vb2_ref)
- av1_dec->frame_refs[i].vb2_ref = hantro_get_dst_buf(ctx);
+ av1_dec->frame_refs[i].vb2_ref = hantro_get_dst_buf(ctx);
for (j = 0; j < V4L2_AV1_TOTAL_REFS_PER_FRAME; j++)
av1_dec->frame_refs[i].order_hints[j] = frame->order_hints[j];
diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
index 3da8e5102bec..1225dab9fc7d 100644
--- a/drivers/media/radio/wl128x/fmdrv_common.c
+++ b/drivers/media/radio/wl128x/fmdrv_common.c
@@ -466,11 +466,12 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
return -ETIMEDOUT;
}
+ spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
if (!fmdev->resp_skb) {
+ spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
fmerr("Response SKB is missing\n");
return -EFAULT;
}
- spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
skb = fmdev->resp_skb;
fmdev->resp_skb = NULL;
spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
index 99325bfed643..9443dbb04699 100644
--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
@@ -1466,12 +1466,19 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
h_freq = (u32)bt->pixelclock / total_h_pixel;
if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_CVT)) {
+ struct v4l2_dv_timings cvt = {};
+
if (v4l2_detect_cvt(total_v_lines, h_freq, bt->vsync, bt->width,
- bt->polarities, bt->interlaced, timings))
+ bt->polarities, bt->interlaced,
+ &vivid_dv_timings_cap, &cvt) &&
+ cvt.bt.width == bt->width && cvt.bt.height == bt->height) {
+ *timings = cvt;
return true;
+ }
}
if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_GTF)) {
+ struct v4l2_dv_timings gtf = {};
struct v4l2_fract aspect_ratio;
find_aspect_ratio(bt->width, bt->height,
@@ -1479,8 +1486,12 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
&aspect_ratio.denominator);
if (v4l2_detect_gtf(total_v_lines, h_freq, bt->vsync,
bt->polarities, bt->interlaced,
- aspect_ratio, timings))
+ aspect_ratio, &vivid_dv_timings_cap,
+				    &gtf) &&
+ gtf.bt.width == bt->width && gtf.bt.height == bt->height) {
+ *timings = gtf;
return true;
+ }
}
return false;
}
diff --git a/drivers/media/usb/gspca/ov534.c b/drivers/media/usb/gspca/ov534.c
index 8b6a57f170d0..bdff64a29a33 100644
--- a/drivers/media/usb/gspca/ov534.c
+++ b/drivers/media/usb/gspca/ov534.c
@@ -847,7 +847,7 @@ static void set_frame_rate(struct gspca_dev *gspca_dev)
r = rate_1;
i = ARRAY_SIZE(rate_1);
}
- while (--i > 0) {
+ while (--i >= 0) {
if (sd->frame_rate >= r->fps)
break;
r++;
diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
index 37d75bc97fd8..1385cbf462d1 100644
--- a/drivers/media/usb/uvc/uvc_driver.c
+++ b/drivers/media/usb/uvc/uvc_driver.c
@@ -775,14 +775,27 @@ static const u8 uvc_media_transport_input_guid[16] =
UVC_GUID_UVC_MEDIA_TRANSPORT_INPUT;
static const u8 uvc_processing_guid[16] = UVC_GUID_UVC_PROCESSING;
-static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
- unsigned int num_pads, unsigned int extra_size)
+static struct uvc_entity *uvc_alloc_new_entity(struct uvc_device *dev, u16 type,
+ u16 id, unsigned int num_pads,
+ unsigned int extra_size)
{
struct uvc_entity *entity;
unsigned int num_inputs;
unsigned int size;
unsigned int i;
+ /* Per UVC 1.1+ spec 3.7.2, the ID should be non-zero. */
+ if (id == 0) {
+ dev_err(&dev->udev->dev, "Found Unit with invalid ID 0.\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ /* Per UVC 1.1+ spec 3.7.2, the ID is unique. */
+ if (uvc_entity_by_id(dev, id)) {
+ dev_err(&dev->udev->dev, "Found multiple Units with ID %u\n", id);
+ return ERR_PTR(-EINVAL);
+ }
+
extra_size = roundup(extra_size, sizeof(*entity->pads));
if (num_pads)
num_inputs = type & UVC_TERM_OUTPUT ? num_pads : num_pads - 1;
@@ -792,7 +805,7 @@ static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
+ num_inputs;
entity = kzalloc(size, GFP_KERNEL);
if (entity == NULL)
- return NULL;
+ return ERR_PTR(-ENOMEM);
entity->id = id;
entity->type = type;
@@ -904,10 +917,10 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
break;
}
- unit = uvc_alloc_entity(UVC_VC_EXTENSION_UNIT, buffer[3],
- p + 1, 2*n);
- if (unit == NULL)
- return -ENOMEM;
+ unit = uvc_alloc_new_entity(dev, UVC_VC_EXTENSION_UNIT,
+ buffer[3], p + 1, 2 * n);
+ if (IS_ERR(unit))
+ return PTR_ERR(unit);
memcpy(unit->guid, &buffer[4], 16);
unit->extension.bNumControls = buffer[20];
@@ -1016,10 +1029,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
return -EINVAL;
}
- term = uvc_alloc_entity(type | UVC_TERM_INPUT, buffer[3],
- 1, n + p);
- if (term == NULL)
- return -ENOMEM;
+ term = uvc_alloc_new_entity(dev, type | UVC_TERM_INPUT,
+ buffer[3], 1, n + p);
+ if (IS_ERR(term))
+ return PTR_ERR(term);
if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA) {
term->camera.bControlSize = n;
@@ -1075,10 +1088,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
return 0;
}
- term = uvc_alloc_entity(type | UVC_TERM_OUTPUT, buffer[3],
- 1, 0);
- if (term == NULL)
- return -ENOMEM;
+ term = uvc_alloc_new_entity(dev, type | UVC_TERM_OUTPUT,
+ buffer[3], 1, 0);
+ if (IS_ERR(term))
+ return PTR_ERR(term);
memcpy(term->baSourceID, &buffer[7], 1);
@@ -1097,9 +1110,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
return -EINVAL;
}
- unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, 0);
- if (unit == NULL)
- return -ENOMEM;
+ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
+ p + 1, 0);
+ if (IS_ERR(unit))
+ return PTR_ERR(unit);
memcpy(unit->baSourceID, &buffer[5], p);
@@ -1119,9 +1133,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
return -EINVAL;
}
- unit = uvc_alloc_entity(buffer[2], buffer[3], 2, n);
- if (unit == NULL)
- return -ENOMEM;
+ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3], 2, n);
+ if (IS_ERR(unit))
+ return PTR_ERR(unit);
memcpy(unit->baSourceID, &buffer[4], 1);
unit->processing.wMaxMultiplier =
@@ -1148,9 +1162,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
return -EINVAL;
}
- unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, n);
- if (unit == NULL)
- return -ENOMEM;
+ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
+ p + 1, n);
+ if (IS_ERR(unit))
+ return PTR_ERR(unit);
memcpy(unit->guid, &buffer[4], 16);
unit->extension.bNumControls = buffer[20];
@@ -1290,9 +1305,10 @@ static int uvc_gpio_parse(struct uvc_device *dev)
return dev_err_probe(&dev->udev->dev, irq,
"No IRQ for privacy GPIO\n");
- unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
- if (!unit)
- return -ENOMEM;
+ unit = uvc_alloc_new_entity(dev, UVC_EXT_GPIO_UNIT,
+ UVC_EXT_GPIO_UNIT_ID, 0, 1);
+ if (IS_ERR(unit))
+ return PTR_ERR(unit);
unit->gpio.gpio_privacy = gpio_privacy;
unit->gpio.irq = irq;
@@ -1919,11 +1935,41 @@ static void uvc_unregister_video(struct uvc_device *dev)
struct uvc_streaming *stream;
list_for_each_entry(stream, &dev->streams, list) {
+ /* Nothing to do here, continue. */
if (!video_is_registered(&stream->vdev))
continue;
+ /*
+ * For stream->vdev we follow the same logic as:
+ * vb2_video_unregister_device().
+ */
+
+ /* 1. Take a reference to vdev */
+ get_device(&stream->vdev.dev);
+
+ /* 2. Ensure that no new ioctls can be called. */
video_unregister_device(&stream->vdev);
- video_unregister_device(&stream->meta.vdev);
+
+ /* 3. Wait for old ioctls to finish. */
+ mutex_lock(&stream->mutex);
+
+ /* 4. Stop streaming. */
+ uvc_queue_release(&stream->queue);
+
+ mutex_unlock(&stream->mutex);
+
+ put_device(&stream->vdev.dev);
+
+ /*
+ * For stream->meta.vdev we can directly call:
+ * vb2_video_unregister_device().
+ */
+ vb2_video_unregister_device(&stream->meta.vdev);
+
+ /*
+ * Now both vdevs are not streaming and all the ioctls will
+ * return -ENODEV.
+ */
uvc_debugfs_cleanup_stream(stream);
}
diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
index 942d0005c55e..2cf5dcee0ce8 100644
--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
+++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
@@ -481,25 +481,28 @@ EXPORT_SYMBOL_GPL(v4l2_calc_timeperframe);
* @polarities - the horizontal and vertical polarities (same as struct
* v4l2_bt_timings polarities).
* @interlaced - if this flag is true, it indicates interlaced format
- * @fmt - the resulting timings.
+ * @cap - the v4l2_dv_timings_cap capabilities.
+ * @timings - the resulting timings.
*
* This function will attempt to detect if the given values correspond to a
* valid CVT format. If so, then it will return true, and fmt will be filled
* in with the found CVT timings.
*/
-bool v4l2_detect_cvt(unsigned frame_height,
- unsigned hfreq,
- unsigned vsync,
- unsigned active_width,
+bool v4l2_detect_cvt(unsigned int frame_height,
+ unsigned int hfreq,
+ unsigned int vsync,
+ unsigned int active_width,
u32 polarities,
bool interlaced,
- struct v4l2_dv_timings *fmt)
+ const struct v4l2_dv_timings_cap *cap,
+ struct v4l2_dv_timings *timings)
{
- int v_fp, v_bp, h_fp, h_bp, hsync;
- int frame_width, image_height, image_width;
+ struct v4l2_dv_timings t = {};
+ int v_fp, v_bp, h_fp, h_bp, hsync;
+ int frame_width, image_height, image_width;
bool reduced_blanking;
bool rb_v2 = false;
- unsigned pix_clk;
+ unsigned int pix_clk;
if (vsync < 4 || vsync > 8)
return false;
@@ -625,36 +628,39 @@ bool v4l2_detect_cvt(unsigned frame_height,
h_fp = h_blank - hsync - h_bp;
}
- fmt->type = V4L2_DV_BT_656_1120;
- fmt->bt.polarities = polarities;
- fmt->bt.width = image_width;
- fmt->bt.height = image_height;
- fmt->bt.hfrontporch = h_fp;
- fmt->bt.vfrontporch = v_fp;
- fmt->bt.hsync = hsync;
- fmt->bt.vsync = vsync;
- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
+ t.type = V4L2_DV_BT_656_1120;
+ t.bt.polarities = polarities;
+ t.bt.width = image_width;
+ t.bt.height = image_height;
+ t.bt.hfrontporch = h_fp;
+ t.bt.vfrontporch = v_fp;
+ t.bt.hsync = hsync;
+ t.bt.vsync = vsync;
+ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
if (!interlaced) {
- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
+ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
+ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
} else {
- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
2 * vsync) / 2;
- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
- 2 * vsync - fmt->bt.vbackporch;
- fmt->bt.il_vfrontporch = v_fp;
- fmt->bt.il_vsync = vsync;
- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
- fmt->bt.interlaced = V4L2_DV_INTERLACED;
+ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+ 2 * vsync - t.bt.vbackporch;
+ t.bt.il_vfrontporch = v_fp;
+ t.bt.il_vsync = vsync;
+ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
+ t.bt.interlaced = V4L2_DV_INTERLACED;
}
- fmt->bt.pixelclock = pix_clk;
- fmt->bt.standards = V4L2_DV_BT_STD_CVT;
+ t.bt.pixelclock = pix_clk;
+ t.bt.standards = V4L2_DV_BT_STD_CVT;
if (reduced_blanking)
- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
+ return false;
+ *timings = t;
return true;
}
EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
@@ -699,22 +705,25 @@ EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
* image height, so it has to be passed explicitly. Usually
* the native screen aspect ratio is used for this. If it
* is not filled in correctly, then 16:9 will be assumed.
- * @fmt - the resulting timings.
+ * @cap - the v4l2_dv_timings_cap capabilities.
+ * @timings - the resulting timings.
*
* This function will attempt to detect if the given values correspond to a
* valid GTF format. If so, then it will return true, and fmt will be filled
* in with the found GTF timings.
*/
-bool v4l2_detect_gtf(unsigned frame_height,
- unsigned hfreq,
- unsigned vsync,
- u32 polarities,
- bool interlaced,
- struct v4l2_fract aspect,
- struct v4l2_dv_timings *fmt)
+bool v4l2_detect_gtf(unsigned int frame_height,
+ unsigned int hfreq,
+ unsigned int vsync,
+ u32 polarities,
+ bool interlaced,
+ struct v4l2_fract aspect,
+ const struct v4l2_dv_timings_cap *cap,
+ struct v4l2_dv_timings *timings)
{
+ struct v4l2_dv_timings t = {};
int pix_clk;
- int v_fp, v_bp, h_fp, hsync;
+ int v_fp, v_bp, h_fp, hsync;
int frame_width, image_height, image_width;
bool default_gtf;
int h_blank;
@@ -783,36 +792,39 @@ bool v4l2_detect_gtf(unsigned frame_height,
h_fp = h_blank / 2 - hsync;
- fmt->type = V4L2_DV_BT_656_1120;
- fmt->bt.polarities = polarities;
- fmt->bt.width = image_width;
- fmt->bt.height = image_height;
- fmt->bt.hfrontporch = h_fp;
- fmt->bt.vfrontporch = v_fp;
- fmt->bt.hsync = hsync;
- fmt->bt.vsync = vsync;
- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
+ t.type = V4L2_DV_BT_656_1120;
+ t.bt.polarities = polarities;
+ t.bt.width = image_width;
+ t.bt.height = image_height;
+ t.bt.hfrontporch = h_fp;
+ t.bt.vfrontporch = v_fp;
+ t.bt.hsync = hsync;
+ t.bt.vsync = vsync;
+ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
if (!interlaced) {
- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
+ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
+ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
} else {
- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
2 * vsync) / 2;
- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
- 2 * vsync - fmt->bt.vbackporch;
- fmt->bt.il_vfrontporch = v_fp;
- fmt->bt.il_vsync = vsync;
- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
- fmt->bt.interlaced = V4L2_DV_INTERLACED;
+ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+ 2 * vsync - t.bt.vbackporch;
+ t.bt.il_vfrontporch = v_fp;
+ t.bt.il_vsync = vsync;
+ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
+ t.bt.interlaced = V4L2_DV_INTERLACED;
}
- fmt->bt.pixelclock = pix_clk;
- fmt->bt.standards = V4L2_DV_BT_STD_GTF;
+ t.bt.pixelclock = pix_clk;
+ t.bt.standards = V4L2_DV_BT_STD_GTF;
if (!default_gtf)
- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
+ return false;
+ *timings = t;
return true;
}
EXPORT_SYMBOL_GPL(v4l2_detect_gtf);
diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c
index 86f16f3ea478..d97057f46ca8 100644
--- a/drivers/message/fusion/mptsas.c
+++ b/drivers/message/fusion/mptsas.c
@@ -4234,10 +4234,8 @@ mptsas_find_phyinfo_by_phys_disk_num(MPT_ADAPTER *ioc, u8 phys_disk_num,
static void
mptsas_reprobe_lun(struct scsi_device *sdev, void *data)
{
- int rc;
-
sdev->no_uld_attach = data ? 1 : 0;
- rc = scsi_device_reprobe(sdev);
+ WARN_ON(scsi_device_reprobe(sdev));
}
static void
diff --git a/drivers/mfd/da9052-spi.c b/drivers/mfd/da9052-spi.c
index be5f2b34e18a..80fc5c0cac2f 100644
--- a/drivers/mfd/da9052-spi.c
+++ b/drivers/mfd/da9052-spi.c
@@ -37,7 +37,7 @@ static int da9052_spi_probe(struct spi_device *spi)
spi_set_drvdata(spi, da9052);
config = da9052_regmap_config;
- config.read_flag_mask = 1;
+ config.write_flag_mask = 1;
config.reg_bits = 7;
config.pad_bits = 1;
config.val_bits = 8;
diff --git a/drivers/mfd/intel_soc_pmic_bxtwc.c b/drivers/mfd/intel_soc_pmic_bxtwc.c
index 8dac0d41f64f..3aa7857271da 100644
--- a/drivers/mfd/intel_soc_pmic_bxtwc.c
+++ b/drivers/mfd/intel_soc_pmic_bxtwc.c
@@ -231,44 +231,55 @@ static const struct resource tmu_resources[] = {
};
static struct mfd_cell bxt_wc_dev[] = {
- {
- .name = "bxt_wcove_gpadc",
- .num_resources = ARRAY_SIZE(adc_resources),
- .resources = adc_resources,
- },
{
.name = "bxt_wcove_thermal",
.num_resources = ARRAY_SIZE(thermal_resources),
.resources = thermal_resources,
},
{
- .name = "bxt_wcove_usbc",
- .num_resources = ARRAY_SIZE(usbc_resources),
- .resources = usbc_resources,
+ .name = "bxt_wcove_gpio",
+ .num_resources = ARRAY_SIZE(gpio_resources),
+ .resources = gpio_resources,
},
{
- .name = "bxt_wcove_ext_charger",
- .num_resources = ARRAY_SIZE(charger_resources),
- .resources = charger_resources,
+ .name = "bxt_wcove_region",
+ },
+};
+
+static const struct mfd_cell bxt_wc_tmu_dev[] = {
+ {
+ .name = "bxt_wcove_tmu",
+ .num_resources = ARRAY_SIZE(tmu_resources),
+ .resources = tmu_resources,
},
+};
+
+static const struct mfd_cell bxt_wc_bcu_dev[] = {
{
.name = "bxt_wcove_bcu",
.num_resources = ARRAY_SIZE(bcu_resources),
.resources = bcu_resources,
},
+};
+
+static const struct mfd_cell bxt_wc_adc_dev[] = {
{
- .name = "bxt_wcove_tmu",
- .num_resources = ARRAY_SIZE(tmu_resources),
- .resources = tmu_resources,
+ .name = "bxt_wcove_gpadc",
+ .num_resources = ARRAY_SIZE(adc_resources),
+ .resources = adc_resources,
},
+};
+static struct mfd_cell bxt_wc_chgr_dev[] = {
{
- .name = "bxt_wcove_gpio",
- .num_resources = ARRAY_SIZE(gpio_resources),
- .resources = gpio_resources,
+ .name = "bxt_wcove_usbc",
+ .num_resources = ARRAY_SIZE(usbc_resources),
+ .resources = usbc_resources,
},
{
- .name = "bxt_wcove_region",
+ .name = "bxt_wcove_ext_charger",
+ .num_resources = ARRAY_SIZE(charger_resources),
+ .resources = charger_resources,
},
};
@@ -426,6 +437,26 @@ static int bxtwc_add_chained_irq_chip(struct intel_soc_pmic *pmic,
0, chip, data);
}
+static int bxtwc_add_chained_devices(struct intel_soc_pmic *pmic,
+ const struct mfd_cell *cells, int n_devs,
+ struct regmap_irq_chip_data *pdata,
+ int pirq, int irq_flags,
+ const struct regmap_irq_chip *chip,
+ struct regmap_irq_chip_data **data)
+{
+ struct device *dev = pmic->dev;
+ struct irq_domain *domain;
+ int ret;
+
+ ret = bxtwc_add_chained_irq_chip(pmic, pdata, pirq, irq_flags, chip, data);
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to add %s IRQ chip\n", chip->name);
+
+ domain = regmap_irq_get_domain(*data);
+
+ return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, cells, n_devs, NULL, 0, domain);
+}
+
static int bxtwc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -467,6 +498,15 @@ static int bxtwc_probe(struct platform_device *pdev)
if (ret)
return dev_err_probe(dev, ret, "Failed to add IRQ chip\n");
+ ret = bxtwc_add_chained_devices(pmic, bxt_wc_tmu_dev, ARRAY_SIZE(bxt_wc_tmu_dev),
+ pmic->irq_chip_data,
+ BXTWC_TMU_LVL1_IRQ,
+ IRQF_ONESHOT,
+ &bxtwc_regmap_irq_chip_tmu,
+ &pmic->irq_chip_data_tmu);
+ if (ret)
+ return ret;
+
ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
BXTWC_PWRBTN_LVL1_IRQ,
IRQF_ONESHOT,
@@ -475,40 +515,32 @@ static int bxtwc_probe(struct platform_device *pdev)
if (ret)
return dev_err_probe(dev, ret, "Failed to add PWRBTN IRQ chip\n");
- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
- BXTWC_TMU_LVL1_IRQ,
- IRQF_ONESHOT,
- &bxtwc_regmap_irq_chip_tmu,
- &pmic->irq_chip_data_tmu);
- if (ret)
- return dev_err_probe(dev, ret, "Failed to add TMU IRQ chip\n");
-
- /* Add chained IRQ handler for BCU IRQs */
- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
- BXTWC_BCU_LVL1_IRQ,
- IRQF_ONESHOT,
- &bxtwc_regmap_irq_chip_bcu,
- &pmic->irq_chip_data_bcu);
+ ret = bxtwc_add_chained_devices(pmic, bxt_wc_bcu_dev, ARRAY_SIZE(bxt_wc_bcu_dev),
+ pmic->irq_chip_data,
+ BXTWC_BCU_LVL1_IRQ,
+ IRQF_ONESHOT,
+ &bxtwc_regmap_irq_chip_bcu,
+ &pmic->irq_chip_data_bcu);
if (ret)
- return dev_err_probe(dev, ret, "Failed to add BUC IRQ chip\n");
+ return ret;
- /* Add chained IRQ handler for ADC IRQs */
- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
- BXTWC_ADC_LVL1_IRQ,
- IRQF_ONESHOT,
- &bxtwc_regmap_irq_chip_adc,
- &pmic->irq_chip_data_adc);
+ ret = bxtwc_add_chained_devices(pmic, bxt_wc_adc_dev, ARRAY_SIZE(bxt_wc_adc_dev),
+ pmic->irq_chip_data,
+ BXTWC_ADC_LVL1_IRQ,
+ IRQF_ONESHOT,
+ &bxtwc_regmap_irq_chip_adc,
+ &pmic->irq_chip_data_adc);
if (ret)
- return dev_err_probe(dev, ret, "Failed to add ADC IRQ chip\n");
+ return ret;
- /* Add chained IRQ handler for CHGR IRQs */
- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
- BXTWC_CHGR_LVL1_IRQ,
- IRQF_ONESHOT,
- &bxtwc_regmap_irq_chip_chgr,
- &pmic->irq_chip_data_chgr);
+ ret = bxtwc_add_chained_devices(pmic, bxt_wc_chgr_dev, ARRAY_SIZE(bxt_wc_chgr_dev),
+ pmic->irq_chip_data,
+ BXTWC_CHGR_LVL1_IRQ,
+ IRQF_ONESHOT,
+ &bxtwc_regmap_irq_chip_chgr,
+ &pmic->irq_chip_data_chgr);
if (ret)
- return dev_err_probe(dev, ret, "Failed to add CHGR IRQ chip\n");
+ return ret;
/* Add chained IRQ handler for CRIT IRQs */
ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
index 7e23ab3d5842..84ebc96f58e4 100644
--- a/drivers/mfd/rt5033.c
+++ b/drivers/mfd/rt5033.c
@@ -81,8 +81,8 @@ static int rt5033_i2c_probe(struct i2c_client *i2c)
chip_rev = dev_id & RT5033_CHIP_REV_MASK;
dev_info(&i2c->dev, "Device found (rev. %d)\n", chip_rev);
- ret = regmap_add_irq_chip(rt5033->regmap, rt5033->irq,
- IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ ret = devm_regmap_add_irq_chip(rt5033->dev, rt5033->regmap,
+ rt5033->irq, IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
0, &rt5033_irq_chip, &rt5033->irq_data);
if (ret) {
dev_err(&i2c->dev, "Failed to request IRQ %d: %d\n",
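The rt5033 hunk above switches to the device-managed regmap IRQ helper, so the IRQ chip is torn down automatically on probe failure or driver unbind instead of needing a matching regmap_del_irq_chip(). A minimal sketch of the pattern, assuming a hypothetical driver that already has a regmap and a static struct regmap_irq_chip description:

#include <linux/interrupt.h>
#include <linux/regmap.h>

/* Hypothetical probe fragment: dev, map, irq and chip are assumed to be
 * set up elsewhere in the driver.
 */
static int example_add_irqs(struct device *dev, struct regmap *map, int irq,
			    const struct regmap_irq_chip *chip,
			    struct regmap_irq_chip_data **data)
{
	/* Managed variant: no explicit removal needed in the error path
	 * or in .remove, the regmap core cleans up on unbind.
	 */
	return devm_regmap_add_irq_chip(dev, map, irq,
					IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
					0, chip, data);
}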
diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c
index 2b9105295f30..710364435b6b 100644
--- a/drivers/mfd/tps65010.c
+++ b/drivers/mfd/tps65010.c
@@ -544,17 +544,13 @@ static int tps65010_probe(struct i2c_client *client)
*/
if (client->irq > 0) {
status = request_irq(client->irq, tps65010_irq,
- IRQF_TRIGGER_FALLING, DRIVER_NAME, tps);
+ IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
+ DRIVER_NAME, tps);
if (status < 0) {
dev_dbg(&client->dev, "can't get IRQ %d, err %d\n",
client->irq, status);
return status;
}
- /* annoying race here, ideally we'd have an option
- * to claim the irq now and enable it later.
- * FIXME genirq IRQF_NOAUTOEN now solves that ...
- */
- disable_irq(client->irq);
set_bit(FLAG_IRQ_ENABLE, &tps->flags);
} else
dev_warn(&client->dev, "IRQ not configured!\n");
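The tps65010 hunk leans on IRQF_NO_AUTOEN: the interrupt is requested but left masked, which removes the old window between request_irq() and the explicit disable_irq() that used to follow it. A rough sketch of the idiom, with hypothetical handler and names:

#include <linux/interrupt.h>

static irqreturn_t example_irq(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_request(int irq, void *data)
{
	int ret;

	/* IRQ stays masked until the driver is ready for it */
	ret = request_irq(irq, example_irq,
			  IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
			  "example", data);
	if (ret)
		return ret;

	/* ... finish setup ... */

	enable_irq(irq);	/* unmask once events can be handled safely */
	return 0;
}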
diff --git a/drivers/misc/apds990x.c b/drivers/misc/apds990x.c
index 92b92be91d60..095344a312d2 100644
--- a/drivers/misc/apds990x.c
+++ b/drivers/misc/apds990x.c
@@ -1147,7 +1147,7 @@ static int apds990x_probe(struct i2c_client *client)
err = chip->pdata->setup_resources();
if (err) {
err = -EINVAL;
- goto fail3;
+ goto fail4;
}
}
@@ -1155,7 +1155,7 @@ static int apds990x_probe(struct i2c_client *client)
apds990x_attribute_group);
if (err < 0) {
dev_err(&chip->client->dev, "Sysfs registration failed\n");
- goto fail4;
+ goto fail5;
}
err = request_threaded_irq(client->irq, NULL,
@@ -1166,15 +1166,17 @@ static int apds990x_probe(struct i2c_client *client)
if (err) {
dev_err(&client->dev, "could not get IRQ %d\n",
client->irq);
- goto fail5;
+ goto fail6;
}
return err;
-fail5:
+fail6:
sysfs_remove_group(&chip->client->dev.kobj,
&apds990x_attribute_group[0]);
-fail4:
+fail5:
if (chip->pdata && chip->pdata->release_resources)
chip->pdata->release_resources();
+fail4:
+ pm_runtime_disable(&client->dev);
fail3:
regulator_bulk_disable(ARRAY_SIZE(chip->regs), chip->regs);
fail2:
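The apds990x relabelling follows the usual rule for goto-based unwinding: cleanup labels run in reverse order of the setup steps, so inserting a pm_runtime_disable() step means renumbering every label after it. Schematically, with hypothetical setup/teardown helpers:

/* Hypothetical setup/teardown steps, used only to show label ordering. */
static int step_a(void) { return 0; }
static int step_b(void) { return 0; }
static int step_c(void) { return 0; }
static void undo_step_a(void) { }
static void undo_step_b(void) { }

static int example_probe(void)
{
	int err;

	err = step_a();		/* e.g. regulators */
	if (err)
		return err;

	err = step_b();		/* e.g. runtime PM setup */
	if (err)
		goto undo_a;

	err = step_c();		/* e.g. sysfs group */
	if (err)
		goto undo_b;

	return 0;

undo_b:
	undo_step_b();		/* newest successful step is undone first */
undo_a:
	undo_step_a();
	return err;
}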
diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
index c66cc05a68c4..473ec58f87a2 100644
--- a/drivers/misc/lkdtm/bugs.c
+++ b/drivers/misc/lkdtm/bugs.c
@@ -388,8 +388,8 @@ static void lkdtm_FAM_BOUNDS(void)
pr_err("FAIL: survived access of invalid flexible array member index!\n");
- if (!__has_attribute(__counted_by__))
- pr_warn("This is expected since this %s was built a compiler supporting __counted_by\n",
+ if (!IS_ENABLED(CONFIG_CC_HAS_COUNTED_BY))
+ pr_warn("This is expected since this %s was built with a compiler that does not support __counted_by\n",
lkdtm_kernel_info);
else if (IS_ENABLED(CONFIG_UBSAN_BOUNDS))
pr_expected_config(CONFIG_UBSAN_TRAP);
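The lkdtm fix keys the warning off CONFIG_CC_HAS_COUNTED_BY, i.e. whether the compiler understands the __counted_by() annotation that ties a flexible array member to its element-count field so bounds checks can use it. A small illustrative sketch with a hypothetical structure:

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_buf {
	size_t len;
	u8 data[] __counted_by(len);	/* array bound is taken from ->len */
};

static struct example_buf *example_alloc(size_t n)
{
	struct example_buf *b;

	b = kzalloc(struct_size(b, data, n), GFP_KERNEL);
	if (!b)
		return NULL;

	b->len = n;	/* set the counter before touching ->data */
	return b;
}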
diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
index 2a99ffb61f8c..30b93dc938f1 100644
--- a/drivers/mmc/host/mmc_spi.c
+++ b/drivers/mmc/host/mmc_spi.c
@@ -223,10 +223,6 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
u8 leftover = 0;
unsigned short rotator;
int i;
- char tag[32];
-
- snprintf(tag, sizeof(tag), " ... CMD%d response SPI_%s",
- cmd->opcode, maptype(cmd));
/* Except for data block reads, the whole response will already
* be stored in the scratch buffer. It's somewhere after the
@@ -379,8 +375,9 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
}
if (value < 0)
- dev_dbg(&host->spi->dev, "%s: resp %04x %08x\n",
- tag, cmd->resp[0], cmd->resp[1]);
+ dev_dbg(&host->spi->dev,
+ " ... CMD%d response SPI_%s: resp %04x %08x\n",
+ cmd->opcode, maptype(cmd), cmd->resp[0], cmd->resp[1]);
/* disable chipselect on errors and some success cases */
if (value >= 0 && cs_on)
diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
index ef32fca5f785..e7a28f3316c3 100644
--- a/drivers/mtd/hyperbus/rpc-if.c
+++ b/drivers/mtd/hyperbus/rpc-if.c
@@ -154,20 +154,25 @@ static int rpcif_hb_probe(struct platform_device *pdev)
return error;
}
-static int rpcif_hb_remove(struct platform_device *pdev)
+static void rpcif_hb_remove(struct platform_device *pdev)
{
struct rpcif_hyperbus *hyperbus = platform_get_drvdata(pdev);
hyperbus_unregister_device(&hyperbus->hbdev);
pm_runtime_disable(hyperbus->rpc.dev);
-
- return 0;
}
+static const struct platform_device_id rpc_if_hyperflash_id_table[] = {
+ { .name = "rpc-if-hyperflash" },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(platform, rpc_if_hyperflash_id_table);
+
static struct platform_driver rpcif_platform_driver = {
.probe = rpcif_hb_probe,
- .remove = rpcif_hb_remove,
+ .remove_new = rpcif_hb_remove,
+ .id_table = rpc_if_hyperflash_id_table,
.driver = {
.name = "rpc-if-hyperflash",
},
diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
index 4d7dc8a9c373..a22aab4ed4e8 100644
--- a/drivers/mtd/nand/raw/atmel/pmecc.c
+++ b/drivers/mtd/nand/raw/atmel/pmecc.c
@@ -362,7 +362,7 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
size = ALIGN(size, sizeof(s32));
size += (req->ecc.strength + 1) * sizeof(s32) * 3;
- user = kzalloc(size, GFP_KERNEL);
+ user = devm_kzalloc(pmecc->dev, size, GFP_KERNEL);
if (!user)
return ERR_PTR(-ENOMEM);
@@ -408,12 +408,6 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
}
EXPORT_SYMBOL_GPL(atmel_pmecc_create_user);
-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user)
-{
- kfree(user);
-}
-EXPORT_SYMBOL_GPL(atmel_pmecc_destroy_user);
-
static int get_strength(struct atmel_pmecc_user *user)
{
const int *strengths = user->pmecc->caps->strengths;
diff --git a/drivers/mtd/nand/raw/atmel/pmecc.h b/drivers/mtd/nand/raw/atmel/pmecc.h
index 7851c05126cf..cc0c5af1f4f1 100644
--- a/drivers/mtd/nand/raw/atmel/pmecc.h
+++ b/drivers/mtd/nand/raw/atmel/pmecc.h
@@ -55,8 +55,6 @@ struct atmel_pmecc *devm_atmel_pmecc_get(struct device *dev);
struct atmel_pmecc_user *
atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
struct atmel_pmecc_user_req *req);
-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user);
-
void atmel_pmecc_reset(struct atmel_pmecc *pmecc);
int atmel_pmecc_enable(struct atmel_pmecc_user *user, int op);
void atmel_pmecc_disable(struct atmel_pmecc_user *user);
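The pmecc change moves the per-user allocation to devm_kzalloc(), tying its lifetime to the PMECC device and making the explicit atmel_pmecc_destroy_user()/kfree() pair unnecessary; the trade-off is that the memory now lives until that device goes away. In general terms, with a hypothetical structure:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/slab.h>

struct example_user {
	int strength;
};

static struct example_user *example_create_user(struct device *dev, int strength)
{
	struct example_user *user;

	/* freed automatically when 'dev' is unbound; no kfree() needed */
	user = devm_kzalloc(dev, sizeof(*user), GFP_KERNEL);
	if (!user)
		return ERR_PTR(-ENOMEM);

	user->strength = strength;
	return user;
}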
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
index 1b0c6770c14e..8d75a66775cb 100644
--- a/drivers/mtd/spi-nor/core.c
+++ b/drivers/mtd/spi-nor/core.c
@@ -89,7 +89,7 @@ void spi_nor_spimem_setup_op(const struct spi_nor *nor,
op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
if (op->dummy.nbytes)
- op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+ op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
if (op->data.nbytes)
op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
index 709822fced86..828b442735ee 100644
--- a/drivers/mtd/spi-nor/spansion.c
+++ b/drivers/mtd/spi-nor/spansion.c
@@ -105,6 +105,7 @@ static int cypress_nor_sr_ready_and_clear_reg(struct spi_nor *nor, u64 addr)
int ret;
if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) {
+ op.addr.nbytes = nor->addr_nbytes;
op.dummy.nbytes = params->rdsr_dummy;
op.data.nbytes = 2;
}
diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
index ae5abe492b52..adc47b87b38a 100644
--- a/drivers/mtd/ubi/attach.c
+++ b/drivers/mtd/ubi/attach.c
@@ -1447,7 +1447,7 @@ static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai,
return err;
}
-static struct ubi_attach_info *alloc_ai(void)
+static struct ubi_attach_info *alloc_ai(const char *slab_name)
{
struct ubi_attach_info *ai;
@@ -1461,7 +1461,7 @@ static struct ubi_attach_info *alloc_ai(void)
INIT_LIST_HEAD(&ai->alien);
INIT_LIST_HEAD(&ai->fastmap);
ai->volumes = RB_ROOT;
- ai->aeb_slab_cache = kmem_cache_create("ubi_aeb_slab_cache",
+ ai->aeb_slab_cache = kmem_cache_create(slab_name,
sizeof(struct ubi_ainf_peb),
0, 0, NULL);
if (!ai->aeb_slab_cache) {
@@ -1491,7 +1491,7 @@ static int scan_fast(struct ubi_device *ubi, struct ubi_attach_info **ai)
err = -ENOMEM;
- scan_ai = alloc_ai();
+ scan_ai = alloc_ai("ubi_aeb_slab_cache_fastmap");
if (!scan_ai)
goto out;
@@ -1557,7 +1557,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
int err;
struct ubi_attach_info *ai;
- ai = alloc_ai();
+ ai = alloc_ai("ubi_aeb_slab_cache");
if (!ai)
return -ENOMEM;
@@ -1575,7 +1575,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
if (err > 0 || mtd_is_eccerr(err)) {
if (err != UBI_NO_FASTMAP) {
destroy_ai(ai);
- ai = alloc_ai();
+ ai = alloc_ai("ubi_aeb_slab_cache");
if (!ai)
return -ENOMEM;
@@ -1614,7 +1614,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
if (ubi->fm && ubi_dbg_chk_fastmap(ubi)) {
struct ubi_attach_info *scan_ai;
- scan_ai = alloc_ai();
+ scan_ai = alloc_ai("ubi_aeb_slab_cache_dbg_chk_fastmap");
if (!scan_ai) {
err = -ENOMEM;
goto out_wl;
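alloc_ai() now takes the cache name as a parameter because the attach, fastmap-scan and debug-check paths can each hold their own ubi_attach_info at the same time, and duplicate kmem_cache names tend to trigger warnings when the caches coexist. The idea in miniature, with a hypothetical element type:

#include <linux/slab.h>

struct example_elem {
	int val;
};

static struct kmem_cache *example_make_cache(const char *name)
{
	/* each caller passes a distinct name, e.g. "cache_scan" vs "cache_dbg" */
	return kmem_cache_create(name, sizeof(struct example_elem), 0, 0, NULL);
}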
diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
index 863f571f1adb..79733163ab7d 100644
--- a/drivers/mtd/ubi/fastmap-wl.c
+++ b/drivers/mtd/ubi/fastmap-wl.c
@@ -282,14 +282,27 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
* WL sub-system.
*
* @ubi: UBI device description object
+ * @need_fill: whether to fill wear-leveling pool when no PEBs are found
*/
-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi)
+static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
+ bool need_fill)
{
struct ubi_fm_pool *pool = &ubi->fm_wl_pool;
int pnum;
- if (pool->used == pool->size)
+ if (pool->used == pool->size) {
+ if (need_fill && !ubi->fm_work_scheduled) {
+ /*
+ * We cannot update the fastmap here because this
+ * function is called in atomic context.
+ * Let's fail here and refill/update it as soon as
+ * possible.
+ */
+ ubi->fm_work_scheduled = 1;
+ schedule_work(&ubi->fm_work);
+ }
return NULL;
+ }
pnum = pool->pebs[pool->used];
return ubi->lookuptbl[pnum];
@@ -311,7 +324,7 @@ static bool need_wear_leveling(struct ubi_device *ubi)
if (!ubi->used.rb_node)
return false;
- e = next_peb_for_wl(ubi);
+ e = next_peb_for_wl(ubi, false);
if (!e) {
if (!ubi->free.rb_node)
return false;
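The need_fill flag added to next_peb_for_wl() exists because the function can be called in atomic context, where the fastmap cannot be rewritten directly; the code therefore only schedules a work item and lets the update happen later in process context. The underlying idiom, sketched with hypothetical names:

#include <linux/workqueue.h>

static void example_update_fn(struct work_struct *work)
{
	/* runs later in process context; may sleep, take mutexes, do I/O */
}

static DECLARE_WORK(example_update_work, example_update_fn);

static void example_pool_empty(bool *work_scheduled)
{
	/* safe from atomic context: this only queues the work, never sleeps */
	if (!*work_scheduled) {
		*work_scheduled = true;
		schedule_work(&example_update_work);
	}
}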
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 26a214f016c1..886d44019401 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -671,7 +671,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
ubi_assert(!ubi->move_to_put);
#ifdef CONFIG_MTD_UBI_FASTMAP
- if (!next_peb_for_wl(ubi) ||
+ if (!next_peb_for_wl(ubi, true) ||
#else
if (!ubi->free.rb_node ||
#endif
@@ -834,7 +834,14 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
goto out_not_moved;
}
if (err == MOVE_RETRY) {
- scrubbing = 1;
+ /*
+ * For source PEB:
+ * 1. The scrubbing is set for scrub type PEB, it will
+ * be put back into ubi->scrub list.
+ * 2. Non-scrub type PEB will be put back into ubi->used
+ * list.
+ */
+ keep = 1;
dst_leb_clean = 1;
goto out_not_moved;
}
diff --git a/drivers/mtd/ubi/wl.h b/drivers/mtd/ubi/wl.h
index 5ebe374a08ae..1d83e552533a 100644
--- a/drivers/mtd/ubi/wl.h
+++ b/drivers/mtd/ubi/wl.h
@@ -5,7 +5,8 @@
static void update_fastmap_work_fn(struct work_struct *wrk);
static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root);
static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi);
-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi);
+static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
+ bool need_fill);
static bool need_wear_leveling(struct ubi_device *ubi);
static void ubi_fastmap_close(struct ubi_device *ubi);
static inline void ubi_fastmap_init(struct ubi_device *ubi, int *count)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 58a7bb75506a..c440f4d8d43a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7597,7 +7597,6 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
struct hwrm_port_mac_ptp_qcfg_output *resp;
struct hwrm_port_mac_ptp_qcfg_input *req;
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
- bool phc_cfg;
u8 flags;
int rc;
@@ -7640,8 +7639,9 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
rc = -ENODEV;
goto exit;
}
- phc_cfg = (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
- rc = bnxt_ptp_init(bp, phc_cfg);
+ ptp->rtc_configured =
+ (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
+ rc = bnxt_ptp_init(bp);
if (rc)
netdev_warn(bp->dev, "PTP initialization failed.\n");
exit:
@@ -13857,6 +13857,7 @@ static void bnxt_shutdown(struct pci_dev *pdev)
if (netif_running(dev))
dev_close(dev);
+ bnxt_ptp_clear(bp);
bnxt_clear_int_mode(bp);
pci_disable_device(pdev);
@@ -13883,6 +13884,7 @@ static int bnxt_suspend(struct device *device)
rc = bnxt_close(dev);
}
bnxt_hwrm_func_drv_unrgtr(bp);
+ bnxt_ptp_clear(bp);
pci_disable_device(bp->pdev);
bnxt_free_ctx_mem(bp);
kfree(bp->ctx);
@@ -13926,6 +13928,10 @@ static int bnxt_resume(struct device *device)
goto resume_exit;
}
+ if (bnxt_ptp_init(bp)) {
+ kfree(bp->ptp_cfg);
+ bp->ptp_cfg = NULL;
+ }
bnxt_get_wol_settings(bp);
if (netif_running(dev)) {
rc = bnxt_open(dev);
@@ -14102,8 +14108,12 @@ static void bnxt_io_resume(struct pci_dev *pdev)
rtnl_lock();
err = bnxt_hwrm_func_qcaps(bp);
- if (!err && netif_running(netdev))
- err = bnxt_open(netdev);
+ if (!err) {
+ if (netif_running(netdev))
+ err = bnxt_open(netdev);
+ else
+ err = bnxt_reserve_rings(bp, true);
+ }
bnxt_ulp_start(bp, err);
if (!err) {
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
index 6e3da3362bd6..bbe8657f6545 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
@@ -922,7 +922,7 @@ static void bnxt_ptp_free(struct bnxt *bp)
}
}
-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
+int bnxt_ptp_init(struct bnxt *bp)
{
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
int rc;
@@ -944,7 +944,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
if (BNXT_PTP_USE_RTC(bp)) {
bnxt_ptp_timecounter_init(bp, false);
- rc = bnxt_ptp_init_rtc(bp, phc_cfg);
+ rc = bnxt_ptp_init_rtc(bp, ptp->rtc_configured);
if (rc)
goto out;
} else {
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
index 34162e07a119..7d6a215b10b1 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
@@ -115,6 +115,7 @@ struct bnxt_ptp_cfg {
BNXT_PTP_MSG_PDELAY_REQ | \
BNXT_PTP_MSG_PDELAY_RESP)
u8 tx_tstamp_en:1;
+ u8 rtc_configured:1;
int rx_filter;
u32 tstamp_filters;
@@ -145,6 +146,6 @@ int bnxt_get_tx_ts_p5(struct bnxt *bp, struct sk_buff *skb);
int bnxt_get_rx_ts_p5(struct bnxt *bp, u64 *ts, u32 pkt_ts);
void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns);
int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg);
-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg);
+int bnxt_ptp_init(struct bnxt *bp);
void bnxt_ptp_clear(struct bnxt *bp);
#endif
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index f1c8ff5b63ac..7f74e5e106d9 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -17731,6 +17731,9 @@ static int tg3_init_one(struct pci_dev *pdev,
} else
persist_dma_mask = dma_mask = DMA_BIT_MASK(64);
+ if (tg3_asic_rev(tp) == ASIC_REV_57766)
+ persist_dma_mask = DMA_BIT_MASK(31);
+
/* Configure DMA attributes. */
if (dma_mask > DMA_BIT_MASK(32)) {
err = dma_set_mask(&pdev->dev, dma_mask);
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index 6c6f267dcccc..9f7268bb2ee3 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -479,6 +479,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)
vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
+ if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_CRC)
+ vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_CRC;
+
if (vf->driver_caps & VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
vfres->vf_cap_flags |= VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
@@ -1642,8 +1645,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
/* copy Tx queue info from VF into VSI */
if (qpi->txq.ring_len > 0) {
- vsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr;
- vsi->tx_rings[i]->count = qpi->txq.ring_len;
+ vsi->tx_rings[q_idx]->dma = qpi->txq.dma_ring_addr;
+ vsi->tx_rings[q_idx]->count = qpi->txq.ring_len;
/* Disable any existing queue first */
if (ice_vf_vsi_dis_single_txq(vf, vsi, q_idx))
@@ -1652,7 +1655,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
/* Configure a queue with the requested settings */
if (ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx)) {
dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure TX queue %d\n",
- vf->vf_id, i);
+ vf->vf_id, q_idx);
goto error_param;
}
}
@@ -1660,17 +1663,28 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
/* copy Rx queue info from VF into VSI */
if (qpi->rxq.ring_len > 0) {
u16 max_frame_size = ice_vc_get_max_frame_size(vf);
+ struct ice_rx_ring *ring = vsi->rx_rings[q_idx];
u32 rxdid;
- vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;
- vsi->rx_rings[i]->count = qpi->rxq.ring_len;
+ ring->dma = qpi->rxq.dma_ring_addr;
+ ring->count = qpi->rxq.ring_len;
+
+ if (qpi->rxq.crc_disable &&
+ !(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_CRC)) {
+ goto error_param;
+ }
+
+ if (qpi->rxq.crc_disable)
+ ring->flags |= ICE_RX_FLAGS_CRC_STRIP_DIS;
+ else
+ ring->flags &= ~ICE_RX_FLAGS_CRC_STRIP_DIS;
if (qpi->rxq.databuffer_size != 0 &&
(qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||
qpi->rxq.databuffer_size < 1024))
goto error_param;
vsi->rx_buf_len = qpi->rxq.databuffer_size;
- vsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len;
+ ring->rx_buf_len = vsi->rx_buf_len;
if (qpi->rxq.max_pkt_size > max_frame_size ||
qpi->rxq.max_pkt_size < 64)
goto error_param;
@@ -1685,7 +1699,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
if (ice_vsi_cfg_single_rxq(vsi, q_idx)) {
dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure RX queue %d\n",
- vf->vf_id, i);
+ vf->vf_id, q_idx);
goto error_param;
}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
index 2539c985f695..52792546fe00 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
@@ -24,6 +24,8 @@
#define DRV_NAME "Marvell-CGX/RPM"
#define DRV_STRING "Marvell CGX/RPM Driver"
+#define CGX_RX_STAT_GLOBAL_INDEX 9
+
static LIST_HEAD(cgx_list);
/* Convert firmware speed encoding to user format(Mbps) */
@@ -110,6 +112,11 @@ struct mac_ops *get_mac_ops(void *cgxd)
return ((struct cgx *)cgxd)->mac_ops;
}
+u32 cgx_get_fifo_len(void *cgxd)
+{
+ return ((struct cgx *)cgxd)->fifo_len;
+}
+
void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
{
writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) +
@@ -207,6 +214,24 @@ u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id)
return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT;
}
+static u8 cgx_get_nix_resetbit(struct cgx *cgx)
+{
+ int first_lmac;
+ u8 p2x;
+
+ /* non 98XX silicons supports only NIX0 block */
+ if (cgx->pdev->subsystem_device != PCI_SUBSYS_DEVID_98XX)
+ return CGX_NIX0_RESET;
+
+ first_lmac = find_first_bit(&cgx->lmac_bmap, cgx->max_lmac_per_mac);
+ p2x = cgx_lmac_get_p2x(cgx->cgx_id, first_lmac);
+
+ if (p2x == CMR_P2X_SEL_NIX1)
+ return CGX_NIX1_RESET;
+ else
+ return CGX_NIX0_RESET;
+}
+
/* Ensure the required lock for event queue(where asynchronous events are
* posted) is acquired before calling this API. Else an asynchronous event(with
* latest link status) can reach the destination before this function returns
@@ -499,7 +524,7 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id)
u8 num_lmacs;
u32 fifo_len;
- fifo_len = cgx->mac_ops->fifo_len;
+ fifo_len = cgx->fifo_len;
num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx);
switch (num_lmacs) {
@@ -701,6 +726,30 @@ u64 cgx_features_get(void *cgxd)
return ((struct cgx *)cgxd)->hw_features;
}
+int cgx_stats_reset(void *cgxd, int lmac_id)
+{
+ struct cgx *cgx = cgxd;
+ int stat_id;
+
+ if (!is_lmac_valid(cgx, lmac_id))
+ return -ENODEV;
+
+ for (stat_id = 0 ; stat_id < CGX_RX_STATS_COUNT; stat_id++) {
+ if (stat_id >= CGX_RX_STAT_GLOBAL_INDEX)
+ /* pass lmac as 0 for CGX_CMR_RX_STAT9-12 */
+ cgx_write(cgx, 0,
+ (CGXX_CMRX_RX_STAT0 + (stat_id * 8)), 0);
+ else
+ cgx_write(cgx, lmac_id,
+ (CGXX_CMRX_RX_STAT0 + (stat_id * 8)), 0);
+ }
+
+ for (stat_id = 0 ; stat_id < CGX_TX_STATS_COUNT; stat_id++)
+ cgx_write(cgx, lmac_id, CGXX_CMRX_TX_STAT0 + (stat_id * 8), 0);
+
+ return 0;
+}
+
static int cgx_set_fec_stats_count(struct cgx_link_user_info *linfo)
{
if (!linfo->fec)
@@ -1695,6 +1744,8 @@ static int cgx_lmac_init(struct cgx *cgx)
lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
}
+ /* Start X2P reset on given MAC block */
+ cgx->mac_ops->mac_x2p_reset(cgx, true);
return cgx_lmac_verify_fwi_version(cgx);
err_bitmap_free:
@@ -1740,7 +1791,7 @@ static void cgx_populate_features(struct cgx *cgx)
u64 cfg;
cfg = cgx_read(cgx, 0, CGX_CONST);
- cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
+ cgx->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg);
if (is_dev_rpm(cgx))
@@ -1760,6 +1811,45 @@ static u8 cgx_get_rxid_mapoffset(struct cgx *cgx)
return 0x60;
}
+static void cgx_x2p_reset(void *cgxd, bool enable)
+{
+ struct cgx *cgx = cgxd;
+ int lmac_id;
+ u64 cfg;
+
+ if (enable) {
+ for_each_set_bit(lmac_id, &cgx->lmac_bmap, cgx->max_lmac_per_mac)
+ cgx->mac_ops->mac_enadis_rx(cgx, lmac_id, false);
+
+ usleep_range(1000, 2000);
+
+ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
+ cfg |= cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP;
+ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
+ } else {
+ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
+ cfg &= ~(cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP);
+ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
+ }
+}
+
+static int cgx_enadis_rx(void *cgxd, int lmac_id, bool enable)
+{
+ struct cgx *cgx = cgxd;
+ u64 cfg;
+
+ if (!is_lmac_valid(cgx, lmac_id))
+ return -ENODEV;
+
+ cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG);
+ if (enable)
+ cfg |= DATA_PKT_RX_EN;
+ else
+ cfg &= ~DATA_PKT_RX_EN;
+ cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg);
+ return 0;
+}
+
static struct mac_ops cgx_mac_ops = {
.name = "cgx",
.csr_offset = 0,
@@ -1790,6 +1880,9 @@ static struct mac_ops cgx_mac_ops = {
.pfc_config = cgx_lmac_pfc_config,
.mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg,
.mac_reset = cgx_lmac_reset,
+ .mac_stats_reset = cgx_stats_reset,
+ .mac_x2p_reset = cgx_x2p_reset,
+ .mac_enadis_rx = cgx_enadis_rx,
};
static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
index 6f7d1dee5830..1cf12e5c7da8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
@@ -32,6 +32,10 @@
#define CGX_LMAC_TYPE_MASK 0xF
#define CGXX_CMRX_INT 0x040
#define FW_CGX_INT BIT_ULL(1)
+#define CGXX_CMR_GLOBAL_CONFIG 0x08
+#define CGX_NIX0_RESET BIT_ULL(2)
+#define CGX_NIX1_RESET BIT_ULL(3)
+#define CGX_NSCI_DROP BIT_ULL(9)
#define CGXX_CMRX_INT_ENA_W1S 0x058
#define CGXX_CMRX_RX_ID_MAP 0x060
#define CGXX_CMRX_RX_STAT0 0x070
@@ -141,6 +145,7 @@ int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id);
int cgx_lmac_evh_unregister(void *cgxd, int lmac_id);
int cgx_get_tx_stats(void *cgxd, int lmac_id, int idx, u64 *tx_stat);
int cgx_get_rx_stats(void *cgxd, int lmac_id, int idx, u64 *rx_stat);
+int cgx_stats_reset(void *cgxd, int lmac_id);
int cgx_lmac_rx_tx_enable(void *cgxd, int lmac_id, bool enable);
int cgx_lmac_tx_enable(void *cgxd, int lmac_id, bool enable);
int cgx_lmac_addr_set(u8 cgx_id, u8 lmac_id, u8 *mac_addr);
@@ -184,4 +189,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause,
int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
int pfvf_idx);
int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
+u32 cgx_get_fifo_len(void *cgxd);
#endif /* CGX_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
index 0b4cba03f2e8..6180e68e1765 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
@@ -72,7 +72,6 @@ struct mac_ops {
u8 irq_offset;
u8 int_ena_bit;
u8 lmac_fwi;
- u32 fifo_len;
bool non_contiguous_serdes_lane;
/* RPM & CGX differs in number of Receive/transmit stats */
u8 rx_stats_cnt;
@@ -132,6 +131,9 @@ struct mac_ops {
/* FEC stats */
int (*get_fec_stats)(void *cgxd, int lmac_id,
struct cgx_fec_stats_rsp *rsp);
+ int (*mac_stats_reset)(void *cgxd, int lmac_id);
+ void (*mac_x2p_reset)(void *cgxd, bool enable);
+ int (*mac_enadis_rx)(void *cgxd, int lmac_id, bool enable);
};
struct cgx {
@@ -141,6 +143,10 @@ struct cgx {
u8 lmac_count;
/* number of LMACs per MAC could be 4 or 8 */
u8 max_lmac_per_mac;
+ /* length of fifo varies depending on the number
+ * of LMACS
+ */
+ u32 fifo_len;
#define MAX_LMAC_COUNT 8
struct lmac *lmac_idmap[MAX_LMAC_COUNT];
struct work_struct cgx_cmd_work;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index e883c0929b1a..b4b23e475c95 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -174,6 +174,7 @@ M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \
M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req,\
cgx_set_link_mode_rsp) \
M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, msg_rsp) \
+M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \
M(CGX_FEATURES_GET, 0x21B, cgx_features_get, msg_req, \
cgx_features_info_msg) \
M(RPM_STATS, 0x21C, rpm_stats, msg_req, rpm_stats_rsp) \
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
index 76218f1cb459..2e9945446199 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
@@ -38,6 +38,9 @@ static struct mac_ops rpm_mac_ops = {
.pfc_config = rpm_lmac_pfc_config,
.mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
.mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
+ .mac_x2p_reset = rpm_x2p_reset,
+ .mac_enadis_rx = rpm_enadis_rx,
};
static struct mac_ops rpm2_mac_ops = {
@@ -70,6 +73,9 @@ static struct mac_ops rpm2_mac_ops = {
.pfc_config = rpm_lmac_pfc_config,
.mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
.mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
+ .mac_x2p_reset = rpm_x2p_reset,
+ .mac_enadis_rx = rpm_enadis_rx,
};
bool is_dev_rpm2(void *rpmd)
@@ -443,6 +449,21 @@ int rpm_get_tx_stats(void *rpmd, int lmac_id, int idx, u64 *tx_stat)
return 0;
}
+int rpm_stats_reset(void *rpmd, int lmac_id)
+{
+ rpm_t *rpm = rpmd;
+ u64 cfg;
+
+ if (!is_lmac_valid(rpm, lmac_id))
+ return -ENODEV;
+
+ cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL);
+ cfg |= RPMX_CMD_CLEAR_TX | RPMX_CMD_CLEAR_RX | BIT_ULL(lmac_id);
+ rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg);
+
+ return 0;
+}
+
u8 rpm_get_lmac_type(void *rpmd, int lmac_id)
{
rpm_t *rpm = rpmd;
@@ -450,7 +471,7 @@ u8 rpm_get_lmac_type(void *rpmd, int lmac_id)
int err;
req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req);
- err = cgx_fwi_cmd_generic(req, &resp, rpm, 0);
+ err = cgx_fwi_cmd_generic(req, &resp, rpm, lmac_id);
if (!err)
return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp);
return err;
@@ -463,7 +484,7 @@ u32 rpm_get_lmac_fifo_len(void *rpmd, int lmac_id)
u8 num_lmacs;
u32 fifo_len;
- fifo_len = rpm->mac_ops->fifo_len;
+ fifo_len = rpm->fifo_len;
num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm);
switch (num_lmacs) {
@@ -516,9 +537,9 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
*/
max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF;
if (max_lmac > 4)
- fifo_len = rpm->mac_ops->fifo_len / 2;
+ fifo_len = rpm->fifo_len / 2;
else
- fifo_len = rpm->mac_ops->fifo_len;
+ fifo_len = rpm->fifo_len;
if (lmac_id < 4) {
num_lmacs = hweight8(lmac_info & 0xF);
@@ -682,46 +703,51 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp)
if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE)
return 0;
+ /* latched registers FCFECX_CW_HI/RSFEC_STAT_FAST_DATA_HI_CDC are common
+ * for all counters. Acquire lock to ensure serialized reads
+ */
+ mutex_lock(&rpm->lock);
if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) {
- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_CCW_LO);
- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
+ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_CCW_LO(lmac_id));
+ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_corr_blks = (val_hi << 16 | val_lo);
- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_NCCW_LO);
- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
+ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_NCCW_LO(lmac_id));
+ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_uncorr_blks = (val_hi << 16 | val_lo);
/* 50G uses 2 Physical serdes lines */
if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id ==
LMAC_MODE_50G_R) {
- val_lo = rpm_read(rpm, lmac_id,
- RPMX_MTI_FCFECX_VL1_CCW_LO);
- val_hi = rpm_read(rpm, lmac_id,
- RPMX_MTI_FCFECX_CW_HI);
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_FCFECX_VL1_CCW_LO(lmac_id));
+ val_hi = rpm_read(rpm, 0,
+ RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_corr_blks += (val_hi << 16 | val_lo);
- val_lo = rpm_read(rpm, lmac_id,
- RPMX_MTI_FCFECX_VL1_NCCW_LO);
- val_hi = rpm_read(rpm, lmac_id,
- RPMX_MTI_FCFECX_CW_HI);
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_FCFECX_VL1_NCCW_LO(lmac_id));
+ val_hi = rpm_read(rpm, 0,
+ RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_uncorr_blks += (val_hi << 16 | val_lo);
}
} else {
/* enable RS-FEC capture */
- cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL);
+ cfg = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL);
cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id);
- rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg);
+ rpm_write(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL, cfg);
val_lo = rpm_read(rpm, 0,
RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2);
- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
+ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
rsp->fec_corr_blks = (val_hi << 32 | val_lo);
val_lo = rpm_read(rpm, 0,
RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3);
- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
+ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
rsp->fec_uncorr_blks = (val_hi << 32 | val_lo);
}
+ mutex_unlock(&rpm->lock);
return 0;
}
@@ -746,3 +772,41 @@ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
return 0;
}
+
+void rpm_x2p_reset(void *rpmd, bool enable)
+{
+ rpm_t *rpm = rpmd;
+ int lmac_id;
+ u64 cfg;
+
+ if (enable) {
+ for_each_set_bit(lmac_id, &rpm->lmac_bmap, rpm->max_lmac_per_mac)
+ rpm->mac_ops->mac_enadis_rx(rpm, lmac_id, false);
+
+ usleep_range(1000, 2000);
+
+ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
+ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg | RPM_NIX0_RESET);
+ } else {
+ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
+ cfg &= ~RPM_NIX0_RESET;
+ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg);
+ }
+}
+
+int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable)
+{
+ rpm_t *rpm = rpmd;
+ u64 cfg;
+
+ if (!is_lmac_valid(rpm, lmac_id))
+ return -ENODEV;
+
+ cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
+ if (enable)
+ cfg |= RPM_RX_EN;
+ else
+ cfg &= ~RPM_RX_EN;
+ rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
+ return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
index b79cfbc6f877..b8d3972e096a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
@@ -17,6 +17,8 @@
/* Registers */
#define RPMX_CMRX_CFG 0x00
+#define RPMX_CMR_GLOBAL_CFG 0x08
+#define RPM_NIX0_RESET BIT_ULL(3)
#define RPMX_RX_TS_PREPEND BIT_ULL(22)
#define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17)
#define RPMX_CMRX_RX_ID_MAP 0x80
@@ -84,14 +86,18 @@
/* FEC stats */
#define RPMX_MTI_STAT_STATN_CONTROL 0x10018
#define RPMX_MTI_STAT_DATA_HI_CDC 0x10038
-#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(27)
+#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(28)
+#define RPMX_CMD_CLEAR_RX BIT_ULL(30)
+#define RPMX_CMD_CLEAR_TX BIT_ULL(31)
+#define RPMX_MTI_RSFEC_STAT_STATN_CONTROL 0x40018
+#define RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC 0x40000
#define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050
#define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058
-#define RPMX_MTI_FCFECX_VL0_CCW_LO 0x38618
-#define RPMX_MTI_FCFECX_VL0_NCCW_LO 0x38620
-#define RPMX_MTI_FCFECX_VL1_CCW_LO 0x38628
-#define RPMX_MTI_FCFECX_VL1_NCCW_LO 0x38630
-#define RPMX_MTI_FCFECX_CW_HI 0x38638
+#define RPMX_MTI_FCFECX_VL0_CCW_LO(a) (0x38618 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_VL0_NCCW_LO(a) (0x38620 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_VL1_CCW_LO(a) (0x38628 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_VL1_NCCW_LO(a) (0x38630 + ((a) * 0x40))
+#define RPMX_MTI_FCFECX_CW_HI(a) (0x38638 + ((a) * 0x40))
/* CN10KB CSR Declaration */
#define RPM2_CMRX_SW_INT 0x1b0
@@ -134,4 +140,7 @@ int rpm2_get_nr_lmacs(void *rpmd);
bool is_dev_rpm2(void *rpmd);
int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
+int rpm_stats_reset(void *rpmd, int lmac_id);
+void rpm_x2p_reset(void *rpmd, bool enable);
+int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable);
#endif /* RPM_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 5906f5f8d190..524173722223 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -1157,6 +1157,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
}
rvu_program_channels(rvu);
+ cgx_start_linkup(rvu);
err = rvu_mcs_init(rvu);
if (err) {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index e81cfcaf9ce4..a607c7294b0c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -912,6 +912,7 @@ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_
int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
+void cgx_start_linkup(struct rvu *rvu);
int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
int type);
bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
index 19075f217d00..d14cf2a9d207 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
@@ -349,6 +349,7 @@ static void rvu_cgx_wq_destroy(struct rvu *rvu)
int rvu_cgx_init(struct rvu *rvu)
{
+ struct mac_ops *mac_ops;
int cgx, err;
void *cgxd;
@@ -375,6 +376,15 @@ int rvu_cgx_init(struct rvu *rvu)
if (err)
return err;
+ /* Clear X2P reset on all MAC blocks */
+ for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) {
+ cgxd = rvu_cgx_pdata(cgx, rvu);
+ if (!cgxd)
+ continue;
+ mac_ops = get_mac_ops(cgxd);
+ mac_ops->mac_x2p_reset(cgxd, false);
+ }
+
/* Register for CGX events */
err = cgx_lmac_event_handler_init(rvu);
if (err)
@@ -382,10 +392,26 @@ int rvu_cgx_init(struct rvu *rvu)
mutex_init(&rvu->cgx_cfg_lock);
- /* Ensure event handler registration is completed, before
- * we turn on the links
- */
- mb();
+ return 0;
+}
+
+void cgx_start_linkup(struct rvu *rvu)
+{
+ unsigned long lmac_bmap;
+ struct mac_ops *mac_ops;
+ int cgx, lmac, err;
+ void *cgxd;
+
+ /* Enable receive on all LMACS */
+ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
+ cgxd = rvu_cgx_pdata(cgx, rvu);
+ if (!cgxd)
+ continue;
+ mac_ops = get_mac_ops(cgxd);
+ lmac_bmap = cgx_get_lmac_bmap(cgxd);
+ for_each_set_bit(lmac, &lmac_bmap, rvu->hw->lmac_per_cgx)
+ mac_ops->mac_enadis_rx(cgxd, lmac, true);
+ }
/* Do link up for all CGX ports */
for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
@@ -398,8 +424,6 @@ int rvu_cgx_init(struct rvu *rvu)
"Link up process failed to start on cgx %d\n",
cgx);
}
-
- return 0;
}
int rvu_cgx_exit(struct rvu *rvu)
@@ -604,6 +628,35 @@ int rvu_mbox_handler_rpm_stats(struct rvu *rvu, struct msg_req *req,
return rvu_lmac_get_stats(rvu, req, (void *)rsp);
}
+int rvu_mbox_handler_cgx_stats_rst(struct rvu *rvu, struct msg_req *req,
+ struct msg_rsp *rsp)
+{
+ int pf = rvu_get_pf(req->hdr.pcifunc);
+ struct rvu_pfvf *parent_pf;
+ struct mac_ops *mac_ops;
+ u8 cgx_idx, lmac;
+ void *cgxd;
+
+ if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+ return LMAC_AF_ERR_PERM_DENIED;
+
+ parent_pf = &rvu->pf[pf];
+ /* To ensure reset cgx stats won't affect VF stats,
+ * check if it used by only PF interface.
+ * If not, return
+ */
+ if (parent_pf->cgx_users > 1) {
+ dev_info(rvu->dev, "CGX busy, could not reset statistics\n");
+ return 0;
+ }
+
+ rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_idx, &lmac);
+ cgxd = rvu_cgx_pdata(cgx_idx, rvu);
+ mac_ops = get_mac_ops(cgxd);
+
+ return mac_ops->mac_stats_reset(cgxd, lmac);
+}
+
int rvu_mbox_handler_cgx_fec_stats(struct rvu *rvu,
struct msg_req *req,
struct cgx_fec_stats_rsp *rsp)
@@ -895,13 +948,12 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu,
u32 rvu_cgx_get_fifolen(struct rvu *rvu)
{
- struct mac_ops *mac_ops;
- u32 fifo_len;
+ void *cgxd = rvu_first_cgx_pdata(rvu);
- mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu));
- fifo_len = mac_ops ? mac_ops->fifo_len : 0;
+ if (!cgxd)
+ return 0;
- return fifo_len;
+ return cgx_get_fifo_len(cgxd);
}
u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
index c1c99d7054f8..7417087b6db5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
@@ -203,6 +203,11 @@ int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf)
rsp = (struct nix_bandprof_alloc_rsp *)
otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ rc = PTR_ERR(rsp);
+ goto out;
+ }
+
if (!rsp->prof_count[BAND_PROF_LEAF_LAYER]) {
rc = -EIO;
goto out;
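This and the following otx2 hunks all add the same missing check: otx2_mbox_get_rsp() can hand back an ERR_PTR-encoded error, and dereferencing that without an IS_ERR() test would crash. The generic pattern, with a hypothetical getter standing in for the mailbox call:

#include <linux/err.h>

struct example_rsp {
	int value;
};

static struct example_rsp *example_get_rsp(void)
{
	/* stand-in: a real getter returns a valid response or ERR_PTR(-E...) */
	return ERR_PTR(-EAGAIN);
}

static int example_use_rsp(void)
{
	struct example_rsp *rsp = example_get_rsp();

	if (IS_ERR(rsp))
		return PTR_ERR(rsp);	/* propagate instead of dereferencing */

	return rsp->value;
}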
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index b3064377510e..47adccf7a777 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1837,6 +1837,10 @@ u16 otx2_get_max_mtu(struct otx2_nic *pfvf)
if (!rc) {
rsp = (struct nix_hw_info *)
otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ rc = PTR_ERR(rsp);
+ goto out;
+ }
/* HW counts VLAN insertion bytes (8 for double tag)
* irrespective of whether SQE is requesting to insert VLAN
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 7e16a341ec58..c5de3ba33e2f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -961,6 +961,7 @@ void otx2_get_mac_from_af(struct net_device *netdev);
void otx2_config_irq_coalescing(struct otx2_nic *pfvf, int qidx);
int otx2_config_pause_frm(struct otx2_nic *pfvf);
void otx2_setup_segmentation(struct otx2_nic *pfvf);
+int otx2_reset_mac_stats(struct otx2_nic *pfvf);
/* RVU block related APIs */
int otx2_attach_npa_nix(struct otx2_nic *pfvf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
index aa01110f04a3..294fba58b670 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
@@ -315,6 +315,11 @@ int otx2_config_priority_flow_ctrl(struct otx2_nic *pfvf)
if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
rsp = (struct cgx_pfc_rsp *)
otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ err = PTR_ERR(rsp);
+ goto unlock;
+ }
+
if (req->rx_pause != rsp->rx_pause || req->tx_pause != rsp->tx_pause) {
dev_warn(pfvf->dev,
"Failed to config PFC\n");
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
index 80d853b343f9..2046dd0da00d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
@@ -28,6 +28,11 @@ static int otx2_dmacflt_do_add(struct otx2_nic *pf, const u8 *mac,
if (!err) {
rsp = (struct cgx_mac_addr_add_rsp *)
otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ mutex_unlock(&pf->mbox.lock);
+ return PTR_ERR(rsp);
+ }
+
*dmac_index = rsp->index;
}
@@ -200,6 +205,10 @@ int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u32 bit_pos)
rsp = (struct cgx_mac_addr_update_rsp *)
otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ rc = PTR_ERR(rsp);
+ goto out;
+ }
pf->flow_cfg->bmap_to_dmacindex[bit_pos] = rsp->index;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
index 8b7fc0af91ce..532e84bc38c7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
@@ -343,6 +343,11 @@ static void otx2_get_pauseparam(struct net_device *netdev,
if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
rsp = (struct cgx_pause_frm_cfg *)
otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ mutex_unlock(&pfvf->mbox.lock);
+ return;
+ }
+
pause->rx_pause = rsp->rx_pause;
pause->tx_pause = rsp->tx_pause;
}
@@ -1082,6 +1087,11 @@ static int otx2_set_fecparam(struct net_device *netdev,
rsp = (struct fec_mode *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ err = PTR_ERR(rsp);
+ goto end;
+ }
+
if (rsp->fec >= 0)
pfvf->linfo.fec = rsp->fec;
else
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
index 97a71e9b8563..e6082f90f57a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
@@ -121,6 +121,8 @@ int otx2_alloc_mcam_entries(struct otx2_nic *pfvf, u16 count)
rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
(&pfvf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp))
+ goto exit;
for (ent = 0; ent < rsp->count; ent++)
flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent];
@@ -199,6 +201,10 @@ static int otx2_mcam_entry_init(struct otx2_nic *pfvf)
rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
(&pfvf->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(rsp)) {
+ mutex_unlock(&pfvf->mbox.lock);
+ return PTR_ERR(rsp);
+ }
if (rsp->count != req->count) {
netdev_info(pfvf->netdev,
@@ -234,6 +240,10 @@ static int otx2_mcam_entry_init(struct otx2_nic *pfvf)
frsp = (struct npc_get_field_status_rsp *)otx2_mbox_get_rsp
(&pfvf->mbox.mbox, 0, &freq->hdr);
+ if (IS_ERR(frsp)) {
+ mutex_unlock(&pfvf->mbox.lock);
+ return PTR_ERR(frsp);
+ }
if (frsp->enable) {
pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 3f46d5e0fb2e..b4194ec2a1f2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1150,6 +1150,23 @@ static int otx2_cgx_config_linkevents(struct otx2_nic *pf, bool enable)
return err;
}
+int otx2_reset_mac_stats(struct otx2_nic *pfvf)
+{
+ struct msg_req *req;
+ int err;
+
+ mutex_lock(&pfvf->mbox.lock);
+ req = otx2_mbox_alloc_msg_cgx_stats_rst(&pfvf->mbox);
+ if (!req) {
+ mutex_unlock(&pfvf->mbox.lock);
+ return -ENOMEM;
+ }
+
+ err = otx2_sync_mbox_msg(&pfvf->mbox);
+ mutex_unlock(&pfvf->mbox.lock);
+ return err;
+}
+
static int otx2_cgx_config_loopback(struct otx2_nic *pf, bool enable)
{
struct msg_req *msg;
@@ -3038,6 +3055,9 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
netdev->min_mtu = OTX2_MIN_MTU;
netdev->max_mtu = otx2_get_max_mtu(pf);
+ /* reset CGX/RPM MAC stats */
+ otx2_reset_mac_stats(pf);
+
err = register_netdev(netdev);
if (err) {
dev_err(dev, "Failed to register netdevice\n");
diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
index d5691b6a2bc5..f2ca4376b48c 100644
--- a/drivers/net/ethernet/marvell/pxa168_eth.c
+++ b/drivers/net/ethernet/marvell/pxa168_eth.c
@@ -1394,18 +1394,15 @@ static int pxa168_eth_probe(struct platform_device *pdev)
printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
- clk = devm_clk_get(&pdev->dev, NULL);
+ clk = devm_clk_get_enabled(&pdev->dev, NULL);
if (IS_ERR(clk)) {
- dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n");
+ dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n");
return -ENODEV;
}
- clk_prepare_enable(clk);
dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
- if (!dev) {
- err = -ENOMEM;
- goto err_clk;
- }
+ if (!dev)
+ return -ENOMEM;
platform_set_drvdata(pdev, dev);
pep = netdev_priv(dev);
@@ -1523,8 +1520,6 @@ static int pxa168_eth_probe(struct platform_device *pdev)
mdiobus_free(pep->smi_bus);
err_netdev:
free_netdev(dev);
-err_clk:
- clk_disable_unprepare(clk);
return err;
}
@@ -1542,7 +1537,6 @@ static int pxa168_eth_remove(struct platform_device *pdev)
if (dev->phydev)
phy_disconnect(dev->phydev);
- clk_disable_unprepare(pep->clk);
mdiobus_unregister(pep->smi_bus);
mdiobus_free(pep->smi_bus);
unregister_netdev(dev);
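The pxa168_eth conversion to devm_clk_get_enabled() folds clk_get() plus clk_prepare_enable() into one managed call, so the clock is disabled and released automatically and both the err_clk label and the remove-time clk_disable_unprepare() can go. Roughly, for a hypothetical probe helper:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_get_clock(struct device *dev)
{
	struct clk *clk;

	/* acquired, prepared and enabled in one step; undone on unbind */
	clk = devm_clk_get_enabled(dev, NULL);
	if (IS_ERR(clk))
		return dev_err_probe(dev, PTR_ERR(clk), "failed to get clock\n");

	return 0;
}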
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
index 66ef14d95bf6..88744ae65293 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
@@ -366,12 +366,13 @@ static void vcap_api_iterator_init_test(struct kunit *test)
struct vcap_typegroup typegroups[] = {
{ .offset = 0, .width = 2, .value = 2, },
{ .offset = 156, .width = 1, .value = 0, },
- { .offset = 0, .width = 0, .value = 0, },
+ { }
};
struct vcap_typegroup typegroups2[] = {
{ .offset = 0, .width = 3, .value = 4, },
{ .offset = 49, .width = 2, .value = 0, },
{ .offset = 98, .width = 2, .value = 0, },
+ { }
};
vcap_iter_init(&iter, 52, typegroups, 86);
@@ -399,6 +400,7 @@ static void vcap_api_iterator_next_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 0, },
{ .offset = 196, .width = 2, .value = 0, },
{ .offset = 245, .width = 1, .value = 0, },
+ { }
};
int idx;
@@ -433,7 +435,7 @@ static void vcap_api_encode_typegroups_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 5, },
{ .offset = 196, .width = 2, .value = 2, },
{ .offset = 245, .width = 5, .value = 27, },
- { .offset = 0, .width = 0, .value = 0, },
+ { }
};
vcap_encode_typegroups(stream, 49, typegroups, false);
@@ -463,6 +465,7 @@ static void vcap_api_encode_bit_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 5, },
{ .offset = 196, .width = 2, .value = 2, },
{ .offset = 245, .width = 1, .value = 0, },
+ { }
};
vcap_iter_init(&iter, 49, typegroups, 44);
@@ -489,7 +492,7 @@ static void vcap_api_encode_field_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 5, },
{ .offset = 196, .width = 2, .value = 2, },
{ .offset = 245, .width = 5, .value = 27, },
- { .offset = 0, .width = 0, .value = 0, },
+ { }
};
struct vcap_field rf = {
.type = VCAP_FIELD_U32,
@@ -538,7 +541,7 @@ static void vcap_api_encode_short_field_test(struct kunit *test)
{ .offset = 0, .width = 3, .value = 7, },
{ .offset = 21, .width = 2, .value = 3, },
{ .offset = 42, .width = 1, .value = 1, },
- { .offset = 0, .width = 0, .value = 0, },
+ { }
};
struct vcap_field rf = {
.type = VCAP_FIELD_U32,
@@ -608,7 +611,7 @@ static void vcap_api_encode_keyfield_test(struct kunit *test)
struct vcap_typegroup tgt[] = {
{ .offset = 0, .width = 2, .value = 2, },
{ .offset = 156, .width = 1, .value = 1, },
- { .offset = 0, .width = 0, .value = 0, },
+ { }
};
vcap_test_api_init(&admin);
@@ -671,7 +674,7 @@ static void vcap_api_encode_max_keyfield_test(struct kunit *test)
struct vcap_typegroup tgt[] = {
{ .offset = 0, .width = 2, .value = 2, },
{ .offset = 156, .width = 1, .value = 1, },
- { .offset = 0, .width = 0, .value = 0, },
+ { }
};
u32 keyres[] = {
0x928e8a84,
@@ -732,7 +735,7 @@ static void vcap_api_encode_actionfield_test(struct kunit *test)
{ .offset = 0, .width = 2, .value = 2, },
{ .offset = 21, .width = 1, .value = 1, },
{ .offset = 42, .width = 1, .value = 0, },
- { .offset = 0, .width = 0, .value = 0, },
+ { }
};
vcap_encode_actionfield(&rule, &caf, &rf, tgt);
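The vcap kunit changes replace explicit all-zero terminator entries with an empty { } initializer and add terminators that were missing; table walkers of this kind typically rely on an all-zero sentinel entry to know where the array ends. In miniature, with hypothetical types:

struct example_typegroup {
	int offset;
	int width;
	int value;
};

static const struct example_typegroup example_tgs[] = {
	{ .offset = 0, .width = 3, .value = 4, },
	{ .offset = 49, .width = 2, .value = 0, },
	{ }	/* sentinel: all fields zero, terminates the walk */
};

static int example_count(const struct example_typegroup *tg)
{
	int n = 0;

	/* here the walk stops at the first zero-width (sentinel) entry */
	while (tg->width) {
		n++;
		tg++;
	}
	return n;
}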
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
index 9bf102bbc6a0..5d20325a18dd 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
@@ -429,6 +429,8 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
plat_dat->bsp_priv = dwmac;
plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed;
+ plat_dat->riwt_off = 1;
+
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
goto err_remove_config_dt;
diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
index 78b93de636f5..0e13fca8731a 100644
--- a/drivers/net/mdio/mdio-ipq4019.c
+++ b/drivers/net/mdio/mdio-ipq4019.c
@@ -255,8 +255,11 @@ static int ipq4019_mdio_probe(struct platform_device *pdev)
/* The platform resource is provided on the chipset IPQ5018 */
/* This resource is optional */
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
- if (res)
+ if (res) {
priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(priv->eth_ldo_rdy))
+ return PTR_ERR(priv->eth_ldo_rdy);
+ }
bus->name = "ipq4019_mdio";
bus->read = ipq4019_mdio_read_c22;
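The ipq4019 MDIO fix adds the IS_ERR() check that devm_ioremap_resource() needs: on failure it returns an ERR_PTR value rather than NULL, so the old code could latch a bogus pointer for the optional register window. A sketch of the expected usage, where the resource index is only an assumption for illustration:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static int example_map_optional(struct platform_device *pdev,
				void __iomem **base)
{
	struct resource *res;

	/* optional second register window; index 1 is an assumption here */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
	if (!res)
		return 0;		/* genuinely optional: absent is fine */

	*base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(*base))
		return PTR_ERR(*base);	/* present but unmappable: fail */

	return 0;
}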
diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
index f0d58092e7e9..3612b0633bd1 100644
--- a/drivers/net/netdevsim/ipsec.c
+++ b/drivers/net/netdevsim/ipsec.c
@@ -176,14 +176,13 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
return ret;
}
- if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
+ if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN)
sa.rx = true;
- if (xs->props.family == AF_INET6)
- memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
- else
- memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
- }
+ if (xs->props.family == AF_INET6)
+ memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
+ else
+ memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
/* the preparations worked, so save the info */
memcpy(&ipsec->sa[sa_idx], &sa, sizeof(sa));
diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
index 921ae046f860..09173d7b87ed 100644
--- a/drivers/net/usb/lan78xx.c
+++ b/drivers/net/usb/lan78xx.c
@@ -1657,13 +1657,13 @@ static int lan78xx_set_wol(struct net_device *netdev,
struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
int ret;
+ if (wol->wolopts & ~WAKE_ALL)
+ return -EINVAL;
+
ret = usb_autopm_get_interface(dev->intf);
if (ret < 0)
return ret;
- if (wol->wolopts & ~WAKE_ALL)
- return -EINVAL;
-
pdata->wol = wol->wolopts;
device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
@@ -2387,6 +2387,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
if (dev->chipid == ID_REV_CHIP_ID_7801_) {
if (phy_is_pseudo_fixed_link(phydev)) {
fixed_phy_unregister(phydev);
+ phy_device_free(phydev);
} else {
phy_unregister_fixup_for_uid(PHY_KSZ9031RNX,
0xfffffff0);
@@ -4246,8 +4247,10 @@ static void lan78xx_disconnect(struct usb_interface *intf)
phy_disconnect(net->phydev);
- if (phy_is_pseudo_fixed_link(phydev))
+ if (phy_is_pseudo_fixed_link(phydev)) {
fixed_phy_unregister(phydev);
+ phy_device_free(phydev);
+ }
usb_scuttle_anchored_urbs(&dev->deferred);
@@ -4414,29 +4417,30 @@ static int lan78xx_probe(struct usb_interface *intf,
period = ep_intr->desc.bInterval;
maxp = usb_maxpacket(dev->udev, dev->pipe_intr);
- buf = kmalloc(maxp, GFP_KERNEL);
- if (!buf) {
+
+ dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
+ if (!dev->urb_intr) {
ret = -ENOMEM;
goto out5;
}
- dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
- if (!dev->urb_intr) {
+ buf = kmalloc(maxp, GFP_KERNEL);
+ if (!buf) {
ret = -ENOMEM;
- goto out6;
- } else {
- usb_fill_int_urb(dev->urb_intr, dev->udev,
- dev->pipe_intr, buf, maxp,
- intr_complete, dev, period);
- dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
+ goto free_urbs;
}
+ usb_fill_int_urb(dev->urb_intr, dev->udev,
+ dev->pipe_intr, buf, maxp,
+ intr_complete, dev, period);
+ dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
+
dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out);
/* Reject broken descriptors. */
if (dev->maxpacket == 0) {
ret = -ENODEV;
- goto out6;
+ goto free_urbs;
}
/* driver requires remote-wakeup capability during autosuspend. */
@@ -4444,7 +4448,7 @@ static int lan78xx_probe(struct usb_interface *intf,
ret = lan78xx_phy_init(dev);
if (ret < 0)
- goto out7;
+ goto free_urbs;
ret = register_netdev(netdev);
if (ret != 0) {
@@ -4466,10 +4470,8 @@ static int lan78xx_probe(struct usb_interface *intf,
out8:
phy_disconnect(netdev->phydev);
-out7:
+free_urbs:
usb_free_urb(dev->urb_intr);
-out6:
- kfree(buf);
out5:
lan78xx_unbind(dev, intf);
out4:
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index 2cf4324a12fd..89775b6d0699 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -1084,6 +1084,7 @@ static const struct usb_device_id products[] = {
USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7),
.driver_info = (unsigned long)&qmi_wwan_info,
},
+ {QMI_MATCH_FF_FF_FF(0x2c7c, 0x0122)}, /* Quectel RG650V */
{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0125)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */
{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0306)}, /* Quectel EP06/EG06/EM06 */
{QMI_MATCH_FF_FF_FF(0x2c7c, 0x0512)}, /* Quectel EG12/EM12 */
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index ce19ebd180f1..3e5998555f98 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -10016,6 +10016,7 @@ static const struct usb_device_id rtl8152_table[] = {
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x3062) },
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x3069) },
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x3082) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3098) },
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x7205) },
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x720c) },
{ USB_DEVICE(VENDOR_ID_LENOVO, 0x7214) },
diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
index 03e7bc5b6c0b..d5e6e11f630b 100644
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -9119,7 +9119,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss1[
{6, {2633, 2925}, {1215, 1350}, {585, 650} },
{7, {2925, 3250}, {1350, 1500}, {650, 722} },
{8, {3510, 3900}, {1620, 1800}, {780, 867} },
- {9, {3900, 4333}, {1800, 2000}, {780, 867} }
+ {9, {3900, 4333}, {1800, 2000}, {865, 960} }
};
/*MCS parameters with Nss = 2 */
@@ -9134,7 +9134,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss2[
{6, {5265, 5850}, {2430, 2700}, {1170, 1300} },
{7, {5850, 6500}, {2700, 3000}, {1300, 1444} },
{8, {7020, 7800}, {3240, 3600}, {1560, 1733} },
- {9, {7800, 8667}, {3600, 4000}, {1560, 1733} }
+ {9, {7800, 8667}, {3600, 4000}, {1730, 1920} }
};
static void ath10k_mac_get_rate_flags_ht(struct ath10k *ar, u32 rate, u8 nss, u8 mcs,
diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
index 83dc284392de..fa46e645009c 100644
--- a/drivers/net/wireless/ath/ath11k/qmi.c
+++ b/drivers/net/wireless/ath/ath11k/qmi.c
@@ -2180,6 +2180,9 @@ static int ath11k_qmi_request_device_info(struct ath11k_base *ab)
ab->mem = bar_addr_va;
ab->mem_len = resp.bar_size;
+ if (!ab->hw_params.ce_remap)
+ ab->mem_ce = ab->mem;
+
return 0;
out:
return ret;
diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
index 907655c45a4b..c663ff990b47 100644
--- a/drivers/net/wireless/ath/ath12k/dp.c
+++ b/drivers/net/wireless/ath/ath12k/dp.c
@@ -1214,6 +1214,7 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
}
kfree(dp->spt_info);
+ dp->spt_info = NULL;
}
static void ath12k_dp_reoq_lut_cleanup(struct ath12k_base *ab)
@@ -1249,8 +1250,10 @@ void ath12k_dp_free(struct ath12k_base *ab)
ath12k_dp_rx_reo_cmd_list_cleanup(ab);
- for (i = 0; i < ab->hw_params->max_tx_ring; i++)
+ for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
kfree(dp->tx_ring[i].tx_status);
+ dp->tx_ring[i].tx_status = NULL;
+ }
ath12k_dp_rx_free(ab);
/* Deinit any SOC level resource */
diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
index 4bb30e403728..f90191a290c2 100644
--- a/drivers/net/wireless/ath/ath12k/mac.c
+++ b/drivers/net/wireless/ath/ath12k/mac.c
@@ -775,7 +775,10 @@ void ath12k_mac_peer_cleanup_all(struct ath12k *ar)
spin_lock_bh(&ab->base_lock);
list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
- ath12k_dp_rx_peer_tid_cleanup(ar, peer);
+ /* Skip Rx TID cleanup for self peer */
+ if (peer->sta)
+ ath12k_dp_rx_peer_tid_cleanup(ar, peer);
+
list_del(&peer->list);
kfree(peer);
}
diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
index 99667aba289d..00dc97ac53b9 100644
--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
+++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
@@ -294,6 +294,9 @@ int htc_connect_service(struct htc_target *target,
return -ETIMEDOUT;
}
+ if (target->conn_rsp_epid < 0 || target->conn_rsp_epid >= ENDPOINT_MAX)
+ return -EINVAL;
+
*conn_rsp_epid = target->conn_rsp_epid;
return 0;
err:
diff --git a/drivers/net/wireless/ath/wil6210/txrx.c b/drivers/net/wireless/ath/wil6210/txrx.c
index f29ac6de7139..19702b6f09c3 100644
--- a/drivers/net/wireless/ath/wil6210/txrx.c
+++ b/drivers/net/wireless/ath/wil6210/txrx.c
@@ -306,7 +306,7 @@ static void wil_rx_add_radiotap_header(struct wil6210_priv *wil,
struct sk_buff *skb)
{
struct wil6210_rtap {
- struct ieee80211_radiotap_header rthdr;
+ struct ieee80211_radiotap_header_fixed rthdr;
/* fields should be in the order of bits in rthdr.it_present */
/* flags */
u8 flags;
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
index e406e11481a6..73fc701204e2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
@@ -109,9 +109,8 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type,
}
strreplace(board_type, '/', '-');
settings->board_type = board_type;
-
- of_node_put(root);
}
+ of_node_put(root);
if (!np || !of_device_is_compatible(np, "brcm,bcm4329-fmac"))
return;
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
index 0812db8936f1..9e9ff0cb724c 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
@@ -2520,7 +2520,7 @@ static void isr_rx_monitor(struct ipw2100_priv *priv, int i,
* to build this manually element by element, we can write it much
* more efficiently than we can parse it. ORDER MATTERS HERE */
struct ipw_rt_hdr {
- struct ieee80211_radiotap_header rt_hdr;
+ struct ieee80211_radiotap_header_fixed rt_hdr;
s8 rt_dbmsignal; /* signal in dbM, kluged to signed */
} *ipw_rt;
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.h b/drivers/net/wireless/intel/ipw2x00/ipw2200.h
index 8ebf09121e17..226286cb7eb8 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.h
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.h
@@ -1143,7 +1143,7 @@ struct ipw_prom_priv {
* structure is provided regardless of any bits unset.
*/
struct ipw_rt_hdr {
- struct ieee80211_radiotap_header rt_hdr;
+ struct ieee80211_radiotap_header_fixed rt_hdr;
u64 rt_tsf; /* TSF */ /* XXX */
u8 rt_flags; /* radiotap packet flags */
u8 rt_rate; /* rate in 500kb/s */
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/init.c b/drivers/net/wireless/intel/iwlwifi/fw/init.c
index 135bd48bfe9f..cf02a2afbee5 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/init.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/init.c
@@ -39,10 +39,12 @@ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
}
IWL_EXPORT_SYMBOL(iwl_fw_runtime_init);
+/* Assumes the appropriate lock is held by the caller */
void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt)
{
iwl_fw_suspend_timestamp(fwrt);
- iwl_dbg_tlv_time_point(fwrt, IWL_FW_INI_TIME_POINT_HOST_D3_START, NULL);
+ iwl_dbg_tlv_time_point_sync(fwrt, IWL_FW_INI_TIME_POINT_HOST_D3_START,
+ NULL);
}
IWL_EXPORT_SYMBOL(iwl_fw_runtime_suspend);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
index 08d1fab7f53c..592b9157d50c 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
@@ -1382,7 +1382,9 @@ int iwl_mvm_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
iwl_mvm_pause_tcm(mvm, true);
+ mutex_lock(&mvm->mutex);
iwl_fw_runtime_suspend(&mvm->fwrt);
+ mutex_unlock(&mvm->mutex);
return __iwl_mvm_suspend(hw, wowlan, false);
}
diff --git a/drivers/net/wireless/intersil/p54/p54spi.c b/drivers/net/wireless/intersil/p54/p54spi.c
index ce0179b8ab36..90ebed33d792 100644
--- a/drivers/net/wireless/intersil/p54/p54spi.c
+++ b/drivers/net/wireless/intersil/p54/p54spi.c
@@ -624,7 +624,7 @@ static int p54spi_probe(struct spi_device *spi)
gpio_direction_input(p54spi_gpio_irq);
ret = request_irq(gpio_to_irq(p54spi_gpio_irq),
- p54spi_interrupt, 0, "p54spi",
+ p54spi_interrupt, IRQF_NO_AUTOEN, "p54spi",
priv->spi);
if (ret < 0) {
dev_err(&priv->spi->dev, "request_irq() failed");
@@ -633,8 +633,6 @@ static int p54spi_probe(struct spi_device *spi)
irq_set_irq_type(gpio_to_irq(p54spi_gpio_irq), IRQ_TYPE_EDGE_RISING);
- disable_irq(gpio_to_irq(p54spi_gpio_irq));
-
INIT_WORK(&priv->work, p54spi_work);
init_completion(&priv->fw_comp);
INIT_LIST_HEAD(&priv->tx_pending);
diff --git a/drivers/net/wireless/marvell/libertas/radiotap.h b/drivers/net/wireless/marvell/libertas/radiotap.h
index 1ed5608d353f..d543bfe739dc 100644
--- a/drivers/net/wireless/marvell/libertas/radiotap.h
+++ b/drivers/net/wireless/marvell/libertas/radiotap.h
@@ -2,7 +2,7 @@
#include <net/ieee80211_radiotap.h>
struct tx_radiotap_hdr {
- struct ieee80211_radiotap_header hdr;
+ struct ieee80211_radiotap_header_fixed hdr;
u8 rate;
u8 txpower;
u8 rts_retries;
@@ -31,7 +31,7 @@ struct tx_radiotap_hdr {
#define IEEE80211_FC_DSTODS 0x0300
struct rx_radiotap_hdr {
- struct ieee80211_radiotap_header hdr;
+ struct ieee80211_radiotap_header_fixed hdr;
u8 flags;
u8 rate;
u8 antsignal;
diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
index a3be37526697..7b06a6d57ffb 100644
--- a/drivers/net/wireless/marvell/mwifiex/fw.h
+++ b/drivers/net/wireless/marvell/mwifiex/fw.h
@@ -842,7 +842,7 @@ struct mwifiex_ietypes_chanstats {
struct mwifiex_ie_types_wildcard_ssid_params {
struct mwifiex_ie_types_header header;
u8 max_ssid_length;
- u8 ssid[1];
+ u8 ssid[];
} __packed;
#define TSF_DATA_SIZE 8
diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
index d99127dc466e..6c60a4c21a31 100644
--- a/drivers/net/wireless/marvell/mwifiex/main.c
+++ b/drivers/net/wireless/marvell/mwifiex/main.c
@@ -1633,7 +1633,8 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
}
ret = devm_request_irq(dev, adapter->irq_wakeup,
- mwifiex_irq_wakeup_handler, IRQF_TRIGGER_LOW,
+ mwifiex_irq_wakeup_handler,
+ IRQF_TRIGGER_LOW | IRQF_NO_AUTOEN,
"wifi_wake", adapter);
if (ret) {
dev_err(dev, "Failed to request irq_wakeup %d (%d)\n",
@@ -1641,7 +1642,6 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
goto err_exit;
}
- disable_irq(adapter->irq_wakeup);
if (device_init_wakeup(dev, true)) {
dev_err(dev, "fail to init wakeup for mwifiex\n");
goto err_exit;
diff --git a/drivers/net/wireless/microchip/wilc1000/mon.c b/drivers/net/wireless/microchip/wilc1000/mon.c
index 03b7229a0ff5..c3d27aaec297 100644
--- a/drivers/net/wireless/microchip/wilc1000/mon.c
+++ b/drivers/net/wireless/microchip/wilc1000/mon.c
@@ -7,12 +7,12 @@
#include "cfg80211.h"
struct wilc_wfi_radiotap_hdr {
- struct ieee80211_radiotap_header hdr;
+ struct ieee80211_radiotap_header_fixed hdr;
u8 rate;
} __packed;
struct wilc_wfi_radiotap_cb_hdr {
- struct ieee80211_radiotap_header hdr;
+ struct ieee80211_radiotap_header_fixed hdr;
u8 rate;
u8 dump;
u16 tx_flags;
diff --git a/drivers/net/wireless/realtek/rtlwifi/efuse.c b/drivers/net/wireless/realtek/rtlwifi/efuse.c
index 2e945554ed6d..6c8efd2b2642 100644
--- a/drivers/net/wireless/realtek/rtlwifi/efuse.c
+++ b/drivers/net/wireless/realtek/rtlwifi/efuse.c
@@ -162,10 +162,19 @@ void efuse_write_1byte(struct ieee80211_hw *hw, u16 address, u8 value)
void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
+ u16 max_attempts = 10000;
u32 value32;
u8 readbyte;
u16 retry;
+ /*
+ * In case of USB devices, transfer speeds are limited, hence
+ * efuse I/O reads could be (way) slower. So, decrease (a lot)
+ * the read attempts in case of failures.
+ */
+ if (rtlpriv->rtlhal.interface == INTF_USB)
+ max_attempts = 10;
+
rtl_write_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 1,
(_offset & 0xff));
readbyte = rtl_read_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 2);
@@ -178,7 +187,7 @@ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
retry = 0;
value32 = rtl_read_dword(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL]);
- while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < 10000)) {
+ while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < max_attempts)) {
value32 = rtl_read_dword(rtlpriv,
rtlpriv->cfg->maps[EFUSE_CTRL]);
retry++;
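
The read_efuse_byte() change keeps the same busy-wait but caps it at 10 iterations on USB, where each poll is a slow register read over the bus. A rough standalone model of that bounded poll; read_status() stands in for the EFUSE_CTRL dword read, and the 10000/10 limits mirror the hunk:

#include <stdio.h>
#include <stdbool.h>

enum bus { BUS_PCI, BUS_USB };

static bool read_status(unsigned int attempt)
{
	/* pretend the "read done" bit comes up on the 5th poll */
	return attempt >= 5;
}

static int poll_efuse(enum bus bus)
{
	unsigned int max_attempts = (bus == BUS_USB) ? 10 : 10000;
	unsigned int retry = 0;

	while (!read_status(retry) && retry < max_attempts)
		retry++;

	return read_status(retry) ? 0 : -1;
}

int main(void)
{
	printf("pci: %d, usb: %d\n", poll_efuse(BUS_PCI), poll_efuse(BUS_USB));
	return 0;
}
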
diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c
index ede822d771aa..f2409830d87e 100644
--- a/drivers/net/wireless/silabs/wfx/main.c
+++ b/drivers/net/wireless/silabs/wfx/main.c
@@ -475,10 +475,23 @@ static int __init wfx_core_init(void)
{
int ret = 0;
- if (IS_ENABLED(CONFIG_SPI))
+ if (IS_ENABLED(CONFIG_SPI)) {
ret = spi_register_driver(&wfx_spi_driver);
- if (IS_ENABLED(CONFIG_MMC) && !ret)
+ if (ret)
+ goto out;
+ }
+ if (IS_ENABLED(CONFIG_MMC)) {
ret = sdio_register_driver(&wfx_sdio_driver);
+ if (ret)
+ goto unregister_spi;
+ }
+
+ return 0;
+
+unregister_spi:
+ if (IS_ENABLED(CONFIG_SPI))
+ spi_unregister_driver(&wfx_spi_driver);
+out:
return ret;
}
module_init(wfx_core_init);
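
wfx_core_init() now unwinds the SPI driver registration if the later SDIO registration fails, instead of returning with one driver left registered. The same goto-based unwind shape in a standalone sketch; register_a()/register_b() are stand-ins, not the real spi/sdio registration calls:

#include <stdio.h>
#include <errno.h>
#include <stdbool.h>

static bool a_registered, b_registered;

static int register_a(void) { a_registered = true; return 0; }
static void unregister_a(void) { a_registered = false; }
static int register_b(bool fail) { if (fail) return -ENODEV; b_registered = true; return 0; }

static int core_init(bool fail_b)
{
	int ret;

	ret = register_a();
	if (ret)
		goto out;

	ret = register_b(fail_b);
	if (ret)
		goto err_unreg_a;

	return 0;

err_unreg_a:
	unregister_a();	/* undo the earlier registration on failure */
out:
	return ret;
}

int main(void)
{
	printf("ok path:   %d (a=%d b=%d)\n", core_init(false), a_registered, b_registered);
	printf("fail path: %d (a=%d b=%d)\n", core_init(true), a_registered, b_registered);
	return 0;
}
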
diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
index 07be0adc13ec..d86a1bd7aab0 100644
--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
+++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
@@ -736,7 +736,7 @@ static const struct rhashtable_params hwsim_rht_params = {
};
struct hwsim_radiotap_hdr {
- struct ieee80211_radiotap_header hdr;
+ struct ieee80211_radiotap_header_fixed hdr;
__le64 rt_tsft;
u8 rt_flags;
u8 rt_rate;
@@ -745,7 +745,7 @@ struct hwsim_radiotap_hdr {
} __packed;
struct hwsim_radiotap_ack_hdr {
- struct ieee80211_radiotap_header hdr;
+ struct ieee80211_radiotap_header_fixed hdr;
u8 rt_flags;
u8 pad;
__le16 rt_channel;
diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
index 596bb11eeba5..396eb9437659 100644
--- a/drivers/nvme/host/apple.c
+++ b/drivers/nvme/host/apple.c
@@ -1387,7 +1387,7 @@ static void devm_apple_nvme_mempool_destroy(void *data)
mempool_destroy(data);
}
-static int apple_nvme_probe(struct platform_device *pdev)
+static struct apple_nvme *apple_nvme_alloc(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct apple_nvme *anv;
@@ -1395,7 +1395,7 @@ static int apple_nvme_probe(struct platform_device *pdev)
anv = devm_kzalloc(dev, sizeof(*anv), GFP_KERNEL);
if (!anv)
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);
anv->dev = get_device(dev);
anv->adminq.is_adminq = true;
@@ -1515,10 +1515,26 @@ static int apple_nvme_probe(struct platform_device *pdev)
goto put_dev;
}
+ return anv;
+put_dev:
+ put_device(anv->dev);
+ return ERR_PTR(ret);
+}
+
+static int apple_nvme_probe(struct platform_device *pdev)
+{
+ struct apple_nvme *anv;
+ int ret;
+
+ anv = apple_nvme_alloc(pdev);
+ if (IS_ERR(anv))
+ return PTR_ERR(anv);
+
anv->ctrl.admin_q = blk_mq_init_queue(&anv->admin_tagset);
if (IS_ERR(anv->ctrl.admin_q)) {
ret = -ENOMEM;
- goto put_dev;
+ anv->ctrl.admin_q = NULL;
+ goto out_uninit_ctrl;
}
nvme_reset_ctrl(&anv->ctrl);
@@ -1526,8 +1542,9 @@ static int apple_nvme_probe(struct platform_device *pdev)
return 0;
-put_dev:
- put_device(anv->dev);
+out_uninit_ctrl:
+ nvme_uninit_ctrl(&anv->ctrl);
+ nvme_put_ctrl(&anv->ctrl);
return ret;
}
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 965ca7d7a3de..5b6a6bd4e6e8 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -109,7 +109,7 @@ struct workqueue_struct *nvme_delete_wq;
EXPORT_SYMBOL_GPL(nvme_delete_wq);
static LIST_HEAD(nvme_subsystems);
-static DEFINE_MUTEX(nvme_subsystems_lock);
+DEFINE_MUTEX(nvme_subsystems_lock);
static DEFINE_IDA(nvme_instance_ida);
static dev_t nvme_ctrl_base_chr_devt;
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 875dee6ecd40..19a7f0160618 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -3,6 +3,7 @@
* Copyright (c) 2011-2014, Intel Corporation.
* Copyright (c) 2017-2021 Christoph Hellwig.
*/
+#include <linux/blk-integrity.h>
#include <linux/ptrace.h> /* for force_successful_syscall_return */
#include <linux/nvme_ioctl.h>
#include <linux/io_uring.h>
@@ -171,10 +172,15 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
struct request_queue *q = req->q;
struct nvme_ns *ns = q->queuedata;
struct block_device *bdev = ns ? ns->disk->part0 : NULL;
+ bool supports_metadata = bdev && blk_get_integrity(bdev->bd_disk);
+ bool has_metadata = meta_buffer && meta_len;
struct bio *bio = NULL;
void *meta = NULL;
int ret;
+ if (has_metadata && !supports_metadata)
+ return -EINVAL;
+
if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
struct iov_iter iter;
@@ -198,7 +204,7 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
if (bdev)
bio_set_dev(bio, bdev);
- if (bdev && meta_buffer && meta_len) {
+ if (has_metadata) {
meta = nvme_add_user_metadata(req, meta_buffer, meta_len,
meta_seed);
if (IS_ERR(meta)) {
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index ede2a14dad8b..32283301199f 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -17,6 +17,7 @@ MODULE_PARM_DESC(multipath,
static const char *nvme_iopolicy_names[] = {
[NVME_IOPOLICY_NUMA] = "numa",
[NVME_IOPOLICY_RR] = "round-robin",
+ [NVME_IOPOLICY_QD] = "queue-depth",
};
static int iopolicy = NVME_IOPOLICY_NUMA;
@@ -29,6 +30,8 @@ static int nvme_set_iopolicy(const char *val, const struct kernel_param *kp)
iopolicy = NVME_IOPOLICY_NUMA;
else if (!strncmp(val, "round-robin", 11))
iopolicy = NVME_IOPOLICY_RR;
+ else if (!strncmp(val, "queue-depth", 11))
+ iopolicy = NVME_IOPOLICY_QD;
else
return -EINVAL;
@@ -43,7 +46,7 @@ static int nvme_get_iopolicy(char *buf, const struct kernel_param *kp)
module_param_call(iopolicy, nvme_set_iopolicy, nvme_get_iopolicy,
&iopolicy, 0644);
MODULE_PARM_DESC(iopolicy,
- "Default multipath I/O policy; 'numa' (default) or 'round-robin'");
+ "Default multipath I/O policy; 'numa' (default), 'round-robin' or 'queue-depth'");
void nvme_mpath_default_iopolicy(struct nvme_subsystem *subsys)
{
@@ -128,6 +131,11 @@ void nvme_mpath_start_request(struct request *rq)
struct nvme_ns *ns = rq->q->queuedata;
struct gendisk *disk = ns->head->disk;
+ if (READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD) {
+ atomic_inc(&ns->ctrl->nr_active);
+ nvme_req(rq)->flags |= NVME_MPATH_CNT_ACTIVE;
+ }
+
if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq))
return;
@@ -141,6 +149,9 @@ void nvme_mpath_end_request(struct request *rq)
{
struct nvme_ns *ns = rq->q->queuedata;
+ if (nvme_req(rq)->flags & NVME_MPATH_CNT_ACTIVE)
+ atomic_dec_if_positive(&ns->ctrl->nr_active);
+
if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
return;
bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
@@ -154,7 +165,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
int srcu_idx;
srcu_idx = srcu_read_lock(&ctrl->srcu);
- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
+ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
+ srcu_read_lock_held(&ctrl->srcu)) {
if (!ns->head->disk)
continue;
kblockd_schedule_work(&ns->head->requeue_work);
@@ -198,7 +210,8 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
int srcu_idx;
srcu_idx = srcu_read_lock(&ctrl->srcu);
- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
+ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
+ srcu_read_lock_held(&ctrl->srcu)) {
nvme_mpath_clear_current_path(ns);
kblockd_schedule_work(&ns->head->requeue_work);
}
@@ -213,7 +226,8 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
int srcu_idx;
srcu_idx = srcu_read_lock(&head->srcu);
- list_for_each_entry_rcu(ns, &head->list, siblings) {
+ list_for_each_entry_srcu(ns, &head->list, siblings,
+ srcu_read_lock_held(&head->srcu)) {
if (capacity != get_capacity(ns->disk))
clear_bit(NVME_NS_READY, &ns->flags);
}
@@ -245,7 +259,8 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
int found_distance = INT_MAX, fallback_distance = INT_MAX, distance;
struct nvme_ns *found = NULL, *fallback = NULL, *ns;
- list_for_each_entry_rcu(ns, &head->list, siblings) {
+ list_for_each_entry_srcu(ns, &head->list, siblings,
+ srcu_read_lock_held(&head->srcu)) {
if (nvme_path_is_disabled(ns))
continue;
@@ -290,10 +305,15 @@ static struct nvme_ns *nvme_next_ns(struct nvme_ns_head *head,
return list_first_or_null_rcu(&head->list, struct nvme_ns, siblings);
}
-static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
- int node, struct nvme_ns *old)
+static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head)
{
struct nvme_ns *ns, *found = NULL;
+ int node = numa_node_id();
+ struct nvme_ns *old = srcu_dereference(head->current_path[node],
+ &head->srcu);
+
+ if (unlikely(!old))
+ return __nvme_find_path(head, node);
if (list_is_singular(&head->list)) {
if (nvme_path_is_disabled(old))
@@ -333,13 +353,50 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head,
return found;
}
+static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
+{
+ struct nvme_ns *best_opt = NULL, *best_nonopt = NULL, *ns;
+ unsigned int min_depth_opt = UINT_MAX, min_depth_nonopt = UINT_MAX;
+ unsigned int depth;
+
+ list_for_each_entry_srcu(ns, &head->list, siblings,
+ srcu_read_lock_held(&head->srcu)) {
+ if (nvme_path_is_disabled(ns))
+ continue;
+
+ depth = atomic_read(&ns->ctrl->nr_active);
+
+ switch (ns->ana_state) {
+ case NVME_ANA_OPTIMIZED:
+ if (depth < min_depth_opt) {
+ min_depth_opt = depth;
+ best_opt = ns;
+ }
+ break;
+ case NVME_ANA_NONOPTIMIZED:
+ if (depth < min_depth_nonopt) {
+ min_depth_nonopt = depth;
+ best_nonopt = ns;
+ }
+ break;
+ default:
+ break;
+ }
+
+ if (min_depth_opt == 0)
+ return best_opt;
+ }
+
+ return best_opt ? best_opt : best_nonopt;
+}
+
static inline bool nvme_path_is_optimized(struct nvme_ns *ns)
{
return ns->ctrl->state == NVME_CTRL_LIVE &&
ns->ana_state == NVME_ANA_OPTIMIZED;
}
-inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
+static struct nvme_ns *nvme_numa_path(struct nvme_ns_head *head)
{
int node = numa_node_id();
struct nvme_ns *ns;
@@ -347,19 +404,32 @@ inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
ns = srcu_dereference(head->current_path[node], &head->srcu);
if (unlikely(!ns))
return __nvme_find_path(head, node);
-
- if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_RR)
- return nvme_round_robin_path(head, node, ns);
if (unlikely(!nvme_path_is_optimized(ns)))
return __nvme_find_path(head, node);
return ns;
}
+inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
+{
+ switch (READ_ONCE(head->subsys->iopolicy)) {
+ case NVME_IOPOLICY_QD:
+ return nvme_queue_depth_path(head);
+ case NVME_IOPOLICY_RR:
+ return nvme_round_robin_path(head);
+ default:
+ return nvme_numa_path(head);
+ }
+}
+
static bool nvme_available_path(struct nvme_ns_head *head)
{
struct nvme_ns *ns;
- list_for_each_entry_rcu(ns, &head->list, siblings) {
+ if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
+ return NULL;
+
+ list_for_each_entry_srcu(ns, &head->list, siblings,
+ srcu_read_lock_held(&head->srcu)) {
if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
continue;
switch (ns->ctrl->state) {
@@ -720,7 +790,8 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
return 0;
srcu_idx = srcu_read_lock(&ctrl->srcu);
- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
+ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
+ srcu_read_lock_held(&ctrl->srcu)) {
unsigned nsid;
again:
nsid = le32_to_cpu(desc->nsids[n]);
@@ -827,6 +898,29 @@ static ssize_t nvme_subsys_iopolicy_show(struct device *dev,
nvme_iopolicy_names[READ_ONCE(subsys->iopolicy)]);
}
+static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys,
+ int iopolicy)
+{
+ struct nvme_ctrl *ctrl;
+ int old_iopolicy = READ_ONCE(subsys->iopolicy);
+
+ if (old_iopolicy == iopolicy)
+ return;
+
+ WRITE_ONCE(subsys->iopolicy, iopolicy);
+
+ /* iopolicy changes clear the mpath by design */
+ mutex_lock(&nvme_subsystems_lock);
+ list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ nvme_mpath_clear_ctrl_paths(ctrl);
+ mutex_unlock(&nvme_subsystems_lock);
+
+ pr_notice("subsysnqn %s iopolicy changed from %s to %s\n",
+ subsys->subnqn,
+ nvme_iopolicy_names[old_iopolicy],
+ nvme_iopolicy_names[iopolicy]);
+}
+
static ssize_t nvme_subsys_iopolicy_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
@@ -836,7 +930,7 @@ static ssize_t nvme_subsys_iopolicy_store(struct device *dev,
for (i = 0; i < ARRAY_SIZE(nvme_iopolicy_names); i++) {
if (sysfs_streq(buf, nvme_iopolicy_names[i])) {
- WRITE_ONCE(subsys->iopolicy, i);
+ nvme_subsys_iopolicy_update(subsys, i);
return count;
}
}
@@ -912,8 +1006,7 @@ void nvme_mpath_shutdown_disk(struct nvme_ns_head *head)
{
if (!head->disk)
return;
- kblockd_schedule_work(&head->requeue_work);
- if (test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
+ if (test_and_clear_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
nvme_cdev_del(&head->cdev, &head->cdev_device);
/*
* requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared
@@ -923,6 +1016,12 @@ void nvme_mpath_shutdown_disk(struct nvme_ns_head *head)
kblockd_schedule_work(&head->requeue_work);
del_gendisk(head->disk);
}
+ /*
+ * requeue I/O after NVME_NSHEAD_DISK_LIVE has been cleared
+ * to allow multipath to fail all I/O.
+ */
+ synchronize_srcu(&head->srcu);
+ kblockd_schedule_work(&head->requeue_work);
}
void nvme_mpath_remove_disk(struct nvme_ns_head *head)
@@ -954,6 +1053,9 @@ int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
!(ctrl->subsys->cmic & NVME_CTRL_CMIC_ANA))
return 0;
+ /* initialize this in the identify path to cover controller resets */
+ atomic_set(&ctrl->nr_active, 0);
+
if (!ctrl->max_namespaces ||
ctrl->max_namespaces > le32_to_cpu(id->nn)) {
dev_err(ctrl->device,
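
The queue-depth iopolicy added above (nvme_queue_depth_path(), selected from nvme_find_path()) picks, among usable paths, the ANA-optimized one with the fewest requests in flight, using the nr_active counter bumped in nvme_mpath_start_request() and dropped in nvme_mpath_end_request(); it falls back to the least-busy non-optimized path only when no optimized path exists. A standalone sketch of that selection with synthetic data; struct path and the names are invented for the example:

#include <stdio.h>
#include <limits.h>

enum ana_state { OPTIMIZED, NONOPTIMIZED };

struct path {
	const char *name;
	enum ana_state ana;
	unsigned int nr_active;	/* outstanding I/O on this path's ctrl */
};

static const struct path *pick_path(const struct path *p, int n)
{
	const struct path *best_opt = NULL, *best_nonopt = NULL;
	unsigned int min_opt = UINT_MAX, min_nonopt = UINT_MAX;

	for (int i = 0; i < n; i++) {
		if (p[i].ana == OPTIMIZED && p[i].nr_active < min_opt) {
			min_opt = p[i].nr_active;
			best_opt = &p[i];
		} else if (p[i].ana == NONOPTIMIZED &&
			   p[i].nr_active < min_nonopt) {
			min_nonopt = p[i].nr_active;
			best_nonopt = &p[i];
		}
		if (min_opt == 0)	/* an idle optimized path: stop early */
			return best_opt;
	}
	return best_opt ? best_opt : best_nonopt;
}

int main(void)
{
	const struct path paths[] = {
		{ "nvme0c0n1", OPTIMIZED,    12 },
		{ "nvme0c1n1", OPTIMIZED,     3 },
		{ "nvme0c2n1", NONOPTIMIZED,  0 },
	};

	printf("selected: %s\n", pick_path(paths, 3)->name);
	return 0;
}
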
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 14a867245c29..bddc068d58c7 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -48,6 +48,7 @@ extern unsigned int admin_timeout;
extern struct workqueue_struct *nvme_wq;
extern struct workqueue_struct *nvme_reset_wq;
extern struct workqueue_struct *nvme_delete_wq;
+extern struct mutex nvme_subsystems_lock;
/*
* List of workarounds for devices that required behavior not specified in
@@ -199,6 +200,7 @@ enum {
NVME_REQ_CANCELLED = (1 << 0),
NVME_REQ_USERCMD = (1 << 1),
NVME_MPATH_IO_STATS = (1 << 2),
+ NVME_MPATH_CNT_ACTIVE = (1 << 3),
};
static inline struct nvme_request *nvme_req(struct request *req)
@@ -364,6 +366,7 @@ struct nvme_ctrl {
size_t ana_log_size;
struct timer_list anatt_timer;
struct work_struct ana_work;
+ atomic_t nr_active;
#endif
#ifdef CONFIG_NVME_AUTH
@@ -411,6 +414,7 @@ static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
enum nvme_iopolicy {
NVME_IOPOLICY_NUMA,
NVME_IOPOLICY_RR,
+ NVME_IOPOLICY_QD,
};
struct nvme_subsystem {
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b701969cf1c2..d525fa1229d7 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -153,6 +153,7 @@ struct nvme_dev {
/* host memory buffer support: */
u64 host_mem_size;
u32 nr_host_mem_descs;
+ u32 host_mem_descs_size;
dma_addr_t host_mem_descs_dma;
struct nvme_host_mem_buf_desc *host_mem_descs;
void **host_mem_desc_bufs;
@@ -904,9 +905,10 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
{
+ struct request *req;
+
spin_lock(&nvmeq->sq_lock);
- while (!rq_list_empty(*rqlist)) {
- struct request *req = rq_list_pop(rqlist);
+ while ((req = rq_list_pop(rqlist))) {
struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
nvme_sq_copy_cmd(nvmeq, &iod->cmd);
@@ -932,31 +934,25 @@ static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
static void nvme_queue_rqs(struct request **rqlist)
{
- struct request *req, *next, *prev = NULL;
+ struct request *submit_list = NULL;
struct request *requeue_list = NULL;
+ struct request **requeue_lastp = &requeue_list;
+ struct nvme_queue *nvmeq = NULL;
+ struct request *req;
- rq_list_for_each_safe(rqlist, req, next) {
- struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
-
- if (!nvme_prep_rq_batch(nvmeq, req)) {
- /* detach 'req' and add to remainder list */
- rq_list_move(rqlist, &requeue_list, req, prev);
-
- req = prev;
- if (!req)
- continue;
- }
+ while ((req = rq_list_pop(rqlist))) {
+ if (nvmeq && nvmeq != req->mq_hctx->driver_data)
+ nvme_submit_cmds(nvmeq, &submit_list);
+ nvmeq = req->mq_hctx->driver_data;
- if (!next || req->mq_hctx != next->mq_hctx) {
- /* detach rest of list, and submit */
- req->rq_next = NULL;
- nvme_submit_cmds(nvmeq, rqlist);
- *rqlist = next;
- prev = NULL;
- } else
- prev = req;
+ if (nvme_prep_rq_batch(nvmeq, req))
+ rq_list_add(&submit_list, req); /* reverse order */
+ else
+ rq_list_add_tail(&requeue_lastp, req);
}
+ if (nvmeq)
+ nvme_submit_cmds(nvmeq, &submit_list);
*rqlist = requeue_list;
}
@@ -1929,10 +1925,10 @@ static void nvme_free_host_mem(struct nvme_dev *dev)
kfree(dev->host_mem_desc_bufs);
dev->host_mem_desc_bufs = NULL;
- dma_free_coherent(dev->dev,
- dev->nr_host_mem_descs * sizeof(*dev->host_mem_descs),
+ dma_free_coherent(dev->dev, dev->host_mem_descs_size,
dev->host_mem_descs, dev->host_mem_descs_dma);
dev->host_mem_descs = NULL;
+ dev->host_mem_descs_size = 0;
dev->nr_host_mem_descs = 0;
}
@@ -1940,7 +1936,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
u32 chunk_size)
{
struct nvme_host_mem_buf_desc *descs;
- u32 max_entries, len;
+ u32 max_entries, len, descs_size;
dma_addr_t descs_dma;
int i = 0;
void **bufs;
@@ -1953,8 +1949,9 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
if (dev->ctrl.hmmaxd && dev->ctrl.hmmaxd < max_entries)
max_entries = dev->ctrl.hmmaxd;
- descs = dma_alloc_coherent(dev->dev, max_entries * sizeof(*descs),
- &descs_dma, GFP_KERNEL);
+ descs_size = max_entries * sizeof(*descs);
+ descs = dma_alloc_coherent(dev->dev, descs_size, &descs_dma,
+ GFP_KERNEL);
if (!descs)
goto out;
@@ -1983,6 +1980,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
dev->host_mem_size = size;
dev->host_mem_descs = descs;
dev->host_mem_descs_dma = descs_dma;
+ dev->host_mem_descs_size = descs_size;
dev->host_mem_desc_bufs = bufs;
return 0;
@@ -1997,8 +1995,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
kfree(bufs);
out_free_descs:
- dma_free_coherent(dev->dev, max_entries * sizeof(*descs), descs,
- descs_dma);
+ dma_free_coherent(dev->dev, descs_size, descs, descs_dma);
out:
dev->host_mem_descs = NULL;
return -ENOMEM;
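
The reworked nvme_queue_rqs() above walks the plugged list with rq_list_pop(), flushes the accumulated batch whenever the hardware queue changes, and diverts requests that fail nvme_prep_rq_batch() onto a requeue list handed back to the caller. A simplified standalone model of that loop; the list type, queue ids and prep_ok() are invented for the example, and unlike the driver the requeue list here is LIFO:

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct req {
	int id;
	int qid;		/* hardware queue this request maps to */
	struct req *next;
};

static struct req *list_pop(struct req **list)
{
	struct req *r = *list;

	if (r)
		*list = r->next;
	return r;
}

static void list_push(struct req **list, struct req *r)
{
	r->next = *list;
	*list = r;
}

static void submit_batch(int qid, struct req **batch)
{
	struct req *r;

	printf("queue %d:", qid);
	while ((r = list_pop(batch)))
		printf(" req%d", r->id);
	printf("\n");
}

static bool prep_ok(const struct req *r)
{
	return r->id != 4;	/* pretend req4 cannot be prepped */
}

int main(void)
{
	struct req reqs[] = {
		{ 1, 0 }, { 2, 0 }, { 3, 1 }, { 4, 1 }, { 5, 1 },
	};
	struct req *incoming = NULL, *batch = NULL, *requeue = NULL, *r;
	int cur_qid = -1;

	for (int i = 4; i >= 0; i--)	/* build incoming list req1..req5 */
		list_push(&incoming, &reqs[i]);

	while ((r = list_pop(&incoming))) {
		if (cur_qid >= 0 && cur_qid != r->qid)
			submit_batch(cur_qid, &batch);
		cur_qid = r->qid;

		if (prep_ok(r))
			list_push(&batch, r);	/* reverse order, as the hunk notes */
		else
			list_push(&requeue, r);
	}
	if (cur_qid >= 0)
		submit_batch(cur_qid, &batch);

	printf("requeued:");
	while ((r = list_pop(&requeue)))
		printf(" req%d", r->id);
	printf("\n");
	return 0;
}
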
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index bf502ba8da95..366fbdc56dec 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -471,6 +471,7 @@ int __initdata dt_root_addr_cells;
int __initdata dt_root_size_cells;
void *initial_boot_params __ro_after_init;
+phys_addr_t initial_boot_params_pa __ro_after_init;
#ifdef CONFIG_OF_EARLY_FLATTREE
@@ -1270,17 +1271,18 @@ static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
return ptr;
}
-bool __init early_init_dt_verify(void *params)
+bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys)
{
- if (!params)
+ if (!dt_virt)
return false;
/* check device tree validity */
- if (fdt_check_header(params))
+ if (fdt_check_header(dt_virt))
return false;
/* Setup flat device-tree pointer */
- initial_boot_params = params;
+ initial_boot_params = dt_virt;
+ initial_boot_params_pa = dt_phys;
of_fdt_crc32 = crc32_be(~0, initial_boot_params,
fdt_totalsize(initial_boot_params));
return true;
@@ -1306,11 +1308,11 @@ void __init early_init_dt_scan_nodes(void)
early_init_dt_check_for_usable_mem_range();
}
-bool __init early_init_dt_scan(void *params)
+bool __init early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys)
{
bool status;
- status = early_init_dt_verify(params);
+ status = early_init_dt_verify(dt_virt, dt_phys);
if (!status)
return false;
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index 68278340cecf..3b98a57f1f07 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -301,7 +301,7 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
}
/* Remove memory reservation for the current device tree. */
- ret = fdt_find_and_del_mem_rsv(fdt, __pa(initial_boot_params),
+ ret = fdt_find_and_del_mem_rsv(fdt, initial_boot_params_pa,
fdt_totalsize(initial_boot_params));
if (ret == -EINVAL) {
pr_err("Error removing memory reservation.\n");
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 4f58345b5c68..7986113adc7d 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -4017,10 +4017,6 @@ static int __init of_unittest(void)
add_taint(TAINT_TEST, LOCKDEP_STILL_OK);
/* adding data for unittest */
-
- if (IS_ENABLED(CONFIG_UML))
- unittest_unflatten_overlay_base();
-
res = unittest_data_add();
if (res)
return res;
diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
index 2c87e7728a65..f76a358e2b5b 100644
--- a/drivers/pci/controller/cadence/pci-j721e.c
+++ b/drivers/pci/controller/cadence/pci-j721e.c
@@ -7,6 +7,8 @@
*/
#include <linux/clk.h>
+#include <linux/clk-provider.h>
+#include <linux/container_of.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/io.h>
@@ -22,6 +24,8 @@
#include "../../pci.h"
#include "pcie-cadence.h"
+#define cdns_pcie_to_rc(p) container_of(p, struct cdns_pcie_rc, pcie)
+
#define ENABLE_REG_SYS_2 0x108
#define STATUS_REG_SYS_2 0x508
#define STATUS_CLR_REG_SYS_2 0x708
@@ -42,18 +46,17 @@ enum link_status {
};
#define J721E_MODE_RC BIT(7)
-#define LANE_COUNT_MASK BIT(8)
#define LANE_COUNT(n) ((n) << 8)
#define GENERATION_SEL_MASK GENMASK(1, 0)
-#define MAX_LANES 2
-
struct j721e_pcie {
struct cdns_pcie *cdns_pcie;
struct clk *refclk;
u32 mode;
u32 num_lanes;
+ u32 max_lanes;
+ struct gpio_desc *reset_gpio;
void __iomem *user_cfg_base;
void __iomem *intd_cfg_base;
u32 linkdown_irq_regfield;
@@ -71,6 +74,7 @@ struct j721e_pcie_data {
unsigned int quirk_disable_flr:1;
u32 linkdown_irq_regfield;
unsigned int byte_access_allowed:1;
+ unsigned int max_lanes;
};
static inline u32 j721e_pcie_user_readl(struct j721e_pcie *pcie, u32 offset)
@@ -206,11 +210,15 @@ static int j721e_pcie_set_lane_count(struct j721e_pcie *pcie,
{
struct device *dev = pcie->cdns_pcie->dev;
u32 lanes = pcie->num_lanes;
+ u32 mask = BIT(8);
u32 val = 0;
int ret;
+ if (pcie->max_lanes == 4)
+ mask = GENMASK(9, 8);
+
val = LANE_COUNT(lanes - 1);
- ret = regmap_update_bits(syscon, offset, LANE_COUNT_MASK, val);
+ ret = regmap_update_bits(syscon, offset, mask, val);
if (ret)
dev_err(dev, "failed to set link count\n");
@@ -290,11 +298,13 @@ static const struct j721e_pcie_data j721e_pcie_rc_data = {
.quirk_retrain_flag = true,
.byte_access_allowed = false,
.linkdown_irq_regfield = LINK_DOWN,
+ .max_lanes = 2,
};
static const struct j721e_pcie_data j721e_pcie_ep_data = {
.mode = PCI_MODE_EP,
.linkdown_irq_regfield = LINK_DOWN,
+ .max_lanes = 2,
};
static const struct j721e_pcie_data j7200_pcie_rc_data = {
@@ -302,23 +312,27 @@ static const struct j721e_pcie_data j7200_pcie_rc_data = {
.quirk_detect_quiet_flag = true,
.linkdown_irq_regfield = J7200_LINK_DOWN,
.byte_access_allowed = true,
+ .max_lanes = 2,
};
static const struct j721e_pcie_data j7200_pcie_ep_data = {
.mode = PCI_MODE_EP,
.quirk_detect_quiet_flag = true,
.quirk_disable_flr = true,
+ .max_lanes = 2,
};
static const struct j721e_pcie_data am64_pcie_rc_data = {
.mode = PCI_MODE_RC,
.linkdown_irq_regfield = J7200_LINK_DOWN,
.byte_access_allowed = true,
+ .max_lanes = 1,
};
static const struct j721e_pcie_data am64_pcie_ep_data = {
.mode = PCI_MODE_EP,
.linkdown_irq_regfield = J7200_LINK_DOWN,
+ .max_lanes = 1,
};
static const struct of_device_id of_j721e_pcie_match[] = {
@@ -432,9 +446,13 @@ static int j721e_pcie_probe(struct platform_device *pdev)
pcie->user_cfg_base = base;
ret = of_property_read_u32(node, "num-lanes", &num_lanes);
- if (ret || num_lanes > MAX_LANES)
+ if (ret || num_lanes > data->max_lanes) {
+ dev_warn(dev, "num-lanes property not provided or invalid, setting num-lanes to 1\n");
num_lanes = 1;
+ }
+
pcie->num_lanes = num_lanes;
+ pcie->max_lanes = data->max_lanes;
if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48)))
return -EINVAL;
@@ -475,6 +493,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
dev_err(dev, "Failed to get reset GPIO\n");
goto err_get_sync;
}
+ pcie->reset_gpio = gpiod;
ret = cdns_pcie_init_phy(dev, cdns_pcie);
if (ret) {
@@ -497,15 +516,14 @@ static int j721e_pcie_probe(struct platform_device *pdev)
pcie->refclk = clk;
/*
- * "Power Sequencing and Reset Signal Timings" table in
- * PCI EXPRESS CARD ELECTROMECHANICAL SPECIFICATION, REV. 3.0
- * indicates PERST# should be deasserted after minimum of 100us
- * once REFCLK is stable. The REFCLK to the connector in RC
- * mode is selected while enabling the PHY. So deassert PERST#
- * after 100 us.
+ * Section 2.2 of the PCI Express Card Electromechanical
+ * Specification (Revision 5.1) mandates that the deassertion
+ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
+ * This shall ensure that the power and the reference clock
+ * are stable.
*/
if (gpiod) {
- usleep_range(100, 200);
+ msleep(PCIE_T_PVPERL_MS);
gpiod_set_value_cansleep(gpiod, 1);
}
@@ -554,6 +572,86 @@ static void j721e_pcie_remove(struct platform_device *pdev)
pm_runtime_disable(dev);
}
+static int j721e_pcie_suspend_noirq(struct device *dev)
+{
+ struct j721e_pcie *pcie = dev_get_drvdata(dev);
+
+ if (pcie->mode == PCI_MODE_RC) {
+ gpiod_set_value_cansleep(pcie->reset_gpio, 0);
+ clk_disable_unprepare(pcie->refclk);
+ }
+
+ cdns_pcie_disable_phy(pcie->cdns_pcie);
+
+ return 0;
+}
+
+static int j721e_pcie_resume_noirq(struct device *dev)
+{
+ struct j721e_pcie *pcie = dev_get_drvdata(dev);
+ struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
+ int ret;
+
+ ret = j721e_pcie_ctrl_init(pcie);
+ if (ret < 0)
+ return ret;
+
+ j721e_pcie_config_link_irq(pcie);
+
+ /*
+ * This is not called explicitly in the probe, it is called by
+ * cdns_pcie_init_phy().
+ */
+ ret = cdns_pcie_enable_phy(pcie->cdns_pcie);
+ if (ret < 0)
+ return ret;
+
+ if (pcie->mode == PCI_MODE_RC) {
+ struct cdns_pcie_rc *rc = cdns_pcie_to_rc(cdns_pcie);
+
+ ret = clk_prepare_enable(pcie->refclk);
+ if (ret < 0)
+ return ret;
+
+ /*
+ * Section 2.2 of the PCI Express Card Electromechanical
+ * Specification (Revision 5.1) mandates that the deassertion
+ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
+ * This shall ensure that the power and the reference clock
+ * are stable.
+ */
+ if (pcie->reset_gpio) {
+ msleep(PCIE_T_PVPERL_MS);
+ gpiod_set_value_cansleep(pcie->reset_gpio, 1);
+ }
+
+ ret = cdns_pcie_host_link_setup(rc);
+ if (ret < 0) {
+ clk_disable_unprepare(pcie->refclk);
+ return ret;
+ }
+
+ /*
+ * Reset internal status of BARs to force reinitialization in
+ * cdns_pcie_host_init().
+ */
+ for (enum cdns_pcie_rp_bar bar = RP_BAR0; bar <= RP_NO_BAR; bar++)
+ rc->avail_ib_bar[bar] = true;
+
+ ret = cdns_pcie_host_init(rc);
+ if (ret) {
+ clk_disable_unprepare(pcie->refclk);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static DEFINE_NOIRQ_DEV_PM_OPS(j721e_pcie_pm_ops,
+ j721e_pcie_suspend_noirq,
+ j721e_pcie_resume_noirq);
+
static struct platform_driver j721e_pcie_driver = {
.probe = j721e_pcie_probe,
.remove_new = j721e_pcie_remove,
@@ -561,6 +659,7 @@ static struct platform_driver j721e_pcie_driver = {
.name = "j721e-pcie",
.of_match_table = of_j721e_pcie_match,
.suppress_bind_attrs = true,
+ .pm = pm_sleep_ptr(&j721e_pcie_pm_ops),
},
};
builtin_platform_driver(j721e_pcie_driver);
diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index 5b14f7ee3c79..8af95e9da7ce 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -485,8 +485,7 @@ static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc)
return cdns_pcie_host_map_dma_ranges(rc);
}
-static int cdns_pcie_host_init(struct device *dev,
- struct cdns_pcie_rc *rc)
+int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
{
int err;
@@ -497,6 +496,30 @@ static int cdns_pcie_host_init(struct device *dev,
return cdns_pcie_host_init_address_translation(rc);
}
+int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
+{
+ struct cdns_pcie *pcie = &rc->pcie;
+ struct device *dev = rc->pcie.dev;
+ int ret;
+
+ if (rc->quirk_detect_quiet_flag)
+ cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
+
+ cdns_pcie_host_enable_ptm_response(pcie);
+
+ ret = cdns_pcie_start_link(pcie);
+ if (ret) {
+ dev_err(dev, "Failed to start link\n");
+ return ret;
+ }
+
+ ret = cdns_pcie_host_start_link(rc);
+ if (ret)
+ dev_dbg(dev, "PCIe link never came up\n");
+
+ return 0;
+}
+
int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
{
struct device *dev = rc->pcie.dev;
@@ -533,25 +556,14 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
return PTR_ERR(rc->cfg_base);
rc->cfg_res = res;
- if (rc->quirk_detect_quiet_flag)
- cdns_pcie_detect_quiet_min_delay_set(&rc->pcie);
-
- cdns_pcie_host_enable_ptm_response(pcie);
-
- ret = cdns_pcie_start_link(pcie);
- if (ret) {
- dev_err(dev, "Failed to start link\n");
- return ret;
- }
-
- ret = cdns_pcie_host_start_link(rc);
+ ret = cdns_pcie_host_link_setup(rc);
if (ret)
- dev_dbg(dev, "PCIe link never came up\n");
+ return ret;
for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++)
rc->avail_ib_bar[bar] = true;
- ret = cdns_pcie_host_init(dev, rc);
+ ret = cdns_pcie_host_init(rc);
if (ret)
return ret;
diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
index 373cb50fcd15..d55dfd173f22 100644
--- a/drivers/pci/controller/cadence/pcie-cadence.h
+++ b/drivers/pci/controller/cadence/pcie-cadence.h
@@ -515,10 +515,22 @@ static inline bool cdns_pcie_link_up(struct cdns_pcie *pcie)
}
#ifdef CONFIG_PCIE_CADENCE_HOST
+int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc);
+int cdns_pcie_host_init(struct cdns_pcie_rc *rc);
int cdns_pcie_host_setup(struct cdns_pcie_rc *rc);
void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
int where);
#else
+static inline int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc)
+{
+ return 0;
+}
+
+static inline int cdns_pcie_host_init(struct cdns_pcie_rc *rc)
+{
+ return 0;
+}
+
static inline int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
{
return 0;
diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
index c5475830c835..bf9a961c9f27 100644
--- a/drivers/pci/controller/dwc/pci-keystone.c
+++ b/drivers/pci/controller/dwc/pci-keystone.c
@@ -464,6 +464,17 @@ static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus,
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u32 reg;
+ /*
+ * Checking whether the link is up here is a last line of defense
+ * against platforms that forward errors on the system bus as
+ * SError upon PCI configuration transactions issued when the link
+ * is down. This check is racy by definition and does not stop
+ * the system from triggering an SError if the link goes down
+ * after this check is performed.
+ */
+ if (!dw_pcie_link_up(pci))
+ return NULL;
+
reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
CFG_FUNC(PCI_FUNC(devfn));
if (!pci_is_root_bus(bus->parent))
@@ -1104,6 +1115,7 @@ static int ks_pcie_am654_set_mode(struct device *dev,
static const struct ks_pcie_of_data ks_pcie_rc_of_data = {
.host_ops = &ks_pcie_host_ops,
+ .mode = DW_PCIE_RC_TYPE,
.version = DW_PCIE_VER_365A,
};
diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
index 1e3c3192d122..954b773eebb1 100644
--- a/drivers/pci/controller/pcie-rockchip-ep.c
+++ b/drivers/pci/controller/pcie-rockchip-ep.c
@@ -63,15 +63,25 @@ static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region));
}
+static int rockchip_pcie_ep_ob_atu_num_bits(struct rockchip_pcie *rockchip,
+ u64 pci_addr, size_t size)
+{
+ int num_pass_bits = fls64(pci_addr ^ (pci_addr + size - 1));
+
+ return clamp(num_pass_bits,
+ ROCKCHIP_PCIE_AT_MIN_NUM_BITS,
+ ROCKCHIP_PCIE_AT_MAX_NUM_BITS);
+}
+
static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
u32 r, u64 cpu_addr, u64 pci_addr,
size_t size)
{
- int num_pass_bits = fls64(size - 1);
+ int num_pass_bits;
u32 addr0, addr1, desc0;
- if (num_pass_bits < 8)
- num_pass_bits = 8;
+ num_pass_bits = rockchip_pcie_ep_ob_atu_num_bits(rockchip,
+ pci_addr, size);
addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
(lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
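
The new rockchip_pcie_ep_ob_atu_num_bits() above sizes the outbound window from the span actually covered, fls64(pci_addr ^ (pci_addr + size - 1)), rather than from the size alone, then clamps the result to the controller's 8..20 bit range; sizing from the size alone can undersize the window when pci_addr is not aligned to it. A standalone worked example, with fls64() and the clamp reimplemented for a plain userspace build:

#include <stdio.h>
#include <stdint.h>

static int fls64(uint64_t x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;	/* position of the most significant set bit, 1-based */
}

static int clamp_int(int v, int lo, int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

static int ob_atu_num_bits(uint64_t pci_addr, uint64_t size)
{
	int bits = fls64(pci_addr ^ (pci_addr + size - 1));

	return clamp_int(bits, 8, 20);
}

int main(void)
{
	/* 64 KiB window, aligned vs. not aligned to its own size */
	printf("aligned 64K:   %d bits\n", ob_atu_num_bits(0x10000, 0x10000));
	printf("unaligned 64K: %d bits\n", ob_atu_num_bits(0x18000, 0x10000));
	printf("tiny window:   %d bits\n", ob_atu_num_bits(0x1000, 16));
	return 0;
}
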
diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
index 6111de35f84c..15ee949f2485 100644
--- a/drivers/pci/controller/pcie-rockchip.h
+++ b/drivers/pci/controller/pcie-rockchip.h
@@ -245,6 +245,10 @@
(PCIE_EP_PF_CONFIG_REGS_BASE + (((fn) << 12) & GENMASK(19, 12)))
#define ROCKCHIP_PCIE_EP_VIRT_FUNC_BASE(fn) \
(PCIE_EP_PF_CONFIG_REGS_BASE + 0x10000 + (((fn) << 12) & GENMASK(19, 12)))
+
+#define ROCKCHIP_PCIE_AT_MIN_NUM_BITS 8
+#define ROCKCHIP_PCIE_AT_MAX_NUM_BITS 20
+
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
(PCIE_CORE_AXI_CONF_BASE + 0x0828 + (fn) * 0x0040 + (bar) * 0x0008)
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
index 34e7191f9508..87154992ea11 100644
--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
+++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
@@ -600,12 +600,18 @@ static int pci_epf_mhi_bind(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
+ struct device *dev = &epf->dev;
struct platform_device *pdev = to_platform_device(epc->dev.parent);
struct resource *res;
int ret;
/* Get MMIO base address from Endpoint controller */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio");
+ if (!res) {
+ dev_err(dev, "Failed to get \"mmio\" resource\n");
+ return -ENODEV;
+ }
+
epf_mhi->mmio_phys = res->start;
epf_mhi->mmio_size = resource_size(res);
diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
index a7d3a92391a4..d06623c751f8 100644
--- a/drivers/pci/endpoint/pci-epc-core.c
+++ b/drivers/pci/endpoint/pci-epc-core.c
@@ -663,18 +663,18 @@ void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
if (!epc || IS_ERR(epc) || !epf)
return;
+ mutex_lock(&epc->list_lock);
if (type == PRIMARY_INTERFACE) {
func_no = epf->func_no;
list = &epf->list;
+ epf->epc = NULL;
} else {
func_no = epf->sec_epc_func_no;
list = &epf->sec_epc_list;
+ epf->sec_epc = NULL;
}
-
- mutex_lock(&epc->list_lock);
clear_bit(func_no, &epc->function_num_map);
list_del(list);
- epf->epc = NULL;
mutex_unlock(&epc->list_lock);
}
EXPORT_SYMBOL_GPL(pci_epc_remove_epf);
diff --git a/drivers/pci/hotplug/cpqphp_pci.c b/drivers/pci/hotplug/cpqphp_pci.c
index 3b248426a9f4..a35af42d6a3d 100644
--- a/drivers/pci/hotplug/cpqphp_pci.c
+++ b/drivers/pci/hotplug/cpqphp_pci.c
@@ -135,11 +135,13 @@ int cpqhp_unconfigure_device(struct pci_func *func)
static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value)
{
u32 vendID = 0;
+ int ret;
- if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
- return -1;
- if (vendID == 0xffffffff)
- return -1;
+ ret = pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID);
+ if (ret != PCIBIOS_SUCCESSFUL)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ if (PCI_POSSIBLE_ERROR(vendID))
+ return PCIBIOS_DEVICE_NOT_FOUND;
return pci_bus_read_config_dword(bus, devfn, offset, value);
}
@@ -202,13 +204,15 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
{
u16 tdevice;
u32 work;
+ int ret;
u8 tbus;
ctrl->pci_bus->number = bus_num;
for (tdevice = 0; tdevice < 0xFF; tdevice++) {
/* Scan for access first */
- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
+ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
+ if (ret)
continue;
dbg("Looking for nonbridge bus_num %d dev_num %d\n", bus_num, tdevice);
/* Yep we got one. Not a bridge ? */
@@ -220,7 +224,8 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
}
for (tdevice = 0; tdevice < 0xFF; tdevice++) {
/* Scan for access first */
- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
+ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
+ if (ret)
continue;
dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice);
/* Yep we got one. bridge ? */
@@ -253,7 +258,7 @@ static int PCI_GetBusDevHelper(struct controller *ctrl, u8 *bus_num, u8 *dev_num
*dev_num = tdevice;
ctrl->pci_bus->number = tbus;
pci_bus_read_config_dword(ctrl->pci_bus, *dev_num, PCI_VENDOR_ID, &work);
- if (!nobridge || (work == 0xffffffff))
+ if (!nobridge || PCI_POSSIBLE_ERROR(work))
return 0;
dbg("bus_num %d devfn %d\n", *bus_num, *dev_num);
diff --git a/drivers/pci/of_property.c b/drivers/pci/of_property.c
index 03539e505372..ec63e9b9e238 100644
--- a/drivers/pci/of_property.c
+++ b/drivers/pci/of_property.c
@@ -126,7 +126,7 @@ static int of_pci_prop_ranges(struct pci_dev *pdev, struct of_changeset *ocs,
if (of_pci_get_addr_flags(&res[j], &flags))
continue;
- val64 = res[j].start;
+ val64 = pci_bus_address(pdev, &res[j] - pdev->resource);
of_pci_set_address(pdev, rp[i].parent_addr, val64, 0, flags,
false);
if (pci_is_bridge(pdev)) {
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 93f2f4dcf6d6..830877efe505 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -5444,7 +5444,7 @@ static ssize_t reset_method_store(struct device *dev,
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
- char *options, *name;
+ char *options, *tmp_options, *name;
int m, n;
u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
@@ -5464,7 +5464,8 @@ static ssize_t reset_method_store(struct device *dev,
return -ENOMEM;
n = 0;
- while ((name = strsep(&options, " ")) != NULL) {
+ tmp_options = options;
+ while ((name = strsep(&tmp_options, " ")) != NULL) {
if (sysfs_streq(name, ""))
continue;
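
The reset_method_store() hunk lets strsep() consume a copy of the pointer, since strsep() advances the pointer it is handed and the original value is still needed to free the allocated options string afterwards. A tiny userspace demo of the same pattern, using glibc's strsep():

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *options = strdup("flr bus pm");
	char *tmp_options = options;	/* strsep() consumes this copy */
	char *name;

	while ((name = strsep(&tmp_options, " ")) != NULL) {
		if (*name == '\0')
			continue;
		printf("method: %s\n", name);
	}

	free(options);	/* still the pointer returned by strdup() */
	return 0;
}
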
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index d5e9010a135a..67ec4cf2fdb4 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -13,6 +13,9 @@
#define PCIE_LINK_RETRAIN_TIMEOUT_MS 1000
+/* Power stable to PERST# inactive from PCIe card Electromechanical Spec */
+#define PCIE_T_PVPERL_MS 100
+
/*
* PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization>
* Recommends 1ms to 10ms timeout to check L2 ready.
diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
index 0f87cade10f7..ed645c7a4e4b 100644
--- a/drivers/pci/slot.c
+++ b/drivers/pci/slot.c
@@ -79,6 +79,7 @@ static void pci_slot_release(struct kobject *kobj)
up_read(&pci_bus_sem);
list_del(&slot->list);
+ pci_bus_put(slot->bus);
kfree(slot);
}
@@ -261,7 +262,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
goto err;
}
- slot->bus = parent;
+ slot->bus = pci_bus_get(parent);
slot->number = slot_nr;
slot->kobj.kset = pci_slots_kset;
@@ -269,6 +270,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
slot_name = make_slot_name(name);
if (!slot_name) {
err = -ENOMEM;
+ pci_bus_put(slot->bus);
kfree(slot);
goto err;
}
diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
index 0b3ce7713645..7bd1733d7977 100644
--- a/drivers/perf/arm-cmn.c
+++ b/drivers/perf/arm-cmn.c
@@ -2075,8 +2075,6 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
continue;
xp = arm_cmn_node_to_xp(cmn, dn);
- dn->portid_bits = xp->portid_bits;
- dn->deviceid_bits = xp->deviceid_bits;
dn->dtc = xp->dtc;
dn->dtm = xp->dtm;
if (cmn->multi_dtm)
@@ -2307,6 +2305,8 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
}
arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn);
+ dn->portid_bits = xp->portid_bits;
+ dn->deviceid_bits = xp->deviceid_bits;
switch (dn->type) {
case CMN_TYPE_DTC:
diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
index 6303b82566f9..31e491e7f206 100644
--- a/drivers/perf/arm_smmuv3_pmu.c
+++ b/drivers/perf/arm_smmuv3_pmu.c
@@ -431,6 +431,17 @@ static int smmu_pmu_event_init(struct perf_event *event)
return -EINVAL;
}
+ /*
+ * Ensure all events are on the same cpu so all events are in the
+ * same cpu context, to avoid races on pmu_enable etc.
+ */
+ event->cpu = smmu_pmu->on_cpu;
+
+ hwc->idx = -1;
+
+ if (event->group_leader == event)
+ return 0;
+
for_each_sibling_event(sibling, event->group_leader) {
if (is_software_event(sibling))
continue;
@@ -442,14 +453,6 @@ static int smmu_pmu_event_init(struct perf_event *event)
return -EINVAL;
}
- hwc->idx = -1;
-
- /*
- * Ensure all events are on the same cpu so all events are in the
- * same cpu context, to avoid races on pmu_enable etc.
- */
- event->cpu = smmu_pmu->on_cpu;
-
return 0;
}
diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c
index b6d1ed9ec9a3..7c05dbf533e7 100644
--- a/drivers/pinctrl/pinctrl-k210.c
+++ b/drivers/pinctrl/pinctrl-k210.c
@@ -183,7 +183,7 @@ static const u32 k210_pinconf_mode_id_to_mode[] = {
[K210_PC_DEFAULT_INT13] = K210_PC_MODE_IN | K210_PC_PU,
};
-#undef DEFAULT
+#undef K210_PC_DEFAULT
/*
* Pin functions configuration information.
diff --git a/drivers/pinctrl/pinctrl-zynqmp.c b/drivers/pinctrl/pinctrl-zynqmp.c
index f2be341f73e1..1528f4097ff8 100644
--- a/drivers/pinctrl/pinctrl-zynqmp.c
+++ b/drivers/pinctrl/pinctrl-zynqmp.c
@@ -48,7 +48,6 @@
* @name: Name of the pin mux function
* @groups: List of pin groups for this function
* @ngroups: Number of entries in @groups
- * @node: Firmware node matching with the function
*
* This structure holds information about pin control function
* and function group names supporting that function.
diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
index 5817c52cee6b..8acaae88b87e 100644
--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
@@ -667,7 +667,7 @@ static void pmic_gpio_config_dbg_show(struct pinctrl_dev *pctldev,
"push-pull", "open-drain", "open-source"
};
static const char *const strengths[] = {
- "no", "high", "medium", "low"
+ "no", "low", "medium", "high"
};
pad = pctldev->desc->pins[pin].drv_data;
diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
index d0b4d3fc40ed..66fdc6fa73ec 100644
--- a/drivers/platform/chrome/cros_ec_typec.c
+++ b/drivers/platform/chrome/cros_ec_typec.c
@@ -390,6 +390,7 @@ static int cros_typec_init_ports(struct cros_typec_data *typec)
return 0;
unregister_ports:
+ fwnode_handle_put(fwnode);
cros_unregister_ports(typec);
return ret;
}
diff --git a/drivers/platform/x86/dell/dell-smbios-base.c b/drivers/platform/x86/dell/dell-smbios-base.c
index 6fb538a13868..9a9b9feac416 100644
--- a/drivers/platform/x86/dell/dell-smbios-base.c
+++ b/drivers/platform/x86/dell/dell-smbios-base.c
@@ -544,6 +544,7 @@ static int __init dell_smbios_init(void)
int ret, wmi, smm;
if (!dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Dell System", NULL) &&
+ !dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "Alienware", NULL) &&
!dmi_find_device(DMI_DEV_TYPE_OEM_STRING, "www.dell.com", NULL)) {
pr_err("Unable to run on non-Dell system\n");
return -ENODEV;
diff --git a/drivers/platform/x86/dell/dell-wmi-base.c b/drivers/platform/x86/dell/dell-wmi-base.c
index 24fd7ffadda9..841a5414d28a 100644
--- a/drivers/platform/x86/dell/dell-wmi-base.c
+++ b/drivers/platform/x86/dell/dell-wmi-base.c
@@ -80,6 +80,12 @@ static const struct dmi_system_id dell_wmi_smbios_list[] __initconst = {
static const struct key_entry dell_wmi_keymap_type_0000[] = {
{ KE_IGNORE, 0x003a, { KEY_CAPSLOCK } },
+ /* Meta key lock */
+ { KE_IGNORE, 0xe000, { KEY_RIGHTMETA } },
+
+ /* Meta key unlock */
+ { KE_IGNORE, 0xe001, { KEY_RIGHTMETA } },
+
/* Key code is followed by brightness level */
{ KE_KEY, 0xe005, { KEY_BRIGHTNESSDOWN } },
{ KE_KEY, 0xe006, { KEY_BRIGHTNESSUP } },
diff --git a/drivers/platform/x86/intel/bxtwc_tmu.c b/drivers/platform/x86/intel/bxtwc_tmu.c
index d0e2a3c293b0..9ac801b929b9 100644
--- a/drivers/platform/x86/intel/bxtwc_tmu.c
+++ b/drivers/platform/x86/intel/bxtwc_tmu.c
@@ -48,9 +48,8 @@ static irqreturn_t bxt_wcove_tmu_irq_handler(int irq, void *data)
static int bxt_wcove_tmu_probe(struct platform_device *pdev)
{
struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent);
- struct regmap_irq_chip_data *regmap_irq_chip;
struct wcove_tmu *wctmu;
- int ret, virq, irq;
+ int ret;
wctmu = devm_kzalloc(&pdev->dev, sizeof(*wctmu), GFP_KERNEL);
if (!wctmu)
@@ -59,27 +58,18 @@ static int bxt_wcove_tmu_probe(struct platform_device *pdev)
wctmu->dev = &pdev->dev;
wctmu->regmap = pmic->regmap;
- irq = platform_get_irq(pdev, 0);
- if (irq < 0)
- return irq;
+ wctmu->irq = platform_get_irq(pdev, 0);
+ if (wctmu->irq < 0)
+ return wctmu->irq;
- regmap_irq_chip = pmic->irq_chip_data_tmu;
- virq = regmap_irq_get_virq(regmap_irq_chip, irq);
- if (virq < 0) {
- dev_err(&pdev->dev,
- "failed to get virtual interrupt=%d\n", irq);
- return virq;
- }
-
- ret = devm_request_threaded_irq(&pdev->dev, virq,
+ ret = devm_request_threaded_irq(&pdev->dev, wctmu->irq,
NULL, bxt_wcove_tmu_irq_handler,
IRQF_ONESHOT, "bxt_wcove_tmu", wctmu);
if (ret) {
dev_err(&pdev->dev, "request irq failed: %d,virq: %d\n",
- ret, virq);
+ ret, wctmu->irq);
return ret;
}
- wctmu->irq = virq;
/* Unmask TMU second level Wake & System alarm */
regmap_update_bits(wctmu->regmap, BXTWC_MTMUIRQ_REG,
diff --git a/drivers/platform/x86/panasonic-laptop.c b/drivers/platform/x86/panasonic-laptop.c
index ebd81846e2d5..7365286f6d2d 100644
--- a/drivers/platform/x86/panasonic-laptop.c
+++ b/drivers/platform/x86/panasonic-laptop.c
@@ -602,8 +602,7 @@ static ssize_t eco_mode_show(struct device *dev, struct device_attribute *attr,
result = 1;
break;
default:
- result = -EIO;
- break;
+ return -EIO;
}
return sysfs_emit(buf, "%u\n", result);
}
@@ -749,7 +748,12 @@ static ssize_t current_brightness_store(struct device *dev, struct device_attrib
static ssize_t cdpower_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
- return sysfs_emit(buf, "%d\n", get_optd_power_state());
+ int state = get_optd_power_state();
+
+ if (state < 0)
+ return state;
+
+ return sysfs_emit(buf, "%d\n", state);
}
static ssize_t cdpower_store(struct device *dev, struct device_attribute *attr,
diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
index 5b1f08eabd92..964670d4ca1e 100644
--- a/drivers/platform/x86/thinkpad_acpi.c
+++ b/drivers/platform/x86/thinkpad_acpi.c
@@ -8013,6 +8013,7 @@ static u8 fan_control_resume_level;
static int fan_watchdog_maxinterval;
static bool fan_with_ns_addr;
+static bool ecfw_with_fan_dec_rpm;
static struct mutex fan_mutex;
@@ -8655,7 +8656,11 @@ static ssize_t fan_fan1_input_show(struct device *dev,
if (res < 0)
return res;
- return sysfs_emit(buf, "%u\n", speed);
+ /* Check for fan speeds displayed in hexadecimal */
+ if (!ecfw_with_fan_dec_rpm)
+ return sysfs_emit(buf, "%u\n", speed);
+ else
+ return sysfs_emit(buf, "%x\n", speed);
}
static DEVICE_ATTR(fan1_input, S_IRUGO, fan_fan1_input_show, NULL);
@@ -8672,7 +8677,11 @@ static ssize_t fan_fan2_input_show(struct device *dev,
if (res < 0)
return res;
- return sysfs_emit(buf, "%u\n", speed);
+ /* Check for fan speeds displayed in hexadecimal */
+ if (!ecfw_with_fan_dec_rpm)
+ return sysfs_emit(buf, "%u\n", speed);
+ else
+ return sysfs_emit(buf, "%x\n", speed);
}
static DEVICE_ATTR(fan2_input, S_IRUGO, fan_fan2_input_show, NULL);
@@ -8748,6 +8757,7 @@ static const struct attribute_group fan_driver_attr_group = {
#define TPACPI_FAN_2CTL 0x0004 /* selects fan2 control */
#define TPACPI_FAN_NOFAN 0x0008 /* no fan available */
#define TPACPI_FAN_NS 0x0010 /* For EC with non-Standard register addresses */
+#define TPACPI_FAN_DECRPM 0x0020 /* For ECFW's with RPM in register as decimal */
static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1),
@@ -8769,6 +8779,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
TPACPI_Q_LNV3('R', '1', 'F', TPACPI_FAN_NS), /* L13 Yoga Gen 2 */
TPACPI_Q_LNV3('N', '2', 'U', TPACPI_FAN_NS), /* X13 Yoga Gen 2*/
TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */
+ TPACPI_Q_LNV3('R', '0', 'Q', TPACPI_FAN_DECRPM), /* L480 */
};
static int __init fan_init(struct ibm_init_struct *iibm)
@@ -8809,6 +8820,13 @@ static int __init fan_init(struct ibm_init_struct *iibm)
tp_features.fan_ctrl_status_undef = 1;
}
+ /* Check for the EC/BIOS with RPM reported in decimal */
+ if (quirks & TPACPI_FAN_DECRPM) {
+ pr_info("ECFW with fan RPM as decimal in EC register\n");
+ ecfw_with_fan_dec_rpm = 1;
+ tp_features.fan_ctrl_status_undef = 1;
+ }
+
if (gfan_handle) {
/* 570, 600e/x, 770e, 770x */
fan_status_access_mode = TPACPI_FAN_RD_ACPI_GFAN;
@@ -9020,7 +9038,11 @@ static int fan_read(struct seq_file *m)
if (rc < 0)
return rc;
- seq_printf(m, "speed:\t\t%d\n", speed);
+ /* Check for fan speeds displayed in hexadecimal */
+ if (!ecfw_with_fan_dec_rpm)
+ seq_printf(m, "speed:\t\t%d\n", speed);
+ else
+ seq_printf(m, "speed:\t\t%x\n", speed);
if (fan_status_access_mode == TPACPI_FAN_RD_TPEC_NS) {
/*
diff --git a/drivers/platform/x86/x86-android-tablets/core.c b/drivers/platform/x86/x86-android-tablets/core.c
index a0fa0b6859c9..63a348af83db 100644
--- a/drivers/platform/x86/x86-android-tablets/core.c
+++ b/drivers/platform/x86/x86-android-tablets/core.c
@@ -230,20 +230,20 @@ static void x86_android_tablet_remove(struct platform_device *pdev)
{
int i;
- for (i = 0; i < serdev_count; i++) {
+ for (i = serdev_count - 1; i >= 0; i--) {
if (serdevs[i])
serdev_device_remove(serdevs[i]);
}
kfree(serdevs);
- for (i = 0; i < pdev_count; i++)
+ for (i = pdev_count - 1; i >= 0; i--)
platform_device_unregister(pdevs[i]);
kfree(pdevs);
kfree(buttons);
- for (i = 0; i < i2c_client_count; i++)
+ for (i = i2c_client_count - 1; i >= 0; i--)
i2c_unregister_device(i2c_clients[i]);
kfree(i2c_clients);
diff --git a/drivers/pmdomain/ti/ti_sci_pm_domains.c b/drivers/pmdomain/ti/ti_sci_pm_domains.c
index f520228e1b6a..4449d36042c2 100644
--- a/drivers/pmdomain/ti/ti_sci_pm_domains.c
+++ b/drivers/pmdomain/ti/ti_sci_pm_domains.c
@@ -161,6 +161,7 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
break;
if (args.args_count >= 1 && args.np == dev->of_node) {
+ of_node_put(args.np);
if (args.args[0] > max_id) {
max_id = args.args[0];
} else {
@@ -188,7 +189,10 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
pm_genpd_init(&pd->pd, NULL, true);
list_add(&pd->node, &pd_provider->pd_list);
+ } else {
+ of_node_put(args.np);
}
+
index++;
}
}
diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
index 4296600e8912..23c873656757 100644
--- a/drivers/power/supply/bq27xxx_battery.c
+++ b/drivers/power/supply/bq27xxx_battery.c
@@ -449,9 +449,29 @@ static u8
[BQ27XXX_REG_AP] = 0x18,
BQ27XXX_DM_REG_ROWS,
},
+ bq27426_regs[BQ27XXX_REG_MAX] = {
+ [BQ27XXX_REG_CTRL] = 0x00,
+ [BQ27XXX_REG_TEMP] = 0x02,
+ [BQ27XXX_REG_INT_TEMP] = 0x1e,
+ [BQ27XXX_REG_VOLT] = 0x04,
+ [BQ27XXX_REG_AI] = 0x10,
+ [BQ27XXX_REG_FLAGS] = 0x06,
+ [BQ27XXX_REG_TTE] = INVALID_REG_ADDR,
+ [BQ27XXX_REG_TTF] = INVALID_REG_ADDR,
+ [BQ27XXX_REG_TTES] = INVALID_REG_ADDR,
+ [BQ27XXX_REG_TTECP] = INVALID_REG_ADDR,
+ [BQ27XXX_REG_NAC] = 0x08,
+ [BQ27XXX_REG_RC] = 0x0c,
+ [BQ27XXX_REG_FCC] = 0x0e,
+ [BQ27XXX_REG_CYCT] = INVALID_REG_ADDR,
+ [BQ27XXX_REG_AE] = INVALID_REG_ADDR,
+ [BQ27XXX_REG_SOC] = 0x1c,
+ [BQ27XXX_REG_DCAP] = INVALID_REG_ADDR,
+ [BQ27XXX_REG_AP] = 0x18,
+ BQ27XXX_DM_REG_ROWS,
+ },
#define bq27411_regs bq27421_regs
#define bq27425_regs bq27421_regs
-#define bq27426_regs bq27421_regs
#define bq27441_regs bq27421_regs
#define bq27621_regs bq27421_regs
bq27z561_regs[BQ27XXX_REG_MAX] = {
@@ -769,10 +789,23 @@ static enum power_supply_property bq27421_props[] = {
};
#define bq27411_props bq27421_props
#define bq27425_props bq27421_props
-#define bq27426_props bq27421_props
#define bq27441_props bq27421_props
#define bq27621_props bq27421_props
+static enum power_supply_property bq27426_props[] = {
+ POWER_SUPPLY_PROP_STATUS,
+ POWER_SUPPLY_PROP_PRESENT,
+ POWER_SUPPLY_PROP_VOLTAGE_NOW,
+ POWER_SUPPLY_PROP_CURRENT_NOW,
+ POWER_SUPPLY_PROP_CAPACITY,
+ POWER_SUPPLY_PROP_CAPACITY_LEVEL,
+ POWER_SUPPLY_PROP_TEMP,
+ POWER_SUPPLY_PROP_TECHNOLOGY,
+ POWER_SUPPLY_PROP_CHARGE_FULL,
+ POWER_SUPPLY_PROP_CHARGE_NOW,
+ POWER_SUPPLY_PROP_MANUFACTURER,
+};
+
static enum power_supply_property bq27z561_props[] = {
POWER_SUPPLY_PROP_STATUS,
POWER_SUPPLY_PROP_PRESENT,
diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
index 416409e2fd6d..1893d37dd575 100644
--- a/drivers/power/supply/power_supply_core.c
+++ b/drivers/power/supply/power_supply_core.c
@@ -480,8 +480,6 @@ EXPORT_SYMBOL_GPL(power_supply_get_by_name);
*/
void power_supply_put(struct power_supply *psy)
{
- might_sleep();
-
atomic_dec(&psy->use_cnt);
put_device(&psy->dev);
}
diff --git a/drivers/power/supply/rt9471.c b/drivers/power/supply/rt9471.c
index 868b0703d15c..522a67736fa5 100644
--- a/drivers/power/supply/rt9471.c
+++ b/drivers/power/supply/rt9471.c
@@ -139,6 +139,19 @@ enum {
RT9471_PORTSTAT_DCP,
};
+enum {
+ RT9471_ICSTAT_SLEEP = 0,
+ RT9471_ICSTAT_VBUSRDY,
+ RT9471_ICSTAT_TRICKLECHG,
+ RT9471_ICSTAT_PRECHG,
+ RT9471_ICSTAT_FASTCHG,
+ RT9471_ICSTAT_IEOC,
+ RT9471_ICSTAT_BGCHG,
+ RT9471_ICSTAT_CHGDONE,
+ RT9471_ICSTAT_CHGFAULT,
+ RT9471_ICSTAT_OTG = 15,
+};
+
struct rt9471_chip {
struct device *dev;
struct regmap *regmap;
@@ -153,8 +166,8 @@ struct rt9471_chip {
};
static const struct reg_field rt9471_reg_fields[F_MAX_FIELDS] = {
- [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 0),
- [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 1, 1),
+ [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 1),
+ [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 2, 2),
[F_CHG_EN] = REG_FIELD(RT9471_REG_FUNC, 0, 0),
[F_HZ] = REG_FIELD(RT9471_REG_FUNC, 5, 5),
[F_BATFET_DIS] = REG_FIELD(RT9471_REG_FUNC, 7, 7),
@@ -255,31 +268,32 @@ static int rt9471_get_ieoc(struct rt9471_chip *chip, int *microamp)
static int rt9471_get_status(struct rt9471_chip *chip, int *status)
{
- unsigned int chg_ready, chg_done, fault_stat;
+ unsigned int ic_stat;
int ret;
- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_RDY], &chg_ready);
- if (ret)
- return ret;
-
- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_DONE], &chg_done);
+ ret = regmap_field_read(chip->rm_fields[F_IC_STAT], &ic_stat);
if (ret)
return ret;
- ret = regmap_read(chip->regmap, RT9471_REG_STAT1, &fault_stat);
- if (ret)
- return ret;
-
- fault_stat &= RT9471_CHGFAULT_MASK;
-
- if (chg_ready && chg_done)
- *status = POWER_SUPPLY_STATUS_FULL;
- else if (chg_ready && fault_stat)
+ switch (ic_stat) {
+ case RT9471_ICSTAT_VBUSRDY:
+ case RT9471_ICSTAT_CHGFAULT:
*status = POWER_SUPPLY_STATUS_NOT_CHARGING;
- else if (chg_ready && !fault_stat)
+ break;
+ case RT9471_ICSTAT_TRICKLECHG ... RT9471_ICSTAT_BGCHG:
*status = POWER_SUPPLY_STATUS_CHARGING;
- else
+ break;
+ case RT9471_ICSTAT_CHGDONE:
+ *status = POWER_SUPPLY_STATUS_FULL;
+ break;
+ case RT9471_ICSTAT_SLEEP:
+ case RT9471_ICSTAT_OTG:
*status = POWER_SUPPLY_STATUS_DISCHARGING;
+ break;
+ default:
+ *status = POWER_SUPPLY_STATUS_UNKNOWN;
+ break;
+ }
return 0;
}
diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
index 29a3089c534c..660a71b7263c 100644
--- a/drivers/pwm/pwm-imx27.c
+++ b/drivers/pwm/pwm-imx27.c
@@ -26,6 +26,7 @@
#define MX3_PWMSR 0x04 /* PWM Status Register */
#define MX3_PWMSAR 0x0C /* PWM Sample Register */
#define MX3_PWMPR 0x10 /* PWM Period Register */
+#define MX3_PWMCNR 0x14 /* PWM Counter Register */
#define MX3_PWMCR_FWM GENMASK(27, 26)
#define MX3_PWMCR_STOPEN BIT(25)
@@ -217,11 +218,13 @@ static void pwm_imx27_wait_fifo_slot(struct pwm_chip *chip,
static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
const struct pwm_state *state)
{
- unsigned long period_cycles, duty_cycles, prescale;
+ unsigned long period_cycles, duty_cycles, prescale, period_us, tmp;
struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
struct pwm_state cstate;
unsigned long long c;
unsigned long long clkrate;
+ unsigned long flags;
+ int val;
int ret;
u32 cr;
@@ -264,7 +267,98 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
pwm_imx27_sw_reset(chip);
}
- writel(duty_cycles, imx->mmio_base + MX3_PWMSAR);
+ val = readl(imx->mmio_base + MX3_PWMPR);
+ val = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val;
+ cr = readl(imx->mmio_base + MX3_PWMCR);
+ tmp = NSEC_PER_SEC * (u64)(val + 2) * MX3_PWMCR_PRESCALER_GET(cr);
+ tmp = DIV_ROUND_UP_ULL(tmp, clkrate);
+ period_us = DIV_ROUND_UP_ULL(tmp, 1000);
+
+ /*
+ * ERR051198:
+ * PWM: PWM output may not function correctly if the FIFO is empty when
+ * a new SAR value is programmed
+ *
+ * Description:
+ * When the PWM FIFO is empty, a new value programmed to the PWM Sample
+ * register (PWM_PWMSAR) will be directly applied even if the current
+ * timer period has not expired.
+ *
+ * If the new SAMPLE value programmed in the PWM_PWMSAR register is
+ * less than the previous value, and the PWM counter register
+ * (PWM_PWMCNR) that contains the current COUNT value is greater than
+ * the new programmed SAMPLE value, the current period will not flip
+ * the level. This may result in an output pulse with a duty cycle of
+ * 100%.
+ *
+ * Consider a change from
+ * ________
+ * / \______/
+ * ^ * ^
+ * to
+ * ____
+ * / \__________/
+ * ^ ^
+ * At the time marked by *, the new value is applied directly to SAR
+ * even if the current period is not over, provided the FIFO is empty.
+ *
+ * ________ ____________________
+ * / \______/ \__________/
+ * ^ ^ * ^ ^
+ * |<-- old SAR -->| |<-- new SAR -->|
+ *
+ * That is, the output is active for a whole period.
+ *
+ * Workaround:
+ * If the new SAR is less than the old SAR and the current counter is
+ * in the errata window, write an extra old SAR into the FIFO so the
+ * new SAR takes effect at the next period.
+ *
+ * Sometimes the period is quite long, such as over 1 second. If the
+ * old SAR were pushed into the FIFO unconditionally, the new SAR would
+ * have to wait for the next period, which may be too long.
+ *
+ * Turn off interrupts to ensure that no IRQ or reschedule happens
+ * during the above operations. If one does, the counter read from the
+ * PWM will be stale and the wrong action may be taken.
+ *
+ * Add a 1.5us safety margin because the IO write needs some time
+ * to complete.
+ *
+ * Use writel_relaxed() to minimize the interval between two writes to
+ * the SAR register to increase the fastest PWM frequency supported.
+ *
+ * When the PWM period is longer than 2us (i.e. below 500kHz), this
+ * workaround solves the problem. No software workaround is available
+ * if the PWM period is shorter than the IO write; just try our best
+ * to fill the old data into the FIFO.
+ */
+ c = clkrate * 1500;
+ do_div(c, NSEC_PER_SEC);
+
+ local_irq_save(flags);
+ val = FIELD_GET(MX3_PWMSR_FIFOAV, readl_relaxed(imx->mmio_base + MX3_PWMSR));
+
+ if (duty_cycles < imx->duty_cycle && (cr & MX3_PWMCR_EN)) {
+ if (period_us < 2) { /* 2us = 500 kHz */
+ /* Best effort attempt to fix up >500 kHz case */
+ udelay(3 * period_us);
+ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
+ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
+ } else if (val < MX3_PWMSR_FIFOAV_2WORDS) {
+ val = readl_relaxed(imx->mmio_base + MX3_PWMCNR);
+ /*
+ * If the counter is close to the period, the controller may roll
+ * over during the next IO write.
+ */
+ if ((val + c >= duty_cycles && val < imx->duty_cycle) ||
+ val + c >= period_cycles)
+ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
+ }
+ }
+ writel_relaxed(duty_cycles, imx->mmio_base + MX3_PWMSAR);
+ local_irq_restore(flags);
+
writel(period_cycles, imx->mmio_base + MX3_PWMPR);
/*
diff --git a/drivers/regulator/rk808-regulator.c b/drivers/regulator/rk808-regulator.c
index 867a2cf243f6..374d80dc6d17 100644
--- a/drivers/regulator/rk808-regulator.c
+++ b/drivers/regulator/rk808-regulator.c
@@ -1286,6 +1286,8 @@ static const struct regulator_desc rk809_reg[] = {
.n_linear_ranges = ARRAY_SIZE(rk817_buck1_voltage_ranges),
.vsel_reg = RK817_BUCK3_ON_VSEL_REG,
.vsel_mask = RK817_BUCK_VSEL_MASK,
+ .apply_reg = RK817_POWER_CONFIG,
+ .apply_bit = RK817_BUCK3_FB_RES_INTER,
.enable_reg = RK817_POWER_EN_REG(0),
.enable_mask = ENABLE_MASK(RK817_ID_DCDC3),
.enable_val = ENABLE_MASK(RK817_ID_DCDC3),
@@ -1647,7 +1649,7 @@ static int rk808_regulator_dt_parse_pdata(struct device *dev,
}
if (!pdata->dvs_gpio[i]) {
- dev_info(dev, "there is no dvs%d gpio\n", i);
+ dev_dbg(dev, "there is no dvs%d gpio\n", i);
continue;
}
@@ -1683,12 +1685,6 @@ static int rk808_regulator_probe(struct platform_device *pdev)
if (!pdata)
return -ENOMEM;
- ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
- if (ret < 0)
- return ret;
-
- platform_set_drvdata(pdev, pdata);
-
switch (rk808->variant) {
case RK805_ID:
regulators = rk805_reg;
@@ -1699,6 +1695,11 @@ static int rk808_regulator_probe(struct platform_device *pdev)
nregulators = ARRAY_SIZE(rk806_reg);
break;
case RK808_ID:
+ /* DVS0/1 GPIOs are supported on the RK808 only */
+ ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
+ if (ret < 0)
+ return ret;
+
regulators = rk808_reg;
nregulators = RK808_NUM_REGULATORS;
break;
@@ -1720,6 +1721,8 @@ static int rk808_regulator_probe(struct platform_device *pdev)
return -EINVAL;
}
+ platform_set_drvdata(pdev, pdata);
+
config.dev = &pdev->dev;
config.driver_data = pdata;
config.regmap = regmap;
diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index 22fe7b5f5236..2d717f2ed396 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -1161,6 +1161,9 @@ static int q6v5_mba_load(struct q6v5 *qproc)
goto disable_active_clks;
}
+ if (qproc->has_mba_logs)
+ qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
+
writel(qproc->mba_phys, qproc->rmb_base + RMB_MBA_IMAGE_REG);
if (qproc->dp_size) {
writel(qproc->mba_phys + SZ_1M, qproc->rmb_base + RMB_PMI_CODE_START_REG);
@@ -1171,9 +1174,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
if (ret)
goto reclaim_mba;
- if (qproc->has_mba_logs)
- qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
-
ret = q6v5_rmb_mba_wait(qproc, 0, 5000);
if (ret == -ETIMEDOUT) {
dev_err(qproc->dev, "MBA boot timed out\n");
diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
index b5447dd2dd35..6235721f2c1a 100644
--- a/drivers/remoteproc/qcom_q6v5_pas.c
+++ b/drivers/remoteproc/qcom_q6v5_pas.c
@@ -832,6 +832,7 @@ static const struct adsp_data sm8250_adsp_resource = {
.crash_reason_smem = 423,
.firmware_name = "adsp.mdt",
.pas_id = 1,
+ .minidump_id = 5,
.auto_boot = true,
.proxy_pd_names = (char*[]){
"lcx",
@@ -973,6 +974,7 @@ static const struct adsp_data sm8350_cdsp_resource = {
.crash_reason_smem = 601,
.firmware_name = "cdsp.mdt",
.pas_id = 18,
+ .minidump_id = 7,
.auto_boot = true,
.proxy_pd_names = (char*[]){
"cx",
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index d877a1a1aeb4..c7f91a82e634 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -1117,7 +1117,8 @@ void qcom_glink_native_rx(struct qcom_glink *glink)
qcom_glink_rx_advance(glink, ALIGN(sizeof(msg), 8));
break;
case GLINK_CMD_OPEN:
- ret = qcom_glink_rx_defer(glink, param2);
+ /* upper 16 bits of param2 are the "prio" field */
+ ret = qcom_glink_rx_defer(glink, param2 & 0xffff);
break;
case GLINK_CMD_TX_DATA:
case GLINK_CMD_TX_DATA_CONT:
diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
index 0b23706d9fd3..4a7c41a6c21e 100644
--- a/drivers/rtc/interface.c
+++ b/drivers/rtc/interface.c
@@ -904,13 +904,18 @@ void rtc_timer_do_work(struct work_struct *work)
struct timerqueue_node *next;
ktime_t now;
struct rtc_time tm;
+ int err;
struct rtc_device *rtc =
container_of(work, struct rtc_device, irqwork);
mutex_lock(&rtc->ops_lock);
again:
- __rtc_read_time(rtc, &tm);
+ err = __rtc_read_time(rtc, &tm);
+ if (err) {
+ mutex_unlock(&rtc->ops_lock);
+ return;
+ }
now = rtc_tm_to_ktime(tm);
while ((next = timerqueue_getnext(&rtc->timerqueue))) {
if (next->expires > now)
diff --git a/drivers/rtc/rtc-ab-eoz9.c b/drivers/rtc/rtc-ab-eoz9.c
index 04e1b8e93bc1..79d5ee7b818c 100644
--- a/drivers/rtc/rtc-ab-eoz9.c
+++ b/drivers/rtc/rtc-ab-eoz9.c
@@ -396,13 +396,6 @@ static int abeoz9z3_temp_read(struct device *dev,
if (ret < 0)
return ret;
- if ((val & ABEOZ9_REG_CTRL_STATUS_V1F) ||
- (val & ABEOZ9_REG_CTRL_STATUS_V2F)) {
- dev_err(dev,
- "thermometer might be disabled due to low voltage\n");
- return -EINVAL;
- }
-
switch (attr) {
case hwmon_temp_input:
ret = regmap_read(regmap, ABEOZ9_REG_REG_TEMP, &val);
diff --git a/drivers/rtc/rtc-abx80x.c b/drivers/rtc/rtc-abx80x.c
index 1298962402ff..3fee27914ba8 100644
--- a/drivers/rtc/rtc-abx80x.c
+++ b/drivers/rtc/rtc-abx80x.c
@@ -39,7 +39,7 @@
#define ABX8XX_REG_STATUS 0x0f
#define ABX8XX_STATUS_AF BIT(2)
#define ABX8XX_STATUS_BLF BIT(4)
-#define ABX8XX_STATUS_WDT BIT(6)
+#define ABX8XX_STATUS_WDT BIT(5)
#define ABX8XX_REG_CTRL1 0x10
#define ABX8XX_CTRL_WRITE BIT(0)
diff --git a/drivers/rtc/rtc-rzn1.c b/drivers/rtc/rtc-rzn1.c
index 56ebbd4d0481..8570c8e63d70 100644
--- a/drivers/rtc/rtc-rzn1.c
+++ b/drivers/rtc/rtc-rzn1.c
@@ -111,8 +111,8 @@ static int rzn1_rtc_read_time(struct device *dev, struct rtc_time *tm)
tm->tm_hour = bcd2bin(tm->tm_hour);
tm->tm_wday = bcd2bin(tm->tm_wday);
tm->tm_mday = bcd2bin(tm->tm_mday);
- tm->tm_mon = bcd2bin(tm->tm_mon);
- tm->tm_year = bcd2bin(tm->tm_year);
+ tm->tm_mon = bcd2bin(tm->tm_mon) - 1;
+ tm->tm_year = bcd2bin(tm->tm_year) + 100;
return 0;
}
@@ -128,8 +128,8 @@ static int rzn1_rtc_set_time(struct device *dev, struct rtc_time *tm)
tm->tm_hour = bin2bcd(tm->tm_hour);
tm->tm_wday = bin2bcd(rzn1_rtc_tm_to_wday(tm));
tm->tm_mday = bin2bcd(tm->tm_mday);
- tm->tm_mon = bin2bcd(tm->tm_mon);
- tm->tm_year = bin2bcd(tm->tm_year);
+ tm->tm_mon = bin2bcd(tm->tm_mon + 1);
+ tm->tm_year = bin2bcd(tm->tm_year - 100);
val = readl(rtc->base + RZN1_RTC_CTL2);
if (!(val & RZN1_RTC_CTL2_STOPPED)) {
diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
index d492a2d26600..c6d4522411b3 100644
--- a/drivers/rtc/rtc-st-lpc.c
+++ b/drivers/rtc/rtc-st-lpc.c
@@ -218,15 +218,14 @@ static int st_rtc_probe(struct platform_device *pdev)
return -EINVAL;
}
- ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler, 0,
- pdev->name, rtc);
+ ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler,
+ IRQF_NO_AUTOEN, pdev->name, rtc);
if (ret) {
dev_err(&pdev->dev, "Failed to request irq %i\n", rtc->irq);
return ret;
}
enable_irq_wake(rtc->irq);
- disable_irq(rtc->irq);
rtc->clk = devm_clk_get_enabled(&pdev->dev, NULL);
if (IS_ERR(rtc->clk))
diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
index 6127add746d1..81ef9002f064 100644
--- a/drivers/s390/cio/cio.c
+++ b/drivers/s390/cio/cio.c
@@ -459,10 +459,14 @@ int cio_update_schib(struct subchannel *sch)
{
struct schib schib;
- if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib))
+ if (stsch(sch->schid, &schib))
return -ENODEV;
memcpy(&sch->schib, &schib, sizeof(schib));
+
+ if (!css_sch_is_valid(&schib))
+ return -EACCES;
+
return 0;
}
EXPORT_SYMBOL_GPL(cio_update_schib);
diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
index 57e0050dbaa5..6b374026cd4f 100644
--- a/drivers/s390/cio/device.c
+++ b/drivers/s390/cio/device.c
@@ -1387,14 +1387,18 @@ enum io_sch_action {
IO_SCH_VERIFY,
IO_SCH_DISC,
IO_SCH_NOP,
+ IO_SCH_ORPH_CDEV,
};
static enum io_sch_action sch_get_action(struct subchannel *sch)
{
struct ccw_device *cdev;
+ int rc;
cdev = sch_get_cdev(sch);
- if (cio_update_schib(sch)) {
+ rc = cio_update_schib(sch);
+
+ if (rc == -ENODEV) {
/* Not operational. */
if (!cdev)
return IO_SCH_UNREG;
@@ -1402,6 +1406,16 @@ static enum io_sch_action sch_get_action(struct subchannel *sch)
return IO_SCH_UNREG;
return IO_SCH_ORPH_UNREG;
}
+
+ /* Avoid unregistering subchannels without working device. */
+ if (rc == -EACCES) {
+ if (!cdev)
+ return IO_SCH_NOP;
+ if (ccw_device_notify(cdev, CIO_GONE) != NOTIFY_OK)
+ return IO_SCH_UNREG_CDEV;
+ return IO_SCH_ORPH_CDEV;
+ }
+
/* Operational. */
if (!cdev)
return IO_SCH_ATTACH;
@@ -1471,6 +1485,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
rc = 0;
goto out_unlock;
case IO_SCH_ORPH_UNREG:
+ case IO_SCH_ORPH_CDEV:
case IO_SCH_ORPH_ATTACH:
ccw_device_set_disconnected(cdev);
break;
@@ -1502,6 +1517,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
/* Handle attached ccw device. */
switch (action) {
case IO_SCH_ORPH_UNREG:
+ case IO_SCH_ORPH_CDEV:
case IO_SCH_ORPH_ATTACH:
/* Move ccw device to orphanage. */
rc = ccw_device_move_to_orph(cdev);
diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
index d2ffdf2491da..70fcb5c40cfe 100644
--- a/drivers/s390/crypto/pkey_api.c
+++ b/drivers/s390/crypto/pkey_api.c
@@ -1366,9 +1366,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
rc = cca_clr2seckey(kcs.cardnr, kcs.domain, kcs.keytype,
kcs.clrkey.clrkey, kcs.seckey.seckey);
DEBUG_DBG("%s cca_clr2seckey()=%d\n", __func__, rc);
- if (rc)
- break;
- if (copy_to_user(ucs, &kcs, sizeof(kcs)))
+ if (!rc && copy_to_user(ucs, &kcs, sizeof(kcs)))
rc = -EFAULT;
memzero_explicit(&kcs, sizeof(kcs));
break;
@@ -1401,9 +1399,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
kcp.protkey.protkey,
&kcp.protkey.len, &kcp.protkey.type);
DEBUG_DBG("%s pkey_clr2protkey()=%d\n", __func__, rc);
- if (rc)
- break;
- if (copy_to_user(ucp, &kcp, sizeof(kcp)))
+ if (!rc && copy_to_user(ucp, &kcp, sizeof(kcp)))
rc = -EFAULT;
memzero_explicit(&kcp, sizeof(kcp));
break;
@@ -1555,11 +1551,14 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
if (copy_from_user(&kcs, ucs, sizeof(kcs)))
return -EFAULT;
apqns = _copy_apqns_from_user(kcs.apqns, kcs.apqn_entries);
- if (IS_ERR(apqns))
+ if (IS_ERR(apqns)) {
+ memzero_explicit(&kcs, sizeof(kcs));
return PTR_ERR(apqns);
+ }
kkey = kzalloc(klen, GFP_KERNEL);
if (!kkey) {
kfree(apqns);
+ memzero_explicit(&kcs, sizeof(kcs));
return -ENOMEM;
}
rc = pkey_clr2seckey2(apqns, kcs.apqn_entries,
@@ -1569,15 +1568,18 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
kfree(apqns);
if (rc) {
kfree(kkey);
+ memzero_explicit(&kcs, sizeof(kcs));
break;
}
if (kcs.key) {
if (kcs.keylen < klen) {
kfree(kkey);
+ memzero_explicit(&kcs, sizeof(kcs));
return -EINVAL;
}
if (copy_to_user(kcs.key, kkey, klen)) {
kfree(kkey);
+ memzero_explicit(&kcs, sizeof(kcs));
return -EFAULT;
}
}
diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c
index 62cb7a864fd5..70c7515a822f 100644
--- a/drivers/scsi/bfa/bfad.c
+++ b/drivers/scsi/bfa/bfad.c
@@ -1693,9 +1693,8 @@ bfad_init(void)
error = bfad_im_module_init();
if (error) {
- error = -ENOMEM;
printk(KERN_WARNING "bfad_im_module_init failure\n");
- goto ext;
+ return -ENOMEM;
}
if (strcmp(FCPI_NAME, " fcpim") == 0)
diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
index e4363b8c6ad2..db9ae206974c 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
@@ -1539,10 +1539,16 @@ void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba)
/* Init and wait for PHYs to come up and all libsas event finished. */
for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
+ struct asd_sas_phy *sas_phy = &phy->sas_phy;
- if (!(hisi_hba->phy_state & BIT(phy_no)))
+ if (!sas_phy->phy->enabled)
continue;
+ if (!(hisi_hba->phy_state & BIT(phy_no))) {
+ hisi_sas_phy_enable(hisi_hba, phy_no, 1);
+ continue;
+ }
+
async_schedule_domain(hisi_sas_async_init_wait_phyup,
phy, &async);
}
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index 0a01575ab06d..0ad8a10002ce 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -175,7 +175,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
ndlp->nlp_state, ndlp->fc4_xpt_flags);
/* Don't schedule a worker thread event if the vport is going down. */
- if (vport->load_flag & FC_UNLOADING) {
+ if ((vport->load_flag & FC_UNLOADING) ||
+ !(phba->hba_flag & HBA_SETUP)) {
spin_lock_irqsave(&ndlp->lock, iflags);
ndlp->rport = NULL;
diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
index cf506556f3b0..070654cc9292 100644
--- a/drivers/scsi/lpfc/lpfc_scsi.c
+++ b/drivers/scsi/lpfc/lpfc_scsi.c
@@ -5546,11 +5546,20 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
iocb = &lpfc_cmd->cur_iocbq;
if (phba->sli_rev == LPFC_SLI_REV4) {
- pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
- if (!pring_s4) {
+ /* if the io_wq & pring are gone, the port was reset. */
+ if (!phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq ||
+ !phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring) {
+ lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+ "2877 SCSI Layer I/O Abort Request "
+ "IO CMPL Status x%x ID %d LUN %llu "
+ "HBA_SETUP %d\n", FAILED,
+ cmnd->device->id,
+ (u64)cmnd->device->lun,
+ (HBA_SETUP & phba->hba_flag));
ret = FAILED;
goto out_unlock_hba;
}
+ pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
spin_lock(&pring_s4->ring_lock);
}
/* the command is in process of being cancelled */
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index 9cd22588c8eb..9b1ffa84a062 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -4684,6 +4684,17 @@ lpfc_sli_flush_io_rings(struct lpfc_hba *phba)
/* Look on all the FCP Rings for the iotag */
if (phba->sli_rev >= LPFC_SLI_REV4) {
for (i = 0; i < phba->cfg_hdw_queue; i++) {
+ if (!phba->sli4_hba.hdwq ||
+ !phba->sli4_hba.hdwq[i].io_wq) {
+ lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+ "7777 hdwq's deleted %lx "
+ "%lx %x %x\n",
+ (unsigned long)phba->pport->load_flag,
+ (unsigned long)phba->hba_flag,
+ phba->link_state,
+ phba->sli.sli_flag);
+ return;
+ }
pring = phba->sli4_hba.hdwq[i].io_wq->pring;
spin_lock_irq(&pring->ring_lock);
diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
index 14625e6bc882..9a81d14aef6b 100644
--- a/drivers/scsi/qedf/qedf_main.c
+++ b/drivers/scsi/qedf/qedf_main.c
@@ -2737,6 +2737,7 @@ static int qedf_alloc_and_init_sb(struct qedf_ctx *qedf,
sb_id, QED_SB_TYPE_STORAGE);
if (ret) {
+ dma_free_coherent(&qedf->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
QEDF_ERR(&qedf->dbg_ctx,
"Status block initialization failed (0x%x) for id = %d.\n",
ret, sb_id);
diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
index cd0180b1f5b9..ede8d1f6ae23 100644
--- a/drivers/scsi/qedi/qedi_main.c
+++ b/drivers/scsi/qedi/qedi_main.c
@@ -369,6 +369,7 @@ static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
sb_id, QED_SB_TYPE_STORAGE);
if (ret) {
+ dma_free_coherent(&qedi->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
QEDI_ERR(&qedi->dbg_ctx,
"Status block initialization failed for id = %d.\n",
sb_id);
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index e6d8beb87776..dc9722b290f2 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -307,10 +307,6 @@ sg_open(struct inode *inode, struct file *filp)
if (retval)
goto sg_put;
- retval = scsi_autopm_get_device(device);
- if (retval)
- goto sdp_put;
-
/* scsi_block_when_processing_errors() may block so bypass
* check if O_NONBLOCK. Permits SCSI commands to be issued
* during error recovery. Tread carefully. */
@@ -318,7 +314,7 @@ sg_open(struct inode *inode, struct file *filp)
scsi_block_when_processing_errors(device))) {
retval = -ENXIO;
/* we are in error recovery for this device */
- goto error_out;
+ goto sdp_put;
}
mutex_lock(&sdp->open_rel_lock);
@@ -371,8 +367,6 @@ sg_open(struct inode *inode, struct file *filp)
}
error_mutex_locked:
mutex_unlock(&sdp->open_rel_lock);
-error_out:
- scsi_autopm_put_device(device);
sdp_put:
kref_put(&sdp->d_ref, sg_device_destroy);
scsi_device_put(device);
@@ -392,7 +386,6 @@ sg_release(struct inode *inode, struct file *filp)
SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, "sg_release\n"));
mutex_lock(&sdp->open_rel_lock);
- scsi_autopm_put_device(sdp->device);
kref_put(&sfp->f_ref, sg_remove_sfp);
sdp->open_cnt--;
diff --git a/drivers/sh/intc/core.c b/drivers/sh/intc/core.c
index ca4f4ca413f1..b19388b349be 100644
--- a/drivers/sh/intc/core.c
+++ b/drivers/sh/intc/core.c
@@ -209,7 +209,6 @@ int __init register_intc_controller(struct intc_desc *desc)
goto err0;
INIT_LIST_HEAD(&d->list);
- list_add_tail(&d->list, &intc_list);
raw_spin_lock_init(&d->lock);
INIT_RADIX_TREE(&d->tree, GFP_ATOMIC);
@@ -369,6 +368,7 @@ int __init register_intc_controller(struct intc_desc *desc)
d->skip_suspend = desc->skip_syscore_suspend;
+ list_add_tail(&d->list, &intc_list);
nr_intc_controllers++;
return 0;
diff --git a/drivers/soc/fsl/rcpm.c b/drivers/soc/fsl/rcpm.c
index 3d0cae30c769..06bd94b29fb3 100644
--- a/drivers/soc/fsl/rcpm.c
+++ b/drivers/soc/fsl/rcpm.c
@@ -36,6 +36,7 @@ static void copy_ippdexpcr1_setting(u32 val)
return;
regs = of_iomap(np, 0);
+ of_node_put(np);
if (!regs)
return;
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index ba788762835f..e339253ccba8 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -586,7 +586,8 @@ int geni_se_clk_tbl_get(struct geni_se *se, unsigned long **tbl)
for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) {
freq = clk_round_rate(se->clk, freq + 1);
- if (freq <= 0 || freq == se->clk_perf_tbl[i - 1])
+ if (freq <= 0 ||
+ (i > 0 && freq == se->clk_perf_tbl[i - 1]))
break;
se->clk_perf_tbl[i] = freq;
}
diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
index 880b41a57da0..2af2b2406fdf 100644
--- a/drivers/soc/qcom/socinfo.c
+++ b/drivers/soc/qcom/socinfo.c
@@ -757,10 +757,16 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
qs->attr.revision = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u.%u",
SOCINFO_MAJOR(le32_to_cpu(info->ver)),
SOCINFO_MINOR(le32_to_cpu(info->ver)));
- if (offsetof(struct socinfo, serial_num) <= item_size)
+ if (!qs->attr.soc_id || !qs->attr.revision)
+ return -ENOMEM;
+
+ if (offsetof(struct socinfo, serial_num) <= item_size) {
qs->attr.serial_number = devm_kasprintf(&pdev->dev, GFP_KERNEL,
"%u",
le32_to_cpu(info->serial_num));
+ if (!qs->attr.serial_number)
+ return -ENOMEM;
+ }
qs->soc_dev = soc_device_register(&qs->attr);
if (IS_ERR(qs->soc_dev))
diff --git a/drivers/soc/ti/smartreflex.c b/drivers/soc/ti/smartreflex.c
index 62b2f1464e46..55c48ddcf50d 100644
--- a/drivers/soc/ti/smartreflex.c
+++ b/drivers/soc/ti/smartreflex.c
@@ -202,10 +202,10 @@ static int sr_late_init(struct omap_sr *sr_info)
if (sr_class->notify && sr_class->notify_flags && sr_info->irq) {
ret = devm_request_irq(&sr_info->pdev->dev, sr_info->irq,
- sr_interrupt, 0, sr_info->name, sr_info);
+ sr_interrupt, IRQF_NO_AUTOEN,
+ sr_info->name, sr_info);
if (ret)
goto error;
- disable_irq(sr_info->irq);
}
return ret;
diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
index 098a2ecfd5c6..8f6a2614d8eb 100644
--- a/drivers/soc/xilinx/xlnx_event_manager.c
+++ b/drivers/soc/xilinx/xlnx_event_manager.c
@@ -174,8 +174,10 @@ static int xlnx_add_cb_for_suspend(event_cb_func_t cb_fun, void *data)
INIT_LIST_HEAD(&eve_data->cb_list_head);
cb_data = kmalloc(sizeof(*cb_data), GFP_KERNEL);
- if (!cb_data)
+ if (!cb_data) {
+ kfree(eve_data);
return -ENOMEM;
+ }
cb_data->eve_cb = cb_fun;
cb_data->agent_data = data;
diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
index 6f9e9d871677..e7ae7cb4b92a 100644
--- a/drivers/spi/atmel-quadspi.c
+++ b/drivers/spi/atmel-quadspi.c
@@ -183,7 +183,7 @@ static const char *atmel_qspi_reg_name(u32 offset, char *tmp, size_t sz)
case QSPI_MR:
return "MR";
case QSPI_RD:
- return "MR";
+ return "RD";
case QSPI_TD:
return "TD";
case QSPI_SR:
diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
index 13313f07839b..514a2c5c8422 100644
--- a/drivers/spi/spi-fsl-lpspi.c
+++ b/drivers/spi/spi-fsl-lpspi.c
@@ -891,7 +891,7 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
return ret;
}
- ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, 0,
+ ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, IRQF_NO_AUTOEN,
dev_name(&pdev->dev), fsl_lpspi);
if (ret) {
dev_err(&pdev->dev, "can't get irq%d: %d\n", irq, ret);
@@ -948,14 +948,10 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller);
if (ret == -EPROBE_DEFER)
goto out_pm_get;
- if (ret < 0)
+ if (ret < 0) {
dev_warn(&pdev->dev, "dma setup error %d, use pio\n", ret);
- else
- /*
- * disable LPSPI module IRQ when enable DMA mode successfully,
- * to prevent the unexpected LPSPI module IRQ events.
- */
- disable_irq(irq);
+ enable_irq(irq);
+ }
ret = devm_spi_register_controller(&pdev->dev, controller);
if (ret < 0) {
diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
index e9ad9b0b598b..d1afa4140e8a 100644
--- a/drivers/spi/spi-tegra210-quad.c
+++ b/drivers/spi/spi-tegra210-quad.c
@@ -341,7 +341,7 @@ tegra_qspi_fill_tx_fifo_from_client_txbuf(struct tegra_qspi *tqspi, struct spi_t
for (count = 0; count < max_n_32bit; count++) {
u32 x = 0;
- for (i = 0; len && (i < bytes_per_word); i++, len--)
+ for (i = 0; len && (i < min(4, bytes_per_word)); i++, len--)
x |= (u32)(*tx_buf++) << (i * 8);
tegra_qspi_writel(tqspi, x, QSPI_TX_FIFO);
}
diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
index 9a46b2478f4e..3503e6c0a5c9 100644
--- a/drivers/spi/spi-zynqmp-gqspi.c
+++ b/drivers/spi/spi-zynqmp-gqspi.c
@@ -1341,6 +1341,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
clk_dis_all:
pm_runtime_disable(&pdev->dev);
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
pm_runtime_put_noidle(&pdev->dev);
pm_runtime_set_suspended(&pdev->dev);
clk_disable_unprepare(xqspi->refclk);
@@ -1371,6 +1372,7 @@ static void zynqmp_qspi_remove(struct platform_device *pdev)
zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
pm_runtime_disable(&pdev->dev);
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
pm_runtime_put_noidle(&pdev->dev);
pm_runtime_set_suspended(&pdev->dev);
clk_disable_unprepare(xqspi->refclk);
diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index 5c57c7378ee7..72e514cee056 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -426,6 +426,16 @@ static int spi_probe(struct device *dev)
spi->irq = 0;
}
+ if (has_acpi_companion(dev) && spi->irq < 0) {
+ struct acpi_device *adev = to_acpi_device_node(dev->fwnode);
+
+ spi->irq = acpi_dev_gpio_irq_get(adev, 0);
+ if (spi->irq == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ if (spi->irq < 0)
+ spi->irq = 0;
+ }
+
ret = dev_pm_domain_attach(dev, true);
if (ret)
return ret;
@@ -2706,9 +2716,6 @@ static acpi_status acpi_register_spi_device(struct spi_controller *ctlr,
acpi_set_modalias(adev, acpi_device_hid(adev), spi->modalias,
sizeof(spi->modalias));
- if (spi->irq < 0)
- spi->irq = acpi_dev_gpio_irq_get(adev, 0);
-
acpi_device_set_enumerated(adev);
adev->power.flags.ignore_parent = true;
diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c
index 588f2adab058..760fe9bef211 100644
--- a/drivers/staging/media/atomisp/pci/sh_css_params.c
+++ b/drivers/staging/media/atomisp/pci/sh_css_params.c
@@ -4144,6 +4144,8 @@ ia_css_3a_statistics_allocate(const struct ia_css_3a_grid_info *grid)
goto err;
/* No weighted histogram, no structure, treat the histogram data as a byte dump in a byte array */
me->rgby_data = kvmalloc(sizeof_hmem(HMEM0_ID), GFP_KERNEL);
+ if (!me->rgby_data)
+ goto err;
IA_CSS_LEAVE("return=%p", me);
return me;
diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
index ffc2871a021c..f252bac676bc 100644
--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
@@ -144,7 +144,7 @@ static ssize_t current_uuid_show(struct device *dev,
struct int3400_thermal_priv *priv = dev_get_drvdata(dev);
int i, length = 0;
- if (priv->current_uuid_index > 0)
+ if (priv->current_uuid_index >= 0)
return sprintf(buf, "%s\n",
int3400_thermal_uuids[priv->current_uuid_index]);
diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
index d7ac7eef680e..dad909547179 100644
--- a/drivers/thermal/thermal_core.c
+++ b/drivers/thermal/thermal_core.c
@@ -1336,6 +1336,7 @@ thermal_zone_device_register_with_trips(const char *type, struct thermal_trip *t
thermal_zone_destroy_device_groups(tz);
goto remove_id;
}
+ thermal_zone_device_init(tz);
result = device_register(&tz->device);
if (result)
goto release_device;
@@ -1381,7 +1382,6 @@ thermal_zone_device_register_with_trips(const char *type, struct thermal_trip *t
INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_check);
- thermal_zone_device_init(tz);
/* Update the new thermal zone and mark it as already updated. */
if (atomic_cmpxchg(&tz->need_update, 1, 0))
thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
index e2aa2a1a02dd..ecbce226b874 100644
--- a/drivers/tty/serial/8250/8250_fintek.c
+++ b/drivers/tty/serial/8250/8250_fintek.c
@@ -21,6 +21,7 @@
#define CHIP_ID_F81866 0x1010
#define CHIP_ID_F81966 0x0215
#define CHIP_ID_F81216AD 0x1602
+#define CHIP_ID_F81216E 0x1617
#define CHIP_ID_F81216H 0x0501
#define CHIP_ID_F81216 0x0802
#define VENDOR_ID1 0x23
@@ -158,6 +159,7 @@ static int fintek_8250_check_id(struct fintek_8250 *pdata)
case CHIP_ID_F81866:
case CHIP_ID_F81966:
case CHIP_ID_F81216AD:
+ case CHIP_ID_F81216E:
case CHIP_ID_F81216H:
case CHIP_ID_F81216:
break;
@@ -181,6 +183,7 @@ static int fintek_8250_get_ldn_range(struct fintek_8250 *pdata, int *min,
return 0;
case CHIP_ID_F81216AD:
+ case CHIP_ID_F81216E:
case CHIP_ID_F81216H:
case CHIP_ID_F81216:
*min = F81216_LDN_LOW;
@@ -250,6 +253,7 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
break;
case CHIP_ID_F81216AD:
+ case CHIP_ID_F81216E:
case CHIP_ID_F81216H:
case CHIP_ID_F81216:
sio_write_mask_reg(pdata, FINTEK_IRQ_MODE, IRQ_SHARE,
@@ -263,7 +267,8 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
{
switch (pdata->pid) {
- case CHIP_ID_F81216H: /* 128Bytes FIFO */
+ case CHIP_ID_F81216E: /* 128Bytes FIFO */
+ case CHIP_ID_F81216H:
case CHIP_ID_F81966:
case CHIP_ID_F81866:
sio_write_mask_reg(pdata, FIFO_CTRL,
@@ -297,6 +302,7 @@ static void fintek_8250_set_termios(struct uart_port *port,
goto exit;
switch (pdata->pid) {
+ case CHIP_ID_F81216E:
case CHIP_ID_F81216H:
reg = RS485;
break;
@@ -346,6 +352,7 @@ static void fintek_8250_set_termios_handler(struct uart_8250_port *uart)
struct fintek_8250 *pdata = uart->port.private_data;
switch (pdata->pid) {
+ case CHIP_ID_F81216E:
case CHIP_ID_F81216H:
case CHIP_ID_F81966:
case CHIP_ID_F81866:
@@ -438,6 +445,11 @@ static void fintek_8250_set_rs485_handler(struct uart_8250_port *uart)
uart->port.rs485_supported = fintek_8250_rs485_supported;
break;
+ case CHIP_ID_F81216E: /* F81216E does not support RS485 delays */
+ uart->port.rs485_config = fintek_8250_rs485_config;
+ uart->port.rs485_supported = fintek_8250_rs485_supported;
+ break;
+
default: /* No RS485 Auto direction functional */
break;
}
diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
index 4caecc3525bf..9ed62bc7cdd8 100644
--- a/drivers/tty/serial/8250/8250_omap.c
+++ b/drivers/tty/serial/8250/8250_omap.c
@@ -766,12 +766,12 @@ static void omap_8250_shutdown(struct uart_port *port)
struct uart_8250_port *up = up_to_u8250p(port);
struct omap8250_priv *priv = port->private_data;
+ pm_runtime_get_sync(port->dev);
+
flush_work(&priv->qos_work);
if (up->dma)
omap_8250_rx_dma_flush(up);
- pm_runtime_get_sync(port->dev);
-
serial_out(up, UART_OMAP_WER, 0);
if (priv->habit & UART_HAS_EFR2)
serial_out(up, UART_OMAP_EFR2, 0x0);
diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
index 7a9924d9b294..f290fbe21d63 100644
--- a/drivers/tty/serial/sc16is7xx.c
+++ b/drivers/tty/serial/sc16is7xx.c
@@ -545,6 +545,8 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
SC16IS7XX_MCR_CLKSEL_BIT,
prescaler == 1 ? 0 : SC16IS7XX_MCR_CLKSEL_BIT);
+ mutex_lock(&one->efr_lock);
+
/* Open the LCR divisors for configuration */
sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
SC16IS7XX_LCR_CONF_MODE_A);
@@ -558,6 +560,8 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud)
/* Put LCR back to the normal mode */
sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr);
+ mutex_unlock(&one->efr_lock);
+
return DIV_ROUND_CLOSEST((clk / prescaler) / 16, div);
}
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 493fc4742895..117abcf366d9 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -3607,7 +3607,7 @@ static struct ctl_table tty_table[] = {
.data = &tty_ldisc_autoload,
.maxlen = sizeof(tty_ldisc_autoload),
.mode = 0644,
- .proc_handler = proc_dointvec,
+ .proc_handler = proc_dointvec_minmax,
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
index 3396e0388512..268189f01e15 100644
--- a/drivers/ufs/host/ufs-exynos.c
+++ b/drivers/ufs/host/ufs-exynos.c
@@ -1228,12 +1228,12 @@ static void exynos_ufs_dev_hw_reset(struct ufs_hba *hba)
hci_writel(ufs, 1 << 0, HCI_GPIO_OUT);
}
-static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, u8 enter)
+static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, enum uic_cmd_dme cmd)
{
struct exynos_ufs *ufs = ufshcd_get_variant(hba);
struct exynos_ufs_uic_attr *attr = ufs->drv_data->uic_attr;
- if (!enter) {
+ if (cmd == UIC_CMD_DME_HIBER_EXIT) {
if (ufs->opts & EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL)
exynos_ufs_disable_auto_ctrl_hcc(ufs);
exynos_ufs_ungate_clks(ufs);
@@ -1261,11 +1261,11 @@ static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, u8 enter)
}
}
-static void exynos_ufs_post_hibern8(struct ufs_hba *hba, u8 enter)
+static void exynos_ufs_post_hibern8(struct ufs_hba *hba, enum uic_cmd_dme cmd)
{
struct exynos_ufs *ufs = ufshcd_get_variant(hba);
- if (!enter) {
+ if (cmd == UIC_CMD_DME_HIBER_EXIT) {
u32 cur_mode = 0;
u32 pwrmode;
@@ -1284,7 +1284,7 @@ static void exynos_ufs_post_hibern8(struct ufs_hba *hba, u8 enter)
if (!(ufs->opts & EXYNOS_UFS_OPT_SKIP_CONNECTION_ESTAB))
exynos_ufs_establish_connt(ufs);
- } else {
+ } else if (cmd == UIC_CMD_DME_HIBER_ENTER) {
ufs->entry_hibern8_t = ktime_get();
exynos_ufs_gate_clks(ufs);
if (ufs->opts & EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL)
@@ -1371,15 +1371,15 @@ static int exynos_ufs_pwr_change_notify(struct ufs_hba *hba,
}
static void exynos_ufs_hibern8_notify(struct ufs_hba *hba,
- enum uic_cmd_dme enter,
+ enum uic_cmd_dme cmd,
enum ufs_notify_change_status notify)
{
switch ((u8)notify) {
case PRE_CHANGE:
- exynos_ufs_pre_hibern8(hba, enter);
+ exynos_ufs_pre_hibern8(hba, cmd);
break;
case POST_CHANGE:
- exynos_ufs_post_hibern8(hba, enter);
+ exynos_ufs_post_hibern8(hba, cmd);
break;
}
}
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 867000cdeb96..9971076c31de 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -1196,11 +1196,14 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
* pending to be processed by the driver.
*/
if (dep->trb_enqueue == dep->trb_dequeue) {
+ struct dwc3_request *req;
+
/*
- * If there is any request remained in the started_list at
- * this point, that means there is no TRB available.
+ * If any request remains in the started_list with active TRBs
+ * at this point, then there is no TRB available.
*/
- if (!list_empty(&dep->started_list))
+ req = next_request(&dep->started_list);
+ if (req && req->num_trbs)
return 0;
return DWC3_TRB_NUM - 1;
@@ -1433,8 +1436,8 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
struct scatterlist *s;
int i;
unsigned int length = req->request.length;
- unsigned int remaining = req->request.num_mapped_sgs
- - req->num_queued_sgs;
+ unsigned int remaining = req->num_pending_sgs;
+ unsigned int num_queued_sgs = req->request.num_mapped_sgs - remaining;
unsigned int num_trbs = req->num_trbs;
bool needs_extra_trb = dwc3_needs_extra_trb(dep, req);
@@ -1442,7 +1445,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
* If we resume preparing the request, then get the remaining length of
* the request and resume where we left off.
*/
- for_each_sg(req->request.sg, s, req->num_queued_sgs, i)
+ for_each_sg(req->request.sg, s, num_queued_sgs, i)
length -= sg_dma_len(s);
for_each_sg(sg, s, remaining, i) {
diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
index 0e151b54aae8..9225c21d1184 100644
--- a/drivers/usb/gadget/composite.c
+++ b/drivers/usb/gadget/composite.c
@@ -2111,8 +2111,20 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
memset(buf, 0, w_length);
buf[5] = 0x01;
switch (ctrl->bRequestType & USB_RECIP_MASK) {
+ /*
+ * The Microsoft CompatID OS Descriptor Spec (w_index = 0x4) and the
+ * Extended Prop OS Desc Spec (w_index = 0x5) state that the
+ * HighByte of wValue is the InterfaceNumber and the LowByte is
+ * the PageNumber. This high/low byte ordering is incorrectly
+ * documented in the Spec. USB analyzer output on the request
+ * packets below shows the high/low bytes inverted, i.e. the LowByte
+ * is the InterfaceNumber and the HighByte is the PageNumber.
+ * Since we don't support >64KB CompatID/ExtendedProp descriptors,
+ * PageNumber is set to 0. Hence verify that the HighByte is 0
+ * for the two cases below.
+ */
case USB_RECIP_DEVICE:
- if (w_index != 0x4 || (w_value & 0xff))
+ if (w_index != 0x4 || (w_value >> 8))
break;
buf[6] = w_index;
/* Number of ext compat interfaces */
@@ -2128,9 +2140,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
}
break;
case USB_RECIP_INTERFACE:
- if (w_index != 0x5 || (w_value & 0xff))
+ if (w_index != 0x5 || (w_value >> 8))
break;
- interface = w_value >> 8;
+ interface = w_value & 0xFF;
if (interface >= MAX_CONFIG_INTERFACES ||
!os_desc_cfg->interface[interface])
break;
diff --git a/drivers/usb/host/ehci-spear.c b/drivers/usb/host/ehci-spear.c
index d0e94e4c9fe2..11294f196ee3 100644
--- a/drivers/usb/host/ehci-spear.c
+++ b/drivers/usb/host/ehci-spear.c
@@ -105,7 +105,9 @@ static int spear_ehci_hcd_drv_probe(struct platform_device *pdev)
/* registers start at offset 0x0 */
hcd_to_ehci(hcd)->caps = hcd->regs;
- clk_prepare_enable(sehci->clk);
+ retval = clk_prepare_enable(sehci->clk);
+ if (retval)
+ goto err_put_hcd;
retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (retval)
goto err_stop_ehci;
@@ -130,8 +132,7 @@ static void spear_ehci_hcd_drv_remove(struct platform_device *pdev)
usb_remove_hcd(hcd);
- if (sehci->clk)
- clk_disable_unprepare(sehci->clk);
+ clk_disable_unprepare(sehci->clk);
usb_put_hcd(hcd);
}
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 6e38b6b480e0..258e64d6522c 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -994,6 +994,13 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
unsigned int slot_id = ep->vdev->slot_id;
int err;
+ /*
+ * This is not going to work if the hardware is changing its dequeue
+ * pointers as we look at them. Completion handler will call us later.
+ */
+ if (ep->ep_state & SET_DEQ_PENDING)
+ return 0;
+
xhci = ep->xhci;
list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) {
@@ -1354,7 +1361,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
struct xhci_ep_ctx *ep_ctx;
struct xhci_slot_ctx *slot_ctx;
struct xhci_td *td, *tmp_td;
- bool deferred = false;
ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2]));
@@ -1455,8 +1461,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
xhci_dbg(ep->xhci, "%s: Giveback cancelled URB %p TD\n",
__func__, td->urb);
xhci_td_cleanup(ep->xhci, td, ep_ring, td->status);
- } else if (td->cancel_status == TD_CLEARING_CACHE_DEFERRED) {
- deferred = true;
} else {
xhci_dbg(ep->xhci, "%s: Keep cancelled URB %p TD as cancel_status is %d\n",
__func__, td->urb, td->cancel_status);
@@ -1467,11 +1471,15 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
ep->queued_deq_seg = NULL;
ep->queued_deq_ptr = NULL;
- if (deferred) {
- /* We have more streams to clear */
+ /* Check for deferred or newly cancelled TDs */
+ if (!list_empty(&ep->cancelled_td_list)) {
xhci_dbg(ep->xhci, "%s: Pending TDs to clear, continuing with invalidation\n",
__func__);
xhci_invalidate_cancelled_tds(ep);
+ /* Try to restart the endpoint if all is done */
+ ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
+ /* Start giving back any TDs invalidated above */
+ xhci_giveback_invalidated_tds(ep);
} else {
/* Restart any rings with pending URBs */
xhci_dbg(ep->xhci, "%s: All TDs cleared, ring doorbell\n", __func__);
diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
index 6fb5140e29b9..225863321dc4 100644
--- a/drivers/usb/misc/chaoskey.c
+++ b/drivers/usb/misc/chaoskey.c
@@ -27,6 +27,8 @@ static struct usb_class_driver chaoskey_class;
static int chaoskey_rng_read(struct hwrng *rng, void *data,
size_t max, bool wait);
+static DEFINE_MUTEX(chaoskey_list_lock);
+
#define usb_dbg(usb_if, format, arg...) \
dev_dbg(&(usb_if)->dev, format, ## arg)
@@ -233,6 +235,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
usb_deregister_dev(interface, &chaoskey_class);
usb_set_intfdata(interface, NULL);
+ mutex_lock(&chaoskey_list_lock);
mutex_lock(&dev->lock);
dev->present = false;
@@ -244,6 +247,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
} else
mutex_unlock(&dev->lock);
+ mutex_unlock(&chaoskey_list_lock);
usb_dbg(interface, "disconnect done");
}
@@ -251,6 +255,7 @@ static int chaoskey_open(struct inode *inode, struct file *file)
{
struct chaoskey *dev;
struct usb_interface *interface;
+ int rv = 0;
/* get the interface from minor number and driver information */
interface = usb_find_interface(&chaoskey_driver, iminor(inode));
@@ -266,18 +271,23 @@ static int chaoskey_open(struct inode *inode, struct file *file)
}
file->private_data = dev;
+ mutex_lock(&chaoskey_list_lock);
mutex_lock(&dev->lock);
- ++dev->open;
+ if (dev->present)
+ ++dev->open;
+ else
+ rv = -ENODEV;
mutex_unlock(&dev->lock);
+ mutex_unlock(&chaoskey_list_lock);
- usb_dbg(interface, "open success");
- return 0;
+ return rv;
}
static int chaoskey_release(struct inode *inode, struct file *file)
{
struct chaoskey *dev = file->private_data;
struct usb_interface *interface;
+ int rv = 0;
if (dev == NULL)
return -ENODEV;
@@ -286,14 +296,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
usb_dbg(interface, "release");
+ mutex_lock(&chaoskey_list_lock);
mutex_lock(&dev->lock);
usb_dbg(interface, "open count at release is %d", dev->open);
if (dev->open <= 0) {
usb_dbg(interface, "invalid open count (%d)", dev->open);
- mutex_unlock(&dev->lock);
- return -ENODEV;
+ rv = -ENODEV;
+ goto bail;
}
--dev->open;
@@ -302,13 +313,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
if (dev->open == 0) {
mutex_unlock(&dev->lock);
chaoskey_free(dev);
- } else
- mutex_unlock(&dev->lock);
- } else
- mutex_unlock(&dev->lock);
-
+ goto destruction;
+ }
+ }
+bail:
+ mutex_unlock(&dev->lock);
+destruction:
+ mutex_unlock(&chaoskey_list_lock);
usb_dbg(interface, "release success");
- return 0;
+ return rv;
}
static void chaos_read_callback(struct urb *urb)
diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
index 1e3df27bab58..4fae04094021 100644
--- a/drivers/usb/misc/iowarrior.c
+++ b/drivers/usb/misc/iowarrior.c
@@ -277,28 +277,45 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
struct iowarrior *dev;
int read_idx;
int offset;
+ int retval;
dev = file->private_data;
+ if (file->f_flags & O_NONBLOCK) {
+ retval = mutex_trylock(&dev->mutex);
+ if (!retval)
+ return -EAGAIN;
+ } else {
+ retval = mutex_lock_interruptible(&dev->mutex);
+ if (retval)
+ return -ERESTARTSYS;
+ }
+
/* verify that the device wasn't unplugged */
- if (!dev || !dev->present)
- return -ENODEV;
+ if (!dev->present) {
+ retval = -ENODEV;
+ goto exit;
+ }
dev_dbg(&dev->interface->dev, "minor %d, count = %zd\n",
dev->minor, count);
/* read count must be packet size (+ time stamp) */
if ((count != dev->report_size)
- && (count != (dev->report_size + 1)))
- return -EINVAL;
+ && (count != (dev->report_size + 1))) {
+ retval = -EINVAL;
+ goto exit;
+ }
/* repeat until no buffer overrun in callback handler occur */
do {
atomic_set(&dev->overflow_flag, 0);
if ((read_idx = read_index(dev)) == -1) {
/* queue empty */
- if (file->f_flags & O_NONBLOCK)
- return -EAGAIN;
+ if (file->f_flags & O_NONBLOCK) {
+ retval = -EAGAIN;
+ goto exit;
+ }
else {
//next line will return when there is either new data, or the device is unplugged
int r = wait_event_interruptible(dev->read_wait,
@@ -309,28 +326,37 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
-1));
if (r) {
//we were interrupted by a signal
- return -ERESTART;
+ retval = -ERESTART;
+ goto exit;
}
if (!dev->present) {
//The device was unplugged
- return -ENODEV;
+ retval = -ENODEV;
+ goto exit;
}
if (read_idx == -1) {
// Can this happen ???
- return 0;
+ retval = 0;
+ goto exit;
}
}
}
offset = read_idx * (dev->report_size + 1);
if (copy_to_user(buffer, dev->read_queue + offset, count)) {
- return -EFAULT;
+ retval = -EFAULT;
+ goto exit;
}
} while (atomic_read(&dev->overflow_flag));
read_idx = ++read_idx == MAX_INTERRUPT_BUFFER ? 0 : read_idx;
atomic_set(&dev->read_idx, read_idx);
+ mutex_unlock(&dev->mutex);
return count;
+
+exit:
+ mutex_unlock(&dev->mutex);
+ return retval;
}
/*
@@ -886,7 +912,6 @@ static int iowarrior_probe(struct usb_interface *interface,
static void iowarrior_disconnect(struct usb_interface *interface)
{
struct iowarrior *dev = usb_get_intfdata(interface);
- int minor = dev->minor;
usb_deregister_dev(interface, &iowarrior_class);
@@ -910,9 +935,6 @@ static void iowarrior_disconnect(struct usb_interface *interface)
mutex_unlock(&dev->mutex);
iowarrior_delete(dev);
}
-
- dev_info(&interface->dev, "I/O-Warror #%d now disconnected\n",
- minor - IOWARRIOR_MINOR_BASE);
}
/* usb specific object needed to register this driver with the usb subsystem */
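[Editorial note, not part of the patch] The iowarrior_read() rework above holds the device mutex for the whole read and funnels every failure through a single unlock-and-return exit, taking the mutex with a trylock for O_NONBLOCK callers. A compact userspace approximation of just that entry/exit logic (invented helpers; pthread_mutex_trylock() in place of mutex_trylock(), and -EINTR where the kernel returns -ERESTARTSYS):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/*
 * Locking policy only: non-blocking callers must not sleep on the mutex,
 * blocking callers take it and may be interrupted.
 */
static int acquire_read_lock(pthread_mutex_t *mtx, bool nonblock)
{
	if (nonblock) {
		/* pthread_mutex_trylock() fails with EBUSY when contended. */
		if (pthread_mutex_trylock(mtx) != 0)
			return -EAGAIN;
		return 0;
	}
	return pthread_mutex_lock(mtx) ? -EINTR : 0;
}

static long do_read(pthread_mutex_t *mtx, bool nonblock, bool present)
{
	long retval = acquire_read_lock(mtx, nonblock);

	if (retval)
		return retval;

	if (!present) {			/* device unplugged under us */
		retval = -ENODEV;
		goto exit;
	}

	retval = 0;			/* ... actual copy-out would go here ... */
exit:
	pthread_mutex_unlock(mtx);	/* single unlock path, as in the patch */
	return retval;
}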
diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
index c313cd41f7a5..0eed614ac127 100644
--- a/drivers/usb/misc/yurex.c
+++ b/drivers/usb/misc/yurex.c
@@ -441,7 +441,10 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
if (count == 0)
goto error;
- mutex_lock(&dev->io_mutex);
+ retval = mutex_lock_interruptible(&dev->io_mutex);
+ if (retval < 0)
+ return -EINTR;
+
if (dev->disconnected) { /* already disconnected */
mutex_unlock(&dev->io_mutex);
retval = -ENODEV;
diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
index 051c6da7cf6d..f175cb2c3e7b 100644
--- a/drivers/usb/musb/musb_gadget.c
+++ b/drivers/usb/musb/musb_gadget.c
@@ -1170,12 +1170,19 @@ struct free_record {
*/
void musb_ep_restart(struct musb *musb, struct musb_request *req)
{
+ u16 csr;
+ void __iomem *epio = req->ep->hw_ep->regs;
+
trace_musb_req_start(req);
musb_ep_select(musb->mregs, req->epnum);
- if (req->tx)
+ if (req->tx) {
txstate(musb, req);
- else
- rxstate(musb, req);
+ } else {
+ csr = musb_readw(epio, MUSB_RXCSR);
+ csr |= MUSB_RXCSR_FLUSHFIFO | MUSB_RXCSR_P_WZC_BITS;
+ musb_writew(epio, MUSB_RXCSR, csr);
+ musb_writew(epio, MUSB_RXCSR, csr);
+ }
}
static int musb_ep_restart_resume_work(struct musb *musb, void *data)
diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c
index 64bdba7ea993..afb7192adc8e 100644
--- a/drivers/usb/typec/class.c
+++ b/drivers/usb/typec/class.c
@@ -2147,14 +2147,16 @@ void typec_port_register_altmodes(struct typec_port *port,
const struct typec_altmode_ops *ops, void *drvdata,
struct typec_altmode **altmodes, size_t n)
{
- struct fwnode_handle *altmodes_node, *child;
+ struct fwnode_handle *child;
struct typec_altmode_desc desc;
struct typec_altmode *alt;
size_t index = 0;
u32 svid, vdo;
int ret;
- altmodes_node = device_get_named_child_node(&port->dev, "altmodes");
+ struct fwnode_handle *altmodes_node __free(fwnode_handle) =
+ device_get_named_child_node(&port->dev, "altmodes");
+
if (!altmodes_node)
return; /* No altmodes specified */
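[Editorial note, not part of the patch] The typec change above moves altmodes_node to the kernel's __free(fwnode_handle) scope-based cleanup so the reference is dropped on every return path automatically. The same idea can be expressed with the plain GCC/Clang cleanup attribute; the sketch below uses made-up handle helpers and is not the kernel's cleanup.h machinery:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical refcounted handle standing in for a fwnode_handle. */
struct handle { int dummy; };

static struct handle *handle_get(void)
{
	return malloc(sizeof(struct handle));
}

static void handle_put(struct handle *h)
{
	free(h);
}

/* cleanup() runs the named function on the variable when it leaves scope. */
static void put_handlep(struct handle **hp)
{
	if (*hp)
		handle_put(*hp);
}
#define __free_handle __attribute__((cleanup(put_handlep)))

static int register_altmodes(int fail)
{
	struct handle *node __free_handle = handle_get();

	if (!node)
		return 0;	/* nothing acquired, nothing to release */

	if (fail)
		return -1;	/* early error: handle_put() still runs */

	printf("parsed altmodes\n");
	return 0;		/* handle_put() runs here as well */
}

int main(void)
{
	return register_altmodes(0) ? 1 : 0;
}

The design choice is the usual one for scope-based cleanup: early returns stay short because the release call is attached to the variable, not to each exit.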
diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
index 87d4abde0ea2..e08244f555f0 100644
--- a/drivers/usb/typec/tcpm/wcove.c
+++ b/drivers/usb/typec/tcpm/wcove.c
@@ -621,10 +621,6 @@ static int wcove_typec_probe(struct platform_device *pdev)
if (irq < 0)
return irq;
- irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
- if (irq < 0)
- return irq;
-
ret = guid_parse(WCOVE_DSM_UUID, &wcove->guid);
if (ret)
return ret;
diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
index 59fa9f3d5ec8..aa4ab4c847fd 100644
--- a/drivers/vdpa/mlx5/core/mr.c
+++ b/drivers/vdpa/mlx5/core/mr.c
@@ -227,7 +227,6 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
unsigned long lgcd = 0;
int log_entity_size;
unsigned long size;
- u64 start = 0;
int err;
struct page *pg;
unsigned int nsg;
@@ -238,10 +237,9 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
struct device *dma = mvdev->vdev.dma_dev;
for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
- map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
+ map; map = vhost_iotlb_itree_next(map, mr->start, mr->end - 1)) {
size = maplen(map, mr);
lgcd = gcd(lgcd, size);
- start += size;
}
log_entity_size = ilog2(lgcd);
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 7e2e62ab0869..a2ad4f7c716b 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -313,6 +313,10 @@ static int vfio_virt_config_read(struct vfio_pci_core_device *vdev, int pos,
return count;
}
+static struct perm_bits direct_ro_perms = {
+ .readfn = vfio_direct_config_read,
+};
+
/* Default capability regions to read-only, no-virtualization */
static struct perm_bits cap_perms[PCI_CAP_ID_MAX + 1] = {
[0 ... PCI_CAP_ID_MAX] = { .readfn = vfio_direct_config_read }
@@ -1897,9 +1901,17 @@ static ssize_t vfio_config_do_rw(struct vfio_pci_core_device *vdev, char __user
cap_start = *ppos;
} else {
if (*ppos >= PCI_CFG_SPACE_SIZE) {
- WARN_ON(cap_id > PCI_EXT_CAP_ID_MAX);
+ /*
+ * We can get a cap_id that exceeds PCI_EXT_CAP_ID_MAX
+ * if we're hiding an unknown capability at the start
+ * of the extended capability list. Use default, ro
+ * access, which will virtualize the id and next values.
+ */
+ if (cap_id > PCI_EXT_CAP_ID_MAX)
+ perm = &direct_ro_perms;
+ else
+ perm = &ecap_perms[cap_id];
- perm = &ecap_perms[cap_id];
cap_start = vfio_find_cap_start(vdev, *ppos);
} else {
WARN_ON(cap_id > PCI_CAP_ID_MAX);
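[Editorial note, not part of the patch] The vfio_pci hunk above stops indexing ecap_perms[] with a capability ID that can exceed PCI_EXT_CAP_ID_MAX and falls back to a default read-only permission set instead. The general shape, a bounds-checked table lookup with a safe default entry, is roughly this (illustrative limit and types, not the PCI constants):

/* Illustrative limit, not PCI_EXT_CAP_ID_MAX itself. */
#define CAP_ID_MAX 0x2e

struct perm {
	int writable;
};

static const struct perm default_ro_perm = { .writable = 0 };
static struct perm cap_perms[CAP_ID_MAX + 1];

/* Never index past the table: unknown capability IDs get the read-only default. */
static const struct perm *lookup_perm(unsigned int cap_id)
{
	if (cap_id > CAP_ID_MAX)
		return &default_ro_perm;
	return &cap_perms[cap_id];
}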
diff --git a/drivers/video/fbdev/sh7760fb.c b/drivers/video/fbdev/sh7760fb.c
index 08a4943dc541..d0ee5fec647a 100644
--- a/drivers/video/fbdev/sh7760fb.c
+++ b/drivers/video/fbdev/sh7760fb.c
@@ -409,12 +409,11 @@ static int sh7760fb_alloc_mem(struct fb_info *info)
vram = PAGE_SIZE;
fbmem = dma_alloc_coherent(info->device, vram, &par->fbdma, GFP_KERNEL);
-
if (!fbmem)
return -ENOMEM;
if ((par->fbdma & SH7760FB_DMA_MASK) != SH7760FB_DMA_MASK) {
- sh7760fb_free_mem(info);
+ dma_free_coherent(info->device, vram, fbmem, par->fbdma);
dev_err(info->device, "kernel gave me memory at 0x%08lx, which is"
"unusable for the LCDC\n", (unsigned long)par->fbdma);
return -ENOMEM;
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 1a9ded0cddcb..25164d56c9d9 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -313,7 +313,7 @@ int xenbus_dev_probe(struct device *_dev)
if (err) {
dev_warn(&dev->dev, "watch_otherend on %s failed.\n",
dev->nodename);
- return err;
+ goto fail_remove;
}
dev->spurious_threshold = 1;
@@ -322,6 +322,12 @@ int xenbus_dev_probe(struct device *_dev)
dev->nodename);
return 0;
+fail_remove:
+ if (drv->remove) {
+ down(&dev->reclaim_sem);
+ drv->remove(dev);
+ up(&dev->reclaim_sem);
+ }
fail_put:
module_put(drv->driver.owner);
fail:
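[Editorial note, not part of the patch] The xenbus hunk above plugs a leak in xenbus_dev_probe(): when registering the otherend watch fails after the driver's own probe already succeeded, the driver's remove callback has to run before the existing module_put() path. Condensed to its goto-based unwind ordering, with hypothetical callbacks, it looks like:

#include <errno.h>

struct drv {
	int (*probe)(void);
	void (*remove)(void);
};

static int watch_otherend(void)
{
	return -EIO;		/* pretend the watch registration fails */
}

static void module_put_stub(void)
{
}

static int dev_probe(struct drv *drv)
{
	int err;

	err = drv->probe();
	if (err)
		goto fail_put;

	err = watch_otherend();
	if (err)
		goto fail_remove;	/* undo the successful probe first */

	return 0;

fail_remove:
	if (drv->remove)
		drv->remove();		/* mirror of the new fail_remove: label */
fail_put:
	module_put_stub();
	return err;
}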
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 2eb4e03080ac..25c902e7556d 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -617,10 +617,16 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
atomic_inc(&cow->refs);
rcu_assign_pointer(root->node, cow);
- btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
- parent_start, last_ref);
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
+ parent_start, last_ref);
free_extent_buffer(buf);
add_root_to_dirty_list(root);
+ if (ret < 0) {
+ btrfs_tree_unlock(cow);
+ free_extent_buffer(cow);
+ btrfs_abort_transaction(trans, ret);
+ return ret;
+ }
} else {
WARN_ON(trans->transid != btrfs_header_generation(parent));
ret = btrfs_tree_mod_log_insert_key(parent, parent_slot,
@@ -645,8 +651,14 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
return ret;
}
}
- btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
- parent_start, last_ref);
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
+ parent_start, last_ref);
+ if (ret < 0) {
+ btrfs_tree_unlock(cow);
+ free_extent_buffer(cow);
+ btrfs_abort_transaction(trans, ret);
+ return ret;
+ }
}
if (unlock_orig)
btrfs_tree_unlock(buf);
@@ -1121,9 +1133,13 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
free_extent_buffer(mid);
root_sub_used(root, mid->len);
- btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
/* once for the root ptr */
free_extent_buffer_stale(mid);
+ if (ret < 0) {
+ btrfs_abort_transaction(trans, ret);
+ goto out;
+ }
return 0;
}
if (btrfs_header_nritems(mid) >
@@ -1191,10 +1207,14 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
goto out;
}
root_sub_used(root, right->len);
- btrfs_free_tree_block(trans, btrfs_root_id(root), right,
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(root), right,
0, 1);
free_extent_buffer_stale(right);
right = NULL;
+ if (ret < 0) {
+ btrfs_abort_transaction(trans, ret);
+ goto out;
+ }
} else {
struct btrfs_disk_key right_key;
btrfs_node_key(right, &right_key, 0);
@@ -1249,9 +1269,13 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
goto out;
}
root_sub_used(root, mid->len);
- btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(root), mid, 0, 1);
free_extent_buffer_stale(mid);
mid = NULL;
+ if (ret < 0) {
+ btrfs_abort_transaction(trans, ret);
+ goto out;
+ }
} else {
/* update the parent key to reflect our changes */
struct btrfs_disk_key mid_key;
@@ -2133,7 +2157,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
const struct btrfs_key *key, struct btrfs_path *p,
int ins_len, int cow)
{
- struct btrfs_fs_info *fs_info = root->fs_info;
+ struct btrfs_fs_info *fs_info;
struct extent_buffer *b;
int slot;
int ret;
@@ -2146,6 +2170,10 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
int min_write_lock_level;
int prev_cmp;
+ if (!root)
+ return -EINVAL;
+
+ fs_info = root->fs_info;
might_sleep();
lowest_level = p->lowest_level;
@@ -3022,7 +3050,11 @@ static noinline int insert_new_root(struct btrfs_trans_handle *trans,
old = root->node;
ret = btrfs_tree_mod_log_insert_root(root->node, c, false);
if (ret < 0) {
- btrfs_free_tree_block(trans, btrfs_root_id(root), c, 0, 1);
+ int ret2;
+
+ ret2 = btrfs_free_tree_block(trans, btrfs_root_id(root), c, 0, 1);
+ if (ret2 < 0)
+ btrfs_abort_transaction(trans, ret2);
btrfs_tree_unlock(c);
free_extent_buffer(c);
return ret;
@@ -4587,9 +4619,12 @@ static noinline int btrfs_del_leaf(struct btrfs_trans_handle *trans,
root_sub_used(root, leaf->len);
atomic_inc(&leaf->refs);
- btrfs_free_tree_block(trans, btrfs_root_id(root), leaf, 0, 1);
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(root), leaf, 0, 1);
free_extent_buffer_stale(leaf);
- return 0;
+ if (ret < 0)
+ btrfs_abort_transaction(trans, ret);
+
+ return ret;
}
/*
* delete the item at the leaf level in path. If that empties
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index b3680e1c7054..7aa8c1a2161b 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2401,7 +2401,7 @@ int btrfs_cross_ref_exist(struct btrfs_root *root, u64 objectid, u64 offset,
goto out;
ret = check_delayed_ref(root, path, objectid, offset, bytenr);
- } while (ret == -EAGAIN);
+ } while (ret == -EAGAIN && !path->nowait);
out:
btrfs_release_path(path);
@@ -3290,10 +3290,10 @@ static noinline int check_ref_cleanup(struct btrfs_trans_handle *trans,
return 0;
}
-void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
- u64 root_id,
- struct extent_buffer *buf,
- u64 parent, int last_ref)
+int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ u64 root_id,
+ struct extent_buffer *buf,
+ u64 parent, int last_ref)
{
struct btrfs_fs_info *fs_info = trans->fs_info;
struct btrfs_ref generic_ref = { 0 };
@@ -3307,7 +3307,8 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
if (root_id != BTRFS_TREE_LOG_OBJECTID) {
btrfs_ref_tree_mod(fs_info, &generic_ref);
ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, NULL);
- BUG_ON(ret); /* -ENOMEM */
+ if (ret < 0)
+ return ret;
}
if (last_ref && btrfs_header_generation(buf) == trans->transid) {
@@ -3371,6 +3372,7 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
*/
clear_bit(EXTENT_BUFFER_CORRUPT, &buf->bflags);
}
+ return 0;
}
/* Can return -ENOMEM */
@@ -5168,7 +5170,6 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans,
eb->start, level, 1,
&wc->refs[level],
&wc->flags[level]);
- BUG_ON(ret == -ENOMEM);
if (ret)
return ret;
if (unlikely(wc->refs[level] == 0)) {
@@ -5474,7 +5475,7 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
struct walk_control *wc)
{
struct btrfs_fs_info *fs_info = root->fs_info;
- int ret;
+ int ret = 0;
int level = wc->level;
struct extent_buffer *eb = path->nodes[level];
u64 parent = 0;
@@ -5565,12 +5566,14 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
goto owner_mismatch;
}
- btrfs_free_tree_block(trans, btrfs_root_id(root), eb, parent,
- wc->refs[level] == 1);
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(root), eb, parent,
+ wc->refs[level] == 1);
+ if (ret < 0)
+ btrfs_abort_transaction(trans, ret);
out:
wc->refs[level] = 0;
wc->flags[level] = 0;
- return 0;
+ return ret;
owner_mismatch:
btrfs_err_rl(fs_info, "unexpected tree owner, have %llu expect %llu",
diff --git a/fs/btrfs/extent-tree.h b/fs/btrfs/extent-tree.h
index 88c249c37516..ef1c1c99294e 100644
--- a/fs/btrfs/extent-tree.h
+++ b/fs/btrfs/extent-tree.h
@@ -114,10 +114,10 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
int level, u64 hint,
u64 empty_size,
enum btrfs_lock_nesting nest);
-void btrfs_free_tree_block(struct btrfs_trans_handle *trans,
- u64 root_id,
- struct extent_buffer *buf,
- u64 parent, int last_ref);
+int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
+ u64 root_id,
+ struct extent_buffer *buf,
+ u64 parent, int last_ref);
int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans,
struct btrfs_root *root, u64 owner,
u64 offset, u64 ram_bytes,
diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
index 7b598b070700..a0d8160b5375 100644
--- a/fs/btrfs/free-space-tree.c
+++ b/fs/btrfs/free-space-tree.c
@@ -1289,10 +1289,14 @@ int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info)
btrfs_tree_lock(free_space_root->node);
btrfs_clear_buffer_dirty(trans, free_space_root->node);
btrfs_tree_unlock(free_space_root->node);
- btrfs_free_tree_block(trans, btrfs_root_id(free_space_root),
- free_space_root->node, 0, 1);
-
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(free_space_root),
+ free_space_root->node, 0, 1);
btrfs_put_root(free_space_root);
+ if (ret < 0) {
+ btrfs_abort_transaction(trans, ret);
+ btrfs_end_transaction(trans);
+ return ret;
+ }
return btrfs_commit_transaction(trans);
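[Editorial note, not part of the patch] A recurring theme in the btrfs hunks above is turning btrfs_free_tree_block() from void into int and making each caller either propagate the error or abort the transaction. Reduced to stand-in names (this is not the btrfs API), the free-space-tree caller ends up with control flow like:

struct trans {
	int aborted;
};

static void abort_transaction(struct trans *t, int err)
{
	t->aborted = err;	/* mark the transaction unusable */
}

static int end_transaction(struct trans *t)
{
	return t->aborted;
}

static int commit_transaction(struct trans *t)
{
	return t->aborted;
}

/* Formerly void; now reports failures from the delayed-ref machinery. */
static int free_tree_block(struct trans *t)
{
	(void)t;
	return 0;		/* or a negative errno */
}

static int delete_tree(struct trans *t)
{
	int ret = free_tree_block(t);

	if (ret < 0) {
		abort_transaction(t, ret);	/* cannot continue safely */
		end_transaction(t);
		return ret;
	}
	return commit_transaction(t);
}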
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 5f0c9c3f3bbf..ae6806bc3929 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -707,6 +707,8 @@ static noinline int create_subvol(struct mnt_idmap *idmap,
ret = btrfs_insert_root(trans, fs_info->tree_root, &key,
root_item);
if (ret) {
+ int ret2;
+
/*
* Since we don't abort the transaction in this case, free the
* tree block so that we don't leak space and leave the
@@ -717,7 +719,9 @@ static noinline int create_subvol(struct mnt_idmap *idmap,
btrfs_tree_lock(leaf);
btrfs_clear_buffer_dirty(trans, leaf);
btrfs_tree_unlock(leaf);
- btrfs_free_tree_block(trans, objectid, leaf, 0, 1);
+ ret2 = btrfs_free_tree_block(trans, objectid, leaf, 0, 1);
+ if (ret2 < 0)
+ btrfs_abort_transaction(trans, ret2);
free_extent_buffer(leaf);
goto out;
}
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 74b82390fe84..1b9f4f16d124 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1320,9 +1320,11 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
btrfs_tree_lock(quota_root->node);
btrfs_clear_buffer_dirty(trans, quota_root->node);
btrfs_tree_unlock(quota_root->node);
- btrfs_free_tree_block(trans, btrfs_root_id(quota_root),
- quota_root->node, 0, 1);
+ ret = btrfs_free_tree_block(trans, btrfs_root_id(quota_root),
+ quota_root->node, 0, 1);
+ if (ret < 0)
+ btrfs_abort_transaction(trans, ret);
out:
btrfs_put_root(quota_root);
diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
index 1ea5bfb8876e..28ac7995716e 100644
--- a/fs/btrfs/ref-verify.c
+++ b/fs/btrfs/ref-verify.c
@@ -849,6 +849,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
"dropping a ref for a root that doesn't have a ref on the block");
dump_block_entry(fs_info, be);
dump_ref_action(fs_info, ra);
+ rb_erase(&ref->node, &be->refs);
kfree(ref);
kfree(ra);
goto out_unlock;
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 2185e2908dba..d1a0264b08a6 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -78,8 +78,10 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len);
ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
- if (!ret)
+ if (!ret) {
ret = len;
+ kiocb->ki_pos += ret;
+ }
return ret;
}
diff --git a/fs/ceph/super.c b/fs/ceph/super.c
index ec51e398562c..4f51a2e74d07 100644
--- a/fs/ceph/super.c
+++ b/fs/ceph/super.c
@@ -281,7 +281,9 @@ static int ceph_parse_new_source(const char *dev_name, const char *dev_name_end,
size_t len;
struct ceph_fsid fsid;
struct ceph_parse_opts_ctx *pctx = fc->fs_private;
+ struct ceph_options *opts = pctx->copts;
struct ceph_mount_options *fsopt = pctx->opts;
+ const char *name_start = dev_name;
char *fsid_start, *fs_name_start;
if (*dev_name_end != '=') {
@@ -292,8 +294,14 @@ static int ceph_parse_new_source(const char *dev_name, const char *dev_name_end,
fsid_start = strchr(dev_name, '@');
if (!fsid_start)
return invalfc(fc, "missing cluster fsid");
- ++fsid_start; /* start of cluster fsid */
+ len = fsid_start - name_start;
+ kfree(opts->name);
+ opts->name = kstrndup(name_start, len, GFP_KERNEL);
+ if (!opts->name)
+ return -ENOMEM;
+ dout("using %s entity name", opts->name);
+ ++fsid_start; /* start of cluster fsid */
fs_name_start = strchr(fsid_start, '.');
if (!fs_name_start)
return invalfc(fc, "missing file system name");
diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
index 6bd435a565f6..76566c2cbf63 100644
--- a/fs/erofs/zmap.c
+++ b/fs/erofs/zmap.c
@@ -234,7 +234,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m,
unsigned int amortizedshift;
erofs_off_t pos;
- if (lcn >= totalidx)
+ if (lcn >= totalidx || vi->z_logical_clusterbits > 14)
return -EINVAL;
m->lcn = lcn;
@@ -409,7 +409,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
u64 lcn = m->lcn, headlcn = map->m_la >> lclusterbits;
int err;
- do {
+ while (1) {
/* handle the last EOF pcluster (no next HEAD lcluster) */
if ((lcn << lclusterbits) >= inode->i_size) {
map->m_llen = inode->i_size - map->m_la;
@@ -421,14 +421,16 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
return err;
if (m->type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) {
- DBG_BUGON(!m->delta[1] &&
- m->clusterofs != 1 << lclusterbits);
+ /* work around invalid d1 generated by pre-1.0 mkfs */
+ if (unlikely(!m->delta[1])) {
+ m->delta[1] = 1;
+ DBG_BUGON(1);
+ }
} else if (m->type == Z_EROFS_LCLUSTER_TYPE_PLAIN ||
m->type == Z_EROFS_LCLUSTER_TYPE_HEAD1 ||
m->type == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
- /* go on until the next HEAD lcluster */
if (lcn != headlcn)
- break;
+ break; /* ends at the next HEAD lcluster */
m->delta[1] = 1;
} else {
erofs_err(inode->i_sb, "unknown type %u @ lcn %llu of nid %llu",
@@ -437,8 +439,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
return -EOPNOTSUPP;
}
lcn += m->delta[1];
- } while (m->delta[1]);
-
+ }
map->m_llen = (lcn << lclusterbits) + m->clusterofs - map->m_la;
return 0;
}
diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
index 95c51b025b91..f340e96b499f 100644
--- a/fs/exfat/namei.c
+++ b/fs/exfat/namei.c
@@ -377,6 +377,7 @@ static int exfat_find_empty_entry(struct inode *inode,
if (ei->start_clu == EXFAT_EOF_CLUSTER) {
ei->start_clu = clu.dir;
p_dir->dir = clu.dir;
+ hint_femp.eidx = 0;
}
/* append to the FAT chain */
diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
index 79b20d6ae39e..396474e9e2bf 100644
--- a/fs/ext4/balloc.c
+++ b/fs/ext4/balloc.c
@@ -545,7 +545,8 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group,
trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked);
ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO |
(ignore_locked ? REQ_RAHEAD : 0),
- ext4_end_bitmap_read);
+ ext4_end_bitmap_read,
+ ext4_simulate_fail(sb, EXT4_SIM_BBITMAP_EIO));
return bh;
verify:
err = ext4_validate_block_bitmap(sb, desc, block_group, bh);
@@ -569,7 +570,6 @@ int ext4_wait_block_bitmap(struct super_block *sb, ext4_group_t block_group,
if (!desc)
return -EFSCORRUPTED;
wait_on_buffer(bh);
- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_BBITMAP_EIO);
if (!buffer_uptodate(bh)) {
ext4_error_err(sb, EIO, "Cannot read block bitmap - "
"block_group = %u, block_bitmap = %llu",
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 7bbf0b9bdff2..3db01b933c3e 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -1849,14 +1849,6 @@ static inline bool ext4_simulate_fail(struct super_block *sb,
return false;
}
-static inline void ext4_simulate_fail_bh(struct super_block *sb,
- struct buffer_head *bh,
- unsigned long code)
-{
- if (!IS_ERR(bh) && ext4_simulate_fail(sb, code))
- clear_buffer_uptodate(bh);
-}
-
/*
* Error number codes for s_{first,last}_error_errno
*
@@ -3072,9 +3064,9 @@ extern struct buffer_head *ext4_sb_bread(struct super_block *sb,
extern struct buffer_head *ext4_sb_bread_unmovable(struct super_block *sb,
sector_t block);
extern void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
- bh_end_io_t *end_io);
+ bh_end_io_t *end_io, bool simu_fail);
extern int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
- bh_end_io_t *end_io);
+ bh_end_io_t *end_io, bool simu_fail);
extern int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
extern void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block);
extern int ext4_seq_options_show(struct seq_file *seq, void *offset);
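[Editorial note, not part of the patch] The ext4 changes above drop the old ext4_simulate_fail_bh() post-read hook in favour of a simu_fail argument evaluated before the read is submitted: when the fault is injected, the buffer is left not-uptodate and unlocked instead of being read. A tiny sketch of that submission-time injection shape, with invented names and no buffer_heads:

#include <errno.h>
#include <stdbool.h>

struct buf {
	bool locked;
	bool uptodate;
};

static void unlock_buf(struct buf *b)
{
	b->locked = false;
}

static int submit_read(struct buf *b)
{
	b->uptodate = true;	/* pretend the I/O completed fine */
	return 0;
}

/*
 * With simu_fail set, behave as if the read failed: leave the buffer
 * !uptodate and unlock it, so callers hit their normal error handling.
 */
static int read_buf(struct buf *b, bool simu_fail)
{
	if (simu_fail) {
		b->uptodate = false;
		unlock_buf(b);
		return -EIO;
	}
	return submit_read(b);
}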
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 1c059ac1c1ef..5ea75af6ca22 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -564,7 +564,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
if (!bh_uptodate_or_lock(bh)) {
trace_ext4_ext_load_extent(inode, pblk, _RET_IP_);
- err = ext4_read_bh(bh, 0, NULL);
+ err = ext4_read_bh(bh, 0, NULL, false);
if (err < 0)
goto errout;
}
diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
index cdf9bfe10137..53a05b8292f0 100644
--- a/fs/ext4/fsmap.c
+++ b/fs/ext4/fsmap.c
@@ -185,6 +185,56 @@ static inline ext4_fsblk_t ext4_fsmap_next_pblk(struct ext4_fsmap *fmr)
return fmr->fmr_physical + fmr->fmr_length;
}
+static int ext4_getfsmap_meta_helper(struct super_block *sb,
+ ext4_group_t agno, ext4_grpblk_t start,
+ ext4_grpblk_t len, void *priv)
+{
+ struct ext4_getfsmap_info *info = priv;
+ struct ext4_fsmap *p;
+ struct ext4_fsmap *tmp;
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ ext4_fsblk_t fsb, fs_start, fs_end;
+ int error;
+
+ fs_start = fsb = (EXT4_C2B(sbi, start) +
+ ext4_group_first_block_no(sb, agno));
+ fs_end = fs_start + EXT4_C2B(sbi, len);
+
+ /* Return relevant extents from the meta_list */
+ list_for_each_entry_safe(p, tmp, &info->gfi_meta_list, fmr_list) {
+ if (p->fmr_physical < info->gfi_next_fsblk) {
+ list_del(&p->fmr_list);
+ kfree(p);
+ continue;
+ }
+ if (p->fmr_physical <= fs_start ||
+ p->fmr_physical + p->fmr_length <= fs_end) {
+ /* Emit the retained free extent record if present */
+ if (info->gfi_lastfree.fmr_owner) {
+ error = ext4_getfsmap_helper(sb, info,
+ &info->gfi_lastfree);
+ if (error)
+ return error;
+ info->gfi_lastfree.fmr_owner = 0;
+ }
+ error = ext4_getfsmap_helper(sb, info, p);
+ if (error)
+ return error;
+ fsb = p->fmr_physical + p->fmr_length;
+ if (info->gfi_next_fsblk < fsb)
+ info->gfi_next_fsblk = fsb;
+ list_del(&p->fmr_list);
+ kfree(p);
+ continue;
+ }
+ }
+ if (info->gfi_next_fsblk < fsb)
+ info->gfi_next_fsblk = fsb;
+
+ return 0;
+}
+
+
/* Transform a blockgroup's free record into a fsmap */
static int ext4_getfsmap_datadev_helper(struct super_block *sb,
ext4_group_t agno, ext4_grpblk_t start,
@@ -539,6 +589,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
error = ext4_mballoc_query_range(sb, info->gfi_agno,
EXT4_B2C(sbi, info->gfi_low.fmr_physical),
EXT4_B2C(sbi, info->gfi_high.fmr_physical),
+ ext4_getfsmap_meta_helper,
ext4_getfsmap_datadev_helper, info);
if (error)
goto err;
@@ -560,7 +611,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
/* Report any gaps at the end of the bg */
info->gfi_last = true;
- error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster, 0, info);
+ error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,
+ 0, info);
if (error)
goto err;
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index 1a1e2214c581..d4d0ad689d3c 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -194,8 +194,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
* submit the buffer_head for reading
*/
trace_ext4_load_inode_bitmap(sb, block_group);
- ext4_read_bh(bh, REQ_META | REQ_PRIO, ext4_end_bitmap_read);
- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_IBITMAP_EIO);
+ ext4_read_bh(bh, REQ_META | REQ_PRIO,
+ ext4_end_bitmap_read,
+ ext4_simulate_fail(sb, EXT4_SIM_IBITMAP_EIO));
if (!buffer_uptodate(bh)) {
put_bh(bh);
ext4_error_err(sb, EIO, "Cannot read inode bitmap - "
diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
index a9f3716119d3..f2c495b745f1 100644
--- a/fs/ext4/indirect.c
+++ b/fs/ext4/indirect.c
@@ -170,7 +170,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
}
if (!bh_uptodate_or_lock(bh)) {
- if (ext4_read_bh(bh, 0, NULL) < 0) {
+ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
put_bh(bh);
goto failure;
}
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 14f7098bcefe..18ec9106c5b0 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4508,10 +4508,10 @@ static int __ext4_get_inode_loc(struct super_block *sb, unsigned long ino,
* Read the block from disk.
*/
trace_ext4_load_inode(sb, ino);
- ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL);
+ ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL,
+ ext4_simulate_fail(sb, EXT4_SIM_INODE_EIO));
blk_finish_plug(&plug);
wait_on_buffer(bh);
- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_INODE_EIO);
if (!buffer_uptodate(bh)) {
if (ret_block)
*ret_block = block;
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 87ba7f58216f..8a9f8c95c6f1 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -7155,13 +7155,14 @@ int
ext4_mballoc_query_range(
struct super_block *sb,
ext4_group_t group,
- ext4_grpblk_t start,
+ ext4_grpblk_t first,
ext4_grpblk_t end,
+ ext4_mballoc_query_range_fn meta_formatter,
ext4_mballoc_query_range_fn formatter,
void *priv)
{
void *bitmap;
- ext4_grpblk_t next;
+ ext4_grpblk_t start, next;
struct ext4_buddy e4b;
int error;
@@ -7172,10 +7173,19 @@ ext4_mballoc_query_range(
ext4_lock_group(sb, group);
- start = max(e4b.bd_info->bb_first_free, start);
+ start = max(e4b.bd_info->bb_first_free, first);
if (end >= EXT4_CLUSTERS_PER_GROUP(sb))
end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
-
+ if (meta_formatter && start != first) {
+ if (start > end)
+ start = end;
+ ext4_unlock_group(sb, group);
+ error = meta_formatter(sb, group, first, start - first,
+ priv);
+ if (error)
+ goto out_unload;
+ ext4_lock_group(sb, group);
+ }
while (start <= end) {
start = mb_find_next_zero_bit(bitmap, end + 1, start);
if (start > end)
diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
index 498af2abc5d8..dd16050022f5 100644
--- a/fs/ext4/mballoc.h
+++ b/fs/ext4/mballoc.h
@@ -260,6 +260,7 @@ ext4_mballoc_query_range(
ext4_group_t agno,
ext4_grpblk_t start,
ext4_grpblk_t end,
+ ext4_mballoc_query_range_fn meta_formatter,
ext4_mballoc_query_range_fn formatter,
void *priv);
diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
index bd946d0c71b7..d64c04ed061a 100644
--- a/fs/ext4/mmp.c
+++ b/fs/ext4/mmp.c
@@ -94,7 +94,7 @@ static int read_mmp_block(struct super_block *sb, struct buffer_head **bh,
}
lock_buffer(*bh);
- ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL);
+ ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL, false);
if (ret)
goto warn_exit;
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index 0bfd5ff103aa..5e6b07b34960 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -165,15 +165,16 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,
return 0;
}
-/* Force page buffers uptodate w/o dropping page's lock */
-static int
-mext_page_mkuptodate(struct folio *folio, unsigned from, unsigned to)
+/* Force folio buffers uptodate w/o dropping folio's lock */
+static int mext_page_mkuptodate(struct folio *folio, size_t from, size_t to)
{
struct inode *inode = folio->mapping->host;
sector_t block;
- struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
+ struct buffer_head *bh, *head;
unsigned int blocksize, block_start, block_end;
- int i, err, nr = 0, partial = 0;
+ int nr = 0;
+ bool partial = false;
+
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
@@ -193,38 +194,44 @@ mext_page_mkuptodate(struct folio *folio, unsigned from, unsigned to)
block_end = block_start + blocksize;
if (block_end <= from || block_start >= to) {
if (!buffer_uptodate(bh))
- partial = 1;
+ partial = true;
continue;
}
if (buffer_uptodate(bh))
continue;
if (!buffer_mapped(bh)) {
- err = ext4_get_block(inode, block, bh, 0);
- if (err) {
- folio_set_error(folio);
+ int err = ext4_get_block(inode, block, bh, 0);
+ if (err)
return err;
- }
if (!buffer_mapped(bh)) {
folio_zero_range(folio, block_start, blocksize);
set_buffer_uptodate(bh);
continue;
}
}
- BUG_ON(nr >= MAX_BUF_PER_PAGE);
- arr[nr++] = bh;
+ lock_buffer(bh);
+ if (buffer_uptodate(bh)) {
+ unlock_buffer(bh);
+ continue;
+ }
+ ext4_read_bh_nowait(bh, 0, NULL, false);
+ nr++;
}
/* No io required */
if (!nr)
goto out;
- for (i = 0; i < nr; i++) {
- bh = arr[i];
- if (!bh_uptodate_or_lock(bh)) {
- err = ext4_read_bh(bh, 0, NULL);
- if (err)
- return err;
- }
- }
+ bh = head;
+ do {
+ if (bh_offset(bh) + blocksize <= from)
+ continue;
+ if (bh_offset(bh) > to)
+ break;
+ wait_on_buffer(bh);
+ if (buffer_uptodate(bh))
+ continue;
+ return -EIO;
+ } while ((bh = bh->b_this_page) != head);
out:
if (!partial)
folio_mark_uptodate(folio);
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index dfdd7e5cf038..7ab4f5a9bf5b 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -117,7 +117,6 @@ static void ext4_finish_bio(struct bio *bio)
if (bio->bi_status) {
int err = blk_status_to_errno(bio->bi_status);
- folio_set_error(folio);
mapping_set_error(folio->mapping, err);
}
bh = head = folio_buffers(folio);
@@ -441,8 +440,6 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
BUG_ON(!folio_test_locked(folio));
BUG_ON(folio_test_writeback(folio));
- folio_clear_error(folio);
-
/*
* Comments copied from block_write_full_page:
*
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index 3e7d160f543f..8cb83e7b699b 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -296,7 +296,6 @@ int ext4_mpage_readpages(struct inode *inode,
if (ext4_map_blocks(NULL, inode, &map, 0) < 0) {
set_error_page:
- folio_set_error(folio);
folio_zero_segment(folio, 0,
folio_size(folio));
folio_unlock(folio);
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index 5f105171df7b..b34007541e08 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -1301,7 +1301,7 @@ static struct buffer_head *ext4_get_bitmap(struct super_block *sb, __u64 block)
if (unlikely(!bh))
return NULL;
if (!bh_uptodate_or_lock(bh)) {
- if (ext4_read_bh(bh, 0, NULL) < 0) {
+ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
brelse(bh);
return NULL;
}
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 1d14a38017a7..2346ef071b24 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -161,8 +161,14 @@ MODULE_ALIAS("ext3");
static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
- bh_end_io_t *end_io)
+ bh_end_io_t *end_io, bool simu_fail)
{
+ if (simu_fail) {
+ clear_buffer_uptodate(bh);
+ unlock_buffer(bh);
+ return;
+ }
+
/*
* buffer's verified bit is no longer valid after reading from
* disk again due to write out error, clear it to make sure we
@@ -176,7 +182,7 @@ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
}
void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
- bh_end_io_t *end_io)
+ bh_end_io_t *end_io, bool simu_fail)
{
BUG_ON(!buffer_locked(bh));
@@ -184,10 +190,11 @@ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
unlock_buffer(bh);
return;
}
- __ext4_read_bh(bh, op_flags, end_io);
+ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
}
-int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io)
+int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+ bh_end_io_t *end_io, bool simu_fail)
{
BUG_ON(!buffer_locked(bh));
@@ -196,7 +203,7 @@ int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io
return 0;
}
- __ext4_read_bh(bh, op_flags, end_io);
+ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
wait_on_buffer(bh);
if (buffer_uptodate(bh))
@@ -208,10 +215,10 @@ int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait)
{
lock_buffer(bh);
if (!wait) {
- ext4_read_bh_nowait(bh, op_flags, NULL);
+ ext4_read_bh_nowait(bh, op_flags, NULL, false);
return 0;
}
- return ext4_read_bh(bh, op_flags, NULL);
+ return ext4_read_bh(bh, op_flags, NULL, false);
}
/*
@@ -259,7 +266,7 @@ void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)
if (likely(bh)) {
if (trylock_buffer(bh))
- ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL);
+ ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL, false);
brelse(bh);
}
}
@@ -339,9 +346,9 @@ __u32 ext4_free_group_clusters(struct super_block *sb,
__u32 ext4_free_inodes_count(struct super_block *sb,
struct ext4_group_desc *bg)
{
- return le16_to_cpu(bg->bg_free_inodes_count_lo) |
+ return le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_lo)) |
(EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT ?
- (__u32)le16_to_cpu(bg->bg_free_inodes_count_hi) << 16 : 0);
+ (__u32)le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_hi)) << 16 : 0);
}
__u32 ext4_used_dirs_count(struct super_block *sb,
@@ -395,9 +402,9 @@ void ext4_free_group_clusters_set(struct super_block *sb,
void ext4_free_inodes_set(struct super_block *sb,
struct ext4_group_desc *bg, __u32 count)
{
- bg->bg_free_inodes_count_lo = cpu_to_le16((__u16)count);
+ WRITE_ONCE(bg->bg_free_inodes_count_lo, cpu_to_le16((__u16)count));
if (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT)
- bg->bg_free_inodes_count_hi = cpu_to_le16(count >> 16);
+ WRITE_ONCE(bg->bg_free_inodes_count_hi, cpu_to_le16(count >> 16));
}
void ext4_used_dirs_set(struct super_block *sb,
@@ -6544,9 +6551,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
goto restore_opts;
}
- if (test_opt2(sb, ABORT))
- ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
-
sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
(test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
@@ -6715,6 +6719,14 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
ext4_stop_mmpd(sbi);
+ /*
+ * Handle aborting the filesystem as the last thing during remount to
+ * avoid obscure errors during remount when some option changes fail to
+ * apply due to shutdown filesystem.
+ */
+ if (test_opt2(sb, ABORT))
+ ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
+
return 0;
restore_opts:
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 1a33a8c1623f..c6317596e695 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -32,7 +32,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io,
f2fs_build_fault_attr(sbi, 0, 0);
if (!end_io)
f2fs_flush_merged_writes(sbi);
- f2fs_handle_critical_error(sbi, reason, end_io);
+ f2fs_handle_critical_error(sbi, reason);
}
/*
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 1c59a3b2b2c3..acd0764b0286 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -924,6 +924,7 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
#ifdef CONFIG_BLK_DEV_ZONED
static bool is_end_zone_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr)
{
+ struct block_device *bdev = sbi->sb->s_bdev;
int devi = 0;
if (f2fs_is_multi_device(sbi)) {
@@ -934,8 +935,9 @@ static bool is_end_zone_blkaddr(struct f2fs_sb_info *sbi, block_t blkaddr)
return false;
}
blkaddr -= FDEV(devi).start_blk;
+ bdev = FDEV(devi).bdev;
}
- return bdev_zoned_model(FDEV(devi).bdev) == BLK_ZONED_HM &&
+ return bdev_is_zoned(bdev) &&
f2fs_blkz_is_seq(sbi, devi, blkaddr) &&
(blkaddr % sbi->blocks_per_blkz == sbi->blocks_per_blkz - 1);
}
@@ -1873,25 +1875,6 @@ static int f2fs_xattr_fiemap(struct inode *inode,
return (err < 0 ? err : 0);
}
-static loff_t max_inode_blocks(struct inode *inode)
-{
- loff_t result = ADDRS_PER_INODE(inode);
- loff_t leaf_count = ADDRS_PER_BLOCK(inode);
-
- /* two direct node blocks */
- result += (leaf_count * 2);
-
- /* two indirect node blocks */
- leaf_count *= NIDS_PER_BLOCK;
- result += (leaf_count * 2);
-
- /* one double indirect node block */
- leaf_count *= NIDS_PER_BLOCK;
- result += leaf_count;
-
- return result;
-}
-
int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len)
{
@@ -1964,8 +1947,7 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
if (!compr_cluster && !(map.m_flags & F2FS_MAP_FLAGS)) {
start_blk = next_pgofs;
- if (blks_to_bytes(inode, start_blk) < blks_to_bytes(inode,
- max_inode_blocks(inode)))
+ if (blks_to_bytes(inode, start_blk) < maxbytes)
goto prep_next;
flags |= FIEMAP_EXTENT_LAST;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 7faf9446ea5d..33620642ae5e 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -3588,8 +3588,7 @@ int f2fs_quota_sync(struct super_block *sb, int type);
loff_t max_file_blocks(struct inode *inode);
void f2fs_quota_off_umount(struct super_block *sb);
void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
- bool irq_context);
+void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason);
void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error);
void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error);
int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 74fac935bd09..196755a34833 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -846,7 +846,11 @@ static bool f2fs_force_buffered_io(struct inode *inode, int rw)
return true;
if (f2fs_compressed_file(inode))
return true;
- if (f2fs_has_inline_data(inode))
+ /*
+ * only force direct read to use buffered IO, for direct write,
+ * it expects inline data conversion before committing IO.
+ */
+ if (f2fs_has_inline_data(inode) && rw == READ)
return true;
/* disallow direct IO if any of devices has unaligned blksize */
@@ -2308,9 +2312,12 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
if (readonly)
goto out;
- /* grab sb->s_umount to avoid racing w/ remount() */
+ /*
+ * grab sb->s_umount to avoid racing w/ remount() and other shutdown
+ * paths.
+ */
if (need_lock)
- down_read(&sbi->sb->s_umount);
+ down_write(&sbi->sb->s_umount);
f2fs_stop_gc_thread(sbi);
f2fs_stop_discard_thread(sbi);
@@ -2319,7 +2326,7 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
clear_opt(sbi, DISCARD);
if (need_lock)
- up_read(&sbi->sb->s_umount);
+ up_write(&sbi->sb->s_umount);
f2fs_update_time(sbi, REQ_TIME);
out:
@@ -3755,7 +3762,7 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count,
to_reserved = cluster_size - compr_blocks - reserved;
/* for the case all blocks in cluster were reserved */
- if (to_reserved == 1) {
+ if (reserved && to_reserved == 1) {
dn->ofs_in_node += cluster_size;
goto next;
}
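[Editorial note, not part of the patch] Earlier in the f2fs/file.c changes above, f2fs_do_shutdown() switches from taking sb->s_umount shared to taking it exclusively, so shutdown is serialized against remount and against any other shutdown caller. With a plain reader/writer lock the difference looks like this (userspace sketch; a pthread rwlock stands in for s_umount):

#include <pthread.h>

/* Stand-in for sb->s_umount. */
static pthread_rwlock_t umount_sem = PTHREAD_RWLOCK_INITIALIZER;

/* Old behaviour: shared acquisition, so two shutdowns could overlap. */
static void do_shutdown_shared(void)
{
	pthread_rwlock_rdlock(&umount_sem);
	/* ... stop GC/discard threads, clear options ... */
	pthread_rwlock_unlock(&umount_sem);
}

/*
 * New behaviour: exclusive acquisition, serializing shutdown against
 * remount and against any other shutdown caller.
 */
static void do_shutdown_exclusive(void)
{
	pthread_rwlock_wrlock(&umount_sem);
	/* ... stop GC/discard threads, clear options ... */
	pthread_rwlock_unlock(&umount_sem);
}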
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 888c301ffe8f..e99041582414 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -228,6 +228,8 @@ static int select_gc_type(struct f2fs_sb_info *sbi, int gc_type)
switch (sbi->gc_mode) {
case GC_IDLE_CB:
+ case GC_URGENT_LOW:
+ case GC_URGENT_MID:
gc_mode = GC_CB;
break;
case GC_IDLE_GREEDY:
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index c765bda3beaa..a9ab93d30dce 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -905,6 +905,16 @@ static int truncate_node(struct dnode_of_data *dn)
if (err)
return err;
+ if (ni.blk_addr != NEW_ADDR &&
+ !f2fs_is_valid_blkaddr(sbi, ni.blk_addr, DATA_GENERIC_ENHANCE)) {
+ f2fs_err_ratelimited(sbi,
+ "nat entry is corrupted, run fsck to fix it, ino:%u, "
+ "nid:%u, blkaddr:%u", ni.ino, ni.nid, ni.blk_addr);
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+ f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
+ return -EFSCORRUPTED;
+ }
+
/* Deallocate node address */
f2fs_invalidate_blocks(sbi, ni.blk_addr);
dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino);
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index c0ba379a6d8f..670104628ddb 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -2848,7 +2848,8 @@ static void change_curseg(struct f2fs_sb_info *sbi, int type)
struct f2fs_summary_block *sum_node;
struct page *sum_page;
- write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
+ if (curseg->inited)
+ write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
__set_test_and_inuse(sbi, new_segno);
@@ -3757,8 +3758,8 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
}
}
- f2fs_bug_on(sbi, !IS_DATASEG(type));
curseg = CURSEG_I(sbi, type);
+ f2fs_bug_on(sbi, !IS_DATASEG(curseg->seg_type));
mutex_lock(&curseg->curseg_mutex);
down_write(&sit_i->sentry_lock);
diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
index 952970166d5d..cd2ec6acc717 100644
--- a/fs/f2fs/segment.h
+++ b/fs/f2fs/segment.h
@@ -559,18 +559,21 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
}
static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
- unsigned int node_blocks, unsigned int dent_blocks)
+ unsigned int node_blocks, unsigned int data_blocks,
+ unsigned int dent_blocks)
{
- unsigned segno, left_blocks;
+ unsigned int segno, left_blocks, blocks;
int i;
- /* check current node sections in the worst case. */
- for (i = CURSEG_HOT_NODE; i <= CURSEG_COLD_NODE; i++) {
+ /* check current data/node sections in the worst case. */
+ for (i = CURSEG_HOT_DATA; i < NR_PERSISTENT_LOG; i++) {
segno = CURSEG_I(sbi, i)->segno;
left_blocks = CAP_BLKS_PER_SEC(sbi) -
get_ckpt_valid_blocks(sbi, segno, true);
- if (node_blocks > left_blocks)
+
+ blocks = i <= CURSEG_COLD_DATA ? data_blocks : node_blocks;
+ if (blocks > left_blocks)
return false;
}
@@ -584,8 +587,9 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
}
/*
- * calculate needed sections for dirty node/dentry
- * and call has_curseg_enough_space
+ * calculate needed sections for dirty node/dentry and call
+ * has_curseg_enough_space, please note that, it needs to account
+ * dirty data as well in lfs mode when checkpoint is disabled.
*/
static inline void __get_secs_required(struct f2fs_sb_info *sbi,
unsigned int *lower_p, unsigned int *upper_p, bool *curseg_p)
@@ -594,19 +598,30 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
get_pages(sbi, F2FS_DIRTY_DENTS) +
get_pages(sbi, F2FS_DIRTY_IMETA);
unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
+ unsigned int total_data_blocks = 0;
unsigned int node_secs = total_node_blocks / CAP_BLKS_PER_SEC(sbi);
unsigned int dent_secs = total_dent_blocks / CAP_BLKS_PER_SEC(sbi);
+ unsigned int data_secs = 0;
unsigned int node_blocks = total_node_blocks % CAP_BLKS_PER_SEC(sbi);
unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
+ unsigned int data_blocks = 0;
+
+ if (f2fs_lfs_mode(sbi) &&
+ unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
+ total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA);
+ data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi);
+ data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi);
+ }
if (lower_p)
- *lower_p = node_secs + dent_secs;
+ *lower_p = node_secs + dent_secs + data_secs;
if (upper_p)
*upper_p = node_secs + dent_secs +
- (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
+ (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) +
+ (data_blocks ? 1 : 0);
if (curseg_p)
*curseg_p = has_curseg_enough_space(sbi,
- node_blocks, dent_blocks);
+ node_blocks, data_blocks, dent_blocks);
}
static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
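[Editorial note, not part of the patch] The __get_secs_required() hunk above folds dirty data blocks into the section estimates when LFS mode runs with checkpointing disabled. The underlying arithmetic is just a ceiling division: count/blocks-per-section full sections, plus one more for any remainder. A generic helper for that calculation (not a copy of the f2fs function):

/*
 * Sections needed to hold 'blocks' dirty blocks when each section
 * holds 'blks_per_sec' blocks: integer division rounded up, written
 * without risking overflow from (blocks + blks_per_sec - 1).
 */
static unsigned int secs_needed(unsigned int blocks, unsigned int blks_per_sec)
{
	return blocks / blks_per_sec + (blocks % blks_per_sec ? 1 : 0);
}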
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 540fa1dfc77d..f05d0e43db9e 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -4093,8 +4093,7 @@ static bool system_going_down(void)
|| system_state == SYSTEM_RESTART;
}
-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
- bool irq_context)
+void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason)
{
struct super_block *sb = sbi->sb;
bool shutdown = reason == STOP_CP_REASON_SHUTDOWN;
@@ -4106,10 +4105,12 @@ void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
if (!f2fs_hw_is_readonly(sbi)) {
save_stop_reason(sbi, reason);
- if (irq_context && !shutdown)
- schedule_work(&sbi->s_error_work);
- else
- f2fs_record_stop_reason(sbi);
+ /*
+ * always create an asynchronous task to record stop_reason
+ * in order to avoid potential deadlock when running into
+ * f2fs_record_stop_reason() synchronously.
+ */
+ schedule_work(&sbi->s_error_work);
}
/*
diff --git a/fs/fscache/volume.c b/fs/fscache/volume.c
index cb75c07b5281..ced14ac78cc1 100644
--- a/fs/fscache/volume.c
+++ b/fs/fscache/volume.c
@@ -322,8 +322,7 @@ void fscache_create_volume(struct fscache_volume *volume, bool wait)
}
return;
no_wait:
- clear_bit_unlock(FSCACHE_VOLUME_CREATING, &volume->flags);
- wake_up_bit(&volume->flags, FSCACHE_VOLUME_CREATING);
+ clear_and_wake_up_bit(FSCACHE_VOLUME_CREATING, &volume->flags);
}
/*
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 685e3ef9e900..2c0908a30210 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -274,7 +274,7 @@ static void gfs2_glock_remove_from_lru(struct gfs2_glock *gl)
* Enqueue the glock on the work queue. Passes one glock reference on to the
* work queue.
*/
-static void __gfs2_glock_queue_work(struct gfs2_glock *gl, unsigned long delay) {
+static void gfs2_glock_queue_work(struct gfs2_glock *gl, unsigned long delay) {
if (!queue_delayed_work(glock_workqueue, &gl->gl_work, delay)) {
/*
* We are holding the lockref spinlock, and the work was still
@@ -287,12 +287,6 @@ static void __gfs2_glock_queue_work(struct gfs2_glock *gl, unsigned long delay)
}
}
-static void gfs2_glock_queue_work(struct gfs2_glock *gl, unsigned long delay) {
- spin_lock(&gl->gl_lockref.lock);
- __gfs2_glock_queue_work(gl, delay);
- spin_unlock(&gl->gl_lockref.lock);
-}
-
static void __gfs2_glock_put(struct gfs2_glock *gl)
{
struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
@@ -311,14 +305,6 @@ static void __gfs2_glock_put(struct gfs2_glock *gl)
sdp->sd_lockstruct.ls_ops->lm_put_lock(gl);
}
-/*
- * Cause the glock to be put in work queue context.
- */
-void gfs2_glock_queue_put(struct gfs2_glock *gl)
-{
- gfs2_glock_queue_work(gl, 0);
-}
-
/**
* gfs2_glock_put() - Decrement reference count on glock
* @gl: The glock to put
@@ -333,6 +319,23 @@ void gfs2_glock_put(struct gfs2_glock *gl)
__gfs2_glock_put(gl);
}
+/*
+ * gfs2_glock_put_async - Decrement reference count without sleeping
+ * @gl: The glock to put
+ *
+ * Decrement the reference count on glock immediately unless it is the last
+ * reference. Defer putting the last reference to work queue context.
+ */
+void gfs2_glock_put_async(struct gfs2_glock *gl)
+{
+ if (lockref_put_or_lock(&gl->gl_lockref))
+ return;
+
+ GLOCK_BUG_ON(gl, gl->gl_lockref.count != 1);
+ gfs2_glock_queue_work(gl, 0);
+ spin_unlock(&gl->gl_lockref.lock);
+}
+
/**
* may_grant - check if it's ok to grant a new lock
* @gl: The glock
@@ -806,7 +809,7 @@ __acquires(&gl->gl_lockref.lock)
*/
clear_bit(GLF_LOCK, &gl->gl_flags);
clear_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
- __gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);
+ gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD);
return;
} else {
clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags);
@@ -836,7 +839,7 @@ __acquires(&gl->gl_lockref.lock)
/* Complete the operation now. */
finish_xmote(gl, target);
- __gfs2_glock_queue_work(gl, 0);
+ gfs2_glock_queue_work(gl, 0);
}
/**
@@ -883,7 +886,7 @@ __acquires(&gl->gl_lockref.lock)
clear_bit(GLF_LOCK, &gl->gl_flags);
smp_mb__after_atomic();
gl->gl_lockref.count++;
- __gfs2_glock_queue_work(gl, 0);
+ gfs2_glock_queue_work(gl, 0);
return;
out_unlock:
@@ -1020,14 +1023,15 @@ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl)
&gl->gl_delete, 0);
}
-static bool gfs2_queue_verify_evict(struct gfs2_glock *gl)
+bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later)
{
struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+ unsigned long delay;
- if (test_and_set_bit(GLF_VERIFY_EVICT, &gl->gl_flags))
+ if (test_and_set_bit(GLF_VERIFY_DELETE, &gl->gl_flags))
return false;
- return queue_delayed_work(sdp->sd_delete_wq,
- &gl->gl_delete, 5 * HZ);
+ delay = later ? 5 * HZ : 0;
+ return queue_delayed_work(sdp->sd_delete_wq, &gl->gl_delete, delay);
}
static void delete_work_func(struct work_struct *work)
@@ -1059,19 +1063,19 @@ static void delete_work_func(struct work_struct *work)
if (gfs2_try_evict(gl)) {
if (test_bit(SDF_KILL, &sdp->sd_flags))
goto out;
- if (gfs2_queue_verify_evict(gl))
+ if (gfs2_queue_verify_delete(gl, true))
return;
}
goto out;
}
- if (test_and_clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags)) {
+ if (test_and_clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags)) {
inode = gfs2_lookup_by_inum(sdp, no_addr, gl->gl_no_formal_ino,
GFS2_BLKST_UNLINKED);
if (IS_ERR(inode)) {
if (PTR_ERR(inode) == -EAGAIN &&
!test_bit(SDF_KILL, &sdp->sd_flags) &&
- gfs2_queue_verify_evict(gl))
+ gfs2_queue_verify_delete(gl, true))
return;
} else {
d_prune_aliases(inode);
@@ -1115,12 +1119,12 @@ static void glock_work_func(struct work_struct *work)
drop_refs--;
if (gl->gl_name.ln_type != LM_TYPE_INODE)
delay = 0;
- __gfs2_glock_queue_work(gl, delay);
+ gfs2_glock_queue_work(gl, delay);
}
/*
* Drop the remaining glock references manually here. (Mind that
- * __gfs2_glock_queue_work depends on the lockref spinlock begin held
+ * gfs2_glock_queue_work depends on the lockref spinlock being held
* here as well.)
*/
gl->gl_lockref.count -= drop_refs;
@@ -1607,7 +1611,7 @@ int gfs2_glock_nq(struct gfs2_holder *gh)
test_and_clear_bit(GLF_FROZEN, &gl->gl_flags))) {
set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
gl->gl_lockref.count++;
- __gfs2_glock_queue_work(gl, 0);
+ gfs2_glock_queue_work(gl, 0);
}
run_queue(gl, 1);
spin_unlock(&gl->gl_lockref.lock);
@@ -1672,7 +1676,7 @@ static void __gfs2_glock_dq(struct gfs2_holder *gh)
!test_bit(GLF_DEMOTE, &gl->gl_flags) &&
gl->gl_name.ln_type == LM_TYPE_INODE)
delay = gl->gl_hold_time;
- __gfs2_glock_queue_work(gl, delay);
+ gfs2_glock_queue_work(gl, delay);
}
}
@@ -1896,7 +1900,7 @@ void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state)
delay = gl->gl_hold_time;
}
handle_callback(gl, state, delay, true);
- __gfs2_glock_queue_work(gl, delay);
+ gfs2_glock_queue_work(gl, delay);
spin_unlock(&gl->gl_lockref.lock);
}
@@ -1956,7 +1960,7 @@ void gfs2_glock_complete(struct gfs2_glock *gl, int ret)
gl->gl_lockref.count++;
set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
- __gfs2_glock_queue_work(gl, 0);
+ gfs2_glock_queue_work(gl, 0);
spin_unlock(&gl->gl_lockref.lock);
}
@@ -2009,15 +2013,14 @@ __acquires(&lru_lock)
atomic_inc(&lru_count);
continue;
}
- if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {
+ if (test_bit(GLF_LOCK, &gl->gl_flags)) {
spin_unlock(&gl->gl_lockref.lock);
goto add_back_to_lru;
}
gl->gl_lockref.count++;
if (demote_ok(gl))
handle_callback(gl, LM_ST_UNLOCKED, 0, false);
- WARN_ON(!test_and_clear_bit(GLF_LOCK, &gl->gl_flags));
- __gfs2_glock_queue_work(gl, 0);
+ gfs2_glock_queue_work(gl, 0);
spin_unlock(&gl->gl_lockref.lock);
cond_resched_lock(&lru_lock);
}
@@ -2117,7 +2120,7 @@ static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp)
void gfs2_cancel_delete_work(struct gfs2_glock *gl)
{
clear_bit(GLF_TRY_TO_EVICT, &gl->gl_flags);
- clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags);
+ clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags);
if (cancel_delayed_work(&gl->gl_delete))
gfs2_glock_put(gl);
}
@@ -2155,7 +2158,7 @@ static void thaw_glock(struct gfs2_glock *gl)
spin_lock(&gl->gl_lockref.lock);
set_bit(GLF_REPLY_PENDING, &gl->gl_flags);
- __gfs2_glock_queue_work(gl, 0);
+ gfs2_glock_queue_work(gl, 0);
spin_unlock(&gl->gl_lockref.lock);
}
@@ -2174,7 +2177,7 @@ static void clear_glock(struct gfs2_glock *gl)
gl->gl_lockref.count++;
if (gl->gl_state != LM_ST_UNLOCKED)
handle_callback(gl, LM_ST_UNLOCKED, 0, false);
- __gfs2_glock_queue_work(gl, 0);
+ gfs2_glock_queue_work(gl, 0);
}
spin_unlock(&gl->gl_lockref.lock);
}
@@ -2354,7 +2357,7 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl)
*p++ = 'N';
if (test_bit(GLF_TRY_TO_EVICT, gflags))
*p++ = 'e';
- if (test_bit(GLF_VERIFY_EVICT, gflags))
+ if (test_bit(GLF_VERIFY_DELETE, gflags))
*p++ = 'E';
*p = 0;
return buf;
@@ -2533,8 +2536,7 @@ static void gfs2_glock_iter_next(struct gfs2_glock_iter *gi, loff_t n)
if (gl) {
if (n == 0)
return;
- if (!lockref_put_not_zero(&gl->gl_lockref))
- gfs2_glock_queue_put(gl);
+ gfs2_glock_put_async(gl);
}
for (;;) {
gl = rhashtable_walk_next(&gi->hti);
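[Editorial note, not part of the patch] gfs2_glock_put_async() above builds on lockref_put_or_lock(): drop the reference immediately unless it is the last one, and in that case queue the final put to the work queue without sleeping. Stripped of the glock details, the shape is roughly this (userspace sketch; a plain mutex and counter stand in for the lockref):

#include <assert.h>
#include <pthread.h>

struct obj {
	pthread_mutex_t lock;
	int refcount;
};

/* Pretend "queue work" hook; the real code defers the last put to a workqueue. */
static void queue_final_put(struct obj *o)
{
	(void)o;
}

/*
 * Fast path: just decrement if we are not the last holder.
 * Slow path: hand the last reference to async context while still
 * holding the lock, and never sleep (the point of the _async variant).
 */
static void put_async(struct obj *o)
{
	pthread_mutex_lock(&o->lock);
	if (o->refcount > 1) {
		o->refcount--;
		pthread_mutex_unlock(&o->lock);
		return;
	}
	assert(o->refcount == 1);
	queue_final_put(o);		/* the queued work drops the final ref */
	pthread_mutex_unlock(&o->lock);
}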
diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
index f7ee9ca948ee..aae9fabbb76c 100644
--- a/fs/gfs2/glock.h
+++ b/fs/gfs2/glock.h
@@ -186,7 +186,7 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
int create, struct gfs2_glock **glp);
struct gfs2_glock *gfs2_glock_hold(struct gfs2_glock *gl);
void gfs2_glock_put(struct gfs2_glock *gl);
-void gfs2_glock_queue_put(struct gfs2_glock *gl);
+void gfs2_glock_put_async(struct gfs2_glock *gl);
void __gfs2_holder_init(struct gfs2_glock *gl, unsigned int state,
u16 flags, struct gfs2_holder *gh,
@@ -259,6 +259,7 @@ static inline int gfs2_glock_nq_init(struct gfs2_glock *gl,
void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state);
void gfs2_glock_complete(struct gfs2_glock *gl, int ret);
bool gfs2_queue_try_to_evict(struct gfs2_glock *gl);
+bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later);
void gfs2_cancel_delete_work(struct gfs2_glock *gl);
void gfs2_flush_delete_work(struct gfs2_sbd *sdp);
void gfs2_gl_hash_clear(struct gfs2_sbd *sdp);
diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
index 60abd7050c99..853fad2bc485 100644
--- a/fs/gfs2/incore.h
+++ b/fs/gfs2/incore.h
@@ -331,7 +331,7 @@ enum {
GLF_BLOCKING = 15,
GLF_FREEING = 16, /* Wait for glock to be freed */
GLF_TRY_TO_EVICT = 17, /* iopen glocks only */
- GLF_VERIFY_EVICT = 18, /* iopen glocks only */
+ GLF_VERIFY_DELETE = 18, /* iopen glocks only */
};
struct gfs2_glock {
diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
index 767549066066..2be5551241b3 100644
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -790,7 +790,7 @@ void gfs2_glock_remove_revoke(struct gfs2_glock *gl)
{
if (atomic_dec_return(&gl->gl_revokes) == 0) {
clear_bit(GLF_LFLUSH, &gl->gl_flags);
- gfs2_glock_queue_put(gl);
+ gfs2_glock_put_async(gl);
}
}
diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
index 396d0f4a259d..4a5e2732d1da 100644
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -1879,7 +1879,7 @@ static void try_rgrp_unlink(struct gfs2_rgrpd *rgd, u64 *last_unlinked, u64 skip
*/
ip = gl->gl_object;
- if (ip || !gfs2_queue_try_to_evict(gl))
+ if (ip || !gfs2_queue_verify_delete(gl, false))
gfs2_glock_put(gl);
else
found++;
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index 1200cb805999..09285dc782cf 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -1053,8 +1053,8 @@ static int gfs2_drop_inode(struct inode *inode)
struct gfs2_glock *gl = ip->i_iopen_gh.gh_gl;
gfs2_glock_hold(gl);
- if (!gfs2_queue_try_to_evict(gl))
- gfs2_glock_queue_put(gl);
+ if (!gfs2_queue_verify_delete(gl, true))
+ gfs2_glock_put_async(gl);
return 0;
}
@@ -1270,7 +1270,7 @@ static int gfs2_dinode_dealloc(struct gfs2_inode *ip)
static void gfs2_glock_put_eventually(struct gfs2_glock *gl)
{
if (current->flags & PF_MEMALLOC)
- gfs2_glock_queue_put(gl);
+ gfs2_glock_put_async(gl);
else
gfs2_glock_put(gl);
}
diff --git a/fs/gfs2/util.c b/fs/gfs2/util.c
index b65261e0cae3..268ff47b0396 100644
--- a/fs/gfs2/util.c
+++ b/fs/gfs2/util.c
@@ -255,7 +255,7 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp)
gfs2_glock_nq(&sdp->sd_live_gh);
}
- gfs2_glock_queue_put(live_gl); /* drop extra reference we acquired */
+ gfs2_glock_put(live_gl); /* drop extra reference we acquired */
clear_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags);
/*
diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
index 583c196ecd52..1473b04fc0f3 100644
--- a/fs/hfsplus/hfsplus_fs.h
+++ b/fs/hfsplus/hfsplus_fs.h
@@ -156,6 +156,7 @@ struct hfsplus_sb_info {
/* Runtime variables */
u32 blockoffset;
+ u32 min_io_size;
sector_t part_start;
sector_t sect_count;
int fs_shift;
@@ -306,7 +307,7 @@ struct hfsplus_readdir_data {
*/
static inline unsigned short hfsplus_min_io_size(struct super_block *sb)
{
- return max_t(unsigned short, bdev_logical_block_size(sb->s_bdev),
+ return max_t(unsigned short, HFSPLUS_SB(sb)->min_io_size,
HFSPLUS_SECTOR_SIZE);
}
diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c
index 0b791adf02e5..a51a58db3fef 100644
--- a/fs/hfsplus/wrapper.c
+++ b/fs/hfsplus/wrapper.c
@@ -171,6 +171,8 @@ int hfsplus_read_wrapper(struct super_block *sb)
if (!blocksize)
goto out;
+ sbi->min_io_size = blocksize;
+
if (hfsplus_get_last_session(sb, &part_start, &part_size))
goto out;
diff --git a/fs/inode.c b/fs/inode.c
index 9cafde77e2b0..030e07b169c2 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -593,6 +593,7 @@ void dump_mapping(const struct address_space *mapping)
struct hlist_node *dentry_first;
struct dentry *dentry_ptr;
struct dentry dentry;
+ char fname[64] = {};
unsigned long ino;
/*
@@ -628,11 +629,14 @@ void dump_mapping(const struct address_space *mapping)
return;
}
+ if (strncpy_from_kernel_nofault(fname, dentry.d_name.name, 63) < 0)
+ strscpy(fname, "<invalid>", 63);
/*
- * if dentry is corrupted, the %pd handler may still crash,
- * but it's unlikely that we reach here with a corrupt mapping
+ * Even if strncpy_from_kernel_nofault() succeeded,
+ * the fname could be unreliable
*/
- pr_warn("aops:%ps ino:%lx dentry name:\"%pd\"\n", a_ops, ino, &dentry);
+ pr_warn("aops:%ps ino:%lx dentry name(?):\"%s\"\n",
+ a_ops, ino, fname);
}
void clear_inode(struct inode *inode)
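An aside on the dump_mapping() hunk above: it replaces the direct "%pd" print of a possibly-stale dentry with a bounded, fault-tolerant copy of the name into a local buffer. Below is a minimal sketch of that copy-then-print pattern using the same kernel helpers the hunk relies on (strncpy_from_kernel_nofault, strscpy, pr_warn); the wrapper name print_name_safely() and its argument are invented for illustration and are not part of the patch.

/*
 * Illustrative sketch (not part of the patch): copy a possibly invalid
 * kernel string into a bounded local buffer before printing, the same
 * pattern dump_mapping() adopts above. print_name_safely() is invented.
 */
#include <linux/printk.h>
#include <linux/string.h>
#include <linux/uaccess.h>

static void print_name_safely(const char *maybe_bad_name)
{
	char fname[64] = {};

	/* A negative return means the source memory faulted. */
	if (strncpy_from_kernel_nofault(fname, maybe_bad_name,
					sizeof(fname) - 1) < 0)
		strscpy(fname, "<invalid>", sizeof(fname));

	/* Even on success the contents may be stale, hence the "(?)". */
	pr_warn("name(?):\"%s\"\n", fname);
}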
diff --git a/fs/jffs2/erase.c b/fs/jffs2/erase.c
index acd32f05b519..ef3a1e1b6cb0 100644
--- a/fs/jffs2/erase.c
+++ b/fs/jffs2/erase.c
@@ -338,10 +338,9 @@ static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_erasebl
} while(--retlen);
mtd_unpoint(c->mtd, jeb->offset, c->sector_size);
if (retlen) {
- pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08tx\n",
- *wordebuf,
- jeb->offset +
- c->sector_size-retlen * sizeof(*wordebuf));
+ *bad_offset = jeb->offset + c->sector_size - retlen * sizeof(*wordebuf);
+ pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08x\n",
+ *wordebuf, *bad_offset);
return -EIO;
}
return 0;
diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
index 49e064c1f551..7252941bf165 100644
--- a/fs/jfs/xattr.c
+++ b/fs/jfs/xattr.c
@@ -559,7 +559,7 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
size_check:
if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
- int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size);
+ int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
printk(KERN_ERR "ea_get: invalid extended attribute\n");
print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 8bceaac2205c..a92b234ae087 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -11,7 +11,7 @@
#include <linux/nfs_page.h>
#include <linux/wait_bit.h>
-#define NFS_SB_MASK (SB_RDONLY|SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
+#define NFS_SB_MASK (SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
extern const struct export_operations nfs_export_ops;
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 299ea2b86df6..4b12e45f5753 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2528,12 +2528,14 @@ static void nfs4_open_release(void *calldata)
struct nfs4_opendata *data = calldata;
struct nfs4_state *state = NULL;
+ /* In case of error, no cleanup! */
+ if (data->rpc_status != 0 || !data->rpc_done) {
+ nfs_release_seqid(data->o_arg.seqid);
+ goto out_free;
+ }
/* If this request hasn't been cancelled, do nothing */
if (!data->cancelled)
goto out_free;
- /* In case of error, no cleanup! */
- if (data->rpc_status != 0 || !data->rpc_done)
- goto out_free;
/* In case we need an open_confirm, no cleanup! */
if (data->o_res.rflags & NFS4_OPEN_RESULT_CONFIRM)
goto out_free;
diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
index b7da17e53007..d4d3ec58047e 100644
--- a/fs/nfsd/export.c
+++ b/fs/nfsd/export.c
@@ -40,15 +40,24 @@
#define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS)
#define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)
-static void expkey_put(struct kref *ref)
+static void expkey_put_work(struct work_struct *work)
{
- struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
+ struct svc_expkey *key =
+ container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work);
if (test_bit(CACHE_VALID, &key->h.flags) &&
!test_bit(CACHE_NEGATIVE, &key->h.flags))
path_put(&key->ek_path);
auth_domain_put(key->ek_client);
- kfree_rcu(key, ek_rcu);
+ kfree(key);
+}
+
+static void expkey_put(struct kref *ref)
+{
+ struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
+
+ INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work);
+ queue_rcu_work(system_wq, &key->ek_rcu_work);
}
static int expkey_upcall(struct cache_detail *cd, struct cache_head *h)
@@ -351,16 +360,26 @@ static void export_stats_destroy(struct export_stats *stats)
EXP_STATS_COUNTERS_NUM);
}
-static void svc_export_put(struct kref *ref)
+static void svc_export_put_work(struct work_struct *work)
{
- struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
+ struct svc_export *exp =
+ container_of(to_rcu_work(work), struct svc_export, ex_rcu_work);
+
path_put(&exp->ex_path);
auth_domain_put(exp->ex_client);
nfsd4_fslocs_free(&exp->ex_fslocs);
export_stats_destroy(exp->ex_stats);
kfree(exp->ex_stats);
kfree(exp->ex_uuid);
- kfree_rcu(exp, ex_rcu);
+ kfree(exp);
+}
+
+static void svc_export_put(struct kref *ref)
+{
+ struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
+
+ INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work);
+ queue_rcu_work(system_wq, &exp->ex_rcu_work);
}
static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
@@ -1366,9 +1385,12 @@ static int e_show(struct seq_file *m, void *p)
return 0;
}
- exp_get(exp);
+ if (!cache_get_rcu(&exp->h))
+ return 0;
+
if (cache_check(cd, &exp->h, NULL))
return 0;
+
exp_put(exp);
return svc_export_show(m, cd, cp);
}
diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
index ca9dc230ae3d..9d895570ceba 100644
--- a/fs/nfsd/export.h
+++ b/fs/nfsd/export.h
@@ -75,7 +75,7 @@ struct svc_export {
u32 ex_layout_types;
struct nfsd4_deviceid_map *ex_devid_map;
struct cache_detail *cd;
- struct rcu_head ex_rcu;
+ struct rcu_work ex_rcu_work;
unsigned long ex_xprtsec_modes;
struct export_stats *ex_stats;
};
@@ -92,7 +92,7 @@ struct svc_expkey {
u32 ek_fsid[6];
struct path ek_path;
- struct rcu_head ek_rcu;
+ struct rcu_work ek_rcu_work;
};
#define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
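The nfsd export hunks above convert the kref release path from kfree_rcu() to a queued rcu_work, so the final put (path_put(), auth_domain_put()) runs in process context after an RCU grace period rather than inside an RCU callback. A minimal sketch of that conversion pattern, using the same primitives (INIT_RCU_WORK, queue_rcu_work, to_rcu_work); "struct foo" and its fields are placeholders, not names from the patch.

/*
 * Sketch of the kfree_rcu() -> queue_rcu_work() conversion used by the
 * nfsd export hunks above; "struct foo" and its fields are placeholders.
 */
#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct foo {
	struct kref ref;
	struct rcu_work rcu_work;	/* replaces a plain struct rcu_head */
};

static void foo_release_work(struct work_struct *work)
{
	struct foo *f = container_of(to_rcu_work(work), struct foo, rcu_work);

	/* Process context after a grace period: sleeping cleanup is allowed. */
	kfree(f);
}

/* Final-put callback, invoked via kref_put(&f->ref, foo_release). */
static void foo_release(struct kref *ref)
{
	struct foo *f = container_of(ref, struct foo, ref);

	INIT_RCU_WORK(&f->rcu_work, foo_release_work);
	queue_rcu_work(system_wq, &f->rcu_work);
}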
diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 4039ffcf90ba..49a49529c6b8 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -297,17 +297,17 @@ static int decode_cb_compound4res(struct xdr_stream *xdr,
u32 length;
__be32 *p;
- p = xdr_inline_decode(xdr, 4 + 4);
+ p = xdr_inline_decode(xdr, XDR_UNIT);
if (unlikely(p == NULL))
goto out_overflow;
- hdr->status = be32_to_cpup(p++);
+ hdr->status = be32_to_cpup(p);
/* Ignore the tag */
- length = be32_to_cpup(p++);
- p = xdr_inline_decode(xdr, length + 4);
- if (unlikely(p == NULL))
+ if (xdr_stream_decode_u32(xdr, &length) < 0)
+ goto out_overflow;
+ if (xdr_inline_decode(xdr, length) == NULL)
+ goto out_overflow;
+ if (xdr_stream_decode_u32(xdr, &hdr->nops) < 0)
goto out_overflow;
- p += XDR_QUADLEN(length);
- hdr->nops = be32_to_cpup(p);
return 0;
out_overflow:
return -EIO;
@@ -1379,6 +1379,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
ses = c->cn_session;
}
spin_unlock(&clp->cl_lock);
+ if (!c)
+ return;
err = setup_callback_client(clp, &conn, ses);
if (err) {
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index d64f792964e1..b3eca08f15b1 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1285,7 +1285,7 @@ static void nfsd4_stop_copy(struct nfsd4_copy *copy)
nfs4_put_copy(copy);
}
-static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
+static struct nfsd4_copy *nfsd4_unhash_copy(struct nfs4_client *clp)
{
struct nfsd4_copy *copy = NULL;
@@ -1294,6 +1294,9 @@ static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
copy = list_first_entry(&clp->async_copies, struct nfsd4_copy,
copies);
refcount_inc(&copy->refcount);
+ copy->cp_clp = NULL;
+ if (!list_empty(&copy->copies))
+ list_del_init(&copy->copies);
}
spin_unlock(&clp->async_lock);
return copy;
@@ -1303,7 +1306,7 @@ void nfsd4_shutdown_copy(struct nfs4_client *clp)
{
struct nfsd4_copy *copy;
- while ((copy = nfsd4_get_copy(clp)) != NULL)
+ while ((copy = nfsd4_unhash_copy(clp)) != NULL)
nfsd4_stop_copy(copy);
}
#ifdef CONFIG_NFSD_V4_2_INTER_SSC
diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
index 4395577825a7..892fecce18b8 100644
--- a/fs/nfsd/nfs4recover.c
+++ b/fs/nfsd/nfs4recover.c
@@ -658,7 +658,8 @@ nfs4_reset_recoverydir(char *recdir)
return status;
status = -ENOTDIR;
if (d_is_dir(path.dentry)) {
- strcpy(user_recovery_dirname, recdir);
+ strscpy(user_recovery_dirname, recdir,
+ sizeof(user_recovery_dirname));
status = 0;
}
path_put(&path);
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 901fc68636cd..a25cb2ff1b0b 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1625,6 +1625,14 @@ static void release_open_stateid(struct nfs4_ol_stateid *stp)
free_ol_stateid_reaplist(&reaplist);
}
+static bool nfs4_openowner_unhashed(struct nfs4_openowner *oo)
+{
+ lockdep_assert_held(&oo->oo_owner.so_client->cl_lock);
+
+ return list_empty(&oo->oo_owner.so_strhash) &&
+ list_empty(&oo->oo_perclient);
+}
+
static void unhash_openowner_locked(struct nfs4_openowner *oo)
{
struct nfs4_client *clp = oo->oo_owner.so_client;
@@ -4632,6 +4640,12 @@ init_open_stateid(struct nfs4_file *fp, struct nfsd4_open *open)
spin_lock(&oo->oo_owner.so_client->cl_lock);
spin_lock(&fp->fi_lock);
+ if (nfs4_openowner_unhashed(oo)) {
+ mutex_unlock(&stp->st_mutex);
+ stp = NULL;
+ goto out_unlock;
+ }
+
retstp = nfsd4_find_existing_open(fp, open);
if (retstp)
goto out_unlock;
@@ -5751,6 +5765,11 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
if (!stp) {
stp = init_open_stateid(fp, open);
+ if (!stp) {
+ status = nfserr_jukebox;
+ goto out;
+ }
+
if (!open->op_stp)
new_stp = true;
}
diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
index b5d8f238fce4..9cc4ebb53504 100644
--- a/fs/notify/fsnotify.c
+++ b/fs/notify/fsnotify.c
@@ -310,16 +310,19 @@ static int fsnotify_handle_event(struct fsnotify_group *group, __u32 mask,
if (!inode_mark)
return 0;
- if (mask & FS_EVENT_ON_CHILD) {
- /*
- * Some events can be sent on both parent dir and child marks
- * (e.g. FS_ATTRIB). If both parent dir and child are
- * watching, report the event once to parent dir with name (if
- * interested) and once to child without name (if interested).
- * The child watcher is expecting an event without a file name
- * and without the FS_EVENT_ON_CHILD flag.
- */
- mask &= ~FS_EVENT_ON_CHILD;
+ /*
+ * Some events can be sent on both parent dir and child marks (e.g.
+ * FS_ATTRIB). If both parent dir and child are watching, report the
+ * event once to parent dir with name (if interested) and once to child
+ * without name (if interested).
+ *
+ * In any case, regardless of whether the parent is watching or not, the
+ * child watcher is expecting an event without the FS_EVENT_ON_CHILD
+ * flag. The file name is expected if and only if this is a directory
+ * event.
+ */
+ mask &= ~FS_EVENT_ON_CHILD;
+ if (!(mask & ALL_FSNOTIFY_DIRENT_EVENTS)) {
dir = NULL;
name = NULL;
}
diff --git a/fs/ocfs2/aops.h b/fs/ocfs2/aops.h
index 3a520117fa59..a9ce7947228c 100644
--- a/fs/ocfs2/aops.h
+++ b/fs/ocfs2/aops.h
@@ -70,6 +70,8 @@ enum ocfs2_iocb_lock_bits {
OCFS2_IOCB_NUM_LOCKS
};
+#define ocfs2_iocb_init_rw_locked(iocb) \
+ (iocb->private = NULL)
#define ocfs2_iocb_clear_rw_locked(iocb) \
clear_bit(OCFS2_IOCB_RW_LOCK, (unsigned long *)&iocb->private)
#define ocfs2_iocb_rw_locked_level(iocb) \
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index e4acb795d119..0585f281ff62 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2397,6 +2397,8 @@ static ssize_t ocfs2_file_write_iter(struct kiocb *iocb,
} else
inode_lock(inode);
+ ocfs2_iocb_init_rw_locked(iocb);
+
/*
* Concurrent O_DIRECT writes are allowed with
* mount_option "coherency=buffered".
@@ -2543,6 +2545,8 @@ static ssize_t ocfs2_file_read_iter(struct kiocb *iocb,
if (!direct_io && nowait)
return -EOPNOTSUPP;
+ ocfs2_iocb_init_rw_locked(iocb);
+
/*
* buffered reads protect themselves in ->read_folio(). O_DIRECT reads
* need locks to protect pending reads from racing with truncate.
diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
index fca29dba7b14..9c42a30317d5 100644
--- a/fs/overlayfs/inode.c
+++ b/fs/overlayfs/inode.c
@@ -741,8 +741,13 @@ static int ovl_security_fileattr(const struct path *realpath, struct fileattr *f
struct file *file;
unsigned int cmd;
int err;
+ unsigned int flags;
+
+ flags = O_RDONLY;
+ if (force_o_largefile())
+ flags |= O_LARGEFILE;
- file = dentry_open(realpath, O_RDONLY, current_cred());
+ file = dentry_open(realpath, flags, current_cred());
if (IS_ERR(file))
return PTR_ERR(file);
diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
index 89e0d60d35b6..0bf3ffcd072f 100644
--- a/fs/overlayfs/util.c
+++ b/fs/overlayfs/util.c
@@ -171,6 +171,9 @@ void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
bool ovl_dentry_weird(struct dentry *dentry)
{
+ if (!d_can_lookup(dentry) && !d_is_file(dentry) && !d_is_symlink(dentry))
+ return true;
+
return dentry->d_flags & (DCACHE_NEED_AUTOMOUNT |
DCACHE_MANAGE_TRANSIT |
DCACHE_OP_HASH |
diff --git a/fs/proc/array.c b/fs/proc/array.c
index 37b8061d84bb..34a47fb0c57f 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -477,13 +477,13 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
int permitted;
struct mm_struct *mm;
unsigned long long start_time;
- unsigned long cmin_flt = 0, cmaj_flt = 0;
- unsigned long min_flt = 0, maj_flt = 0;
- u64 cutime, cstime, utime, stime;
- u64 cgtime, gtime;
+ unsigned long cmin_flt, cmaj_flt, min_flt, maj_flt;
+ u64 cutime, cstime, cgtime, utime, stime, gtime;
unsigned long rsslim = 0;
unsigned long flags;
int exit_code = task->exit_code;
+ struct signal_struct *sig = task->signal;
+ unsigned int seq = 1;
state = *get_task_state(task);
vsize = eip = esp = 0;
@@ -511,12 +511,8 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
sigemptyset(&sigign);
sigemptyset(&sigcatch);
- cutime = cstime = 0;
- cgtime = gtime = 0;
if (lock_task_sighand(task, &flags)) {
- struct signal_struct *sig = task->signal;
-
if (sig->tty) {
struct pid *pgrp = tty_get_pgrp(sig->tty);
tty_pgrp = pid_nr_ns(pgrp, ns);
@@ -527,26 +523,9 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
num_threads = get_nr_threads(task);
collect_sigign_sigcatch(task, &sigign, &sigcatch);
- cmin_flt = sig->cmin_flt;
- cmaj_flt = sig->cmaj_flt;
- cutime = sig->cutime;
- cstime = sig->cstime;
- cgtime = sig->cgtime;
rsslim = READ_ONCE(sig->rlim[RLIMIT_RSS].rlim_cur);
- /* add up live thread stats at the group level */
if (whole) {
- struct task_struct *t = task;
- do {
- min_flt += t->min_flt;
- maj_flt += t->maj_flt;
- gtime += task_gtime(t);
- } while_each_thread(task, t);
-
- min_flt += sig->min_flt;
- maj_flt += sig->maj_flt;
- gtime += sig->gtime;
-
if (sig->flags & (SIGNAL_GROUP_EXIT | SIGNAL_STOP_STOPPED))
exit_code = sig->group_exit_code;
}
@@ -561,6 +540,34 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
if (permitted && (!whole || num_threads < 2))
wchan = !task_is_running(task);
+ do {
+ seq++; /* 2 on the 1st/lockless path, otherwise odd */
+ flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);
+
+ cmin_flt = sig->cmin_flt;
+ cmaj_flt = sig->cmaj_flt;
+ cutime = sig->cutime;
+ cstime = sig->cstime;
+ cgtime = sig->cgtime;
+
+ if (whole) {
+ struct task_struct *t;
+
+ min_flt = sig->min_flt;
+ maj_flt = sig->maj_flt;
+ gtime = sig->gtime;
+
+ rcu_read_lock();
+ __for_each_thread(sig, t) {
+ min_flt += t->min_flt;
+ maj_flt += t->maj_flt;
+ gtime += task_gtime(t);
+ }
+ rcu_read_unlock();
+ }
+ } while (need_seqretry(&sig->stats_lock, seq));
+ done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
+
if (whole) {
thread_group_cputime_adjusted(task, &utime, &stime);
} else {
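The do_task_stat() rework above replaces per-thread accumulation under the sighand lock with a lockless-first retry loop over sig->stats_lock. Below is a minimal sketch of that read_seqbegin_or_lock_irqsave() pattern, assuming kernel context; "struct stats" and its fields are placeholders, not names from the patch.

/*
 * Sketch of the lockless-first seqlock read loop used by the
 * do_task_stat() hunk above; "struct stats" and its fields are
 * placeholders.
 */
#include <linux/seqlock.h>
#include <linux/types.h>

struct stats {
	seqlock_t lock;
	u64 a, b;
};

static void read_stats(struct stats *s, u64 *a, u64 *b)
{
	unsigned long flags;
	int seq = 1;

	do {
		seq++;	/* 2 on the first (lockless) pass, odd once locked */
		flags = read_seqbegin_or_lock_irqsave(&s->lock, &seq);

		*a = s->a;
		*b = s->b;
	} while (need_seqretry(&s->lock, seq));
	done_seqretry_irqrestore(&s->lock, seq, flags);
}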
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 7e4fa9c68c1d..1127457d0fcb 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -493,13 +493,13 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
* the previous entry, search for a matching entry.
*/
if (!m || start < m->addr || start >= m->addr + m->size) {
- struct kcore_list *iter;
+ struct kcore_list *pos;
m = NULL;
- list_for_each_entry(iter, &kclist_head, list) {
- if (start >= iter->addr &&
- start < iter->addr + iter->size) {
- m = iter;
+ list_for_each_entry(pos, &kclist_head, list) {
+ if (start >= pos->addr &&
+ start < pos->addr + pos->size) {
+ m = pos;
break;
}
}
@@ -599,6 +599,7 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
ret = -EFAULT;
goto out;
}
+ ret = 0;
/*
* We know the bounce buffer is safe to copy from, so
* use _copy_to_iter() directly.
diff --git a/fs/proc/softirqs.c b/fs/proc/softirqs.c
index f4616083faef..04bb29721419 100644
--- a/fs/proc/softirqs.c
+++ b/fs/proc/softirqs.c
@@ -20,7 +20,7 @@ static int show_softirqs(struct seq_file *p, void *v)
for (i = 0; i < NR_SOFTIRQS; i++) {
seq_printf(p, "%12s:", softirq_to_name[i]);
for_each_possible_cpu(j)
- seq_printf(p, " %10u", kstat_softirqs_cpu(i, j));
+ seq_put_decimal_ull_width(p, " ", kstat_softirqs_cpu(i, j), 10);
seq_putc(p, '\n');
}
return 0;
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 23dbde1de252..67562c78e57d 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -690,6 +690,8 @@ int dquot_writeback_dquots(struct super_block *sb, int type)
WARN_ON_ONCE(!rwsem_is_locked(&sb->s_umount));
+ flush_delayed_work("a_release_work);
+
for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
if (type != -1 && cnt != type)
continue;
diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
index 0ff2491c311d..9c0ef4195b58 100644
--- a/fs/smb/client/cached_dir.c
+++ b/fs/smb/client/cached_dir.c
@@ -17,6 +17,11 @@ static void free_cached_dir(struct cached_fid *cfid);
static void smb2_close_cached_fid(struct kref *ref);
static void cfids_laundromat_worker(struct work_struct *work);
+struct cached_dir_dentry {
+ struct list_head entry;
+ struct dentry *dentry;
+};
+
static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
const char *path,
bool lookup_only,
@@ -59,6 +64,16 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
list_add(&cfid->entry, &cfids->entries);
cfid->on_list = true;
kref_get(&cfid->refcount);
+ /*
+ * Set @cfid->has_lease to true during construction so that the lease
+ * reference can be put in cached_dir_lease_break() due to a potential
+ * lease break right after the request is sent or while @cfid is still
+ * being cached, or if a reconnection is triggered during construction.
+ * Concurrent processes won't be able to use it yet due to @cfid->time
+ * being zero.
+ */
+ cfid->has_lease = true;
+
spin_unlock(&cfids->cfid_list_lock);
return cfid;
}
@@ -176,12 +191,12 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
return -ENOENT;
}
/*
- * Return cached fid if it has a lease. Otherwise, it is either a new
- * entry or laundromat worker removed it from @cfids->entries. Caller
- * will put last reference if the latter.
+ * Return cached fid if it is valid (has a lease and has a time).
+ * Otherwise, it is either a new entry or laundromat worker removed it
+ * from @cfids->entries. Caller will put last reference if the latter.
*/
spin_lock(&cfids->cfid_list_lock);
- if (cfid->has_lease) {
+ if (cfid->has_lease && cfid->time) {
spin_unlock(&cfids->cfid_list_lock);
*ret_cfid = cfid;
kfree(utf16_path);
@@ -212,6 +227,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
}
}
cfid->dentry = dentry;
+ cfid->tcon = tcon;
/*
* We do not hold the lock for the open because in case
@@ -267,15 +283,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
smb2_set_related(&rqst[1]);
- /*
- * Set @cfid->has_lease to true before sending out compounded request so
- * its lease reference can be put in cached_dir_lease_break() due to a
- * potential lease break right after the request is sent or while @cfid
- * is still being cached. Concurrent processes won't be to use it yet
- * due to @cfid->time being zero.
- */
- cfid->has_lease = true;
-
if (retries) {
smb2_set_replay(server, &rqst[0]);
smb2_set_replay(server, &rqst[1]);
@@ -292,7 +299,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
}
goto oshr_free;
}
- cfid->tcon = tcon;
cfid->is_open = true;
spin_lock(&cfids->cfid_list_lock);
@@ -347,6 +353,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
SMB2_query_info_free(&rqst[1]);
free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+out:
if (rc) {
spin_lock(&cfids->cfid_list_lock);
if (cfid->on_list) {
@@ -358,23 +365,14 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
/*
* We are guaranteed to have two references at this
* point. One for the caller and one for a potential
- * lease. Release the Lease-ref so that the directory
- * will be closed when the caller closes the cached
- * handle.
+ * lease. Release one here, and the second below.
*/
cfid->has_lease = false;
- spin_unlock(&cfids->cfid_list_lock);
kref_put(&cfid->refcount, smb2_close_cached_fid);
- goto out;
}
spin_unlock(&cfids->cfid_list_lock);
- }
-out:
- if (rc) {
- if (cfid->is_open)
- SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
- cfid->fid.volatile_fid);
- free_cached_dir(cfid);
+
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
} else {
*ret_cfid = cfid;
atomic_inc(&tcon->num_remote_opens);
@@ -401,7 +399,7 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon,
spin_lock(&cfids->cfid_list_lock);
list_for_each_entry(cfid, &cfids->entries, entry) {
if (dentry && cfid->dentry == dentry) {
- cifs_dbg(FYI, "found a cached root file handle by dentry\n");
+ cifs_dbg(FYI, "found a cached file handle by dentry\n");
kref_get(&cfid->refcount);
*ret_cfid = cfid;
spin_unlock(&cfids->cfid_list_lock);
@@ -477,7 +475,10 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
struct cifs_tcon *tcon;
struct tcon_link *tlink;
struct cached_fids *cfids;
+ struct cached_dir_dentry *tmp_list, *q;
+ LIST_HEAD(entry);
+ spin_lock(&cifs_sb->tlink_tree_lock);
for (node = rb_first(root); node; node = rb_next(node)) {
tlink = rb_entry(node, struct tcon_link, tl_rbnode);
tcon = tlink_tcon(tlink);
@@ -486,11 +487,30 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
cfids = tcon->cfids;
if (cfids == NULL)
continue;
+ spin_lock(&cfids->cfid_list_lock);
list_for_each_entry(cfid, &cfids->entries, entry) {
- dput(cfid->dentry);
+ tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC);
+ if (tmp_list == NULL)
+ break;
+ spin_lock(&cfid->fid_lock);
+ tmp_list->dentry = cfid->dentry;
cfid->dentry = NULL;
+ spin_unlock(&cfid->fid_lock);
+
+ list_add_tail(&tmp_list->entry, &entry);
}
+ spin_unlock(&cfids->cfid_list_lock);
+ }
+ spin_unlock(&cifs_sb->tlink_tree_lock);
+
+ list_for_each_entry_safe(tmp_list, q, &entry, entry) {
+ list_del(&tmp_list->entry);
+ dput(tmp_list->dentry);
+ kfree(tmp_list);
}
+
+ /* Flush any pending work that will drop dentries */
+ flush_workqueue(cfid_put_wq);
}
/*
@@ -501,50 +521,71 @@ void invalidate_all_cached_dirs(struct cifs_tcon *tcon)
{
struct cached_fids *cfids = tcon->cfids;
struct cached_fid *cfid, *q;
- LIST_HEAD(entry);
if (cfids == NULL)
return;
+ /*
+ * Mark all the cfids as closed, and move them to the cfids->dying list.
+ * They'll be cleaned up later by cfids_invalidation_worker. Take
+ * a reference to each cfid during this process.
+ */
spin_lock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
- list_move(&cfid->entry, &entry);
+ list_move(&cfid->entry, &cfids->dying);
cfids->num_entries--;
cfid->is_open = false;
cfid->on_list = false;
- /* To prevent race with smb2_cached_lease_break() */
- kref_get(&cfid->refcount);
- }
- spin_unlock(&cfids->cfid_list_lock);
-
- list_for_each_entry_safe(cfid, q, &entry, entry) {
- list_del(&cfid->entry);
- cancel_work_sync(&cfid->lease_break);
if (cfid->has_lease) {
/*
- * We lease was never cancelled from the server so we
- * need to drop the reference.
+ * The lease was never cancelled from the server,
+ * so steal that reference.
*/
- spin_lock(&cfids->cfid_list_lock);
cfid->has_lease = false;
- spin_unlock(&cfids->cfid_list_lock);
- kref_put(&cfid->refcount, smb2_close_cached_fid);
- }
- /* Drop the extra reference opened above*/
- kref_put(&cfid->refcount, smb2_close_cached_fid);
+ } else
+ kref_get(&cfid->refcount);
}
+ /*
+ * Queue dropping of the dentries once locks have been dropped
+ */
+ if (!list_empty(&cfids->dying))
+ queue_work(cfid_put_wq, &cfids->invalidation_work);
+ spin_unlock(&cfids->cfid_list_lock);
}
static void
-smb2_cached_lease_break(struct work_struct *work)
+cached_dir_offload_close(struct work_struct *work)
{
struct cached_fid *cfid = container_of(work,
- struct cached_fid, lease_break);
+ struct cached_fid, close_work);
+ struct cifs_tcon *tcon = cfid->tcon;
+
+ WARN_ON(cfid->on_list);
- spin_lock(&cfid->cfids->cfid_list_lock);
- cfid->has_lease = false;
- spin_unlock(&cfid->cfids->cfid_list_lock);
kref_put(&cfid->refcount, smb2_close_cached_fid);
+ cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cached_close);
+}
+
+/*
+ * Release the cached directory's dentry, and then queue work to drop the
+ * cached directory itself (closing it on the server if needed).
+ *
+ * Must be called with a reference to the cached_fid and a reference to the
+ * tcon.
+ */
+static void cached_dir_put_work(struct work_struct *work)
+{
+ struct cached_fid *cfid = container_of(work, struct cached_fid,
+ put_work);
+ struct dentry *dentry;
+
+ spin_lock(&cfid->fid_lock);
+ dentry = cfid->dentry;
+ cfid->dentry = NULL;
+ spin_unlock(&cfid->fid_lock);
+
+ dput(dentry);
+ queue_work(serverclose_wq, &cfid->close_work);
}
int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
@@ -561,6 +602,7 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
!memcmp(lease_key,
cfid->fid.lease_key,
SMB2_LEASE_KEY_SIZE)) {
+ cfid->has_lease = false;
cfid->time = 0;
/*
* We found a lease remove it from the list
@@ -570,8 +612,10 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
cfid->on_list = false;
cfids->num_entries--;
- queue_work(cifsiod_wq,
- &cfid->lease_break);
+ ++tcon->tc_count;
+ trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
+ netfs_trace_tcon_ref_get_cached_lease_break);
+ queue_work(cfid_put_wq, &cfid->put_work);
spin_unlock(&cfids->cfid_list_lock);
return true;
}
@@ -593,7 +637,8 @@ static struct cached_fid *init_cached_dir(const char *path)
return NULL;
}
- INIT_WORK(&cfid->lease_break, smb2_cached_lease_break);
+ INIT_WORK(&cfid->close_work, cached_dir_offload_close);
+ INIT_WORK(&cfid->put_work, cached_dir_put_work);
INIT_LIST_HEAD(&cfid->entry);
INIT_LIST_HEAD(&cfid->dirents.entries);
mutex_init(&cfid->dirents.de_mutex);
@@ -606,6 +651,9 @@ static void free_cached_dir(struct cached_fid *cfid)
{
struct cached_dirent *dirent, *q;
+ WARN_ON(work_pending(&cfid->close_work));
+ WARN_ON(work_pending(&cfid->put_work));
+
dput(cfid->dentry);
cfid->dentry = NULL;
@@ -623,10 +671,30 @@ static void free_cached_dir(struct cached_fid *cfid)
kfree(cfid);
}
+static void cfids_invalidation_worker(struct work_struct *work)
+{
+ struct cached_fids *cfids = container_of(work, struct cached_fids,
+ invalidation_work);
+ struct cached_fid *cfid, *q;
+ LIST_HEAD(entry);
+
+ spin_lock(&cfids->cfid_list_lock);
+ /* move cfids->dying to the local list */
+ list_cut_before(&entry, &cfids->dying, &cfids->dying);
+ spin_unlock(&cfids->cfid_list_lock);
+
+ list_for_each_entry_safe(cfid, q, &entry, entry) {
+ list_del(&cfid->entry);
+ /* Drop the ref-count acquired in invalidate_all_cached_dirs */
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+ }
+}
+
static void cfids_laundromat_worker(struct work_struct *work)
{
struct cached_fids *cfids;
struct cached_fid *cfid, *q;
+ struct dentry *dentry;
LIST_HEAD(entry);
cfids = container_of(work, struct cached_fids, laundromat_work.work);
@@ -638,33 +706,42 @@ static void cfids_laundromat_worker(struct work_struct *work)
cfid->on_list = false;
list_move(&cfid->entry, &entry);
cfids->num_entries--;
- /* To prevent race with smb2_cached_lease_break() */
- kref_get(&cfid->refcount);
+ if (cfid->has_lease) {
+ /*
+ * Our lease has not yet been cancelled from the
+ * server. Steal that reference.
+ */
+ cfid->has_lease = false;
+ } else
+ kref_get(&cfid->refcount);
}
}
spin_unlock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &entry, entry) {
list_del(&cfid->entry);
- /*
- * Cancel and wait for the work to finish in case we are racing
- * with it.
- */
- cancel_work_sync(&cfid->lease_break);
- if (cfid->has_lease) {
+
+ spin_lock(&cfid->fid_lock);
+ dentry = cfid->dentry;
+ cfid->dentry = NULL;
+ spin_unlock(&cfid->fid_lock);
+
+ dput(dentry);
+ if (cfid->is_open) {
+ spin_lock(&cifs_tcp_ses_lock);
+ ++cfid->tcon->tc_count;
+ trace_smb3_tcon_ref(cfid->tcon->debug_id, cfid->tcon->tc_count,
+ netfs_trace_tcon_ref_get_cached_laundromat);
+ spin_unlock(&cifs_tcp_ses_lock);
+ queue_work(serverclose_wq, &cfid->close_work);
+ } else
/*
- * Our lease has not yet been cancelled from the server
- * so we need to drop the reference.
+ * Drop the ref-count from above, either the lease-ref (if there
+ * was one) or the extra one acquired.
*/
- spin_lock(&cfids->cfid_list_lock);
- cfid->has_lease = false;
- spin_unlock(&cfids->cfid_list_lock);
kref_put(&cfid->refcount, smb2_close_cached_fid);
- }
- /* Drop the extra reference opened above */
- kref_put(&cfid->refcount, smb2_close_cached_fid);
}
- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
+ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
dir_cache_timeout * HZ);
}
@@ -677,9 +754,11 @@ struct cached_fids *init_cached_dirs(void)
return NULL;
spin_lock_init(&cfids->cfid_list_lock);
INIT_LIST_HEAD(&cfids->entries);
+ INIT_LIST_HEAD(&cfids->dying);
+ INIT_WORK(&cfids->invalidation_work, cfids_invalidation_worker);
INIT_DELAYED_WORK(&cfids->laundromat_work, cfids_laundromat_worker);
- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
+ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
dir_cache_timeout * HZ);
return cfids;
@@ -698,6 +777,7 @@ void free_cached_dirs(struct cached_fids *cfids)
return;
cancel_delayed_work_sync(&cfids->laundromat_work);
+ cancel_work_sync(&cfids->invalidation_work);
spin_lock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
@@ -705,6 +785,11 @@ void free_cached_dirs(struct cached_fids *cfids)
cfid->is_open = false;
list_move(&cfid->entry, &entry);
}
+ list_for_each_entry_safe(cfid, q, &cfids->dying, entry) {
+ cfid->on_list = false;
+ cfid->is_open = false;
+ list_move(&cfid->entry, &entry);
+ }
spin_unlock(&cfids->cfid_list_lock);
list_for_each_entry_safe(cfid, q, &entry, entry) {
diff --git a/fs/smb/client/cached_dir.h b/fs/smb/client/cached_dir.h
index 81ba0fd5cc16..1dfe79d947a6 100644
--- a/fs/smb/client/cached_dir.h
+++ b/fs/smb/client/cached_dir.h
@@ -44,7 +44,8 @@ struct cached_fid {
spinlock_t fid_lock;
struct cifs_tcon *tcon;
struct dentry *dentry;
- struct work_struct lease_break;
+ struct work_struct put_work;
+ struct work_struct close_work;
struct smb2_file_all_info file_all_info;
struct cached_dirents dirents;
};
@@ -53,10 +54,13 @@ struct cached_fid {
struct cached_fids {
/* Must be held when:
* - accessing the cfids->entries list
+ * - accessing the cfids->dying list
*/
spinlock_t cfid_list_lock;
int num_entries;
struct list_head entries;
+ struct list_head dying;
+ struct work_struct invalidation_work;
struct delayed_work laundromat_work;
};
diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
index 2d9f8bdb6d4e..6ed0f2548232 100644
--- a/fs/smb/client/cifsfs.c
+++ b/fs/smb/client/cifsfs.c
@@ -156,6 +156,7 @@ struct workqueue_struct *fileinfo_put_wq;
struct workqueue_struct *cifsoplockd_wq;
struct workqueue_struct *deferredclose_wq;
struct workqueue_struct *serverclose_wq;
+struct workqueue_struct *cfid_put_wq;
__u32 cifs_lock_secret;
/*
@@ -1899,9 +1900,16 @@ init_cifs(void)
goto out_destroy_deferredclose_wq;
}
+ cfid_put_wq = alloc_workqueue("cfid_put_wq",
+ WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+ if (!cfid_put_wq) {
+ rc = -ENOMEM;
+ goto out_destroy_serverclose_wq;
+ }
+
rc = cifs_init_inodecache();
if (rc)
- goto out_destroy_serverclose_wq;
+ goto out_destroy_cfid_put_wq;
rc = init_mids();
if (rc)
@@ -1963,6 +1971,8 @@ init_cifs(void)
destroy_mids();
out_destroy_inodecache:
cifs_destroy_inodecache();
+out_destroy_cfid_put_wq:
+ destroy_workqueue(cfid_put_wq);
out_destroy_serverclose_wq:
destroy_workqueue(serverclose_wq);
out_destroy_deferredclose_wq:
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 111540eff66e..6b57b167a49d 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -592,6 +592,7 @@ struct smb_version_operations {
/* Check for STATUS_NETWORK_NAME_DELETED */
bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv);
int (*parse_reparse_point)(struct cifs_sb_info *cifs_sb,
+ const char *full_path,
struct kvec *rsp_iov,
struct cifs_open_info_data *data);
int (*create_reparse_symlink)(const unsigned int xid,
@@ -2022,7 +2023,7 @@ require use of the stronger protocol */
* cifsInodeInfo->lock_sem cifsInodeInfo->llist cifs_init_once
* ->can_cache_brlcks
* cifsInodeInfo->deferred_lock cifsInodeInfo->deferred_closes cifsInodeInfo_alloc
- * cached_fid->fid_mutex cifs_tcon->crfid tcon_info_alloc
+ * cached_fids->cfid_list_lock cifs_tcon->cfids->entries init_cached_dirs
* cifsFileInfo->fh_mutex cifsFileInfo cifs_new_fileinfo
* cifsFileInfo->file_info_lock cifsFileInfo->count cifs_new_fileinfo
* ->invalidHandle initiate_cifs_search
@@ -2111,6 +2112,7 @@ extern struct workqueue_struct *fileinfo_put_wq;
extern struct workqueue_struct *cifsoplockd_wq;
extern struct workqueue_struct *deferredclose_wq;
extern struct workqueue_struct *serverclose_wq;
+extern struct workqueue_struct *cfid_put_wq;
extern __u32 cifs_lock_secret;
extern mempool_t *cifs_sm_req_poolp;
diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
index fbc358c09da3..fa7901ad3b80 100644
--- a/fs/smb/client/cifsproto.h
+++ b/fs/smb/client/cifsproto.h
@@ -679,6 +679,7 @@ char *extract_hostname(const char *unc);
char *extract_sharename(const char *unc);
int parse_reparse_point(struct reparse_data_buffer *buf,
u32 plen, struct cifs_sb_info *cifs_sb,
+ const char *full_path,
bool unicode, struct cifs_open_info_data *data);
int cifs_sfu_make_node(unsigned int xid, struct inode *inode,
struct dentry *dentry, struct cifs_tcon *tcon,
diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
index 1df0a6edcc21..7b850c40b2f3 100644
--- a/fs/smb/client/connect.c
+++ b/fs/smb/client/connect.c
@@ -1908,11 +1908,35 @@ static int match_session(struct cifs_ses *ses, struct smb3_fs_context *ctx)
CIFS_MAX_USERNAME_LEN))
return 0;
if ((ctx->username && strlen(ctx->username) != 0) &&
- ses->password != NULL &&
- strncmp(ses->password,
- ctx->password ? ctx->password : "",
- CIFS_MAX_PASSWORD_LEN))
- return 0;
+ ses->password != NULL) {
+
+ /* New mount can only share sessions with an existing mount if:
+ * 1. Both password and password2 match, or
+ * 2. password2 of the old mount matches password of the new mount
+ * and password of the old mount matches password2 of the new
+ * mount
+ */
+ if (ses->password2 != NULL && ctx->password2 != NULL) {
+ if (!((strncmp(ses->password, ctx->password ?
+ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0 &&
+ strncmp(ses->password2, ctx->password2,
+ CIFS_MAX_PASSWORD_LEN) == 0) ||
+ (strncmp(ses->password, ctx->password2,
+ CIFS_MAX_PASSWORD_LEN) == 0 &&
+ strncmp(ses->password2, ctx->password ?
+ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0)))
+ return 0;
+
+ } else if ((ses->password2 == NULL && ctx->password2 != NULL) ||
+ (ses->password2 != NULL && ctx->password2 == NULL)) {
+ return 0;
+
+ } else {
+ if (strncmp(ses->password, ctx->password ?
+ ctx->password : "", CIFS_MAX_PASSWORD_LEN))
+ return 0;
+ }
+ }
}
if (strcmp(ctx->local_nls->charset, ses->local_nls->charset))
@@ -2256,6 +2280,7 @@ struct cifs_ses *
cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
{
int rc = 0;
+ int retries = 0;
unsigned int xid;
struct cifs_ses *ses;
struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
@@ -2274,6 +2299,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
cifs_dbg(FYI, "Session needs reconnect\n");
mutex_lock(&ses->session_mutex);
+
+retry_old_session:
rc = cifs_negotiate_protocol(xid, ses, server);
if (rc) {
mutex_unlock(&ses->session_mutex);
@@ -2286,6 +2313,13 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
rc = cifs_setup_session(xid, ses, server,
ctx->local_nls);
if (rc) {
+ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
+ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
+ retries++;
+ cifs_dbg(FYI, "Session reconnect failed, retrying with alternate password\n");
+ swap(ses->password, ses->password2);
+ goto retry_old_session;
+ }
mutex_unlock(&ses->session_mutex);
/* problem -- put our reference */
cifs_put_smb_ses(ses);
@@ -2361,6 +2395,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
ses->chans_need_reconnect = 1;
spin_unlock(&ses->chan_lock);
+retry_new_session:
mutex_lock(&ses->session_mutex);
rc = cifs_negotiate_protocol(xid, ses, server);
if (!rc)
@@ -2373,8 +2408,16 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
sizeof(ses->smb3signingkey));
spin_unlock(&ses->chan_lock);
- if (rc)
- goto get_ses_fail;
+ if (rc) {
+ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
+ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
+ retries++;
+ cifs_dbg(FYI, "Session setup failed, retrying with alternate password\n");
+ swap(ses->password, ses->password2);
+ goto retry_new_session;
+ } else
+ goto get_ses_fail;
+ }
/*
* success, put it on the list and add it as first channel
@@ -2558,7 +2601,7 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
if (ses->server->dialect >= SMB20_PROT_ID &&
(ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING))
- nohandlecache = ctx->nohandlecache;
+ nohandlecache = ctx->nohandlecache || !dir_cache_timeout;
else
nohandlecache = true;
tcon = tcon_info_alloc(!nohandlecache, netfs_trace_tcon_ref_new);
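The match_session() hunk above spells out when two mounts may share a session once alternate passwords (password2) are in play: either both password pairs match position-for-position, or they match crosswise (old password2 against new password and old password against new password2). A small userspace sketch of that predicate follows; the function names, parameters, and MAX_PW are invented for illustration (CIFS_MAX_PASSWORD_LEN is the kernel's actual bound).

/*
 * Userspace sketch of the session-sharing rule in the match_session()
 * comment above; names and MAX_PW are invented, not kernel code.
 */
#include <stdbool.h>
#include <string.h>

#define MAX_PW 512	/* stand-in for CIFS_MAX_PASSWORD_LEN */

static bool pw_eq(const char *a, const char *b)
{
	return strncmp(a ? a : "", b ? b : "", MAX_PW) == 0;
}

/* Existing mount: (pw, pw2); new mount: (npw, npw2). */
static bool passwords_allow_sharing(const char *pw, const char *pw2,
				    const char *npw, const char *npw2)
{
	if (pw2 && npw2)	/* both mounts carry an alternate password */
		return (pw_eq(pw, npw) && pw_eq(pw2, npw2)) ||
		       (pw_eq(pw, npw2) && pw_eq(pw2, npw));
	if (pw2 || npw2)	/* only one mount carries an alternate password */
		return false;
	return pw_eq(pw, npw);
}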
diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
index 8d7484400fe8..4e77ba191ef8 100644
--- a/fs/smb/client/fs_context.c
+++ b/fs/smb/client/fs_context.c
@@ -888,12 +888,37 @@ do { \
cifs_sb->ctx->field = NULL; \
} while (0)
+int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
+{
+ if (ses->password &&
+ cifs_sb->ctx->password &&
+ strcmp(ses->password, cifs_sb->ctx->password)) {
+ kfree_sensitive(cifs_sb->ctx->password);
+ cifs_sb->ctx->password = kstrdup(ses->password, GFP_KERNEL);
+ if (!cifs_sb->ctx->password)
+ return -ENOMEM;
+ }
+ if (ses->password2 &&
+ cifs_sb->ctx->password2 &&
+ strcmp(ses->password2, cifs_sb->ctx->password2)) {
+ kfree_sensitive(cifs_sb->ctx->password2);
+ cifs_sb->ctx->password2 = kstrdup(ses->password2, GFP_KERNEL);
+ if (!cifs_sb->ctx->password2) {
+ kfree_sensitive(cifs_sb->ctx->password);
+ cifs_sb->ctx->password = NULL;
+ return -ENOMEM;
+ }
+ }
+ return 0;
+}
+
static int smb3_reconfigure(struct fs_context *fc)
{
struct smb3_fs_context *ctx = smb3_fc2context(fc);
struct dentry *root = fc->root;
struct cifs_sb_info *cifs_sb = CIFS_SB(root->d_sb);
struct cifs_ses *ses = cifs_sb_master_tcon(cifs_sb)->ses;
+ char *new_password = NULL, *new_password2 = NULL;
bool need_recon = false;
int rc;
@@ -913,21 +938,63 @@ static int smb3_reconfigure(struct fs_context *fc)
STEAL_STRING(cifs_sb, ctx, UNC);
STEAL_STRING(cifs_sb, ctx, source);
STEAL_STRING(cifs_sb, ctx, username);
+
if (need_recon == false)
STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
else {
- kfree_sensitive(ses->password);
- ses->password = kstrdup(ctx->password, GFP_KERNEL);
- if (!ses->password)
- return -ENOMEM;
- kfree_sensitive(ses->password2);
- ses->password2 = kstrdup(ctx->password2, GFP_KERNEL);
- if (!ses->password2) {
- kfree_sensitive(ses->password);
- ses->password = NULL;
+ if (ctx->password) {
+ new_password = kstrdup(ctx->password, GFP_KERNEL);
+ if (!new_password)
+ return -ENOMEM;
+ } else
+ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
+ }
+
+ /*
+ * If a new password2 has been specified, then reset its value
+ * inside the ses struct
+ */
+ if (ctx->password2) {
+ new_password2 = kstrdup(ctx->password2, GFP_KERNEL);
+ if (!new_password2) {
+ kfree_sensitive(new_password);
return -ENOMEM;
}
+ } else
+ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password2);
+
+ /*
+ * we may update the passwords in the ses struct below. Make sure we do
+ * not race with smb2_reconnect
+ */
+ mutex_lock(&ses->session_mutex);
+
+ /*
+ * smb2_reconnect may swap password and password2 in case session setup
+ * failed. First get ctx passwords in sync with ses passwords. It should
+ * be okay to do this even if this function were to return an error at a
+ * later stage
+ */
+ rc = smb3_sync_session_ctx_passwords(cifs_sb, ses);
+ if (rc) {
+ mutex_unlock(&ses->session_mutex);
+ return rc;
}
+
+ /*
+ * now that allocations for passwords are done, commit them
+ */
+ if (new_password) {
+ kfree_sensitive(ses->password);
+ ses->password = new_password;
+ }
+ if (new_password2) {
+ kfree_sensitive(ses->password2);
+ ses->password2 = new_password2;
+ }
+
+ mutex_unlock(&ses->session_mutex);
+
STEAL_STRING(cifs_sb, ctx, domainname);
STEAL_STRING(cifs_sb, ctx, nodename);
STEAL_STRING(cifs_sb, ctx, iocharset);
diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h
index cf577ec0dd0a..bbd2063ab838 100644
--- a/fs/smb/client/fs_context.h
+++ b/fs/smb/client/fs_context.h
@@ -298,6 +298,7 @@ static inline struct smb3_fs_context *smb3_fc2context(const struct fs_context *f
}
extern int smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx);
+extern int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses);
extern void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);
/*
diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
index e7970cbeb861..0f73f0dc6deb 100644
--- a/fs/smb/client/inode.c
+++ b/fs/smb/client/inode.c
@@ -1054,6 +1054,7 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
rc = 0;
} else if (iov && server->ops->parse_reparse_point) {
rc = server->ops->parse_reparse_point(cifs_sb,
+ full_path,
iov, data);
}
break;
@@ -2412,13 +2413,10 @@ cifs_dentry_needs_reval(struct dentry *dentry)
return true;
if (!open_cached_dir_by_dentry(tcon, dentry->d_parent, &cfid)) {
- spin_lock(&cfid->fid_lock);
if (cfid->time && cifs_i->time > cfid->time) {
- spin_unlock(&cfid->fid_lock);
close_cached_dir(cfid);
return false;
}
- spin_unlock(&cfid->fid_lock);
close_cached_dir(cfid);
}
/*
diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
index 74abbdf5026c..f74d0a86f44a 100644
--- a/fs/smb/client/reparse.c
+++ b/fs/smb/client/reparse.c
@@ -35,6 +35,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
u16 len, plen;
int rc = 0;
+ if (strlen(symname) > REPARSE_SYM_PATH_MAX)
+ return -ENAMETOOLONG;
+
sym = kstrdup(symname, GFP_KERNEL);
if (!sym)
return -ENOMEM;
@@ -64,7 +67,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
if (rc < 0)
goto out;
- plen = 2 * UniStrnlen((wchar_t *)path, PATH_MAX);
+ plen = 2 * UniStrnlen((wchar_t *)path, REPARSE_SYM_PATH_MAX);
len = sizeof(*buf) + plen * 2;
buf = kzalloc(len, GFP_KERNEL);
if (!buf) {
@@ -532,9 +535,76 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
return 0;
}
+int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ bool unicode, bool relative,
+ const char *full_path,
+ struct cifs_sb_info *cifs_sb)
+{
+ char sep = CIFS_DIR_SEP(cifs_sb);
+ char *linux_target = NULL;
+ char *smb_target = NULL;
+ int levels;
+ int rc;
+ int i;
+
+ smb_target = cifs_strndup_from_utf16(buf, len, unicode, cifs_sb->local_nls);
+ if (!smb_target) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ if (smb_target[0] == sep && relative) {
+ /*
+ * This is a relative SMB symlink from the top of the share,
+ * which is the top level directory of the Linux mount point.
+ * Linux does not support such relative symlinks, so convert
+ * it to a symlink relative to the current directory.
+ * full_path is the SMB path to the symlink (from which the
+ * current directory is extracted) and smb_target is the SMB path
+ * the symlink points to, therefore full_path must always be on
+ * the SMB share.
+ */
+ int smb_target_len = strlen(smb_target)+1;
+ levels = 0;
+ for (i = 1; full_path[i]; i++) { /* i=1 to skip leading sep */
+ if (full_path[i] == sep)
+ levels++;
+ }
+ linux_target = kmalloc(levels*3 + smb_target_len, GFP_KERNEL);
+ if (!linux_target) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ for (i = 0; i < levels; i++) {
+ linux_target[i*3 + 0] = '.';
+ linux_target[i*3 + 1] = '.';
+ linux_target[i*3 + 2] = sep;
+ }
+ memcpy(linux_target + levels*3, smb_target+1, smb_target_len); /* +1 to skip leading sep */
+ } else {
+ linux_target = smb_target;
+ smb_target = NULL;
+ }
+
+ if (sep == '\\')
+ convert_delimiter(linux_target, '/');
+
+ rc = 0;
+ *target = linux_target;
+
+ cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, *target);
+
+out:
+ if (rc != 0)
+ kfree(linux_target);
+ kfree(smb_target);
+ return rc;
+}
+
static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
u32 plen, bool unicode,
struct cifs_sb_info *cifs_sb,
+ const char *full_path,
struct cifs_open_info_data *data)
{
unsigned int len;
@@ -549,20 +619,18 @@ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
return -EIO;
}
- data->symlink_target = cifs_strndup_from_utf16(sym->PathBuffer + offs,
- len, unicode,
- cifs_sb->local_nls);
- if (!data->symlink_target)
- return -ENOMEM;
-
- convert_delimiter(data->symlink_target, '/');
- cifs_dbg(FYI, "%s: target path: %s\n", __func__, data->symlink_target);
-
- return 0;
+ return smb2_parse_native_symlink(&data->symlink_target,
+ sym->PathBuffer + offs,
+ len,
+ unicode,
+ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
+ full_path,
+ cifs_sb);
}
int parse_reparse_point(struct reparse_data_buffer *buf,
u32 plen, struct cifs_sb_info *cifs_sb,
+ const char *full_path,
bool unicode, struct cifs_open_info_data *data)
{
struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
@@ -577,7 +645,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
case IO_REPARSE_TAG_SYMLINK:
return parse_reparse_symlink(
(struct reparse_symlink_data_buffer *)buf,
- plen, unicode, cifs_sb, data);
+ plen, unicode, cifs_sb, full_path, data);
case IO_REPARSE_TAG_LX_SYMLINK:
case IO_REPARSE_TAG_AF_UNIX:
case IO_REPARSE_TAG_LX_FIFO:
@@ -593,6 +661,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
}
int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+ const char *full_path,
struct kvec *rsp_iov,
struct cifs_open_info_data *data)
{
@@ -602,7 +671,7 @@ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
buf = (struct reparse_data_buffer *)((u8 *)io +
le32_to_cpu(io->OutputOffset));
- return parse_reparse_point(buf, plen, cifs_sb, true, data);
+ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
}
static void wsl_to_fattr(struct cifs_open_info_data *data,
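smb2_parse_native_symlink() above rewrites a share-root-relative SMB symlink target into one relative to the symlink's own directory by prefixing one "../" per directory level between the share root and the symlink. A userspace sketch of just that prefixing step; make_relative_target() is an invented name and sep stands in for CIFS_DIR_SEP(cifs_sb).

/*
 * Userspace sketch of the "../" prefixing done by
 * smb2_parse_native_symlink() above; make_relative_target() is an
 * invented name and sep stands in for CIFS_DIR_SEP(cifs_sb).
 */
#include <stdlib.h>
#include <string.h>

/* full_path: path of the symlink itself; target: share-root-relative target.
 * Both are expected to start with sep. Caller frees the returned string. */
static char *make_relative_target(const char *full_path, const char *target,
				  char sep)
{
	size_t target_len = strlen(target);	/* excludes the NUL */
	size_t levels = 0, i;
	char *out;

	for (i = 1; full_path[i]; i++)	/* i = 1 skips the leading sep */
		if (full_path[i] == sep)
			levels++;

	out = malloc(levels * 3 + target_len + 1);
	if (!out)
		return NULL;

	for (i = 0; i < levels; i++) {
		out[i * 3 + 0] = '.';
		out[i * 3 + 1] = '.';
		out[i * 3 + 2] = sep;
	}
	/* target + 1 drops the leading sep; the NUL is copied as well. */
	memcpy(out + levels * 3, target + 1, target_len);
	return out;
}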
diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h
index 158e7b7aae64..ff05b0e75c92 100644
--- a/fs/smb/client/reparse.h
+++ b/fs/smb/client/reparse.h
@@ -12,6 +12,8 @@
#include "fs_context.h"
#include "cifsglob.h"
+#define REPARSE_SYM_PATH_MAX 4060
+
/*
* Used only by cifs.ko to ignore reparse points from files when client or
* server doesn't support FSCTL_GET_REPARSE_POINT.
@@ -115,7 +117,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
int smb2_mknod_reparse(unsigned int xid, struct inode *inode,
struct dentry *dentry, struct cifs_tcon *tcon,
const char *full_path, umode_t mode, dev_t dev);
-int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, struct kvec *rsp_iov,
+int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+ const char *full_path,
+ struct kvec *rsp_iov,
struct cifs_open_info_data *data);
#endif /* _CIFS_REPARSE_H */
diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
index e3a195824b40..b0c0572f9d1f 100644
--- a/fs/smb/client/smb1ops.c
+++ b/fs/smb/client/smb1ops.c
@@ -994,17 +994,17 @@ static int cifs_query_symlink(const unsigned int xid,
}
static int cifs_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+ const char *full_path,
struct kvec *rsp_iov,
struct cifs_open_info_data *data)
{
struct reparse_data_buffer *buf;
TRANSACT_IOCTL_RSP *io = rsp_iov->iov_base;
- bool unicode = !!(io->hdr.Flags2 & SMBFLG2_UNICODE);
u32 plen = le16_to_cpu(io->ByteCount);
buf = (struct reparse_data_buffer *)((__u8 *)&io->hdr.Protocol +
le32_to_cpu(io->DataOffset));
- return parse_reparse_point(buf, plen, cifs_sb, unicode, data);
+ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
}
static bool
diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
index e0ee96d69d49..db9c807115c6 100644
--- a/fs/smb/client/smb2file.c
+++ b/fs/smb/client/smb2file.c
@@ -63,12 +63,12 @@ static struct smb2_symlink_err_rsp *symlink_data(const struct kvec *iov)
return sym;
}
-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path)
+int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov,
+ const char *full_path, char **path)
{
struct smb2_symlink_err_rsp *sym;
unsigned int sub_offs, sub_len;
unsigned int print_offs, print_len;
- char *s;
if (!cifs_sb || !iov || !iov->iov_base || !iov->iov_len || !path)
return -EINVAL;
@@ -86,15 +86,13 @@ int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec
iov->iov_len < SMB2_SYMLINK_STRUCT_SIZE + print_offs + print_len)
return -EINVAL;
- s = cifs_strndup_from_utf16((char *)sym->PathBuffer + sub_offs, sub_len, true,
- cifs_sb->local_nls);
- if (!s)
- return -ENOMEM;
- convert_delimiter(s, '/');
- cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, s);
-
- *path = s;
- return 0;
+ return smb2_parse_native_symlink(path,
+ (char *)sym->PathBuffer + sub_offs,
+ sub_len,
+ true,
+ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
+ full_path,
+ cifs_sb);
}
int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, void *buf)
@@ -126,6 +124,7 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32
goto out;
if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
rc = smb2_parse_symlink_response(oparms->cifs_sb, &err_iov,
+ oparms->path,
&data->symlink_target);
if (!rc) {
memset(smb2_data, 0, sizeof(*smb2_data));
diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
index daa841dfbadc..8ea476b1fe19 100644
--- a/fs/smb/client/smb2inode.c
+++ b/fs/smb/client/smb2inode.c
@@ -828,6 +828,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
static int parse_create_response(struct cifs_open_info_data *data,
struct cifs_sb_info *cifs_sb,
+ const char *full_path,
const struct kvec *iov)
{
struct smb2_create_rsp *rsp = iov->iov_base;
@@ -841,6 +842,7 @@ static int parse_create_response(struct cifs_open_info_data *data,
break;
case STATUS_STOPPED_ON_SYMLINK:
rc = smb2_parse_symlink_response(cifs_sb, iov,
+ full_path,
&data->symlink_target);
if (rc)
return rc;
@@ -930,14 +932,14 @@ int smb2_query_path_info(const unsigned int xid,
switch (rc) {
case 0:
- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
+ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
break;
case -EOPNOTSUPP:
/*
* BB TODO: When support for special files added to Samba
* re-verify this path.
*/
- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
+ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
if (rc || !data->reparse_point)
goto out;
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index ab6e79be2c15..6645f147d57c 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -4016,7 +4016,7 @@ map_oplock_to_lease(u8 oplock)
if (oplock == SMB2_OPLOCK_LEVEL_EXCLUSIVE)
return SMB2_LEASE_WRITE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE;
else if (oplock == SMB2_OPLOCK_LEVEL_II)
- return SMB2_LEASE_READ_CACHING_LE;
+ return SMB2_LEASE_READ_CACHING_LE | SMB2_LEASE_HANDLE_CACHING_LE;
else if (oplock == SMB2_OPLOCK_LEVEL_BATCH)
return SMB2_LEASE_HANDLE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE |
SMB2_LEASE_WRITE_CACHING_LE;
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index a86a3fbfb5a4..38b26468eb0c 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -1228,7 +1228,9 @@ SMB2_negotiate(const unsigned int xid,
* SMB3.0 supports only 1 cipher and doesn't have a encryption neg context
* Set the cipher type manually.
*/
- if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+ if ((server->dialect == SMB30_PROT_ID ||
+ server->dialect == SMB302_PROT_ID) &&
+ (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
index f6fafa997e99..613667b46c58 100644
--- a/fs/smb/client/smb2proto.h
+++ b/fs/smb/client/smb2proto.h
@@ -113,7 +113,14 @@ extern int smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
struct cifs_sb_info *cifs_sb,
const unsigned char *path, char *pbuf,
unsigned int *pbytes_read);
-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path);
+int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
+ bool unicode, bool relative,
+ const char *full_path,
+ struct cifs_sb_info *cifs_sb);
+int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb,
+ const struct kvec *iov,
+ const char *full_path,
+ char **path);
int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock,
void *buf);
extern int smb2_unlock_range(struct cifsFileInfo *cfile,
diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
index 604e52876cd2..563cb4d8edf0 100644
--- a/fs/smb/client/trace.h
+++ b/fs/smb/client/trace.h
@@ -27,6 +27,8 @@
EM(netfs_trace_tcon_ref_free_ipc, "FRE Ipc ") \
EM(netfs_trace_tcon_ref_free_ipc_fail, "FRE Ipc-F ") \
EM(netfs_trace_tcon_ref_free_reconnect_server, "FRE Reconn") \
+ EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \
+ EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \
EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \
EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \
EM(netfs_trace_tcon_ref_get_find, "GET Find ") \
@@ -35,6 +37,7 @@
EM(netfs_trace_tcon_ref_new, "NEW ") \
EM(netfs_trace_tcon_ref_new_ipc, "NEW Ipc ") \
EM(netfs_trace_tcon_ref_new_reconnect_server, "NEW Reconn") \
+ EM(netfs_trace_tcon_ref_put_cached_close, "PUT Ch-Cls") \
EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \
EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \
EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \
diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
index b6e0b71c281d..1450e007ac70 100644
--- a/fs/smb/server/server.c
+++ b/fs/smb/server/server.c
@@ -276,8 +276,12 @@ static void handle_ksmbd_work(struct work_struct *wk)
* disconnection. waitqueue_active is safe because it
* uses atomic operation for condition.
*/
+ atomic_inc(&conn->refcnt);
if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
wake_up(&conn->r_count_q);
+
+ if (atomic_dec_and_test(&conn->refcnt))
+ kfree(conn);
}
/**
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index b08fb28d16b5..3409488d39ba 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -777,10 +777,10 @@ static void init_constants_master(struct ubifs_info *c)
* necessary to report something for the 'statfs()' call.
*
* Subtract the LEB reserved for GC, the LEB which is reserved for
- * deletions, minimum LEBs for the index, and assume only one journal
- * head is available.
+ * deletions, minimum LEBs for the index, the LEBs which are reserved
+ * for each journal head.
*/
- tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt + 1;
+ tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt;
tmp64 *= (long long)c->leb_size - c->leb_overhead;
tmp64 = ubifs_reported_space(c, tmp64);
c->block_cnt = tmp64 >> UBIFS_BLOCK_SHIFT;
diff --git a/fs/ubifs/tnc_commit.c b/fs/ubifs/tnc_commit.c
index a55e04822d16..7c43e0ccf6d4 100644
--- a/fs/ubifs/tnc_commit.c
+++ b/fs/ubifs/tnc_commit.c
@@ -657,6 +657,8 @@ static int get_znodes_to_commit(struct ubifs_info *c)
znode->alt = 0;
cnext = find_next_dirty(znode);
if (!cnext) {
+ ubifs_assert(c, !znode->parent);
+ znode->cparent = NULL;
znode->cnext = c->cnext;
break;
}
diff --git a/fs/unicode/utf8-core.c b/fs/unicode/utf8-core.c
index 8395066341a4..0400824ef493 100644
--- a/fs/unicode/utf8-core.c
+++ b/fs/unicode/utf8-core.c
@@ -198,7 +198,7 @@ struct unicode_map *utf8_load(unsigned int version)
return um;
out_symbol_put:
- symbol_put(um->tables);
+ symbol_put(utf8_data_table);
out_free_um:
kfree(um);
return ERR_PTR(-EINVAL);
diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
index 424acdd4b0fc..50dd27b0f215 100644
--- a/fs/xfs/libxfs/xfs_sb.c
+++ b/fs/xfs/libxfs/xfs_sb.c
@@ -260,13 +260,6 @@ xfs_validate_sb_write(
* the kernel cannot support since we checked for unsupported bits in
* the read verifier, which means that memory is corrupt.
*/
- if (xfs_sb_has_compat_feature(sbp, XFS_SB_FEAT_COMPAT_UNKNOWN)) {
- xfs_warn(mp,
-"Corruption detected in superblock compatible features (0x%x)!",
- (sbp->sb_features_compat & XFS_SB_FEAT_COMPAT_UNKNOWN));
- return -EFSCORRUPTED;
- }
-
if (!xfs_is_readonly(mp) &&
xfs_sb_has_ro_compat_feature(sbp, XFS_SB_FEAT_RO_COMPAT_UNKNOWN)) {
xfs_alert(mp,
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index 9f9d3abad2cf..d11de0fa5c5f 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -2456,7 +2456,10 @@ xlog_recover_process_data(
ohead = (struct xlog_op_header *)dp;
dp += sizeof(*ohead);
- ASSERT(dp <= end);
+ if (dp > end) {
+ xfs_warn(log->l_mp, "%s: op header overrun", __func__);
+ return -EFSCORRUPTED;
+ }
/* errors will abort recovery */
error = xlog_recover_process_ophdr(log, rhash, rhead, ohead,
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 63029bc7c9dd..7e11ca6f86dc 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -139,14 +139,6 @@
* often happens at runtime)
*/
-#if defined(CONFIG_MEMORY_HOTPLUG)
-#define MEM_KEEP(sec) *(.mem##sec)
-#define MEM_DISCARD(sec)
-#else
-#define MEM_KEEP(sec)
-#define MEM_DISCARD(sec) *(.mem##sec)
-#endif
-
#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
#define KEEP_PATCHABLE KEEP(*(__patchable_function_entries))
#define PATCHABLE_DISCARDS
@@ -355,10 +347,9 @@
*(.data..decrypted) \
*(.ref.data) \
*(.data..shared_aligned) /* percpu related */ \
- MEM_KEEP(init.data*) \
- *(.data.unlikely) \
+ *(.data..unlikely) \
__start_once = .; \
- *(.data.once) \
+ *(.data..once) \
__end_once = .; \
STRUCT_ALIGN(); \
*(__tracepoints) \
@@ -519,7 +510,6 @@
/* __*init sections */ \
__init_rodata : AT(ADDR(__init_rodata) - LOAD_OFFSET) { \
*(.ref.rodata) \
- MEM_KEEP(init.rodata) \
} \
\
/* Built-in module parameters. */ \
@@ -570,8 +560,7 @@
*(.text.unknown .text.unknown.*) \
NOINSTR_TEXT \
*(.ref.text) \
- *(.text.asan.* .text.tsan.*) \
- MEM_KEEP(init.text*) \
+ *(.text.asan.* .text.tsan.*)
/* sched.text is aling to function alignment to secure we have same
@@ -678,7 +667,6 @@
#define INIT_DATA \
KEEP(*(SORT(___kentry+*))) \
*(.init.data .init.data.*) \
- MEM_DISCARD(init.data*) \
KERNEL_CTORS() \
MCOUNT_REC() \
*(.init.rodata .init.rodata.*) \
@@ -686,7 +674,6 @@
TRACE_SYSCALLS() \
KPROBE_BLACKLIST() \
ERROR_INJECT_WHITELIST() \
- MEM_DISCARD(init.rodata) \
CLK_OF_TABLES() \
RESERVEDMEM_OF_TABLES() \
TIMER_OF_TABLES() \
@@ -704,8 +691,7 @@
#define INIT_TEXT \
*(.init.text .init.text.*) \
- *(.text.startup) \
- MEM_DISCARD(init.text*)
+ *(.text.startup)
#define EXIT_DATA \
*(.exit.data .exit.data.*) \
diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index 6e950594215a..99ae7960a8d1 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -245,6 +245,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES BIT(6)
/* used to negotiate communicating link speeds in Mbps */
#define VIRTCHNL_VF_CAP_ADV_LINK_SPEED BIT(7)
+#define VIRTCHNL_VF_OFFLOAD_CRC BIT(10)
#define VIRTCHNL_VF_OFFLOAD_VLAN_V2 BIT(15)
#define VIRTCHNL_VF_OFFLOAD_VLAN BIT(16)
#define VIRTCHNL_VF_OFFLOAD_RX_POLLING BIT(17)
@@ -300,7 +301,13 @@ VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
/* VIRTCHNL_OP_CONFIG_RX_QUEUE
* VF sends this message to set up parameters for one RX queue.
* External data buffer contains one instance of virtchnl_rxq_info.
- * PF configures requested queue and returns a status code.
+ * PF configures requested queue and returns a status code. The
+ * crc_disable flag disables CRC stripping on the VF. Setting
+ * the crc_disable flag to 1 will disable CRC stripping for each
+ * queue in the VF where the flag is set. The VIRTCHNL_VF_OFFLOAD_CRC
+ * offload must have been set prior to sending this info or the PF
+ * will ignore the request. This flag should be set the same for
+ * all of the queues for a VF.
*/
/* Rx queue config info */
@@ -312,7 +319,7 @@ struct virtchnl_rxq_info {
u16 splithdr_enabled; /* deprecated with AVF 1.0 */
u32 databuffer_size;
u32 max_pkt_size;
- u8 pad0;
+ u8 crc_disable;
u8 rxdid;
u8 pad1[2];
u64 dma_ring_addr;
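As an illustration of the negotiation contract described in the comment above (a hedged sketch, not part of the diff; the helper name and the want_crc parameter are made up for the example): a VF would only set the new crc_disable field once the PF has granted VIRTCHNL_VF_OFFLOAD_CRC, and would use the same value for every queue.

static void vf_fill_rxq_crc(struct virtchnl_rxq_info *rxq,
			    u32 negotiated_caps, bool want_crc)
{
	/* crc_disable = 1 asks the PF not to strip the CRC for this queue;
	 * the PF ignores it unless the CRC offload was negotiated.
	 */
	rxq->crc_disable = 0;
	if ((negotiated_caps & VIRTCHNL_VF_OFFLOAD_CRC) && want_crc)
		rxq->crc_disable = 1;
}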
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a7b65d4ab616..ef35e9a9878c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1184,7 +1184,7 @@ static inline unsigned int queue_io_min(const struct request_queue *q)
return q->limits.io_min;
}
-static inline int bdev_io_min(struct block_device *bdev)
+static inline unsigned int bdev_io_min(struct block_device *bdev)
{
return queue_io_min(bdev_get_queue(bdev));
}
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 92919d52f7e1..cb8e97665eaa 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -319,12 +319,34 @@ struct bpf_func_state {
struct bpf_stack_state *stack;
};
-struct bpf_idx_pair {
- u32 prev_idx;
+#define MAX_CALL_FRAMES 8
+
+/* instruction history flags, used in bpf_jmp_history_entry.flags field */
+enum {
+ /* instruction references stack slot through PTR_TO_STACK register;
+ * we also store stack's frame number in lower 3 bits (MAX_CALL_FRAMES is 8)
+ * and accessed stack slot's index in next 6 bits (MAX_BPF_STACK is 512,
+ * 8 bytes per slot, so slot index (spi) is [0, 63])
+ */
+ INSN_F_FRAMENO_MASK = 0x7, /* 3 bits */
+
+ INSN_F_SPI_MASK = 0x3f, /* 6 bits */
+ INSN_F_SPI_SHIFT = 3, /* shifted 3 bits to the left */
+
+ INSN_F_STACK_ACCESS = BIT(9), /* we need 10 bits total */
+};
+
+static_assert(INSN_F_FRAMENO_MASK + 1 >= MAX_CALL_FRAMES);
+static_assert(INSN_F_SPI_MASK + 1 >= MAX_BPF_STACK / 8);
+
+struct bpf_jmp_history_entry {
u32 idx;
+ /* insn idx can't be bigger than 1 million */
+ u32 prev_idx : 22;
+ /* special flags, e.g., whether insn is doing register stack spill/load */
+ u32 flags : 10;
};
-#define MAX_CALL_FRAMES 8
/* Maximum number of register states that can exist at once */
#define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
struct bpf_verifier_state {
@@ -407,7 +429,7 @@ struct bpf_verifier_state {
* For most states jmp_history_cnt is [0-3].
* For loops can go up to ~40.
*/
- struct bpf_idx_pair *jmp_history;
+ struct bpf_jmp_history_entry *jmp_history;
u32 jmp_history_cnt;
u32 dfs_depth;
u32 callback_unroll_depth;
@@ -640,6 +662,7 @@ struct bpf_verifier_env {
int cur_stack;
} cfg;
struct backtrack_state bt;
+ struct bpf_jmp_history_entry *cur_hist_ent;
u32 pass_cnt; /* number of times do_check() was called */
u32 subprog_cnt;
/* number of instructions analyzed by the verifier */
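The 10-bit flags layout documented above (3 bits of frame number, 6 bits of stack slot index, plus INSN_F_STACK_ACCESS) is what the insn_stack_access_flags()/insn_stack_access_spi()/insn_stack_access_frameno() helpers in the kernel/bpf/verifier.c hunk further down pack and unpack. A stand-alone sketch of the encoding, compilable outside the kernel:

#include <assert.h>

#define INSN_F_FRAMENO_MASK	0x7
#define INSN_F_SPI_MASK		0x3f
#define INSN_F_SPI_SHIFT	3
#define INSN_F_STACK_ACCESS	(1u << 9)

static unsigned int encode_stack_access(unsigned int frameno, unsigned int spi)
{
	return INSN_F_STACK_ACCESS | (spi << INSN_F_SPI_SHIFT) | frameno;
}

int main(void)
{
	unsigned int flags = encode_stack_access(2, 37);

	assert((flags & INSN_F_FRAMENO_MASK) == 2);
	assert(((flags >> INSN_F_SPI_SHIFT) & INSN_F_SPI_MASK) == 37);
	assert(flags & INSN_F_STACK_ACCESS);
	/* fits in the 10-bit flags bitfield of struct bpf_jmp_history_entry */
	assert(flags < (1u << 10));
	return 0;
}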
diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index f5859b8c68b4..7e0a2efd90ca 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -94,19 +94,6 @@
# define __copy(symbol)
#endif
-/*
- * Optional: only supported since gcc >= 14
- * Optional: only supported since clang >= 18
- *
- * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
- * clang: https://reviews.llvm.org/D148381
- */
-#if __has_attribute(__counted_by__)
-# define __counted_by(member) __attribute__((__counted_by__(member)))
-#else
-# define __counted_by(member)
-#endif
-
/*
* Optional: not supported by gcc
* Optional: only supported since clang >= 14.0
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 0a182f088c89..02f616dfb15f 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -295,6 +295,25 @@ struct ftrace_likely_data {
#define __no_sanitize_or_inline __always_inline
#endif
+/*
+ * Optional: only supported since gcc >= 15
+ * Optional: only supported since clang >= 18
+ *
+ * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
+ * clang: https://github.com/llvm/llvm-project/pull/76348
+ *
+ * __bdos on clang < 19.1.2 can erroneously return 0:
+ * https://github.com/llvm/llvm-project/pull/110497
+ *
+ * __bdos on clang < 19.1.3 can be off by 4:
+ * https://github.com/llvm/llvm-project/pull/112636
+ */
+#ifdef CONFIG_CC_HAS_COUNTED_BY
+# define __counted_by(member) __attribute__((__counted_by__(member)))
+#else
+# define __counted_by(member)
+#endif
+
/* Section for code which can't be instrumented at all */
#define __noinstr_section(section) \
noinline notrace __attribute((__section__(section))) \
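For reference, the annotation now guarded by CONFIG_CC_HAS_COUNTED_BY is applied to flexible array members so the compiler (and __builtin_dynamic_object_size) knows the runtime element count; the init/Kconfig hunk later in this mail probes the compiler with essentially the same construct. A minimal example:

struct flex {
	int count;
	int array[] __counted_by(count);
};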
diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h
index 5c4b3a68053f..8070bff54bfa 100644
--- a/include/linux/hisi_acc_qm.h
+++ b/include/linux/hisi_acc_qm.h
@@ -225,6 +225,12 @@ struct hisi_qm_status {
struct hisi_qm;
+enum acc_err_result {
+ ACC_ERR_NONE,
+ ACC_ERR_NEED_RESET,
+ ACC_ERR_RECOVERED,
+};
+
struct hisi_qm_err_info {
char *acpi_rst;
u32 msi_wr_port;
@@ -253,9 +259,9 @@ struct hisi_qm_err_ini {
void (*close_axi_master_ooo)(struct hisi_qm *qm);
void (*open_sva_prefetch)(struct hisi_qm *qm);
void (*close_sva_prefetch)(struct hisi_qm *qm);
- void (*log_dev_hw_err)(struct hisi_qm *qm, u32 err_sts);
void (*show_last_dfx_regs)(struct hisi_qm *qm);
void (*err_info_init)(struct hisi_qm *qm);
+ enum acc_err_result (*get_err_result)(struct hisi_qm *qm);
};
struct hisi_qm_cap_info {
diff --git a/include/linux/init.h b/include/linux/init.h
index 01b52c9c7526..63d2ee4f1f0e 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -84,11 +84,15 @@
#define __exit __section(".exit.text") __exitused __cold notrace
-/* Used for MEMORY_HOTPLUG */
-#define __meminit __section(".meminit.text") __cold notrace \
- __latent_entropy
-#define __meminitdata __section(".meminit.data")
-#define __meminitconst __section(".meminit.rodata")
+#ifdef CONFIG_MEMORY_HOTPLUG
+#define __meminit
+#define __meminitdata
+#define __meminitconst
+#else
+#define __meminit __init
+#define __meminitdata __initdata
+#define __meminitconst __initconst
+#endif
/* For assembly routines */
#define __HEAD .section ".head.text","ax"
diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h
index e0ae2a43e0eb..03f38fe9b9a1 100644
--- a/include/linux/jiffies.h
+++ b/include/linux/jiffies.h
@@ -499,7 +499,7 @@ static inline unsigned long _msecs_to_jiffies(const unsigned int m)
* - all other values are converted to jiffies by either multiplying
* the input value by a factor or dividing it with a factor and
* handling any 32-bit overflows.
- * for the details see __msecs_to_jiffies()
+ * for the details see _msecs_to_jiffies()
*
* msecs_to_jiffies() checks for the passed in value being a constant
* via __builtin_constant_p() allowing gcc to eliminate most of the
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index dc2844b071c2..919a5cb6368d 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -230,7 +230,7 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
(lock)->dep_map.lock_type)
#define lockdep_set_subclass(lock, sub) \
- lockdep_init_map_type(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
+ lockdep_init_map_type(&(lock)->dep_map, (lock)->dep_map.name, (lock)->dep_map.key, sub,\
(lock)->dep_map.wait_type_inner, \
(lock)->dep_map.wait_type_outer, \
(lock)->dep_map.lock_type)
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 7c3e7b0b0e8f..28c21d5b25f6 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -46,7 +46,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
} \
} while (0)
#define VM_WARN_ON_ONCE_PAGE(cond, page) ({ \
- static bool __section(".data.once") __warned; \
+ static bool __section(".data..once") __warned; \
int __ret_warn_once = !!(cond); \
\
if (unlikely(__ret_warn_once && !__warned)) { \
@@ -66,7 +66,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
unlikely(__ret_warn); \
})
#define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({ \
- static bool __section(".data.once") __warned; \
+ static bool __section(".data..once") __warned; \
int __ret_warn_once = !!(cond); \
\
if (unlikely(__ret_warn_once && !__warned)) { \
@@ -77,7 +77,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
unlikely(__ret_warn_once); \
})
#define VM_WARN_ON_ONCE_MM(cond, mm) ({ \
- static bool __section(".data.once") __warned; \
+ static bool __section(".data..once") __warned; \
int __ret_warn_once = !!(cond); \
\
if (unlikely(__ret_warn_once && !__warned)) { \
diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h
index bd19c4b91e31..3ddf205b7e2c 100644
--- a/include/linux/netpoll.h
+++ b/include/linux/netpoll.h
@@ -71,7 +71,7 @@ static inline void *netpoll_poll_lock(struct napi_struct *napi)
{
struct net_device *dev = napi->dev;
- if (dev && dev->npinfo) {
+ if (dev && rcu_access_pointer(dev->npinfo)) {
int owner = smp_processor_id();
while (cmpxchg(&napi->poll_owner, -1, owner) != -1)
diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
index d69ad5bb1eb1..b8d6c0c20876 100644
--- a/include/linux/of_fdt.h
+++ b/include/linux/of_fdt.h
@@ -31,6 +31,7 @@ extern void *of_fdt_unflatten_tree(const unsigned long *blob,
extern int __initdata dt_root_addr_cells;
extern int __initdata dt_root_size_cells;
extern void *initial_boot_params;
+extern phys_addr_t initial_boot_params_pa;
extern char __dtb_start[];
extern char __dtb_end[];
@@ -70,8 +71,8 @@ extern u64 dt_mem_next_cell(int s, const __be32 **cellp);
/* Early flat tree scan hooks */
extern int early_init_dt_scan_root(void);
-extern bool early_init_dt_scan(void *params);
-extern bool early_init_dt_verify(void *params);
+extern bool early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys);
+extern bool early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys);
extern void early_init_dt_scan_nodes(void);
extern const char *of_flat_dt_get_machine_name(void);
diff --git a/include/linux/once.h b/include/linux/once.h
index bc714d414448..30346fcdc799 100644
--- a/include/linux/once.h
+++ b/include/linux/once.h
@@ -46,7 +46,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
#define DO_ONCE(func, ...) \
({ \
bool ___ret = false; \
- static bool __section(".data.once") ___done = false; \
+ static bool __section(".data..once") ___done = false; \
static DEFINE_STATIC_KEY_TRUE(___once_key); \
if (static_branch_unlikely(&___once_key)) { \
unsigned long ___flags; \
@@ -64,7 +64,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
#define DO_ONCE_SLEEPABLE(func, ...) \
({ \
bool ___ret = false; \
- static bool __section(".data.once") ___done = false; \
+ static bool __section(".data..once") ___done = false; \
static DEFINE_STATIC_KEY_TRUE(___once_key); \
if (static_branch_unlikely(&___once_key)) { \
___ret = __do_once_sleepable_start(&___done); \
diff --git a/include/linux/once_lite.h b/include/linux/once_lite.h
index b7bce4983638..27de7bc32a06 100644
--- a/include/linux/once_lite.h
+++ b/include/linux/once_lite.h
@@ -12,7 +12,7 @@
#define __ONCE_LITE_IF(condition) \
({ \
- static bool __section(".data.once") __already_done; \
+ static bool __section(".data..once") __already_done; \
bool __ret_cond = !!(condition); \
bool __ret_once = false; \
\
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 6466c2f79292..7602d1f8a9ec 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -398,7 +398,7 @@ static inline int debug_lockdep_rcu_enabled(void)
*/
#define RCU_LOCKDEP_WARN(c, s) \
do { \
- static bool __section(".data.unlikely") __warned; \
+ static bool __section(".data..unlikely") __warned; \
if (debug_lockdep_rcu_enabled() && (c) && \
debug_lockdep_rcu_enabled() && !__warned) { \
__warned = true; \
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index e9bd2f65d7f4..b4b4ce9a4151 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -682,6 +682,23 @@ static __always_inline unsigned raw_read_seqcount_latch(const seqcount_latch_t *
return READ_ONCE(s->seqcount.sequence);
}
+/**
+ * read_seqcount_latch() - pick even/odd latch data copy
+ * @s: Pointer to seqcount_latch_t
+ *
+ * See write_seqcount_latch() for details and a full reader/writer usage
+ * example.
+ *
+ * Return: sequence counter raw value. Use the lowest bit as an index for
+ * picking which data copy to read. The full counter must then be checked
+ * with read_seqcount_latch_retry().
+ */
+static __always_inline unsigned read_seqcount_latch(const seqcount_latch_t *s)
+{
+ kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
+ return raw_read_seqcount_latch(s);
+}
+
/**
* raw_read_seqcount_latch_retry() - end a seqcount_latch_t read section
* @s: Pointer to seqcount_latch_t
@@ -696,9 +713,34 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
return unlikely(READ_ONCE(s->seqcount.sequence) != start);
}
+/**
+ * read_seqcount_latch_retry() - end a seqcount_latch_t read section
+ * @s: Pointer to seqcount_latch_t
+ * @start: count, from read_seqcount_latch()
+ *
+ * Return: true if a read section retry is required, else false
+ */
+static __always_inline int
+read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+{
+ kcsan_atomic_next(0);
+ return raw_read_seqcount_latch_retry(s, start);
+}
+
/**
* raw_write_seqcount_latch() - redirect latch readers to even/odd copy
* @s: Pointer to seqcount_latch_t
+ */
+static __always_inline void raw_write_seqcount_latch(seqcount_latch_t *s)
+{
+ smp_wmb(); /* prior stores before incrementing "sequence" */
+ s->seqcount.sequence++;
+ smp_wmb(); /* increment "sequence" before following stores */
+}
+
+/**
+ * write_seqcount_latch_begin() - redirect latch readers to odd copy
+ * @s: Pointer to seqcount_latch_t
*
* The latch technique is a multiversion concurrency control method that allows
* queries during non-atomic modifications. If you can guarantee queries never
@@ -726,17 +768,11 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
*
* void latch_modify(struct latch_struct *latch, ...)
* {
- * smp_wmb(); // Ensure that the last data[1] update is visible
- * latch->seq.sequence++;
- * smp_wmb(); // Ensure that the seqcount update is visible
- *
+ * write_seqcount_latch_begin(&latch->seq);
* modify(latch->data[0], ...);
- *
- * smp_wmb(); // Ensure that the data[0] update is visible
- * latch->seq.sequence++;
- * smp_wmb(); // Ensure that the seqcount update is visible
- *
+ * write_seqcount_latch(&latch->seq);
* modify(latch->data[1], ...);
+ * write_seqcount_latch_end(&latch->seq);
* }
*
* The query will have a form like::
@@ -747,13 +783,13 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
* unsigned seq, idx;
*
* do {
- * seq = raw_read_seqcount_latch(&latch->seq);
+ * seq = read_seqcount_latch(&latch->seq);
*
* idx = seq & 0x01;
* entry = data_query(latch->data[idx], ...);
*
* // This includes needed smp_rmb()
- * } while (raw_read_seqcount_latch_retry(&latch->seq, seq));
+ * } while (read_seqcount_latch_retry(&latch->seq, seq));
*
* return entry;
* }
@@ -777,11 +813,31 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
* When data is a dynamic data structure; one should use regular RCU
* patterns to manage the lifetimes of the objects within.
*/
-static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
+static __always_inline void write_seqcount_latch_begin(seqcount_latch_t *s)
{
- smp_wmb(); /* prior stores before incrementing "sequence" */
- s->seqcount.sequence++;
- smp_wmb(); /* increment "sequence" before following stores */
+ kcsan_nestable_atomic_begin();
+ raw_write_seqcount_latch(s);
+}
+
+/**
+ * write_seqcount_latch() - redirect latch readers to even copy
+ * @s: Pointer to seqcount_latch_t
+ */
+static __always_inline void write_seqcount_latch(seqcount_latch_t *s)
+{
+ raw_write_seqcount_latch(s);
+}
+
+/**
+ * write_seqcount_latch_end() - end a seqcount_latch_t write section
+ * @s: Pointer to seqcount_latch_t
+ *
+ * Marks the end of a seqcount_latch_t writer section, after all copies of the
+ * latch-protected data have been updated.
+ */
+static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
+{
+ kcsan_nestable_atomic_end();
}
/*
@@ -834,11 +890,7 @@ typedef struct {
*/
static inline unsigned read_seqbegin(const seqlock_t *sl)
{
- unsigned ret = read_seqcount_begin(&sl->seqcount);
-
- kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */
- kcsan_flat_atomic_begin();
- return ret;
+ return read_seqcount_begin(&sl->seqcount);
}
/**
@@ -854,12 +906,6 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
*/
static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
{
- /*
- * Assume not nested: read_seqretry() may be called multiple times when
- * completing read critical section.
- */
- kcsan_flat_atomic_end();
-
return read_seqcount_retry(&sl->seqcount, start);
}
diff --git a/include/linux/sock_diag.h b/include/linux/sock_diag.h
index 0b9ecd8cf979..110978dc9af1 100644
--- a/include/linux/sock_diag.h
+++ b/include/linux/sock_diag.h
@@ -13,6 +13,7 @@ struct nlmsghdr;
struct sock;
struct sock_diag_handler {
+ struct module *owner;
__u8 family;
int (*dump)(struct sk_buff *skb, struct nlmsghdr *nlh);
int (*get_info)(struct sk_buff *skb, struct sock *sk);
@@ -22,8 +23,13 @@ struct sock_diag_handler {
int sock_diag_register(const struct sock_diag_handler *h);
void sock_diag_unregister(const struct sock_diag_handler *h);
-void sock_diag_register_inet_compat(int (*fn)(struct sk_buff *skb, struct nlmsghdr *nlh));
-void sock_diag_unregister_inet_compat(int (*fn)(struct sk_buff *skb, struct nlmsghdr *nlh));
+struct sock_diag_inet_compat {
+ struct module *owner;
+ int (*fn)(struct sk_buff *skb, struct nlmsghdr *nlh);
+};
+
+void sock_diag_register_inet_compat(const struct sock_diag_inet_compat *ptr);
+void sock_diag_unregister_inet_compat(const struct sock_diag_inet_compat *ptr);
u64 __sock_gen_cookie(struct sock *sk);
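With the new owner field and the sock_diag_inet_compat wrapper, registrations now pin the providing module. A sketch of what a caller might pass (the handler and function names here are illustrative, not taken from the patch):

static int example_compat_dump(struct sk_buff *skb, struct nlmsghdr *nlh);

static const struct sock_diag_inet_compat example_inet_compat = {
	.owner	= THIS_MODULE,
	.fn	= example_compat_dump,
};

static int __init example_init(void)
{
	sock_diag_register_inet_compat(&example_inet_compat);
	return 0;
}

static void __exit example_exit(void)
{
	sock_diag_unregister_inet_compat(&example_inet_compat);
}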
diff --git a/include/linux/util_macros.h b/include/linux/util_macros.h
index 6bb460c3e818..825487fb66fa 100644
--- a/include/linux/util_macros.h
+++ b/include/linux/util_macros.h
@@ -4,19 +4,6 @@
#include <linux/math.h>
-#define __find_closest(x, a, as, op) \
-({ \
- typeof(as) __fc_i, __fc_as = (as) - 1; \
- typeof(x) __fc_x = (x); \
- typeof(*a) const *__fc_a = (a); \
- for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) { \
- if (__fc_x op DIV_ROUND_CLOSEST(__fc_a[__fc_i] + \
- __fc_a[__fc_i + 1], 2)) \
- break; \
- } \
- (__fc_i); \
-})
-
/**
* find_closest - locate the closest element in a sorted array
* @x: The reference value.
@@ -25,8 +12,27 @@
* @as: Size of 'a'.
*
* Returns the index of the element closest to 'x'.
+ * Note: If using an array of negative numbers (or mixed positive numbers),
+ * then be sure that 'x' is of a signed-type to get good results.
*/
-#define find_closest(x, a, as) __find_closest(x, a, as, <=)
+#define find_closest(x, a, as) \
+({ \
+ typeof(as) __fc_i, __fc_as = (as) - 1; \
+ long __fc_mid_x, __fc_x = (x); \
+ long __fc_left, __fc_right; \
+ typeof(*a) const *__fc_a = (a); \
+ for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) { \
+ __fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i + 1]) / 2; \
+ if (__fc_x <= __fc_mid_x) { \
+ __fc_left = __fc_x - __fc_a[__fc_i]; \
+ __fc_right = __fc_a[__fc_i + 1] - __fc_x; \
+ if (__fc_right < __fc_left) \
+ __fc_i++; \
+ break; \
+ } \
+ } \
+ (__fc_i); \
+})
/**
* find_closest_descending - locate the closest element in a sorted array
@@ -36,9 +42,27 @@
* @as: Size of 'a'.
*
* Similar to find_closest() but 'a' is expected to be sorted in descending
- * order.
+ * order. The iteration is done in reverse order, so that the comparison
+ * of '__fc_right' & '__fc_left' also works for unsigned numbers.
*/
-#define find_closest_descending(x, a, as) __find_closest(x, a, as, >=)
+#define find_closest_descending(x, a, as) \
+({ \
+ typeof(as) __fc_i, __fc_as = (as) - 1; \
+ long __fc_mid_x, __fc_x = (x); \
+ long __fc_left, __fc_right; \
+ typeof(*a) const *__fc_a = (a); \
+ for (__fc_i = __fc_as; __fc_i >= 1; __fc_i--) { \
+ __fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i - 1]) / 2; \
+ if (__fc_x <= __fc_mid_x) { \
+ __fc_left = __fc_x - __fc_a[__fc_i]; \
+ __fc_right = __fc_a[__fc_i - 1] - __fc_x; \
+ if (__fc_right < __fc_left) \
+ __fc_i--; \
+ break; \
+ } \
+ } \
+ (__fc_i); \
+})
/**
* is_insidevar - check if the @ptr points inside the @var memory range.
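A user-space sketch of the rounding behaviour the reworked macro is meant to provide (the macro body is copied verbatim from the hunk above so it can be exercised outside the kernel); note the added comment's point that 'x' must be signed when the array holds negative values:

#include <stdio.h>

#define find_closest(x, a, as)						\
({									\
	typeof(as) __fc_i, __fc_as = (as) - 1;				\
	long __fc_mid_x, __fc_x = (x);					\
	long __fc_left, __fc_right;					\
	typeof(*a) const *__fc_a = (a);					\
	for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) {			\
		__fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i + 1]) / 2; \
		if (__fc_x <= __fc_mid_x) {				\
			__fc_left = __fc_x - __fc_a[__fc_i];		\
			__fc_right = __fc_a[__fc_i + 1] - __fc_x;	\
			if (__fc_right < __fc_left)			\
				__fc_i++;				\
			break;						\
		}							\
	}								\
	(__fc_i);							\
})

int main(void)
{
	static const int neg[] = { -375, -250, -125, 0, 125 };
	static const int pos[] = { 100, 200, 400, 800 };

	/* -300 is closest to -250, so index 1; needs a signed 'x' */
	printf("%d\n", (int)find_closest(-300, neg, 5));
	/* 290 is closer to 200 than to 400, so index 1 */
	printf("%d\n", (int)find_closest(290, pos, 4));
	return 0;
}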
diff --git a/include/media/v4l2-dv-timings.h b/include/media/v4l2-dv-timings.h
index 8fa963326bf6..c64096b5c782 100644
--- a/include/media/v4l2-dv-timings.h
+++ b/include/media/v4l2-dv-timings.h
@@ -146,15 +146,18 @@ void v4l2_print_dv_timings(const char *dev_prefix, const char *prefix,
* @polarities: the horizontal and vertical polarities (same as struct
* v4l2_bt_timings polarities).
* @interlaced: if this flag is true, it indicates interlaced format
+ * @cap: the v4l2_dv_timings_cap capabilities.
* @fmt: the resulting timings.
*
* This function will attempt to detect if the given values correspond to a
* valid CVT format. If so, then it will return true, and fmt will be filled
* in with the found CVT timings.
*/
-bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
- unsigned active_width, u32 polarities, bool interlaced,
- struct v4l2_dv_timings *fmt);
+bool v4l2_detect_cvt(unsigned int frame_height, unsigned int hfreq,
+ unsigned int vsync, unsigned int active_width,
+ u32 polarities, bool interlaced,
+ const struct v4l2_dv_timings_cap *cap,
+ struct v4l2_dv_timings *fmt);
/**
* v4l2_detect_gtf - detect if the given timings follow the GTF standard
@@ -170,15 +173,18 @@ bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
* image height, so it has to be passed explicitly. Usually
* the native screen aspect ratio is used for this. If it
* is not filled in correctly, then 16:9 will be assumed.
+ * @cap: the v4l2_dv_timings_cap capabilities.
* @fmt: the resulting timings.
*
* This function will attempt to detect if the given values correspond to a
* valid GTF format. If so, then it will return true, and fmt will be filled
* in with the found GTF timings.
*/
-bool v4l2_detect_gtf(unsigned frame_height, unsigned hfreq, unsigned vsync,
- u32 polarities, bool interlaced, struct v4l2_fract aspect,
- struct v4l2_dv_timings *fmt);
+bool v4l2_detect_gtf(unsigned int frame_height, unsigned int hfreq,
+ unsigned int vsync, u32 polarities, bool interlaced,
+ struct v4l2_fract aspect,
+ const struct v4l2_dv_timings_cap *cap,
+ struct v4l2_dv_timings *fmt);
/**
* v4l2_calc_aspect_ratio - calculate the aspect ratio based on bytes
diff --git a/include/net/ieee80211_radiotap.h b/include/net/ieee80211_radiotap.h
index 2338f8d2a8b3..c6cb6f642742 100644
--- a/include/net/ieee80211_radiotap.h
+++ b/include/net/ieee80211_radiotap.h
@@ -24,25 +24,27 @@
* struct ieee80211_radiotap_header - base radiotap header
*/
struct ieee80211_radiotap_header {
- /**
- * @it_version: radiotap version, always 0
- */
- uint8_t it_version;
-
- /**
- * @it_pad: padding (or alignment)
- */
- uint8_t it_pad;
-
- /**
- * @it_len: overall radiotap header length
- */
- __le16 it_len;
-
- /**
- * @it_present: (first) present word
- */
- __le32 it_present;
+ __struct_group(ieee80211_radiotap_header_fixed, hdr, __packed,
+ /**
+ * @it_version: radiotap version, always 0
+ */
+ uint8_t it_version;
+
+ /**
+ * @it_pad: padding (or alignment)
+ */
+ uint8_t it_pad;
+
+ /**
+ * @it_len: overall radiotap header length
+ */
+ __le16 it_len;
+
+ /**
+ * @it_present: (first) present word
+ */
+ __le32 it_present;
+ );
/**
* @it_optional: all remaining presence bitmaps
@@ -50,6 +52,9 @@ struct ieee80211_radiotap_header {
__le32 it_optional[];
} __packed;
+static_assert(offsetof(struct ieee80211_radiotap_header, it_optional) == sizeof(struct ieee80211_radiotap_header_fixed),
+ "struct member likely outside of __struct_group()");
+
/* version is always 0 */
#define PKTHDR_RADIOTAP_VERSION 0
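The __struct_group() wrapper introduces ieee80211_radiotap_header_fixed so code that only needs the fixed-size prefix (version/pad/len/first present word) no longer has to embed a structure that ends in a flexible array. A hedged example of a driver-side RX header using it (the fields after the prefix are illustrative):

struct example_rx_radiotap_hdr {
	struct ieee80211_radiotap_header_fixed hdr;	/* fixed prefix only */
	u8 flags;
	u8 rate;
	__le16 chan_freq;
	__le16 chan_flags;
} __packed;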
diff --git a/include/net/net_debug.h b/include/net/net_debug.h
index 1e74684cbbdb..4a79204c8d30 100644
--- a/include/net/net_debug.h
+++ b/include/net/net_debug.h
@@ -27,7 +27,7 @@ void netdev_info(const struct net_device *dev, const char *format, ...);
#define netdev_level_once(level, dev, fmt, ...) \
do { \
- static bool __section(".data.once") __print_once; \
+ static bool __section(".data..once") __print_once; \
\
if (!__print_once) { \
__print_once = true; \
diff --git a/include/net/sock.h b/include/net/sock.h
index e0be8bd98396..a6b795ec7c9c 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2219,7 +2219,7 @@ sk_dst_set(struct sock *sk, struct dst_entry *dst)
sk_tx_queue_clear(sk);
WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
- old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst);
+ old_dst = unrcu_pointer(xchg(&sk->sk_dst_cache, RCU_INITIALIZER(dst)));
dst_release(old_dst);
}
diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
index 51c13cf9c5ae..63a0922937e7 100644
--- a/include/uapi/linux/rtnetlink.h
+++ b/include/uapi/linux/rtnetlink.h
@@ -174,7 +174,7 @@ enum {
#define RTM_GETLINKPROP RTM_GETLINKPROP
RTM_NEWVLAN = 112,
-#define RTM_NEWNVLAN RTM_NEWVLAN
+#define RTM_NEWVLAN RTM_NEWVLAN
RTM_DELVLAN,
#define RTM_DELVLAN RTM_DELVLAN
RTM_GETVLAN,
diff --git a/init/Kconfig b/init/Kconfig
index 6054ba684c53..60ed7713b5ee 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -107,6 +107,15 @@ config CC_HAS_ASM_INLINE
config CC_HAS_NO_PROFILE_FN_ATTR
def_bool $(success,echo '__attribute__((no_profile_instrument_function)) int x();' | $(CC) -x c - -c -o /dev/null -Werror)
+config CC_HAS_COUNTED_BY
+ # TODO: when gcc 15 is released remove the build test and add
+ # a gcc version check
+ def_bool $(success,echo 'struct flex { int count; int array[] __attribute__((__counted_by__(count))); };' | $(CC) $(CLANG_FLAGS) -x c - -c -o /dev/null -Werror)
+ # clang needs to be at least 19.1.3 to avoid __bdos miscalculations
+ # https://github.com/llvm/llvm-project/pull/110497
+ # https://github.com/llvm/llvm-project/pull/112636
+ depends on !(CC_IS_CLANG && CLANG_VERSION < 190103)
+
config PAHOLE_VERSION
int
default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
diff --git a/init/initramfs.c b/init/initramfs.c
index efc477b905a4..148988bd8ab2 100644
--- a/init/initramfs.c
+++ b/init/initramfs.c
@@ -358,6 +358,15 @@ static int __init do_name(void)
{
state = SkipIt;
next_state = Reset;
+
+ /* name_len > 0 && name_len <= PATH_MAX checked in do_header */
+ if (collected[name_len - 1] != '\0') {
+ pr_err("initramfs name without nulterm: %.*s\n",
+ (int)name_len, collected);
+ error("malformed archive");
+ return 1;
+ }
+
if (strcmp(collected, "TRAILER!!!") == 0) {
free_hash();
return 0;
@@ -422,6 +431,12 @@ static int __init do_copy(void)
static int __init do_symlink(void)
{
+ if (collected[name_len - 1] != '\0') {
+ pr_err("initramfs symlink without nulterm: %.*s\n",
+ (int)name_len, collected);
+ error("malformed archive");
+ return 1;
+ }
collected[N_ALIGN(name_len) + body_len] = '\0';
clean_path(collected, 0);
init_symlink(collected + N_ALIGN(name_len), collected);
diff --git a/ipc/namespace.c b/ipc/namespace.c
index 6ecc30effd3e..4df91ceeeafe 100644
--- a/ipc/namespace.c
+++ b/ipc/namespace.c
@@ -83,13 +83,15 @@ static struct ipc_namespace *create_ipc_ns(struct user_namespace *user_ns,
err = msg_init_ns(ns);
if (err)
- goto fail_put;
+ goto fail_ipc;
sem_init_ns(ns);
shm_init_ns(ns);
return ns;
+fail_ipc:
+ retire_ipc_sysctls(ns);
fail_mq:
retire_mq_sysctls(ns);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4f19a091571b..5ca02af3a872 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1762,8 +1762,8 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
int i, err;
dst_state->jmp_history = copy_array(dst_state->jmp_history, src->jmp_history,
- src->jmp_history_cnt, sizeof(struct bpf_idx_pair),
- GFP_USER);
+ src->jmp_history_cnt, sizeof(*dst_state->jmp_history),
+ GFP_USER);
if (!dst_state->jmp_history)
return -ENOMEM;
dst_state->jmp_history_cnt = src->jmp_history_cnt;
@@ -3397,6 +3397,21 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
return __check_reg_arg(env, state->regs, regno, t);
}
+static int insn_stack_access_flags(int frameno, int spi)
+{
+ return INSN_F_STACK_ACCESS | (spi << INSN_F_SPI_SHIFT) | frameno;
+}
+
+static int insn_stack_access_spi(int insn_flags)
+{
+ return (insn_flags >> INSN_F_SPI_SHIFT) & INSN_F_SPI_MASK;
+}
+
+static int insn_stack_access_frameno(int insn_flags)
+{
+ return insn_flags & INSN_F_FRAMENO_MASK;
+}
+
static void mark_jmp_point(struct bpf_verifier_env *env, int idx)
{
env->insn_aux_data[idx].jmp_point = true;
@@ -3408,28 +3423,51 @@ static bool is_jmp_point(struct bpf_verifier_env *env, int insn_idx)
}
/* for any branch, call, exit record the history of jmps in the given state */
-static int push_jmp_history(struct bpf_verifier_env *env,
- struct bpf_verifier_state *cur)
+static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_state *cur,
+ int insn_flags)
{
u32 cnt = cur->jmp_history_cnt;
- struct bpf_idx_pair *p;
+ struct bpf_jmp_history_entry *p;
size_t alloc_size;
- if (!is_jmp_point(env, env->insn_idx))
+ /* combine instruction flags if we already recorded this instruction */
+ if (env->cur_hist_ent) {
+ /* atomic instructions push insn_flags twice, for READ and
+ * WRITE sides, but they should agree on stack slot
+ */
+ WARN_ONCE((env->cur_hist_ent->flags & insn_flags) &&
+ (env->cur_hist_ent->flags & insn_flags) != insn_flags,
+ "verifier insn history bug: insn_idx %d cur flags %x new flags %x\n",
+ env->insn_idx, env->cur_hist_ent->flags, insn_flags);
+ env->cur_hist_ent->flags |= insn_flags;
return 0;
+ }
cnt++;
alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p)));
p = krealloc(cur->jmp_history, alloc_size, GFP_USER);
if (!p)
return -ENOMEM;
- p[cnt - 1].idx = env->insn_idx;
- p[cnt - 1].prev_idx = env->prev_insn_idx;
cur->jmp_history = p;
+
+ p = &cur->jmp_history[cnt - 1];
+ p->idx = env->insn_idx;
+ p->prev_idx = env->prev_insn_idx;
+ p->flags = insn_flags;
cur->jmp_history_cnt = cnt;
+ env->cur_hist_ent = p;
+
return 0;
}
+static struct bpf_jmp_history_entry *get_jmp_hist_entry(struct bpf_verifier_state *st,
+ u32 hist_end, int insn_idx)
+{
+ if (hist_end > 0 && st->jmp_history[hist_end - 1].idx == insn_idx)
+ return &st->jmp_history[hist_end - 1];
+ return NULL;
+}
+
/* Backtrack one insn at a time. If idx is not at the top of recorded
* history then previous instruction came from straight line execution.
* Return -ENOENT if we exhausted all instructions within given state.
@@ -3591,9 +3629,14 @@ static inline bool bt_is_reg_set(struct backtrack_state *bt, u32 reg)
return bt->reg_masks[bt->frame] & (1 << reg);
}
+static inline bool bt_is_frame_slot_set(struct backtrack_state *bt, u32 frame, u32 slot)
+{
+ return bt->stack_masks[frame] & (1ull << slot);
+}
+
static inline bool bt_is_slot_set(struct backtrack_state *bt, u32 slot)
{
- return bt->stack_masks[bt->frame] & (1ull << slot);
+ return bt_is_frame_slot_set(bt, bt->frame, slot);
}
/* format registers bitmask, e.g., "r0,r2,r4" for 0x15 mask */
@@ -3647,7 +3690,7 @@ static bool calls_callback(struct bpf_verifier_env *env, int insn_idx);
* - *was* processed previously during backtracking.
*/
static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
- struct backtrack_state *bt)
+ struct bpf_jmp_history_entry *hist, struct backtrack_state *bt)
{
const struct bpf_insn_cbs cbs = {
.cb_call = disasm_kfunc_name,
@@ -3660,7 +3703,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
u8 mode = BPF_MODE(insn->code);
u32 dreg = insn->dst_reg;
u32 sreg = insn->src_reg;
- u32 spi, i;
+ u32 spi, i, fr;
if (insn->code == 0)
return 0;
@@ -3723,20 +3766,15 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
* by 'precise' mark in corresponding register of this state.
* No further tracking necessary.
*/
- if (insn->src_reg != BPF_REG_FP)
+ if (!hist || !(hist->flags & INSN_F_STACK_ACCESS))
return 0;
-
/* dreg = *(u64 *)[fp - off] was a fill from the stack.
* that [fp - off] slot contains scalar that needs to be
* tracked with precision
*/
- spi = (-insn->off - 1) / BPF_REG_SIZE;
- if (spi >= 64) {
- verbose(env, "BUG spi %d\n", spi);
- WARN_ONCE(1, "verifier backtracking bug");
- return -EFAULT;
- }
- bt_set_slot(bt, spi);
+ spi = insn_stack_access_spi(hist->flags);
+ fr = insn_stack_access_frameno(hist->flags);
+ bt_set_frame_slot(bt, fr, spi);
} else if (class == BPF_STX || class == BPF_ST) {
if (bt_is_reg_set(bt, dreg))
/* stx & st shouldn't be using _scalar_ dst_reg
@@ -3745,17 +3783,13 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
*/
return -ENOTSUPP;
/* scalars can only be spilled into stack */
- if (insn->dst_reg != BPF_REG_FP)
+ if (!hist || !(hist->flags & INSN_F_STACK_ACCESS))
return 0;
- spi = (-insn->off - 1) / BPF_REG_SIZE;
- if (spi >= 64) {
- verbose(env, "BUG spi %d\n", spi);
- WARN_ONCE(1, "verifier backtracking bug");
- return -EFAULT;
- }
- if (!bt_is_slot_set(bt, spi))
+ spi = insn_stack_access_spi(hist->flags);
+ fr = insn_stack_access_frameno(hist->flags);
+ if (!bt_is_frame_slot_set(bt, fr, spi))
return 0;
- bt_clear_slot(bt, spi);
+ bt_clear_frame_slot(bt, fr, spi);
if (class == BPF_STX)
bt_set_reg(bt, sreg);
} else if (class == BPF_JMP || class == BPF_JMP32) {
@@ -3799,10 +3833,14 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
WARN_ONCE(1, "verifier backtracking bug");
return -EFAULT;
}
- /* we don't track register spills perfectly,
- * so fallback to force-precise instead of failing */
- if (bt_stack_mask(bt) != 0)
- return -ENOTSUPP;
+ /* we are now tracking register spills correctly,
+ * so any instance of leftover slots is a bug
+ */
+ if (bt_stack_mask(bt) != 0) {
+ verbose(env, "BUG stack slots %llx\n", bt_stack_mask(bt));
+ WARN_ONCE(1, "verifier backtracking bug (subprog leftover stack slots)");
+ return -EFAULT;
+ }
/* propagate r1-r5 to the caller */
for (i = BPF_REG_1; i <= BPF_REG_5; i++) {
if (bt_is_reg_set(bt, i)) {
@@ -3827,8 +3865,11 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
WARN_ONCE(1, "verifier backtracking bug");
return -EFAULT;
}
- if (bt_stack_mask(bt) != 0)
- return -ENOTSUPP;
+ if (bt_stack_mask(bt) != 0) {
+ verbose(env, "BUG stack slots %llx\n", bt_stack_mask(bt));
+ WARN_ONCE(1, "verifier backtracking bug (callback leftover stack slots)");
+ return -EFAULT;
+ }
/* clear r1-r5 in callback subprog's mask */
for (i = BPF_REG_1; i <= BPF_REG_5; i++)
bt_clear_reg(bt, i);
@@ -4265,6 +4306,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
for (;;) {
DECLARE_BITMAP(mask, 64);
u32 history = st->jmp_history_cnt;
+ struct bpf_jmp_history_entry *hist;
if (env->log.level & BPF_LOG_LEVEL2) {
verbose(env, "mark_precise: frame%d: last_idx %d first_idx %d subseq_idx %d \n",
@@ -4328,7 +4370,8 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
err = 0;
skip_first = false;
} else {
- err = backtrack_insn(env, i, subseq_idx, bt);
+ hist = get_jmp_hist_entry(st, history, i);
+ err = backtrack_insn(env, i, subseq_idx, hist, bt);
}
if (err == -ENOTSUPP) {
mark_all_scalars_precise(env, env->cur_state);
@@ -4381,22 +4424,10 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
bitmap_from_u64(mask, bt_frame_stack_mask(bt, fr));
for_each_set_bit(i, mask, 64) {
if (i >= func->allocated_stack / BPF_REG_SIZE) {
- /* the sequence of instructions:
- * 2: (bf) r3 = r10
- * 3: (7b) *(u64 *)(r3 -8) = r0
- * 4: (79) r4 = *(u64 *)(r10 -8)
- * doesn't contain jmps. It's backtracked
- * as a single block.
- * During backtracking insn 3 is not recognized as
- * stack access, so at the end of backtracking
- * stack slot fp-8 is still marked in stack_mask.
- * However the parent state may not have accessed
- * fp-8 and it's "unallocated" stack space.
- * In such case fallback to conservative.
- */
- mark_all_scalars_precise(env, env->cur_state);
- bt_reset(bt);
- return 0;
+ verbose(env, "BUG backtracking (stack slot %d, total slots %d)\n",
+ i, func->allocated_stack / BPF_REG_SIZE);
+ WARN_ONCE(1, "verifier backtracking bug (stack slot out of bounds)");
+ return -EFAULT;
}
if (!is_spilled_scalar_reg(&func->stack[i])) {
@@ -4561,7 +4592,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
int i, slot = -off - 1, spi = slot / BPF_REG_SIZE, err;
struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
struct bpf_reg_state *reg = NULL;
- u32 dst_reg = insn->dst_reg;
+ int insn_flags = insn_stack_access_flags(state->frameno, spi);
/* caller checked that off % size == 0 and -MAX_BPF_STACK <= off < 0,
* so it's aligned access and [off, off + size) are within stack limits
@@ -4599,17 +4630,6 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
mark_stack_slot_scratched(env, spi);
if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
!register_is_null(reg) && env->bpf_capable) {
- if (dst_reg != BPF_REG_FP) {
- /* The backtracking logic can only recognize explicit
- * stack slot address like [fp - 8]. Other spill of
- * scalar via different register has to be conservative.
- * Backtrack from here and mark all registers as precise
- * that contributed into 'reg' being a constant.
- */
- err = mark_chain_precision(env, value_regno);
- if (err)
- return err;
- }
save_register_state(state, spi, reg, size);
/* Break the relation on a narrowing spill. */
if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
@@ -4621,6 +4641,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
__mark_reg_known(&fake_reg, insn->imm);
fake_reg.type = SCALAR_VALUE;
save_register_state(state, spi, &fake_reg, size);
+ insn_flags = 0; /* not a register spill */
} else if (reg && is_spillable_regtype(reg->type)) {
/* register containing pointer is being spilled into stack */
if (size != BPF_REG_SIZE) {
@@ -4666,9 +4687,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
/* Mark slots affected by this stack write. */
for (i = 0; i < size; i++)
- state->stack[spi].slot_type[(slot - i) % BPF_REG_SIZE] =
- type;
+ state->stack[spi].slot_type[(slot - i) % BPF_REG_SIZE] = type;
+ insn_flags = 0; /* not a register spill */
}
+
+ if (insn_flags)
+ return push_jmp_history(env, env->cur_state, insn_flags);
return 0;
}
@@ -4857,6 +4881,7 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
int i, slot = -off - 1, spi = slot / BPF_REG_SIZE;
struct bpf_reg_state *reg;
u8 *stype, type;
+ int insn_flags = insn_stack_access_flags(reg_state->frameno, spi);
stype = reg_state->stack[spi].slot_type;
reg = &reg_state->stack[spi].spilled_ptr;
@@ -4902,12 +4927,10 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
return -EACCES;
}
mark_reg_unknown(env, state->regs, dst_regno);
+ insn_flags = 0; /* not restoring original register state */
}
state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
- return 0;
- }
-
- if (dst_regno >= 0) {
+ } else if (dst_regno >= 0) {
/* restore register state from stack */
copy_register_state(&state->regs[dst_regno], reg);
/* mark reg as written since spilled pointer state likely
@@ -4943,7 +4966,10 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
if (dst_regno >= 0)
mark_reg_stack_read(env, reg_state, off, off + size, dst_regno);
+ insn_flags = 0; /* we are not restoring spilled register */
}
+ if (insn_flags)
+ return push_jmp_history(env, env->cur_state, insn_flags);
return 0;
}
@@ -7027,7 +7053,6 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
if (err)
return err;
-
return 0;
}
@@ -16773,7 +16798,8 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
* the precision needs to be propagated back in
* the current state.
*/
- err = err ? : push_jmp_history(env, cur);
+ if (is_jmp_point(env, env->insn_idx))
+ err = err ? : push_jmp_history(env, cur, 0);
err = err ? : propagate_precision(env, &sl->state);
if (err)
return err;
@@ -16997,6 +17023,9 @@ static int do_check(struct bpf_verifier_env *env)
u8 class;
int err;
+ /* reset current history entry on each new instruction */
+ env->cur_hist_ent = NULL;
+
env->prev_insn_idx = prev_insn_idx;
if (env->insn_idx >= insn_cnt) {
verbose(env, "invalid insn idx %d insn_cnt %d\n",
@@ -17036,7 +17065,7 @@ static int do_check(struct bpf_verifier_env *env)
}
if (is_jmp_point(env, env->insn_idx)) {
- err = push_jmp_history(env, state);
+ err = push_jmp_history(env, state, 0);
if (err)
return err;
}
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index b927f0623ac7..36097e8c904f 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2096,8 +2096,10 @@ int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask)
if (ret)
goto exit_stats;
- ret = cgroup_bpf_inherit(root_cgrp);
- WARN_ON_ONCE(ret);
+ if (root == &cgrp_dfl_root) {
+ ret = cgroup_bpf_inherit(root_cgrp);
+ WARN_ON_ONCE(ret);
+ }
trace_cgroup_setup_root(root);
@@ -2270,10 +2272,8 @@ static void cgroup_kill_sb(struct super_block *sb)
* And don't kill the default root.
*/
if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
- !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
- cgroup_bpf_offline(&root->cgrp);
+ !percpu_ref_is_dying(&root->cgrp.self.refcnt))
percpu_ref_kill(&root->cgrp.self.refcnt);
- }
cgroup_put(&root->cgrp);
kernfs_kill_sb(sb);
}
@@ -5618,9 +5618,11 @@ static struct cgroup *cgroup_create(struct cgroup *parent, const char *name,
if (ret)
goto out_kernfs_remove;
- ret = cgroup_bpf_inherit(cgrp);
- if (ret)
- goto out_psi_free;
+ if (cgrp->root == &cgrp_dfl_root) {
+ ret = cgroup_bpf_inherit(cgrp);
+ if (ret)
+ goto out_psi_free;
+ }
/*
* New cgroup inherits effective freeze counter, and
@@ -5938,7 +5940,8 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
cgroup1_check_for_release(parent);
- cgroup_bpf_offline(cgrp);
+ if (cgrp->root == &cgrp_dfl_root)
+ cgroup_bpf_offline(cgrp);
/* put the base reference */
percpu_ref_kill(&cgrp->self.refcnt);
diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
index ed46d9e8c0e4..902575db9aec 100644
--- a/kernel/rcu/rcuscale.c
+++ b/kernel/rcu/rcuscale.c
@@ -780,13 +780,15 @@ kfree_scale_init(void)
if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start < 2 * HZ)) {
pr_alert("ERROR: call_rcu() CBs are not being lazy as expected!\n");
WARN_ON_ONCE(1);
- return -1;
+ firsterr = -1;
+ goto unwind;
}
if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start > 3 * HZ)) {
pr_alert("ERROR: call_rcu() CBs are being too lazy!\n");
WARN_ON_ONCE(1);
- return -1;
+ firsterr = -1;
+ goto unwind;
}
}
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3d7b119f6e2a..fda08520c75c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3150,7 +3150,7 @@ static int krc_count(struct kfree_rcu_cpu *krcp)
}
static void
-schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+__schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
{
long delay, delay_left;
@@ -3164,6 +3164,16 @@ schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
queue_delayed_work(system_wq, &krcp->monitor_work, delay);
}
+static void
+schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+{
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&krcp->lock, flags);
+ __schedule_delayed_monitor_work(krcp);
+ raw_spin_unlock_irqrestore(&krcp->lock, flags);
+}
+
static void
kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
{
@@ -3460,7 +3470,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
// Set timer to drain after KFREE_DRAIN_JIFFIES.
if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
- schedule_delayed_monitor_work(krcp);
+ __schedule_delayed_monitor_work(krcp);
unlock_return:
krc_this_cpu_unlock(krcp, flags);
diff --git a/kernel/signal.c b/kernel/signal.c
index 3808eaa2f49a..49c8c24b444d 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1996,14 +1996,15 @@ int send_sigqueue(struct sigqueue *q, struct pid *pid, enum pid_type type)
* into t->pending).
*
* Where type is not PIDTYPE_PID, signals must be delivered to the
- * process. In this case, prefer to deliver to current if it is in
- * the same thread group as the target process, which avoids
- * unnecessarily waking up a potentially idle task.
+ * process. In this case, prefer to deliver to current if it is in the
+ * same thread group as the target process and its sighand is stable,
+ * which avoids unnecessarily waking up a potentially idle task.
*/
t = pid_task(pid, type);
if (!t)
goto ret;
- if (type != PIDTYPE_PID && same_thread_group(t, current))
+ if (type != PIDTYPE_PID &&
+ same_thread_group(t, current) && !current->exit_state)
t = current;
if (!likely(lock_task_sighand(t, &flags)))
goto ret;
diff --git a/kernel/time/time.c b/kernel/time/time.c
index 642647f5046b..1ad88e97b4eb 100644
--- a/kernel/time/time.c
+++ b/kernel/time/time.c
@@ -556,9 +556,9 @@ EXPORT_SYMBOL(ns_to_timespec64);
* - all other values are converted to jiffies by either multiplying
* the input value by a factor or dividing it with a factor and
* handling any 32-bit overflows.
- * for the details see __msecs_to_jiffies()
+ * for the details see _msecs_to_jiffies()
*
- * __msecs_to_jiffies() checks for the passed in value being a constant
+ * msecs_to_jiffies() checks for the passed in value being a constant
* via __builtin_constant_p() allowing gcc to eliminate most of the
* code, __msecs_to_jiffies() is called if the value passed does not
* allow constant folding and the actual conversion must be done at
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 9064f75de7e4..e8fb6ada323c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3098,7 +3098,6 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
struct bpf_prog *prog = link->link.prog;
bool sleepable = prog->aux->sleepable;
struct bpf_run_ctx *old_run_ctx;
- int err = 0;
if (link->task && current->mm != link->task->mm)
return 0;
@@ -3111,7 +3110,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
migrate_disable();
old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
- err = bpf_prog_run(link->link.prog, regs);
+ bpf_prog_run(link->link.prog, regs);
bpf_reset_run_ctx(old_run_ctx);
migrate_enable();
@@ -3120,7 +3119,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
rcu_read_unlock_trace();
else
rcu_read_unlock();
- return err;
+ return 0;
}
static bool
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 175eba24f562..1043936b352d 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4562,6 +4562,9 @@ ftrace_mod_callback(struct trace_array *tr, struct ftrace_hash *hash,
char *func;
int ret;
+ if (!tr)
+ return -ENODEV;
+
/* match_records() modifies func, and we need the original */
func = kstrdup(func_orig, GFP_KERNEL);
if (!func)
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 05e791241812..3ff9caa4a71b 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -352,10 +352,16 @@ void perf_uprobe_destroy(struct perf_event *p_event)
int perf_trace_add(struct perf_event *p_event, int flags)
{
struct trace_event_call *tp_event = p_event->tp_event;
+ struct hw_perf_event *hwc = &p_event->hw;
if (!(flags & PERF_EF_START))
p_event->hw.state = PERF_HES_STOPPED;
+ if (is_sampling_event(p_event)) {
+ hwc->last_period = hwc->sample_period;
+ perf_swevent_set_period(p_event);
+ }
+
/*
* If TRACE_REG_PERF_ADD returns false; no custom action was performed
* and we need to take the default action of enqueueing our event on
diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 4e05511c8d1e..4eda94906360 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -3547,9 +3547,20 @@ static inline int mas_root_expand(struct ma_state *mas, void *entry)
return slot;
}
+/*
+ * mas_store_root() - Storing value into root.
+ * @mas: The maple state
+ * @entry: The entry to store.
+ *
+ * There is no root node now and we are storing a value into the root - this
+ * function either assigns the pointer or expands into a node.
+ */
static inline void mas_store_root(struct ma_state *mas, void *entry)
{
- if (likely((mas->last != 0) || (mas->index != 0)))
+ if (!entry) {
+ if (!mas->index)
+ rcu_assign_pointer(mas->tree->ma_root, NULL);
+ } else if (likely((mas->last != 0) || (mas->index != 0)))
mas_root_expand(mas, entry);
else if (((unsigned long) (entry) & 3) == 2)
mas_root_expand(mas, entry);
diff --git a/lib/string_helpers.c b/lib/string_helpers.c
index 9982344cca34..8f9f28dfdb39 100644
--- a/lib/string_helpers.c
+++ b/lib/string_helpers.c
@@ -52,7 +52,7 @@ void string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
static const unsigned int rounding[] = { 500, 50, 5 };
int i = 0, j;
u32 remainder = 0, sf_cap;
- char tmp[8];
+ char tmp[12];
const char *unit;
tmp[0] = '\0';
diff --git a/mm/internal.h b/mm/internal.h
index a0b24d005579..f773db493a99 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -40,7 +40,7 @@ struct folio_batch;
* when we specify __GFP_NOWARN.
*/
#define WARN_ON_ONCE_GFP(cond, gfp) ({ \
- static bool __section(".data.once") __warned; \
+ static bool __section(".data..once") __warned; \
int __ret_warn_once = !!(cond); \
\
if (unlikely(!(gfp & __GFP_NOWARN) && __ret_warn_once && !__warned)) { \
diff --git a/mm/slab.h b/mm/slab.h
index 799a315695c6..62df6eeeb5ea 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -78,6 +78,11 @@ struct slab {
struct {
unsigned inuse:16;
unsigned objects:15;
+ /*
+ * If slab debugging is enabled then the
+ * frozen bit can be reused to indicate
+ * that the slab was corrupted
+ */
unsigned frozen:1;
};
};
diff --git a/mm/slub.c b/mm/slub.c
index f7940048138c..d2544c88a5c4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1275,6 +1275,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
slab->inuse, slab->objects);
return 0;
}
+ if (slab->frozen) {
+ slab_err(s, slab, "Slab disabled since SLUB metadata consistency check failed");
+ return 0;
+ }
+
/* Slab_pad_check fixes things up after itself */
slab_pad_check(s, slab);
return 1;
@@ -1463,6 +1468,7 @@ static noinline bool alloc_debug_processing(struct kmem_cache *s,
slab_fix(s, "Marking all objects used");
slab->inuse = slab->objects;
slab->freelist = NULL;
+ slab->frozen = 1; /* mark consistency-failed slab as frozen */
}
return false;
}
@@ -2162,7 +2168,8 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
slab->inuse++;
if (!alloc_debug_processing(s, slab, object, orig_size)) {
- remove_partial(n, slab);
+ if (folio_test_slab(slab_folio(slab)))
+ remove_partial(n, slab);
return NULL;
}
diff --git a/mm/vmstat.c b/mm/vmstat.c
index e9616c4ca12d..57891697846b 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1723,6 +1723,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
zone_page_state(zone, i));
#ifdef CONFIG_NUMA
+ fold_vm_zone_numa_events(zone);
for (i = 0; i < NR_VM_NUMA_EVENT_ITEMS; i++)
seq_printf(m, "\n %-12s %lu", numa_stat_name(i),
zone_numa_event_state(zone, i));
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index 1fffe2bed5b0..6387ee924a2d 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -287,7 +287,7 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
if (!priv->rings[i].intf)
break;
if (priv->rings[i].irq > 0)
- unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
+ unbind_from_irqhandler(priv->rings[i].irq, ring);
if (priv->rings[i].data.in) {
for (j = 0;
j < (1 << priv->rings[i].intf->ring_order);
@@ -466,6 +466,7 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
goto error;
}
+ xenbus_switch_state(dev, XenbusStateInitialised);
return 0;
error_xenbus:
@@ -513,8 +514,10 @@ static void xen_9pfs_front_changed(struct xenbus_device *dev,
break;
case XenbusStateInitWait:
- if (!xen_9pfs_front_init(dev))
- xenbus_switch_state(dev, XenbusStateInitialised);
+ if (dev->state != XenbusStateInitialising)
+ break;
+
+ xen_9pfs_front_init(dev);
break;
case XenbusStateConnected:
diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
index 367e32fe30eb..4b54dbbf0729 100644
--- a/net/bluetooth/hci_sysfs.c
+++ b/net/bluetooth/hci_sysfs.c
@@ -21,16 +21,6 @@ static const struct device_type bt_link = {
.release = bt_link_release,
};
-/*
- * The rfcomm tty device will possibly retain even when conn
- * is down, and sysfs doesn't support move zombie device,
- * so we should move the device before conn device is destroyed.
- */
-static int __match_tty(struct device *dev, void *data)
-{
- return !strncmp(dev_name(dev), "rfcomm", 6);
-}
-
void hci_conn_init_sysfs(struct hci_conn *conn)
{
struct hci_dev *hdev = conn->hdev;
@@ -73,10 +63,13 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
return;
}
+ /* If there are devices using the connection as parent reset it to NULL
+ * before unregistering the device.
+ */
while (1) {
struct device *dev;
- dev = device_find_child(&conn->dev, NULL, __match_tty);
+ dev = device_find_any_child(&conn->dev);
if (!dev)
break;
device_move(dev, NULL, DPM_ORDER_DEV_LAST);
diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
index 1f3a39c20a91..1175248e4bec 100644
--- a/net/bluetooth/mgmt.c
+++ b/net/bluetooth/mgmt.c
@@ -1318,7 +1318,8 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
struct mgmt_mode *cp;
/* Make sure cmd still outstanding. */
- if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+ if (err == -ECANCELED ||
+ cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
return;
cp = cmd->param;
@@ -1351,7 +1352,13 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
static int set_powered_sync(struct hci_dev *hdev, void *data)
{
struct mgmt_pending_cmd *cmd = data;
- struct mgmt_mode *cp = cmd->param;
+ struct mgmt_mode *cp;
+
+ /* Make sure cmd still outstanding. */
+ if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+ return -ECANCELED;
+
+ cp = cmd->param;
BT_DBG("%s", hdev->name);
@@ -1503,7 +1510,8 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
bt_dev_dbg(hdev, "err %d", err);
/* Make sure cmd still outstanding. */
- if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
+ if (err == -ECANCELED ||
+ cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
return;
hci_dev_lock(hdev);
@@ -1677,7 +1685,8 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
bt_dev_dbg(hdev, "err %d", err);
/* Make sure cmd still outstanding. */
- if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
+ if (err == -ECANCELED ||
+ cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
return;
hci_dev_lock(hdev);
@@ -1910,7 +1919,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
bool changed;
/* Make sure cmd still outstanding. */
- if (cmd != pending_find(MGMT_OP_SET_SSP, hdev))
+ if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
return;
if (err) {
@@ -3775,7 +3784,8 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
bt_dev_dbg(hdev, "err %d", err);
- if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
+ if (err == -ECANCELED ||
+ cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
return;
if (status) {
@@ -3950,7 +3960,8 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
struct sk_buff *skb = cmd->skb;
u8 status = mgmt_status(err);
- if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+ if (err == -ECANCELED ||
+ cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
return;
if (!status) {
@@ -5841,13 +5852,16 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
{
struct mgmt_pending_cmd *cmd = data;
+ bt_dev_dbg(hdev, "err %d", err);
+
+ if (err == -ECANCELED)
+ return;
+
if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
return;
- bt_dev_dbg(hdev, "err %d", err);
-
mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
cmd->param, 1);
mgmt_pending_remove(cmd);
@@ -6080,7 +6094,8 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
{
struct mgmt_pending_cmd *cmd = data;
- if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
+ if (err == -ECANCELED ||
+ cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
return;
bt_dev_dbg(hdev, "err %d", err);
@@ -8025,7 +8040,8 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
u8 status = mgmt_status(err);
u16 eir_len;
- if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+ if (err == -ECANCELED ||
+ cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
return;
if (!status) {
diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
index cbff37b32734..4fae82fedcca 100644
--- a/net/bluetooth/rfcomm/sock.c
+++ b/net/bluetooth/rfcomm/sock.c
@@ -729,7 +729,8 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
struct sock *l2cap_sk;
struct l2cap_conn *conn;
struct rfcomm_conninfo cinfo;
- int len, err = 0;
+ int err = 0;
+ size_t len;
u32 opt;
BT_DBG("sk %p", sk);
@@ -783,7 +784,7 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
cinfo.hci_handle = conn->hcon->handle;
memcpy(cinfo.dev_class, conn->hcon->dev_class, 3);
- len = min_t(unsigned int, len, sizeof(cinfo));
+ len = min(len, sizeof(cinfo));
if (copy_to_user(optval, (char *) &cinfo, len))
err = -EFAULT;
@@ -802,7 +803,8 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
{
struct sock *sk = sock->sk;
struct bt_security sec;
- int len, err = 0;
+ int err = 0;
+ size_t len;
BT_DBG("sk %p", sk);
@@ -827,7 +829,7 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
sec.level = rfcomm_pi(sk)->sec_level;
sec.key_size = 0;
- len = min_t(unsigned int, len, sizeof(sec));
+ len = min(len, sizeof(sec));
if (copy_to_user(optval, (char *) &sec, len))
err = -EFAULT;
diff --git a/net/core/filter.c b/net/core/filter.c
index f9d05eff80b1..b64e7139eae1 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2602,18 +2602,16 @@ BPF_CALL_2(bpf_msg_cork_bytes, struct sk_msg *, msg, u32, bytes)
static void sk_msg_reset_curr(struct sk_msg *msg)
{
- u32 i = msg->sg.start;
- u32 len = 0;
-
- do {
- len += sk_msg_elem(msg, i)->length;
- sk_msg_iter_var_next(i);
- if (len >= msg->sg.size)
- break;
- } while (i != msg->sg.end);
+ if (!msg->sg.size) {
+ msg->sg.curr = msg->sg.start;
+ msg->sg.copybreak = 0;
+ } else {
+ u32 i = msg->sg.end;
- msg->sg.curr = i;
- msg->sg.copybreak = 0;
+ sk_msg_iter_var_prev(i);
+ msg->sg.curr = i;
+ msg->sg.copybreak = msg->sg.data[i].length;
+ }
}
static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
@@ -2776,7 +2774,7 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
sk_msg_iter_var_next(i);
} while (i != msg->sg.end);
- if (start >= offset + l)
+ if (start > offset + l)
return -EINVAL;
space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
@@ -2801,6 +2799,8 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
raw = page_address(page);
+ if (i == msg->sg.end)
+ sk_msg_iter_var_prev(i);
psge = sk_msg_elem(msg, i);
front = start - offset;
back = psge->length - front;
@@ -2817,7 +2817,13 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
}
put_page(sg_page(psge));
- } else if (start - offset) {
+ new = i;
+ goto place_new;
+ }
+
+ if (start - offset) {
+ if (i == msg->sg.end)
+ sk_msg_iter_var_prev(i);
psge = sk_msg_elem(msg, i);
rsge = sk_msg_elem_cpy(msg, i);
@@ -2828,39 +2834,44 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
sk_msg_iter_var_next(i);
sg_unmark_end(psge);
sg_unmark_end(&rsge);
- sk_msg_iter_next(msg, end);
}
/* Slot(s) to place newly allocated data */
+ sk_msg_iter_next(msg, end);
new = i;
+ sk_msg_iter_var_next(i);
+
+ if (i == msg->sg.end) {
+ if (!rsge.length)
+ goto place_new;
+ sk_msg_iter_next(msg, end);
+ goto place_new;
+ }
/* Shift one or two slots as needed */
- if (!copy) {
- sge = sk_msg_elem_cpy(msg, i);
+ sge = sk_msg_elem_cpy(msg, new);
+ sg_unmark_end(&sge);
+ nsge = sk_msg_elem_cpy(msg, i);
+ if (rsge.length) {
sk_msg_iter_var_next(i);
- sg_unmark_end(&sge);
+ nnsge = sk_msg_elem_cpy(msg, i);
sk_msg_iter_next(msg, end);
+ }
- nsge = sk_msg_elem_cpy(msg, i);
+ while (i != msg->sg.end) {
+ msg->sg.data[i] = sge;
+ sge = nsge;
+ sk_msg_iter_var_next(i);
if (rsge.length) {
- sk_msg_iter_var_next(i);
+ nsge = nnsge;
nnsge = sk_msg_elem_cpy(msg, i);
- }
-
- while (i != msg->sg.end) {
- msg->sg.data[i] = sge;
- sge = nsge;
- sk_msg_iter_var_next(i);
- if (rsge.length) {
- nsge = nnsge;
- nnsge = sk_msg_elem_cpy(msg, i);
- } else {
- nsge = sk_msg_elem_cpy(msg, i);
- }
+ } else {
+ nsge = sk_msg_elem_cpy(msg, i);
}
}
+place_new:
/* Place newly allocated data buffer */
sk_mem_charge(msg->sk, len);
msg->sg.size += len;
@@ -2889,8 +2900,10 @@ static const struct bpf_func_proto bpf_msg_push_data_proto = {
static void sk_msg_shift_left(struct sk_msg *msg, int i)
{
+ struct scatterlist *sge = sk_msg_elem(msg, i);
int prev;
+ put_page(sg_page(sge));
do {
prev = i;
sk_msg_iter_var_next(i);
@@ -2927,6 +2940,9 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
if (unlikely(flags))
return -EINVAL;
+ if (unlikely(len == 0))
+ return 0;
+
/* First find the starting scatterlist element */
i = msg->sg.start;
do {
@@ -2939,7 +2955,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
} while (i != msg->sg.end);
/* Bounds checks: start and pop must be inside message */
- if (start >= offset + l || last >= msg->sg.size)
+ if (start >= offset + l || last > msg->sg.size)
return -EINVAL;
space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
@@ -2968,12 +2984,12 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
*/
if (start != offset) {
struct scatterlist *nsge, *sge = sk_msg_elem(msg, i);
- int a = start;
+ int a = start - offset;
int b = sge->length - pop - a;
sk_msg_iter_var_next(i);
- if (pop < sge->length - a) {
+ if (b > 0) {
if (space) {
sge->length = a;
sk_msg_shift_right(msg, i);
@@ -2992,7 +3008,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
if (unlikely(!page))
return -ENOMEM;
- sge->length = a;
orig = sg_page(sge);
from = sg_virt(sge);
to = page_address(page);
@@ -3002,7 +3017,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
put_page(orig);
}
pop = 0;
- } else if (pop >= sge->length - a) {
+ } else {
pop -= (sge->length - a);
sge->length = a;
}
@@ -3036,7 +3051,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
pop -= sge->length;
sk_msg_shift_left(msg, i);
}
- sk_msg_iter_var_next(i);
}
sk_mem_uncharge(msg->sk, len - pop);
diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
index fae9c4694186..412816076b8b 100644
--- a/net/core/gen_estimator.c
+++ b/net/core/gen_estimator.c
@@ -206,7 +206,7 @@ void gen_kill_estimator(struct net_rate_estimator __rcu **rate_est)
{
struct net_rate_estimator *est;
- est = xchg((__force struct net_rate_estimator **)rate_est, NULL);
+ est = unrcu_pointer(xchg(rate_est, NULL));
if (est) {
timer_shutdown_sync(&est->timer);
kfree_rcu(est, rcu);
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index bbf40b999713..846fd672f0e5 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1117,9 +1117,9 @@ static void sk_psock_strp_data_ready(struct sock *sk)
if (tls_sw_has_ctx_rx(sk)) {
psock->saved_data_ready(sk);
} else {
- write_lock_bh(&sk->sk_callback_lock);
+ read_lock_bh(&sk->sk_callback_lock);
strp_data_ready(&psock->strp);
- write_unlock_bh(&sk->sk_callback_lock);
+ read_unlock_bh(&sk->sk_callback_lock);
}
}
rcu_read_unlock();
diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c
index c53b731f2d67..70007fc578a1 100644
--- a/net/core/sock_diag.c
+++ b/net/core/sock_diag.c
@@ -16,9 +16,10 @@
#include <linux/inet_diag.h>
#include <linux/sock_diag.h>
-static const struct sock_diag_handler *sock_diag_handlers[AF_MAX];
-static int (*inet_rcv_compat)(struct sk_buff *skb, struct nlmsghdr *nlh);
-static DEFINE_MUTEX(sock_diag_table_mutex);
+static const struct sock_diag_handler __rcu *sock_diag_handlers[AF_MAX];
+
+static const struct sock_diag_inet_compat __rcu *inet_rcv_compat;
+
static struct workqueue_struct *broadcast_wq;
DEFINE_COOKIE(sock_cookie);
@@ -122,6 +123,24 @@ static size_t sock_diag_nlmsg_size(void)
+ nla_total_size_64bit(sizeof(struct tcp_info))); /* INET_DIAG_INFO */
}
+static const struct sock_diag_handler *sock_diag_lock_handler(int family)
+{
+ const struct sock_diag_handler *handler;
+
+ rcu_read_lock();
+ handler = rcu_dereference(sock_diag_handlers[family]);
+ if (handler && !try_module_get(handler->owner))
+ handler = NULL;
+ rcu_read_unlock();
+
+ return handler;
+}
+
+static void sock_diag_unlock_handler(const struct sock_diag_handler *handler)
+{
+ module_put(handler->owner);
+}
+
static void sock_diag_broadcast_destroy_work(struct work_struct *work)
{
struct broadcast_sk *bsk =
@@ -138,12 +157,12 @@ static void sock_diag_broadcast_destroy_work(struct work_struct *work)
if (!skb)
goto out;
- mutex_lock(&sock_diag_table_mutex);
- hndl = sock_diag_handlers[sk->sk_family];
- if (hndl && hndl->get_info)
- err = hndl->get_info(skb, sk);
- mutex_unlock(&sock_diag_table_mutex);
-
+ hndl = sock_diag_lock_handler(sk->sk_family);
+ if (hndl) {
+ if (hndl->get_info)
+ err = hndl->get_info(skb, sk);
+ sock_diag_unlock_handler(hndl);
+ }
if (!err)
nlmsg_multicast(sock_net(sk)->diag_nlsk, skb, 0, group,
GFP_KERNEL);
@@ -166,51 +185,43 @@ void sock_diag_broadcast_destroy(struct sock *sk)
queue_work(broadcast_wq, &bsk->work);
}
-void sock_diag_register_inet_compat(int (*fn)(struct sk_buff *skb, struct nlmsghdr *nlh))
+void sock_diag_register_inet_compat(const struct sock_diag_inet_compat *ptr)
{
- mutex_lock(&sock_diag_table_mutex);
- inet_rcv_compat = fn;
- mutex_unlock(&sock_diag_table_mutex);
+ xchg(&inet_rcv_compat, RCU_INITIALIZER(ptr));
}
EXPORT_SYMBOL_GPL(sock_diag_register_inet_compat);
-void sock_diag_unregister_inet_compat(int (*fn)(struct sk_buff *skb, struct nlmsghdr *nlh))
+void sock_diag_unregister_inet_compat(const struct sock_diag_inet_compat *ptr)
{
- mutex_lock(&sock_diag_table_mutex);
- inet_rcv_compat = NULL;
- mutex_unlock(&sock_diag_table_mutex);
+ const struct sock_diag_inet_compat *old;
+
+ old = unrcu_pointer(xchg(&inet_rcv_compat, NULL));
+ WARN_ON_ONCE(old != ptr);
}
EXPORT_SYMBOL_GPL(sock_diag_unregister_inet_compat);
int sock_diag_register(const struct sock_diag_handler *hndl)
{
- int err = 0;
+ int family = hndl->family;
- if (hndl->family >= AF_MAX)
+ if (family >= AF_MAX)
return -EINVAL;
- mutex_lock(&sock_diag_table_mutex);
- if (sock_diag_handlers[hndl->family])
- err = -EBUSY;
- else
- WRITE_ONCE(sock_diag_handlers[hndl->family], hndl);
- mutex_unlock(&sock_diag_table_mutex);
-
- return err;
+ return !cmpxchg((const struct sock_diag_handler **)
+ &sock_diag_handlers[family],
+ NULL, hndl) ? 0 : -EBUSY;
}
EXPORT_SYMBOL_GPL(sock_diag_register);
-void sock_diag_unregister(const struct sock_diag_handler *hnld)
+void sock_diag_unregister(const struct sock_diag_handler *hndl)
{
- int family = hnld->family;
+ int family = hndl->family;
if (family >= AF_MAX)
return;
- mutex_lock(&sock_diag_table_mutex);
- BUG_ON(sock_diag_handlers[family] != hnld);
- WRITE_ONCE(sock_diag_handlers[family], NULL);
- mutex_unlock(&sock_diag_table_mutex);
+ xchg((const struct sock_diag_handler **)&sock_diag_handlers[family],
+ NULL);
}
EXPORT_SYMBOL_GPL(sock_diag_unregister);
@@ -227,20 +238,20 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
return -EINVAL;
req->sdiag_family = array_index_nospec(req->sdiag_family, AF_MAX);
- if (READ_ONCE(sock_diag_handlers[req->sdiag_family]) == NULL)
+ if (!rcu_access_pointer(sock_diag_handlers[req->sdiag_family]))
sock_load_diag_module(req->sdiag_family, 0);
- mutex_lock(&sock_diag_table_mutex);
- hndl = sock_diag_handlers[req->sdiag_family];
+ hndl = sock_diag_lock_handler(req->sdiag_family);
if (hndl == NULL)
- err = -ENOENT;
- else if (nlh->nlmsg_type == SOCK_DIAG_BY_FAMILY)
+ return -ENOENT;
+
+ if (nlh->nlmsg_type == SOCK_DIAG_BY_FAMILY)
err = hndl->dump(skb, nlh);
else if (nlh->nlmsg_type == SOCK_DESTROY && hndl->destroy)
err = hndl->destroy(skb, nlh);
else
err = -EOPNOTSUPP;
- mutex_unlock(&sock_diag_table_mutex);
+ sock_diag_unlock_handler(hndl);
return err;
}
@@ -248,20 +259,27 @@ static int __sock_diag_cmd(struct sk_buff *skb, struct nlmsghdr *nlh)
static int sock_diag_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
struct netlink_ext_ack *extack)
{
+ const struct sock_diag_inet_compat *ptr;
int ret;
switch (nlh->nlmsg_type) {
case TCPDIAG_GETSOCK:
case DCCPDIAG_GETSOCK:
- if (inet_rcv_compat == NULL)
+
+ if (!rcu_access_pointer(inet_rcv_compat))
sock_load_diag_module(AF_INET, 0);
- mutex_lock(&sock_diag_table_mutex);
- if (inet_rcv_compat != NULL)
- ret = inet_rcv_compat(skb, nlh);
- else
- ret = -EOPNOTSUPP;
- mutex_unlock(&sock_diag_table_mutex);
+ rcu_read_lock();
+ ptr = rcu_dereference(inet_rcv_compat);
+ if (ptr && !try_module_get(ptr->owner))
+ ptr = NULL;
+ rcu_read_unlock();
+
+ ret = -EOPNOTSUPP;
+ if (ptr) {
+ ret = ptr->fn(skb, nlh);
+ module_put(ptr->owner);
+ }
return ret;
case SOCK_DIAG_BY_FAMILY:
@@ -286,12 +304,12 @@ static int sock_diag_bind(struct net *net, int group)
switch (group) {
case SKNLGRP_INET_TCP_DESTROY:
case SKNLGRP_INET_UDP_DESTROY:
- if (!READ_ONCE(sock_diag_handlers[AF_INET]))
+ if (!rcu_access_pointer(sock_diag_handlers[AF_INET]))
sock_load_diag_module(AF_INET, 0);
break;
case SKNLGRP_INET6_TCP_DESTROY:
case SKNLGRP_INET6_UDP_DESTROY:
- if (!READ_ONCE(sock_diag_handlers[AF_INET6]))
+ if (!rcu_access_pointer(sock_diag_handlers[AF_INET6]))
sock_load_diag_module(AF_INET6, 0);
break;
}
diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
index c5f7bd01379c..906c38b9d66f 100644
--- a/net/hsr/hsr_device.c
+++ b/net/hsr/hsr_device.c
@@ -253,6 +253,8 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
skb->dev = master->dev;
skb->priority = TC_PRIO_CONTROL;
+ skb_reset_network_header(skb);
+ skb_reset_transport_header(skb);
if (dev_hard_header(skb, skb->dev, ETH_P_PRP,
hsr->sup_multicast_addr,
skb->dev->dev_addr, skb->len) <= 0)
@@ -260,8 +262,6 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
skb_reset_mac_header(skb);
skb_reset_mac_len(skb);
- skb_reset_network_header(skb);
- skb_reset_transport_header(skb);
return skb;
out:
diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
index 685474ef11c4..8daa6418e25a 100644
--- a/net/ipv4/cipso_ipv4.c
+++ b/net/ipv4/cipso_ipv4.c
@@ -1955,7 +1955,7 @@ int cipso_v4_req_setattr(struct request_sock *req,
buf = NULL;
req_inet = inet_rsk(req);
- opt = xchg((__force struct ip_options_rcu **)&req_inet->ireq_opt, opt);
+ opt = unrcu_pointer(xchg(&req_inet->ireq_opt, RCU_INITIALIZER(opt)));
if (opt)
kfree_rcu(opt, rcu);
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index ca8cc0988b61..bd032ac2376e 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -1124,7 +1124,7 @@ static void reqsk_timer_handler(struct timer_list *t)
drop:
__inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
- reqsk_put(req);
+ reqsk_put(oreq);
}
static bool reqsk_queue_hash_req(struct request_sock *req,
diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
index 87ecefea7239..5d09ab3ed735 100644
--- a/net/ipv4/inet_diag.c
+++ b/net/ipv4/inet_diag.c
@@ -1397,6 +1397,7 @@ int inet_diag_handler_get_info(struct sk_buff *skb, struct sock *sk)
}
static const struct sock_diag_handler inet_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_INET,
.dump = inet_diag_handler_cmd,
.get_info = inet_diag_handler_get_info,
@@ -1404,6 +1405,7 @@ static const struct sock_diag_handler inet_diag_handler = {
};
static const struct sock_diag_handler inet6_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_INET6,
.dump = inet_diag_handler_cmd,
.get_info = inet_diag_handler_get_info,
@@ -1443,6 +1445,11 @@ void inet_diag_unregister(const struct inet_diag_handler *h)
}
EXPORT_SYMBOL_GPL(inet_diag_unregister);
+static const struct sock_diag_inet_compat inet_diag_compat = {
+ .owner = THIS_MODULE,
+ .fn = inet_diag_rcv_msg_compat,
+};
+
static int __init inet_diag_init(void)
{
const int inet_diag_table_size = (IPPROTO_MAX *
@@ -1461,7 +1468,7 @@ static int __init inet_diag_init(void)
if (err)
goto out_free_inet;
- sock_diag_register_inet_compat(inet_diag_rcv_msg_compat);
+ sock_diag_register_inet_compat(&inet_diag_compat);
out:
return err;
@@ -1476,7 +1483,7 @@ static void __exit inet_diag_exit(void)
{
sock_diag_unregister(&inet6_diag_handler);
sock_diag_unregister(&inet_diag_handler);
- sock_diag_unregister_inet_compat(inet_diag_rcv_msg_compat);
+ sock_diag_unregister_inet_compat(&inet_diag_compat);
kfree(inet_diag_table);
}
diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
index 66eade3fb629..dc0ad979a894 100644
--- a/net/ipv4/ipmr.c
+++ b/net/ipv4/ipmr.c
@@ -136,7 +136,7 @@ static struct mr_table *ipmr_mr_table_iter(struct net *net,
return ret;
}
-static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+static struct mr_table *__ipmr_get_table(struct net *net, u32 id)
{
struct mr_table *mrt;
@@ -147,6 +147,16 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
return NULL;
}
+static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+{
+ struct mr_table *mrt;
+
+ rcu_read_lock();
+ mrt = __ipmr_get_table(net, id);
+ rcu_read_unlock();
+ return mrt;
+}
+
static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
struct mr_table **mrt)
{
@@ -188,7 +198,7 @@ static int ipmr_rule_action(struct fib_rule *rule, struct flowi *flp,
arg->table = fib_rule_get_table(rule, arg);
- mrt = ipmr_get_table(rule->fr_net, arg->table);
+ mrt = __ipmr_get_table(rule->fr_net, arg->table);
if (!mrt)
return -EAGAIN;
res->mrt = mrt;
@@ -314,6 +324,8 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
return net->ipv4.mrt;
}
+#define __ipmr_get_table ipmr_get_table
+
static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
struct mr_table **mrt)
{
@@ -402,7 +414,7 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id)
if (id != RT_TABLE_DEFAULT && id >= 1000000000)
return ERR_PTR(-EINVAL);
- mrt = ipmr_get_table(net, id);
+ mrt = __ipmr_get_table(net, id);
if (mrt)
return mrt;
@@ -1373,7 +1385,7 @@ int ip_mroute_setsockopt(struct sock *sk, int optname, sockptr_t optval,
goto out_unlock;
}
- mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+ mrt = __ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
if (!mrt) {
ret = -ENOENT;
goto out_unlock;
@@ -2261,11 +2273,13 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
struct mr_table *mrt;
int err;
- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
- if (!mrt)
+ rcu_read_lock();
+ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
+ if (!mrt) {
+ rcu_read_unlock();
return -ENOENT;
+ }
- rcu_read_lock();
cache = ipmr_cache_find(mrt, saddr, daddr);
if (!cache && skb->dev) {
int vif = ipmr_find_vif(mrt, skb->dev);
@@ -2550,7 +2564,7 @@ static int ipmr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
grp = tb[RTA_DST] ? nla_get_in_addr(tb[RTA_DST]) : 0;
tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
- mrt = ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
+ mrt = __ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
if (!mrt) {
err = -ENOENT;
goto errout_free;
@@ -2602,7 +2616,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
if (filter.table_id) {
struct mr_table *mrt;
- mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
+ mrt = __ipmr_get_table(sock_net(skb->sk), filter.table_id);
if (!mrt) {
if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
return skb->len;
@@ -2710,7 +2724,7 @@ static int rtm_to_ipmr_mfcc(struct net *net, struct nlmsghdr *nlh,
break;
}
}
- mrt = ipmr_get_table(net, tblid);
+ mrt = __ipmr_get_table(net, tblid);
if (!mrt) {
ret = -ENOENT;
goto out;
@@ -2918,13 +2932,15 @@ static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos)
struct net *net = seq_file_net(seq);
struct mr_table *mrt;
- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
- if (!mrt)
+ rcu_read_lock();
+ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
+ if (!mrt) {
+ rcu_read_unlock();
return ERR_PTR(-ENOENT);
+ }
iter->mrt = mrt;
- rcu_read_lock();
return mr_vif_seq_start(seq, pos);
}
diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
index 271dc03fc6db..f0af12a2f70b 100644
--- a/net/ipv4/ipmr_base.c
+++ b/net/ipv4/ipmr_base.c
@@ -310,7 +310,8 @@ int mr_table_dump(struct mr_table *mrt, struct sk_buff *skb,
if (filter->filter_set)
flags |= NLM_F_DUMP_FILTERED;
- list_for_each_entry_rcu(mfc, &mrt->mfc_cache_list, list) {
+ list_for_each_entry_rcu(mfc, &mrt->mfc_cache_list, list,
+ lockdep_rtnl_is_held()) {
if (e < s_e)
goto next_entry;
if (filter->dev &&
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 75371928d94f..5e6615f69f17 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3065,7 +3065,7 @@ int tcp_disconnect(struct sock *sk, int flags)
icsk->icsk_ack.rcv_mss = TCP_MIN_MSS;
memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));
__sk_dst_reset(sk);
- dst_release(xchg((__force struct dst_entry **)&sk->sk_rx_dst, NULL));
+ dst_release(unrcu_pointer(xchg(&sk->sk_rx_dst, NULL)));
tcp_saved_syn_free(tp);
tp->compressed_ack = 0;
tp->segs_in = 0;
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index fe6178715ba0..915286c3615a 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -221,11 +221,11 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
int flags,
int *addr_len)
{
- struct tcp_sock *tcp = tcp_sk(sk);
int peek = flags & MSG_PEEK;
- u32 seq = tcp->copied_seq;
struct sk_psock *psock;
+ struct tcp_sock *tcp;
int copied = 0;
+ u32 seq;
if (unlikely(flags & MSG_ERRQUEUE))
return inet_recv_error(sk, msg, len, addr_len);
@@ -238,7 +238,8 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
return tcp_recvmsg(sk, msg, len, flags, addr_len);
lock_sock(sk);
-
+ tcp = tcp_sk(sk);
+ seq = tcp->copied_seq;
/* We may have received data on the sk_receive_queue pre-accept and
* then we can not use read_skb in this context because we haven't
* assigned a sk_socket yet so have no link to the ops. The work-around
diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
index 8ed54e7334a9..0f523cbfe329 100644
--- a/net/ipv4/tcp_fastopen.c
+++ b/net/ipv4/tcp_fastopen.c
@@ -49,7 +49,7 @@ void tcp_fastopen_ctx_destroy(struct net *net)
{
struct tcp_fastopen_context *ctxt;
- ctxt = xchg((__force struct tcp_fastopen_context **)&net->ipv4.tcp_fastopen_ctx, NULL);
+ ctxt = unrcu_pointer(xchg(&net->ipv4.tcp_fastopen_ctx, NULL));
if (ctxt)
call_rcu(&ctxt->rcu, tcp_fastopen_ctx_free);
@@ -80,9 +80,10 @@ int tcp_fastopen_reset_cipher(struct net *net, struct sock *sk,
if (sk) {
q = &inet_csk(sk)->icsk_accept_queue.fastopenq;
- octx = xchg((__force struct tcp_fastopen_context **)&q->ctx, ctx);
+ octx = unrcu_pointer(xchg(&q->ctx, RCU_INITIALIZER(ctx)));
} else {
- octx = xchg((__force struct tcp_fastopen_context **)&net->ipv4.tcp_fastopen_ctx, ctx);
+ octx = unrcu_pointer(xchg(&net->ipv4.tcp_fastopen_ctx,
+ RCU_INITIALIZER(ctx)));
}
if (octx)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 73fb814460b6..2e4e53560394 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -2232,7 +2232,7 @@ bool udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
struct dst_entry *old;
if (dst_hold_safe(dst)) {
- old = xchg((__force struct dst_entry **)&sk->sk_rx_dst, dst);
+ old = unrcu_pointer(xchg(&sk->sk_rx_dst, RCU_INITIALIZER(dst)));
dst_release(old);
return old != dst;
}
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index a9358c796a81..8360939acf85 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -2535,6 +2535,24 @@ static struct inet6_dev *addrconf_add_dev(struct net_device *dev)
return idev;
}
+static void delete_tempaddrs(struct inet6_dev *idev,
+ struct inet6_ifaddr *ifp)
+{
+ struct inet6_ifaddr *ift, *tmp;
+
+ write_lock_bh(&idev->lock);
+ list_for_each_entry_safe(ift, tmp, &idev->tempaddr_list, tmp_list) {
+ if (ift->ifpub != ifp)
+ continue;
+
+ in6_ifa_hold(ift);
+ write_unlock_bh(&idev->lock);
+ ipv6_del_addr(ift);
+ write_lock_bh(&idev->lock);
+ }
+ write_unlock_bh(&idev->lock);
+}
+
static void manage_tempaddrs(struct inet6_dev *idev,
struct inet6_ifaddr *ifp,
__u32 valid_lft, __u32 prefered_lft,
@@ -3076,11 +3094,12 @@ static int inet6_addr_del(struct net *net, int ifindex, u32 ifa_flags,
in6_ifa_hold(ifp);
read_unlock_bh(&idev->lock);
- if (!(ifp->flags & IFA_F_TEMPORARY) &&
- (ifa_flags & IFA_F_MANAGETEMPADDR))
- manage_tempaddrs(idev, ifp, 0, 0, false,
- jiffies);
ipv6_del_addr(ifp);
+
+ if (!(ifp->flags & IFA_F_TEMPORARY) &&
+ (ifp->flags & IFA_F_MANAGETEMPADDR))
+ delete_tempaddrs(idev, ifp);
+
addrconf_verify_rtnl(net);
if (ipv6_addr_is_multicast(pfx)) {
ipv6_mc_config(net->ipv6.mc_autojoin_sk,
@@ -4891,14 +4910,12 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
}
if (was_managetempaddr || ifp->flags & IFA_F_MANAGETEMPADDR) {
- if (was_managetempaddr &&
- !(ifp->flags & IFA_F_MANAGETEMPADDR)) {
- cfg->valid_lft = 0;
- cfg->preferred_lft = 0;
- }
- manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
- cfg->preferred_lft, !was_managetempaddr,
- jiffies);
+ if (was_managetempaddr && !(ifp->flags & IFA_F_MANAGETEMPADDR))
+ delete_tempaddrs(ifp->idev, ifp);
+ else
+ manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
+ cfg->preferred_lft, !was_managetempaddr,
+ jiffies);
}
addrconf_verify_rtnl(net);
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index b9c50cceba56..99843eb4d49b 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -507,7 +507,7 @@ void inet6_cleanup_sock(struct sock *sk)
/* Free tx options */
- opt = xchg((__force struct ipv6_txoptions **)&np->opt, NULL);
+ opt = unrcu_pointer(xchg(&np->opt, NULL));
if (opt) {
atomic_sub(opt->tot_len, &sk->sk_omem_alloc);
txopt_put(opt);
diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
index 4356806b52bd..afa9073567dc 100644
--- a/net/ipv6/ip6_fib.c
+++ b/net/ipv6/ip6_fib.c
@@ -982,7 +982,7 @@ static void __fib6_drop_pcpu_from(struct fib6_nh *fib6_nh,
if (pcpu_rt && rcu_access_pointer(pcpu_rt->from) == match) {
struct fib6_info *from;
- from = xchg((__force struct fib6_info **)&pcpu_rt->from, NULL);
+ from = unrcu_pointer(xchg(&pcpu_rt->from, NULL));
fib6_info_release(from);
}
}
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
index 30ca064b76ef..e24fa0843c7d 100644
--- a/net/ipv6/ip6mr.c
+++ b/net/ipv6/ip6mr.c
@@ -125,7 +125,7 @@ static struct mr_table *ip6mr_mr_table_iter(struct net *net,
return ret;
}
-static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+static struct mr_table *__ip6mr_get_table(struct net *net, u32 id)
{
struct mr_table *mrt;
@@ -136,6 +136,16 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
return NULL;
}
+static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+{
+ struct mr_table *mrt;
+
+ rcu_read_lock();
+ mrt = __ip6mr_get_table(net, id);
+ rcu_read_unlock();
+ return mrt;
+}
+
static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
struct mr_table **mrt)
{
@@ -177,7 +187,7 @@ static int ip6mr_rule_action(struct fib_rule *rule, struct flowi *flp,
arg->table = fib_rule_get_table(rule, arg);
- mrt = ip6mr_get_table(rule->fr_net, arg->table);
+ mrt = __ip6mr_get_table(rule->fr_net, arg->table);
if (!mrt)
return -EAGAIN;
res->mrt = mrt;
@@ -304,6 +314,8 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
return net->ipv6.mrt6;
}
+#define __ip6mr_get_table ip6mr_get_table
+
static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
struct mr_table **mrt)
{
@@ -382,7 +394,7 @@ static struct mr_table *ip6mr_new_table(struct net *net, u32 id)
{
struct mr_table *mrt;
- mrt = ip6mr_get_table(net, id);
+ mrt = __ip6mr_get_table(net, id);
if (mrt)
return mrt;
@@ -411,13 +423,15 @@ static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos)
struct net *net = seq_file_net(seq);
struct mr_table *mrt;
- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
- if (!mrt)
+ rcu_read_lock();
+ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
+ if (!mrt) {
+ rcu_read_unlock();
return ERR_PTR(-ENOENT);
+ }
iter->mrt = mrt;
- rcu_read_lock();
return mr_vif_seq_start(seq, pos);
}
@@ -2278,11 +2292,13 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
struct mfc6_cache *cache;
struct rt6_info *rt = (struct rt6_info *)skb_dst(skb);
- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
- if (!mrt)
+ rcu_read_lock();
+ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
+ if (!mrt) {
+ rcu_read_unlock();
return -ENOENT;
+ }
- rcu_read_lock();
cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr);
if (!cache && skb->dev) {
int vif = ip6mr_find_vif(mrt, skb->dev);
@@ -2563,7 +2579,7 @@ static int ip6mr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
grp = nla_get_in6_addr(tb[RTA_DST]);
tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
- mrt = ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
+ mrt = __ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
if (!mrt) {
NL_SET_ERR_MSG_MOD(extack, "MR table does not exist");
return -ENOENT;
@@ -2608,7 +2624,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
if (filter.table_id) {
struct mr_table *mrt;
- mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id);
+ mrt = __ip6mr_get_table(sock_net(skb->sk), filter.table_id);
if (!mrt) {
if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR)
return skb->len;
diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
index 0e2a0847b387..f106b19b74dd 100644
--- a/net/ipv6/ipv6_sockglue.c
+++ b/net/ipv6/ipv6_sockglue.c
@@ -111,8 +111,7 @@ struct ipv6_txoptions *ipv6_update_options(struct sock *sk,
icsk->icsk_sync_mss(sk, icsk->icsk_pmtu_cookie);
}
}
- opt = xchg((__force struct ipv6_txoptions **)&inet6_sk(sk)->opt,
- opt);
+ opt = unrcu_pointer(xchg(&inet6_sk(sk)->opt, RCU_INITIALIZER(opt)));
sk_dst_reset(sk);
return opt;
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index a9104c4c1c02..e320dfa7fe7f 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -368,7 +368,7 @@ static void ip6_dst_destroy(struct dst_entry *dst)
in6_dev_put(idev);
}
- from = xchg((__force struct fib6_info **)&rt->from, NULL);
+ from = unrcu_pointer(xchg(&rt->from, NULL));
fib6_info_release(from);
}
@@ -376,6 +376,7 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
{
struct rt6_info *rt = (struct rt6_info *)dst;
struct inet6_dev *idev = rt->rt6i_idev;
+ struct fib6_info *from;
if (idev && idev->dev != blackhole_netdev) {
struct inet6_dev *blackhole_idev = in6_dev_get(blackhole_netdev);
@@ -385,6 +386,8 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
in6_dev_put(idev);
}
}
+ from = unrcu_pointer(xchg(&rt->from, NULL));
+ fib6_info_release(from);
}
static bool __rt6_check_expired(const struct rt6_info *rt)
@@ -1430,7 +1433,7 @@ static struct rt6_info *rt6_make_pcpu_route(struct net *net,
if (res->f6i->fib6_destroying) {
struct fib6_info *from;
- from = xchg((__force struct fib6_info **)&pcpu_rt->from, NULL);
+ from = unrcu_pointer(xchg(&pcpu_rt->from, NULL));
fib6_info_release(from);
}
@@ -1447,7 +1450,6 @@ static DEFINE_SPINLOCK(rt6_exception_lock);
static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
struct rt6_exception *rt6_ex)
{
- struct fib6_info *from;
struct net *net;
if (!bucket || !rt6_ex)
@@ -1459,8 +1461,6 @@ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
/* purge completely the exception to allow releasing the held resources:
* some [sk] cache may keep the dst around for unlimited time
*/
- from = xchg((__force struct fib6_info **)&rt6_ex->rt6i->from, NULL);
- fib6_info_release(from);
dst_dev_put(&rt6_ex->rt6i->dst);
hlist_del_rcu(&rt6_ex->hlist);
diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
index 815b1df0b2d1..0f660b1d3bd5 100644
--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -1238,7 +1238,9 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
return -EOPNOTSUPP;
/* receive/dequeue next skb:
- * the function understands MSG_PEEK and, thus, does not dequeue skb */
+ * the function understands MSG_PEEK and, thus, does not dequeue skb
+ * only refcount is increased.
+ */
skb = skb_recv_datagram(sk, flags, &err);
if (!skb) {
if (sk->sk_shutdown & RCV_SHUTDOWN)
@@ -1254,9 +1256,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
cskb = skb;
if (skb_copy_datagram_msg(cskb, offset, msg, copied)) {
- if (!(flags & MSG_PEEK))
- skb_queue_head(&sk->sk_receive_queue, skb);
- return -EFAULT;
+ err = -EFAULT;
+ goto err_out;
}
/* SOCK_SEQPACKET: set MSG_TRUNC if recv buf size is too small */
@@ -1273,11 +1274,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
sizeof(IUCV_SKB_CB(skb)->class),
(void *)&IUCV_SKB_CB(skb)->class);
- if (err) {
- if (!(flags & MSG_PEEK))
- skb_queue_head(&sk->sk_receive_queue, skb);
- return err;
- }
+ if (err)
+ goto err_out;
/* Mark read part of skb as used */
if (!(flags & MSG_PEEK)) {
@@ -1333,8 +1331,18 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
/* SOCK_SEQPACKET: return real length if MSG_TRUNC is set */
if (sk->sk_type == SOCK_SEQPACKET && (flags & MSG_TRUNC))
copied = rlen;
+ if (flags & MSG_PEEK)
+ skb_unref(skb);
return copied;
+
+err_out:
+ if (!(flags & MSG_PEEK))
+ skb_queue_head(&sk->sk_receive_queue, skb);
+ else
+ skb_unref(skb);
+
+ return err;
}
static inline __poll_t iucv_accept_poll(struct sock *parent)
diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
index fde1140d899e..cc25fec44f85 100644
--- a/net/llc/af_llc.c
+++ b/net/llc/af_llc.c
@@ -1099,7 +1099,7 @@ static int llc_ui_setsockopt(struct socket *sock, int level, int optname,
lock_sock(sk);
if (unlikely(level != SOL_LLC || optlen != sizeof(int)))
goto out;
- rc = copy_from_sockptr(&opt, optval, sizeof(opt));
+ rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
if (rc)
goto out;
rc = -EINVAL;
diff --git a/net/mac80211/main.c b/net/mac80211/main.c
index 71d60f57a886..d1046f495e63 100644
--- a/net/mac80211/main.c
+++ b/net/mac80211/main.c
@@ -145,6 +145,8 @@ static u32 ieee80211_hw_conf_chan(struct ieee80211_local *local)
}
power = ieee80211_chandef_max_power(&chandef);
+ if (local->user_power_level != IEEE80211_UNSET_POWER_LEVEL)
+ power = min(local->user_power_level, power);
rcu_read_lock();
list_for_each_entry_rcu(sdata, &local->interfaces, list) {
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index b8357d7c6b3a..01f6ce970918 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2691,8 +2691,8 @@ void mptcp_reset_tout_timer(struct mptcp_sock *msk, unsigned long fail_tout)
if (!fail_tout && !inet_csk(sk)->icsk_mtup.probe_timestamp)
return;
- close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies +
- TCP_TIMEWAIT_LEN;
+ close_timeout = (unsigned long)inet_csk(sk)->icsk_mtup.probe_timestamp -
+ tcp_jiffies32 + jiffies + TCP_TIMEWAIT_LEN;
/* the close timeout takes precedence on the fail one, and here at least one of
* them is active
diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c
index e4fa00abde6a..5988b9bb9029 100644
--- a/net/netfilter/ipset/ip_set_bitmap_ip.c
+++ b/net/netfilter/ipset/ip_set_bitmap_ip.c
@@ -163,11 +163,8 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
if (ret)
return ret;
- if (ip > ip_to) {
+ if (ip > ip_to)
swap(ip, ip_to);
- if (ip < map->first_ip)
- return -IPSET_ERR_BITMAP_RANGE;
- }
} else if (tb[IPSET_ATTR_CIDR]) {
u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
@@ -178,7 +175,7 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
ip_to = ip;
}
- if (ip_to > map->last_ip)
+ if (ip < map->first_ip || ip_to > map->last_ip)
return -IPSET_ERR_BITMAP_RANGE;
for (; !before(ip_to, ip); ip += map->hosts) {
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 8a583e8f3c13..eee7997048fb 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -3231,25 +3231,37 @@ int nft_expr_inner_parse(const struct nft_ctx *ctx, const struct nlattr *nla,
if (!tb[NFTA_EXPR_DATA] || !tb[NFTA_EXPR_NAME])
return -EINVAL;
+ rcu_read_lock();
+
type = __nft_expr_type_get(ctx->family, tb[NFTA_EXPR_NAME]);
- if (!type)
- return -ENOENT;
+ if (!type) {
+ err = -ENOENT;
+ goto out_unlock;
+ }
- if (!type->inner_ops)
- return -EOPNOTSUPP;
+ if (!type->inner_ops) {
+ err = -EOPNOTSUPP;
+ goto out_unlock;
+ }
err = nla_parse_nested_deprecated(info->tb, type->maxattr,
tb[NFTA_EXPR_DATA],
type->policy, NULL);
if (err < 0)
- goto err_nla_parse;
+ goto out_unlock;
info->attr = nla;
info->ops = type->inner_ops;
+ /* No module reference will be taken on type->owner.
+ * Presence of type->inner_ops implies that the expression
+ * is builtin, so it cannot go away.
+ */
+ rcu_read_unlock();
return 0;
-err_nla_parse:
+out_unlock:
+ rcu_read_unlock();
return err;
}
@@ -3349,13 +3361,15 @@ void nft_expr_destroy(const struct nft_ctx *ctx, struct nft_expr *expr)
* Rules
*/
-static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
+static struct nft_rule *__nft_rule_lookup(const struct net *net,
+ const struct nft_chain *chain,
u64 handle)
{
struct nft_rule *rule;
// FIXME: this sucks
- list_for_each_entry_rcu(rule, &chain->rules, list) {
+ list_for_each_entry_rcu(rule, &chain->rules, list,
+ lockdep_commit_lock_is_held(net)) {
if (handle == rule->handle)
return rule;
}
@@ -3363,13 +3377,14 @@ static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
return ERR_PTR(-ENOENT);
}
-static struct nft_rule *nft_rule_lookup(const struct nft_chain *chain,
+static struct nft_rule *nft_rule_lookup(const struct net *net,
+ const struct nft_chain *chain,
const struct nlattr *nla)
{
if (nla == NULL)
return ERR_PTR(-EINVAL);
- return __nft_rule_lookup(chain, be64_to_cpu(nla_get_be64(nla)));
+ return __nft_rule_lookup(net, chain, be64_to_cpu(nla_get_be64(nla)));
}
static const struct nla_policy nft_rule_policy[NFTA_RULE_MAX + 1] = {
@@ -3661,9 +3676,10 @@ static int nf_tables_dump_rules_done(struct netlink_callback *cb)
return 0;
}
-/* called with rcu_read_lock held */
-static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
- const struct nlattr * const nla[])
+/* Caller must hold rcu read lock or transaction mutex */
+static struct sk_buff *
+nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
{
struct netlink_ext_ack *extack = info->extack;
u8 genmask = nft_genmask_cur(info->net);
@@ -3673,60 +3689,82 @@ static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
struct net *net = info->net;
struct nft_table *table;
struct sk_buff *skb2;
- bool reset = false;
int err;
- if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
- struct netlink_dump_control c = {
- .start= nf_tables_dump_rules_start,
- .dump = nf_tables_dump_rules,
- .done = nf_tables_dump_rules_done,
- .module = THIS_MODULE,
- .data = (void *)nla,
- };
-
- return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
- }
-
table = nft_table_lookup(net, nla[NFTA_RULE_TABLE], family, genmask, 0);
if (IS_ERR(table)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_TABLE]);
- return PTR_ERR(table);
+ return ERR_CAST(table);
}
chain = nft_chain_lookup(net, table, nla[NFTA_RULE_CHAIN], genmask);
if (IS_ERR(chain)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]);
- return PTR_ERR(chain);
+ return ERR_CAST(chain);
}
- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
+ rule = nft_rule_lookup(net, chain, nla[NFTA_RULE_HANDLE]);
if (IS_ERR(rule)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
- return PTR_ERR(rule);
+ return ERR_CAST(rule);
}
skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC);
if (!skb2)
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);
+
+ err = nf_tables_fill_rule_info(skb2, net, portid,
+ info->nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
+ family, table, chain, rule, 0, reset);
+ if (err < 0) {
+ kfree_skb(skb2);
+ return ERR_PTR(err);
+ }
+
+ return skb2;
+}
+
+static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
+ const struct nlattr * const nla[])
+{
+ struct nftables_pernet *nft_net = nft_pernet(info->net);
+ u32 portid = NETLINK_CB(skb).portid;
+ struct net *net = info->net;
+ struct sk_buff *skb2;
+ bool reset = false;
+ char *buf;
+
+ if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
+ struct netlink_dump_control c = {
+ .start= nf_tables_dump_rules_start,
+ .dump = nf_tables_dump_rules,
+ .done = nf_tables_dump_rules_done,
+ .module = THIS_MODULE,
+ .data = (void *)nla,
+ };
+
+ return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
+ }
if (NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_GETRULE_RESET)
reset = true;
- err = nf_tables_fill_rule_info(skb2, net, NETLINK_CB(skb).portid,
- info->nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
- family, table, chain, rule, 0, reset);
- if (err < 0)
- goto err_fill_rule_info;
+ skb2 = nf_tables_getrule_single(portid, info, nla, reset);
+ if (IS_ERR(skb2))
+ return PTR_ERR(skb2);
- if (reset)
- audit_log_rule_reset(table, nft_pernet(net)->base_seq, 1);
+ if (!reset)
+ return nfnetlink_unicast(skb2, net, portid);
- return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+ buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ nla_len(nla[NFTA_RULE_TABLE]),
+ (char *)nla_data(nla[NFTA_RULE_TABLE]),
+ nft_net->base_seq);
+ audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ AUDIT_NFT_OP_RULE_RESET, GFP_ATOMIC);
+ kfree(buf);
-err_fill_rule_info:
- kfree_skb(skb2);
- return err;
+ return nfnetlink_unicast(skb2, net, portid);
}
void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
@@ -3938,7 +3976,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
if (nla[NFTA_RULE_HANDLE]) {
handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_HANDLE]));
- rule = __nft_rule_lookup(chain, handle);
+ rule = __nft_rule_lookup(net, chain, handle);
if (IS_ERR(rule)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
return PTR_ERR(rule);
@@ -3960,7 +3998,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
if (nla[NFTA_RULE_POSITION]) {
pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
- old_rule = __nft_rule_lookup(chain, pos_handle);
+ old_rule = __nft_rule_lookup(net, chain, pos_handle);
if (IS_ERR(old_rule)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]);
return PTR_ERR(old_rule);
@@ -4177,7 +4215,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
if (chain) {
if (nla[NFTA_RULE_HANDLE]) {
- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
+ rule = nft_rule_lookup(info->net, chain, nla[NFTA_RULE_HANDLE]);
if (IS_ERR(rule)) {
if (PTR_ERR(rule) == -ENOENT &&
NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_DESTROYRULE)
@@ -7577,9 +7615,7 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
struct nft_trans *trans;
int err = -ENOMEM;
- if (!try_module_get(type->owner))
- return -ENOENT;
-
+ /* caller must have obtained type->owner reference. */
trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ,
sizeof(struct nft_trans_obj));
if (!trans)
@@ -7647,12 +7683,16 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
if (info->nlh->nlmsg_flags & NLM_F_REPLACE)
return -EOPNOTSUPP;
- type = __nft_obj_type_get(objtype, family);
- if (WARN_ON_ONCE(!type))
- return -ENOENT;
+ if (!obj->ops->update)
+ return 0;
+
+ type = nft_obj_type_get(net, objtype, family);
+ if (WARN_ON_ONCE(IS_ERR(type)))
+ return PTR_ERR(type);
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+ /* type->owner reference is put when transaction object is released. */
return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj);
}
@@ -7888,7 +7928,7 @@ static int nf_tables_dump_obj_done(struct netlink_callback *cb)
return 0;
}
-/* called with rcu_read_lock held */
+/* Caller must hold rcu read lock or transaction mutex */
static struct sk_buff *
nf_tables_getobj_single(u32 portid, const struct nfnl_info *info,
const struct nlattr * const nla[], bool reset)
@@ -9394,9 +9434,10 @@ static void nft_obj_commit_update(struct nft_trans *trans)
obj = nft_trans_obj(trans);
newobj = nft_trans_obj_newobj(trans);
- if (obj->ops->update)
- obj->ops->update(obj, newobj);
+ if (WARN_ON_ONCE(!obj->ops->update))
+ return;
+ obj->ops->update(obj, newobj);
nft_obj_destroy(&trans->ctx, newobj);
}
diff --git a/net/netlink/diag.c b/net/netlink/diag.c
index 9c4f231be275..7b15aa5f7bc2 100644
--- a/net/netlink/diag.c
+++ b/net/netlink/diag.c
@@ -241,6 +241,7 @@ static int netlink_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
}
static const struct sock_diag_handler netlink_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_NETLINK,
.dump = netlink_diag_handler_dump,
};
diff --git a/net/packet/diag.c b/net/packet/diag.c
index f6b200cb3c06..d4142636aa2b 100644
--- a/net/packet/diag.c
+++ b/net/packet/diag.c
@@ -245,6 +245,7 @@ static int packet_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
}
static const struct sock_diag_handler packet_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_PACKET,
.dump = packet_diag_handler_dump,
};
diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
index 4e32d659524e..b12edbe0ef45 100644
--- a/net/rfkill/rfkill-gpio.c
+++ b/net/rfkill/rfkill-gpio.c
@@ -31,8 +31,12 @@ static int rfkill_gpio_set_power(void *data, bool blocked)
{
struct rfkill_gpio_data *rfkill = data;
- if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled)
- clk_enable(rfkill->clk);
+ if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled) {
+ int ret = clk_enable(rfkill->clk);
+
+ if (ret)
+ return ret;
+ }
gpiod_set_value_cansleep(rfkill->shutdown_gpio, !blocked);
gpiod_set_value_cansleep(rfkill->reset_gpio, !blocked);
diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
index fa8aec78f63d..205e0d4d048e 100644
--- a/net/rxrpc/af_rxrpc.c
+++ b/net/rxrpc/af_rxrpc.c
@@ -661,9 +661,10 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname,
ret = -EISCONN;
if (rx->sk.sk_state != RXRPC_UNBOUND)
goto error;
- ret = copy_from_sockptr(&min_sec_level, optval,
- sizeof(unsigned int));
- if (ret < 0)
+ ret = copy_safe_from_sockptr(&min_sec_level,
+ sizeof(min_sec_level),
+ optval, optlen);
+ if (ret)
goto error;
ret = -EINVAL;
if (min_sec_level > RXRPC_SECURITY_MAX)
diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index 4572aa6e0273..e509ac28c492 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -62,7 +62,7 @@ static void tcf_set_action_cookie(struct tc_cookie __rcu **old_cookie,
{
struct tc_cookie *old;
- old = xchg((__force struct tc_cookie **)old_cookie, new_cookie);
+ old = unrcu_pointer(xchg(old_cookie, RCU_INITIALIZER(new_cookie)));
if (old)
call_rcu(&old->rcu, tcf_free_cookie_rcu);
}
diff --git a/net/smc/smc_diag.c b/net/smc/smc_diag.c
index 37833b96b508..d58c699b5328 100644
--- a/net/smc/smc_diag.c
+++ b/net/smc/smc_diag.c
@@ -250,6 +250,7 @@ static int smc_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
}
static const struct sock_diag_handler smc_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_SMC,
.dump = smc_diag_handler_dump,
};
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 95ff74706104..3298da2e37e4 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -1431,7 +1431,9 @@ static int c_show(struct seq_file *m, void *p)
seq_printf(m, "# expiry=%lld refcnt=%d flags=%lx\n",
convert_to_wallclock(cp->expiry_time),
kref_read(&cp->ref), cp->flags);
- cache_get(cp);
+ if (!cache_get_rcu(cp))
+ return 0;
+
if (cache_check(cd, cp, NULL))
/* cache_check does a cache_put on failure */
seq_puts(m, "# ");
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 933e12e3a55c..83996eea1006 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1562,6 +1562,10 @@ static struct svc_xprt *svc_create_socket(struct svc_serv *serv,
newlen = error;
if (protocol == IPPROTO_TCP) {
+ __netns_tracker_free(net, &sock->sk->ns_tracker, false);
+ sock->sk->sk_net_refcnt = 1;
+ get_net_track(net, &sock->sk->ns_tracker, GFP_KERNEL);
+ sock_inuse_add(net, 1);
if ((error = kernel_listen(sock, 64)) < 0)
goto bummer;
}
diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
index f0d5eeed4c88..e1d4e426b21f 100644
--- a/net/sunrpc/xprtrdma/svc_rdma.c
+++ b/net/sunrpc/xprtrdma/svc_rdma.c
@@ -234,25 +234,34 @@ static int svc_rdma_proc_init(void)
rc = percpu_counter_init(&svcrdma_stat_read, 0, GFP_KERNEL);
if (rc)
- goto out_err;
+ goto err;
rc = percpu_counter_init(&svcrdma_stat_recv, 0, GFP_KERNEL);
if (rc)
- goto out_err;
+ goto err_read;
rc = percpu_counter_init(&svcrdma_stat_sq_starve, 0, GFP_KERNEL);
if (rc)
- goto out_err;
+ goto err_recv;
rc = percpu_counter_init(&svcrdma_stat_write, 0, GFP_KERNEL);
if (rc)
- goto out_err;
+ goto err_sq;
svcrdma_table_header = register_sysctl("sunrpc/svc_rdma",
svcrdma_parm_table);
+ if (!svcrdma_table_header)
+ goto err_write;
+
return 0;
-out_err:
+err_write:
+ rc = -ENOMEM;
+ percpu_counter_destroy(&svcrdma_stat_write);
+err_sq:
percpu_counter_destroy(&svcrdma_stat_sq_starve);
+err_recv:
percpu_counter_destroy(&svcrdma_stat_recv);
+err_read:
percpu_counter_destroy(&svcrdma_stat_read);
+err:
return rc;
}
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 3b05f90a3e50..9cec7bcb8a97 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -478,7 +478,13 @@ static bool xdr_check_write_chunk(struct svc_rdma_recv_ctxt *rctxt)
if (xdr_stream_decode_u32(&rctxt->rc_stream, &segcount))
return false;
- /* A bogus segcount causes this buffer overflow check to fail. */
+ /* Before trusting the segcount value enough to use it in
+ * a computation, perform a simple range check. This is an
+ * arbitrary but sensible limit (ie, not architectural).
+ */
+ if (unlikely(segcount > RPCSVC_MAXPAGES))
+ return false;
+
p = xdr_inline_decode(&rctxt->rc_stream,
segcount * rpcrdma_segment_maxsz * sizeof(*p));
return p != NULL;
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 50490b1e8a0d..1c4bc8234ea8 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1186,6 +1186,7 @@ static void xs_sock_reset_state_flags(struct rpc_xprt *xprt)
clear_bit(XPRT_SOCK_WAKE_WRITE, &transport->sock_state);
clear_bit(XPRT_SOCK_WAKE_DISCONNECT, &transport->sock_state);
clear_bit(XPRT_SOCK_NOSPACE, &transport->sock_state);
+ clear_bit(XPRT_SOCK_UPD_TIMEOUT, &transport->sock_state);
}
static void xs_run_error_worker(struct sock_xprt *transport, unsigned int nr)
@@ -1920,6 +1921,13 @@ static struct socket *xs_create_sock(struct rpc_xprt *xprt,
goto out;
}
+ if (protocol == IPPROTO_TCP) {
+ __netns_tracker_free(xprt->xprt_net, &sock->sk->ns_tracker, false);
+ sock->sk->sk_net_refcnt = 1;
+ get_net_track(xprt->xprt_net, &sock->sk->ns_tracker, GFP_KERNEL);
+ sock_inuse_add(xprt->xprt_net, 1);
+ }
+
filp = sock_alloc_file(sock, O_NONBLOCK, NULL);
if (IS_ERR(filp))
return ERR_CAST(filp);
@@ -2595,11 +2603,10 @@ static int xs_tls_handshake_sync(struct rpc_xprt *lower_xprt, struct xprtsec_par
rc = wait_for_completion_interruptible_timeout(&lower_transport->handshake_done,
XS_TLS_HANDSHAKE_TO);
if (rc <= 0) {
- if (!tls_handshake_cancel(sk)) {
- if (rc == 0)
- rc = -ETIMEDOUT;
- goto out_put_xprt;
- }
+ tls_handshake_cancel(sk);
+ if (rc == 0)
+ rc = -ETIMEDOUT;
+ goto out_put_xprt;
}
rc = lower_transport->xprt_err;
diff --git a/net/tipc/diag.c b/net/tipc/diag.c
index 73137f4aeb68..11da9d2ebbf6 100644
--- a/net/tipc/diag.c
+++ b/net/tipc/diag.c
@@ -95,6 +95,7 @@ static int tipc_sock_diag_handler_dump(struct sk_buff *skb,
}
static const struct sock_diag_handler tipc_sock_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_TIPC,
.dump = tipc_sock_diag_handler_dump,
};
diff --git a/net/unix/diag.c b/net/unix/diag.c
index 1de7500b41b6..a6bd861314df 100644
--- a/net/unix/diag.c
+++ b/net/unix/diag.c
@@ -322,6 +322,7 @@ static int unix_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
}
static const struct sock_diag_handler unix_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_UNIX,
.dump = unix_diag_handler_dump,
};
diff --git a/net/vmw_vsock/diag.c b/net/vmw_vsock/diag.c
index a2823b1c5e28..6efa9eb93336 100644
--- a/net/vmw_vsock/diag.c
+++ b/net/vmw_vsock/diag.c
@@ -157,6 +157,7 @@ static int vsock_diag_handler_dump(struct sk_buff *skb, struct nlmsghdr *h)
}
static const struct sock_diag_handler vsock_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_VSOCK,
.dump = vsock_diag_handler_dump,
};
diff --git a/net/xdp/xsk_diag.c b/net/xdp/xsk_diag.c
index 22b36c8143cf..e1012bfec720 100644
--- a/net/xdp/xsk_diag.c
+++ b/net/xdp/xsk_diag.c
@@ -194,6 +194,7 @@ static int xsk_diag_handler_dump(struct sk_buff *nlskb, struct nlmsghdr *hdr)
}
static const struct sock_diag_handler xsk_diag_handler = {
+ .owner = THIS_MODULE,
.family = AF_XDP,
.dump = xsk_diag_handler_dump,
};
diff --git a/rust/macros/lib.rs b/rust/macros/lib.rs
index 34ae73f5db06..7bdb3a5a18a0 100644
--- a/rust/macros/lib.rs
+++ b/rust/macros/lib.rs
@@ -298,7 +298,7 @@ pub fn pinned_drop(args: TokenStream, input: TokenStream) -> TokenStream {
/// macro_rules! pub_no_prefix {
/// ($prefix:ident, $($newname:ident),+) => {
/// kernel::macros::paste! {
-/// $(pub(crate) const fn [<$newname:lower:span>]: u32 = [<$prefix $newname:span>];)+
+/// $(pub(crate) const fn [<$newname:lower:span>]() -> u32 { [<$prefix $newname:span>] })+
/// }
/// };
/// }
diff --git a/samples/bpf/xdp_adjust_tail_kern.c b/samples/bpf/xdp_adjust_tail_kern.c
index ffdd548627f0..da67bcad1c63 100644
--- a/samples/bpf/xdp_adjust_tail_kern.c
+++ b/samples/bpf/xdp_adjust_tail_kern.c
@@ -57,6 +57,7 @@ static __always_inline void swap_mac(void *data, struct ethhdr *orig_eth)
static __always_inline __u16 csum_fold_helper(__u32 csum)
{
+ csum = (csum & 0xffff) + (csum >> 16);
return ~((csum & 0xffff) + (csum >> 16));
}
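
The extra fold matters because a single (csum & 0xffff) + (csum >> 16) can itself produce a 17-bit result; folding twice before the complement guarantees a proper 16-bit one's-complement checksum. A standalone sketch showing a value the one-fold version gets wrong:

#include <stdint.h>
#include <stdio.h>

static uint16_t csum_fold(uint32_t csum)
{
        csum = (csum & 0xffff) + (csum >> 16);  /* may still be > 0xffff */
        csum = (csum & 0xffff) + (csum >> 16);  /* now fits in 16 bits */
        return (uint16_t)~csum;
}

int main(void)
{
        /* 0xffff0001 folds to 0x10000 on the first pass; only the second
         * pass brings it back into range, giving 0xfffe. */
        printf("0x%04x\n", csum_fold(0xffff0001u));
        return 0;
}
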
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index 7d16f863edf1..6744b58c3508 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -28,6 +28,7 @@ my %verbose_messages = ();
my %verbose_emitted = ();
my $tree = 1;
my $chk_signoff = 1;
+my $chk_fixes_tag = 1;
my $chk_patch = 1;
my $tst_only;
my $emacs = 0;
@@ -88,6 +89,7 @@ Options:
-v, --verbose verbose mode
--no-tree run without a kernel tree
--no-signoff do not check for 'Signed-off-by' line
+ --no-fixes-tag do not check for 'Fixes:' tag
--patch treat FILE as patchfile (default)
--emacs emacs compile window format
--terse one line per report
@@ -295,6 +297,7 @@ GetOptions(
'v|verbose!' => \$verbose,
'tree!' => \$tree,
'signoff!' => \$chk_signoff,
+ 'fixes-tag!' => \$chk_fixes_tag,
'patch!' => \$chk_patch,
'emacs!' => \$emacs,
'terse!' => \$terse,
@@ -1256,6 +1259,7 @@ sub git_commit_info {
}
$chk_signoff = 0 if ($file);
+$chk_fixes_tag = 0 if ($file);
my @rawlines = ();
my @lines = ();
@@ -2635,6 +2639,9 @@ sub process {
our $clean = 1;
my $signoff = 0;
+ my $fixes_tag = 0;
+ my $is_revert = 0;
+ my $needs_fixes_tag = "";
my $author = '';
my $authorsignoff = 0;
my $author_sob = '';
@@ -3188,38 +3195,44 @@ sub process {
}
}
+# These indicate a bug fix
+ if (!$in_header_lines && !$is_patch &&
+ $line =~ /^This reverts commit/) {
+ $is_revert = 1;
+ }
+
+ if (!$in_header_lines && !$is_patch &&
+ $line =~ /((?:(?:BUG: K.|UB)SAN: |Call Trace:|stable\@|syzkaller))/) {
+ $needs_fixes_tag = $1;
+ }
# Check Fixes: styles is correct
if (!$in_header_lines &&
- $line =~ /^\s*fixes:?\s*(?:commit\s*)?[0-9a-f]{5,}\b/i) {
- my $orig_commit = "";
- my $id = "0123456789ab";
- my $title = "commit title";
- my $tag_case = 1;
- my $tag_space = 1;
- my $id_length = 1;
- my $id_case = 1;
+ $line =~ /^\s*(fixes:?)\s*(?:commit\s*)?([0-9a-f]{5,40})(?:\s*($balanced_parens))?/i) {
+ my $tag = $1;
+ my $orig_commit = $2;
+ my $title;
my $title_has_quotes = 0;
-
- if ($line =~ /(\s*fixes:?)\s+([0-9a-f]{5,})\s+($balanced_parens)/i) {
- my $tag = $1;
- $orig_commit = $2;
- $title = $3;
-
- $tag_case = 0 if $tag eq "Fixes:";
- $tag_space = 0 if ($line =~ /^fixes:? [0-9a-f]{5,} ($balanced_parens)/i);
-
- $id_length = 0 if ($orig_commit =~ /^[0-9a-f]{12}$/i);
- $id_case = 0 if ($orig_commit !~ /[A-F]/);
-
+ $fixes_tag = 1;
+ if (defined $3) {
# Always strip leading/trailing parens then double quotes if existing
- $title = substr($title, 1, -1);
+ $title = substr($3, 1, -1);
if ($title =~ /^".*"$/) {
$title = substr($title, 1, -1);
$title_has_quotes = 1;
}
+ } else {
+ $title = "commit title"
}
+
+ my $tag_case = not ($tag eq "Fixes:");
+ my $tag_space = not ($line =~ /^fixes:? [0-9a-f]{5,40} ($balanced_parens)/i);
+
+ my $id_length = not ($orig_commit =~ /^[0-9a-f]{12}$/i);
+ my $id_case = not ($orig_commit !~ /[A-F]/);
+
+ my $id = "0123456789ab";
my ($cid, $ctitle) = git_commit_info($orig_commit, $id,
$title);
@@ -7680,6 +7693,12 @@ sub process {
ERROR("NOT_UNIFIED_DIFF",
"Does not appear to be a unified-diff format patch\n");
}
+ if ($is_patch && $has_commit_log && $chk_fixes_tag) {
+ if ($needs_fixes_tag ne "" && !$is_revert && !$fixes_tag) {
+ WARN("MISSING_FIXES_TAG",
+ "The commit message has '$needs_fixes_tag', perhaps it also needs a 'Fixes:' tag?\n");
+ }
+ }
if ($is_patch && $has_commit_log && $chk_signoff) {
if ($signoff == 0) {
ERROR("MISSING_SIGN_OFF",
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 6583b36dbe69..efbb4836ec66 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -809,10 +809,7 @@ static int do_eisa_entry(const char *filename, void *symval,
char *alias)
{
DEF_FIELD_ADDR(symval, eisa_device_id, sig);
- if (sig[0])
- sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
- else
- strcat(alias, "*");
+ sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
return 1;
}
diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index 828d5cc36716..4110d559ed68 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -792,25 +792,15 @@ static void check_section(const char *modname, struct elf_info *elf,
#define ALL_INIT_DATA_SECTIONS \
- ".init.setup", ".init.rodata", ".meminit.rodata", \
- ".init.data", ".meminit.data"
-#define ALL_EXIT_DATA_SECTIONS \
- ".exit.data", ".memexit.data"
-
-#define ALL_INIT_TEXT_SECTIONS \
- ".init.text", ".meminit.text"
-#define ALL_EXIT_TEXT_SECTIONS \
- ".exit.text"
+ ".init.setup", ".init.rodata", ".init.data"
#define ALL_PCI_INIT_SECTIONS \
".pci_fixup_early", ".pci_fixup_header", ".pci_fixup_final", \
".pci_fixup_enable", ".pci_fixup_resume", \
".pci_fixup_resume_early", ".pci_fixup_suspend"
-#define ALL_XXXINIT_SECTIONS MEM_INIT_SECTIONS
-
-#define ALL_INIT_SECTIONS INIT_SECTIONS, ALL_XXXINIT_SECTIONS
-#define ALL_EXIT_SECTIONS EXIT_SECTIONS
+#define ALL_INIT_SECTIONS ".init.*"
+#define ALL_EXIT_SECTIONS ".exit.*"
#define DATA_SECTIONS ".data", ".data.rel"
#define TEXT_SECTIONS ".text", ".text.*", ".sched.text", \
@@ -820,12 +810,7 @@ static void check_section(const char *modname, struct elf_info *elf,
".fixup", ".entry.text", ".exception.text", \
".coldtext", ".softirqentry.text"
-#define INIT_SECTIONS ".init.*"
-#define MEM_INIT_SECTIONS ".meminit.*"
-
-#define EXIT_SECTIONS ".exit.*"
-
-#define ALL_TEXT_SECTIONS ALL_INIT_TEXT_SECTIONS, ALL_EXIT_TEXT_SECTIONS, \
+#define ALL_TEXT_SECTIONS ".init.text", ".exit.text", \
TEXT_SECTIONS, OTHER_TEXT_SECTIONS
enum mismatch {
@@ -869,7 +854,7 @@ static const struct sectioncheck sectioncheck[] = {
},
{
.fromsec = { DATA_SECTIONS, NULL },
- .bad_tosec = { ALL_XXXINIT_SECTIONS, INIT_SECTIONS, NULL },
+ .bad_tosec = { ALL_INIT_SECTIONS, NULL },
.mismatch = DATA_TO_ANY_INIT,
},
{
@@ -877,12 +862,6 @@ static const struct sectioncheck sectioncheck[] = {
.bad_tosec = { ALL_EXIT_SECTIONS, NULL },
.mismatch = TEXTDATA_TO_ANY_EXIT,
},
-/* Do not reference init code/data from meminit code/data */
-{
- .fromsec = { ALL_XXXINIT_SECTIONS, NULL },
- .bad_tosec = { INIT_SECTIONS, NULL },
- .mismatch = XXXINIT_TO_SOME_INIT,
-},
/* Do not use exit code/data from init code */
{
.fromsec = { ALL_INIT_SECTIONS, NULL },
@@ -897,7 +876,7 @@ static const struct sectioncheck sectioncheck[] = {
},
{
.fromsec = { ALL_PCI_INIT_SECTIONS, NULL },
- .bad_tosec = { INIT_SECTIONS, NULL },
+ .bad_tosec = { ALL_INIT_SECTIONS, NULL },
.mismatch = ANY_INIT_TO_ANY_EXIT,
},
{
@@ -1009,12 +988,6 @@ static int secref_whitelist(const char *fromsec, const char *fromsym,
"*_console")))
return 0;
- /* symbols in data sections that may refer to meminit sections */
- if (match(fromsec, PATTERNS(DATA_SECTIONS)) &&
- match(tosec, PATTERNS(ALL_XXXINIT_SECTIONS)) &&
- match(fromsym, PATTERNS("*driver")))
- return 0;
-
/*
* symbols in data sections must not refer to .exit.*, but there are
* quite a few offenders, so hide these unless for W=1 builds until
@@ -1022,7 +995,7 @@ static int secref_whitelist(const char *fromsec, const char *fromsym,
*/
if (!extra_warn &&
match(fromsec, PATTERNS(DATA_SECTIONS)) &&
- match(tosec, PATTERNS(EXIT_SECTIONS)) &&
+ match(tosec, PATTERNS(ALL_EXIT_SECTIONS)) &&
match(fromsym, PATTERNS("*driver")))
return 0;
@@ -1187,10 +1160,10 @@ static void check_export_symbol(struct module *mod, struct elf_info *elf,
ELF_ST_TYPE(sym->st_info) == STT_LOPROC)
s->is_func = true;
- if (match(secname, PATTERNS(INIT_SECTIONS)))
+ if (match(secname, PATTERNS(ALL_INIT_SECTIONS)))
warn("%s: %s: EXPORT_SYMBOL used for init symbol. Remove __init or EXPORT_SYMBOL.\n",
mod->name, name);
- else if (match(secname, PATTERNS(EXIT_SECTIONS)))
+ else if (match(secname, PATTERNS(ALL_EXIT_SECTIONS)))
warn("%s: %s: EXPORT_SYMBOL used for exit symbol. Remove __exit or EXPORT_SYMBOL.\n",
mod->name, name);
}
diff --git a/security/apparmor/capability.c b/security/apparmor/capability.c
index 2fb6a2ea0b99..824859720062 100644
--- a/security/apparmor/capability.c
+++ b/security/apparmor/capability.c
@@ -96,6 +96,8 @@ static int audit_caps(struct apparmor_audit_data *ad, struct aa_profile *profile
return error;
} else {
aa_put_profile(ent->profile);
+ if (profile != ent->profile)
+ cap_clear(ent->caps);
ent->profile = aa_get_profile(profile);
cap_raise(ent->caps, cap);
}
diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
index 2b8003eb4f46..dd551587fe4a 100644
--- a/security/apparmor/policy_unpack_test.c
+++ b/security/apparmor/policy_unpack_test.c
@@ -281,6 +281,8 @@ static void policy_unpack_test_unpack_strdup_with_null_name(struct kunit *test)
((uintptr_t)puf->e->start <= (uintptr_t)string)
&& ((uintptr_t)string <= (uintptr_t)puf->e->end));
KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
+
+ kfree(string);
}
static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
@@ -296,6 +298,8 @@ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
((uintptr_t)puf->e->start <= (uintptr_t)string)
&& ((uintptr_t)string <= (uintptr_t)puf->e->end));
KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
+
+ kfree(string);
}
static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
@@ -313,6 +317,8 @@ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
KUNIT_EXPECT_EQ(test, size, 0);
KUNIT_EXPECT_NULL(test, string);
KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start);
+
+ kfree(string);
}
static void policy_unpack_test_unpack_nameX_with_null_name(struct kunit *test)
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index cc21c483c4a5..e40de64ec85c 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -3794,9 +3794,11 @@ static vm_fault_t snd_pcm_mmap_data_fault(struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
if (substream->ops->page)
page = substream->ops->page(substream, offset);
- else if (!snd_pcm_get_dma_buf(substream))
+ else if (!snd_pcm_get_dma_buf(substream)) {
+ if (WARN_ON_ONCE(!runtime->dma_area))
+ return VM_FAULT_SIGBUS;
page = virt_to_page(runtime->dma_area + offset);
- else
+ } else
page = snd_sgbuf_get_page(snd_pcm_get_dma_buf(substream), offset);
if (!page)
return VM_FAULT_SIGBUS;
diff --git a/sound/core/ump.c b/sound/core/ump.c
index 8a7ecec74b5d..b1ce4756961a 100644
--- a/sound/core/ump.c
+++ b/sound/core/ump.c
@@ -724,7 +724,10 @@ static void fill_fb_info(struct snd_ump_endpoint *ump,
info->ui_hint = buf->fb_info.ui_hint;
info->first_group = buf->fb_info.first_group;
info->num_groups = buf->fb_info.num_groups;
- info->flags = buf->fb_info.midi_10;
+ if (buf->fb_info.midi_10 < 2)
+ info->flags = buf->fb_info.midi_10;
+ else
+ info->flags = SNDRV_UMP_BLOCK_IS_MIDI1 | SNDRV_UMP_BLOCK_IS_LOWSPEED;
info->active = buf->fb_info.active;
info->midi_ci_version = buf->fb_info.midi_ci_version;
info->sysex8_streams = buf->fb_info.sysex8_streams;
diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
index e7c2ef6c6b4c..16a3e478e50b 100644
--- a/sound/hda/intel-dsp-config.c
+++ b/sound/hda/intel-dsp-config.c
@@ -721,6 +721,10 @@ static const struct config_entry acpi_config_table[] = {
#if IS_ENABLED(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_ACPI) || \
IS_ENABLED(CONFIG_SND_SOC_SOF_BAYTRAIL)
/* BayTrail */
+ {
+ .flags = FLAG_SST_OR_SOF_BYT,
+ .acpi_hid = "LPE0F28",
+ },
{
.flags = FLAG_SST_OR_SOF_BYT,
.acpi_hid = "80860F28",
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index ffe298eb7b36..92299bab2515 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -471,6 +471,8 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
break;
case 0x10ec0234:
case 0x10ec0274:
+ alc_write_coef_idx(codec, 0x6e, 0x0c25);
+ fallthrough;
case 0x10ec0294:
case 0x10ec0700:
case 0x10ec0701:
@@ -3602,25 +3604,22 @@ static void alc256_init(struct hda_codec *codec)
hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
- if (hp_pin_sense)
+ if (hp_pin_sense) {
msleep(2);
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
-
- snd_hda_codec_write(codec, hp_pin, 0,
- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
-
- if (hp_pin_sense || spec->ultra_low_power)
- msleep(85);
-
- snd_hda_codec_write(codec, hp_pin, 0,
+ snd_hda_codec_write(codec, hp_pin, 0,
AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
- if (hp_pin_sense || spec->ultra_low_power)
- msleep(100);
+ msleep(75);
+
+ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+ msleep(75);
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
+ }
alc_update_coef_idx(codec, 0x46, 3 << 12, 0);
- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
/*
@@ -3644,29 +3643,28 @@ static void alc256_shutup(struct hda_codec *codec)
alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
- if (hp_pin_sense)
+ if (hp_pin_sense) {
msleep(2);
- snd_hda_codec_write(codec, hp_pin, 0,
+ snd_hda_codec_write(codec, hp_pin, 0,
AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
- if (hp_pin_sense || spec->ultra_low_power)
- msleep(85);
+ msleep(75);
/* 3k pull low control for Headset jack. */
/* NOTE: call this before clearing the pin, otherwise codec stalls */
/* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
* when booting with headset plugged. So skip setting it for the codec alc257
*/
- if (spec->en_3kpull_low)
- alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ if (spec->en_3kpull_low)
+ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
- if (!spec->no_shutup_pins)
- snd_hda_codec_write(codec, hp_pin, 0,
+ if (!spec->no_shutup_pins)
+ snd_hda_codec_write(codec, hp_pin, 0,
AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- if (hp_pin_sense || spec->ultra_low_power)
- msleep(100);
+ msleep(75);
+ }
alc_auto_setup_eapd(codec, false);
alc_shutup_pins(codec);
@@ -3761,33 +3759,28 @@ static void alc225_init(struct hda_codec *codec)
hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
- if (hp1_pin_sense || hp2_pin_sense)
+ if (hp1_pin_sense || hp2_pin_sense) {
msleep(2);
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
-
- if (hp1_pin_sense || spec->ultra_low_power)
- snd_hda_codec_write(codec, hp_pin, 0,
- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
- if (hp2_pin_sense)
- snd_hda_codec_write(codec, 0x16, 0,
- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
-
- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
- msleep(85);
-
- if (hp1_pin_sense || spec->ultra_low_power)
- snd_hda_codec_write(codec, hp_pin, 0,
- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
- if (hp2_pin_sense)
- snd_hda_codec_write(codec, 0x16, 0,
- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+ if (hp1_pin_sense)
+ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+ if (hp2_pin_sense)
+ snd_hda_codec_write(codec, 0x16, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+ msleep(75);
- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
- msleep(100);
+ if (hp1_pin_sense)
+ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+ if (hp2_pin_sense)
+ snd_hda_codec_write(codec, 0x16, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
+ msleep(75);
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
+ }
}
static void alc225_shutup(struct hda_codec *codec)
@@ -3799,36 +3792,35 @@ static void alc225_shutup(struct hda_codec *codec)
if (!hp_pin)
hp_pin = 0x21;
- alc_disable_headset_jack_key(codec);
- /* 3k pull low control for Headset jack. */
- alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
-
hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
- if (hp1_pin_sense || hp2_pin_sense)
+ if (hp1_pin_sense || hp2_pin_sense) {
+ alc_disable_headset_jack_key(codec);
+ /* 3k pull low control for Headset jack. */
+ alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
msleep(2);
- if (hp1_pin_sense || spec->ultra_low_power)
- snd_hda_codec_write(codec, hp_pin, 0,
- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
- if (hp2_pin_sense)
- snd_hda_codec_write(codec, 0x16, 0,
- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
-
- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
- msleep(85);
+ if (hp1_pin_sense)
+ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+ if (hp2_pin_sense)
+ snd_hda_codec_write(codec, 0x16, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
- if (hp1_pin_sense || spec->ultra_low_power)
- snd_hda_codec_write(codec, hp_pin, 0,
- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- if (hp2_pin_sense)
- snd_hda_codec_write(codec, 0x16, 0,
- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ msleep(75);
- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
- msleep(100);
+ if (hp1_pin_sense)
+ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ if (hp2_pin_sense)
+ snd_hda_codec_write(codec, 0x16, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ msleep(75);
+ alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+ alc_enable_headset_jack_key(codec);
+ }
alc_auto_setup_eapd(codec, false);
alc_shutup_pins(codec);
if (spec->ultra_low_power) {
@@ -3839,9 +3831,6 @@ static void alc225_shutup(struct hda_codec *codec)
alc_update_coef_idx(codec, 0x4a, 3<<4, 2<<4);
msleep(30);
}
-
- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
- alc_enable_headset_jack_key(codec);
}
static void alc_default_init(struct hda_codec *codec)
@@ -7265,6 +7254,8 @@ enum {
ALC290_FIXUP_SUBWOOFER_HSJACK,
ALC269_FIXUP_THINKPAD_ACPI,
ALC269_FIXUP_DMIC_THINKPAD_ACPI,
+ ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13,
+ ALC269VC_FIXUP_INFINIX_Y4_MAX,
ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO,
ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
ALC255_FIXUP_ASUS_MIC_NO_PRESENCE,
@@ -7644,6 +7635,25 @@ static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc269_fixup_pincfg_U7x7_headset_mic,
},
+ [ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x14, 0x90170151 }, /* use as internal speaker (LFE) */
+ { 0x1b, 0x90170152 }, /* use as internal speaker (back) */
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
+ [ALC269VC_FIXUP_INFINIX_Y4_MAX] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x1b, 0x90170150 }, /* use as internal speaker */
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
[ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO] = {
.type = HDA_FIXUP_PINS,
.v.pins = (const struct hda_pintbl[]) {
@@ -10412,7 +10422,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+ SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX),
+ SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX),
SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
index 08f823cd8869..a00933df9168 100644
--- a/sound/soc/amd/yc/acp6x-mach.c
+++ b/sound/soc/amd/yc/acp6x-mach.c
@@ -227,6 +227,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "21M3"),
}
},
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21M4"),
+ }
+ },
{
.driver_data = &acp6x_card,
.matches = {
@@ -234,6 +241,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "21M5"),
}
},
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21ME"),
+ }
+ },
{
.driver_data = &acp6x_card,
.matches = {
@@ -530,8 +544,14 @@ static int acp6x_probe(struct platform_device *pdev)
struct acp6x_pdm *machine = NULL;
struct snd_soc_card *card;
struct acpi_device *adev;
+ acpi_handle handle;
+ acpi_integer dmic_status;
int ret;
+ bool is_dmic_enable, wov_en;
+ /* IF WOV entry not found, enable dmic based on AcpDmicConnected entry*/
+ is_dmic_enable = false;
+ wov_en = true;
/* check the parent device's firmware node has _DSD or not */
adev = ACPI_COMPANION(pdev->dev.parent);
if (adev) {
@@ -539,9 +559,19 @@ static int acp6x_probe(struct platform_device *pdev)
if (!acpi_dev_get_property(adev, "AcpDmicConnected", ACPI_TYPE_INTEGER, &obj) &&
obj->integer.value == 1)
- platform_set_drvdata(pdev, &acp6x_card);
+ is_dmic_enable = true;
}
+ handle = ACPI_HANDLE(pdev->dev.parent);
+ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
+ if (!ACPI_FAILURE(ret))
+ wov_en = dmic_status;
+
+ if (is_dmic_enable && wov_en)
+ platform_set_drvdata(pdev, &acp6x_card);
+ else
+ return 0;
+
/* check for any DMI overrides */
dmi_id = dmi_first_match(yc_acp_quirk_table);
if (dmi_id)
diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
index 600c2db58756..dd86033b7077 100644
--- a/sound/soc/codecs/da7219.c
+++ b/sound/soc/codecs/da7219.c
@@ -1167,17 +1167,20 @@ static int da7219_set_dai_sysclk(struct snd_soc_dai *codec_dai,
struct da7219_priv *da7219 = snd_soc_component_get_drvdata(component);
int ret = 0;
- if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq))
+ mutex_lock(&da7219->pll_lock);
+
+ if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq)) {
+ mutex_unlock(&da7219->pll_lock);
return 0;
+ }
if ((freq < 2000000) || (freq > 54000000)) {
+ mutex_unlock(&da7219->pll_lock);
dev_err(codec_dai->dev, "Unsupported MCLK value %d\n",
freq);
return -EINVAL;
}
- mutex_lock(&da7219->pll_lock);
-
switch (clk_id) {
case DA7219_CLKSRC_MCLK_SQR:
snd_soc_component_update_bits(component, DA7219_PLL_CTRL,
diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
index e8cdc166bdaa..1955d77cffd9 100644
--- a/sound/soc/codecs/rt5640.c
+++ b/sound/soc/codecs/rt5640.c
@@ -2422,10 +2422,20 @@ static irqreturn_t rt5640_jd_gpio_irq(int irq, void *data)
return IRQ_HANDLED;
}
-static void rt5640_cancel_work(void *data)
+static void rt5640_disable_irq_and_cancel_work(void *data)
{
struct rt5640_priv *rt5640 = data;
+ if (rt5640->jd_gpio_irq_requested) {
+ free_irq(rt5640->jd_gpio_irq, rt5640);
+ rt5640->jd_gpio_irq_requested = false;
+ }
+
+ if (rt5640->irq_requested) {
+ free_irq(rt5640->irq, rt5640);
+ rt5640->irq_requested = false;
+ }
+
cancel_delayed_work_sync(&rt5640->jack_work);
cancel_delayed_work_sync(&rt5640->bp_work);
}
@@ -2466,13 +2476,7 @@ static void rt5640_disable_jack_detect(struct snd_soc_component *component)
if (!rt5640->jack)
return;
- if (rt5640->jd_gpio_irq_requested)
- free_irq(rt5640->jd_gpio_irq, rt5640);
-
- if (rt5640->irq_requested)
- free_irq(rt5640->irq, rt5640);
-
- rt5640_cancel_work(rt5640);
+ rt5640_disable_irq_and_cancel_work(rt5640);
if (rt5640->jack->status & SND_JACK_MICROPHONE) {
rt5640_disable_micbias1_ovcd_irq(component);
@@ -2480,8 +2484,6 @@ static void rt5640_disable_jack_detect(struct snd_soc_component *component)
snd_soc_jack_report(rt5640->jack, 0, SND_JACK_BTN_0);
}
- rt5640->jd_gpio_irq_requested = false;
- rt5640->irq_requested = false;
rt5640->jd_gpio = NULL;
rt5640->jack = NULL;
}
@@ -2801,7 +2803,8 @@ static int rt5640_suspend(struct snd_soc_component *component)
if (rt5640->jack) {
/* disable jack interrupts during system suspend */
disable_irq(rt5640->irq);
- rt5640_cancel_work(rt5640);
+ cancel_delayed_work_sync(&rt5640->jack_work);
+ cancel_delayed_work_sync(&rt5640->bp_work);
}
snd_soc_component_force_bias_level(component, SND_SOC_BIAS_OFF);
@@ -3035,7 +3038,7 @@ static int rt5640_i2c_probe(struct i2c_client *i2c)
INIT_DELAYED_WORK(&rt5640->jack_work, rt5640_jack_work);
/* Make sure work is stopped on probe-error / remove */
- ret = devm_add_action_or_reset(&i2c->dev, rt5640_cancel_work, rt5640);
+ ret = devm_add_action_or_reset(&i2c->dev, rt5640_disable_irq_and_cancel_work, rt5640);
if (ret)
return ret;
diff --git a/sound/soc/codecs/rt722-sdca.c b/sound/soc/codecs/rt722-sdca.c
index 9ff607984ea1..b9b330375add 100644
--- a/sound/soc/codecs/rt722-sdca.c
+++ b/sound/soc/codecs/rt722-sdca.c
@@ -607,12 +607,8 @@ static int rt722_sdca_dmic_set_gain_get(struct snd_kcontrol *kcontrol,
if (!adc_vol_flag) /* boost gain */
ctl = regvalue / boost_step;
- else { /* ADC gain */
- if (adc_vol_flag)
- ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
- else
- ctl = p->max - (((0 - regvalue) & 0xffff) / interval_offset);
- }
+ else /* ADC gain */
+ ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
ucontrol->value.integer.value[i] = ctl;
}
diff --git a/sound/soc/codecs/tas2781-fmwlib.c b/sound/soc/codecs/tas2781-fmwlib.c
index 629e2195a890..1cc64ed8de6d 100644
--- a/sound/soc/codecs/tas2781-fmwlib.c
+++ b/sound/soc/codecs/tas2781-fmwlib.c
@@ -2022,6 +2022,7 @@ static int tasdevice_dspfw_ready(const struct firmware *fmw,
break;
case 0x202:
case 0x400:
+ case 0x401:
tas_priv->fw_parse_variable_header =
fw_parse_variable_header_git;
tas_priv->fw_parse_program_data =
diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
index 8478a4ac59f9..f57f0ab8a1ad 100644
--- a/sound/soc/fsl/fsl_micfil.c
+++ b/sound/soc/fsl/fsl_micfil.c
@@ -1051,7 +1051,7 @@ static irqreturn_t micfil_isr(int irq, void *devid)
regmap_write_bits(micfil->regmap,
REG_MICFIL_STAT,
MICFIL_STAT_CHXF(i),
- 1);
+ MICFIL_STAT_CHXF(i));
}
for (i = 0; i < MICFIL_FIFO_NUM; i++) {
@@ -1086,7 +1086,7 @@ static irqreturn_t micfil_err_isr(int irq, void *devid)
if (stat_reg & MICFIL_STAT_LOWFREQF) {
dev_dbg(&pdev->dev, "isr: ipg_clk_app is too low\n");
regmap_write_bits(micfil->regmap, REG_MICFIL_STAT,
- MICFIL_STAT_LOWFREQF, 1);
+ MICFIL_STAT_LOWFREQF, MICFIL_STAT_LOWFREQF);
}
return IRQ_HANDLED;
diff --git a/sound/soc/generic/audio-graph-card2.c b/sound/soc/generic/audio-graph-card2.c
index b1c675c6b6db..686e0dea2bc7 100644
--- a/sound/soc/generic/audio-graph-card2.c
+++ b/sound/soc/generic/audio-graph-card2.c
@@ -261,16 +261,19 @@ static enum graph_type __graph_get_type(struct device_node *lnk)
if (of_node_name_eq(np, GRAPH_NODENAME_MULTI)) {
ret = GRAPH_MULTI;
+ fw_devlink_purge_absent_suppliers(&np->fwnode);
goto out_put;
}
if (of_node_name_eq(np, GRAPH_NODENAME_DPCM)) {
ret = GRAPH_DPCM;
+ fw_devlink_purge_absent_suppliers(&np->fwnode);
goto out_put;
}
if (of_node_name_eq(np, GRAPH_NODENAME_C2C)) {
ret = GRAPH_C2C;
+ fw_devlink_purge_absent_suppliers(&np->fwnode);
goto out_put;
}
diff --git a/sound/soc/intel/atom/sst/sst_acpi.c b/sound/soc/intel/atom/sst/sst_acpi.c
index 29d44c989e5f..cfa1632ae4f0 100644
--- a/sound/soc/intel/atom/sst/sst_acpi.c
+++ b/sound/soc/intel/atom/sst/sst_acpi.c
@@ -125,6 +125,28 @@ static const struct sst_res_info bytcr_res_info = {
.acpi_ipc_irq_index = 0
};
+/* For "LPE0F28" ACPI device found on some Android factory OS models */
+static const struct sst_res_info lpe8086_res_info = {
+ .shim_offset = 0x140000,
+ .shim_size = 0x000100,
+ .shim_phy_addr = SST_BYT_SHIM_PHY_ADDR,
+ .ssp0_offset = 0xa0000,
+ .ssp0_size = 0x1000,
+ .dma0_offset = 0x98000,
+ .dma0_size = 0x4000,
+ .dma1_offset = 0x9c000,
+ .dma1_size = 0x4000,
+ .iram_offset = 0x0c0000,
+ .iram_size = 0x14000,
+ .dram_offset = 0x100000,
+ .dram_size = 0x28000,
+ .mbox_offset = 0x144000,
+ .mbox_size = 0x1000,
+ .acpi_lpe_res_index = 1,
+ .acpi_ddr_index = 0,
+ .acpi_ipc_irq_index = 0
+};
+
static struct sst_platform_info byt_rvp_platform_data = {
.probe_data = &byt_fwparse_info,
.ipc_info = &byt_ipc_info,
@@ -268,10 +290,38 @@ static int sst_acpi_probe(struct platform_device *pdev)
mach->pdata = &chv_platform_data;
pdata = mach->pdata;
- ret = kstrtouint(id->id, 16, &dev_id);
- if (ret < 0) {
- dev_err(dev, "Unique device id conversion error: %d\n", ret);
- return ret;
+ if (!strcmp(id->id, "LPE0F28")) {
+ struct resource *rsrc;
+
+ /* Use regular BYT SST PCI VID:PID */
+ dev_id = 0x80860F28;
+ byt_rvp_platform_data.res_info = &lpe8086_res_info;
+
+ /*
+ * The "LPE0F28" ACPI device has separate IO-mem resources for:
+ * DDR, SHIM, MBOX, IRAM, DRAM, CFG
+ * None of which covers the entire LPE base address range.
+ * lpe8086_res_info.acpi_lpe_res_index points to the SHIM.
+ * Patch this to cover the entire base address range as expected
+ * by sst_platform_get_resources().
+ */
+ rsrc = platform_get_resource(pdev, IORESOURCE_MEM,
+ pdata->res_info->acpi_lpe_res_index);
+ if (!rsrc) {
+ dev_err(dev, "Invalid SHIM base\n");
+ return -EIO;
+ }
+ rsrc->start -= pdata->res_info->shim_offset;
+ rsrc->end = rsrc->start + 0x200000 - 1;
+ } else {
+ ret = kstrtouint(id->id, 16, &dev_id);
+ if (ret < 0) {
+ dev_err(dev, "Unique device id conversion error: %d\n", ret);
+ return ret;
+ }
+
+ if (soc_intel_is_byt_cr(pdev))
+ byt_rvp_platform_data.res_info = &bytcr_res_info;
}
dev_dbg(dev, "ACPI device id: %x\n", dev_id);
@@ -280,11 +330,6 @@ static int sst_acpi_probe(struct platform_device *pdev)
if (ret < 0)
return ret;
- if (soc_intel_is_byt_cr(pdev)) {
- /* override resource info */
- byt_rvp_platform_data.res_info = &bytcr_res_info;
- }
-
/* update machine parameters */
mach->mach_params.acpi_ipc_irq_index =
pdata->res_info->acpi_ipc_irq_index;
@@ -344,6 +389,7 @@ static void sst_acpi_remove(struct platform_device *pdev)
}
static const struct acpi_device_id sst_acpi_ids[] = {
+ { "LPE0F28", (unsigned long)&snd_soc_acpi_intel_baytrail_machines},
{ "80860F28", (unsigned long)&snd_soc_acpi_intel_baytrail_machines},
{ "808622A8", (unsigned long)&snd_soc_acpi_intel_cherrytrail_machines},
{ },
diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
index 5b8b21ade9cf..ddf68be0af14 100644
--- a/sound/soc/intel/boards/bytcr_rt5640.c
+++ b/sound/soc/intel/boards/bytcr_rt5640.c
@@ -17,6 +17,7 @@
#include <linux/acpi.h>
#include <linux/clk.h>
#include <linux/device.h>
+#include <linux/device/bus.h>
#include <linux/dmi.h>
#include <linux/gpio/consumer.h>
#include <linux/gpio/machine.h>
@@ -32,6 +33,8 @@
#include "../atom/sst-atom-controls.h"
#include "../common/soc-intel-quirks.h"
+#define BYT_RT5640_FALLBACK_CODEC_DEV_NAME "i2c-rt5640"
+
enum {
BYT_RT5640_DMIC1_MAP,
BYT_RT5640_DMIC2_MAP,
@@ -1129,6 +1132,21 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
BYT_RT5640_SSP0_AIF2 |
BYT_RT5640_MCLK_EN),
},
+ { /* Vexia Edu Atla 10 tablet */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
+ /* Above strings are too generic, also match on BIOS date */
+ DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"),
+ },
+ .driver_data = (void *)(BYT_RT5640_IN1_MAP |
+ BYT_RT5640_JD_SRC_JD2_IN4N |
+ BYT_RT5640_OVCD_TH_2000UA |
+ BYT_RT5640_OVCD_SF_0P75 |
+ BYT_RT5640_DIFF_MIC |
+ BYT_RT5640_SSP0_AIF2 |
+ BYT_RT5640_MCLK_EN),
+ },
{ /* Voyo Winpad A15 */
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
@@ -1697,9 +1715,33 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
codec_dev = acpi_get_first_physical_node(adev);
acpi_dev_put(adev);
- if (!codec_dev)
- return -EPROBE_DEFER;
- priv->codec_dev = get_device(codec_dev);
+
+ if (codec_dev) {
+ priv->codec_dev = get_device(codec_dev);
+ } else {
+ /*
+ * Special case for Android tablets where the codec i2c_client
+ * has been manually instantiated by x86_android_tablets.ko due
+ * to a broken DSDT.
+ */
+ codec_dev = bus_find_device_by_name(&i2c_bus_type, NULL,
+ BYT_RT5640_FALLBACK_CODEC_DEV_NAME);
+ if (!codec_dev)
+ return -EPROBE_DEFER;
+
+ if (!i2c_verify_client(codec_dev)) {
+ dev_err(dev, "Error '%s' is not an i2c_client\n",
+ BYT_RT5640_FALLBACK_CODEC_DEV_NAME);
+ put_device(codec_dev);
+ }
+
+ /* fixup codec name */
+ strscpy(byt_rt5640_codec_name, BYT_RT5640_FALLBACK_CODEC_DEV_NAME,
+ sizeof(byt_rt5640_codec_name));
+
+ /* bus_find_device() returns a reference no need to get() */
+ priv->codec_dev = codec_dev;
+ }
/*
* swap SSP0 if bytcr is detected
diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
index 0acc848c1f00..dcbcd1a59a3a 100644
--- a/sound/soc/stm/stm32_sai_sub.c
+++ b/sound/soc/stm/stm32_sai_sub.c
@@ -317,7 +317,7 @@ static int stm32_sai_get_clk_div(struct stm32_sai_sub_data *sai,
int div;
div = DIV_ROUND_CLOSEST(input_rate, output_rate);
- if (div > SAI_XCR1_MCKDIV_MAX(version)) {
+ if (div > SAI_XCR1_MCKDIV_MAX(version) || div <= 0) {
dev_err(&sai->pdev->dev, "Divider %d out of range\n", div);
return -EINVAL;
}
@@ -378,8 +378,8 @@ static long stm32_sai_mclk_round_rate(struct clk_hw *hw, unsigned long rate,
int div;
div = stm32_sai_get_clk_div(sai, *prate, rate);
- if (div < 0)
- return div;
+ if (div <= 0)
+ return -EINVAL;
mclk->freq = *prate / div;
diff --git a/sound/usb/6fire/chip.c b/sound/usb/6fire/chip.c
index 33e962178c93..d562a30b087f 100644
--- a/sound/usb/6fire/chip.c
+++ b/sound/usb/6fire/chip.c
@@ -61,8 +61,10 @@ static void usb6fire_chip_abort(struct sfire_chip *chip)
}
}
-static void usb6fire_chip_destroy(struct sfire_chip *chip)
+static void usb6fire_card_free(struct snd_card *card)
{
+ struct sfire_chip *chip = card->private_data;
+
if (chip) {
if (chip->pcm)
usb6fire_pcm_destroy(chip);
@@ -72,8 +74,6 @@ static void usb6fire_chip_destroy(struct sfire_chip *chip)
usb6fire_comm_destroy(chip);
if (chip->control)
usb6fire_control_destroy(chip);
- if (chip->card)
- snd_card_free(chip->card);
}
}
@@ -136,6 +136,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
chip->regidx = regidx;
chip->intf_count = 1;
chip->card = card;
+ card->private_free = usb6fire_card_free;
ret = usb6fire_comm_init(chip);
if (ret < 0)
@@ -162,7 +163,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
return 0;
destroy_chip:
- usb6fire_chip_destroy(chip);
+ snd_card_free(card);
return ret;
}
@@ -181,7 +182,6 @@ static void usb6fire_chip_disconnect(struct usb_interface *intf)
chip->shutdown = true;
usb6fire_chip_abort(chip);
- usb6fire_chip_destroy(chip);
}
}
}
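
Moving the teardown into card->private_free (and letting probe errors go through snd_card_free()) defers the actual release until the last reference to the card is gone, which is what the snd_card_free_when_closed() conversions further down rely on as well. A toy userspace model of that deferral, not the ALSA API itself:

#include <stdio.h>
#include <stdlib.h>

struct card {
        int refs;
        void (*private_free)(struct card *card);
};

static void card_put(struct card *card)
{
        if (--card->refs == 0) {
                if (card->private_free)
                        card->private_free(card);   /* runs exactly once */
                free(card);
        }
}

static void my_private_free(struct card *card)
{
        printf("tearing down device state for card %p\n", (void *)card);
}

int main(void)
{
        struct card *card = calloc(1, sizeof(*card));

        if (!card)
                return 1;
        card->refs = 2;                 /* driver + one open file handle */
        card->private_free = my_private_free;

        card_put(card);                 /* disconnect: nothing freed yet */
        card_put(card);                 /* last close: teardown happens */
        return 0;
}
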
diff --git a/sound/usb/caiaq/audio.c b/sound/usb/caiaq/audio.c
index 4981753652a7..7a89872aa0cb 100644
--- a/sound/usb/caiaq/audio.c
+++ b/sound/usb/caiaq/audio.c
@@ -869,14 +869,20 @@ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev)
return 0;
}
-void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
+void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev)
{
struct device *dev = caiaqdev_to_dev(cdev);
dev_dbg(dev, "%s(%p)\n", __func__, cdev);
stream_stop(cdev);
+}
+
+void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
+{
+ struct device *dev = caiaqdev_to_dev(cdev);
+
+ dev_dbg(dev, "%s(%p)\n", __func__, cdev);
free_urbs(cdev->data_urbs_in);
free_urbs(cdev->data_urbs_out);
kfree(cdev->data_cb_info);
}
-
diff --git a/sound/usb/caiaq/audio.h b/sound/usb/caiaq/audio.h
index 869bf6264d6a..07f5d064456c 100644
--- a/sound/usb/caiaq/audio.h
+++ b/sound/usb/caiaq/audio.h
@@ -3,6 +3,7 @@
#define CAIAQ_AUDIO_H
int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev);
+void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev);
void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev);
#endif /* CAIAQ_AUDIO_H */
diff --git a/sound/usb/caiaq/device.c b/sound/usb/caiaq/device.c
index b5cbf1f195c4..dfd820483849 100644
--- a/sound/usb/caiaq/device.c
+++ b/sound/usb/caiaq/device.c
@@ -376,6 +376,17 @@ static void setup_card(struct snd_usb_caiaqdev *cdev)
dev_err(dev, "Unable to set up control system (ret=%d)\n", ret);
}
+static void card_free(struct snd_card *card)
+{
+ struct snd_usb_caiaqdev *cdev = caiaqdev(card);
+
+#ifdef CONFIG_SND_USB_CAIAQ_INPUT
+ snd_usb_caiaq_input_free(cdev);
+#endif
+ snd_usb_caiaq_audio_free(cdev);
+ usb_reset_device(cdev->chip.dev);
+}
+
static int create_card(struct usb_device *usb_dev,
struct usb_interface *intf,
struct snd_card **cardp)
@@ -489,6 +500,7 @@ static int init_card(struct snd_usb_caiaqdev *cdev)
cdev->vendor_name, cdev->product_name, usbpath);
setup_card(cdev);
+ card->private_free = card_free;
return 0;
err_kill_urb:
@@ -534,15 +546,14 @@ static void snd_disconnect(struct usb_interface *intf)
snd_card_disconnect(card);
#ifdef CONFIG_SND_USB_CAIAQ_INPUT
- snd_usb_caiaq_input_free(cdev);
+ snd_usb_caiaq_input_disconnect(cdev);
#endif
- snd_usb_caiaq_audio_free(cdev);
+ snd_usb_caiaq_audio_disconnect(cdev);
usb_kill_urb(&cdev->ep1_in_urb);
usb_kill_urb(&cdev->midi_out_urb);
- snd_card_free(card);
- usb_reset_device(interface_to_usbdev(intf));
+ snd_card_free_when_closed(card);
}
diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c
index 84f26dce7f5d..a9130891bb69 100644
--- a/sound/usb/caiaq/input.c
+++ b/sound/usb/caiaq/input.c
@@ -829,15 +829,21 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
return ret;
}
-void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
+void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev)
{
if (!cdev || !cdev->input_dev)
return;
usb_kill_urb(cdev->ep4_in_urb);
+ input_unregister_device(cdev->input_dev);
+}
+
+void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
+{
+ if (!cdev || !cdev->input_dev)
+ return;
+
usb_free_urb(cdev->ep4_in_urb);
cdev->ep4_in_urb = NULL;
-
- input_unregister_device(cdev->input_dev);
cdev->input_dev = NULL;
}
diff --git a/sound/usb/caiaq/input.h b/sound/usb/caiaq/input.h
index c42891e7be88..fbe267f85d02 100644
--- a/sound/usb/caiaq/input.h
+++ b/sound/usb/caiaq/input.h
@@ -4,6 +4,7 @@
void snd_usb_caiaq_input_dispatch(struct snd_usb_caiaqdev *cdev, char *buf, unsigned int len);
int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev);
+void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev);
void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev);
#endif
diff --git a/sound/usb/clock.c b/sound/usb/clock.c
index a676ad093d18..f0f1e445cc56 100644
--- a/sound/usb/clock.c
+++ b/sound/usb/clock.c
@@ -36,6 +36,12 @@ union uac23_clock_multiplier_desc {
struct uac_clock_multiplier_descriptor v3;
};
+/* check whether the descriptor bLength has the minimal length */
+#define DESC_LENGTH_CHECK(p, proto) \
+ ((proto) == UAC_VERSION_3 ? \
+ ((p)->v3.bLength >= sizeof((p)->v3)) : \
+ ((p)->v2.bLength >= sizeof((p)->v2)))
+
#define GET_VAL(p, proto, field) \
((proto) == UAC_VERSION_3 ? (p)->v3.field : (p)->v2.field)
@@ -58,6 +64,8 @@ static bool validate_clock_source(void *p, int id, int proto)
{
union uac23_clock_source_desc *cs = p;
+ if (!DESC_LENGTH_CHECK(cs, proto))
+ return false;
return GET_VAL(cs, proto, bClockID) == id;
}
@@ -65,13 +73,27 @@ static bool validate_clock_selector(void *p, int id, int proto)
{
union uac23_clock_selector_desc *cs = p;
- return GET_VAL(cs, proto, bClockID) == id;
+ if (!DESC_LENGTH_CHECK(cs, proto))
+ return false;
+ if (GET_VAL(cs, proto, bClockID) != id)
+ return false;
+ /* additional length check for baCSourceID array (in bNrInPins size)
+ * and two more fields (which sizes depend on the protocol)
+ */
+ if (proto == UAC_VERSION_3)
+ return cs->v3.bLength >= sizeof(cs->v3) + cs->v3.bNrInPins +
+ 4 /* bmControls */ + 2 /* wCSelectorDescrStr */;
+ else
+ return cs->v2.bLength >= sizeof(cs->v2) + cs->v2.bNrInPins +
+ 1 /* bmControls */ + 1 /* iClockSelector */;
}
static bool validate_clock_multiplier(void *p, int id, int proto)
{
union uac23_clock_multiplier_desc *cs = p;
+ if (!DESC_LENGTH_CHECK(cs, proto))
+ return false;
return GET_VAL(cs, proto, bClockID) == id;
}
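
Both changes are about not trusting bLength: the common DESC_LENGTH_CHECK() rejects descriptors shorter than the fixed part of the structure, and the selector case additionally accounts for the variable-length baCSourceID array plus the trailing fields. A simplified stand-in sketch (the struct layout here is illustrative, not the real UAC descriptor definitions):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Fixed header followed by bNrInPins source IDs, bmControls and
 * iClockSelector -- roughly the UAC2 clock-selector shape. */
struct clock_selector_hdr {
        uint8_t bLength;
        uint8_t bDescriptorType;
        uint8_t bDescriptorSubtype;
        uint8_t bClockID;
        uint8_t bNrInPins;
};

static bool clock_selector_ok(const struct clock_selector_hdr *cs)
{
        if (cs->bLength < sizeof(*cs))          /* fixed part present? */
                return false;
        /* variable tail: bNrInPins IDs + bmControls + iClockSelector */
        return cs->bLength >= sizeof(*cs) + cs->bNrInPins + 1 + 1;
}
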
diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
index 75cde5779f38..d1bd8e0d6025 100644
--- a/sound/usb/quirks-table.h
+++ b/sound/usb/quirks-table.h
@@ -324,7 +324,6 @@ YAMAHA_DEVICE(0x105a, NULL),
YAMAHA_DEVICE(0x105b, NULL),
YAMAHA_DEVICE(0x105c, NULL),
YAMAHA_DEVICE(0x105d, NULL),
-YAMAHA_DEVICE(0x1718, "P-125"),
{
USB_DEVICE(0x0499, 0x1503),
QUIRK_DRIVER_INFO {
@@ -391,6 +390,19 @@ YAMAHA_DEVICE(0x1718, "P-125"),
}
}
},
+{
+ USB_DEVICE(0x0499, 0x1718),
+ QUIRK_DRIVER_INFO {
+ /* .vendor_name = "Yamaha", */
+ /* .product_name = "P-125", */
+ QUIRK_DATA_COMPOSITE {
+ { QUIRK_DATA_STANDARD_AUDIO(1) },
+ { QUIRK_DATA_STANDARD_AUDIO(2) },
+ { QUIRK_DATA_MIDI_YAMAHA(3) },
+ QUIRK_COMPOSITE_END
+ }
+ }
+},
YAMAHA_DEVICE(0x2000, "DGP-7"),
YAMAHA_DEVICE(0x2001, "DGP-5"),
YAMAHA_DEVICE(0x2002, NULL),
diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
index 37211ad31ec8..30a4d2deefda 100644
--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -555,6 +555,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip,
static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interface *intf)
{
struct usb_host_config *config = dev->actconfig;
+ struct usb_device_descriptor new_device_descriptor;
int err;
if (le16_to_cpu(get_cfg_desc(config)->wTotalLength) == EXTIGY_FIRMWARE_SIZE_OLD ||
@@ -566,10 +567,14 @@ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interfac
if (err < 0)
dev_dbg(&dev->dev, "error sending boot message: %d\n", err);
err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
- &dev->descriptor, sizeof(dev->descriptor));
- config = dev->actconfig;
+ &new_device_descriptor, sizeof(new_device_descriptor));
if (err < 0)
dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
+ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
+ new_device_descriptor.bNumConfigurations);
+ else
+ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
err = usb_reset_configuration(dev);
if (err < 0)
dev_dbg(&dev->dev, "error usb_reset_configuration: %d\n", err);
@@ -901,6 +906,7 @@ static void mbox2_setup_48_24_magic(struct usb_device *dev)
static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
{
struct usb_host_config *config = dev->actconfig;
+ struct usb_device_descriptor new_device_descriptor;
int err;
u8 bootresponse[0x12];
int fwsize;
@@ -936,10 +942,14 @@ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
dev_dbg(&dev->dev, "device initialised!\n");
err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
- &dev->descriptor, sizeof(dev->descriptor));
- config = dev->actconfig;
+ &new_device_descriptor, sizeof(new_device_descriptor));
if (err < 0)
dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
+ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
+ new_device_descriptor.bNumConfigurations);
+ else
+ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
err = usb_reset_configuration(dev);
if (err < 0)
@@ -1253,6 +1263,7 @@ static void mbox3_setup_48_24_magic(struct usb_device *dev)
static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
{
struct usb_host_config *config = dev->actconfig;
+ struct usb_device_descriptor new_device_descriptor;
int err;
int descriptor_size;
@@ -1266,10 +1277,14 @@ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
dev_dbg(&dev->dev, "device initialised!\n");
err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
- &dev->descriptor, sizeof(dev->descriptor));
- config = dev->actconfig;
+ &new_device_descriptor, sizeof(new_device_descriptor));
if (err < 0)
dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
+ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
+ new_device_descriptor.bNumConfigurations);
+ else
+ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
err = usb_reset_configuration(dev);
if (err < 0)
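
All three boot quirks now read the descriptor into a stack copy, sanity-check bNumConfigurations against what the USB core already allocated for, and only then overwrite dev->descriptor, instead of letting the device rewrite the live copy unchecked. The pattern in isolation (field names mirror the USB ones, the struct is trimmed down):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct dev_desc {
        uint8_t bNumConfigurations;
        /* other descriptor fields elided for the sketch */
};

/* Read into a temporary, validate, then commit -- never let freshly
 * fetched device data grow past what existing arrays were sized for. */
static bool refresh_descriptor(struct dev_desc *live,
                               const struct dev_desc *fresh)
{
        if (fresh->bNumConfigurations > live->bNumConfigurations)
                return false;

        memcpy(live, fresh, sizeof(*live));
        return true;
}
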
diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
index 709ccad972e2..612047ca5fe7 100644
--- a/sound/usb/usx2y/us122l.c
+++ b/sound/usb/usx2y/us122l.c
@@ -617,10 +617,7 @@ static void snd_us122l_disconnect(struct usb_interface *intf)
usb_put_intf(usb_ifnum_to_if(us122l->dev, 1));
usb_put_dev(us122l->dev);
- while (atomic_read(&us122l->mmap_count))
- msleep(500);
-
- snd_card_free(card);
+ snd_card_free_when_closed(card);
}
static int snd_us122l_suspend(struct usb_interface *intf, pm_message_t message)
diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c
index 52f4e6652407..4c4ce0319d62 100644
--- a/sound/usb/usx2y/usbusx2y.c
+++ b/sound/usb/usx2y/usbusx2y.c
@@ -423,7 +423,7 @@ static void snd_usx2y_disconnect(struct usb_interface *intf)
}
if (usx2y->us428ctls_sharedmem)
wake_up(&usx2y->us428ctls_wait_queue_head);
- snd_card_free(card);
+ snd_card_free_when_closed(card);
}
static int snd_usx2y_probe(struct usb_interface *intf,
diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
index 7b8d9ec89ebd..c032d2c6ab6d 100644
--- a/tools/bpf/bpftool/jit_disasm.c
+++ b/tools/bpf/bpftool/jit_disasm.c
@@ -80,7 +80,8 @@ symbol_lookup_callback(__maybe_unused void *disasm_info,
static int
init_context(disasm_ctx_t *ctx, const char *arch,
__maybe_unused const char *disassembler_options,
- __maybe_unused unsigned char *image, __maybe_unused ssize_t len)
+ __maybe_unused unsigned char *image, __maybe_unused ssize_t len,
+ __maybe_unused __u64 func_ksym)
{
char *triple;
@@ -109,12 +110,13 @@ static void destroy_context(disasm_ctx_t *ctx)
}
static int
-disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc)
+disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc,
+ __u64 func_ksym)
{
char buf[256];
int count;
- count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, pc,
+ count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, func_ksym + pc,
buf, sizeof(buf));
if (json_output)
printf_json(buf);
@@ -136,8 +138,21 @@ int disasm_init(void)
#ifdef HAVE_LIBBFD_SUPPORT
#define DISASM_SPACER "\t"
+struct disasm_info {
+ struct disassemble_info info;
+ __u64 func_ksym;
+};
+
+static void disasm_print_addr(bfd_vma addr, struct disassemble_info *info)
+{
+ struct disasm_info *dinfo = container_of(info, struct disasm_info, info);
+
+ addr += dinfo->func_ksym;
+ generic_print_address(addr, info);
+}
+
typedef struct {
- struct disassemble_info *info;
+ struct disasm_info *info;
disassembler_ftype disassemble;
bfd *bfdf;
} disasm_ctx_t;
@@ -215,7 +230,7 @@ static int fprintf_json_styled(void *out,
static int init_context(disasm_ctx_t *ctx, const char *arch,
const char *disassembler_options,
- unsigned char *image, ssize_t len)
+ unsigned char *image, ssize_t len, __u64 func_ksym)
{
struct disassemble_info *info;
char tpath[PATH_MAX];
@@ -238,12 +253,13 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
}
bfdf = ctx->bfdf;
- ctx->info = malloc(sizeof(struct disassemble_info));
+ ctx->info = malloc(sizeof(struct disasm_info));
if (!ctx->info) {
p_err("mem alloc failed");
goto err_close;
}
- info = ctx->info;
+ ctx->info->func_ksym = func_ksym;
+ info = &ctx->info->info;
if (json_output)
init_disassemble_info_compat(info, stdout,
@@ -272,6 +288,7 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
info->disassembler_options = disassembler_options;
info->buffer = image;
info->buffer_length = len;
+ info->print_address_func = disasm_print_addr;
disassemble_init_for_target(info);
@@ -304,9 +321,10 @@ static void destroy_context(disasm_ctx_t *ctx)
static int
disassemble_insn(disasm_ctx_t *ctx, __maybe_unused unsigned char *image,
- __maybe_unused ssize_t len, int pc)
+ __maybe_unused ssize_t len, int pc,
+ __maybe_unused __u64 func_ksym)
{
- return ctx->disassemble(pc, ctx->info);
+ return ctx->disassemble(pc, &ctx->info->info);
}
int disasm_init(void)
@@ -331,7 +349,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
if (!len)
return -1;
- if (init_context(&ctx, arch, disassembler_options, image, len))
+ if (init_context(&ctx, arch, disassembler_options, image, len, func_ksym))
return -1;
if (json_output)
@@ -360,7 +378,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
printf("%4x:" DISASM_SPACER, pc);
}
- count = disassemble_insn(&ctx, image, len, pc);
+ count = disassemble_insn(&ctx, image, len, pc, func_ksym);
if (json_output) {
/* Operand array, was started in fprintf_json. Before
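
Wrapping struct disassemble_info inside struct disasm_info lets the print_address_func callback recover the extra func_ksym field via container_of(), so printed branch and call targets line up with the kernel addresses of the JITed image rather than offsets from zero. The embedding trick on its own, with a local container_of() and made-up callback types:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

/* A callback context we cannot change (stand-in for disassemble_info). */
struct cb_info {
        void (*print_addr)(uint64_t addr, struct cb_info *info);
};

/* Embed it to smuggle extra state (a base address) into the callback. */
struct my_info {
        struct cb_info info;
        uint64_t base;
};

static void print_addr(uint64_t addr, struct cb_info *info)
{
        struct my_info *mi = container_of(info, struct my_info, info);

        printf("0x%llx\n", (unsigned long long)(mi->base + addr));
}

int main(void)
{
        struct my_info mi = {
                .info = { .print_addr = print_addr },
                .base = 0xffffffffc0000000ull,
        };

        mi.info.print_addr(0x40, &mi.info);
        return 0;
}
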
diff --git a/tools/include/nolibc/arch-s390.h b/tools/include/nolibc/arch-s390.h
index 5d60fd43f883..f72df614db3a 100644
--- a/tools/include/nolibc/arch-s390.h
+++ b/tools/include/nolibc/arch-s390.h
@@ -10,6 +10,7 @@
#include "compiler.h"
#include "crt.h"
+#include "std.h"
/* Syscalls for s390:
* - registers are 64-bit
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index ceed16a10285..2fad178949ef 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3586,7 +3586,7 @@ static bool sym_is_subprog(const Elf64_Sym *sym, int text_shndx)
return true;
/* global function */
- return bind == STB_GLOBAL && type == STT_FUNC;
+ return (bind == STB_GLOBAL || bind == STB_WEAK) && type == STT_FUNC;
}
static int find_extern_btf_id(const struct btf *btf, const char *ext_name)
@@ -3990,7 +3990,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog)
{
- return prog->sec_idx == obj->efile.text_shndx && obj->nr_programs > 1;
+ return prog->sec_idx == obj->efile.text_shndx;
}
struct bpf_program *
@@ -6837,8 +6837,14 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
opts->prog_flags |= BPF_F_XDP_HAS_FRAGS;
/* special check for usdt to use uprobe_multi link */
- if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK))
+ if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) {
+ /* for BPF_TRACE_UPROBE_MULTI, user might want to query expected_attach_type
+ * in prog, and expected_attach_type we set in kernel is from opts, so we
+ * update both.
+ */
prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
+ opts->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
+ }
if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) {
int btf_obj_fd = 0, btf_type_id = 0, err;
@@ -6915,6 +6921,7 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
load_attr.attach_btf_id = prog->attach_btf_id;
load_attr.kern_version = kern_version;
load_attr.prog_ifindex = prog->prog_ifindex;
+ load_attr.expected_attach_type = prog->expected_attach_type;
/* specify func_info/line_info only if kernel supports them */
btf_fd = bpf_object__btf_fd(obj);
@@ -6943,9 +6950,6 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
insns_cnt = prog->insns_cnt;
}
- /* allow prog_prepare_load_fn to change expected_attach_type */
- load_attr.expected_attach_type = prog->expected_attach_type;
-
if (obj->gen_loader) {
bpf_gen__prog_load(obj->gen_loader, prog->type, prog->name,
license, insns, insns_cnt, &load_attr,
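
Keeping prog->expected_attach_type and opts->expected_attach_type in step means that what the kernel was told at load time is also what bpf_program__expected_attach_type() reports afterwards, including USDT programs silently promoted to BPF_TRACE_UPROBE_MULTI. A small caller-side sketch of querying it (object and program names are placeholders, error handling trimmed):

#include <bpf/libbpf.h>
#include <stdio.h>

int main(void)
{
        struct bpf_object *obj = bpf_object__open_file("obj.bpf.o", NULL);
        struct bpf_program *prog;

        if (!obj || bpf_object__load(obj))
                return 1;

        prog = bpf_object__find_program_by_name(obj, "prog_name");
        if (prog)
                printf("expected_attach_type: %d\n",
                       bpf_program__expected_attach_type(prog));

        bpf_object__close(obj);
        return 0;
}
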
diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
index b311bb91f672..88cc7236f122 100644
--- a/tools/lib/bpf/linker.c
+++ b/tools/lib/bpf/linker.c
@@ -396,6 +396,8 @@ static int init_output_elf(struct bpf_linker *linker, const char *file)
pr_warn_elf("failed to create SYMTAB data");
return -EINVAL;
}
+ /* Ensure libelf translates byte-order of symbol records */
+ sec->data->d_type = ELF_T_SYM;
str_off = strset__add_str(linker->strtab_strs, sec->sec_name);
if (str_off < 0)
diff --git a/tools/lib/thermal/Makefile b/tools/lib/thermal/Makefile
index 2d0d255fd0e1..8890fd57b110 100644
--- a/tools/lib/thermal/Makefile
+++ b/tools/lib/thermal/Makefile
@@ -121,7 +121,9 @@ all: fixdep
clean:
$(call QUIET_CLEAN, libthermal) $(RM) $(LIBTHERMAL_A) \
- *.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBTHERMAL_VERSION) .*.d .*.cmd LIBTHERMAL-CFLAGS $(LIBTHERMAL_PC)
+ *.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBTHERMAL_VERSION) \
+ .*.d .*.cmd LIBTHERMAL-CFLAGS $(LIBTHERMAL_PC) \
+ $(srctree)/tools/$(THERMAL_UAPI)
$(LIBTHERMAL_PC):
$(QUIET_GEN)sed -e "s|@...FIX@|$(prefix)|" \
diff --git a/tools/lib/thermal/commands.c b/tools/lib/thermal/commands.c
index 73d4d4e8d6ec..27b4442f0e34 100644
--- a/tools/lib/thermal/commands.c
+++ b/tools/lib/thermal/commands.c
@@ -261,9 +261,25 @@ static struct genl_ops thermal_cmd_ops = {
.o_ncmds = ARRAY_SIZE(thermal_cmds),
};
-static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int cmd,
- int flags, void *arg)
+struct cmd_param {
+ int tz_id;
+};
+
+typedef int (*cmd_cb_t)(struct nl_msg *, struct cmd_param *);
+
+static int thermal_genl_tz_id_encode(struct nl_msg *msg, struct cmd_param *p)
+{
+ if (p->tz_id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, p->tz_id))
+ return -1;
+
+ return 0;
+}
+
+static thermal_error_t thermal_genl_auto(struct thermal_handler *th, cmd_cb_t cmd_cb,
+ struct cmd_param *param,
+ int cmd, int flags, void *arg)
{
+ thermal_error_t ret = THERMAL_ERROR;
struct nl_msg *msg;
void *hdr;
@@ -274,45 +290,55 @@ static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int
hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, thermal_cmd_ops.o_id,
0, flags, cmd, THERMAL_GENL_VERSION);
if (!hdr)
- return THERMAL_ERROR;
+ goto out;
- if (id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, id))
- return THERMAL_ERROR;
+ if (cmd_cb && cmd_cb(msg, param))
+ goto out;
if (nl_send_msg(th->sk_cmd, th->cb_cmd, msg, genl_handle_msg, arg))
- return THERMAL_ERROR;
+ goto out;
+ ret = THERMAL_SUCCESS;
+out:
nlmsg_free(msg);
- return THERMAL_SUCCESS;
+ return ret;
}
thermal_error_t thermal_cmd_get_tz(struct thermal_handler *th, struct thermal_zone **tz)
{
- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_TZ_GET_ID,
+ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_TZ_GET_ID,
NLM_F_DUMP | NLM_F_ACK, tz);
}
thermal_error_t thermal_cmd_get_cdev(struct thermal_handler *th, struct thermal_cdev **tc)
{
- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_CDEV_GET,
+ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_CDEV_GET,
NLM_F_DUMP | NLM_F_ACK, tc);
}
thermal_error_t thermal_cmd_get_trip(struct thermal_handler *th, struct thermal_zone *tz)
{
- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TRIP,
- 0, tz);
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_TZ_GET_TRIP, 0, tz);
}
thermal_error_t thermal_cmd_get_governor(struct thermal_handler *th, struct thermal_zone *tz)
{
- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
}
thermal_error_t thermal_cmd_get_temp(struct thermal_handler *th, struct thermal_zone *tz)
{
- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
}
thermal_error_t thermal_cmd_exit(struct thermal_handler *th)
diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
index ac2e6c75f912..a1971703e49c 100644
--- a/tools/perf/builtin-ftrace.c
+++ b/tools/perf/builtin-ftrace.c
@@ -771,7 +771,7 @@ static void display_histogram(int buckets[], bool use_nsec)
bar_len = buckets[0] * bar_total / total;
printf(" %4d - %-4d %s | %10d | %.*s%*s |\n",
- 0, 1, "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
+ 0, 1, use_nsec ? "ns" : "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
for (i = 1; i < NUM_BUCKET - 1; i++) {
int start = (1 << (i - 1));
diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
index 61c2c96cc070..c8c72fcf37e1 100644
--- a/tools/perf/builtin-list.c
+++ b/tools/perf/builtin-list.c
@@ -95,7 +95,7 @@ static void wordwrap(const char *s, int start, int max, int corr)
}
}
-static void default_print_event(void *ps, const char *pmu_name, const char *topic,
+static void default_print_event(void *ps, const char *topic, const char *pmu_name,
const char *event_name, const char *event_alias,
const char *scale_unit __maybe_unused,
bool deprecated, const char *event_type_desc,
@@ -321,7 +321,7 @@ static void fix_escape_printf(struct strbuf *buf, const char *fmt, ...)
fputs(buf->buf, stdout);
}
-static void json_print_event(void *ps, const char *pmu_name, const char *topic,
+static void json_print_event(void *ps, const char *topic, const char *pmu_name,
const char *event_name, const char *event_alias,
const char *scale_unit,
bool deprecated, const char *event_type_desc,
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 78c104922181..9692ebdd7f11 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -712,15 +712,19 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
}
if (!cpu_map__is_dummy(evsel_list->core.user_requested_cpus)) {
- if (affinity__setup(&saved_affinity) < 0)
- return -1;
+ if (affinity__setup(&saved_affinity) < 0) {
+ err = -1;
+ goto err_out;
+ }
affinity = &saved_affinity;
}
evlist__for_each_entry(evsel_list, counter) {
counter->reset_group = false;
- if (bpf_counter__load(counter, &target))
- return -1;
+ if (bpf_counter__load(counter, &target)) {
+ err = -1;
+ goto err_out;
+ }
if (!(evsel__is_bperf(counter)))
all_counters_use_bpf = false;
}
@@ -763,7 +767,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
switch (stat_handle_error(counter)) {
case COUNTER_FATAL:
- return -1;
+ err = -1;
+ goto err_out;
case COUNTER_RETRY:
goto try_again;
case COUNTER_SKIP:
@@ -804,7 +809,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
switch (stat_handle_error(counter)) {
case COUNTER_FATAL:
- return -1;
+ err = -1;
+ goto err_out;
case COUNTER_RETRY:
goto try_again_reset;
case COUNTER_SKIP:
@@ -817,6 +823,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
}
}
affinity__cleanup(affinity);
+ affinity = NULL;
evlist__for_each_entry(evsel_list, counter) {
if (!counter->supported) {
@@ -829,8 +836,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
stat_config.unit_width = l;
if (evsel__should_store_id(counter) &&
- evsel__store_ids(counter, evsel_list))
- return -1;
+ evsel__store_ids(counter, evsel_list)) {
+ err = -1;
+ goto err_out;
+ }
}
if (evlist__apply_filters(evsel_list, &counter)) {
@@ -851,20 +860,23 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
}
if (err < 0)
- return err;
+ goto err_out;
err = perf_event__synthesize_stat_events(&stat_config, NULL, evsel_list,
process_synthesized_event, is_pipe);
if (err < 0)
- return err;
+ goto err_out;
+
}
if (target.initial_delay) {
pr_info(EVLIST_DISABLED_MSG);
} else {
err = enable_counters();
- if (err)
- return -1;
+ if (err) {
+ err = -1;
+ goto err_out;
+ }
}
/* Exec the command, if any */
@@ -874,8 +886,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (target.initial_delay > 0) {
usleep(target.initial_delay * USEC_PER_MSEC);
err = enable_counters();
- if (err)
- return -1;
+ if (err) {
+ err = -1;
+ goto err_out;
+ }
pr_info(EVLIST_ENABLED_MSG);
}
@@ -895,7 +909,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (workload_exec_errno) {
const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
pr_err("Workload failed: %s\n", emsg);
- return -1;
+ err = -1;
+ goto err_out;
}
if (WIFSIGNALED(status))
@@ -942,6 +957,13 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
evlist__close(evsel_list);
return WEXITSTATUS(status);
+
+err_out:
+ if (forks)
+ evlist__cancel_workload(evsel_list);
+
+ affinity__cleanup(affinity);
+ return err;
}
static int run_perf_stat(int argc, const char **argv, int run_idx)
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index e541d0e2777a..3ecd6868be2d 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -2414,6 +2414,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
char msg[1024];
void *args, *augmented_args = NULL;
int augmented_args_size;
+ size_t printed = 0;
if (sc == NULL)
return -1;
@@ -2429,8 +2430,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
args = perf_evsel__sc_tp_ptr(evsel, args, sample);
augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls_args_size);
- syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
- fprintf(trace->output, "%s", msg);
+ printed += syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
+ fprintf(trace->output, "%.*s", (int)printed, msg);
err = 0;
out_put:
thread__put(thread);
@@ -2803,7 +2804,7 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel,
printed += syscall_arg_fmt__scnprintf_val(arg, bf + printed, size - printed, &syscall_arg, val);
}
- return printed + fprintf(trace->output, "%s", bf);
+ return printed + fprintf(trace->output, "%.*s", (int)printed, bf);
}
static int trace__event_handler(struct trace *trace, struct evsel *evsel,
@@ -2812,13 +2813,8 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
{
struct thread *thread;
int callchain_ret = 0;
- /*
- * Check if we called perf_evsel__disable(evsel) due to, for instance,
- * this event's max_events having been hit and this is an entry coming
- * from the ring buffer that we should discard, since the max events
- * have already been considered/printed.
- */
- if (evsel->disabled)
+
+ if (evsel->nr_events_printed >= evsel->max_events)
return 0;
thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
@@ -3923,6 +3919,9 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
sizeof(__u32), BPF_ANY);
}
}
+
+ if (trace->skel)
+ trace->filter_pids.map = trace->skel->maps.pids_filtered;
#endif
err = trace__set_filter_pids(trace);
if (err < 0)
@@ -5031,6 +5030,10 @@ int cmd_trace(int argc, const char **argv)
if (trace.summary_only)
trace.summary = trace.summary_only;
+ /* Keep exited threads, otherwise information might be lost for summary */
+ if (trace.summary)
+ symbol_conf.keep_exited_threads = true;
+
if (output_name != NULL) {
err = trace__open_output(&trace, output_name);
if (err < 0) {
diff --git a/tools/perf/tests/attr/test-stat-default b/tools/perf/tests/attr/test-stat-default
index a1e2da0a9a6d..e47fb4944679 100644
--- a/tools/perf/tests/attr/test-stat-default
+++ b/tools/perf/tests/attr/test-stat-default
@@ -88,98 +88,142 @@ enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
[event13:base-stat]
fd=13
group_fd=11
type=4
-config=33280
+config=33024
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
[event14:base-stat]
fd=14
group_fd=11
type=4
-config=33536
+config=33280
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
[event15:base-stat]
fd=15
group_fd=11
type=4
-config=33024
+config=33536
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
[event16:base-stat]
fd=16
+group_fd=11
type=4
-config=4109
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
[event17:base-stat]
fd=17
+group_fd=11
type=4
-config=17039629
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
[event18:base-stat]
fd=18
+group_fd=11
type=4
-config=60
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
[event19:base-stat]
fd=19
+group_fd=11
type=4
-config=2097421
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
[event20:base-stat]
fd=20
type=4
-config=316
+config=4109
optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
[event21:base-stat]
fd=21
type=4
-config=412
+config=17039629
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
[event22:base-stat]
fd=22
type=4
-config=572
+config=60
optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
[event23:base-stat]
fd=23
type=4
-config=706
+config=2097421
optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
[event24:base-stat]
fd=24
type=4
+config=316
+optional=1
+
+# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+[event25:base-stat]
+fd=25
+type=4
+config=412
+optional=1
+
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+[event26:base-stat]
+fd=26
+type=4
+config=572
+optional=1
+
+# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+[event27:base-stat]
+fd=27
+type=4
+config=706
+optional=1
+
+# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+[event28:base-stat]
+fd=28
+type=4
config=270
optional=1
diff --git a/tools/perf/tests/attr/test-stat-detailed-1 b/tools/perf/tests/attr/test-stat-detailed-1
index 1c52cb05c900..3d500d3e0c5c 100644
--- a/tools/perf/tests/attr/test-stat-detailed-1
+++ b/tools/perf/tests/attr/test-stat-detailed-1
@@ -90,99 +90,143 @@ enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
[event13:base-stat]
fd=13
group_fd=11
type=4
-config=33280
+config=33024
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
[event14:base-stat]
fd=14
group_fd=11
type=4
-config=33536
+config=33280
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
[event15:base-stat]
fd=15
group_fd=11
type=4
-config=33024
+config=33536
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
[event16:base-stat]
fd=16
+group_fd=11
type=4
-config=4109
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
[event17:base-stat]
fd=17
+group_fd=11
type=4
-config=17039629
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
[event18:base-stat]
fd=18
+group_fd=11
type=4
-config=60
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
[event19:base-stat]
fd=19
+group_fd=11
type=4
-config=2097421
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
[event20:base-stat]
fd=20
type=4
-config=316
+config=4109
optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
[event21:base-stat]
fd=21
type=4
-config=412
+config=17039629
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
[event22:base-stat]
fd=22
type=4
-config=572
+config=60
optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
[event23:base-stat]
fd=23
type=4
-config=706
+config=2097421
optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
[event24:base-stat]
fd=24
type=4
+config=316
+optional=1
+
+# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+[event25:base-stat]
+fd=25
+type=4
+config=412
+optional=1
+
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+[event26:base-stat]
+fd=26
+type=4
+config=572
+optional=1
+
+# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+[event27:base-stat]
+fd=27
+type=4
+config=706
+optional=1
+
+# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+[event28:base-stat]
+fd=28
+type=4
config=270
optional=1
@@ -190,8 +234,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event25:base-stat]
-fd=25
+[event29:base-stat]
+fd=29
type=3
config=0
optional=1
@@ -200,8 +244,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event26:base-stat]
-fd=26
+[event30:base-stat]
+fd=30
type=3
config=65536
optional=1
@@ -210,8 +254,8 @@ optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event27:base-stat]
-fd=27
+[event31:base-stat]
+fd=31
type=3
config=2
optional=1
@@ -220,8 +264,8 @@ optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event28:base-stat]
-fd=28
+[event32:base-stat]
+fd=32
type=3
config=65538
optional=1
diff --git a/tools/perf/tests/attr/test-stat-detailed-2 b/tools/perf/tests/attr/test-stat-detailed-2
index 7e961d24a885..01777a63752f 100644
--- a/tools/perf/tests/attr/test-stat-detailed-2
+++ b/tools/perf/tests/attr/test-stat-detailed-2
@@ -90,99 +90,143 @@ enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
[event13:base-stat]
fd=13
group_fd=11
type=4
-config=33280
+config=33024
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
[event14:base-stat]
fd=14
group_fd=11
type=4
-config=33536
+config=33280
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
[event15:base-stat]
fd=15
group_fd=11
type=4
-config=33024
+config=33536
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
[event16:base-stat]
fd=16
+group_fd=11
type=4
-config=4109
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
[event17:base-stat]
fd=17
+group_fd=11
type=4
-config=17039629
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
[event18:base-stat]
fd=18
+group_fd=11
type=4
-config=60
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
[event19:base-stat]
fd=19
+group_fd=11
type=4
-config=2097421
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
[event20:base-stat]
fd=20
type=4
-config=316
+config=4109
optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
[event21:base-stat]
fd=21
type=4
-config=412
+config=17039629
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
[event22:base-stat]
fd=22
type=4
-config=572
+config=60
optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
[event23:base-stat]
fd=23
type=4
-config=706
+config=2097421
optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
[event24:base-stat]
fd=24
type=4
+config=316
+optional=1
+
+# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+[event25:base-stat]
+fd=25
+type=4
+config=412
+optional=1
+
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+[event26:base-stat]
+fd=26
+type=4
+config=572
+optional=1
+
+# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+[event27:base-stat]
+fd=27
+type=4
+config=706
+optional=1
+
+# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+[event28:base-stat]
+fd=28
+type=4
config=270
optional=1
@@ -190,8 +234,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event25:base-stat]
-fd=25
+[event29:base-stat]
+fd=29
type=3
config=0
optional=1
@@ -200,8 +244,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event26:base-stat]
-fd=26
+[event30:base-stat]
+fd=30
type=3
config=65536
optional=1
@@ -210,8 +254,8 @@ optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event27:base-stat]
-fd=27
+[event31:base-stat]
+fd=31
type=3
config=2
optional=1
@@ -220,8 +264,8 @@ optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event28:base-stat]
-fd=28
+[event32:base-stat]
+fd=32
type=3
config=65538
optional=1
@@ -230,8 +274,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event29:base-stat]
-fd=29
+[event33:base-stat]
+fd=33
type=3
config=1
optional=1
@@ -240,8 +284,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event30:base-stat]
-fd=30
+[event34:base-stat]
+fd=34
type=3
config=65537
optional=1
@@ -250,8 +294,8 @@ optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event31:base-stat]
-fd=31
+[event35:base-stat]
+fd=35
type=3
config=3
optional=1
@@ -260,8 +304,8 @@ optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event32:base-stat]
-fd=32
+[event36:base-stat]
+fd=36
type=3
config=65539
optional=1
@@ -270,8 +314,8 @@ optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event33:base-stat]
-fd=33
+[event37:base-stat]
+fd=37
type=3
config=4
optional=1
@@ -280,8 +324,8 @@ optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event34:base-stat]
-fd=34
+[event38:base-stat]
+fd=38
type=3
config=65540
optional=1
diff --git a/tools/perf/tests/attr/test-stat-detailed-3 b/tools/perf/tests/attr/test-stat-detailed-3
index e50535f45977..8400abd7e1e4 100644
--- a/tools/perf/tests/attr/test-stat-detailed-3
+++ b/tools/perf/tests/attr/test-stat-detailed-3
@@ -90,99 +90,143 @@ enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
[event13:base-stat]
fd=13
group_fd=11
type=4
-config=33280
+config=33024
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
[event14:base-stat]
fd=14
group_fd=11
type=4
-config=33536
+config=33280
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+# PERF_TYPE_RAW / topdown-be-bound (0x8300)
[event15:base-stat]
fd=15
group_fd=11
type=4
-config=33024
+config=33536
disabled=0
enable_on_exec=0
read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
[event16:base-stat]
fd=16
+group_fd=11
type=4
-config=4109
+config=33792
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
[event17:base-stat]
fd=17
+group_fd=11
type=4
-config=17039629
+config=34048
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
[event18:base-stat]
fd=18
+group_fd=11
type=4
-config=60
+config=34304
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
[event19:base-stat]
fd=19
+group_fd=11
type=4
-config=2097421
+config=34560
+disabled=0
+enable_on_exec=0
+read_format=15
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
[event20:base-stat]
fd=20
type=4
-config=316
+config=4109
optional=1
-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
[event21:base-stat]
fd=21
type=4
-config=412
+config=17039629
optional=1
-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
[event22:base-stat]
fd=22
type=4
-config=572
+config=60
optional=1
-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
[event23:base-stat]
fd=23
type=4
-config=706
+config=2097421
optional=1
-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
[event24:base-stat]
fd=24
type=4
+config=316
+optional=1
+
+# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
+[event25:base-stat]
+fd=25
+type=4
+config=412
+optional=1
+
+# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
+[event26:base-stat]
+fd=26
+type=4
+config=572
+optional=1
+
+# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
+[event27:base-stat]
+fd=27
+type=4
+config=706
+optional=1
+
+# PERF_TYPE_RAW / UOPS_ISSUED.ANY
+[event28:base-stat]
+fd=28
+type=4
config=270
optional=1
@@ -190,8 +234,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event25:base-stat]
-fd=25
+[event29:base-stat]
+fd=29
type=3
config=0
optional=1
@@ -200,8 +244,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event26:base-stat]
-fd=26
+[event30:base-stat]
+fd=30
type=3
config=65536
optional=1
@@ -210,8 +254,8 @@ optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event27:base-stat]
-fd=27
+[event31:base-stat]
+fd=31
type=3
config=2
optional=1
@@ -220,8 +264,8 @@ optional=1
# PERF_COUNT_HW_CACHE_LL << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event28:base-stat]
-fd=28
+[event32:base-stat]
+fd=32
type=3
config=65538
optional=1
@@ -230,8 +274,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event29:base-stat]
-fd=29
+[event33:base-stat]
+fd=33
type=3
config=1
optional=1
@@ -240,8 +284,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1I << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event30:base-stat]
-fd=30
+[event34:base-stat]
+fd=34
type=3
config=65537
optional=1
@@ -250,8 +294,8 @@ optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event31:base-stat]
-fd=31
+[event35:base-stat]
+fd=35
type=3
config=3
optional=1
@@ -260,8 +304,8 @@ optional=1
# PERF_COUNT_HW_CACHE_DTLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event32:base-stat]
-fd=32
+[event36:base-stat]
+fd=36
type=3
config=65539
optional=1
@@ -270,8 +314,8 @@ optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event33:base-stat]
-fd=33
+[event37:base-stat]
+fd=37
type=3
config=4
optional=1
@@ -280,8 +324,8 @@ optional=1
# PERF_COUNT_HW_CACHE_ITLB << 0 |
# (PERF_COUNT_HW_CACHE_OP_READ << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event34:base-stat]
-fd=34
+[event38:base-stat]
+fd=38
type=3
config=65540
optional=1
@@ -290,8 +334,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
-[event35:base-stat]
-fd=35
+[event39:base-stat]
+fd=39
type=3
config=512
optional=1
@@ -300,8 +344,8 @@ optional=1
# PERF_COUNT_HW_CACHE_L1D << 0 |
# (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
# (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
-[event36:base-stat]
-fd=36
+[event40:base-stat]
+fd=40
type=3
config=66048
optional=1
diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
index 9729d006550d..799c104901b4 100644
--- a/tools/perf/util/cs-etm.c
+++ b/tools/perf/util/cs-etm.c
@@ -2412,12 +2412,6 @@ static void cs_etm__clear_all_traceid_queues(struct cs_etm_queue *etmq)
/* Ignore return value */
cs_etm__process_traceid_queue(etmq, tidq);
-
- /*
- * Generate an instruction sample with the remaining
- * branchstack entries.
- */
- cs_etm__flush(etmq, tidq);
}
}
@@ -2560,7 +2554,7 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
while (1) {
if (!etm->heap.heap_cnt)
- goto out;
+ break;
/* Take the entry at the top of the min heap */
cs_queue_nr = etm->heap.heap_array[0].queue_nr;
@@ -2643,6 +2637,23 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
ret = auxtrace_heap__add(&etm->heap, cs_queue_nr, cs_timestamp);
}
+ for (i = 0; i < etm->queues.nr_queues; i++) {
+ struct int_node *inode;
+
+ etmq = etm->queues.queue_array[i].priv;
+ if (!etmq)
+ continue;
+
+ intlist__for_each_entry(inode, etmq->traceid_queues_list) {
+ int idx = (int)(intptr_t)inode->priv;
+
+ /* Flush any remaining branch stack entries */
+ tidq = etmq->traceid_queues[idx];
+ ret = cs_etm__end_block(etmq, tidq);
+ if (ret)
+ return ret;
+ }
+ }
out:
return ret;
}
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index eb1dd29c538d..1eadb4f7c1b9 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -46,6 +46,7 @@
#include <sys/mman.h>
#include <sys/prctl.h>
#include <sys/timerfd.h>
+#include <sys/wait.h>
#include <linux/bitops.h>
#include <linux/hash.h>
@@ -1412,6 +1413,8 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
int child_ready_pipe[2], go_pipe[2];
char bf;
+ evlist->workload.cork_fd = -1;
+
if (pipe(child_ready_pipe) < 0) {
perror("failed to create 'ready' pipe");
return -1;
@@ -1464,7 +1467,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
* For cancelling the workload without actually running it,
* the parent will just close workload.cork_fd, without writing
* anything, i.e. read will return zero and we just exit()
- * here.
+ * here (See evlist__cancel_workload()).
*/
if (ret != 1) {
if (ret == -1)
@@ -1528,7 +1531,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
int evlist__start_workload(struct evlist *evlist)
{
- if (evlist->workload.cork_fd > 0) {
+ if (evlist->workload.cork_fd >= 0) {
char bf = 0;
int ret;
/*
@@ -1539,12 +1542,24 @@ int evlist__start_workload(struct evlist *evlist)
perror("unable to write to pipe");
close(evlist->workload.cork_fd);
+ evlist->workload.cork_fd = -1;
return ret;
}
return 0;
}
+void evlist__cancel_workload(struct evlist *evlist)
+{
+ int status;
+
+ if (evlist->workload.cork_fd >= 0) {
+ close(evlist->workload.cork_fd);
+ evlist->workload.cork_fd = -1;
+ waitpid(evlist->workload.pid, &status, WNOHANG);
+ }
+}
+
int evlist__parse_sample(struct evlist *evlist, union perf_event *event, struct perf_sample *sample)
{
struct evsel *evsel = evlist__event2evsel(evlist, event);
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index cb91dc9117a2..12f929ffdf92 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -184,6 +184,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target,
const char *argv[], bool pipe_output,
void (*exec_error)(int signo, siginfo_t *info, void *ucontext));
int evlist__start_workload(struct evlist *evlist);
+void evlist__cancel_workload(struct evlist *evlist);
struct option;
diff --git a/tools/perf/util/pfm.c b/tools/perf/util/pfm.c
index 862e4a689868..54421fceef5c 100644
--- a/tools/perf/util/pfm.c
+++ b/tools/perf/util/pfm.c
@@ -220,7 +220,7 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
}
if (is_libpfm_event_supported(name, cpus, threads)) {
- print_cb->print_event(print_state, pinfo->name, topic,
+ print_cb->print_event(print_state, topic, pinfo->name,
name, info->equiv,
/*scale_unit=*/NULL,
/*deprecated=*/NULL, "PFM event",
@@ -254,8 +254,8 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
continue;
print_cb->print_event(print_state,
- pinfo->name,
topic,
+ pinfo->name,
name, /*alias=*/NULL,
/*scale_unit=*/NULL,
/*deprecated=*/NULL, "PFM event",
diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
index 54a237b2b853..f0577aa7eca8 100644
--- a/tools/perf/util/pmus.c
+++ b/tools/perf/util/pmus.c
@@ -474,8 +474,8 @@ void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *p
goto free;
print_cb->print_event(print_state,
- aliases[j].pmu_name,
aliases[j].topic,
+ aliases[j].pmu_name,
aliases[j].name,
aliases[j].alias,
aliases[j].scale_unit,
diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index f171360b0ef4..c0c8d7f9514b 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -1499,6 +1499,10 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
if (ret >= 0 && tf.pf.skip_empty_arg)
ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+#if _ELFUTILS_PREREQ(0, 142)
+ dwarf_cfi_end(tf.pf.cfi_eh);
+#endif
+
if (ret < 0 || tf.ntevs == 0) {
for (i = 0; i < tf.ntevs; i++)
clear_probe_trace_event(&tf.tevs[i]);
@@ -1741,8 +1745,21 @@ int debuginfo__find_probe_point(struct debuginfo *dbg, u64 addr,
/* Find a corresponding function (name, baseline and baseaddr) */
if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) {
- /* Get function entry information */
- func = basefunc = dwarf_diename(&spdie);
+ /*
+ * Get function entry information.
+ *
+ * As described in the document DWARF Debugging Information
+ * Format Version 5, section 2.22 Linkage Names, "mangled names,
+ * are used in various ways, ... to distinguish multiple
+ * entities that have the same name".
+ *
+ * Firstly try to get distinct linkage name, if fail then
+ * rollback to get associated name in DIE.
+ */
+ func = basefunc = die_get_linkage_name(&spdie);
+ if (!func)
+ func = basefunc = dwarf_diename(&spdie);
+
if (!func ||
die_entrypc(&spdie, &baseaddr) != 0 ||
dwarf_decl_line(&spdie, &baseline) != 0) {
diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
index 8bc1c80d3c1c..1f4650b95509 100644
--- a/tools/perf/util/probe-finder.h
+++ b/tools/perf/util/probe-finder.h
@@ -81,9 +81,9 @@ struct probe_finder {
/* For variable searching */
#if _ELFUTILS_PREREQ(0, 142)
- /* Call Frame Information from .eh_frame */
+ /* Call Frame Information from .eh_frame. Owned by this struct. */
Dwarf_CFI *cfi_eh;
- /* Call Frame Information from .debug_frame */
+ /* Call Frame Information from .debug_frame. Not owned. */
Dwarf_CFI *cfi_dbg;
#endif
Dwarf_Op *fb_ops; /* Frame base attribute */
diff --git a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
index 2b1425b92b69..a3d1e23fe02a 100644
--- a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
+++ b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
@@ -65,7 +65,7 @@ static int check_single_included_tags(int mem_type, int mode)
ptr = mte_insert_tags(ptr, BUFFER_SIZE);
/* Check tag value */
if (MT_FETCH_TAG((uintptr_t)ptr) == tag) {
- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
+ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%x\n",
MT_FETCH_TAG((uintptr_t)ptr),
MT_INCLUDE_VALID_TAG(tag));
result = KSFT_FAIL;
@@ -97,7 +97,7 @@ static int check_multiple_included_tags(int mem_type, int mode)
ptr = mte_insert_tags(ptr, BUFFER_SIZE);
/* Check tag value */
if (MT_FETCH_TAG((uintptr_t)ptr) < tag) {
- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
+ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%lx\n",
MT_FETCH_TAG((uintptr_t)ptr),
MT_INCLUDE_VALID_TAGS(excl_mask));
result = KSFT_FAIL;
diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
index 00ffd34c66d3..1120f5aa7655 100644
--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
+++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
@@ -38,7 +38,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
if (cur_mte_cxt.trig_si_code == si->si_code)
cur_mte_cxt.fault_valid = true;
else
- ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=$lx, fault addr=%lx\n",
+ ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=%llx, fault addr=%lx\n",
((ucontext_t *)uc)->uc_mcontext.pc,
addr);
return;
@@ -64,7 +64,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
exit(1);
}
} else if (signum == SIGBUS) {
- ksft_print_msg("INFO: SIGBUS signal at pc=%lx, fault addr=%lx, si_code=%lx\n",
+ ksft_print_msg("INFO: SIGBUS signal at pc=%llx, fault addr=%lx, si_code=%x\n",
((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code);
if ((cur_mte_cxt.trig_range >= 0 &&
addr >= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
index 86cd183ef6dc..293ac1049d38 100644
--- a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+++ b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
@@ -28,8 +28,8 @@ struct {
},
};
-SEC(".data.A") struct bpf_spin_lock lockA;
-SEC(".data.B") struct bpf_spin_lock lockB;
+static struct bpf_spin_lock lockA SEC(".data.A");
+static struct bpf_spin_lock lockB SEC(".data.B");
SEC("?tc")
int lock_id_kptr_preserve(void *ctx)
diff --git a/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c b/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
index f61d623b1ce8..f87365f7599b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
+++ b/tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
@@ -541,11 +541,24 @@ static __u64 subprog_spill_reg_precise(void)
SEC("?raw_tp")
__success __log_level(2)
-/* precision backtracking can't currently handle stack access not through r10,
- * so we won't be able to mark stack slot fp-8 as precise, and so will
- * fallback to forcing all as precise
- */
-__msg("mark_precise: frame0: falling back to forcing all scalars precise")
+__msg("10: (0f) r1 += r7")
+__msg("mark_precise: frame0: last_idx 10 first_idx 7 subseq_idx -1")
+__msg("mark_precise: frame0: regs=r7 stack= before 9: (bf) r1 = r8")
+__msg("mark_precise: frame0: regs=r7 stack= before 8: (27) r7 *= 4")
+__msg("mark_precise: frame0: regs=r7 stack= before 7: (79) r7 = *(u64 *)(r10 -8)")
+__msg("mark_precise: frame0: parent state regs= stack=-8: R0_w=2 R6_w=1 R8_rw=map_value(map=.data.vals,ks=4,vs=16) R10=fp0 fp-8_rw=P1")
+__msg("mark_precise: frame0: last_idx 18 first_idx 0 subseq_idx 7")
+__msg("mark_precise: frame0: regs= stack=-8 before 18: (95) exit")
+__msg("mark_precise: frame1: regs= stack= before 17: (0f) r0 += r2")
+__msg("mark_precise: frame1: regs= stack= before 16: (79) r2 = *(u64 *)(r1 +0)")
+__msg("mark_precise: frame1: regs= stack= before 15: (79) r0 = *(u64 *)(r10 -16)")
+__msg("mark_precise: frame1: regs= stack= before 14: (7b) *(u64 *)(r10 -16) = r2")
+__msg("mark_precise: frame1: regs= stack= before 13: (7b) *(u64 *)(r1 +0) = r2")
+__msg("mark_precise: frame1: regs=r2 stack= before 6: (85) call pc+6")
+__msg("mark_precise: frame0: regs=r2 stack= before 5: (bf) r2 = r6")
+__msg("mark_precise: frame0: regs=r6 stack= before 4: (07) r1 += -8")
+__msg("mark_precise: frame0: regs=r6 stack= before 3: (bf) r1 = r10")
+__msg("mark_precise: frame0: regs=r6 stack= before 2: (b7) r6 = 1")
__naked int subprog_spill_into_parent_stack_slot_precise(void)
{
asm volatile (
diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
index a181c0ccf98b..dccaf9b8cb90 100644
--- a/tools/testing/selftests/bpf/test_sockmap.c
+++ b/tools/testing/selftests/bpf/test_sockmap.c
@@ -56,6 +56,8 @@ static void running_handler(int a);
#define BPF_SOCKHASH_FILENAME "test_sockhash_kern.bpf.o"
#define CG_PATH "/sockmap"
+#define EDATAINTEGRITY 2001
+
/* global sockets */
int s1, s2, c1, c2, p1, p2;
int test_cnt;
@@ -85,6 +87,10 @@ int ktls;
int peek_flag;
int skb_use_parser;
int txmsg_omit_skb_parser;
+int verify_push_start;
+int verify_push_len;
+int verify_pop_start;
+int verify_pop_len;
static const struct option long_options[] = {
{"help", no_argument, NULL, 'h' },
@@ -417,16 +423,18 @@ static int msg_loop_sendpage(int fd, int iov_length, int cnt,
{
bool drop = opt->drop_expected;
unsigned char k = 0;
+ int i, j, fp;
FILE *file;
- int i, fp;
file = tmpfile();
if (!file) {
perror("create file for sendpage");
return 1;
}
- for (i = 0; i < iov_length * cnt; i++, k++)
- fwrite(&k, sizeof(char), 1, file);
+ for (i = 0; i < cnt; i++, k = 0) {
+ for (j = 0; j < iov_length; j++, k++)
+ fwrite(&k, sizeof(char), 1, file);
+ }
fflush(file);
fseek(file, 0, SEEK_SET);
@@ -509,42 +517,111 @@ static int msg_alloc_iov(struct msghdr *msg,
return -ENOMEM;
}
-static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz)
+/* In push or pop test, we need to do some calculations for msg_verify_data */
+static void msg_verify_date_prep(void)
{
- int i, j = 0, bytes_cnt = 0;
- unsigned char k = 0;
+ int push_range_end = txmsg_start_push + txmsg_end_push - 1;
+ int pop_range_end = txmsg_start_pop + txmsg_pop - 1;
+
+ if (txmsg_end_push && txmsg_pop &&
+ txmsg_start_push <= pop_range_end && txmsg_start_pop <= push_range_end) {
+ /* The push range and the pop range overlap */
+ int overlap_len;
+
+ verify_push_start = txmsg_start_push;
+ verify_pop_start = txmsg_start_pop;
+ if (txmsg_start_push < txmsg_start_pop)
+ overlap_len = min(push_range_end - txmsg_start_pop + 1, txmsg_pop);
+ else
+ overlap_len = min(pop_range_end - txmsg_start_push + 1, txmsg_end_push);
+ verify_push_len = max(txmsg_end_push - overlap_len, 0);
+ verify_pop_len = max(txmsg_pop - overlap_len, 0);
+ } else {
+ /* Otherwise */
+ verify_push_start = txmsg_start_push;
+ verify_pop_start = txmsg_start_pop;
+ verify_push_len = txmsg_end_push;
+ verify_pop_len = txmsg_pop;
+ }
+}
+
+static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz,
+ unsigned char *k_p, int *bytes_cnt_p,
+ int *check_cnt_p, int *push_p)
+{
+ int bytes_cnt = *bytes_cnt_p, check_cnt = *check_cnt_p, push = *push_p;
+ unsigned char k = *k_p;
+ int i, j;
- for (i = 0; i < msg->msg_iovlen; i++) {
+ for (i = 0, j = 0; i < msg->msg_iovlen && size; i++, j = 0) {
unsigned char *d = msg->msg_iov[i].iov_base;
/* Special case test for skb ingress + ktls */
if (i == 0 && txmsg_ktls_skb) {
if (msg->msg_iov[i].iov_len < 4)
- return -EIO;
+ return -EDATAINTEGRITY;
if (memcmp(d, "PASS", 4) != 0) {
fprintf(stderr,
"detected skb data error with skb ingress update @iov[%i]:%i \"%02x %02x %02x %02x\" != \"PASS\"\n",
i, 0, d[0], d[1], d[2], d[3]);
- return -EIO;
+ return -EDATAINTEGRITY;
}
j = 4; /* advance index past PASS header */
}
for (; j < msg->msg_iov[i].iov_len && size; j++) {
+ if (push > 0 &&
+ check_cnt == verify_push_start + verify_push_len - push) {
+ int skipped;
+revisit_push:
+ skipped = push;
+ if (j + push >= msg->msg_iov[i].iov_len)
+ skipped = msg->msg_iov[i].iov_len - j;
+ push -= skipped;
+ size -= skipped;
+ j += skipped - 1;
+ check_cnt += skipped;
+ continue;
+ }
+
+ if (verify_pop_len > 0 && check_cnt == verify_pop_start) {
+ bytes_cnt += verify_pop_len;
+ check_cnt += verify_pop_len;
+ k += verify_pop_len;
+
+ if (bytes_cnt == chunk_sz) {
+ k = 0;
+ bytes_cnt = 0;
+ check_cnt = 0;
+ push = verify_push_len;
+ }
+
+ if (push > 0 &&
+ check_cnt == verify_push_start + verify_push_len - push)
+ goto revisit_push;
+ }
+
if (d[j] != k++) {
fprintf(stderr,
"detected data corruption @iov[%i]:%i %02x != %02x, %02x ?= %02x\n",
i, j, d[j], k - 1, d[j+1], k);
- return -EIO;
+ return -EDATAINTEGRITY;
}
bytes_cnt++;
+ check_cnt++;
if (bytes_cnt == chunk_sz) {
k = 0;
bytes_cnt = 0;
+ check_cnt = 0;
+ push = verify_push_len;
}
size--;
}
}
+ *k_p = k;
+ *bytes_cnt_p = bytes_cnt;
+ *check_cnt_p = check_cnt;
+ *push_p = push;
return 0;
}
@@ -597,10 +674,14 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
}
clock_gettime(CLOCK_MONOTONIC, &s->end);
} else {
+ float total_bytes, txmsg_pop_total, txmsg_push_total;
int slct, recvp = 0, recv, max_fd = fd;
- float total_bytes, txmsg_pop_total;
int fd_flags = O_NONBLOCK;
struct timeval timeout;
+ unsigned char k = 0;
+ int bytes_cnt = 0;
+ int check_cnt = 0;
+ int push = 0;
fd_set w;
fcntl(fd, fd_flags);
@@ -614,12 +695,22 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
* This is really only useful for testing edge cases in code
* paths.
*/
- total_bytes = (float)iov_count * (float)iov_length * (float)cnt;
- if (txmsg_apply)
+ total_bytes = (float)iov_length * (float)cnt;
+ if (!opt->sendpage)
+ total_bytes *= (float)iov_count;
+ if (txmsg_apply) {
+ txmsg_push_total = txmsg_end_push * (total_bytes / txmsg_apply);
txmsg_pop_total = txmsg_pop * (total_bytes / txmsg_apply);
- else
+ } else {
+ txmsg_push_total = txmsg_end_push * cnt;
txmsg_pop_total = txmsg_pop * cnt;
+ }
+ total_bytes += txmsg_push_total;
total_bytes -= txmsg_pop_total;
+ if (data) {
+ msg_verify_date_prep();
+ push = verify_push_len;
+ }
err = clock_gettime(CLOCK_MONOTONIC, &s->start);
if (err < 0)
perror("recv start time");
@@ -692,10 +783,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
if (data) {
int chunk_sz = opt->sendpage ?
- iov_length * cnt :
+ iov_length :
iov_length * iov_count;
- errno = msg_verify_data(&msg, recv, chunk_sz);
+ errno = msg_verify_data(&msg, recv, chunk_sz, &k, &bytes_cnt,
+ &check_cnt, &push);
if (errno) {
perror("data verify msg failed");
goto out_errno;
@@ -703,7 +795,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
if (recvp) {
errno = msg_verify_data(&msg_peek,
recvp,
- chunk_sz);
+ chunk_sz,
+ &k,
+ &bytes_cnt,
+ &check_cnt,
+ &push);
if (errno) {
perror("data verify msg_peek failed");
goto out_errno;
@@ -785,8 +881,6 @@ static int sendmsg_test(struct sockmap_options *opt)
rxpid = fork();
if (rxpid == 0) {
- if (txmsg_pop || txmsg_start_pop)
- iov_buf -= (txmsg_pop - txmsg_start_pop + 1);
if (opt->drop_expected || txmsg_ktls_skb_drop)
_exit(0);
@@ -811,7 +905,7 @@ static int sendmsg_test(struct sockmap_options *opt)
s.bytes_sent, sent_Bps, sent_Bps/giga,
s.bytes_recvd, recvd_Bps, recvd_Bps/giga,
peek_flag ? "(peek_msg)" : "");
- if (err && txmsg_cork)
+ if (err && err != -EDATAINTEGRITY && txmsg_cork)
err = 0;
exit(err ? 1 : 0);
} else if (rxpid == -1) {
@@ -1459,8 +1553,8 @@ static void test_send_many(struct sockmap_options *opt, int cgrp)
static void test_send_large(struct sockmap_options *opt, int cgrp)
{
- opt->iov_length = 256;
- opt->iov_count = 1024;
+ opt->iov_length = 8192;
+ opt->iov_count = 32;
opt->rate = 2;
test_exec(cgrp, opt);
}
@@ -1589,17 +1683,19 @@ static void test_txmsg_cork_hangs(int cgrp, struct sockmap_options *opt)
static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
{
/* Test basic start/end */
+ txmsg_pass = 1;
txmsg_start = 1;
txmsg_end = 2;
test_send(opt, cgrp);
/* Test >4k pull */
+ txmsg_pass = 1;
txmsg_start = 4096;
txmsg_end = 9182;
test_send_large(opt, cgrp);
/* Test pull + redirect */
- txmsg_redir = 0;
+ txmsg_redir = 1;
txmsg_start = 1;
txmsg_end = 2;
test_send(opt, cgrp);
@@ -1621,12 +1717,16 @@ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
{
+ bool data = opt->data_test;
+
/* Test basic pop */
+ txmsg_pass = 1;
txmsg_start_pop = 1;
txmsg_pop = 2;
test_send_many(opt, cgrp);
/* Test pop with >4k */
+ txmsg_pass = 1;
txmsg_start_pop = 4096;
txmsg_pop = 4096;
test_send_large(opt, cgrp);
@@ -1637,6 +1737,12 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
txmsg_pop = 2;
test_send_many(opt, cgrp);
+ /* TODO: Test for pop + cork should be different,
+ * - It makes the layout of the received data difficult
+ * - It makes it hard to calculate the total_bytes in the recvmsg
+ * Temporarily skip the data integrity test for this case now.
+ */
+ opt->data_test = false;
/* Test pop + cork */
txmsg_redir = 0;
txmsg_cork = 512;
@@ -1650,16 +1756,21 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
txmsg_start_pop = 1;
txmsg_pop = 2;
test_send_many(opt, cgrp);
+ opt->data_test = data;
}
static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
{
+ bool data = opt->data_test;
+
/* Test basic push */
+ txmsg_pass = 1;
txmsg_start_push = 1;
txmsg_end_push = 1;
test_send(opt, cgrp);
/* Test push 4kB >4k */
+ txmsg_pass = 1;
txmsg_start_push = 4096;
txmsg_end_push = 4096;
test_send_large(opt, cgrp);
@@ -1670,16 +1781,24 @@ static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
txmsg_end_push = 2;
test_send_many(opt, cgrp);
+ /* TODO: Test for push + cork should be different,
+ * - It makes the layout of the received data difficult
+ * - It makes it hard to calculate the total_bytes in the recvmsg
+ * Temporarily skip the data integrity test for this case now.
+ */
+ opt->data_test = false;
/* Test push + cork */
txmsg_redir = 0;
txmsg_cork = 512;
txmsg_start_push = 1;
txmsg_end_push = 2;
test_send_many(opt, cgrp);
+ opt->data_test = data;
}
static void test_txmsg_push_pop(int cgrp, struct sockmap_options *opt)
{
+ txmsg_pass = 1;
txmsg_start_push = 1;
txmsg_end_push = 10;
txmsg_start_pop = 5;
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 0d84dd1f38b6..8a2ff81d8350 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -140,10 +140,11 @@
.result = REJECT,
},
{
- "precise: ST insn causing spi > allocated_stack",
+ "precise: ST zero to stack insn is supported",
.insns = {
BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0),
+ /* not a register spill, so we stop precision propagation for R4 here */
BPF_ST_MEM(BPF_DW, BPF_REG_3, -8, 0),
BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
BPF_MOV64_IMM(BPF_REG_0, -1),
@@ -157,11 +158,11 @@
mark_precise: frame0: last_idx 4 first_idx 2\
mark_precise: frame0: regs=r4 stack= before 4\
mark_precise: frame0: regs=r4 stack= before 3\
- mark_precise: frame0: regs= stack=-8 before 2\
- mark_precise: frame0: falling back to forcing all scalars precise\
- force_precise: frame0: forcing r0 to be precise\
mark_precise: frame0: last_idx 5 first_idx 5\
- mark_precise: frame0: parent state regs= stack=:",
+ mark_precise: frame0: parent state regs=r0 stack=:\
+ mark_precise: frame0: last_idx 4 first_idx 2\
+ mark_precise: frame0: regs=r0 stack= before 4\
+ 5: R0=-1 R4=0",
.result = VERBOSE_ACCEPT,
.retval = -1,
},
@@ -169,6 +170,8 @@
"precise: STX insn causing spi > allocated_stack",
.insns = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+ /* make later reg spill more interesting by having somewhat known scalar */
+ BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xff),
BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0),
BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, -8),
@@ -179,18 +182,21 @@
},
.prog_type = BPF_PROG_TYPE_XDP,
.flags = BPF_F_TEST_STATE_FREQ,
- .errstr = "mark_precise: frame0: last_idx 6 first_idx 6\
+ .errstr = "mark_precise: frame0: last_idx 7 first_idx 7\
mark_precise: frame0: parent state regs=r4 stack=:\
- mark_precise: frame0: last_idx 5 first_idx 3\
- mark_precise: frame0: regs=r4 stack= before 5\
- mark_precise: frame0: regs=r4 stack= before 4\
- mark_precise: frame0: regs= stack=-8 before 3\
- mark_precise: frame0: falling back to forcing all scalars precise\
- force_precise: frame0: forcing r0 to be precise\
- force_precise: frame0: forcing r0 to be precise\
- force_precise: frame0: forcing r0 to be precise\
- force_precise: frame0: forcing r0 to be precise\
- mark_precise: frame0: last_idx 6 first_idx 6\
+ mark_precise: frame0: last_idx 6 first_idx 4\
+ mark_precise: frame0: regs=r4 stack= before 6: (b7) r0 = -1\
+ mark_precise: frame0: regs=r4 stack= before 5: (79) r4 = *(u64 *)(r10 -8)\
+ mark_precise: frame0: regs= stack=-8 before 4: (7b) *(u64 *)(r3 -8) = r0\
+ mark_precise: frame0: parent state regs=r0 stack=:\
+ mark_precise: frame0: last_idx 3 first_idx 3\
+ mark_precise: frame0: regs=r0 stack= before 3: (55) if r3 != 0x7b goto pc+0\
+ mark_precise: frame0: regs=r0 stack= before 2: (bf) r3 = r10\
+ mark_precise: frame0: regs=r0 stack= before 1: (57) r0 &= 255\
+ mark_precise: frame0: parent state regs=r0 stack=:\
+ mark_precise: frame0: last_idx 0 first_idx 0\
+ mark_precise: frame0: regs=r0 stack= before 0: (85) call bpf_get_prandom_u32#7\
+ mark_precise: frame0: last_idx 7 first_idx 7\
mark_precise: frame0: parent state regs= stack=:",
.result = VERBOSE_ACCEPT,
.retval = -1,
diff --git a/tools/testing/selftests/mount_setattr/mount_setattr_test.c b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
index c6a8c732b802..304e6422a1f1 100644
--- a/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+++ b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
@@ -1026,7 +1026,7 @@ FIXTURE_SETUP(mount_setattr_idmapped)
"size=100000,mode=700"), 0);
ASSERT_EQ(mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV,
- "size=100000,mode=700"), 0);
+ "size=2m,mode=700"), 0);
ASSERT_EQ(mkdir("/mnt/A", 0777), 0);
diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
index d65fdd407d73..1c0dd2f78167 100755
--- a/tools/testing/selftests/net/pmtu.sh
+++ b/tools/testing/selftests/net/pmtu.sh
@@ -1961,7 +1961,7 @@ check_running() {
pid=${1}
cmd=${2}
- [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "{cmd}" ]
+ [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "${cmd}" ]
}
test_cleanup_vxlanX_exception() {
diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
index 0f6cca61ec94..a85ae8148db8 100644
--- a/tools/testing/selftests/resctrl/fill_buf.c
+++ b/tools/testing/selftests/resctrl/fill_buf.c
@@ -51,29 +51,6 @@ static void mem_flush(unsigned char *buf, size_t buf_size)
sb();
}
-static void *malloc_and_init_memory(size_t buf_size)
-{
- void *p = NULL;
- uint64_t *p64;
- size_t s64;
- int ret;
-
- ret = posix_memalign(&p, PAGE_SIZE, buf_size);
- if (ret < 0)
- return NULL;
-
- p64 = (uint64_t *)p;
- s64 = buf_size / sizeof(uint64_t);
-
- while (s64 > 0) {
- *p64 = (uint64_t)rand();
- p64 += (CL_SIZE / sizeof(uint64_t));
- s64 -= (CL_SIZE / sizeof(uint64_t));
- }
-
- return p;
-}
-
static int fill_one_span_read(unsigned char *buf, size_t buf_size)
{
unsigned char *end_ptr = buf + buf_size;
@@ -135,41 +112,48 @@ static int fill_cache_write(unsigned char *buf, size_t buf_size, bool once)
return 0;
}
-static int fill_cache(size_t buf_size, int memflush, int op, bool once)
+static unsigned char *alloc_buffer(size_t buf_size, int memflush)
{
- unsigned char *buf;
+ void *buf = NULL;
+ uint64_t *p64;
+ ssize_t s64;
int ret;
- buf = malloc_and_init_memory(buf_size);
- if (!buf)
- return -1;
-
- /* Flush the memory before using to avoid "cache hot pages" effect */
- if (memflush)
- mem_flush(buf, buf_size);
-
- if (op == 0)
- ret = fill_cache_read(buf, buf_size, once);
- else
- ret = fill_cache_write(buf, buf_size, once);
+ ret = posix_memalign(&buf, PAGE_SIZE, buf_size);
+ if (ret < 0)
+ return NULL;
- free(buf);
+ /* Initialize the buffer */
+ p64 = buf;
+ s64 = buf_size / sizeof(uint64_t);
- if (ret) {
- printf("\n Error in fill cache read/write...\n");
- return -1;
+ while (s64 > 0) {
+ *p64 = (uint64_t)rand();
+ p64 += (CL_SIZE / sizeof(uint64_t));
+ s64 -= (CL_SIZE / sizeof(uint64_t));
}
+ /* Flush the memory before using to avoid "cache hot pages" effect */
+ if (memflush)
+ mem_flush(buf, buf_size);
- return 0;
+ return buf;
}
-int run_fill_buf(size_t span, int memflush, int op, bool once)
+int run_fill_buf(size_t buf_size, int memflush, int op, bool once)
{
- size_t cache_size = span;
+ unsigned char *buf;
int ret;
- ret = fill_cache(cache_size, memflush, op, once);
+ buf = alloc_buffer(buf_size, memflush);
+ if (!buf)
+ return -1;
+
+ if (op == 0)
+ ret = fill_cache_read(buf, buf_size, once);
+ else
+ ret = fill_cache_write(buf, buf_size, once);
+ free(buf);
if (ret) {
printf("\n Error in fill cache\n");
return -1;
diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
index dd3546655657..a848e9c75578 100644
--- a/tools/testing/selftests/resctrl/resctrl.h
+++ b/tools/testing/selftests/resctrl/resctrl.h
@@ -91,7 +91,7 @@ int write_bm_pid_to_resctrl(pid_t bm_pid, char *ctrlgrp, char *mongrp,
char *resctrl_val);
int perf_event_open(struct perf_event_attr *hw_event, pid_t pid, int cpu,
int group_fd, unsigned long flags);
-int run_fill_buf(size_t span, int memflush, int op, bool once);
+int run_fill_buf(size_t buf_size, int memflush, int op, bool once);
int resctrl_val(const char * const *benchmark_cmd, struct resctrl_val_param *param);
int mbm_bw_change(int cpu_no, const char * const *benchmark_cmd);
void tests_cleanup(void);
diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
index 45439e726e79..d77fdf356e98 100644
--- a/tools/testing/selftests/resctrl/resctrl_val.c
+++ b/tools/testing/selftests/resctrl/resctrl_val.c
@@ -102,13 +102,12 @@ void get_event_and_umask(char *cas_count_cfg, int count, bool op)
char *token[MAX_TOKENS];
int i = 0;
- strcat(cas_count_cfg, ",");
token[0] = strtok(cas_count_cfg, "=,");
for (i = 1; i < MAX_TOKENS; i++)
token[i] = strtok(NULL, "=,");
- for (i = 0; i < MAX_TOKENS; i++) {
+ for (i = 0; i < MAX_TOKENS - 1; i++) {
if (!token[i])
break;
if (strcmp(token[i], "event") == 0) {
diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c
index 7dd5668ea8a6..28f35620c499 100644
--- a/tools/testing/selftests/vDSO/parse_vdso.c
+++ b/tools/testing/selftests/vDSO/parse_vdso.c
@@ -222,8 +222,7 @@ void *vdso_sym(const char *version, const char *name)
ELF(Sym) *sym = &vdso_info.symtab[chain];
/* Check for a defined global or weak function w/ right name. */
- if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC &&
- ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE)
+ if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC)
continue;
if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL &&
ELF64_ST_BIND(sym->st_info) != STB_WEAK)
diff --git a/tools/testing/selftests/watchdog/watchdog-test.c b/tools/testing/selftests/watchdog/watchdog-test.c
index bc71cbca0dde..a1f506ba5578 100644
--- a/tools/testing/selftests/watchdog/watchdog-test.c
+++ b/tools/testing/selftests/watchdog/watchdog-test.c
@@ -334,7 +334,13 @@ int main(int argc, char *argv[])
printf("Watchdog Ticking Away!\n");
+ /*
+ * Register the signals
+ */
signal(SIGINT, term);
+ signal(SIGTERM, term);
+ signal(SIGKILL, term);
+ signal(SIGQUIT, term);
while (1) {
keep_alive();
diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
index 405ff262ca93..55500f901fbc 100755
--- a/tools/testing/selftests/wireguard/netns.sh
+++ b/tools/testing/selftests/wireguard/netns.sh
@@ -332,6 +332,7 @@ waitiface $netns1 vethc
waitiface $netns2 veths
n0 bash -c 'printf 1 > /proc/sys/net/ipv4/ip_forward'
+[[ -e /proc/sys/net/netfilter/nf_conntrack_udp_timeout ]] || modprobe nf_conntrack
n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout'
n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout_stream'
n0 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 10.0.0.0/24 -j SNAT --to 10.0.0.1