Merge remote-tracking branch 'stable/linux-5.10.y' into rpi-5.10.y

Author: Dom Cobley
Date:   2021-12-16 12:12:00 +00:00
300 changed files with 3985 additions and 1381 deletions


@@ -91,6 +91,14 @@ properties:
       compensate for the board being designed with the lanes
       swapped.
+
+  enet-phy-lane-no-swap:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description:
+      If set, indicates that PHY will disable swap of the
+      TX/RX lanes. This property allows the PHY to work correcly after
+      e.g. wrong bootstrap configuration caused by issues in PCB
+      layout design.

   eee-broken-100tx:
     $ref: /schemas/types.yaml#definitions/flag
     description:


@@ -11,16 +11,13 @@ compiler [1]_. They are useful for runtime instrumentation and static analysis.
 We can analyse, change and add further code during compilation via
 callbacks [2]_, GIMPLE [3]_, IPA [4]_ and RTL passes [5]_.

-The GCC plugin infrastructure of the kernel supports all gcc versions from
-4.5 to 6.0, building out-of-tree modules, cross-compilation and building in a
-separate directory.
-Plugin source files have to be compilable by both a C and a C++ compiler as well
-because gcc versions 4.5 and 4.6 are compiled by a C compiler,
-gcc-4.7 can be compiled by a C or a C++ compiler,
-and versions 4.8+ can only be compiled by a C++ compiler.
+The GCC plugin infrastructure of the kernel supports building out-of-tree
+modules, cross-compilation and building in a separate directory.
+Plugin source files have to be compilable by a C++ compiler.

-Currently the GCC plugin infrastructure supports only the x86, arm, arm64 and
-powerpc architectures.
+Currently the GCC plugin infrastructure supports only some architectures.
+Grep "select HAVE_GCC_PLUGINS" to find out which architectures support
+GCC plugins.

 This infrastructure was ported from grsecurity [6]_ and PaX [7]_.
@@ -47,20 +44,13 @@ Files
    This is a compatibility header for GCC plugins.
    It should be always included instead of individual gcc headers.

-**$(src)/scripts/gcc-plugin.sh**
-
-   This script checks the availability of the included headers in
-   gcc-common.h and chooses the proper host compiler to build the plugins
-   (gcc-4.7 can be built by either gcc or g++).
-
 **$(src)/scripts/gcc-plugins/gcc-generate-gimple-pass.h,
 $(src)/scripts/gcc-plugins/gcc-generate-ipa-pass.h,
 $(src)/scripts/gcc-plugins/gcc-generate-simple_ipa-pass.h,
 $(src)/scripts/gcc-plugins/gcc-generate-rtl-pass.h**

    These headers automatically generate the registration structures for
-   GIMPLE, SIMPLE_IPA, IPA and RTL passes. They support all gcc versions
-   from 4.5 to 6.0.
+   GIMPLE, SIMPLE_IPA, IPA and RTL passes.
    They should be preferred to creating the structures by hand.
@@ -68,21 +58,25 @@ Usage
 =====

 You must install the gcc plugin headers for your gcc version,
-e.g., on Ubuntu for gcc-4.9::
+e.g., on Ubuntu for gcc-10::

-	apt-get install gcc-4.9-plugin-dev
+	apt-get install gcc-10-plugin-dev

 Or on Fedora::

 	dnf install gcc-plugin-devel

-Enable a GCC plugin based feature in the kernel config::
+Enable the GCC plugin infrastructure and some plugin(s) you want to use
+in the kernel config::

-	CONFIG_GCC_PLUGIN_CYC_COMPLEXITY = y
+	CONFIG_GCC_PLUGINS=y
+	CONFIG_GCC_PLUGIN_CYC_COMPLEXITY=y
+	CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y
+	...

-To compile only the plugin(s)::
+To compile the minimum tool set including the plugin(s)::

-	make gcc-plugins
+	make scripts

 or just run the kernel make and compile the whole kernel with
 the cyclomatic complexity GCC plugin.
@@ -91,7 +85,8 @@ the cyclomatic complexity GCC plugin.
 4. How to add a new GCC plugin
 ==============================

-The GCC plugins are in $(src)/scripts/gcc-plugins/. You can use a file or a directory
-here. It must be added to $(src)/scripts/gcc-plugins/Makefile,
-$(src)/scripts/Makefile.gcc-plugins and $(src)/arch/Kconfig.
+The GCC plugins are in scripts/gcc-plugins/. You need to put plugin source files
+right under scripts/gcc-plugins/. Creating subdirectories is not supported.
+It must be added to scripts/gcc-plugins/Makefile, scripts/Makefile.gcc-plugins
+and a relevant Kconfig file.
 See the cyc_complexity_plugin.c (CONFIG_GCC_PLUGIN_CYC_COMPLEXITY) GCC plugin.
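(For orientation only — a hypothetical minimal plugin skeleton, not taken from
this commit; it assumes the gcc-common.h compatibility header described above
and the standard GCC plugin entry points:)

	#include "gcc-common.h"

	__visible int plugin_is_GPL_compatible;

	static struct plugin_info demo_plugin_info = {
		.version	= "1",
		.help		= "demo: do-nothing example plugin\n",
	};

	__visible int plugin_init(struct plugin_name_args *plugin_info,
				  struct plugin_gcc_version *version)
	{
		/* Refuse to load into a gcc we were not built against. */
		if (!plugin_default_version_check(version, &gcc_version))
			return 1;

		register_callback(plugin_info->base_name, PLUGIN_INFO, NULL,
				  &demo_plugin_info);
		return 0;
	}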


@@ -439,11 +439,9 @@ preemption. The following substitution works on both kernels::
   spin_lock(&p->lock);
   p->count += this_cpu_read(var2);

-On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
-which makes the above code fully equivalent. On a PREEMPT_RT kernel
 migrate_disable() ensures that the task is pinned on the current CPU which
 in turn guarantees that the per-CPU access to var1 and var2 are staying on
-the same CPU.
+the same CPU while the task remains preemptible.

 The migrate_disable() substitution is not valid for the following
 scenario::
@@ -456,9 +454,8 @@ scenario::
     p = this_cpu_ptr(&var1);
     p->val = func2();

-While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
-here migrate_disable() does not protect against reentrancy from a
-preempting task. A correct substitution for this case is::
+This breaks because migrate_disable() does not protect against reentrancy from
+a preempting task. A correct substitution for this case is::

   func()
   {
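(The corrected example is truncated by the hunk boundary above. For
orientation, a plausible shape of such a substitution — an illustrative
sketch, not the exact text of locktypes.rst — serializes the per-CPU data
with a lock carried in the data itself, which is safe against a preempting
task on the same CPU:)

	struct foo {
		spinlock_t lock;
		int val;
	};

	static DEFINE_PER_CPU(struct foo, var1);

	void func(void)
	{
		struct foo *p;

		migrate_disable();	/* stay on this CPU ...         */
		p = this_cpu_ptr(&var1);
		spin_lock(&p->lock);	/* ... and lock out other tasks */
		p->val = func2();
		spin_unlock(&p->lock);
		migrate_enable();
	}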


@@ -7341,7 +7341,6 @@ L: linux-hardening@vger.kernel.org
 S:	Maintained
 F:	Documentation/kbuild/gcc-plugins.rst
 F:	scripts/Makefile.gcc-plugins
-F:	scripts/gcc-plugin.sh
 F:	scripts/gcc-plugins/

 GCOV BASED KERNEL PROFILING


@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 83
+SUBLEVEL = 85
 EXTRAVERSION =
 NAME = Dare mighty things


@@ -83,7 +83,7 @@
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)

 /* TCR_EL2 Registers bits */
-#define TCR_EL2_RES1		((1 << 31) | (1 << 23))
+#define TCR_EL2_RES1		((1U << 31) | (1 << 23))
 #define TCR_EL2_TBI		(1 << 20)
 #define TCR_EL2_PS_SHIFT	16
 #define TCR_EL2_PS_MASK		(7 << TCR_EL2_PS_SHIFT)
@@ -268,7 +268,7 @@
 #define CPTR_EL2_TFP_SHIFT 10

 /* Hyp Coprocessor Trap Register */
-#define CPTR_EL2_TCPAC	(1 << 31)
+#define CPTR_EL2_TCPAC	(1U << 31)
 #define CPTR_EL2_TAM	(1 << 30)
 #define CPTR_EL2_TTA	(1 << 20)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
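(Why the added U matters: in C, shifting a 1 into the sign bit of a signed
int is undefined behaviour, and UBSAN reports it even though compilers
usually produce the expected constant. A small stand-alone illustration, not
part of the commit:)

	#include <stdio.h>

	int main(void)
	{
		unsigned int res1 = (1U << 31) | (1 << 23);	/* well defined */
		/* int bad = 1 << 31; */	/* undefined: shift into sign bit */
		printf("%#x\n", res1);		/* prints 0x80800000 */
		return 0;
	}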


@@ -77,11 +77,17 @@
 .endm

 SYM_CODE_START(ftrace_regs_caller)
+#ifdef BTI_C
+	BTI_C
+#endif
 	ftrace_regs_entry	1
 	b	ftrace_common
 SYM_CODE_END(ftrace_regs_caller)

 SYM_CODE_START(ftrace_caller)
+#ifdef BTI_C
+	BTI_C
+#endif
 	ftrace_regs_entry	0
 	b	ftrace_common
 SYM_CODE_END(ftrace_caller)


@@ -211,7 +211,7 @@ asmlinkage void do_trap_illinsn(struct pt_regs *regs)
 asmlinkage void do_trap_fpe(struct pt_regs *regs)
 {
-#ifdef CONFIG_CPU_HAS_FP
+#ifdef CONFIG_CPU_HAS_FPU
 	return fpu_fpe(regs);
 #else
 	do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->pc,
@@ -221,7 +221,7 @@ asmlinkage void do_trap_fpe(struct pt_regs *regs)
 asmlinkage void do_trap_priv(struct pt_regs *regs)
 {
-#ifdef CONFIG_CPU_HAS_FP
+#ifdef CONFIG_CPU_HAS_FPU
 	if (user_mode(regs) && fpu_libc_helper(regs))
 		return;
 #endif


@@ -17,7 +17,12 @@
 # Mike Shaver, Helge Deller and Martin K. Petersen
 #

+ifdef CONFIG_PARISC_SELF_EXTRACT
+boot := arch/parisc/boot
+KBUILD_IMAGE := $(boot)/bzImage
+else
 KBUILD_IMAGE := vmlinuz
+endif

 NM		= sh $(srctree)/arch/parisc/nm
 CHECKFLAGS	+= -D__hppa__=1


@@ -39,6 +39,7 @@ verify "$3"
 if [ -n "${INSTALLKERNEL}" ]; then
   if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
   if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+  if [ -x /usr/sbin/${INSTALLKERNEL} ]; then exec /usr/sbin/${INSTALLKERNEL} "$@"; fi
 fi

 # Default install


@@ -252,27 +252,13 @@ void __init time_init(void)
 static int __init init_cr16_clocksource(void)
 {
 	/*
-	 * The cr16 interval timers are not syncronized across CPUs on
-	 * different sockets, so mark them unstable and lower rating on
-	 * multi-socket SMP systems.
+	 * The cr16 interval timers are not syncronized across CPUs, even if
+	 * they share the same socket.
 	 */
 	if (num_online_cpus() > 1 && !running_on_qemu) {
-		int cpu;
-		unsigned long cpu0_loc;
-		cpu0_loc = per_cpu(cpu_data, 0).cpu_loc;
-
-		for_each_online_cpu(cpu) {
-			if (cpu == 0)
-				continue;
-			if ((cpu0_loc != 0) &&
-			    (cpu0_loc == per_cpu(cpu_data, cpu).cpu_loc))
-				continue;
-
-			clocksource_cr16.name = "cr16_unstable";
-			clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
-			clocksource_cr16.rating = 0;
-			break;
-		}
+		clocksource_cr16.name = "cr16_unstable";
+		clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
+		clocksource_cr16.rating = 0;
 	}

 	/* XXX: We may want to mark sched_clock stable here if cr16 clocks are


@@ -1034,15 +1034,6 @@ static phys_addr_t ddw_memory_hotplug_max(void)
 	phys_addr_t max_addr = memory_hotplug_max();
 	struct device_node *memory;

-	/*
-	 * The "ibm,pmemory" can appear anywhere in the address space.
-	 * Assuming it is still backed by page structs, set the upper limit
-	 * for the huge DMA window as MAX_PHYSMEM_BITS.
-	 */
-	if (of_find_node_by_type(NULL, "ibm,pmemory"))
-		return (sizeof(phys_addr_t) * 8 <= MAX_PHYSMEM_BITS) ?
-			(phys_addr_t) -1 : (1ULL << MAX_PHYSMEM_BITS);
-
 	for_each_node_by_type(memory, "memory") {
 		unsigned long start, size;
 		int n_mem_addr_cells, n_mem_size_cells, len;


@@ -14,12 +14,13 @@
 /* I/O Map */
 #define ZPCI_IOMAP_SHIFT		48
-#define ZPCI_IOMAP_ADDR_BASE		0x8000000000000000UL
+#define ZPCI_IOMAP_ADDR_SHIFT		62
+#define ZPCI_IOMAP_ADDR_BASE		(1UL << ZPCI_IOMAP_ADDR_SHIFT)
 #define ZPCI_IOMAP_ADDR_OFF_MASK	((1UL << ZPCI_IOMAP_SHIFT) - 1)
 #define ZPCI_IOMAP_MAX_ENTRIES		\
-	((ULONG_MAX - ZPCI_IOMAP_ADDR_BASE + 1) / (1UL << ZPCI_IOMAP_SHIFT))
+	(1UL << (ZPCI_IOMAP_ADDR_SHIFT - ZPCI_IOMAP_SHIFT))
 #define ZPCI_IOMAP_ADDR_IDX_MASK	\
-	(~ZPCI_IOMAP_ADDR_OFF_MASK - ZPCI_IOMAP_ADDR_BASE)
+	((ZPCI_IOMAP_ADDR_BASE - 1) & ~ZPCI_IOMAP_ADDR_OFF_MASK)

 struct zpci_iomap_entry {
 	u32 fh;
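(For orientation — the constants the new definitions work out to, computed in
a stand-alone sketch that mirrors the macros; assumes an LP64 target such as
s390x, and is not part of the commit:)

	#include <stdio.h>

	#define ZPCI_IOMAP_SHIFT		48
	#define ZPCI_IOMAP_ADDR_SHIFT		62
	#define ZPCI_IOMAP_ADDR_BASE		(1UL << ZPCI_IOMAP_ADDR_SHIFT)
	#define ZPCI_IOMAP_ADDR_OFF_MASK	((1UL << ZPCI_IOMAP_SHIFT) - 1)
	#define ZPCI_IOMAP_MAX_ENTRIES		\
		(1UL << (ZPCI_IOMAP_ADDR_SHIFT - ZPCI_IOMAP_SHIFT))
	#define ZPCI_IOMAP_ADDR_IDX_MASK	\
		((ZPCI_IOMAP_ADDR_BASE - 1) & ~ZPCI_IOMAP_ADDR_OFF_MASK)

	int main(void)
	{
		printf("base    %#lx\n", ZPCI_IOMAP_ADDR_BASE);     /* 0x4000000000000000 */
		printf("entries %lu\n",  ZPCI_IOMAP_MAX_ENTRIES);   /* 16384 */
		printf("idxmask %#lx\n", ZPCI_IOMAP_ADDR_IDX_MASK); /* 0x3fff000000000000 */
		return 0;
	}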


@@ -845,9 +845,6 @@ static void __init setup_memory(void)
 		storage_key_init_range(start, end);
 	psw_set_key(PAGE_DEFAULT_KEY);
-
-	/* Only cosmetics */
-	memblock_enforce_memory_limit(memblock_end_of_DRAM());
 }

 /*


@@ -1939,6 +1939,7 @@ config EFI
 	depends on ACPI
 	select UCS2_STRING
 	select EFI_RUNTIME_WRAPPERS
+	select ARCH_USE_MEMREMAP_PROT
 	help
 	  This enables the kernel to use EFI runtime services that are
 	  available (such as the EFI variable services).


@@ -575,6 +575,10 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 	ud2
 1:
 #endif
+#ifdef CONFIG_XEN_PV
+	ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
+#endif

 	POP_REGS pop_rdi=0

 	/*
@@ -669,7 +673,7 @@ native_irq_return_ldt:
 	 */
 	pushq	%rdi				/* Stash user RDI */
-	SWAPGS					/* to kernel GS */
+	swapgs					/* to kernel GS */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi	/* to kernel CR3 */

 	movq	PER_CPU_VAR(espfix_waddr), %rdi
@@ -699,7 +703,7 @@ native_irq_return_ldt:
 	orq	PER_CPU_VAR(espfix_stack), %rax

 	SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi
-	SWAPGS					/* to user GS */
+	swapgs					/* to user GS */
 	popq	%rdi				/* Restore user RDI */

 	movq	%rax, %rsp
@@ -932,6 +936,7 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 .Lparanoid_entry_checkgs:
 	/* EBX = 1 -> kernel GSBASE active, no restore required */
 	movl	$1, %ebx
+
 	/*
 	 * The kernel-enforced convention is a negative GSBASE indicates
 	 * a kernel value. No SWAPGS needed on entry and exit.
@@ -939,21 +944,14 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 	movl	$MSR_GS_BASE, %ecx
 	rdmsr
 	testl	%edx, %edx
-	jns	.Lparanoid_entry_swapgs
-	ret
-
-.Lparanoid_entry_swapgs:
-	SWAPGS
-
-	/*
-	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
-	 * unconditional CR3 write, even in the PTI case. So do an lfence
-	 * to prevent GS speculation, regardless of whether PTI is enabled.
-	 */
-	FENCE_SWAPGS_KERNEL_ENTRY
+	js	.Lparanoid_kernel_gsbase

 	/* EBX = 0 -> SWAPGS required on exit */
 	xorl	%ebx, %ebx
+	swapgs
+.Lparanoid_kernel_gsbase:
+
+	FENCE_SWAPGS_KERNEL_ENTRY

 	ret
 SYM_CODE_END(paranoid_entry)
@@ -1001,7 +999,7 @@ SYM_CODE_START_LOCAL(paranoid_exit)
 	jnz		restore_regs_and_return_to_kernel

 	/* We are returning to a context with user GSBASE */
-	SWAPGS_UNSAFE_STACK
+	swapgs
 	jmp		restore_regs_and_return_to_kernel
 SYM_CODE_END(paranoid_exit)
@@ -1035,11 +1033,6 @@ SYM_CODE_START_LOCAL(error_entry)
 	pushq	%r12
 	ret

-.Lerror_entry_done_lfence:
-	FENCE_SWAPGS_KERNEL_ENTRY
-.Lerror_entry_done:
-	ret
-
 	/*
 	 * There are two places in the kernel that can potentially fault with
 	 * usergs. Handle them here.  B stepping K8s sometimes report a
@@ -1062,8 +1055,14 @@ SYM_CODE_START_LOCAL(error_entry)
 	 * .Lgs_change's error handler with kernel gsbase.
 	 */
 	SWAPGS
-	FENCE_SWAPGS_USER_ENTRY
-	jmp .Lerror_entry_done
+
+	/*
+	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
+	 * kernel or user gsbase.
+	 */
+.Lerror_entry_done_lfence:
+	FENCE_SWAPGS_KERNEL_ENTRY
+	ret

 .Lbstep_iret:
 	/* Fix truncated RIP */
@@ -1426,7 +1425,7 @@ nmi_no_fsgsbase:
 	jnz	nmi_restore

 nmi_swapgs:
-	SWAPGS_UNSAFE_STACK
+	swapgs
 nmi_restore:
 	POP_REGS


@@ -131,18 +131,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
 #define SAVE_FLAGS(x)		pushfq; popq %rax
 #endif

-#define SWAPGS	swapgs
-/*
- * Currently paravirt can't handle swapgs nicely when we
- * don't have a stack we can rely on (such as a user space
- * stack). So we either find a way around these or just fault
- * and emulate if a guest tries to call swapgs directly.
- *
- * Either way, this is a good way to document that we don't
- * have a reliable stack. x86_64 only.
- */
-#define SWAPGS_UNSAFE_STACK	swapgs
-
 #define INTERRUPT_RETURN	jmp native_iret
 #define USERGS_SYSRET64				\
 	swapgs;					\
@@ -170,6 +158,14 @@ static __always_inline int arch_irqs_disabled(void)
 	return arch_irqs_disabled_flags(flags);
 }
+#else
+#ifdef CONFIG_X86_64
+#ifdef CONFIG_XEN_PV
+#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
+#else
+#define SWAPGS	swapgs
+#endif
+#endif
 #endif /* !__ASSEMBLY__ */

 #endif


@@ -85,7 +85,7 @@
 	KVM_ARCH_REQ_FLAGS(25, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_TLB_FLUSH_CURRENT	KVM_ARCH_REQ(26)
 #define KVM_REQ_TLB_FLUSH_GUEST \
-	KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_NO_WAKEUP)
+	KVM_ARCH_REQ_FLAGS(27, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_APF_READY		KVM_ARCH_REQ(28)
 #define KVM_REQ_MSR_FILTER_CHANGED	KVM_ARCH_REQ(29)


@@ -776,26 +776,6 @@ extern void default_banner(void);
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_PARAVIRT_XXL
-/*
- * If swapgs is used while the userspace stack is still current,
- * there's no way to call a pvop. The PV replacement *must* be
- * inlined, or the swapgs instruction must be trapped and emulated.
- */
-#define SWAPGS_UNSAFE_STACK						\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs), swapgs)
-
-/*
- * Note: swapgs is very special, and in practise is either going to be
- * implemented with a single "swapgs" instruction or something very
- * special. Either way, we don't need to save any registers for
- * it.
- */
-#define SWAPGS								\
-	PARA_SITE(PARA_PATCH(PV_CPU_swapgs),				\
-		  ANNOTATE_RETPOLINE_SAFE;				\
-		  call PARA_INDIRECT(pv_ops+PV_CPU_swapgs);		\
-		 )
-
 #define USERGS_SYSRET64							\
 	PARA_SITE(PARA_PATCH(PV_CPU_usergs_sysret64),			\
 		  ANNOTATE_RETPOLINE_SAFE;				\

View File

@@ -169,8 +169,6 @@ struct pv_cpu_ops {
 			   frame set up. */
 	void (*iret)(void);

-	void (*swapgs)(void);
-
 	void (*start_context_switch)(struct task_struct *prev);
 	void (*end_context_switch)(struct task_struct *next);
 #endif


@@ -15,7 +15,6 @@ int main(void)
 #ifdef CONFIG_PARAVIRT_XXL
 	OFFSET(PV_CPU_usergs_sysret64, paravirt_patch_template,
 	       cpu.usergs_sysret64);
-	OFFSET(PV_CPU_swapgs, paravirt_patch_template, cpu.swapgs);
 #ifdef CONFIG_DEBUG_ENTRY
 	OFFSET(PV_IRQ_save_fl, paravirt_patch_template, irq.save_fl);
 #endif


@@ -312,7 +312,6 @@ struct paravirt_patch_template pv_ops = {
 	.cpu.usergs_sysret64	= native_usergs_sysret64,
 	.cpu.iret		= native_iret,
-	.cpu.swapgs		= native_swapgs,

 #ifdef CONFIG_X86_IOPL_IOPERM
 	.cpu.invalidate_io_bitmap	= native_tss_invalidate_io_bitmap,


@@ -28,7 +28,6 @@ struct patch_xxl {
 	const unsigned char	irq_restore_fl[2];
 	const unsigned char	cpu_wbinvd[2];
 	const unsigned char	cpu_usergs_sysret64[6];
-	const unsigned char	cpu_swapgs[3];
 	const unsigned char	mov64[3];
 };
@@ -43,7 +42,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
 	.cpu_usergs_sysret64	= { 0x0f, 0x01, 0xf8,
 				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
-	.cpu_swapgs		= { 0x0f, 0x01, 0xf8 },	// swapgs
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
 };
@@ -86,7 +84,6 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);

 	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
-	PATCH_CASE(cpu, swapgs, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
 #endif


@@ -260,11 +260,6 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
 			   char *dst, char *buf, size_t size)
 {
 	unsigned long error_code = X86_PF_PROT | X86_PF_WRITE;
-	char __user *target = (char __user *)dst;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8  d1;

 	/*
 	 * This function uses __put_user() independent of whether kernel or user
@@ -286,26 +281,42 @@ static enum es_result vc_write_mem(struct es_em_ctxt *ctxt,
 	 * instructions here would cause infinite nesting.
 	 */
 	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *target = (u8 __user *)dst;
+
 		memcpy(&d1, buf, 1);
 		if (__put_user(d1, target))
 			goto fault;
 		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *target = (u16 __user *)dst;
+
 		memcpy(&d2, buf, 2);
 		if (__put_user(d2, target))
 			goto fault;
 		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *target = (u32 __user *)dst;
+
 		memcpy(&d4, buf, 4);
 		if (__put_user(d4, target))
 			goto fault;
 		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *target = (u64 __user *)dst;
+
 		memcpy(&d8, buf, 8);
 		if (__put_user(d8, target))
 			goto fault;
 		break;
+	}
 	default:
 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
 		return ES_UNSUPPORTED;
@@ -328,11 +339,6 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 			  char *src, char *buf, size_t size)
 {
 	unsigned long error_code = X86_PF_PROT;
-	char __user *s = (char __user *)src;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8  d1;

 	/*
 	 * This function uses __get_user() independent of whether kernel or user
@@ -354,26 +360,41 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 	 * instructions here would cause infinite nesting.
 	 */
 	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *s = (u8 __user *)src;
+
 		if (__get_user(d1, s))
 			goto fault;
 		memcpy(buf, &d1, 1);
 		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *s = (u16 __user *)src;
+
 		if (__get_user(d2, s))
 			goto fault;
 		memcpy(buf, &d2, 2);
 		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *s = (u32 __user *)src;
+
 		if (__get_user(d4, s))
 			goto fault;
 		memcpy(buf, &d4, 4);
 		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *s = (u64 __user *)src;
+
 		if (__get_user(d8, s))
 			goto fault;
 		memcpy(buf, &d8, 8);
 		break;
+	}
 	default:
 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
 		return ES_UNSUPPORTED;


@@ -1178,6 +1178,12 @@ void mark_tsc_unstable(char *reason)
 EXPORT_SYMBOL_GPL(mark_tsc_unstable);

+static void __init tsc_disable_clocksource_watchdog(void)
+{
+	clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+	clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+}
+
 static void __init check_system_tsc_reliable(void)
 {
 #if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
@@ -1194,6 +1200,23 @@ static void __init check_system_tsc_reliable(void)
 #endif
 	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
 		tsc_clocksource_reliable = 1;
+
+	/*
+	 * Disable the clocksource watchdog when the system has:
+	 *  - TSC running at constant frequency
+	 *  - TSC which does not stop in C-States
+	 *  - the TSC_ADJUST register which allows to detect even minimal
+	 *    modifications
+	 *  - not more than two sockets. As the number of sockets cannot be
+	 *    evaluated at the early boot stage where this has to be
+	 *    invoked, check the number of online memory nodes as a
+	 *    fallback solution which is an reasonable estimate.
+	 */
+	if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
+	    boot_cpu_has(X86_FEATURE_NONSTOP_TSC) &&
+	    boot_cpu_has(X86_FEATURE_TSC_ADJUST) &&
+	    nr_online_nodes <= 2)
+		tsc_disable_clocksource_watchdog();
 }

 /*
@@ -1385,9 +1408,6 @@ static int __init init_tsc_clocksource(void)
 	if (tsc_unstable)
 		goto unreg;

-	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
-
 	if (boot_cpu_has(X86_FEATURE_NONSTOP_TSC_S3))
 		clocksource_tsc.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
@@ -1525,7 +1545,7 @@ void __init tsc_init(void)
 	}

 	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+		tsc_disable_clocksource_watchdog();

 	clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
 	detect_art();


@@ -30,6 +30,7 @@ struct tsc_adjust {
 };

 static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust);
+static struct timer_list tsc_sync_check_timer;

 /*
  * TSC's on different sockets may be reset asynchronously.
@@ -77,6 +78,46 @@ void tsc_verify_tsc_adjust(bool resume)
 	}
 }

+/*
+ * Normally the tsc_sync will be checked every time system enters idle
+ * state, but there is still caveat that a system won't enter idle,
+ * either because it's too busy or configured purposely to not enter
+ * idle.
+ *
+ * So setup a periodic timer (every 10 minutes) to make sure the check
+ * is always on.
+ */
+
+#define SYNC_CHECK_INTERVAL		(HZ * 600)
+
+static void tsc_sync_check_timer_fn(struct timer_list *unused)
+{
+	int next_cpu;
+
+	tsc_verify_tsc_adjust(false);
+
+	/* Run the check for all onlined CPUs in turn */
+	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
+	if (next_cpu >= nr_cpu_ids)
+		next_cpu = cpumask_first(cpu_online_mask);
+
+	tsc_sync_check_timer.expires += SYNC_CHECK_INTERVAL;
+	add_timer_on(&tsc_sync_check_timer, next_cpu);
+}
+
+static int __init start_sync_check_timer(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable)
+		return 0;
+
+	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0);
+	tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL;
+	add_timer(&tsc_sync_check_timer);
+
+	return 0;
+}
+late_initcall(start_sync_check_timer);
+
 static void tsc_sanitize_first_cpu(struct tsc_adjust *cur, s64 bootval,
 				   unsigned int cpu, bool bootcpu)
 {


@@ -5152,7 +5152,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_invalidate_gva);
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
-	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, gva, INVALID_PAGE);
+	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE);
 	++vcpu->stat.invlpg;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);


@@ -274,7 +274,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;

 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
-	pmu->reserved_bits = 0xffffffff00200000ull;
+	pmu->reserved_bits = 0xfffffff000280000ull;
 	pmu->version = 1;
 	/* not applicable to AMD; but clean them to prevent any fall out */
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;


@@ -2619,8 +2619,10 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
 	    WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
-				     vmcs12->guest_ia32_perf_global_ctrl)))
+				     vmcs12->guest_ia32_perf_global_ctrl))) {
+		*entry_failure_code = ENTRY_FAIL_DEFAULT;
 		return -EINVAL;
+	}

 	kvm_rsp_write(vcpu, vmcs12->guest_rsp);
 	kvm_rip_write(vcpu, vmcs12->guest_rip);


@@ -5,6 +5,7 @@
 #include <asm/cpu.h>

 #include "lapic.h"
+#include "irq.h"
 #include "posted_intr.h"
 #include "trace.h"
 #include "vmx.h"
@@ -77,13 +78,18 @@ after_clear_sn:
 		pi_set_on(pi_desc);
 }

+static bool vmx_can_use_vtd_pi(struct kvm *kvm)
+{
+	return irqchip_in_kernel(kvm) && enable_apicv &&
+		kvm_arch_has_assigned_device(kvm) &&
+		irq_remapping_cap(IRQ_POSTING_CAP);
+}
+
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);

-	if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
-	    !irq_remapping_cap(IRQ_POSTING_CAP)  ||
-	    !kvm_vcpu_apicv_active(vcpu))
+	if (!vmx_can_use_vtd_pi(vcpu->kvm))
 		return;

 	/* Set SN when the vCPU is preempted */
@@ -141,9 +147,7 @@ int pi_pre_block(struct kvm_vcpu *vcpu)
 	struct pi_desc old, new;
 	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);

-	if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
-	    !irq_remapping_cap(IRQ_POSTING_CAP)  ||
-	    !kvm_vcpu_apicv_active(vcpu))
+	if (!vmx_can_use_vtd_pi(vcpu->kvm))
 		return 0;

 	WARN_ON(irqs_disabled());
@@ -256,9 +260,7 @@ int pi_update_irte(struct kvm *kvm, unsigned int host_irq, uint32_t guest_irq,
 	struct vcpu_data vcpu_info;
 	int idx, ret = 0;

-	if (!kvm_arch_has_assigned_device(kvm) ||
-	    !irq_remapping_cap(IRQ_POSTING_CAP) ||
-	    !kvm_vcpu_apicv_active(kvm->vcpus[0]))
+	if (!vmx_can_use_vtd_pi(kvm))
 		return 0;

 	idx = srcu_read_lock(&kvm->irq_srcu);


@@ -2908,6 +2908,13 @@ static void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
 	}
 }

+static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
+{
+	if (is_guest_mode(vcpu))
+		return nested_get_vpid02(vcpu);
+	return to_vmx(vcpu)->vpid;
+}
+
 static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
@@ -2920,31 +2927,29 @@ static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
 	if (enable_ept)
 		ept_sync_context(construct_eptp(vcpu, root_hpa,
 						mmu->shadow_root_level));
-	else if (!is_guest_mode(vcpu))
-		vpid_sync_context(to_vmx(vcpu)->vpid);
 	else
-		vpid_sync_context(nested_get_vpid02(vcpu));
+		vpid_sync_context(vmx_get_current_vpid(vcpu));
 }

 static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
 	/*
-	 * vpid_sync_vcpu_addr() is a nop if vmx->vpid==0, see the comment in
+	 * vpid_sync_vcpu_addr() is a nop if vpid==0, see the comment in
 	 * vmx_flush_tlb_guest() for an explanation of why this is ok.
 	 */
-	vpid_sync_vcpu_addr(to_vmx(vcpu)->vpid, addr);
+	vpid_sync_vcpu_addr(vmx_get_current_vpid(vcpu), addr);
 }

 static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
 	/*
-	 * vpid_sync_context() is a nop if vmx->vpid==0, e.g. if enable_vpid==0
-	 * or a vpid couldn't be allocated for this vCPU. VM-Enter and VM-Exit
-	 * are required to flush GVA->{G,H}PA mappings from the TLB if vpid is
+	 * vpid_sync_context() is a nop if vpid==0, e.g. if enable_vpid==0 or a
+	 * vpid couldn't be allocated for this vCPU. VM-Enter and VM-Exit are
+	 * required to flush GVA->{G,H}PA mappings from the TLB if vpid is
 	 * disabled (VM-Enter with vpid enabled and vpid==0 is disallowed),
 	 * i.e. no explicit INVVPID is necessary.
 	 */
-	vpid_sync_context(to_vmx(vcpu)->vpid);
+	vpid_sync_context(vmx_get_current_vpid(vcpu));
 }

 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu)


@@ -277,7 +277,8 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
 		return;
 	}

-	new = early_memremap(data.phys_map, data.size);
+	new = early_memremap_prot(data.phys_map, data.size,
+				  pgprot_val(pgprot_encrypted(FIXMAP_PAGE_NORMAL)));
 	if (!new) {
 		pr_err("Failed to map new boot services memmap\n");
 		return;


@@ -70,6 +70,7 @@ static void __init setup_real_mode(void)
 #ifdef CONFIG_X86_64
 	u64 *trampoline_pgd;
 	u64 efer;
+	int i;
 #endif

 	base = (unsigned char *)real_mode_header;
@@ -126,8 +127,17 @@ static void __init setup_real_mode(void)
 	trampoline_header->flags = 0;

 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
+
+	/* Map the real mode stub as virtual == physical */
 	trampoline_pgd[0] = trampoline_pgd_entry.pgd;
-	trampoline_pgd[511] = init_top_pgt[511].pgd;
+
+	/*
+	 * Include the entirety of the kernel mapping into the trampoline
+	 * PGD. This way, all mappings present in the normal kernel page
+	 * tables are usable while running on trampoline_pgd.
+	 */
+	for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
+		trampoline_pgd[i] = init_top_pgt[i].pgd;
 #endif

 	sme_sev_setup_real_mode(trampoline_header);


@@ -1083,9 +1083,6 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 #endif
 	.io_delay = xen_io_delay,

-	/* Xen takes care of %gs when switching to usermode for us */
-	.swapgs = paravirt_nop,
-
 	.start_context_switch = paravirt_start_context_switch,
 	.end_context_switch = xen_end_context_switch,
 };


@@ -19,6 +19,7 @@
 #include <linux/init.h>
 #include <linux/linkage.h>
+#include <../entry/calling.h>

 /*
  * Enable events. This clears the event mask and tests the pending
@@ -235,6 +236,25 @@ SYM_CODE_START(xen_sysret64)
 	jmp hypercall_iret
 SYM_CODE_END(xen_sysret64)

+/*
+ * XEN pv doesn't use trampoline stack, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is
+ * also the kernel stack. Reusing swapgs_restore_regs_and_return_to_usermode()
+ * in XEN pv would cause %rsp to move up to the top of the kernel stack and
+ * leave the IRET frame below %rsp, which is dangerous to be corrupted if #NMI
+ * interrupts. And swapgs_restore_regs_and_return_to_usermode() pushing the IRET
+ * frame at the same address is useless.
+ */
+SYM_CODE_START(xenpv_restore_regs_and_return_to_usermode)
+	UNWIND_HINT_REGS
+	POP_REGS
+
+	/* stackleak_erase() can work safely on the kernel stack. */
+	STACKLEAK_ERASE_NOCLOBBER
+
+	addq	$8, %rsp	/* skip regs->orig_ax */
+	jmp	xen_iret
+SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode)
+
 /*
  * Xen handles syscall callbacks much like ordinary exceptions, which
  * means we have:


@@ -214,6 +214,7 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
 			pgrp = task_pgrp(current);
 		else
 			pgrp = find_vpid(who);
+		read_lock(&tasklist_lock);
 		do_each_pid_thread(pgrp, PIDTYPE_PGID, p) {
 			tmpio = get_task_ioprio(p);
 			if (tmpio < 0)
@@ -223,6 +224,8 @@ SYSCALL_DEFINE2(ioprio_get, int, which, int, who)
 			else
 				ret = ioprio_best(ret, tmpio);
 		} while_each_pid_thread(pgrp, PIDTYPE_PGID, p);
+		read_unlock(&tasklist_lock);
+
 		break;
 	case IOPRIO_WHO_USER:
 		uid = make_kuid(current_user_ns(), who);


@@ -4784,23 +4784,20 @@ static int binder_thread_release(struct binder_proc *proc,
 	__release(&t->lock);

 	/*
-	 * If this thread used poll, make sure we remove the waitqueue
-	 * from any epoll data structures holding it with POLLFREE.
-	 * waitqueue_active() is safe to use here because we're holding
-	 * the inner lock.
+	 * If this thread used poll, make sure we remove the waitqueue from any
+	 * poll data structures holding it.
 	 */
-	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
-	    waitqueue_active(&thread->wait)) {
-		wake_up_poll(&thread->wait, EPOLLHUP | POLLFREE);
-	}
+	if (thread->looper & BINDER_LOOPER_STATE_POLL)
+		wake_up_pollfree(&thread->wait);

 	binder_inner_proc_unlock(thread->proc);

 	/*
-	 * This is needed to avoid races between wake_up_poll() above and
-	 * and ep_remove_waitqueue() called for other reasons (eg the epoll file
-	 * descriptor being closed); ep_remove_waitqueue() holds an RCU read
-	 * lock, so we can be sure it's done after calling synchronize_rcu().
+	 * This is needed to avoid races between wake_up_pollfree() above and
+	 * someone else removing the last entry from the queue for other reasons
+	 * (e.g. ep_remove_wait_queue() being called due to an epoll file
+	 * descriptor being closed). Such other users hold an RCU read lock, so
+	 * we can be sure they're done after we call synchronize_rcu().
 	 */
 	if (thread->looper & BINDER_LOOPER_STATE_POLL)
 		synchronize_rcu();


@@ -442,6 +442,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
 	/* AMD */
 	{ PCI_VDEVICE(AMD, 0x7800), board_ahci }, /* AMD Hudson-2 */
 	{ PCI_VDEVICE(AMD, 0x7900), board_ahci }, /* AMD CZ */
+	{ PCI_VDEVICE(AMD, 0x7901), board_ahci_mobile }, /* AMD Green Sardine */
 	/* AMD is using RAID class only for ahci controllers */
 	{ PCI_VENDOR_ID_AMD, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
 	  PCI_CLASS_STORAGE_RAID << 8, 0xffffff, board_ahci },


@@ -3831,6 +3831,8 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
 	{ "VRFDFC22048UCHC-TE*", NULL,	ATA_HORKAGE_NODMA },
 	/* Odd clown on sil3726/4726 PMPs */
 	{ "Config Disk",	NULL,	ATA_HORKAGE_DISABLE },
+	/* Similar story with ASMedia 1092 */
+	{ "ASMT109x- Config",	NULL,	ATA_HORKAGE_DISABLE },

 	/* Weird ATAPI devices */
 	{ "TORiSAN DVD-ROM DRD-N216", NULL, ATA_HORKAGE_MAX_SEC_128 },


@@ -1394,6 +1394,14 @@ static int sata_fsl_init_controller(struct ata_host *host)
 	return 0;
 }

+static void sata_fsl_host_stop(struct ata_host *host)
+{
+	struct sata_fsl_host_priv *host_priv = host->private_data;
+
+	iounmap(host_priv->hcr_base);
+	kfree(host_priv);
+}
+
 /*
  * scsi mid-layer and libata interface structures
  */
@@ -1426,6 +1434,8 @@ static struct ata_port_operations sata_fsl_ops = {
 	.port_start = sata_fsl_port_start,
 	.port_stop = sata_fsl_port_stop,

+	.host_stop = sata_fsl_host_stop,
+
 	.pmp_attach = sata_fsl_pmp_attach,
 	.pmp_detach = sata_fsl_pmp_detach,
 };
@@ -1480,9 +1490,9 @@ static int sata_fsl_probe(struct platform_device *ofdev)
 	host_priv->ssr_base = ssr_base;
 	host_priv->csr_base = csr_base;

-	irq = irq_of_parse_and_map(ofdev->dev.of_node, 0);
-	if (!irq) {
-		dev_err(&ofdev->dev, "invalid irq from platform\n");
+	irq = platform_get_irq(ofdev, 0);
+	if (irq < 0) {
+		retval = irq;
 		goto error_exit_with_cleanup;
 	}
 	host_priv->irq = irq;
@@ -1557,10 +1567,6 @@ static int sata_fsl_remove(struct platform_device *ofdev)

 	ata_host_detach(host);

-	irq_dispose_mapping(host_priv->irq);
-	iounmap(host_priv->hcr_base);
-	kfree(host_priv);
-
 	return 0;
 }


@@ -203,6 +203,8 @@ struct ipmi_user {
 	struct work_struct remove_work;
 };

+static struct workqueue_struct *remove_work_wq;
+
 static struct ipmi_user *acquire_ipmi_user(struct ipmi_user *user, int *index)
 	__acquires(user->release_barrier)
 {
@@ -1272,7 +1274,7 @@ static void free_user(struct kref *ref)
 	struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);

 	/* SRCU cleanup must happen in task context. */
-	schedule_work(&user->remove_work);
+	queue_work(remove_work_wq, &user->remove_work);
 }

 static void _ipmi_destroy_user(struct ipmi_user *user)
@@ -5166,6 +5168,13 @@ static int ipmi_init_msghandler(void)

 	atomic_notifier_chain_register(&panic_notifier_list, &panic_block);

+	remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
+	if (!remove_work_wq) {
+		pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
+		rv = -ENOMEM;
+		goto out;
+	}
+
 	initialized = true;

 out:
@@ -5191,6 +5200,8 @@ static void __exit cleanup_ipmi(void)
 	int count;

 	if (initialized) {
+		destroy_workqueue(remove_work_wq);
+
 		atomic_notifier_chain_unregister(&panic_notifier_list,
 						 &panic_block);


@@ -231,7 +231,7 @@ static struct platform_driver imx8qxp_lpcg_clk_driver = {
 	.probe = imx8qxp_lpcg_clk_probe,
 };

-builtin_platform_driver(imx8qxp_lpcg_clk_driver);
+module_platform_driver(imx8qxp_lpcg_clk_driver);

 MODULE_AUTHOR("Aisheng Dong <aisheng.dong@nxp.com>");
 MODULE_DESCRIPTION("NXP i.MX8QXP LPCG clock driver");


@@ -151,7 +151,7 @@ static struct platform_driver imx8qxp_clk_driver = {
 	},
 	.probe = imx8qxp_clk_probe,
 };

-builtin_platform_driver(imx8qxp_clk_driver);
+module_platform_driver(imx8qxp_clk_driver);

 MODULE_AUTHOR("Aisheng Dong <aisheng.dong@nxp.com>");
 MODULE_DESCRIPTION("NXP i.MX8QXP clock driver");


@@ -28,7 +28,7 @@ static u8 mux_get_parent(struct clk_hw *hw)
 	val &= mask;

 	if (mux->parent_map)
-		return qcom_find_src_index(hw, mux->parent_map, val);
+		return qcom_find_cfg_index(hw, mux->parent_map, val);

 	return val;
 }
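(What the one-liner fixes, sketched stand-alone with hypothetical map values
not taken from the commit: the register holds a .cfg value, which is not in
general the same as the parent index, so it must be matched against .cfg
rather than against .src:)

	struct parent_map { unsigned int src; unsigned int cfg; };

	/* hypothetical mux: parent 0 uses cfg 0, parent 1 uses cfg 3 */
	static const struct parent_map demo_map[] = {
		{ .src = 0, .cfg = 0 },
		{ .src = 1, .cfg = 3 },
	};

	/* mirrors qcom_find_cfg_index(): map a register cfg to a parent index */
	static int demo_find_cfg_index(unsigned int cfg)
	{
		int i;

		for (i = 0; i < 2; i++)
			if (cfg == demo_map[i].cfg)
				return i;
		return -1;
	}

	/*
	 * A register readback of 3 now yields parent index 1; the old code
	 * searched .src for the value 3 and found nothing.
	 */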


@@ -69,6 +69,18 @@ int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map, u8 src)
 }
 EXPORT_SYMBOL_GPL(qcom_find_src_index);

+int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map, u8 cfg)
+{
+	int i, num_parents = clk_hw_get_num_parents(hw);
+
+	for (i = 0; i < num_parents; i++)
+		if (cfg == map[i].cfg)
+			return i;
+
+	return -ENOENT;
+}
+EXPORT_SYMBOL_GPL(qcom_find_cfg_index);
+
 struct regmap *
 qcom_cc_map(struct platform_device *pdev, const struct qcom_cc_desc *desc)
 {


@@ -49,6 +49,8 @@ extern void
 qcom_pll_set_fsm_mode(struct regmap *m, u32 reg, u8 bias_count, u8 lock_count);
 extern int qcom_find_src_index(struct clk_hw *hw, const struct parent_map *map,
 			       u8 src);
+extern int qcom_find_cfg_index(struct clk_hw *hw, const struct parent_map *map,
+			       u8 cfg);
 extern int qcom_cc_register_board_clk(struct device *dev, const char *path,
 				      const char *name, unsigned long rate);


@@ -1004,10 +1004,9 @@ static struct kobj_type ktype_cpufreq = {
 	.release	= cpufreq_sysfs_release,
 };

-static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu)
+static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu,
+				struct device *dev)
 {
-	struct device *dev = get_cpu_device(cpu);
-
 	if (unlikely(!dev))
 		return;
@@ -1391,7 +1390,7 @@ static int cpufreq_online(unsigned int cpu)
 	if (new_policy) {
 		for_each_cpu(j, policy->related_cpus) {
 			per_cpu(cpufreq_cpu_data, j) = policy;
-			add_cpu_dev_symlink(policy, j);
+			add_cpu_dev_symlink(policy, j, get_cpu_device(j));
 		}

 		policy->min_freq_req = kzalloc(2 * sizeof(*policy->min_freq_req),
@@ -1553,7 +1552,7 @@ static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
 	/* Create sysfs link on CPU registration */
 	policy = per_cpu(cpufreq_cpu_data, cpu);
 	if (policy)
-		add_cpu_dev_symlink(policy, cpu);
+		add_cpu_dev_symlink(policy, cpu, dev);

 	return 0;
 }


@@ -47,12 +47,8 @@ int amdgpu_amdkfd_init(void)
 	amdgpu_amdkfd_total_mem_size = si.totalram - si.totalhigh;
 	amdgpu_amdkfd_total_mem_size *= si.mem_unit;

-#ifdef CONFIG_HSA_AMD
 	ret = kgd2kfd_init();
 	amdgpu_amdkfd_gpuvm_init_mem_limits();
-#else
-	ret = -ENOENT;
-#endif
 	kfd_initialized = !ret;

 	return ret;
@@ -194,6 +190,16 @@ void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm)
 		kgd2kfd_suspend(adev->kfd.dev, run_pm);
 }

+int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev)
+{
+	int r = 0;
+
+	if (adev->kfd.dev)
+		r = kgd2kfd_resume_iommu(adev->kfd.dev);
+
+	return r;
+}
+
 int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm)
 {
 	int r = 0;
@@ -695,86 +701,3 @@ bool amdgpu_amdkfd_have_atomics_support(struct kgd_dev *kgd)

 	return adev->have_atomics_support;
 }
-
-#ifndef CONFIG_HSA_AMD
-bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm)
-{
-	return false;
-}
-
-void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
-{
-}
-
-int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)
-{
-	return 0;
-}
-
-void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
-				    struct amdgpu_vm *vm)
-{
-}
-
-struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
-{
-	return NULL;
-}
-
-int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm)
-{
-	return 0;
-}
-
-struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev,
-			      unsigned int asic_type, bool vf)
-{
-	return NULL;
-}
-
-bool kgd2kfd_device_init(struct kfd_dev *kfd,
-			 struct drm_device *ddev,
-			 const struct kgd2kfd_shared_resources *gpu_resources)
-{
-	return false;
-}
-
-void kgd2kfd_device_exit(struct kfd_dev *kfd)
-{
-}
-
-void kgd2kfd_exit(void)
-{
-}
-
-void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
-{
-}
-
-int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
-{
-	return 0;
-}
-
-int kgd2kfd_pre_reset(struct kfd_dev *kfd)
-{
-	return 0;
-}
-
-int kgd2kfd_post_reset(struct kfd_dev *kfd)
-{
-	return 0;
-}
-
-void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
-{
-}
-
-void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd)
-{
-}
-
-void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask)
-{
-}
-#endif


@@ -94,11 +94,6 @@ enum kgd_engine_type {
 	KGD_ENGINE_MAX
 };

-struct amdgpu_amdkfd_fence *amdgpu_amdkfd_fence_create(u64 context,
-						struct mm_struct *mm);
-bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm);
-struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f);
-int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo);

 struct amdkfd_process_info {
 	/* List head of all VMs that belong to a KFD process */
@@ -126,14 +121,13 @@ int amdgpu_amdkfd_init(void);
 void amdgpu_amdkfd_fini(void);

 void amdgpu_amdkfd_suspend(struct amdgpu_device *adev, bool run_pm);
+int amdgpu_amdkfd_resume_iommu(struct amdgpu_device *adev);
 int amdgpu_amdkfd_resume(struct amdgpu_device *adev, bool run_pm);
 void amdgpu_amdkfd_interrupt(struct amdgpu_device *adev,
 			const void *ih_ring_entry);
 void amdgpu_amdkfd_device_probe(struct amdgpu_device *adev);
 void amdgpu_amdkfd_device_init(struct amdgpu_device *adev);
 void amdgpu_amdkfd_device_fini(struct amdgpu_device *adev);
-int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm);
 int amdgpu_amdkfd_submit_ib(struct kgd_dev *kgd, enum kgd_engine_type engine,
 				uint32_t vmid, uint64_t gpu_addr,
 				uint32_t *ib_cmd, uint32_t ib_len);
@@ -153,6 +147,38 @@ void amdgpu_amdkfd_gpu_reset(struct kgd_dev *kgd);
 int amdgpu_queue_mask_bit_to_set_resource_bit(struct amdgpu_device *adev,
 					int queue_bit);

+struct amdgpu_amdkfd_fence *amdgpu_amdkfd_fence_create(u64 context,
+						struct mm_struct *mm);
+#if IS_ENABLED(CONFIG_HSA_AMD)
+bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm);
+struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f);
+int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo);
+int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm);
+#else
+static inline
+bool amdkfd_fence_check_mm(struct dma_fence *f, struct mm_struct *mm)
+{
+	return false;
+}
+
+static inline
+struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
+{
+	return NULL;
+}
+
+static inline
+int amdgpu_amdkfd_remove_fence_on_pt_pd_bos(struct amdgpu_bo *bo)
+{
+	return 0;
+}
+
+static inline
+int amdgpu_amdkfd_evict_userptr(struct kgd_mem *mem, struct mm_struct *mm)
+{
+	return 0;
+}
+#endif
 /* Shared API */
 int amdgpu_amdkfd_alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
 				void **mem_obj, uint64_t *gpu_addr,
@@ -215,8 +241,6 @@ int amdgpu_amdkfd_gpuvm_acquire_process_vm(struct kgd_dev *kgd,
 					struct file *filp, u32 pasid,
 					void **vm, void **process_info,
 					struct dma_fence **ef);
-void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
-				struct amdgpu_vm *vm);
 void amdgpu_amdkfd_gpuvm_destroy_process_vm(struct kgd_dev *kgd, void *vm);
 void amdgpu_amdkfd_gpuvm_release_process_vm(struct kgd_dev *kgd, void *vm);
 uint64_t amdgpu_amdkfd_gpuvm_get_process_page_dir(void *vm);
@@ -236,23 +260,43 @@ int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct kgd_dev *kgd,
 		struct kgd_mem *mem, void **kptr, uint64_t *size);
 int amdgpu_amdkfd_gpuvm_restore_process_bos(void *process_info,
 					    struct dma_fence **ef);
-
 int amdgpu_amdkfd_gpuvm_get_vm_fault_info(struct kgd_dev *kgd,
 					      struct kfd_vm_fault_info *info);
-
 int amdgpu_amdkfd_gpuvm_import_dmabuf(struct kgd_dev *kgd,
 				      struct dma_buf *dmabuf,
 				      uint64_t va, void *vm,
 				      struct kgd_mem **mem, uint64_t *size,
 				      uint64_t *mmap_offset);
-
-void amdgpu_amdkfd_gpuvm_init_mem_limits(void);
-void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo);
-
 int amdgpu_amdkfd_get_tile_config(struct kgd_dev *kgd,
 				struct tile_config *config);
+#if IS_ENABLED(CONFIG_HSA_AMD)
+void amdgpu_amdkfd_gpuvm_init_mem_limits(void);
+void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
+				struct amdgpu_vm *vm);
+void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo);
+#else
+static inline
+void amdgpu_amdkfd_gpuvm_init_mem_limits(void)
+{
+}
+
+static inline
+void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev,
+				struct amdgpu_vm *vm)
+{
+}
+
+static inline
+void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
+{
+}
+#endif
 /* KGD2KFD callbacks */
+int kgd2kfd_quiesce_mm(struct mm_struct *mm);
+int kgd2kfd_resume_mm(struct mm_struct *mm);
+int kgd2kfd_schedule_evict_and_restore_process(struct mm_struct *mm,
+				struct dma_fence *fence);
+#if IS_ENABLED(CONFIG_HSA_AMD)
 int kgd2kfd_init(void);
 void kgd2kfd_exit(void);
 struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev,
@@ -262,15 +306,78 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
 			 const struct kgd2kfd_shared_resources *gpu_resources);
 void kgd2kfd_device_exit(struct kfd_dev *kfd);
 void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm);
+int kgd2kfd_resume_iommu(struct kfd_dev *kfd);
 int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm);
 int kgd2kfd_pre_reset(struct kfd_dev *kfd);
 int kgd2kfd_post_reset(struct kfd_dev *kfd);
 void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry);
-int kgd2kfd_quiesce_mm(struct mm_struct *mm);
-int kgd2kfd_resume_mm(struct mm_struct *mm);
-int kgd2kfd_schedule_evict_and_restore_process(struct mm_struct *mm,
-				struct dma_fence *fence);
 void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd);
 void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask);
+#else
+static inline int kgd2kfd_init(void)
+{
+	return -ENOENT;
+}
+
+static inline void kgd2kfd_exit(void)
+{
+}
+
+static inline
+struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev,
+				unsigned int asic_type, bool vf)
+{
+	return NULL;
+}
+
+static inline
+bool kgd2kfd_device_init(struct kfd_dev *kfd, struct drm_device *ddev,
+			const struct kgd2kfd_shared_resources *gpu_resources)
+{
+	return false;
+}
+
+static inline void kgd2kfd_device_exit(struct kfd_dev *kfd)
+{
+}
+
+static inline void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm)
+{
+}
+
+static int __maybe_unused kgd2kfd_resume_iommu(struct kfd_dev *kfd)
{
return 0;
}
static inline int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
{
return 0;
}
static inline int kgd2kfd_pre_reset(struct kfd_dev *kfd)
{
return 0;
}
static inline int kgd2kfd_post_reset(struct kfd_dev *kfd)
{
return 0;
}
static inline
void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
{
}
static inline
void kgd2kfd_set_sram_ecc_flag(struct kfd_dev *kfd)
{
}
static inline
void kgd2kfd_smi_event_throttle(struct kfd_dev *kfd, uint32_t throttle_bitmask)
{
}
#endif
#endif /* AMDGPU_AMDKFD_H_INCLUDED */ #endif /* AMDGPU_AMDKFD_H_INCLUDED */
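
Note: the header above leans on a common kernel idiom: when CONFIG_HSA_AMD is off, every kgd2kfd entry point becomes a static inline stub, so call sites compile and link without any #ifdef. A minimal sketch of the idiom, with hypothetical names (my_feature_op and CONFIG_MY_FEATURE are not from this patch):

	#include <linux/device.h>

	#if IS_ENABLED(CONFIG_MY_FEATURE)
	int my_feature_op(struct device *dev);	/* real implementation elsewhere */
	#else
	static inline int my_feature_op(struct device *dev)
	{
		return 0;	/* benign no-op: callers need no #ifdef */
	}
	#endif

The kgd2kfd_init() stub returns -ENOENT rather than 0 because callers treat a missing KFD as an absent module, not a success.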


@@ -2913,6 +2913,10 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
 {
 	int r;
 
+	r = amdgpu_amdkfd_resume_iommu(adev);
+	if (r)
+		return r;
+
 	r = amdgpu_device_ip_resume_phase1(adev);
 	if (r)
 		return r;
@@ -4296,6 +4300,10 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
 				if (!r) {
 					dev_info(tmp_adev->dev, "GPU reset succeeded, trying to resume\n");
+					r = amdgpu_amdkfd_resume_iommu(tmp_adev);
+					if (r)
+						goto out;
+
 					r = amdgpu_device_ip_resume_phase1(tmp_adev);
 					if (r)
 						goto out;


@@ -358,6 +358,7 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev)
 			"%s", "xgmi_hive_info");
 	if (ret) {
 		dev_err(adev->dev, "XGMI: failed initializing kobject for xgmi hive\n");
+		kobject_put(&hive->kobj);
 		kfree(hive);
 		hive = NULL;
 		goto pro_end;


@@ -751,6 +751,9 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
 
 	kfd_cwsr_init(kfd);
 
+	if(kgd2kfd_resume_iommu(kfd))
+		goto device_iommu_error;
+
 	if (kfd_resume(kfd))
 		goto kfd_resume_error;
@@ -896,17 +899,21 @@ int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm)
 	return ret;
 }
 
-static int kfd_resume(struct kfd_dev *kfd)
+int kgd2kfd_resume_iommu(struct kfd_dev *kfd)
 {
 	int err = 0;
 
 	err = kfd_iommu_resume(kfd);
-	if (err) {
+	if (err)
 		dev_err(kfd_device,
 			"Failed to resume IOMMU for device %x:%x\n",
 			kfd->pdev->vendor, kfd->pdev->device);
-		return err;
-	}
+	return err;
+}
+
+static int kfd_resume(struct kfd_dev *kfd)
+{
+	int err = 0;
 
 	err = kfd->dqm->ops.start(kfd->dqm);
 	if (err) {


@@ -1207,6 +1207,11 @@ static int stop_cpsch(struct device_queue_manager *dqm)
 	bool hanging;
 
 	dqm_lock(dqm);
+	if (!dqm->sched_running) {
+		dqm_unlock(dqm);
+		return 0;
+	}
+
 	if (!dqm->is_hws_hang)
 		unmap_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0);
 	hanging = dqm->is_hws_hang || dqm->is_resetting;


@@ -37,6 +37,8 @@
 #include "dm_helpers.h"
 
 #include "dc_link_ddc.h"
+#include "ddc_service_types.h"
+#include "dpcd_defs.h"
 
 #include "i2caux_interface.h"
 #if defined(CONFIG_DEBUG_FS)
@@ -153,6 +155,16 @@ static const struct drm_connector_funcs dm_dp_mst_connector_funcs = {
 };
 
 #if defined(CONFIG_DRM_AMD_DC_DCN)
+static bool needs_dsc_aux_workaround(struct dc_link *link)
+{
+	if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
+	    (link->dpcd_caps.dpcd_rev.raw == DPCD_REV_14 || link->dpcd_caps.dpcd_rev.raw == DPCD_REV_12) &&
+	    link->dpcd_caps.sink_count.bits.SINK_COUNT >= 2)
+		return true;
+
+	return false;
+}
+
 static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector)
 {
 	struct dc_sink *dc_sink = aconnector->dc_sink;
@@ -160,7 +172,7 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
 	u8 dsc_caps[16] = { 0 };
 
 	aconnector->dsc_aux = drm_dp_mst_dsc_aux_for_port(port);
-#if defined(CONFIG_HP_HOOK_WORKAROUND)
+
 	/*
 	 * drm_dp_mst_dsc_aux_for_port() will return NULL for certain configs
 	 * because it only check the dsc/fec caps of the "port variable" and not the dock
@@ -170,10 +182,10 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
 	 * Workaround: explicitly check the use case above and use the mst dock's aux as dsc_aux
 	 *
 	 */
-	if (!aconnector->dsc_aux && !port->parent->port_parent)
+	if (!aconnector->dsc_aux && !port->parent->port_parent &&
+	    needs_dsc_aux_workaround(aconnector->dc_link))
 		aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux;
-#endif
+
 	if (!aconnector->dsc_aux)
 		return false;


@@ -391,8 +391,17 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
 	if (*fence) {
 		ret = dma_fence_chain_find_seqno(fence, point);
-		if (!ret)
+		if (!ret) {
+			/* If the requested seqno is already signaled
+			 * drm_syncobj_find_fence may return a NULL
+			 * fence. To make sure the recipient gets
+			 * signalled, use a new fence instead.
+			 */
+			if (!*fence)
+				*fence = dma_fence_get_stub();
+
 			goto out;
+		}
 		dma_fence_put(*fence);
 	} else {
 		ret = -EINVAL;
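
Note: dma_fence_get_stub() returns a reference to a global, already-signalled fence, so after this fix drm_syncobj_find_fence() never hands the caller a NULL fence for a retired timeline point. A minimal usage sketch (wait_on_point is a hypothetical caller, not from this patch):

	static int wait_on_point(struct drm_file *file_priv, u32 handle, u64 point)
	{
		struct dma_fence *fence;
		long t;
		int ret;

		ret = drm_syncobj_find_fence(file_priv, handle, point, 0, &fence);
		if (ret)
			return ret;
		/* fence is never NULL here: a pre-signalled stub is substituted
		 * for retired points, so this wait returns immediately. */
		t = dma_fence_wait(fence, true);
		dma_fence_put(fence);
		return t < 0 ? (int)t : 0;
	}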


@@ -777,12 +777,12 @@ static void a6xx_get_gmu_registers(struct msm_gpu *gpu,
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 
 	a6xx_state->gmu_registers = state_kcalloc(a6xx_state,
-		2, sizeof(*a6xx_state->gmu_registers));
+		3, sizeof(*a6xx_state->gmu_registers));
 
 	if (!a6xx_state->gmu_registers)
 		return;
 
-	a6xx_state->nr_gmu_registers = 2;
+	a6xx_state->nr_gmu_registers = 3;
 
 	/* Get the CX GMU registers from AHB */
 	_a6xx_get_gmu_registers(gpu, a6xx_state, &a6xx_gmu_reglist[0],


@@ -77,6 +77,7 @@ static int msm_gpu_open(struct inode *inode, struct file *file)
 		goto free_priv;
 
 	pm_runtime_get_sync(&gpu->pdev->dev);
+	msm_gpu_hw_init(gpu);
 	show_priv->state = gpu->funcs->gpu_state_get(gpu);
 	pm_runtime_put_sync(&gpu->pdev->dev);


@@ -46,6 +46,7 @@ config DRM_SUN6I_DSI
 	default MACH_SUN8I
 	select CRC_CCITT
 	select DRM_MIPI_DSI
+	select RESET_CONTROLLER
 	select PHY_SUN6I_MIPI_DPHY
 	help
 	  Choose this option if you want have an Allwinner SoC with


@@ -207,14 +207,14 @@ config HID_CHERRY
 
 config HID_CHICONY
 	tristate "Chicony devices"
-	depends on HID
+	depends on USB_HID
 	default !EXPERT
 	help
 	  Support for Chicony Tactical pad and special keys on Chicony keyboards.
 
 config HID_CORSAIR
 	tristate "Corsair devices"
-	depends on HID && USB && LEDS_CLASS
+	depends on USB_HID && LEDS_CLASS
 	help
 	  Support for Corsair devices that are not fully compliant with the
 	  HID standard.
@@ -245,7 +245,7 @@ config HID_MACALLY
 
 config HID_PRODIKEYS
 	tristate "Prodikeys PC-MIDI Keyboard support"
-	depends on HID && SND
+	depends on USB_HID && SND
 	select SND_RAWMIDI
 	help
 	  Support for Prodikeys PC-MIDI Keyboard device support.
@@ -541,7 +541,7 @@ config HID_LENOVO
 
 config HID_LOGITECH
 	tristate "Logitech devices"
-	depends on HID
+	depends on USB_HID
 	depends on LEDS_CLASS
 	default !EXPERT
 	help
@@ -889,7 +889,7 @@ config HID_SAITEK
 
 config HID_SAMSUNG
 	tristate "Samsung InfraRed remote control or keyboards"
-	depends on HID
+	depends on USB_HID
 	help
 	  Support for Samsung InfraRed remote control or keyboards.


@@ -918,8 +918,7 @@ static int asus_probe(struct hid_device *hdev, const struct hid_device_id *id)
 	if (drvdata->quirks & QUIRK_IS_MULTITOUCH)
 		drvdata->tp = &asus_i2c_tp;
 
-	if ((drvdata->quirks & QUIRK_T100_KEYBOARD) &&
-	    hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if ((drvdata->quirks & QUIRK_T100_KEYBOARD) && hid_is_usb(hdev)) {
 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
 
 		if (intf->altsetting->desc.bInterfaceNumber == T100_TPAD_INTF) {
@@ -947,8 +946,7 @@ static int asus_probe(struct hid_device *hdev, const struct hid_device_id *id)
 			drvdata->tp = &asus_t100chi_tp;
 	}
 
-	if ((drvdata->quirks & QUIRK_MEDION_E1239T) &&
-	    hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if ((drvdata->quirks & QUIRK_MEDION_E1239T) && hid_is_usb(hdev)) {
 		struct usb_host_interface *alt =
 			to_usb_interface(hdev->dev.parent)->altsetting;
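
Note: hid_is_usb() is introduced by this same series; judging from the conversions here it is a thin predicate over the ll-driver check it replaces. A sketch of the helper as it presumably reads in include/linux/hid.h:

	static inline bool hid_is_usb(struct hid_device *hdev)
	{
		return hid_is_using_ll_driver(hdev, &usb_hid_driver);
	}

The point of the series is that to_usb_interface(hdev->dev.parent) is only valid when the parent really is a USB interface; uhid and other transports can present the same VID/PID, so every driver that touches USB internals now checks first.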


@@ -191,7 +191,7 @@ static void bigben_worker(struct work_struct *work)
 		struct bigben_device, worker);
 	struct hid_field *report_field = bigben->report->field[0];
 
-	if (bigben->removed)
+	if (bigben->removed || !report_field)
 		return;
 
 	if (bigben->work_led) {


@@ -58,8 +58,12 @@ static int ch_input_mapping(struct hid_device *hdev, struct hid_input *hi,
 static __u8 *ch_switch12_report_fixup(struct hid_device *hdev, __u8 *rdesc,
 		unsigned int *rsize)
 {
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+	struct usb_interface *intf;
+
+	if (!hid_is_usb(hdev))
+		return rdesc;
 
+	intf = to_usb_interface(hdev->dev.parent);
 	if (intf->cur_altsetting->desc.bInterfaceNumber == 1) {
 		/* Change usage maximum and logical maximum from 0x7fff to
 		 * 0x2fff, so they don't exceed HID_MAX_USAGES */


@@ -553,7 +553,12 @@ static int corsair_probe(struct hid_device *dev, const struct hid_device_id *id)
 	int ret;
 	unsigned long quirks = id->driver_data;
 	struct corsair_drvdata *drvdata;
-	struct usb_interface *usbif = to_usb_interface(dev->dev.parent);
+	struct usb_interface *usbif;
+
+	if (!hid_is_usb(dev))
+		return -EINVAL;
+
+	usbif = to_usb_interface(dev->dev.parent);
 
 	drvdata = devm_kzalloc(&dev->dev, sizeof(struct corsair_drvdata),
 			       GFP_KERNEL);


@@ -50,7 +50,7 @@ struct elan_drvdata {
 
 static int is_not_elan_touchpad(struct hid_device *hdev)
 {
-	if (hdev->bus == BUS_USB) {
+	if (hid_is_usb(hdev)) {
 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
 
 		return (intf->altsetting->desc.bInterfaceNumber !=


@@ -229,6 +229,9 @@ static int elo_probe(struct hid_device *hdev, const struct hid_device_id *id)
 	struct elo_priv *priv;
 	int ret;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;


@@ -528,6 +528,8 @@ static void hammer_remove(struct hid_device *hdev)
 static const struct hid_device_id hammer_devices[] = {
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_DON) },
+	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_EEL) },
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
 		     USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,


@@ -140,12 +140,17 @@ static int holtek_kbd_input_event(struct input_dev *dev, unsigned int type,
 static int holtek_kbd_probe(struct hid_device *hdev,
 		const struct hid_device_id *id)
 {
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
-	int ret = hid_parse(hdev);
+	struct usb_interface *intf;
+	int ret;
+
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
 
+	ret = hid_parse(hdev);
 	if (!ret)
 		ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
 
+	intf = to_usb_interface(hdev->dev.parent);
 	if (!ret && intf->cur_altsetting->desc.bInterfaceNumber == 1) {
 		struct hid_input *hidinput;
 		list_for_each_entry(hidinput, &hdev->inputs, list) {


@@ -62,6 +62,14 @@ static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
 	return rdesc;
 }
 
+static int holtek_mouse_probe(struct hid_device *hdev,
+			      const struct hid_device_id *id)
+{
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+	return 0;
+}
+
 static const struct hid_device_id holtek_mouse_devices[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,
 			USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },
@@ -83,6 +91,7 @@ static struct hid_driver holtek_mouse_driver = {
 	.name = "holtek_mouse",
 	.id_table = holtek_mouse_devices,
 	.report_fixup = holtek_mouse_report_fixup,
+	.probe = holtek_mouse_probe,
 };
 
 module_hid_driver(holtek_mouse_driver);


@@ -491,6 +491,7 @@
 #define USB_DEVICE_ID_GOOGLE_MAGNEMITE	0x503d
 #define USB_DEVICE_ID_GOOGLE_MOONBALL	0x5044
 #define USB_DEVICE_ID_GOOGLE_DON	0x5050
+#define USB_DEVICE_ID_GOOGLE_EEL	0x5057
 
 #define USB_VENDOR_ID_GOTOP		0x08f2
 #define USB_DEVICE_ID_SUPER_Q2		0x007f
@@ -868,6 +869,7 @@
 #define USB_DEVICE_ID_MS_TOUCH_COVER_2	0x07a7
 #define USB_DEVICE_ID_MS_TYPE_COVER_2	0x07a9
 #define USB_DEVICE_ID_MS_POWER_COVER	0x07da
+#define USB_DEVICE_ID_MS_SURFACE3_COVER	0x07de
 #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER	0x02fd
 #define USB_DEVICE_ID_MS_PIXART_MOUSE	0x00cb
 #define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS	0x02e0


@@ -769,12 +769,18 @@ static int lg_raw_event(struct hid_device *hdev, struct hid_report *report,
 
 static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
-	struct usb_interface *iface = to_usb_interface(hdev->dev.parent);
-	__u8 iface_num = iface->cur_altsetting->desc.bInterfaceNumber;
+	struct usb_interface *iface;
+	__u8 iface_num;
 	unsigned int connect_mask = HID_CONNECT_DEFAULT;
 	struct lg_drv_data *drv_data;
 	int ret;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
+	iface = to_usb_interface(hdev->dev.parent);
+	iface_num = iface->cur_altsetting->desc.bInterfaceNumber;
+
 	/* G29 only work with the 1st interface */
 	if ((hdev->product == USB_DEVICE_ID_LOGITECH_G29_WHEEL) &&
 	    (iface_num != 0)) {


@@ -1693,7 +1693,7 @@ static int logi_dj_probe(struct hid_device *hdev,
 	case recvr_type_27mhz:		no_dj_interfaces = 2; break;
 	case recvr_type_bluetooth:	no_dj_interfaces = 2; break;
 	}
-	if (hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if (hid_is_usb(hdev)) {
 		intf = to_usb_interface(hdev->dev.parent);
 		if (intf && intf->altsetting->desc.bInterfaceNumber >=
 							no_dj_interfaces) {


@@ -798,12 +798,18 @@ static int pk_raw_event(struct hid_device *hdev, struct hid_report *report,
 static int pk_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
 	int ret;
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
-	unsigned short ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
+	struct usb_interface *intf;
+	unsigned short ifnum;
 	unsigned long quirks = id->driver_data;
 	struct pk_device *pk;
 	struct pcmidi_snd *pm = NULL;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
+	intf = to_usb_interface(hdev->dev.parent);
+	ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
+
 	pk = kzalloc(sizeof(*pk), GFP_KERNEL);
 	if (pk == NULL) {
 		hid_err(hdev, "can't alloc descriptor\n");


@@ -125,6 +125,7 @@ static const struct hid_device_id hid_quirks[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE3_COVER), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_2), HID_QUIRK_NO_INIT_REPORTS },


@@ -344,6 +344,9 @@ static int arvo_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -324,6 +324,9 @@ static int isku_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -749,6 +749,9 @@ static int kone_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -431,6 +431,9 @@ static int koneplus_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -133,6 +133,9 @@ static int konepure_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -501,6 +501,9 @@ static int kovaplus_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -160,6 +160,9 @@ static int lua_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -449,6 +449,9 @@ static int pyra_probe(struct hid_device *hdev, const struct hid_device_id *id)
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -141,6 +141,9 @@ static int ryos_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -113,6 +113,9 @@ static int savu_probe(struct hid_device *hdev,
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");


@@ -152,6 +152,9 @@ static int samsung_probe(struct hid_device *hdev,
 	int ret;
 	unsigned int cmask = HID_CONNECT_DEFAULT;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	ret = hid_parse(hdev);
 	if (ret) {
 		hid_err(hdev, "parse failed\n");


@@ -290,7 +290,7 @@ static int u2fzero_probe(struct hid_device *hdev,
 	unsigned int minor;
 	int ret;
 
-	if (!hid_is_using_ll_driver(hdev, &usb_hid_driver))
+	if (!hid_is_usb(hdev))
 		return -EINVAL;
 
 	dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL);


@@ -164,6 +164,9 @@ static int uclogic_probe(struct hid_device *hdev,
 	struct uclogic_drvdata *drvdata = NULL;
 	bool params_initialized = false;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	/*
 	 * libinput requires the pad interface to be on a different node
 	 * than the pen, so use QUIRK_MULTI_INPUT for all tablets.


@@ -841,8 +841,7 @@ int uclogic_params_init(struct uclogic_params *params,
 	struct uclogic_params p = {0, };
 
 	/* Check arguments */
-	if (params == NULL || hdev == NULL ||
-	    !hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if (params == NULL || hdev == NULL || !hid_is_usb(hdev)) {
 		rc = -EINVAL;
 		goto cleanup;
 	}


@@ -726,7 +726,7 @@ static void wacom_retrieve_hid_descriptor(struct hid_device *hdev,
 	 * Skip the query for this type and modify defaults based on
 	 * interface number.
 	 */
-	if (features->type == WIRELESS) {
+	if (features->type == WIRELESS && intf) {
 		if (intf->cur_altsetting->desc.bInterfaceNumber == 0)
 			features->device_type = WACOM_DEVICETYPE_WL_MONITOR;
 		else
@@ -2217,7 +2217,7 @@ static void wacom_update_name(struct wacom *wacom, const char *suffix)
 	if ((features->type == HID_GENERIC) && !strcmp("Wacom HID", features->name)) {
 		char *product_name = wacom->hdev->name;
 
-		if (hid_is_using_ll_driver(wacom->hdev, &usb_hid_driver)) {
+		if (hid_is_usb(wacom->hdev)) {
 			struct usb_interface *intf = to_usb_interface(wacom->hdev->dev.parent);
 			struct usb_device *dev = interface_to_usbdev(intf);
 			product_name = dev->product;
@@ -2448,6 +2448,9 @@ static void wacom_wireless_work(struct work_struct *work)
 
 	wacom_destroy_battery(wacom);
 
+	if (!usbdev)
+		return;
+
 	/* Stylus interface */
 	hdev1 = usb_get_intfdata(usbdev->config->interface[1]);
 	wacom1 = hid_get_drvdata(hdev1);
@@ -2727,8 +2730,6 @@ static void wacom_mode_change_work(struct work_struct *work)
 static int wacom_probe(struct hid_device *hdev,
 		const struct hid_device_id *id)
 {
-	struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
-	struct usb_device *dev = interface_to_usbdev(intf);
 	struct wacom *wacom;
 	struct wacom_wac *wacom_wac;
 	struct wacom_features *features;
@@ -2763,8 +2764,14 @@ static int wacom_probe(struct hid_device *hdev,
 	wacom_wac->hid_data.inputmode = -1;
 	wacom_wac->mode_report = -1;
 
-	wacom->usbdev = dev;
-	wacom->intf = intf;
+	if (hid_is_usb(hdev)) {
+		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
+		struct usb_device *dev = interface_to_usbdev(intf);
+
+		wacom->usbdev = dev;
+		wacom->intf = intf;
+	}
+
 	mutex_init(&wacom->lock);
 	INIT_DELAYED_WORK(&wacom->init_work, wacom_init_work);
 	INIT_WORK(&wacom->wireless_work, wacom_wireless_work);


@@ -195,8 +195,9 @@ static u32 cbus_i2c_func(struct i2c_adapter *adapter)
 }
 
 static const struct i2c_algorithm cbus_i2c_algo = {
-	.smbus_xfer	= cbus_i2c_smbus_xfer,
-	.functionality	= cbus_i2c_func,
+	.smbus_xfer		= cbus_i2c_smbus_xfer,
+	.smbus_xfer_atomic	= cbus_i2c_smbus_xfer,
+	.functionality		= cbus_i2c_func,
 };
 
 static int cbus_i2c_remove(struct platform_device *pdev)
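
Note: .smbus_xfer_atomic lets an adapter service transfers in atomic context (e.g. shutdown paths). Reusing the normal callback is only valid because this driver bit-bangs GPIOs with busy-wait delays and never sleeps. A sketch of the shape, with hypothetical names:

	static s32 my_smbus_xfer(struct i2c_adapter *adap, u16 addr,
				 unsigned short flags, char read_write,
				 u8 command, int size, union i2c_smbus_data *data);
	static u32 my_func(struct i2c_adapter *adapter);

	static const struct i2c_algorithm my_algo = {
		.smbus_xfer        = my_smbus_xfer,
		.smbus_xfer_atomic = my_smbus_xfer,	/* same non-sleeping routine */
		.functionality     = my_func,
	};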


@@ -1472,6 +1472,7 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
 {
 	struct stm32f7_i2c_dev *i2c_dev = data;
 	struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg;
+	struct stm32_i2c_dma *dma = i2c_dev->dma;
 	void __iomem *base = i2c_dev->base;
 	u32 status, mask;
 	int ret = IRQ_HANDLED;
@@ -1497,6 +1498,10 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
 		dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n",
 			__func__, f7_msg->addr);
 		writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR);
+		if (i2c_dev->use_dma) {
+			stm32f7_i2c_disable_dma_req(i2c_dev);
+			dmaengine_terminate_all(dma->chan_using);
+		}
 		f7_msg->result = -ENXIO;
 	}
 
@@ -1512,7 +1517,7 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
 		/* Clear STOP flag */
 		writel_relaxed(STM32F7_I2C_ICR_STOPCF, base + STM32F7_I2C_ICR);
 
-		if (i2c_dev->use_dma) {
+		if (i2c_dev->use_dma && !f7_msg->result) {
 			ret = IRQ_WAKE_THREAD;
 		} else {
 			i2c_dev->master_mode = false;
@@ -1525,7 +1530,7 @@ static irqreturn_t stm32f7_i2c_isr_event(int irq, void *data)
 		if (f7_msg->stop) {
 			mask = STM32F7_I2C_CR2_STOP;
 			stm32f7_i2c_set_bits(base + STM32F7_I2C_CR2, mask);
-		} else if (i2c_dev->use_dma) {
+		} else if (i2c_dev->use_dma && !f7_msg->result) {
 			ret = IRQ_WAKE_THREAD;
 		} else if (f7_msg->smbus) {
 			stm32f7_i2c_smbus_rep_start(i2c_dev);
@@ -1665,12 +1670,23 @@ static int stm32f7_i2c_xfer(struct i2c_adapter *i2c_adap,
 	time_left = wait_for_completion_timeout(&i2c_dev->complete,
 						i2c_dev->adap.timeout);
 	ret = f7_msg->result;
+	if (ret) {
+		/*
+		 * It is possible that some unsent data have already been
+		 * written into TXDR. To avoid sending old data in a
+		 * further transfer, flush TXDR in case of any error
+		 */
+		writel_relaxed(STM32F7_I2C_ISR_TXE,
+			       i2c_dev->base + STM32F7_I2C_ISR);
+		goto pm_free;
+	}
 
 	if (!time_left) {
 		dev_dbg(i2c_dev->dev, "Access to slave 0x%x timed out\n",
 			i2c_dev->msg->addr);
 		if (i2c_dev->use_dma)
 			dmaengine_terminate_all(dma->chan_using);
+		stm32f7_i2c_wait_free_bus(i2c_dev);
 		ret = -ETIMEDOUT;
 	}
 
@@ -1713,13 +1729,22 @@ static int stm32f7_i2c_smbus_xfer(struct i2c_adapter *adapter, u16 addr,
 	timeout = wait_for_completion_timeout(&i2c_dev->complete,
 					      i2c_dev->adap.timeout);
 	ret = f7_msg->result;
-	if (ret)
+	if (ret) {
+		/*
+		 * It is possible that some unsent data have already been
+		 * written into TXDR. To avoid sending old data in a
+		 * further transfer, flush TXDR in case of any error
+		 */
+		writel_relaxed(STM32F7_I2C_ISR_TXE,
+			       i2c_dev->base + STM32F7_I2C_ISR);
 		goto pm_free;
+	}
 
 	if (!timeout) {
 		dev_dbg(dev, "Access to slave 0x%x timed out\n", f7_msg->addr);
 		if (i2c_dev->use_dma)
 			dmaengine_terminate_all(dma->chan_using);
+		stm32f7_i2c_wait_free_bus(i2c_dev);
 		ret = -ETIMEDOUT;
 		goto pm_free;
 	}
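
Note: on this controller, software can write 1 to the TXE bit of I2C_ISR to flush a byte already staged in I2C_TXDR. Since the same flush now appears in both the I2C and SMBus paths, it could be factored out; a hypothetical helper (not part of the patch):

	/* Hypothetical helper: discard any byte already staged in TXDR so it
	 * cannot leak into the transfer that follows a failed one. */
	static void stm32f7_i2c_flush_txdr(struct stm32f7_i2c_dev *i2c_dev)
	{
		writel_relaxed(STM32F7_I2C_ISR_TXE, i2c_dev->base + STM32F7_I2C_ISR);
	}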


@@ -1435,8 +1435,7 @@ static int kxcjk1013_probe(struct i2c_client *client,
 	return 0;
 
 err_buffer_cleanup:
-	if (data->dready_trig)
-		iio_triggered_buffer_cleanup(indio_dev);
+	iio_triggered_buffer_cleanup(indio_dev);
 err_trigger_unregister:
 	if (data->dready_trig)
 		iio_trigger_unregister(data->dready_trig);
@@ -1459,8 +1458,8 @@ static int kxcjk1013_remove(struct i2c_client *client)
 	pm_runtime_set_suspended(&client->dev);
 	pm_runtime_put_noidle(&client->dev);
 
+	iio_triggered_buffer_cleanup(indio_dev);
 	if (data->dready_trig) {
-		iio_triggered_buffer_cleanup(indio_dev);
 		iio_trigger_unregister(data->dready_trig);
 		iio_trigger_unregister(data->motion_trig);
 	}


@@ -224,14 +224,14 @@ static irqreturn_t kxsd9_trigger_handler(int irq, void *p)
 			   hw_values.chan,
 			   sizeof(hw_values.chan));
 	if (ret) {
-		dev_err(st->dev,
-			"error reading data\n");
-		return ret;
+		dev_err(st->dev, "error reading data: %d\n", ret);
+		goto out;
 	}
 
 	iio_push_to_buffers_with_timestamp(indio_dev,
 					   &hw_values,
 					   iio_get_time_ns(indio_dev));
+out:
 	iio_trigger_notify_done(indio_dev->trig);
 
 	return IRQ_HANDLED;
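
Note: the kxsd9 bug illustrates a pattern shared by several IIO fixes in this merge: a pollfunc handler must always call iio_trigger_notify_done() before returning, even on error, or the trigger core stays blocked; the old code even returned a negative errno from an irqreturn_t function. A minimal sketch of the correct shape (my_read_hw and the scan layout are illustrative, not from any of these drivers):

	static irqreturn_t my_trigger_handler(int irq, void *p)
	{
		struct iio_poll_func *pf = p;
		struct iio_dev *indio_dev = pf->indio_dev;
		struct { s16 chan[3]; s64 ts __aligned(8); } scan = { };

		if (my_read_hw(indio_dev, scan.chan, sizeof(scan.chan)) < 0)
			goto out;	/* no early return: the trigger would stall */

		iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
	out:
		iio_trigger_notify_done(indio_dev->trig);
		return IRQ_HANDLED;
	}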


@@ -1473,7 +1473,7 @@ static int mma8452_trigger_setup(struct iio_dev *indio_dev)
 	if (ret)
 		return ret;
 
-	indio_dev->trig = trig;
+	indio_dev->trig = iio_trigger_get(trig);
 
 	return 0;
 }


@@ -470,8 +470,8 @@ static irqreturn_t ad7768_trigger_handler(int irq, void *p)
 	iio_push_to_buffers_with_timestamp(indio_dev, &st->data.scan,
 					   iio_get_time_ns(indio_dev));
 
-	iio_trigger_notify_done(indio_dev->trig);
 err_unlock:
+	iio_trigger_notify_done(indio_dev->trig);
 	mutex_unlock(&st->lock);
 
 	return IRQ_HANDLED;


@@ -1401,7 +1401,8 @@ static int at91_adc_read_info_raw(struct iio_dev *indio_dev,
 		*val = st->conversion_value;
 		ret = at91_adc_adjust_val_osr(st, val);
 		if (chan->scan_type.sign == 's')
-			*val = sign_extend32(*val, 11);
+			*val = sign_extend32(*val,
+					     chan->scan_type.realbits - 1);
 		st->conversion_done = false;
 	}


@@ -251,19 +251,8 @@ static int axp22x_adc_raw(struct iio_dev *indio_dev,
 			  struct iio_chan_spec const *chan, int *val)
 {
 	struct axp20x_adc_iio *info = iio_priv(indio_dev);
-	int size;
 
-	/*
-	 * N.B.: Unlike the Chinese datasheets tell, the charging current is
-	 * stored on 12 bits, not 13 bits. Only discharging current is on 13
-	 * bits.
-	 */
-	if (chan->type == IIO_CURRENT && chan->channel == AXP22X_BATT_DISCHRG_I)
-		size = 13;
-	else
-		size = 12;
-
-	*val = axp20x_read_variable_width(info->regmap, chan->address, size);
+	*val = axp20x_read_variable_width(info->regmap, chan->address, 12);
 	if (*val < 0)
 		return *val;
@@ -386,9 +375,8 @@ static int axp22x_adc_scale(struct iio_chan_spec const *chan, int *val,
 		return IIO_VAL_INT_PLUS_MICRO;
 
 	case IIO_CURRENT:
-		*val = 0;
-		*val2 = 500000;
-		return IIO_VAL_INT_PLUS_MICRO;
+		*val = 1;
+		return IIO_VAL_INT;
 
 	case IIO_TEMP:
 		*val = 100;

@@ -248,7 +248,6 @@ static int dln2_adc_set_chan_period(struct dln2_adc *dln2,
 static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel)
 {
 	int ret, i;
-	struct iio_dev *indio_dev = platform_get_drvdata(dln2->pdev);
 	u16 conflict;
 	__le16 value;
 	int olen = sizeof(value);
@@ -257,13 +256,9 @@ static int dln2_adc_read(struct dln2_adc *dln2, unsigned int channel)
 		.chan = channel,
 	};
 
-	ret = iio_device_claim_direct_mode(indio_dev);
-	if (ret < 0)
-		return ret;
-
 	ret = dln2_adc_set_chan_enabled(dln2, channel, true);
 	if (ret < 0)
-		goto release_direct;
+		return ret;
 
 	ret = dln2_adc_set_port_enabled(dln2, true, &conflict);
 	if (ret < 0) {
@@ -300,8 +295,6 @@ disable_port:
 	dln2_adc_set_port_enabled(dln2, false, NULL);
 disable_chan:
 	dln2_adc_set_chan_enabled(dln2, channel, false);
-release_direct:
-	iio_device_release_direct_mode(indio_dev);
 
 	return ret;
 }
@@ -337,10 +330,16 @@ static int dln2_adc_read_raw(struct iio_dev *indio_dev,
 	switch (mask) {
 	case IIO_CHAN_INFO_RAW:
+		ret = iio_device_claim_direct_mode(indio_dev);
+		if (ret < 0)
+			return ret;
+
 		mutex_lock(&dln2->mutex);
 		ret = dln2_adc_read(dln2, chan->channel);
 		mutex_unlock(&dln2->mutex);
 
+		iio_device_release_direct_mode(indio_dev);
+
 		if (ret < 0)
 			return ret;
@@ -655,7 +654,11 @@ static int dln2_adc_probe(struct platform_device *pdev)
 		return -ENOMEM;
 	}
 	iio_trigger_set_drvdata(dln2->trig, dln2);
-	devm_iio_trigger_register(dev, dln2->trig);
+	ret = devm_iio_trigger_register(dev, dln2->trig);
+	if (ret) {
+		dev_err(dev, "failed to register trigger: %d\n", ret);
+		return ret;
+	}
 	iio_trigger_set_immutable(indio_dev, dln2->trig);
 
 	ret = devm_iio_triggered_buffer_setup(dev, indio_dev, NULL,
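
Note: moving iio_device_claim_direct_mode() out to read_raw() restores the usual claim/do/release bracket, the IIO pattern that keeps one-shot reads from racing a running buffered capture. A sketch of the canonical shape (my_hw_read is hypothetical):

	static int my_read_raw(struct iio_dev *indio_dev,
			       struct iio_chan_spec const *chan,
			       int *val, int *val2, long mask)
	{
		int ret;

		ret = iio_device_claim_direct_mode(indio_dev);
		if (ret)
			return ret;	/* e.g. -EBUSY while a buffer is running */

		ret = my_hw_read(indio_dev, chan->channel, val);	/* hypothetical */

		iio_device_release_direct_mode(indio_dev);
		return ret < 0 ? ret : IIO_VAL_INT;
	}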


@@ -979,6 +979,7 @@ static void stm32h7_adc_unprepare(struct iio_dev *indio_dev)
 {
 	struct stm32_adc *adc = iio_priv(indio_dev);
 
+	stm32_adc_writel(adc, STM32H7_ADC_PCSEL, 0);
 	stm32h7_adc_disable(indio_dev);
 	stm32h7_adc_enter_pwr_down(adc);
 }


@@ -7,6 +7,7 @@
  */
 
 #include <linux/bitfield.h>
+#include <linux/bitops.h>
 #include <linux/delay.h>
 #include <linux/device.h>
 #include <linux/kernel.h>
@@ -124,7 +125,7 @@ static int adxrs290_get_rate_data(struct iio_dev *indio_dev, const u8 cmd, int *
 		goto err_unlock;
 	}
 
-	*val = temp;
+	*val = sign_extend32(temp, 15);
 
 err_unlock:
 	mutex_unlock(&st->lock);
@@ -146,7 +147,7 @@ static int adxrs290_get_temp_data(struct iio_dev *indio_dev, int *val)
 	}
 
 	/* extract lower 12 bits temperature reading */
-	*val = temp & 0x0FFF;
+	*val = sign_extend32(temp, 11);
 
 err_unlock:
 	mutex_unlock(&st->lock);


@@ -61,9 +61,9 @@ static irqreturn_t itg3200_trigger_handler(int irq, void *p)
 
 	iio_push_to_buffers_with_timestamp(indio_dev, &scan, pf->timestamp);
 
+error_ret:
 	iio_trigger_notify_done(indio_dev->trig);
 
-error_ret:
 	return IRQ_HANDLED;
 }
