Compare commits

...

47 Commits

Author SHA1 Message Date
Naushir Patuck
889e42c151 drivers: i2c: imx477: Use pm_runtime_use_autosuspend()
Switch the power management in the imx477 device driver to use auto-
suspend with a 5s timeout.

This improves mode switching time by avoiding additional regulator
switch-on delays and common register I2C writes.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
2025-11-03 15:02:57 +00:00
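
A minimal sketch of the autosuspend setup described above, as it typically
looks in an I2C sensor driver's probe and access paths; the 5000 ms delay
matches the commit text, but the surrounding calls are the generic runtime-PM
pattern rather than the actual imx477.c code:

    /* in probe(), once the sensor is powered and identified */
    pm_runtime_set_active(&client->dev);
    pm_runtime_enable(&client->dev);
    pm_runtime_set_autosuspend_delay(&client->dev, 5000);   /* 5 s timeout */
    pm_runtime_use_autosuspend(&client->dev);

    /* when an access finishes, instead of a plain pm_runtime_put() */
    pm_runtime_mark_last_busy(&client->dev);
    pm_runtime_put_autosuspend(&client->dev);
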
Jai Luthra
203ead2371 staging: vc04_services: vc-sm-cma: Fix smatch warnings
Fix these two smatch warnings for the vc-sm-cma driver; the rest were false
positives:

../vc-sm-cma/vc_sm.c:413
vc_sm_dma_buf_attach() warn: inconsistent returns '&buf->lock'.
  Locked on  : 396
  Unlocked on: 413
../vc-sm-cma/vc_sm.c:1225
vc_sm_cma_ioctl_alloc() error: we previously assumed 'buffer' could be
null (see line 1113)

Signed-off-by: Jai Luthra <jai.luthra@ideasonboard.com>
2025-11-03 13:26:25 +00:00
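
The first warning is the usual "inconsistent returns" pattern: one error path
returns with the mutex still held. A generic sketch of the shape of such a fix
(the helper name is hypothetical, not the actual vc_sm.c code):

    mutex_lock(&buf->lock);
    ret = setup_attachment(buf, attach);    /* hypothetical helper */
    if (ret) {
            mutex_unlock(&buf->lock);       /* the unlock smatch flagged as missing */
            return ret;
    }
    mutex_unlock(&buf->lock);
    return 0;
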
Dave Stevenson
61d338afc9 media: imx477: Increase IMX477_VBLANK_MIN due to image corruption
The 4056x3040 mode appears to need more vertical blanking lines
than any other, leaving a black bar at the bottom of the image.

Increase IMX477_VBLANK_MIN from 4 to 48 to compensate. (It may be
possible to reduce it slightly further, but fix the regression
for now).

https://github.com/raspberrypi/linux/issues/7109

Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.com>
2025-11-03 13:25:21 +00:00
Dave Stevenson
f55958ec22 drm/vc4: plane: Swap Cb/Cr pointers for YVU formats
hvs6 appears to have dropped support for the component order
field in 3-plane YUV formats.

Support them by swapping the Cb and Cr planes over when
reading the image pointers.

Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.com>
2025-11-03 13:24:44 +00:00
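
A sketch of the swap described above; only DRM_FORMAT_YVU420 and the kernel's
swap() helper are real names, the variables are illustrative rather than the
actual vc4_plane.c code:

    dma_addr_t chroma0 = paddr[1];   /* Cb for the YUV plane order */
    dma_addr_t chroma1 = paddr[2];   /* Cr for the YUV plane order */

    /* YVU formats store Cr before Cb; since hvs6 ignores the component
     * order field, hand the chroma pointers to the hardware swapped. */
    if (fb->format->format == DRM_FORMAT_YVU420)
            swap(chroma0, chroma1);
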
Dom Cobley
eaa5a91ed1 Merge remote-tracking branch 'stable/linux-6.12.y' into rpi-6.12.y 2025-11-03 12:33:49 +00:00
Greg Kroah-Hartman
8a243ecde1 Linux 6.12.57
Link: https://lore.kernel.org/r/20251031140043.939381518@linuxfoundation.org
Tested-by: Peter Schneider <pschneider1968@googlemail.com>
Tested-by: Brett Mastbergen <bmastbergen@ciq.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Tested-by: Pavel Machek (CIP) <pavel@denx.de>
Tested-by: Shuah Khan <skhan@linuxfoundation.org>
Tested-by: Dileep Malepu <dileep.debian@gmail.com>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Tested-by: Ron Economos <re@w6rz.net>
Tested-by: Brett A C Sheffield <bacs@librecast.net>
Tested-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:23 +09:00
Dan Carpenter
800101f6ab btrfs: tree-checker: fix bounds check in check_inode_extref()
commit e92c294120 upstream.

The parentheses for the unlikely() annotation were put in the wrong
place so it means that the condition is basically never true and the
bounds checking is skipped.

Fixes: aab9458b9f ("btrfs: tree-checker: add inode extref checks")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:23 +09:00
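
The effect of the misplaced parentheses: unlikely() evaluates to 0 or 1, so
the comparison against the item end effectively never fires. An illustrative
form of the bug and the fix (not the literal check_inode_extref() expression):

    /* broken: unlikely() closes before the comparison, so this is '1 > end' */
    if (unlikely(ptr + sizeof(*extref)) > end)
            return -EUCLEAN;

    /* fixed: the whole bounds check sits inside unlikely() */
    if (unlikely(ptr + sizeof(*extref) > end))
            return -EUCLEAN;
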
Edward Cree
f21623b844 sfc: fix NULL dereferences in ef100_process_design_param()
[ Upstream commit 8241ecec1c ]

Since the cited commit, ef100_probe_main() and hence also
 ef100_check_design_params() run before efx->net_dev is created;
 consequently, we cannot netif_set_tso_max_size() or _segs() at this
 point.
Move those netif calls to ef100_probe_netdev(), and also replace
 netif_err within the design params code with pci_err.

Reported-by: Kyungwook Boo <bookyungwook@gmail.com>
Fixes: 98ff4c7c8a ("sfc: Separate netdev probe/remove from PCI probe/remove")
Signed-off-by: Edward Cree <ecree.xilinx@gmail.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Link: https://patch.msgid.link/20250401225439.2401047-1-edward.cree@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Amelia Crate <acrate@waldn.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:23 +09:00
Xiaogang Chen
29b65a3171 udmabuf: fix a buf size overflow issue during udmabuf creation
[ Upstream commit 021ba7f1ba ]

by casting size_limit_mb to u64 when calculating pglimit.

Signed-off-by: Xiaogang Chen <Xiaogang.Chen@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20250321164126.329638-1-xiaogang.chen@amd.com
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Amelia Crate <acrate@waldn.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:23 +09:00
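
The one-line fix appears in the diff further down; the point is that the
multiply is done in 32-bit arithmetic before the shift, so a large
size_limit_mb wraps around unless the first operand is widened:

    /* before: size_limit_mb * 1024 * 1024 can overflow 32 bits */
    pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;

    /* after: force the whole expression into 64-bit arithmetic */
    pglimit = ((u64)size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
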
Aditya Kumar Singh
57100b87c7 wifi: ath12k: fix read pointer after free in ath12k_mac_assign_vif_to_vdev()
[ Upstream commit 5a10971c76 ]

In ath12k_mac_assign_vif_to_vdev(), if arvif is created on a different
radio, it gets deleted from that radio through a call to
ath12k_mac_unassign_link_vif(). This action frees the arvif pointer.
Subsequently, there is a check involving arvif, which will result in a
read-after-free scenario.

Fix this by moving the check to after arvif is assigned again via a call to
ath12k_mac_assign_link_vif().

Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.3.1-00173-QCAHKSWPL_SILICONZ-1

Closes: https://scan5.scan.coverity.com/#/project-view/63541/10063?selectedIssue=1636423
Fixes: b5068bc918 ("wifi: ath12k: Cache vdev configs before vdev create")
Signed-off-by: Aditya Kumar Singh <quic_adisi@quicinc.com>
Acked-by: Jeff Johnson <jeff.johnson@oss.qualcomm.com>
Acked-by: Kalle Valo <kvalo@kernel.org>
Link: https://patch.msgid.link/20241210-read_after_free-v1-1-969f69c7d66c@quicinc.com
Signed-off-by: Jeff Johnson <jeff.johnson@oss.qualcomm.com>
Signed-off-by: Amelia Crate <acrate@waldn.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:23 +09:00
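
Schematically, the reordering looks like this; the helper names come from the
commit message, while the condition, arguments and error handling are
simplified placeholders:

    /* before: arvif is freed by the unassign path, then still dereferenced */
    ath12k_mac_unassign_link_vif(arvif);              /* frees arvif */
    if (arvif->is_created)                            /* read-after-free */
            return ret;
    arvif = ath12k_mac_assign_link_vif(ah, vif, link_id);

    /* after: only test arvif once it points at the newly assigned object */
    ath12k_mac_unassign_link_vif(arvif);
    arvif = ath12k_mac_assign_link_vif(ah, vif, link_id);
    if (arvif->is_created)
            return ret;
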
Kees Bakker
68ec78beb4 iommu/vt-d: Avoid use of NULL after WARN_ON_ONCE
[ Upstream commit 60f030f741 ]

There is a WARN_ON_ONCE to catch an unlikely situation when
domain_remove_dev_pasid can't find the `pasid`. In case it nevertheless
happens, we must avoid using a NULL pointer.

Signed-off-by: Kees Bakker <kees@ijzerbout.nl>
Link: https://lore.kernel.org/r/20241218201048.E544818E57E@bout3.ijzerbout.nl
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Amelia Crate <acrate@waldn.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:23 +09:00
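
In code terms the fix is small; the variable name below is illustrative, but
the pattern of warning once and bailing out instead of dereferencing is what
the commit describes:

    /* in domain_remove_dev_pasid(), after searching for the pasid entry */
    if (WARN_ON_ONCE(!dev_pasid))
            goto out_unlock;   /* warn, but never dereference a NULL entry */
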
William Breathitt Gray
1e9ad8e566 gpio: idio-16: Define fixed direction of the GPIO lines
[ Upstream commit 2ba5772e53 ]

The direction of the IDIO-16 GPIO lines is fixed with the first 16 lines
as output and the remaining 16 lines as input. Set the gpio_config
fixed_direction_output member to represent the fixed direction of the
GPIO lines.

Fixes: db02247827 ("gpio: idio-16: Migrate to the regmap API")
Reported-by: Mark Cave-Ayland <mark.caveayland@nutanix.com>
Closes: https://lore.kernel.org/r/9b0375fd-235f-4ee1-a7fa-daca296ef6bf@nutanix.com
Suggested-by: Michael Walle <mwalle@kernel.org>
Cc: stable@vger.kernel.org # ae495810cffe: gpio: regmap: add the .fixed_direction_output configuration parameter
Cc: stable@vger.kernel.org
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: William Breathitt Gray <wbg@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20251020-fix-gpio-idio-16-regmap-v2-3-ebeb50e93c33@kernel.org
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Signed-off-by: William Breathitt Gray <wbg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Ioana Ciornei
0537e524fe gpio: regmap: add the .fixed_direction_output configuration parameter
[ Upstream commit 00aaae60fa ]

There are GPIO controllers such as the one present in the LX2160ARDB
QIXIS FPGA which have fixed-direction input and output GPIO lines mixed
together in a single register. This cannot be modeled using the
gpio-regmap as-is since there is no way to present the true direction of
a GPIO line.

In order to make this use case possible, add a new configuration
parameter - fixed_direction_output - into the gpio_regmap_config
structure. This will enable user drivers to provide a bitmap that
represents the fixed direction of the GPIO lines.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Acked-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Michael Walle <mwalle@kernel.org>
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Stable-dep-of: 2ba5772e53 ("gpio: idio-16: Define fixed direction of the GPIO lines")
Signed-off-by: William Breathitt Gray <wbg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
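
From the user-driver side the new parameter is just a bitmap handed over in
the config; this mirrors the idio-16 hunk further down in the diff (the
32-line count here is illustrative):

    DECLARE_BITMAP(fixed_direction_output, 32);

    /* lines 0-15 are fixed outputs, the remaining lines are fixed inputs */
    bitmap_from_u64(fixed_direction_output, GENMASK_U64(15, 0));
    gpio_config.fixed_direction_output = fixed_direction_output;

    return PTR_ERR_OR_ZERO(devm_gpio_regmap_register(dev, &gpio_config));
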
Mathieu Dubois-Briand
512c19320c gpio: regmap: Allow to allocate regmap-irq device
[ Upstream commit 553b75d4bf ]

GPIO controllers often have IRQ support: allow both the gpio-regmap and
regmap-irq devices to be allocated easily in one operation.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Signed-off-by: Mathieu Dubois-Briand <mathieu.dubois-briand@bootlin.com>
Link: https://lore.kernel.org/r/20250824-mdb-max7360-support-v14-5-435cfda2b1ea@bootlin.com
Signed-off-by: Lee Jones <lee@kernel.org>
Stable-dep-of: 2ba5772e53 ("gpio: idio-16: Define fixed direction of the GPIO lines")
Signed-off-by: William Breathitt Gray <wbg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Vincent Mailhol
41e98f2789 bits: introduce fixed-type GENMASK_U*()
[ Upstream commit 19408200c0 ]

Add GENMASK_TYPE(), which generalizes __GENMASK() to support different
types, and implement fixed-type versions of GENMASK() based on it.
The fixed-type versions allow stricter checks on the min/max values
accepted, which is useful for defining registers like those implemented
by the i915 and xe drivers with their REG_GENMASK*() macros.

The strict checks rely on shift-count-overflow compiler check to fail
the build if a number outside of the range allowed is passed.
Example:

  #define FOO_MASK GENMASK_U32(33, 4)

will generate a warning like:

  include/linux/bits.h:51:27: error: right shift count >= width of type [-Werror=shift-count-overflow]
     51 |               type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h)))))
        |                           ^~

The result is cast to the corresponding fixed-width type. For
example, GENMASK_U8() returns a u8. Note that because of the C
promotion rules, GENMASK_U8() and GENMASK_U16() will immediately be
promoted to int if used in an expression. Regardless, the main goal is
not to get the correct type, but rather to enforce more checks at
compile time.

While GENMASK_TYPE() is crafted to cover all variants, including the
already existing GENMASK(), GENMASK_ULL() and GENMASK_U128(), for the
moment, only use it for the newly introduced GENMASK_U*(). The
consolidation will be done in a separate change.

Co-developed-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Stable-dep-of: 2ba5772e53 ("gpio: idio-16: Define fixed direction of the GPIO lines")
Signed-off-by: William Breathitt Gray <wbg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
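
A short usage sketch of the new macros (the register field names are made up):

    #define CTRL_MODE_MASK   GENMASK_U8(3, 0)     /* u8-typed mask, 0x0f */
    #define CTRL_FREQ_MASK   GENMASK_U32(23, 8)   /* u32-typed mask */

    /* GENMASK_U8(8, 0) or GENMASK_U32(33, 4) fail the build via
     * -Wshift-count-overflow because the high bit does not fit the type. */
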
Vincent Mailhol
17143f5b09 bits: add comments and newlines to #if, #else and #endif directives
[ Upstream commit 31299a5e02 ]

This is a preparation for the upcoming GENMASK_U*() and BIT_U*()
changes. After introducing those new macros, there will be a lot of
scrolling between the #if, #else and #endif.

Add a comment to the #else and #endif preprocessor directives to help keep
track of which context we are in. Also, add new lines to better
visually separate the non-asm and asm sections.

Signed-off-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Stable-dep-of: 2ba5772e53 ("gpio: idio-16: Define fixed direction of the GPIO lines")
Signed-off-by: William Breathitt Gray <wbg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Wang Liang
6f3af8055e bonding: check xdp prog when set bond mode
[ Upstream commit 094ee6017e ]

The following operations can trigger a warning [1]:

    ip netns add ns1
    ip netns exec ns1 ip link add bond0 type bond mode balance-rr
    ip netns exec ns1 ip link set dev bond0 xdp obj af_xdp_kern.o sec xdp
    ip netns exec ns1 ip link set bond0 type bond mode broadcast
    ip netns del ns1

When the namespace is deleted, dev_xdp_uninstall() is called to remove the
xdp program on the bond dev, and bond_xdp_set() will check the bond mode. If
the bond mode was changed after attaching the xdp program, the warning may
occur.

Some bond modes (broadcast, etc.) do not support native xdp. Setting a bond
mode with an xdp program attached is not good, so add a check for an attached
xdp program when setting the bond mode.

    [1]
    ------------[ cut here ]------------
    WARNING: CPU: 0 PID: 11 at net/core/dev.c:9912 unregister_netdevice_many_notify+0x8d9/0x930
    Modules linked in:
    CPU: 0 UID: 0 PID: 11 Comm: kworker/u4:0 Not tainted 6.14.0-rc4 #107
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
    Workqueue: netns cleanup_net
    RIP: 0010:unregister_netdevice_many_notify+0x8d9/0x930
    Code: 00 00 48 c7 c6 6f e3 a2 82 48 c7 c7 d0 b3 96 82 e8 9c 10 3e ...
    RSP: 0018:ffffc90000063d80 EFLAGS: 00000282
    RAX: 00000000ffffffa1 RBX: ffff888004959000 RCX: 00000000ffffdfff
    RDX: 0000000000000000 RSI: 00000000ffffffea RDI: ffffc90000063b48
    RBP: ffffc90000063e28 R08: ffffffff82d39b28 R09: 0000000000009ffb
    R10: 0000000000000175 R11: ffffffff82d09b40 R12: ffff8880049598e8
    R13: 0000000000000001 R14: dead000000000100 R15: ffffc90000045000
    FS:  0000000000000000(0000) GS:ffff888007a00000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 000000000d406b60 CR3: 000000000483e000 CR4: 00000000000006f0
    Call Trace:
     <TASK>
     ? __warn+0x83/0x130
     ? unregister_netdevice_many_notify+0x8d9/0x930
     ? report_bug+0x18e/0x1a0
     ? handle_bug+0x54/0x90
     ? exc_invalid_op+0x18/0x70
     ? asm_exc_invalid_op+0x1a/0x20
     ? unregister_netdevice_many_notify+0x8d9/0x930
     ? bond_net_exit_batch_rtnl+0x5c/0x90
     cleanup_net+0x237/0x3d0
     process_one_work+0x163/0x390
     worker_thread+0x293/0x3b0
     ? __pfx_worker_thread+0x10/0x10
     kthread+0xec/0x1e0
     ? __pfx_kthread+0x10/0x10
     ? __pfx_kthread+0x10/0x10
     ret_from_fork+0x2f/0x50
     ? __pfx_kthread+0x10/0x10
     ret_from_fork_asm+0x1a/0x30
     </TASK>
    ---[ end trace 0000000000000000 ]---

Fixes: 9e2ee5c7e7 ("net, bonding: Add XDP support to the bonding driver")
Signed-off-by: Wang Liang <wangliang74@huawei.com>
Acked-by: Jussi Maki <joamaki@gmail.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://patch.msgid.link/20250321044852.1086551-1-wangliang74@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Rajani Kantha <681739313@139.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
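
A heavily simplified sketch of the added check in the mode-set path;
bond->xdp_prog and bond_xdp_check() exist in this driver, but the exact call
site, message and error code below are illustrative only:

    /* when changing the mode, refuse combinations native XDP cannot handle */
    if (bond->xdp_prog && !bond_xdp_check(bond, newval->value)) {
            netdev_err(bond->dev,
                       "unable to set mode: XDP program attached\n");
            return -EOPNOTSUPP;
    }
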
Hangbin Liu
da82ac2a03 bonding: return detailed error when loading native XDP fails
[ Upstream commit 22ccb684c1 ]

Bonding only supports native XDP for specific modes, which can lead to
confusion for users regarding why XDP loads successfully at times and
fails at others. This patch enhances error handling by returning detailed
error messages, providing users with clearer insights into the specific
reasons for the failure when loading native XDP.

Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Link: https://patch.msgid.link/20241021031211.814-2-liuhangbin@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Rajani Kantha <681739313@139.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Alexander Wetzel
4a63523d35 wifi: cfg80211: Add missing lock in cfg80211_check_and_end_cac()
[ Upstream commit 2c5dee1523 ]

Callers of wdev_chandef() must hold the wiphy mutex.

But the worker cfg80211_propagate_cac_done_wk() never takes the lock,
which triggers the warning below with the mesh_peer_connected_dfs
test from hostapd and not (yet) released mac80211 code changes:

WARNING: CPU: 0 PID: 495 at net/wireless/chan.c:1552 wdev_chandef+0x60/0x165
Modules linked in:
CPU: 0 UID: 0 PID: 495 Comm: kworker/u4:2 Not tainted 6.14.0-rc5-wt-g03960e6f9d47 #33 13c287eeabfe1efea01c0bcc863723ab082e17cf
Workqueue: cfg80211 cfg80211_propagate_cac_done_wk
Stack:
 00000000 00000001 ffffff00 6093267c
 00000000 6002ec30 6d577c50 60037608
 00000000 67e8d108 6063717b 00000000
Call Trace:
 [<6002ec30>] ? _printk+0x0/0x98
 [<6003c2b3>] show_stack+0x10e/0x11a
 [<6002ec30>] ? _printk+0x0/0x98
 [<60037608>] dump_stack_lvl+0x71/0xb8
 [<6063717b>] ? wdev_chandef+0x60/0x165
 [<6003766d>] dump_stack+0x1e/0x20
 [<6005d1b7>] __warn+0x101/0x20f
 [<6005d3a8>] warn_slowpath_fmt+0xe3/0x15d
 [<600b0c5c>] ? mark_lock.part.0+0x0/0x4ec
 [<60751191>] ? __this_cpu_preempt_check+0x0/0x16
 [<600b11a2>] ? mark_held_locks+0x5a/0x6e
 [<6005d2c5>] ? warn_slowpath_fmt+0x0/0x15d
 [<60052e53>] ? unblock_signals+0x3a/0xe7
 [<60052f2d>] ? um_set_signals+0x2d/0x43
 [<60751191>] ? __this_cpu_preempt_check+0x0/0x16
 [<607508b2>] ? lock_is_held_type+0x207/0x21f
 [<6063717b>] wdev_chandef+0x60/0x165
 [<605f89b4>] regulatory_propagate_dfs_state+0x247/0x43f
 [<60052f00>] ? um_set_signals+0x0/0x43
 [<605e6bfd>] cfg80211_propagate_cac_done_wk+0x3a/0x4a
 [<6007e460>] process_scheduled_works+0x3bc/0x60e
 [<6007d0ec>] ? move_linked_works+0x4d/0x81
 [<6007d120>] ? assign_work+0x0/0xaa
 [<6007f81f>] worker_thread+0x220/0x2dc
 [<600786ef>] ? set_pf_worker+0x0/0x57
 [<60087c96>] ? to_kthread+0x0/0x43
 [<6008ab3c>] kthread+0x2d3/0x2e2
 [<6007f5ff>] ? worker_thread+0x0/0x2dc
 [<6006c05b>] ? calculate_sigpending+0x0/0x56
 [<6003b37d>] new_thread_handler+0x4a/0x64
irq event stamp: 614611
hardirqs last  enabled at (614621): [<00000000600bc96b>] __up_console_sem+0x82/0xaf
hardirqs last disabled at (614630): [<00000000600bc92c>] __up_console_sem+0x43/0xaf
softirqs last  enabled at (614268): [<00000000606c55c6>] __ieee80211_wake_queue+0x933/0x985
softirqs last disabled at (614266): [<00000000606c52d6>] __ieee80211_wake_queue+0x643/0x985

Fixes: 26ec17a1dc ("cfg80211: Fix radar event during another phy CAC")
Signed-off-by: Alexander Wetzel <Alexander@wetzel-home.de>
Link: https://patch.msgid.link/20250717162547.94582-1-Alexander@wetzel-home.de
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
[ The author recommends that when porting to older kernels, we should use wiphy_lock()
and wiphy_unlock() instead of guard(). This tip is mentioned in the link:
https://patch.msgid.link/20250717162547.94582-1-Alexander@wetzel-home.de. ]
Signed-off-by: Alva Lan <alvalan9@foxmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
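
Per the backport note, the shape of the fix on this branch is an explicit
lock/unlock pair inside cfg80211_check_and_end_cac(); the body is elided and
only sketched here:

    void cfg80211_check_and_end_cac(struct cfg80211_registered_device *rdev)
    {
            /* wdev_chandef() must be called with the wiphy mutex held; the
             * cfg80211_propagate_cac_done_wk worker does not take it, so
             * take it here (upstream uses guard(wiphy)(&rdev->wiphy)). */
            wiphy_lock(&rdev->wiphy);
            /* ... existing per-wdev CAC handling ... */
            wiphy_unlock(&rdev->wiphy);
    }
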
Chao Yu
2dda0930fb f2fs: fix to avoid panic once fallocation fails for pinfile
[ Upstream commit 48ea8b2004 ]

syzbot reports a f2fs bug as below:

------------[ cut here ]------------
kernel BUG at fs/f2fs/segment.c:2746!
CPU: 0 UID: 0 PID: 5323 Comm: syz.0.0 Not tainted 6.13.0-rc2-syzkaller-00018-g7cb1b4663150 #0
RIP: 0010:get_new_segment fs/f2fs/segment.c:2746 [inline]
RIP: 0010:new_curseg+0x1f52/0x1f70 fs/f2fs/segment.c:2876
Call Trace:
 <TASK>
 __allocate_new_segment+0x1ce/0x940 fs/f2fs/segment.c:3210
 f2fs_allocate_new_section fs/f2fs/segment.c:3224 [inline]
 f2fs_allocate_pinning_section+0xfa/0x4e0 fs/f2fs/segment.c:3238
 f2fs_expand_inode_data+0x696/0xca0 fs/f2fs/file.c:1830
 f2fs_fallocate+0x537/0xa10 fs/f2fs/file.c:1940
 vfs_fallocate+0x569/0x6e0 fs/open.c:327
 do_vfs_ioctl+0x258c/0x2e40 fs/ioctl.c:885
 __do_sys_ioctl fs/ioctl.c:904 [inline]
 __se_sys_ioctl+0x80/0x170 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Concurrent pinfile allocation may run out of free sections, resulting in a
panic in get_new_segment(). Let's expand pin_sem lock coverage to
include f2fs_gc(), so that we can make sure to reclaim enough free
space for the following allocation.

In addition, make the changes below to enhance error path handling:
- call f2fs_bug_on() only in non-pinfile allocation path in
get_new_segment().
- call reset_curseg_fields() to reset all fields of curseg in
new_curseg()

Fixes: f5a53edcf0 ("f2fs: support aligned pinned file")
Reported-by: syzbot+15669ec8c35ddf6c3d43@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-f2fs-devel/675cd64e.050a0220.37aaf.00bb.GAE@google.com
Signed-off-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Rajani Kantha <681739313@139.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Matthieu Baerts (NGI0)
bdb0e04154 mptcp: pm: in-kernel: C-flag: handle late ADD_ADDR
[ Upstream commit e84cb860ac ]

The special C-flag case expects the ADD_ADDR to be received when
switching to 'fully-established'. But for various reasons, the ADD_ADDR
could be sent after the "4th ACK", and the special case doesn't work.

On NIPA, the new test validating this special case for the C-flag failed
a few times, e.g.

  102 default limits, server deny join id 0
        syn rx                 [FAIL] got 0 JOIN[s] syn rx expected 2

  Server ns stats
  (...)
  MPTcpExtAddAddrTx  1
  MPTcpExtEchoAdd    1

  Client ns stats
  (...)
  MPTcpExtAddAddr    1
  MPTcpExtEchoAddTx  1

        synack rx              [FAIL] got 0 JOIN[s] synack rx expected 2
        ack rx                 [FAIL] got 0 JOIN[s] ack rx expected 2
        join Rx                [FAIL] see above
        syn tx                 [FAIL] got 0 JOIN[s] syn tx expected 2
        join Tx                [FAIL] see above

I had a suspicion about what the issue could be: the ADD_ADDR might have
been received after the switch to the 'fully-established' state. The
issue was not easy to reproduce. The packet capture shown that the
ADD_ADDR can indeed be sent with a delay, and the client would not try
to establish subflows to it as expected.

A simple fix is not to mark the endpoints as 'used' in the C-flag case,
when looking at creating subflows to the remote initial IP address and
port. In this case, there is no need to try.

Note: newly added fullmesh endpoints will still continue to be used as
expected, thanks to the conditions behind mptcp_pm_add_addr_c_flag_case.

Fixes: 4b1ff850e0 ("mptcp: pm: in-kernel: usable client side with C-flag")
Cc: stable@vger.kernel.org
Reviewed-by: Geliang Tang <geliang@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Link: https://patch.msgid.link/20251020-net-mptcp-c-flag-late-add-addr-v1-1-8207030cb0e8@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
[ applied to pm_netlink.c instead of pm_kernel.c ]
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Matthieu Baerts (NGI0)
dbd0aa9456 selftests: mptcp: join: mark 'delete re-add signal' as skipped if not supported
[ Upstream commit c3496c052a ]

The call to 'continue_if' was missing: it properly marks a subtest as
'skipped' if the attached condition is not valid.

Without that, the test is wrongly marked as passed on older kernels.

Fixes: b5e2fb832f ("selftests: mptcp: add explicit test case for remove/readd")
Cc: stable@vger.kernel.org
Reviewed-by: Geliang Tang <geliang@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Link: https://patch.msgid.link/20251020-net-mptcp-c-flag-late-add-addr-v1-4-8207030cb0e8@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Geliang Tang
e762ddf34f selftests: mptcp: disable add_addr retrans in endpoint_tests
[ Upstream commit f92199f551 ]

To prevent test instability in the "delete re-add signal" test caused by
ADD_ADDR retransmissions, disable retransmissions for this test by setting
net.mptcp.add_addr_timeout to 0.

Suggested-by: Matthieu Baerts <matttbe@kernel.org>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Link: https://patch.msgid.link/20250815-net-mptcp-misc-fixes-6-17-rc2-v1-6-521fe9957892@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Stable-dep-of: c3496c052a ("selftests: mptcp: join: mark 'delete re-add signal' as skipped if not supported")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Jonathan Corbet
0e0bdcea10 docs: kdoc: handle the obsolescence of docutils.ErrorString()
commit 00d95fcc4d upstream.

The ErrorString() and SafeString() docutils functions were helpers meant to
ease the handling of encodings during the Python 3 transition.  There is no
real need for them after Python 3.6, and docutils 0.22 removes them,
breaking the docs build.

Handle this by just injecting our own one-liner version of ErrorString(),
and removing the sole SafeString() call entirely.

Reported-by: Zhixu Liu <zhixu.liu@gmail.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Message-ID: <87ldmnv2pi.fsf@trenco.lwn.net>
[ Salvatore Bonaccorso: Backport to v6.17.y for context changes in
  Documentation/sphinx/kernel_include.py with major refactorings for the v6.18
  development cycle. Backport ErrorString definition as well to
  Documentation/sphinx/kernel_abi.py file for 6.12.y where it is imported
  from docutils before the faccc0ec64 ("docs: sphinx/kernel_abi: adjust
  coding style") change. ]
Suggested-by: Andreas Radke <andreas.radke@mailbox.org>
Signed-off-by: Salvatore Bonaccorso <carnil@debian.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:22 +09:00
Menglong Dong
cc89ac0ca5 arch: Add the macro COMPILE_OFFSETS to all the asm-offsets.c
[ Upstream commit 35561bab76 ]

The include/generated/asm-offsets.h is generated by Kbuild during
compilation from arch/SRCARCH/kernel/asm-offsets.c. When we want to
generate another similar offset header file, a circular dependency can
happen.

For example, say we want to generate an offset file include/generated/test.h,
which is included in include/sched/sched.h. If we generate asm-offsets.h
first, it will fail, as include/sched/sched.h is included in asm-offsets.c
and include/generated/test.h doesn't exist; if we generate test.h first,
it can't succeed either, as include/generated/asm-offsets.h is included
by it.

In x86_64, the macro COMPILE_OFFSETS is used to avoid such a circular
dependency: we can generate asm-offsets.h first, and if COMPILE_OFFSETS
is defined, we don't include "generated/test.h".

So define the macro COMPILE_OFFSETS in all the asm-offsets.c files for
this purpose.

Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:22 +09:00
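
The mechanism in miniature; test.h and include/sched/sched.h are the
hypothetical files from the commit message, not real kernel paths:

    /* arch/<arch>/kernel/asm-offsets.c -- this series adds the define */
    #define COMPILE_OFFSETS
    #include <linux/sched.h>

    /* hypothetical include/sched/sched.h */
    #ifndef COMPILE_OFFSETS
    #include <generated/test.h>   /* skipped while asm-offsets.h is generated */
    #endif
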
Tejun Heo
8297de569e sched_ext: Make qmap dump operation non-destructive
[ Upstream commit d452972858 ]

The qmap dump operation was destructively consuming queue entries while
displaying them. As dump can be triggered anytime, this can easily lead to
stalls. Add a temporary dump_store queue and modify the dump logic to pop
entries, display them, and then restore them back to the original queue.
This allows dump operations to be performed without affecting the
scheduler's queue state.

Note that if racing against new enqueues during dump, ordering can get
mixed up, but this is acceptable for debugging purposes.

Acked-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Filipe Manana
4270dc1e8d btrfs: use smp_mb__after_atomic() when forcing COW in create_pending_snapshot()
[ Upstream commit 45c222468d ]

After setting the BTRFS_ROOT_FORCE_COW flag on the root we are doing a
full write barrier, smp_wmb(), but we don't need to, all we need is a
smp_mb__after_atomic().  The use of the smp_wmb() is from the old days
when we didn't use a bit and used instead an int field in the root to
signal if cow is forced. After the int field was changed to a bit in
the root's state (flags field), we forgot to update the memory barrier
in create_pending_snapshot() to smp_mb__after_atomic(), but we did the
change in commit_fs_roots() after clearing BTRFS_ROOT_FORCE_COW. That
happened in commit 27cdeb7096 ("Btrfs: use bitfield instead of integer
data type for the some variants in btrfs_root"). On the reader side, in
should_cow_block(), we also use the counterpart smp_mb__before_atomic()
which generates further confusion.

So change the smp_wmb() to smp_mb__after_atomic(). In fact we don't
even need any barrier at all since create_pending_snapshot() is called
in the critical section of a transaction commit and therefore no one
can concurrently join/attach the transaction, or start a new one, until
the transaction is unblocked. By the time someone starts a new transaction
and enters should_cow_block(), a lot of implicit memory barriers already
took place by having acquired several locks such as fs_info->trans_lock
and extent buffer locks on the root node at least. Nevertheless, for
consistency use smp_mb__after_atomic() after setting the force cow bit
in create_pending_snapshot().

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
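
The resulting pairing, in two lines (the bit and field names are the ones the
commit message uses):

    set_bit(BTRFS_ROOT_FORCE_COW, &root->state);
    smp_mb__after_atomic();   /* pairs with smp_mb__before_atomic() in should_cow_block() */
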
Qu Wenruo
7db72c34ef btrfs: tree-checker: add inode extref checks
[ Upstream commit aab9458b9f ]

Like inode refs, inode extrefs have a variable length name, which means
we have to do a proper check to make sure no header nor name can exceed
the item limits.

The check itself is very similar to check_inode_ref(), just a different
structure (btrfs_inode_extref vs btrfs_inode_ref).

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Filipe Manana
32054a9216 btrfs: abort transaction if we fail to update inode in log replay dir fixup
[ Upstream commit 5a0565cad3 ]

If we fail to update the inode at link_to_fixup_dir(), we don't abort the
transaction and propagate the error up the call chain, which makes it hard
to pinpoint the error to the inode update. So abort the transaction if the
inode update call fails, so that if it happens we know immediately.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Filipe Manana
90542dc854 btrfs: use level argument in log tree walk callback replay_one_buffer()
[ Upstream commit 6cb7f0b8c9 ]

We already have the extent buffer's level in an argument, so there's no need
to first ensure the extent buffer's data is loaded (by calling
btrfs_read_extent_buffer()) and then call btrfs_header_level() to check
the level. So use the level argument and do the check before calling
btrfs_read_extent_buffer().

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Filipe Manana
403eb8a1ba btrfs: always drop log root tree reference in btrfs_replay_log()
[ Upstream commit 2f5b8095ea ]

Currently we have this odd behaviour:

1) At btrfs_replay_log() we drop the reference of the log root tree if
   the call to btrfs_recover_log_trees() failed;

2) But if the call to btrfs_recover_log_trees() did not fail, we don't
   drop the reference in btrfs_replay_log() - we expect that
   btrfs_recover_log_trees() does it in case it returns success.

Let's simplify this and make btrfs_replay_log() always drop the reference
on the log root tree. Not only does this simplify the code, it is also
what makes sense, since it's btrfs_replay_log() that grabbed the
reference in the first place.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Thorsten Blum
500784abb5 btrfs: scrub: replace max_t()/min_t() with clamp() in scrub_throttle_dev_io()
[ Upstream commit a7f3dfb829 ]

Replace max_t() followed by min_t() with a single clamp().

As was pointed out by David Laight in
https://lore.kernel.org/linux-btrfs/20250906122458.75dfc8f0@pumpkin/
the calculation may overflow u32 when the input value is too large, so
clamp_t() is not used.  In practice the expected values are in the range of
megabytes to gigabytes (the throughput limit), so the bug would not happen.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: David Sterba <dsterba@suse.com>
[ Use clamp() and add explanation. ]
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Naohiro Aota
cfc90c12a9 btrfs: zoned: refine extent allocator hint selection
[ Upstream commit 0d703963d2 ]

The hint block group selection in the extent allocator is wrong in the
first place, as it can select the dedicated data relocation block group for
the normal data allocation.

Since we separated the normal data space_info and the data relocation
space_info, we can easily identify whether a block group is for data
relocation or not. Do not choose such a block group for normal data
allocation.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Johannes Thumshirn
69a9df08eb btrfs: zoned: return error from btrfs_zone_finish_endio()
[ Upstream commit 3c44cd3c79 ]

Now that btrfs_zone_finish_endio_workfn() is directly calling
do_zone_finish() the only caller of btrfs_zone_finish_endio() is
btrfs_finish_one_ordered().

btrfs_finish_one_ordered() already has error handling in-place so
btrfs_zone_finish_endio() can return an error if the block group lookup
fails.

Also as btrfs_zone_finish_endio() already checks for zoned filesystems and
returns early, there's no need to do this in the caller.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Filipe Manana
630378d35b btrfs: abort transaction in the process_one_buffer() log tree walk callback
[ Upstream commit e6dd405b66 ]

In the process_one_buffer() log tree walk callback we return errors to the
log tree walk caller and then the caller aborts the transaction, if we
have one, or turns the fs into error state if we don't have one. While
this reduces code, it makes it harder to figure out where exactly an error
came from. So add the transaction aborts after every failure inside the
process_one_buffer() callback, so that it helps figure out why failures
happen.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Filipe Manana
3b0bcce1a2 btrfs: abort transaction on specific error places when walking log tree
[ Upstream commit 6ebd726b10 ]

We do several things while walking a log tree (for replaying and for
freeing a log tree) like reading extent buffers and cleaning them up,
but we don't immediately abort the transaction, or turn the fs into an
error state, when one of these things fails. Instead we do the transaction
abort or turn the fs into error state in the caller of the entry point
function that walks a log tree - walk_log_tree() - which means we don't
get to know exactly where an error came from.

Improve on this by doing a transaction abort / turn fs into error state
after each such failure so that when it happens we have a better
understanding where the failure comes from. This deliberately leaves
the transaction abort / turn fs into error state in the callers of
walk_log_tree() as to ensure we don't get into an inconsistent state in
case we forget to do it deeper in call chain. It also deliberately does
not do it after errors from the calls to the callback defined in
struct walk_control::process_func(), as we will do it later on another
patch.

Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Chen Ridong
cf9459ce31 cpuset: Use new excpus for nocpu error check when enabling root partition
[ Upstream commit 59d5de3655 ]

A previous patch fixed a bug where new_prs should be assigned before
checking housekeeping conflicts. This patch addresses another potential
issue: the nocpu error check currently uses the xcpus which is not updated.
Although no issue has been observed so far, the check should be performed
using the new effective exclusive cpus.

The comment has been removed because the function returns an error if
nocpu checking fails, which is unrelated to the parent.

Signed-off-by: Chen Ridong <chenridong@huawei.com>
Reviewed-by: Waiman Long <longman@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Avadhut Naik
e72270986a EDAC/mc_sysfs: Increase legacy channel support to 16
[ Upstream commit 6e1c2c6c2c ]

Newer AMD systems can support up to 16 channels per EDAC "mc" device.
These are detected by the EDAC module running on the device, and the
current EDAC interface is appropriately enumerated.

The legacy EDAC sysfs interface however, provides device attributes for
channels 0 through 11 only. Consequently, the last four channels, 12
through 15, will not be enumerated and will not be visible through the
legacy sysfs interface.

Add additional device attributes to ensure that all 16 channels, if
present, are enumerated by and visible through the legacy EDAC sysfs
interface.

Signed-off-by: Avadhut Naik <avadhut.naik@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250916203242.1281036-1-avadhut.naik@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
David Kaplan
537427cb38 x86/bugs: Fix reporting of LFENCE retpoline
[ Upstream commit d1cc1baef6 ]

The LFENCE retpoline mitigation is not secure but the kernel prints
inconsistent messages about this fact.  The dmesg log says 'Mitigation:
LFENCE', implying the system is mitigated.  But sysfs reports 'Vulnerable:
LFENCE' implying the system (correctly) is not mitigated.

Fix this by printing a consistent 'Vulnerable: LFENCE' string everywhere
when this mitigation is selected.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250915134706.3201818-1-david.kaplan@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
David Kaplan
65dc4615ed x86/bugs: Report correct retbleed mitigation status
[ Upstream commit 930f2361fe ]

On Intel CPUs, the default retbleed mitigation is IBRS/eIBRS but this
requires that a similar spectre_v2 mitigation is applied.  If the user
selects a different spectre_v2 mitigation (like spectre_v2=retpoline) a
warning is printed but sysfs will still report 'Mitigation: IBRS' or
'Mitigation: Enhanced IBRS'.  This is incorrect because retbleed is not
mitigated, and IBRS is not actually set.

Fix this by choosing RETBLEED_MITIGATION_NONE in this scenario so the
kernel correctly reports the system as vulnerable to retbleed.

Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250915134706.3201818-1-david.kaplan@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:21 +09:00
Jiri Olsa
1c0462f28b seccomp: passthrough uprobe systemcall without filtering
[ Upstream commit 89d1d8434d ]

Adding uprobe as another exception to the seccomp filter, alongside
the uretprobe syscall.

Same as uretprobe, the uprobe syscall is installed by the kernel as a
replacement for the breakpoint exception, is limited to the x86_64
arch, and isn't expected to ever be supported on i386.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250720112133.244369-21-jolsa@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:20 +09:00
Josh Poimboeuf
d6c55b581c perf: Skip user unwind if the task is a kernel thread
[ Upstream commit 16ed389227 ]

If the task is not a user thread, there's no user stack to unwind.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250820180428.930791978@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:20 +09:00
Josh Poimboeuf
8d33b133b8 perf: Have get_perf_callchain() return NULL if crosstask and user are set
[ Upstream commit 153f9e74de ]

get_perf_callchain() doesn't support cross-task unwinding for user space
stacks, so have it return NULL if both the crosstask and user arguments are
set.

Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250820180428.426423415@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:20 +09:00
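
The guard the commit above adds is essentially a two-line early return near
the top of get_perf_callchain(); the exact placement here is approximate:

    if (crosstask && user)
            return NULL;   /* cross-task user-stack unwinding is unsupported */
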
Steven Rostedt
8d79f96e47 perf: Use current->flags & PF_KTHREAD|PF_USER_WORKER instead of current->mm == NULL
[ Upstream commit 90942f9fac ]

To determine if a task is a kernel thread or not, it is more reliable to
use (current->flags & (PF_KTHREAD|PF_USER_WORKER)) than to rely on
current->mm being NULL.  That is because some kernel tasks (io_uring
helpers) may have a mm field.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250820180428.592367294@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:20 +09:00
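
The flag test the commit above describes, shown in isolation (the surrounding
perf code is omitted):

    /* kernel threads and user workers (e.g. io_uring helpers) have no user
     * stack to unwind, even though some of them carry a non-NULL mm */
    if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
            return;
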
Dapeng Mi
2aef7015d0 perf/x86/intel: Add ICL_FIXED_0_ADAPTIVE bit into INTEL_FIXED_BITS_MASK
[ Upstream commit 2676dbf9f4 ]

ICL_FIXED_0_ADAPTIVE was missed when INTEL_FIXED_BITS_MASK was defined;
add it.

With the help of this new INTEL_FIXED_BITS_MASK, intel_pmu_enable_fixed()
can be optimized: the old fixed counter control bits can be unconditionally
cleared with INTEL_FIXED_BITS_MASK and the new control bits then set based
on the new configuration.

Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Link: https://lore.kernel.org/r/20250820023032.17128-7-dapeng1.mi@linux.intel.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:20 +09:00
Richard Guy Briggs
9bdd948853 audit: record fanotify event regardless of presence of rules
[ Upstream commit ce8370e2e6 ]

When no audit rules are in place, fanotify event results are
unconditionally dropped due to an explicit check for the existence of
any audit rules.  Given this is a report from another security
sub-system, allow it to be recorded regardless of the existence of any
audit rules.

To test, install and run the fapolicyd daemon with default config.  Then
as an unprivileged user, create and run a very simple binary that should
be denied.  Then check for an event with
	ausearch -m FANOTIFY -ts recent

Link: https://issues.redhat.com/browse/RHEL-9065
Signed-off-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2025-11-02 22:15:20 +09:00
Xiang Mei
6ff8e74c8f net/sched: sch_qfq: Fix null-deref in agg_dequeue
commit dd831ac822 upstream.

To prevent a potential crash in agg_dequeue (net/sched/sch_qfq.c)
when cl->qdisc->ops->peek(cl->qdisc) returns NULL, we check the return
value before using it, similar to the existing approach in sch_hfsc.c.

To avoid code duplication, the following changes are made:

1. Changed qdisc_warn_nonwc(include/net/pkt_sched.h) into a static
inline function.

2. Moved qdisc_peek_len from net/sched/sch_hfsc.c to
include/net/pkt_sched.h so that sch_qfq can reuse it.

3. Applied qdisc_peek_len in agg_dequeue to avoid crashing.

Signed-off-by: Xiang Mei <xmei5@asu.edu>
Reviewed-by: Cong Wang <xiyou.wangcong@gmail.com>
Link: https://patch.msgid.link/20250705212143.3982664-1-xmei5@asu.edu
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-11-02 22:15:20 +09:00
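
For reference, the shared helper has roughly this shape (adapted from the
sch_hfsc version that the commit moves into include/net/pkt_sched.h):

    static inline unsigned int qdisc_peek_len(struct Qdisc *sch)
    {
            struct sk_buff *skb;

            skb = sch->ops->peek(sch);
            if (unlikely(!skb)) {
                    qdisc_warn_nonwc("qdisc_peek_len", sch);
                    return 0;
            }
            return qdisc_pkt_len(skb);
    }
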
70 changed files with 480 additions and 179 deletions

View File

@@ -42,9 +42,11 @@ import kernellog
from docutils import nodes, statemachine
from docutils.statemachine import ViewList
from docutils.parsers.rst import directives, Directive
from docutils.utils.error_reporting import ErrorString
from sphinx.util.docutils import switch_source_input
def ErrorString(exc): # Shamelessly stolen from docutils
    return f'{exc.__class__.__name__}: {exc}'
__version__ = '1.0'
def setup(app):

View File

@@ -40,9 +40,11 @@ import sys
from docutils import nodes, statemachine
from docutils.statemachine import ViewList
from docutils.parsers.rst import directives, Directive
from docutils.utils.error_reporting import ErrorString
from sphinx.util.docutils import switch_source_input
def ErrorString(exc): # Shamelessly stolen from docutils
    return f'{exc.__class__.__name__}: {exc}'
__version__ = '1.0'
def setup(app):

View File

@@ -34,13 +34,15 @@ u"""
import os.path
from docutils import io, nodes, statemachine
from docutils.utils.error_reporting import SafeString, ErrorString
from docutils.parsers.rst import directives
from docutils.parsers.rst.directives.body import CodeBlock, NumberLines
from docutils.parsers.rst.directives.misc import Include
__version__ = '1.0'
def ErrorString(exc): # Shamelessly stolen from docutils
    return f'{exc.__class__.__name__}: {exc}'
# ==============================================================================
def setup(app):
# ==============================================================================
@@ -111,7 +113,7 @@ class KernelInclude(Include):
raise self.severe('Problems with "%s" directive path:\n'
'Cannot encode input file path "%s" '
'(wrong locale?).' %
(self.name, SafeString(path)))
(self.name, path))
except IOError as error:
raise self.severe('Problems with "%s" directive path:\n%s.' %
(self.name, ErrorString(error)))

View File

@@ -22,10 +22,12 @@ import re
import os.path
from docutils import statemachine
from docutils.utils.error_reporting import ErrorString
from docutils.parsers.rst import Directive
from docutils.parsers.rst.directives.misc import Include
def ErrorString(exc): # Shamelessly stolen from docutils
    return f'{exc.__class__.__name__}: {exc}'
__version__ = '1.0'
def setup(app):

View File

@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 12
SUBLEVEL = 56
SUBLEVEL = 57
EXTRAVERSION =
NAME = Baby Opossum Posse

View File

@@ -4,6 +4,7 @@
* This code generates raw asm output which is post-processed to extract
* and format the required data.
*/
#define COMPILE_OFFSETS
#include <linux/types.h>
#include <linux/stddef.h>

View File

@@ -2,6 +2,7 @@
/*
* Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
*/
#define COMPILE_OFFSETS
#include <linux/sched.h>
#include <linux/mm.h>

View File

@@ -7,6 +7,8 @@
* This code generates raw asm output which is post-processed to extract
* and format the required data.
*/
#define COMPILE_OFFSETS
#include <linux/compiler.h>
#include <linux/sched.h>
#include <linux/mm.h>

View File

@@ -6,6 +6,7 @@
* 2001-2002 Keith Owens
* Copyright (C) 2012 ARM Ltd.
*/
#define COMPILE_OFFSETS
#include <linux/arm_sdei.h>
#include <linux/sched.h>

View File

@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#define COMPILE_OFFSETS
#include <linux/sched.h>
#include <linux/kernel_stat.h>

View File

@@ -8,6 +8,7 @@
*
* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
*/
#define COMPILE_OFFSETS
#include <linux/compat.h>
#include <linux/types.h>

View File

@@ -4,6 +4,8 @@
*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#define COMPILE_OFFSETS
#include <linux/types.h>
#include <linux/sched.h>
#include <linux/mm.h>

View File

@@ -9,6 +9,7 @@
* #defines from the assembly-language output.
*/
#define COMPILE_OFFSETS
#define ASM_OFFSETS_C
#include <linux/stddef.h>

View File

@@ -7,6 +7,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#define COMPILE_OFFSETS
#include <linux/init.h>
#include <linux/stddef.h>

View File

@@ -9,6 +9,8 @@
* Kevin Kissell, kevink@mips.com and Carsten Langgaard, carstenl@mips.com
* Copyright (C) 2000 MIPS Technologies, Inc.
*/
#define COMPILE_OFFSETS
#include <linux/compat.h>
#include <linux/types.h>
#include <linux/sched.h>

View File

@@ -2,6 +2,7 @@
/*
* Copyright (C) 2011 Tobias Klauser <tklauser@distanz.ch>
*/
#define COMPILE_OFFSETS
#include <linux/stddef.h>
#include <linux/sched.h>

View File

@@ -18,6 +18,7 @@
* compile this file to assembler, and then extract the
* #defines from the assembly-language output.
*/
#define COMPILE_OFFSETS
#include <linux/signal.h>
#include <linux/sched.h>

View File

@@ -13,6 +13,7 @@
* Copyright (C) 2002 Randolph Chung <tausq with parisc-linux.org>
* Copyright (C) 2003 James Bottomley <jejb at parisc-linux.org>
*/
#define COMPILE_OFFSETS
#include <linux/types.h>
#include <linux/sched.h>

View File

@@ -8,6 +8,7 @@
* compile this file to assembler, and then extract the
* #defines from the assembly-language output.
*/
#define COMPILE_OFFSETS
#include <linux/compat.h>
#include <linux/signal.h>

View File

@@ -3,6 +3,7 @@
* Copyright (C) 2012 Regents of the University of California
* Copyright (C) 2017 SiFive
*/
#define COMPILE_OFFSETS
#include <linux/kbuild.h>
#include <linux/mm.h>

View File

@@ -4,6 +4,7 @@
* This code generates raw asm output which is post-processed to extract
* and format the required data.
*/
#define COMPILE_OFFSETS
#define ASM_OFFSETS_C

View File

@@ -8,6 +8,7 @@
* compile this file to assembler, and then extract the
* #defines from the assembly-language output.
*/
#define COMPILE_OFFSETS
#include <linux/stddef.h>
#include <linux/types.h>

View File

@@ -10,6 +10,7 @@
*
* On sparc, thread_info data is static and TI_XXX offsets are computed by hand.
*/
#define COMPILE_OFFSETS
#include <linux/sched.h>
#include <linux/mm_types.h>

View File

@@ -1 +1,3 @@
#define COMPILE_OFFSETS
#include <sysdep/kernel-offsets.h>

View File

@@ -2812,8 +2812,8 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
u64 mask, bits = 0;
int idx = hwc->idx;
u64 bits = 0;
if (is_topdown_idx(idx)) {
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -2849,14 +2849,10 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
idx -= INTEL_PMC_IDX_FIXED;
bits = intel_fixed_bits_by_idx(idx, bits);
mask = intel_fixed_bits_by_idx(idx, INTEL_FIXED_BITS_MASK);
if (x86_pmu.intel_cap.pebs_baseline && event->attr.precise_ip) {
if (x86_pmu.intel_cap.pebs_baseline && event->attr.precise_ip)
bits |= intel_fixed_bits_by_idx(idx, ICL_FIXED_0_ADAPTIVE);
mask |= intel_fixed_bits_by_idx(idx, ICL_FIXED_0_ADAPTIVE);
}
cpuc->fixed_ctrl_val &= ~mask;
cpuc->fixed_ctrl_val &= ~intel_fixed_bits_by_idx(idx, INTEL_FIXED_BITS_MASK);
cpuc->fixed_ctrl_val |= bits;
}

View File

@@ -35,7 +35,6 @@
#define ARCH_PERFMON_EVENTSEL_EQ (1ULL << 36)
#define ARCH_PERFMON_EVENTSEL_UMASK2 (0xFFULL << 40)
#define INTEL_FIXED_BITS_MASK 0xFULL
#define INTEL_FIXED_BITS_STRIDE 4
#define INTEL_FIXED_0_KERNEL (1ULL << 0)
#define INTEL_FIXED_0_USER (1ULL << 1)
@@ -47,6 +46,11 @@
#define ICL_EVENTSEL_ADAPTIVE (1ULL << 34)
#define ICL_FIXED_0_ADAPTIVE (1ULL << 32)
#define INTEL_FIXED_BITS_MASK \
(INTEL_FIXED_0_KERNEL | INTEL_FIXED_0_USER | \
INTEL_FIXED_0_ANYTHREAD | INTEL_FIXED_0_ENABLE_PMI | \
ICL_FIXED_0_ADAPTIVE)
#define intel_fixed_bits_by_idx(_idx, _bits) \
((_bits) << ((_idx) * INTEL_FIXED_BITS_STRIDE))

View File

@@ -1186,8 +1186,10 @@ do_cmd_auto:
retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
break;
default:
if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF) {
pr_err(RETBLEED_INTEL_MSG);
retbleed_mitigation = RETBLEED_MITIGATION_NONE;
}
}
}
@@ -1596,7 +1598,7 @@ set_mode:
static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
[SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
[SPECTRE_V2_LFENCE] = "Vulnerable: LFENCE",
[SPECTRE_V2_EIBRS] = "Mitigation: Enhanced / Automatic IBRS",
[SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced / Automatic IBRS + LFENCE",
[SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced / Automatic IBRS + Retpolines",
@@ -3249,9 +3251,6 @@ static const char *spectre_bhi_state(void)
static ssize_t spectre_v2_show_state(char *buf)
{
if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
return sysfs_emit(buf, "Vulnerable: LFENCE\n");
if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
return sysfs_emit(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");

View File

@@ -13,7 +13,7 @@
#define MSR_IA32_MISC_ENABLE_PMU_RO_MASK (MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL | \
MSR_IA32_MISC_ENABLE_BTS_UNAVAIL)
/* retrieve the 4 bits for EN and PMI out of IA32_FIXED_CTR_CTRL */
/* retrieve a fixed counter bits out of IA32_FIXED_CTR_CTRL */
#define fixed_ctrl_field(ctrl_reg, idx) \
(((ctrl_reg) >> ((idx) * INTEL_FIXED_BITS_STRIDE)) & INTEL_FIXED_BITS_MASK)

View File

@@ -11,6 +11,7 @@
*
* Chris Zankel <chris@zankel.net>
*/
#define COMPILE_OFFSETS
#include <asm/processor.h>
#include <asm/coprocessor.h>

View File

@@ -350,7 +350,7 @@ static long udmabuf_create(struct miscdevice *device,
return -ENOMEM;
INIT_LIST_HEAD(&ubuf->unpin_list);
pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
pglimit = ((u64)size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
for (i = 0; i < head->count; i++) {
if (!PAGE_ALIGNED(list[i].offset))
goto err;

View File

@@ -305,6 +305,14 @@ DEVICE_CHANNEL(ch10_dimm_label, S_IRUGO | S_IWUSR,
channel_dimm_label_show, channel_dimm_label_store, 10);
DEVICE_CHANNEL(ch11_dimm_label, S_IRUGO | S_IWUSR,
channel_dimm_label_show, channel_dimm_label_store, 11);
DEVICE_CHANNEL(ch12_dimm_label, S_IRUGO | S_IWUSR,
channel_dimm_label_show, channel_dimm_label_store, 12);
DEVICE_CHANNEL(ch13_dimm_label, S_IRUGO | S_IWUSR,
channel_dimm_label_show, channel_dimm_label_store, 13);
DEVICE_CHANNEL(ch14_dimm_label, S_IRUGO | S_IWUSR,
channel_dimm_label_show, channel_dimm_label_store, 14);
DEVICE_CHANNEL(ch15_dimm_label, S_IRUGO | S_IWUSR,
channel_dimm_label_show, channel_dimm_label_store, 15);
/* Total possible dynamic DIMM Label attribute file table */
static struct attribute *dynamic_csrow_dimm_attr[] = {
@@ -320,6 +328,10 @@ static struct attribute *dynamic_csrow_dimm_attr[] = {
&dev_attr_legacy_ch9_dimm_label.attr.attr,
&dev_attr_legacy_ch10_dimm_label.attr.attr,
&dev_attr_legacy_ch11_dimm_label.attr.attr,
&dev_attr_legacy_ch12_dimm_label.attr.attr,
&dev_attr_legacy_ch13_dimm_label.attr.attr,
&dev_attr_legacy_ch14_dimm_label.attr.attr,
&dev_attr_legacy_ch15_dimm_label.attr.attr,
NULL
};
@@ -348,6 +360,14 @@ DEVICE_CHANNEL(ch10_ce_count, S_IRUGO,
channel_ce_count_show, NULL, 10);
DEVICE_CHANNEL(ch11_ce_count, S_IRUGO,
channel_ce_count_show, NULL, 11);
DEVICE_CHANNEL(ch12_ce_count, S_IRUGO,
channel_ce_count_show, NULL, 12);
DEVICE_CHANNEL(ch13_ce_count, S_IRUGO,
channel_ce_count_show, NULL, 13);
DEVICE_CHANNEL(ch14_ce_count, S_IRUGO,
channel_ce_count_show, NULL, 14);
DEVICE_CHANNEL(ch15_ce_count, S_IRUGO,
channel_ce_count_show, NULL, 15);
/* Total possible dynamic ce_count attribute file table */
static struct attribute *dynamic_csrow_ce_count_attr[] = {
@@ -363,6 +383,10 @@ static struct attribute *dynamic_csrow_ce_count_attr[] = {
&dev_attr_legacy_ch9_ce_count.attr.attr,
&dev_attr_legacy_ch10_ce_count.attr.attr,
&dev_attr_legacy_ch11_ce_count.attr.attr,
&dev_attr_legacy_ch12_ce_count.attr.attr,
&dev_attr_legacy_ch13_ce_count.attr.attr,
&dev_attr_legacy_ch14_ce_count.attr.attr,
&dev_attr_legacy_ch15_ce_count.attr.attr,
NULL
};


@@ -3,6 +3,7 @@
* GPIO library for the ACCES IDIO-16 family
* Copyright (C) 2022 William Breathitt Gray
*/
#include <linux/bitmap.h>
#include <linux/bits.h>
#include <linux/device.h>
#include <linux/err.h>
@@ -106,6 +107,7 @@ int devm_idio_16_regmap_register(struct device *const dev,
struct idio_16_data *data;
struct regmap_irq_chip *chip;
struct regmap_irq_chip_data *chip_data;
DECLARE_BITMAP(fixed_direction_output, IDIO_16_NGPIO);
if (!config->parent)
return -EINVAL;
@@ -163,6 +165,9 @@ int devm_idio_16_regmap_register(struct device *const dev,
gpio_config.irq_domain = regmap_irq_get_domain(chip_data);
gpio_config.reg_mask_xlate = idio_16_reg_mask_xlate;
bitmap_from_u64(fixed_direction_output, GENMASK_U64(15, 0));
gpio_config.fixed_direction_output = fixed_direction_output;
return PTR_ERR_OR_ZERO(devm_gpio_regmap_register(dev, &gpio_config));
}
EXPORT_SYMBOL_GPL(devm_idio_16_regmap_register);


@@ -29,6 +29,12 @@ struct gpio_regmap {
unsigned int reg_clr_base;
unsigned int reg_dir_in_base;
unsigned int reg_dir_out_base;
unsigned long *fixed_direction_output;
#ifdef CONFIG_REGMAP_IRQ
int regmap_irq_line;
struct regmap_irq_chip_data *irq_chip_data;
#endif
int (*reg_mask_xlate)(struct gpio_regmap *gpio, unsigned int base,
unsigned int offset, unsigned int *reg,
@@ -117,6 +123,13 @@ static int gpio_regmap_get_direction(struct gpio_chip *chip,
unsigned int base, val, reg, mask;
int invert, ret;
if (gpio->fixed_direction_output) {
if (test_bit(offset, gpio->fixed_direction_output))
return GPIO_LINE_DIRECTION_OUT;
else
return GPIO_LINE_DIRECTION_IN;
}
if (gpio->reg_dat_base && !gpio->reg_set_base)
return GPIO_LINE_DIRECTION_IN;
if (gpio->reg_set_base && !gpio->reg_dat_base)
@@ -203,6 +216,7 @@ EXPORT_SYMBOL_GPL(gpio_regmap_get_drvdata);
*/
struct gpio_regmap *gpio_regmap_register(const struct gpio_regmap_config *config)
{
struct irq_domain *irq_domain;
struct gpio_regmap *gpio;
struct gpio_chip *chip;
int ret;
@@ -274,12 +288,37 @@ struct gpio_regmap *gpio_regmap_register(const struct gpio_regmap_config *config
chip->direction_output = gpio_regmap_direction_output;
}
if (config->fixed_direction_output) {
gpio->fixed_direction_output = bitmap_alloc(chip->ngpio,
GFP_KERNEL);
if (!gpio->fixed_direction_output) {
ret = -ENOMEM;
goto err_free_gpio;
}
bitmap_copy(gpio->fixed_direction_output,
config->fixed_direction_output, chip->ngpio);
}
ret = gpiochip_add_data(chip, gpio);
if (ret < 0)
goto err_free_gpio;
goto err_free_bitmap;
if (config->irq_domain) {
ret = gpiochip_irqchip_add_domain(chip, config->irq_domain);
#ifdef CONFIG_REGMAP_IRQ
if (config->regmap_irq_chip) {
gpio->regmap_irq_line = config->regmap_irq_line;
ret = regmap_add_irq_chip_fwnode(dev_fwnode(config->parent), config->regmap,
config->regmap_irq_line, config->regmap_irq_flags,
0, config->regmap_irq_chip, &gpio->irq_chip_data);
if (ret)
goto err_free_bitmap;
irq_domain = regmap_irq_get_domain(gpio->irq_chip_data);
} else
#endif
irq_domain = config->irq_domain;
if (irq_domain) {
ret = gpiochip_irqchip_add_domain(chip, irq_domain);
if (ret)
goto err_remove_gpiochip;
}
@@ -288,6 +327,8 @@ struct gpio_regmap *gpio_regmap_register(const struct gpio_regmap_config *config
err_remove_gpiochip:
gpiochip_remove(chip);
err_free_bitmap:
bitmap_free(gpio->fixed_direction_output);
err_free_gpio:
kfree(gpio);
return ERR_PTR(ret);
@@ -300,7 +341,13 @@ EXPORT_SYMBOL_GPL(gpio_regmap_register);
*/
void gpio_regmap_unregister(struct gpio_regmap *gpio)
{
#ifdef CONFIG_REGMAP_IRQ
if (gpio->irq_chip_data)
regmap_del_irq_chip(gpio->regmap_irq_line, gpio->irq_chip_data);
#endif
gpiochip_remove(&gpio->gpio_chip);
bitmap_free(gpio->fixed_direction_output);
kfree(gpio);
}
EXPORT_SYMBOL_GPL(gpio_regmap_unregister);
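A hedged userspace model of the fixed-direction lookup added above: when a line's bit is set in the optional bitmap, the direction callback reports an output without consulting any register, which is how the idio-16 hunk earlier (seeding the bitmap with GENMASK_U64(15, 0)) gets all sixteen lines reported as outputs. The helper names here (test_bit64, NGPIO) are illustrative stand-ins, not gpiolib API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NGPIO 16

enum line_direction { LINE_DIRECTION_IN, LINE_DIRECTION_OUT };

static bool test_bit64(unsigned int nr, uint64_t map)
{
        return (map >> nr) & 1;
}

/* Mirrors the early return added to gpio_regmap_get_direction(). */
static enum line_direction get_direction(const uint64_t *fixed_out, unsigned int offset)
{
        if (fixed_out)
                return test_bit64(offset, *fixed_out) ? LINE_DIRECTION_OUT
                                                      : LINE_DIRECTION_IN;
        /* ...otherwise fall back to the reg_dat_base/reg_set_base heuristics... */
        return LINE_DIRECTION_IN;
}

int main(void)
{
        /* idio-16 style: lines 0..15 have a fixed output direction. */
        uint64_t fixed_direction_output = 0xffffULL;

        for (unsigned int offset = 0; offset < NGPIO; offset += 4)
                printf("line %2u: %s\n", offset,
                       get_direction(&fixed_direction_output, offset) ==
                                       LINE_DIRECTION_OUT ? "out" : "in");
        return 0;
}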


@@ -37,6 +37,7 @@ static const struct hvs_format {
u32 pixel_order_hvs5;
bool hvs5_only;
bool hvs6_only;
bool hvs6_swap_chroma_pointers;
} hvs_formats[] = {
{
.drm = DRM_FORMAT_XRGB8888,
@@ -109,6 +110,7 @@ static const struct hvs_format {
.hvs = HVS_PIXEL_FORMAT_YCBCR_YUV422_3PLANE,
.pixel_order = HVS_PIXEL_ORDER_XYCRCB,
.pixel_order_hvs5 = HVS_PIXEL_ORDER_XYCRCB,
.hvs6_swap_chroma_pointers = true,
},
{
.drm = DRM_FORMAT_YUV444,
@@ -121,6 +123,7 @@ static const struct hvs_format {
.hvs = HVS_PIXEL_FORMAT_YCBCR_YUV422_3PLANE,
.pixel_order = HVS_PIXEL_ORDER_XYCRCB,
.pixel_order_hvs5 = HVS_PIXEL_ORDER_XYCRCB,
.hvs6_swap_chroma_pointers = true,
},
{
.drm = DRM_FORMAT_YUV420,
@@ -133,6 +136,7 @@ static const struct hvs_format {
.hvs = HVS_PIXEL_FORMAT_YCBCR_YUV420_3PLANE,
.pixel_order = HVS_PIXEL_ORDER_XYCRCB,
.pixel_order_hvs5 = HVS_PIXEL_ORDER_XYCRCB,
.hvs6_swap_chroma_pointers = true,
},
{
.drm = DRM_FORMAT_NV12,
@@ -1870,6 +1874,14 @@ static u32 vc6_plane_get_csc_mode(struct vc4_plane_state *vc4_state)
return ret;
}
static int vc6_get_plane_idx(const struct hvs_format *format, int plane)
{
if (!plane || !format->hvs6_swap_chroma_pointers)
return plane;
return (plane == 1) ? 2 : 1;
}
static int vc6_plane_mode_set(struct drm_plane *plane,
struct drm_plane_state *state)
{
@@ -2162,7 +2174,8 @@ static int vc6_plane_mode_set(struct drm_plane *plane,
* TODO: This only covers Raster Scan Order planes
*/
for (i = 0; i < num_planes; i++) {
struct drm_gem_dma_object *bo = drm_fb_dma_get_gem_obj(fb, i);
struct drm_gem_dma_object *bo =
drm_fb_dma_get_gem_obj(fb, vc6_get_plane_idx(format, i));
dma_addr_t paddr = bo->dma_addr + fb->offsets[i] + offsets[i];
/* Pointer Word 0 */
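A standalone restatement of the vc6_get_plane_idx() helper for illustration: plane 0 (luma) passes through, and planes 1 and 2 trade places when the format is flagged for the swap.

#include <stdbool.h>
#include <stdio.h>

static int get_plane_idx(bool swap_chroma_pointers, int plane)
{
        if (!plane || !swap_chroma_pointers)
                return plane;           /* luma plane, or no swap needed */
        return (plane == 1) ? 2 : 1;    /* exchange the Cb and Cr pointers */
}

int main(void)
{
        for (int plane = 0; plane < 3; plane++)
                printf("fb plane %d -> buffer index %d\n",
                       plane, get_plane_idx(true, plane));
        return 0;
}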


@@ -4328,13 +4328,14 @@ static void intel_iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid,
break;
}
}
WARN_ON_ONCE(!dev_pasid);
spin_unlock_irqrestore(&dmar_domain->lock, flags);
cache_tag_unassign_domain(dmar_domain, dev, pasid);
domain_detach_iommu(dmar_domain, iommu);
intel_iommu_debugfs_remove_dev_pasid(dev_pasid);
kfree(dev_pasid);
if (!WARN_ON_ONCE(!dev_pasid)) {
intel_iommu_debugfs_remove_dev_pasid(dev_pasid);
kfree(dev_pasid);
}
intel_pasid_tear_down_entry(iommu, dev, pasid, false);
intel_drain_pasid_prq(dev, pasid);
}


@@ -72,7 +72,7 @@ MODULE_PARM_DESC(fstrobe_delay, "Set fstrobe delay from end all lines starting t
/* V_TIMING internal */
#define IMX477_REG_FRAME_LENGTH CCI_REG16(0x0340)
#define IMX477_VBLANK_MIN 4
#define IMX477_VBLANK_MIN 48
#define IMX477_FRAME_LENGTH_MAX 0xffdc
/* H_TIMING internal */
@@ -1268,7 +1268,8 @@ static int imx477_set_ctrl(struct v4l2_ctrl *ctrl)
break;
}
pm_runtime_put(&client->dev);
pm_runtime_mark_last_busy(&client->dev);
pm_runtime_put_autosuspend(&client->dev);
return ret;
}
@@ -1694,22 +1695,23 @@ static int imx477_set_stream(struct v4l2_subdev *sd, int enable)
}
if (enable) {
ret = pm_runtime_get_sync(&client->dev);
if (ret < 0) {
pm_runtime_put_noidle(&client->dev);
ret = pm_runtime_resume_and_get(&client->dev);
if (ret < 0)
goto err_unlock;
}
/*
* Apply default & customized values
* and then start streaming.
*/
ret = imx477_start_streaming(imx477);
if (ret)
goto err_rpm_put;
if (ret) {
pm_runtime_put_sync(&client->dev);
goto err_unlock;
}
} else {
imx477_stop_streaming(imx477);
pm_runtime_put(&client->dev);
pm_runtime_mark_last_busy(&client->dev);
pm_runtime_put_autosuspend(&client->dev);
}
imx477->streaming = enable;
@@ -1722,8 +1724,6 @@ static int imx477_set_stream(struct v4l2_subdev *sd, int enable)
return ret;
err_rpm_put:
pm_runtime_put(&client->dev);
err_unlock:
mutex_unlock(&imx477->mutex);
@@ -2183,10 +2183,16 @@ static int imx477_probe(struct i2c_client *client)
/* Initialize default format */
imx477_set_default_format(imx477);
/* Enable runtime PM and turn off the device */
/*
* Enable runtime PM with 5s autosuspend. As the device has been powered
* manually, mark it as active, and increase the usage count without
* resuming the device.
*/
pm_runtime_set_active(dev);
pm_runtime_get_noresume(dev);
pm_runtime_enable(dev);
pm_runtime_idle(dev);
pm_runtime_set_autosuspend_delay(dev, 5000);
pm_runtime_use_autosuspend(dev);
/* This needs the pm runtime to be registered. */
ret = imx477_init_controls(imx477);
@@ -2215,6 +2221,9 @@ static int imx477_probe(struct i2c_client *client)
goto error_media_entity;
}
pm_runtime_mark_last_busy(&client->dev);
pm_runtime_put_autosuspend(&client->dev);
return 0;
error_media_entity:
@@ -2225,7 +2234,7 @@ error_handler_free:
error_power_off:
pm_runtime_disable(&client->dev);
pm_runtime_set_suspended(&client->dev);
pm_runtime_put_noidle(&client->dev);
imx477_power_off(&client->dev);
return ret;


@@ -322,9 +322,9 @@ static bool bond_sk_check(struct bonding *bond)
}
}
static bool bond_xdp_check(struct bonding *bond)
bool bond_xdp_check(struct bonding *bond, int mode)
{
switch (BOND_MODE(bond)) {
switch (mode) {
case BOND_MODE_ROUNDROBIN:
case BOND_MODE_ACTIVEBACKUP:
return true;
@@ -1930,7 +1930,7 @@ void bond_xdp_set_features(struct net_device *bond_dev)
ASSERT_RTNL();
if (!bond_xdp_check(bond) || !bond_has_slaves(bond)) {
if (!bond_xdp_check(bond, BOND_MODE(bond)) || !bond_has_slaves(bond)) {
xdp_clear_features_flag(bond_dev);
return;
}
@@ -5699,8 +5699,11 @@ static int bond_xdp_set(struct net_device *dev, struct bpf_prog *prog,
ASSERT_RTNL();
if (!bond_xdp_check(bond))
if (!bond_xdp_check(bond, BOND_MODE(bond))) {
BOND_NL_ERR(dev, extack,
"No native XDP support for the current bonding mode");
return -EOPNOTSUPP;
}
old_prog = bond->xdp_prog;
bond->xdp_prog = prog;


@@ -868,6 +868,9 @@ static bool bond_set_xfrm_features(struct bonding *bond)
static int bond_option_mode_set(struct bonding *bond,
const struct bond_opt_value *newval)
{
if (bond->xdp_prog && !bond_xdp_check(bond, newval->value))
return -EOPNOTSUPP;
if (!bond_mode_uses_arp(newval->value)) {
if (bond->params.arp_interval) {
netdev_dbg(bond->dev, "%s mode is incompatible with arp monitoring, start mii monitoring\n",


@@ -450,8 +450,9 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
net_dev->hw_enc_features |= efx->type->offload_features;
net_dev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_SG |
NETIF_F_HIGHDMA | NETIF_F_ALL_TSO;
netif_set_tso_max_segs(net_dev,
ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS_DEFAULT);
nic_data = efx->nic_data;
netif_set_tso_max_size(efx->net_dev, nic_data->tso_max_payload_len);
netif_set_tso_max_segs(efx->net_dev, nic_data->tso_max_payload_num_segs);
efx->mdio.dev = net_dev;
rc = efx_ef100_init_datapath_caps(efx);
@@ -478,7 +479,6 @@ int ef100_probe_netdev(struct efx_probe_data *probe_data)
/* Don't fail init if RSS setup doesn't work. */
efx_mcdi_push_default_indir_table(efx, efx->n_rx_channels);
nic_data = efx->nic_data;
rc = ef100_get_mac_address(efx, net_dev->perm_addr, CLIENT_HANDLE_SELF,
efx->type->is_vf);
if (rc)


@@ -887,8 +887,7 @@ static int ef100_process_design_param(struct efx_nic *efx,
case ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS:
/* We always put HDR_NUM_SEGS=1 in our TSO descriptors */
if (!reader->value) {
netif_err(efx, probe, efx->net_dev,
"TSO_MAX_HDR_NUM_SEGS < 1\n");
pci_err(efx->pci_dev, "TSO_MAX_HDR_NUM_SEGS < 1\n");
return -EOPNOTSUPP;
}
return 0;
@@ -901,32 +900,28 @@ static int ef100_process_design_param(struct efx_nic *efx,
*/
if (!reader->value || reader->value > EFX_MIN_DMAQ_SIZE ||
EFX_MIN_DMAQ_SIZE % (u32)reader->value) {
netif_err(efx, probe, efx->net_dev,
"%s size granularity is %llu, can't guarantee safety\n",
reader->type == ESE_EF100_DP_GZ_RXQ_SIZE_GRANULARITY ? "RXQ" : "TXQ",
reader->value);
pci_err(efx->pci_dev,
"%s size granularity is %llu, can't guarantee safety\n",
reader->type == ESE_EF100_DP_GZ_RXQ_SIZE_GRANULARITY ? "RXQ" : "TXQ",
reader->value);
return -EOPNOTSUPP;
}
return 0;
case ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_LEN:
nic_data->tso_max_payload_len = min_t(u64, reader->value,
GSO_LEGACY_MAX_SIZE);
netif_set_tso_max_size(efx->net_dev,
nic_data->tso_max_payload_len);
return 0;
case ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_NUM_SEGS:
nic_data->tso_max_payload_num_segs = min_t(u64, reader->value, 0xffff);
netif_set_tso_max_segs(efx->net_dev,
nic_data->tso_max_payload_num_segs);
return 0;
case ESE_EF100_DP_GZ_TSO_MAX_NUM_FRAMES:
nic_data->tso_max_frames = min_t(u64, reader->value, 0xffff);
return 0;
case ESE_EF100_DP_GZ_COMPAT:
if (reader->value) {
netif_err(efx, probe, efx->net_dev,
"DP_COMPAT has unknown bits %#llx, driver not compatible with this hw\n",
reader->value);
pci_err(efx->pci_dev,
"DP_COMPAT has unknown bits %#llx, driver not compatible with this hw\n",
reader->value);
return -EOPNOTSUPP;
}
return 0;
@@ -946,10 +941,10 @@ static int ef100_process_design_param(struct efx_nic *efx,
* So the value of this shouldn't matter.
*/
if (reader->value != ESE_EF100_DP_GZ_VI_STRIDES_DEFAULT)
netif_dbg(efx, probe, efx->net_dev,
"NIC has other than default VI_STRIDES (mask "
"%#llx), early probing might use wrong one\n",
reader->value);
pci_dbg(efx->pci_dev,
"NIC has other than default VI_STRIDES (mask "
"%#llx), early probing might use wrong one\n",
reader->value);
return 0;
case ESE_EF100_DP_GZ_RX_MAX_RUNT:
/* Driver doesn't look at L2_STATUS:LEN_ERR bit, so we don't
@@ -961,9 +956,9 @@ static int ef100_process_design_param(struct efx_nic *efx,
/* Host interface says "Drivers should ignore design parameters
* that they do not recognise."
*/
netif_dbg(efx, probe, efx->net_dev,
"Ignoring unrecognised design parameter %u\n",
reader->type);
pci_dbg(efx->pci_dev,
"Ignoring unrecognised design parameter %u\n",
reader->type);
return 0;
}
}
@@ -999,13 +994,13 @@ static int ef100_check_design_params(struct efx_nic *efx)
*/
if (reader.state != EF100_TLV_TYPE) {
if (reader.state == EF100_TLV_TYPE_CONT)
netif_err(efx, probe, efx->net_dev,
"truncated design parameter (incomplete type %u)\n",
reader.type);
pci_err(efx->pci_dev,
"truncated design parameter (incomplete type %u)\n",
reader.type);
else
netif_err(efx, probe, efx->net_dev,
"truncated design parameter %u\n",
reader.type);
pci_err(efx->pci_dev,
"truncated design parameter %u\n",
reader.type);
rc = -EIO;
}
out:


@@ -6733,15 +6733,15 @@ static struct ath12k *ath12k_mac_assign_vif_to_vdev(struct ieee80211_hw *hw,
mutex_lock(&ar->conf_mutex);
if (arvif->is_created)
goto flush;
if (vif->type == NL80211_IFTYPE_AP &&
ar->num_peers > (ar->max_num_peers - 1)) {
ath12k_warn(ab, "failed to create vdev due to insufficient peer entry resource in firmware\n");
goto unlock;
}
if (arvif->is_created)
goto flush;
if (ar->num_created_vdevs > (TARGET_NUM_VDEVS - 1)) {
ath12k_warn(ab, "failed to create vdev, reached max vdev limit %d\n",
TARGET_NUM_VDEVS);


@@ -393,6 +393,7 @@ static int vc_sm_dma_buf_attach(struct dma_buf *dmabuf,
ret = sg_alloc_table(sgt, buf->alloc.sg_table->orig_nents, GFP_KERNEL);
if (ret) {
kfree(a);
mutex_unlock(&buf->lock);
return -ENOMEM;
}
@@ -1223,7 +1224,7 @@ error:
dma_buf_put(dmabuf);
} else {
/* No dmabuf, therefore just free the buffer here */
if (buffer->cookie)
if (buffer && buffer->cookie)
dma_free_coherent(&sm_state->device->dev, buffer->size,
buffer->cookie, buffer->dma_addr);
kfree(buffer);


@@ -2100,10 +2100,10 @@ static int btrfs_replay_log(struct btrfs_fs_info *fs_info,
/* returns with log_tree_root freed on success */
ret = btrfs_recover_log_trees(log_tree_root);
btrfs_put_root(log_tree_root);
if (ret) {
btrfs_handle_fs_error(fs_info, ret,
"Failed to recover log tree");
btrfs_put_root(log_tree_root);
return ret;
}


@@ -4299,7 +4299,8 @@ static int prepare_allocation_clustered(struct btrfs_fs_info *fs_info,
}
static int prepare_allocation_zoned(struct btrfs_fs_info *fs_info,
struct find_free_extent_ctl *ffe_ctl)
struct find_free_extent_ctl *ffe_ctl,
struct btrfs_space_info *space_info)
{
if (ffe_ctl->for_treelog) {
spin_lock(&fs_info->treelog_bg_lock);
@@ -4323,6 +4324,7 @@ static int prepare_allocation_zoned(struct btrfs_fs_info *fs_info,
u64 avail = block_group->zone_capacity - block_group->alloc_offset;
if (block_group_bits(block_group, ffe_ctl->flags) &&
block_group->space_info == space_info &&
avail >= ffe_ctl->num_bytes) {
ffe_ctl->hint_byte = block_group->start;
break;
@@ -4344,7 +4346,7 @@ static int prepare_allocation(struct btrfs_fs_info *fs_info,
return prepare_allocation_clustered(fs_info, ffe_ctl,
space_info, ins);
case BTRFS_EXTENT_ALLOC_ZONED:
return prepare_allocation_zoned(fs_info, ffe_ctl);
return prepare_allocation_zoned(fs_info, ffe_ctl, space_info);
default:
BUG();
}


@@ -3174,9 +3174,10 @@ int btrfs_finish_one_ordered(struct btrfs_ordered_extent *ordered_extent)
goto out;
}
if (btrfs_is_zoned(fs_info))
btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr,
ordered_extent->disk_num_bytes);
ret = btrfs_zone_finish_endio(fs_info, ordered_extent->disk_bytenr,
ordered_extent->disk_num_bytes);
if (ret)
goto out;
if (test_bit(BTRFS_ORDERED_TRUNCATED, &ordered_extent->flags)) {
truncated = true;


@@ -1270,8 +1270,7 @@ static void scrub_throttle_dev_io(struct scrub_ctx *sctx, struct btrfs_device *d
* Slice is divided into intervals when the IO is submitted, adjust by
* bwlimit and maximum of 64 intervals.
*/
div = max_t(u32, 1, (u32)(bwlimit / (16 * 1024 * 1024)));
div = min_t(u32, 64, div);
div = clamp(bwlimit / (16 * 1024 * 1024), 1, 64);
/* Start new epoch, set deadline */
now = ktime_get();
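A quick userspace check that the clamp() form computes the same throttling divisor as the old max_t()/min_t() pair; the sample bwlimit values are arbitrary.

#include <stdint.h>
#include <stdio.h>

static uint32_t div_old(uint64_t bwlimit)
{
        uint32_t div = (uint32_t)(bwlimit / (16 * 1024 * 1024));

        if (div < 1)
                div = 1;
        return div > 64 ? 64 : div;
}

static uint32_t div_new(uint64_t bwlimit)
{
        uint64_t div = bwlimit / (16 * 1024 * 1024);

        return div < 1 ? 1 : (div > 64 ? 64 : (uint32_t)div);   /* clamp(div, 1, 64) */
}

int main(void)
{
        const uint64_t limits[] = { 1, 16ULL << 20, 100ULL << 20, 4ULL << 30 };

        for (unsigned int i = 0; i < sizeof(limits) / sizeof(limits[0]); i++)
                printf("bwlimit %llu -> old %u, new %u\n",
                       (unsigned long long)limits[i],
                       div_old(limits[i]), div_new(limits[i]));
        return 0;
}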


@@ -1810,7 +1810,7 @@ static noinline int create_pending_snapshot(struct btrfs_trans_handle *trans,
}
/* see comments in should_cow_block() */
set_bit(BTRFS_ROOT_FORCE_COW, &root->state);
smp_wmb();
smp_mb__after_atomic();
btrfs_set_root_node(new_root_item, tmp);
/* record when the snapshot was created in key.offset */


@@ -183,6 +183,7 @@ static bool check_prev_ino(struct extent_buffer *leaf,
/* Only these key->types needs to be checked */
ASSERT(key->type == BTRFS_XATTR_ITEM_KEY ||
key->type == BTRFS_INODE_REF_KEY ||
key->type == BTRFS_INODE_EXTREF_KEY ||
key->type == BTRFS_DIR_INDEX_KEY ||
key->type == BTRFS_DIR_ITEM_KEY ||
key->type == BTRFS_EXTENT_DATA_KEY);
@@ -1770,6 +1771,39 @@ static int check_inode_ref(struct extent_buffer *leaf,
return 0;
}
static int check_inode_extref(struct extent_buffer *leaf,
struct btrfs_key *key, struct btrfs_key *prev_key,
int slot)
{
unsigned long ptr = btrfs_item_ptr_offset(leaf, slot);
unsigned long end = ptr + btrfs_item_size(leaf, slot);
if (unlikely(!check_prev_ino(leaf, key, slot, prev_key)))
return -EUCLEAN;
while (ptr < end) {
struct btrfs_inode_extref *extref = (struct btrfs_inode_extref *)ptr;
u16 namelen;
if (unlikely(ptr + sizeof(*extref) > end)) {
inode_ref_err(leaf, slot,
"inode extref overflow, ptr %lu end %lu inode_extref size %zu",
ptr, end, sizeof(*extref));
return -EUCLEAN;
}
namelen = btrfs_inode_extref_name_len(leaf, extref);
if (unlikely(ptr + sizeof(*extref) + namelen > end)) {
inode_ref_err(leaf, slot,
"inode extref overflow, ptr %lu end %lu namelen %u",
ptr, end, namelen);
return -EUCLEAN;
}
ptr += sizeof(*extref) + namelen;
}
return 0;
}
static int check_raid_stripe_extent(const struct extent_buffer *leaf,
const struct btrfs_key *key, int slot)
{
@@ -1881,6 +1915,9 @@ static enum btrfs_tree_block_status check_leaf_item(struct extent_buffer *leaf,
case BTRFS_INODE_REF_KEY:
ret = check_inode_ref(leaf, key, prev_key, slot);
break;
case BTRFS_INODE_EXTREF_KEY:
ret = check_inode_extref(leaf, key, prev_key, slot);
break;
case BTRFS_BLOCK_GROUP_ITEM_KEY:
ret = check_block_group_item(leaf, key, slot);
break;
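A hedged userspace sketch of the bounds-checked walk that check_inode_extref() performs: each record is a fixed header plus a variable-length name, and both the header and the name must fit inside the item before the cursor advances. The struct layout here is illustrative, not btrfs's on-disk format.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct extref_hdr {
        uint64_t parent_objectid;
        uint64_t index;
        uint16_t name_len;
} __attribute__((packed));

static int walk_extrefs(const uint8_t *item, size_t size)
{
        size_t ptr = 0;

        while (ptr < size) {
                struct extref_hdr ref;

                if (ptr + sizeof(ref) > size)
                        return -1;      /* header would overflow the item */
                memcpy(&ref, item + ptr, sizeof(ref));
                if (ptr + sizeof(ref) + ref.name_len > size)
                        return -1;      /* name would overflow the item */
                printf("ref to parent %llu, name_len %u\n",
                       (unsigned long long)ref.parent_objectid,
                       (unsigned)ref.name_len);
                ptr += sizeof(ref) + ref.name_len;
        }
        return 0;
}

int main(void)
{
        uint8_t item[sizeof(struct extref_hdr) + 3];
        struct extref_hdr ref = { .parent_objectid = 256, .index = 2, .name_len = 3 };

        memcpy(item, &ref, sizeof(ref));
        memcpy(item + sizeof(ref), "foo", 3);
        printf("valid item   -> %d\n", walk_extrefs(item, sizeof(item)));

        ref.name_len = 200;     /* corrupt: claims a name longer than the item */
        memcpy(item, &ref, sizeof(ref));
        printf("corrupt item -> %d\n", walk_extrefs(item, sizeof(item)));
        return 0;
}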


@@ -350,6 +350,7 @@ static int process_one_buffer(struct btrfs_root *log,
struct extent_buffer *eb,
struct walk_control *wc, u64 gen, int level)
{
struct btrfs_trans_handle *trans = wc->trans;
struct btrfs_fs_info *fs_info = log->fs_info;
int ret = 0;
@@ -364,18 +365,29 @@ static int process_one_buffer(struct btrfs_root *log,
};
ret = btrfs_read_extent_buffer(eb, &check);
if (ret)
if (ret) {
if (trans)
btrfs_abort_transaction(trans, ret);
else
btrfs_handle_fs_error(fs_info, ret, NULL);
return ret;
}
}
if (wc->pin) {
ret = btrfs_pin_extent_for_log_replay(wc->trans, eb);
if (ret)
ASSERT(trans != NULL);
ret = btrfs_pin_extent_for_log_replay(trans, eb);
if (ret) {
btrfs_abort_transaction(trans, ret);
return ret;
}
if (btrfs_buffer_uptodate(eb, gen, 0) &&
btrfs_header_level(eb) == 0)
btrfs_header_level(eb) == 0) {
ret = btrfs_exclude_logged_extents(eb);
if (ret)
btrfs_abort_transaction(trans, ret);
}
}
return ret;
}
@@ -1766,6 +1778,8 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans,
else
inc_nlink(vfs_inode);
ret = btrfs_update_inode(trans, inode);
if (ret)
btrfs_abort_transaction(trans, ret);
} else if (ret == -EEXIST) {
ret = 0;
}
@@ -2431,15 +2445,13 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
int i;
int ret;
if (level != 0)
return 0;
ret = btrfs_read_extent_buffer(eb, &check);
if (ret)
return ret;
level = btrfs_header_level(eb);
if (level != 0)
return 0;
path = btrfs_alloc_path();
if (!path)
return -ENOMEM;
@@ -2612,15 +2624,24 @@ static int unaccount_log_buffer(struct btrfs_fs_info *fs_info, u64 start)
static int clean_log_buffer(struct btrfs_trans_handle *trans,
struct extent_buffer *eb)
{
int ret;
btrfs_tree_lock(eb);
btrfs_clear_buffer_dirty(trans, eb);
wait_on_extent_buffer_writeback(eb);
btrfs_tree_unlock(eb);
if (trans)
return btrfs_pin_reserved_extent(trans, eb);
if (trans) {
ret = btrfs_pin_reserved_extent(trans, eb);
if (ret)
btrfs_abort_transaction(trans, ret);
return ret;
}
return unaccount_log_buffer(eb->fs_info, eb->start);
ret = unaccount_log_buffer(eb->fs_info, eb->start);
if (ret)
btrfs_handle_fs_error(eb->fs_info, ret, NULL);
return ret;
}
static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
@@ -2656,8 +2677,14 @@ static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
next = btrfs_find_create_tree_block(fs_info, bytenr,
btrfs_header_owner(cur),
*level - 1);
if (IS_ERR(next))
return PTR_ERR(next);
if (IS_ERR(next)) {
ret = PTR_ERR(next);
if (trans)
btrfs_abort_transaction(trans, ret);
else
btrfs_handle_fs_error(fs_info, ret, NULL);
return ret;
}
if (*level == 1) {
ret = wc->process_func(root, next, wc, ptr_gen,
@@ -2672,6 +2699,10 @@ static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
ret = btrfs_read_extent_buffer(next, &check);
if (ret) {
free_extent_buffer(next);
if (trans)
btrfs_abort_transaction(trans, ret);
else
btrfs_handle_fs_error(fs_info, ret, NULL);
return ret;
}
@@ -2687,6 +2718,10 @@ static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
ret = btrfs_read_extent_buffer(next, &check);
if (ret) {
free_extent_buffer(next);
if (trans)
btrfs_abort_transaction(trans, ret);
else
btrfs_handle_fs_error(fs_info, ret, NULL);
return ret;
}
@@ -7422,7 +7457,6 @@ next:
log_root_tree->log_root = NULL;
clear_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags);
btrfs_put_root(log_root_tree);
return 0;
error:


@@ -2384,16 +2384,17 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
return ret;
}
void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 length)
int btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 length)
{
struct btrfs_block_group *block_group;
u64 min_alloc_bytes;
if (!btrfs_is_zoned(fs_info))
return;
return 0;
block_group = btrfs_lookup_block_group(fs_info, logical);
ASSERT(block_group);
if (WARN_ON_ONCE(!block_group))
return -ENOENT;
/* No MIXED_BG on zoned btrfs. */
if (block_group->flags & BTRFS_BLOCK_GROUP_DATA)
@@ -2410,6 +2411,7 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len
out:
btrfs_put_block_group(block_group);
return 0;
}
static void btrfs_zone_finish_endio_workfn(struct work_struct *work)


@@ -83,7 +83,7 @@ int btrfs_sync_zone_write_pointer(struct btrfs_device *tgt_dev, u64 logical,
bool btrfs_zone_activate(struct btrfs_block_group *block_group);
int btrfs_zone_finish(struct btrfs_block_group *block_group);
bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags);
void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical,
int btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical,
u64 length);
void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
struct extent_buffer *eb);
@@ -232,8 +232,11 @@ static inline bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices,
return true;
}
static inline void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info,
u64 logical, u64 length) { }
static inline int btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info,
u64 logical, u64 length)
{
return 0;
}
static inline void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
struct extent_buffer *eb) { }


@@ -1836,18 +1836,20 @@ static int f2fs_expand_inode_data(struct inode *inode, loff_t offset,
map.m_len = sec_blks;
next_alloc:
f2fs_down_write(&sbi->pin_sem);
if (has_not_enough_free_secs(sbi, 0, f2fs_sb_has_blkzoned(sbi) ?
ZONED_PIN_SEC_REQUIRED_COUNT :
GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) {
f2fs_down_write(&sbi->gc_lock);
stat_inc_gc_call_count(sbi, FOREGROUND);
err = f2fs_gc(sbi, &gc_control);
if (err && err != -ENODATA)
if (err && err != -ENODATA) {
f2fs_up_write(&sbi->pin_sem);
goto out_err;
}
}
f2fs_down_write(&sbi->pin_sem);
err = f2fs_allocate_pinning_section(sbi);
if (err) {
f2fs_up_write(&sbi->pin_sem);


@@ -2749,7 +2749,7 @@ find_other_zone:
MAIN_SECS(sbi));
if (secno >= MAIN_SECS(sbi)) {
ret = -ENOSPC;
f2fs_bug_on(sbi, 1);
f2fs_bug_on(sbi, !pinning);
goto out_unlock;
}
}
@@ -2795,7 +2795,7 @@ got_it:
out_unlock:
spin_unlock(&free_i->segmap_lock);
if (ret == -ENOSPC)
if (ret == -ENOSPC && !pinning)
f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_NO_SEGMENT);
return ret;
}
@@ -2868,6 +2868,13 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)
return curseg->segno;
}
static void reset_curseg_fields(struct curseg_info *curseg)
{
curseg->inited = false;
curseg->segno = NULL_SEGNO;
curseg->next_segno = 0;
}
/*
* Allocate a current working segment.
* This function always allocates a free segment in LFS manner.
@@ -2886,7 +2893,7 @@ static int new_curseg(struct f2fs_sb_info *sbi, int type, bool new_sec)
ret = get_new_segment(sbi, &segno, new_sec, pinning);
if (ret) {
if (ret == -ENOSPC)
curseg->segno = NULL_SEGNO;
reset_curseg_fields(curseg);
return ret;
}
@@ -3640,13 +3647,6 @@ static void f2fs_randomize_chunk(struct f2fs_sb_info *sbi,
get_random_u32_inclusive(1, sbi->max_fragment_hole);
}
static void reset_curseg_fields(struct curseg_info *curseg)
{
curseg->inited = false;
curseg->segno = NULL_SEGNO;
curseg->next_segno = 0;
}
int f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
block_t old_blkaddr, block_t *new_blkaddr,
struct f2fs_summary *sum, int type,


@@ -527,7 +527,7 @@ static inline void audit_log_kern_module(const char *name)
static inline void audit_fanotify(u32 response, struct fanotify_response_info_audit_rule *friar)
{
if (!audit_dummy_context())
if (audit_enabled)
__audit_fanotify(response, friar);
}


@@ -8,7 +8,6 @@
#include <uapi/linux/kernel.h>
#define BITS_PER_TYPE(type) (sizeof(type) * BITS_PER_BYTE)
#define BITS_TO_LONGS(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(long))
#define BITS_TO_U64(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(u64))
#define BITS_TO_U32(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(u32))


@@ -12,6 +12,7 @@
#define BIT_ULL_MASK(nr) (ULL(1) << ((nr) % BITS_PER_LONG_LONG))
#define BIT_ULL_WORD(nr) ((nr) / BITS_PER_LONG_LONG)
#define BITS_PER_BYTE 8
#define BITS_PER_TYPE(type) (sizeof(type) * BITS_PER_BYTE)
/*
* Create a contiguous bitmask starting at bit position @l and ending at
@@ -19,17 +20,50 @@
* GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
*/
#if !defined(__ASSEMBLY__)
/*
* Missing asm support
*
* GENMASK_U*() depend on BITS_PER_TYPE() which relies on sizeof(),
* something not available in asm. Nevertheless, fixed width integers is a C
* concept. Assembly code can rely on the long and long long versions instead.
*/
#include <linux/build_bug.h>
#include <linux/overflow.h>
#define GENMASK_INPUT_CHECK(h, l) \
(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
__is_constexpr((l) > (h)), (l) > (h), 0)))
#else
/*
* Generate a mask for the specified type @t. Additional checks are made to
* guarantee the value returned fits in that type, relying on
* -Wshift-count-overflow compiler check to detect incompatible arguments.
* For example, all these create build errors or warnings:
*
* - GENMASK(15, 20): wrong argument order
* - GENMASK(72, 15): doesn't fit unsigned long
* - GENMASK_U32(33, 15): doesn't fit in a u32
*/
#define GENMASK_TYPE(t, h, l) \
((t)(GENMASK_INPUT_CHECK(h, l) + \
(type_max(t) << (l) & \
type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h)))))
#define GENMASK_U8(h, l) GENMASK_TYPE(u8, h, l)
#define GENMASK_U16(h, l) GENMASK_TYPE(u16, h, l)
#define GENMASK_U32(h, l) GENMASK_TYPE(u32, h, l)
#define GENMASK_U64(h, l) GENMASK_TYPE(u64, h, l)
#else /* defined(__ASSEMBLY__) */
/*
* BUILD_BUG_ON_ZERO is not available in h files included from asm files,
* disable the input check if that is the case.
*/
#define GENMASK_INPUT_CHECK(h, l) 0
#endif
#endif /* !defined(__ASSEMBLY__) */
#define GENMASK(h, l) \
(GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l))
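A hedged userspace re-implementation of the fixed-width GENMASK_U*() idea above: the mask is built from the type's all-ones value shifted from both ends, so an out-of-range bit position becomes an invalid shift count at compile time. The GENMASK_INPUT_CHECK() argument-order check is left out here for brevity.

#include <inttypes.h>
#include <stdio.h>

#define BITS_PER_BYTE           8
#define BITS_PER_TYPE(t)        (sizeof(t) * BITS_PER_BYTE)
#define TYPE_MAX(t)             ((t)~(t)0)      /* unsigned types only */

#define GENMASK_TYPE(t, h, l) \
        ((t)(TYPE_MAX(t) << (l) & \
             TYPE_MAX(t) >> (BITS_PER_TYPE(t) - 1 - (h))))

#define GENMASK_U8(h, l)        GENMASK_TYPE(uint8_t, h, l)
#define GENMASK_U32(h, l)       GENMASK_TYPE(uint32_t, h, l)
#define GENMASK_U64(h, l)       GENMASK_TYPE(uint64_t, h, l)

int main(void)
{
        printf("GENMASK_U8(7, 4)    = %#x\n", (unsigned)GENMASK_U8(7, 4));     /* 0xf0 */
        printf("GENMASK_U32(15, 8)  = %#" PRIx32 "\n", GENMASK_U32(15, 8));    /* 0xff00 */
        printf("GENMASK_U64(39, 21) = %#" PRIx64 "\n", GENMASK_U64(39, 21));   /* 0xffffe00000 */
        /* GENMASK_U32(33, 15) would not build cleanly: the shift count goes out of range. */
        return 0;
}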


@@ -37,9 +37,18 @@ struct regmap;
* offset to a register/bitmask pair. If not
* given the default gpio_regmap_simple_xlate()
* is used.
* @fixed_direction_output:
* (Optional) Bitmap representing the fixed direction of
* the GPIO lines. Useful when there are GPIO lines with a
* fixed direction mixed together in the same register.
* @drvdata: (Optional) Pointer to driver specific data which is
* not used by gpio-remap but is provided "as is" to the
* driver callback(s).
* @regmap_irq_chip: (Optional) Pointer on an regmap_irq_chip structure. If
* set, a regmap-irq device will be created and the IRQ
* domain will be set accordingly.
* @regmap_irq_line (Optional) The IRQ the device uses to signal interrupts.
* @regmap_irq_flags (Optional) The IRQF_ flags to use for the interrupt.
*
* The ->reg_mask_xlate translates a given base address and GPIO offset to
* register and mask pair. The base address is one of the given register
@@ -77,6 +86,13 @@ struct gpio_regmap_config {
int reg_stride;
int ngpio_per_reg;
struct irq_domain *irq_domain;
unsigned long *fixed_direction_output;
#ifdef CONFIG_REGMAP_IRQ
struct regmap_irq_chip *regmap_irq_chip;
int regmap_irq_line;
unsigned long regmap_irq_flags;
#endif
int (*reg_mask_xlate)(struct gpio_regmap *gpio, unsigned int base,
unsigned int offset, unsigned int *reg,


@@ -695,6 +695,7 @@ void bond_debug_register(struct bonding *bond);
void bond_debug_unregister(struct bonding *bond);
void bond_debug_reregister(struct bonding *bond);
const char *bond_mode_name(int mode);
bool bond_xdp_check(struct bonding *bond, int mode);
void bond_setup(struct net_device *bond_dev);
unsigned int bond_get_num_tx_queues(void);
int bond_netlink_init(void);


@@ -114,7 +114,6 @@ struct qdisc_rate_table *qdisc_get_rtab(struct tc_ratespec *r,
struct netlink_ext_ack *extack);
void qdisc_put_rtab(struct qdisc_rate_table *tab);
void qdisc_put_stab(struct qdisc_size_table *tab);
void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc);
bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
struct net_device *dev, struct netdev_queue *txq,
spinlock_t *root_lock, bool validate);
@@ -290,4 +289,28 @@ static inline bool tc_qdisc_stats_dump(struct Qdisc *sch,
return true;
}
static inline void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc)
{
if (!(qdisc->flags & TCQ_F_WARN_NONWC)) {
pr_warn("%s: %s qdisc %X: is non-work-conserving?\n",
txt, qdisc->ops->id, qdisc->handle >> 16);
qdisc->flags |= TCQ_F_WARN_NONWC;
}
}
static inline unsigned int qdisc_peek_len(struct Qdisc *sch)
{
struct sk_buff *skb;
unsigned int len;
skb = sch->ops->peek(sch);
if (unlikely(skb == NULL)) {
qdisc_warn_nonwc("qdisc_peek_len", sch);
return 0;
}
len = qdisc_pkt_len(skb);
return len;
}
#endif


@@ -1679,11 +1679,7 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
if (prstate_housekeeping_conflict(new_prs, xcpus))
return PERR_HKEEPING;
/*
* A parent can be left with no CPU as long as there is no
* task directly associated with the parent partition.
*/
if (nocpu)
if (tasks_nocpu_error(parent, cs, xcpus))
return PERR_NOCPUS;
deleting = cpumask_and(tmp->delmask, xcpus, parent->effective_xcpus);


@@ -223,6 +223,10 @@ get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
struct perf_callchain_entry_ctx ctx;
int rctx, start_entry_idx;
/* crosstask is not supported for user stacks */
if (crosstask && user && !kernel)
return NULL;
entry = get_callchain_entry(&rctx);
if (!entry)
return NULL;
@@ -239,18 +243,15 @@ get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
perf_callchain_kernel(&ctx, regs);
}
if (user) {
if (user && !crosstask) {
if (!user_mode(regs)) {
if (current->mm)
regs = task_pt_regs(current);
else
if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
regs = NULL;
else
regs = task_pt_regs(current);
}
if (regs) {
if (crosstask)
goto exit_put;
if (add_mark)
perf_callchain_store_context(&ctx, PERF_CONTEXT_USER);
@@ -260,7 +261,6 @@ get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
}
}
exit_put:
put_callchain_entry(rctx);
return entry;


@@ -7095,7 +7095,7 @@ static void perf_sample_regs_user(struct perf_regs *regs_user,
if (user_mode(regs)) {
regs_user->abi = perf_reg_abi(current);
regs_user->regs = regs;
} else if (!(current->flags & PF_KTHREAD)) {
} else if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER))) {
perf_get_regs_user(regs_user, regs);
} else {
regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
@@ -7735,7 +7735,7 @@ static u64 perf_virt_to_phys(u64 virt)
* Try IRQ-safe get_user_page_fast_only first.
* If failed, leave phys_addr as 0.
*/
if (current->mm != NULL) {
if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER))) {
struct page *p;
pagefault_disable();
@@ -7847,7 +7847,8 @@ struct perf_callchain_entry *
perf_callchain(struct perf_event *event, struct pt_regs *regs)
{
bool kernel = !event->attr.exclude_callchain_kernel;
bool user = !event->attr.exclude_callchain_user;
bool user = !event->attr.exclude_callchain_user &&
!(current->flags & (PF_KTHREAD | PF_USER_WORKER));
/* Disallow cross-task user callchains. */
bool crosstask = event->ctx->task && event->ctx->task != current;
const u32 max_stack = event->attr.sample_max_stack;


@@ -733,6 +733,26 @@ out:
}
#ifdef SECCOMP_ARCH_NATIVE
static bool seccomp_uprobe_exception(struct seccomp_data *sd)
{
#if defined __NR_uretprobe || defined __NR_uprobe
#ifdef SECCOMP_ARCH_COMPAT
if (sd->arch == SECCOMP_ARCH_NATIVE)
#endif
{
#ifdef __NR_uretprobe
if (sd->nr == __NR_uretprobe)
return true;
#endif
#ifdef __NR_uprobe
if (sd->nr == __NR_uprobe)
return true;
#endif
}
#endif
return false;
}
/**
* seccomp_is_const_allow - check if filter is constant allow with given data
* @fprog: The BPF programs
@@ -750,13 +770,8 @@ static bool seccomp_is_const_allow(struct sock_fprog_kern *fprog,
return false;
/* Our single exception to filtering. */
#ifdef __NR_uretprobe
#ifdef SECCOMP_ARCH_COMPAT
if (sd->arch == SECCOMP_ARCH_NATIVE)
#endif
if (sd->nr == __NR_uretprobe)
return true;
#endif
if (seccomp_uprobe_exception(sd))
return true;
for (pc = 0; pc < fprog->len; pc++) {
struct sock_filter *insn = &fprog->filter[pc];
@@ -1034,6 +1049,9 @@ static const int mode1_syscalls[] = {
__NR_seccomp_read, __NR_seccomp_write, __NR_seccomp_exit, __NR_seccomp_sigreturn,
#ifdef __NR_uretprobe
__NR_uretprobe,
#endif
#ifdef __NR_uprobe
__NR_uprobe,
#endif
-1, /* negative terminated */
};


@@ -618,6 +618,10 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
}
subflow:
/* No need to try establishing subflows to remote id0 if not allowed */
if (mptcp_pm_add_addr_c_flag_case(msk))
goto exit;
/* check if should create a new subflow */
while (msk->pm.local_addr_used < local_addr_max &&
msk->pm.subflows < subflows_max) {
@@ -649,6 +653,8 @@ subflow:
__mptcp_subflow_connect(sk, &local, &addrs[i]);
spin_lock_bh(&msk->pm.lock);
}
exit:
mptcp_pm_nl_check_work_pending(msk);
}


@@ -599,16 +599,6 @@ out:
qdisc_skb_cb(skb)->pkt_len = pkt_len;
}
void qdisc_warn_nonwc(const char *txt, struct Qdisc *qdisc)
{
if (!(qdisc->flags & TCQ_F_WARN_NONWC)) {
pr_warn("%s: %s qdisc %X: is non-work-conserving?\n",
txt, qdisc->ops->id, qdisc->handle >> 16);
qdisc->flags |= TCQ_F_WARN_NONWC;
}
}
EXPORT_SYMBOL(qdisc_warn_nonwc);
static enum hrtimer_restart qdisc_watchdog(struct hrtimer *timer)
{
struct qdisc_watchdog *wd = container_of(timer, struct qdisc_watchdog,

View File

@@ -835,22 +835,6 @@ update_vf(struct hfsc_class *cl, unsigned int len, u64 cur_time)
}
}
static unsigned int
qdisc_peek_len(struct Qdisc *sch)
{
struct sk_buff *skb;
unsigned int len;
skb = sch->ops->peek(sch);
if (unlikely(skb == NULL)) {
qdisc_warn_nonwc("qdisc_peek_len", sch);
return 0;
}
len = qdisc_pkt_len(skb);
return len;
}
static void
hfsc_adjust_levels(struct hfsc_class *cl)
{

View File

@@ -1002,7 +1002,7 @@ static struct sk_buff *agg_dequeue(struct qfq_aggregate *agg,
if (cl->qdisc->q.qlen == 0) /* no more packets, remove from list */
list_del_init(&cl->alist);
else if (cl->deficit < qdisc_pkt_len(cl->qdisc->ops->peek(cl->qdisc))) {
else if (cl->deficit < qdisc_peek_len(cl->qdisc)) {
cl->deficit += agg->lmax;
list_move_tail(&cl->alist, &agg->active);
}


@@ -4234,6 +4234,8 @@ static void cfg80211_check_and_end_cac(struct cfg80211_registered_device *rdev)
struct wireless_dev *wdev;
unsigned int link_id;
wiphy_lock(&rdev->wiphy);
/* If we finished CAC or received radar, we should end any
* CAC running on the same channels.
* the check !cfg80211_chandef_dfs_usable contain 2 options:
@@ -4258,6 +4260,8 @@ static void cfg80211_check_and_end_cac(struct cfg80211_registered_device *rdev)
rdev_end_cac(rdev, wdev->netdev, link_id);
}
}
wiphy_unlock(&rdev->wiphy);
}
void regulatory_propagate_dfs_state(struct wiphy *wiphy,


@@ -56,7 +56,8 @@ struct qmap {
queue1 SEC(".maps"),
queue2 SEC(".maps"),
queue3 SEC(".maps"),
queue4 SEC(".maps");
queue4 SEC(".maps"),
dump_store SEC(".maps");
struct {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
@@ -578,11 +579,26 @@ void BPF_STRUCT_OPS(qmap_dump, struct scx_dump_ctx *dctx)
return;
scx_bpf_dump("QMAP FIFO[%d]:", i);
/*
* Dump can be invoked anytime and there is no way to iterate in
* a non-destructive way. Pop and store in dump_store and then
* restore afterwards. If racing against new enqueues, ordering
* can get mixed up.
*/
bpf_repeat(4096) {
if (bpf_map_pop_elem(fifo, &pid))
break;
bpf_map_push_elem(&dump_store, &pid, 0);
scx_bpf_dump(" %d", pid);
}
bpf_repeat(4096) {
if (bpf_map_pop_elem(&dump_store, &pid))
break;
bpf_map_push_elem(fifo, &pid, 0);
}
scx_bpf_dump("\n");
}
}
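A userspace model of the dump-and-restore trick in the comment above: the queue can only be read destructively, so entries are drained into a scratch queue while being printed and then pushed back, which keeps their order only if nothing enqueues concurrently. The ring-buffer helpers are stand-ins, not BPF map operations.

#include <stdio.h>

#define CAP 8

struct fifo {
        int buf[CAP];
        int head, tail, len;
};

static int fifo_pop(struct fifo *q, int *v)
{
        if (!q->len)
                return -1;
        *v = q->buf[q->head];
        q->head = (q->head + 1) % CAP;
        q->len--;
        return 0;
}

static void fifo_push(struct fifo *q, int v)
{
        q->buf[q->tail] = v;
        q->tail = (q->tail + 1) % CAP;
        q->len++;
}

int main(void)
{
        struct fifo fifo = { 0 }, dump_store = { 0 };
        int pid;

        for (pid = 100; pid < 104; pid++)
                fifo_push(&fifo, pid);

        printf("FIFO:");
        while (!fifo_pop(&fifo, &pid)) {        /* drain into the scratch queue */
                fifo_push(&dump_store, pid);
                printf(" %d", pid);
        }
        printf("\n");

        while (!fifo_pop(&dump_store, &pid))    /* put everything back */
                fifo_push(&fifo, pid);

        printf("restored queue length = %d\n", fifo.len);
        return 0;
}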


@@ -3823,7 +3823,8 @@ endpoint_tests()
# remove and re-add
if reset_with_events "delete re-add signal" &&
mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=0
pm_nl_set_limits $ns1 0 3
pm_nl_set_limits $ns2 3 3
pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal