Add a new test to ensure that a null pointer dereference does not occur
when the transport changes. The bug was reported upstream [1] and fixed
with commit 2cb7c756f6 ("vsock/virtio: discard packets if the
transport changes").
KASAN: null-ptr-deref in range [0x0000000000000060-0x0000000000000067]
CPU: 2 UID: 0 PID: 463 Comm: kworker/2:3 Not tainted
Workqueue: vsock-loopback vsock_loopback_work
RIP: 0010:vsock_stream_has_data+0x44/0x70
Call Trace:
virtio_transport_do_close+0x68/0x1a0
virtio_transport_recv_pkt+0x1045/0x2ae4
vsock_loopback_work+0x27d/0x3f0
process_one_work+0x846/0x1420
worker_thread+0x5b3/0xf80
kthread+0x35a/0x700
ret_from_fork+0x2d/0x70
ret_from_fork_asm+0x1a/0x30
Note that this test may not fail in a kernel without the fix, but it may
hang on the client side if it triggers a kernel oops.
This works by creating a socket, trying to connect to a server, and then
executing a second connect operation on the same socket but to a
different CID (0). This triggers a transport change. If the connect
operation is interrupted by a signal, this could cause a null-ptr-deref.
Since this bug is non-deterministic, we need to try several times. It
is reasonable to assume that the bug will show up within the timeout
period.
If there is a G2H transport loaded in the system, the bug is not
triggered and this test will always pass. This is because
`vsock_assign_transport`, when CID 0 is used as in this case, sets
vsk->transport to `transport_g2h`, which is not NULL if a G2H transport
is available.
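The core of the reproducer is sketched below. This is not the test's
actual code (see vsock_test.c); the function name, port number and alarm
interval are illustrative, and the real test retries the sequence in a
loop because the race is non-deterministic.

/*
 * Minimal sketch of the double-connect pattern described above.
 * NOT the actual test code; names and values are arbitrary.
 */
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

static void on_alarm(int sig)
{
        (void)sig;      /* only used to interrupt the blocking connect() */
}

static void try_transport_change(unsigned int peer_cid)
{
        struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_port = 1234,       /* arbitrary port for this sketch */
                .svm_cid = peer_cid,
        };
        struct sigaction sa = { 0 };
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

        /* No SA_RESTART: connect() must return -EINTR when interrupted. */
        sa.sa_handler = on_alarm;
        sigaction(SIGALRM, &sa, NULL);

        /* First connect() towards the peer, interrupted by SIGALRM. */
        alarm(1);
        connect(fd, (struct sockaddr *)&addr, sizeof(addr));

        /*
         * Second connect() on the same socket, but towards CID 0: this
         * triggers a transport change. With no G2H transport loaded,
         * vsk->transport becomes NULL and an unfixed kernel can
         * dereference it from the racing loopback worker.
         */
        addr.svm_cid = 0;
        connect(fd, (struct sockaddr *)&addr, sizeof(addr));

        close(fd);
}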
[1] https://lore.kernel.org/netdev/Z2LvdTTQR7dBmPb5@v4bel-B760M-AORUS-ELITE-AX/
Suggested-by: Hyunwoo Kim <v4bel@theori.io>
Suggested-by: Michal Luczaj <mhal@rbox.co>
Signed-off-by: Luigi Leonardi <leonardi@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Link: https://patch.msgid.link/20250630-test_vsock-v5-2-2492e141e80b@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
AF_VSOCK test suite
-------------------
These tests exercise net/vmw_vsock/ host<->guest sockets for VMware, KVM, and
Hyper-V.
The following tests are available:
* vsock_test - core AF_VSOCK socket functionality
* vsock_diag_test - vsock_diag.ko module for listing open sockets
The following prerequisite steps are not automated and must be performed prior
to running tests:
1. Build the kernel, make headers_install, and build these tests.
2. Install the kernel and tests on the host.
3. Install the kernel and tests inside the guest.
4. Boot the guest and ensure that the AF_VSOCK transport is enabled.
Invoke test binaries in both directions as follows:
# host=server, guest=client
(host)# $TEST_BINARY --mode=server \
                     --control-port=1234 \
                     --peer-cid=3
(guest)# $TEST_BINARY --mode=client \
                      --control-host=$HOST_IP \
                      --control-port=1234 \
                      --peer-cid=2
# host=client, guest=server
(guest)# $TEST_BINARY --mode=server \
                      --control-port=1234 \
                      --peer-cid=2
(host)# $TEST_BINARY --mode=client \
                     --control-host=$GUEST_IP \
                     --control-port=1234 \
                     --peer-cid=3
Some tests are designed to produce kernel memory leaks. Leak detection,
however, is deferred to the Kernel Memory Leak Detector. It is recommended
to enable kmemleak (CONFIG_DEBUG_KMEMLEAK=y) and explicitly trigger a scan
after each test suite run, e.g.:
# echo clear > /sys/kernel/debug/kmemleak
# $TEST_BINARY ...
# echo "wait for any grace periods" && sleep 2
# echo scan > /sys/kernel/debug/kmemleak
# echo "wait for kmemleak" && sleep 5
# echo scan > /sys/kernel/debug/kmemleak
# cat /sys/kernel/debug/kmemleak
For more information see Documentation/dev-tools/kmemleak.rst.
vsock_perf utility
-------------------
'vsock_perf' is a simple tool to measure vsock performance. It works in
sender/receiver modes: the sender connects to the peer at the specified port
and starts data transmission to the receiver. After data processing is done,
it prints several metrics (see below).
Usage:
# run as sender
# connect to CID 2, port 1234, send 1G of data, tx buf size is 1M
./vsock_perf --sender 2 --port 1234 --bytes 1G --buf-size 1M
Output:
tx performance: A Gbits/s
Output explanation:
A is calculated as "number of bits to send" / "time in tx loop"
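For illustration, a tx loop producing such a figure could look like the
sketch below. This is not vsock_perf's actual code; 'fd' is assumed to be an
already-connected AF_VSOCK socket, and 'buf', 'buf_size' and 'total_bytes'
the tx buffer and the amount of data to send.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* A = "number of bits to send" / "time in tx loop" */
static void tx_loop(int fd, const void *buf, size_t buf_size, size_t total_bytes)
{
        size_t sent = 0;
        double start = now_sec();

        while (sent < total_bytes) {
                ssize_t ret = write(fd, buf, buf_size);

                if (ret <= 0)
                        break;          /* error or peer gone: stop */
                sent += ret;
        }

        printf("tx performance: %f Gbits/s\n",
               sent * 8.0 / (now_sec() - start) / 1e9);
}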
# run as receiver
# listen port 1234, rx buf size is 1M, socket buf size is 1G, SO_RCVLOWAT is 64K
./vsock_perf --port 1234 --buf-size 1M --vsk-size 1G --rcvlowat 64K
Output:
rx performance: A Gbits/s
total in 'read()': B sec
POLLIN wakeups: C
average in 'read()': D ns
Output explanation:
A is calculated as "number of received bits" / "time in rx loop".
B is the total time spent in the 'read()' system call (excluding 'poll()').
C is the number of 'poll()' wake-ups with the POLLIN bit set.
D is B / C, i.e. the average time spent in a single 'read()'.
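As a rough illustration of how these metrics relate, the receive side can be
sketched as below. This is not vsock_perf's actual implementation; 'fd' is
assumed to be an accepted connection already configured as in the usage
example above.

#include <poll.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e9 + ts.tv_nsec;
}

static void rx_loop(int fd, void *buf, size_t buf_size)
{
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        unsigned long wakeups = 0;      /* C: POLLIN wakeups */
        double in_read_ns = 0;          /* B: total time spent in read() */
        double rx_bits = 0;
        double start = now_ns();

        for (;;) {
                if (poll(&pfd, 1, -1) <= 0)
                        break;
                if (pfd.revents & POLLIN)
                        wakeups++;

                double t = now_ns();
                ssize_t ret = read(fd, buf, buf_size);

                in_read_ns += now_ns() - t;
                if (ret <= 0)
                        break;          /* peer closed or error: stop */
                rx_bits += ret * 8.0;
        }

        /* bits per nanosecond is numerically Gbits/s */
        printf("rx performance: %f Gbits/s\n", rx_bits / (now_ns() - start));
        printf("total in 'read()': %f sec\n", in_read_ns / 1e9);
        printf("POLLIN wakeups: %lu\n", wakeups);
        printf("average in 'read()': %f ns\n",
               wakeups ? in_read_ns / wakeups : 0);
}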