Pull MM updates from Andrew Morton:
- The series "zram: optimal post-processing target selection" from
Sergey Senozhatsky improves zram's post-processing selection
algorithm. This leads to improved memory savings.
- Wei Yang has gone to town on the mapletree code, contributing several
series which clean up the implementation:
- "refine mas_mab_cp()"
- "Reduce the space to be cleared for maple_big_node"
- "maple_tree: simplify mas_push_node()"
- "Following cleanup after introduce mas_wr_store_type()"
- "refine storing null"
- The series "selftests/mm: hugetlb_fault_after_madv improvements" from
David Hildenbrand fixes this selftest for s390.
- The series "introduce pte_offset_map_{ro|rw}_nolock()" from Qi Zheng
implements some rationaizations and cleanups in the page mapping
code.
- The series "mm: optimize shadow entries removal" from Shakeel Butt
optimizes the file truncation code by speeding up the handling of
shadow entries.
- The series "Remove PageKsm()" from Matthew Wilcox completes the
migration of this flag over to being a folio-based flag.
- The series "Unify hugetlb into arch_get_unmapped_area functions" from
Oscar Salvador implements a bunch of consolidations and cleanups in
the hugetlb code.
- The series "Do not shatter hugezeropage on wp-fault" from Dev Jain
takes away the wp-fault time practice of turning a huge zero page
into small pages. Instead we replace the whole thing with a THP. More
consistent cleaner and potentiall saves a large number of pagefaults.
- The series "percpu: Add a test case and fix for clang" from Andy
Shevchenko enhances and fixes the kernel's built in percpu test code.
- The series "mm/mremap: Remove extra vma tree walk" from Liam Howlett
optimizes mremap() by avoiding doing things which we didn't need to
do.
- The series "Improve the tmpfs large folio read performance" from
Baolin Wang teaches tmpfs to copy data into userspace at the folio
size rather than as individual pages. A 20% speedup was observed.
- The series "mm/damon/vaddr: Fix issue in
damon_va_evenly_split_region()" fro Zheng Yejian fixes DAMON
splitting.
- The series "memcg-v1: fully deprecate charge moving" from Shakeel
Butt removes the long-deprecated memcgv2 charge moving feature.
- The series "fix error handling in mmap_region() and refactor" from
Lorenzo Stoakes cleanup up some of the mmap() error handling and
addresses some potential performance issues.
- The series "x86/module: use large ROX pages for text allocations"
from Mike Rapoport teaches x86 to use large pages for
read-only-execute module text.
- The series "page allocation tag compression" from Suren Baghdasaryan
is followon maintenance work for the new page allocation profiling
feature.
- The series "page->index removals in mm" from Matthew Wilcox remove
most references to page->index in mm/. A slow march towards shrinking
struct page.
- The series "damon/{self,kunit}tests: minor fixups for DAMON debugfs
interface tests" from Andrew Paniakin performs maintenance work for
DAMON's self testing code.
- The series "mm: zswap swap-out of large folios" from Kanchana Sridhar
improves zswap's batching of compression and decompression. It is a
step along the way towards using Intel IAA hardware acceleration for
this zswap operation.
- The series "kasan: migrate the last module test to kunit" from
Sabyrzhan Tasbolatov completes the migration of the KASAN built-in
tests over to the KUnit framework.
- The series "implement lightweight guard pages" from Lorenzo Stoakes
permits userapace to place fault-generating guard pages within a
single VMA, rather than requiring that multiple VMAs be created for
this. Improved efficiencies for userspace memory allocators are
expected.
- The series "memcg: tracepoint for flushing stats" from JP Kobryn uses
tracepoints to provide increased visibility into memcg stats flushing
activity.
- The series "zram: IDLE flag handling fixes" from Sergey Senozhatsky
fixes a zram buglet which potentially affected performance.
- The series "mm: add more kernel parameters to control mTHP" from
Maíra Canal enhances our ability to control/configuremultisize THP
from the kernel boot command line.
- The series "kasan: few improvements on kunit tests" from Sabyrzhan
Tasbolatov has a couple of fixups for the KASAN KUnit tests.
- The series "mm/list_lru: Split list_lru lock into per-cgroup scope"
from Kairui Song optimizes list_lru memory utilization when lockdep
is enabled.
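
The shape of the new page-mapping helpers, sketched from the series
description (treat the exact prototypes as an approximation, not as the
authoritative declarations):

	/*
	 * Sketch: map a PTE table without taking the PTE lock, for
	 * callers that will only read entries (signature assumed from
	 * the series posting):
	 */
	pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
					unsigned long addr, spinlock_t **ptlp);

	/*
	 * Sketch: map a PTE table for later modification; *pmdvalp
	 * reports the pmd value seen at map time so the caller can
	 * recheck it once the lock is actually taken:
	 */
	pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
					unsigned long addr, pmd_t *pmdvalp,
					spinlock_t **ptlp);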
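
A minimal userspace sketch of the lightweight guard pages interface. The
MADV_GUARD_INSTALL/MADV_GUARD_REMOVE advice values come from this series;
the fallback definitions below are only for older uapi headers, and a
kernel carrying this series is needed for the calls to succeed:

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#ifndef MADV_GUARD_INSTALL
	#define MADV_GUARD_INSTALL 102
	#define MADV_GUARD_REMOVE  103
	#endif

	int main(void)
	{
		long psz = sysconf(_SC_PAGESIZE);
		/* One anonymous VMA; no extra mappings are needed for the guard. */
		char *buf = mmap(NULL, 4 * psz, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;
		/* Make the second page fault-generating; the VMA is not split. */
		if (madvise(buf + psz, psz, MADV_GUARD_INSTALL))
			perror("madvise");	/* older kernels reject this */
		memset(buf, 0, psz);		/* fine: the guard page is untouched */
		/* Touching buf[psz] here would raise SIGSEGV. */
		madvise(buf + psz, psz, MADV_GUARD_REMOVE);
		munmap(buf, 4 * psz);
		return 0;
	}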
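
As an illustration of the mTHP boot-time syntax (the exact string below
follows the series' documentation and should be treated as a sketch), a
per-size anonymous THP policy can be given as:

	thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never

Each <size>:<state> clause sets the policy for one or more anonymous THP
sizes, mirroring the per-size "enabled" knobs under
/sys/kernel/mm/transparent_hugepage/.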
* tag 'mm-stable-2024-11-18-19-27' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (215 commits)
cma: enforce non-zero pageblock_order during cma_init_reserved_mem()
mm/kfence: add a new kunit test test_use_after_free_read_nofault()
zram: fix NULL pointer in comp_algorithm_show()
memcg/hugetlb: add hugeTLB counters to memcg
vmstat: call fold_vm_zone_numa_events() before show per zone NUMA event
mm: mmap_lock: check trace_mmap_lock_$type_enabled() instead of regcount
zram: ZRAM_DEF_COMP should depend on ZRAM
MAINTAINERS/MEMORY MANAGEMENT: add document files for mm
Docs/mm/damon: recommend academic papers to read and/or cite
mm: define general function pXd_init()
kmemleak: iommu/iova: fix transient kmemleak false positive
mm/list_lru: simplify the list_lru walk callback function
mm/list_lru: split the lock to per-cgroup scope
mm/list_lru: simplify reparenting and initial allocation
mm/list_lru: code clean up for reparenting
mm/list_lru: don't export list_lru_add
mm/list_lru: don't pass unnecessary key parameters
kasan: add kunit tests for kmalloc_track_caller, kmalloc_node_track_caller
kasan: change kasan_atomics kunit test as KUNIT_CASE_SLOW
kasan: use EXPORT_SYMBOL_IF_KUNIT to export symbols
...
arch/s390/include/asm/set_memory.h · 69 lines · 2.1 KiB · C
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASMS390_SET_MEMORY_H
#define _ASMS390_SET_MEMORY_H

#include <linux/mutex.h>

extern struct mutex cpa_mutex;

enum {
	_SET_MEMORY_RO_BIT,
	_SET_MEMORY_RW_BIT,
	_SET_MEMORY_NX_BIT,
	_SET_MEMORY_X_BIT,
	_SET_MEMORY_4K_BIT,
	_SET_MEMORY_INV_BIT,
	_SET_MEMORY_DEF_BIT,
};

#define SET_MEMORY_RO	BIT(_SET_MEMORY_RO_BIT)
#define SET_MEMORY_RW	BIT(_SET_MEMORY_RW_BIT)
#define SET_MEMORY_NX	BIT(_SET_MEMORY_NX_BIT)
#define SET_MEMORY_X	BIT(_SET_MEMORY_X_BIT)
#define SET_MEMORY_4K	BIT(_SET_MEMORY_4K_BIT)
#define SET_MEMORY_INV	BIT(_SET_MEMORY_INV_BIT)
#define SET_MEMORY_DEF	BIT(_SET_MEMORY_DEF_BIT)

int __set_memory(unsigned long addr, unsigned long numpages, unsigned long flags);

#define set_memory_rox set_memory_rox

/*
 * Generate two variants of each set_memory() function:
 *
 * set_memory_yy(unsigned long addr, int numpages);
 * __set_memory_yy(void *start, void *end);
 *
 * The second variant exists for convenience, to avoid the usual
 * (unsigned long) casts; unlike the first variant it can also be used
 * for areas larger than 8TB, which may happen at memory initialization.
 */
#define __SET_MEMORY_FUNC(fname, flags)					\
static inline int fname(unsigned long addr, int numpages)		\
{									\
	return __set_memory(addr, numpages, (flags));			\
}									\
									\
static inline int __##fname(void *start, void *end)			\
{									\
	unsigned long numpages;						\
									\
	numpages = (end - start) >> PAGE_SHIFT;				\
	return __set_memory((unsigned long)start, numpages, (flags));	\
}

__SET_MEMORY_FUNC(set_memory_ro, SET_MEMORY_RO)
__SET_MEMORY_FUNC(set_memory_rw, SET_MEMORY_RW)
__SET_MEMORY_FUNC(set_memory_nx, SET_MEMORY_NX)
__SET_MEMORY_FUNC(set_memory_x, SET_MEMORY_X)
__SET_MEMORY_FUNC(set_memory_rox, SET_MEMORY_RO | SET_MEMORY_X)
__SET_MEMORY_FUNC(set_memory_rwnx, SET_MEMORY_RW | SET_MEMORY_NX)
__SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)

int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);
int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
bool kernel_page_present(struct page *page);

#endif
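
For context, a hypothetical caller of the generated helpers might look like
the sketch below (illustrative only; the function name and its arguments are
invented for the example):

	/* Illustrative sketch: exercising both generated variants. */
	static int example_protect(unsigned long addr, int numpages,
				   void *start, void *end)
	{
		int rc;

		/* Variant 1: unsigned long address plus an int page count. */
		rc = set_memory_rox(addr, numpages);
		if (rc)
			return rc;
		/* Variant 2: pointer range, no casts; also usable above 8TB. */
		return __set_memory_nx(start, end);
	}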