Mike Rapoport (Microsoft)
8afa901c14
arch, mm: make releasing of memory to page allocator more explicit
...
The point where the memory is released from memblock to the buddy
allocator is hidden inside arch-specific mem_init()s and the call to
memblock_free_all() is needlessly duplicated in every architecture. After
the introduction of the arch_mm_preinit() hook, the mem_init() implementation
on many architectures contains only the call to memblock_free_all().
Pull memblock_free_all() call into mm_core_init() and drop mem_init() on
relevant architectures to make it more explicit where the free memory is
released from memblock to the buddy allocator and to reduce code
duplication in architecture specific code.
Link: https://lkml.kernel.org/r/20250313135003.836600-14-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org >
Acked-by: Dave Hansen <dave.hansen@linux.intel.com > [x86]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org > [m68k]
Tested-by: Mark Brown <broonie@kernel.org >
Cc: Alexander Gordeev <agordeev@linux.ibm.com >
Cc: Andreas Larsson <andreas@gaisler.com >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Ard Biesheuvel <ardb@kernel.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David S. Miller <davem@davemloft.net >
Cc: Dinh Nguyen <dinguyen@kernel.org >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Guo Ren (csky) <guoren@kernel.org >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com >
Cc: Johannes Berg <johannes@sipsolutions.net >
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com >
Cc: Vineet Gupta <vgupta@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2025-03-17 22:06:53 -07:00
Mike Rapoport (Microsoft)
0d98484ee3
arch, mm: introduce arch_mm_preinit
...
Currently, implementation of mem_init() in every architecture consists of
one or more of the following:
* initializations that must run before page allocator is active, for
instance swiotlb_init()
* a call to memblock_free_all() to release all the memory to the buddy
allocator
* initializations that must run after page allocator is ready and there is
no arch-specific hook other than mem_init() for that, like for example
register_page_bootmem_info() in x86 and sparc64 or simple setting of
mem_init_done = 1 in several architectures
* a bunch of semi-related stuff that apparently had no better place to
live, for example a ton of BUILD_BUG_ON()s in parisc.
Introduce arch_mm_preinit() that will be the first thing called from
mm_core_init(). On architectures that have initializations that must happen
before the page allocator is ready, move those into arch_mm_preinit() along
with the code that does not depend on ordering with page allocator setup.
On several architectures this results in reduction of mem_init() to a
single call to memblock_free_all() that allows its consolidation next.
Link: https://lkml.kernel.org/r/20250313135003.836600-13-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org >
Acked-by: Dave Hansen <dave.hansen@linux.intel.com > [x86]
Tested-by: Mark Brown <broonie@kernel.org >
Cc: Alexander Gordeev <agordeev@linux.ibm.com >
Cc: Andreas Larsson <andreas@gaisler.com >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Ard Biesheuvel <ardb@kernel.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David S. Miller <davem@davemloft.net >
Cc: Dinh Nguyen <dinguyen@kernel.org >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Guo Ren (csky) <guoren@kernel.org >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com >
Cc: Johannes Berg <johannes@sipsolutions.net >
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com >
Cc: Vineet Gupta <vgupta@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2025-03-17 22:06:53 -07:00
Mike Rapoport (Microsoft)
6faea3422e
arch, mm: streamline HIGHMEM freeing
...
All architectures that support HIGHMEM have code that frees high
memory pages to the buddy allocator, while __free_memory_core() is limited
to freeing only low memory.
There is no actual reason for that. The memory map is completely ready by
the time memblock_free_all() is called and high pages can be released to
the buddy allocator along with low memory.
Remove low memory limit from __free_memory_core() and drop per-architecture
code that frees high memory pages.
Link: https://lkml.kernel.org/r/20250313135003.836600-12-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org >
Acked-by: Dave Hansen <dave.hansen@linux.intel.com > [x86]
Tested-by: Mark Brown <broonie@kernel.org >
Cc: Alexander Gordeev <agordeev@linux.ibm.com >
Cc: Andreas Larsson <andreas@gaisler.com >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Ard Biesheuvel <ardb@kernel.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David S. Miller <davem@davemloft.net >
Cc: Dinh Nguyen <dinguyen@kernel.org >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Guo Ren (csky) <guoren@kernel.org >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com >
Cc: Johannes Berg <johannes@sipsolutions.net >
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com >
Cc: Vineet Gupta <vgupta@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2025-03-17 22:06:53 -07:00
Mike Rapoport (Microsoft)
e120d1bc12
arch, mm: set high_memory in free_area_init()
...
high_memory defines the upper bound on the directly mapped memory. This bound
is defined by the beginning of ZONE_HIGHMEM when a system has high memory
and by the end of memory otherwise.
All this is known to generic memory management initialization code that
can set high_memory while initializing core mm structures.
Add a generic calculation of high_memory to free_area_init() and remove
per-architecture calculation except for the architectures that set and use
high_memory earlier than that.
Link: https://lkml.kernel.org/r/20250313135003.836600-11-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org >
Acked-by: Dave Hansen <dave.hansen@linux.intel.com > [x86]
Tested-by: Mark Brown <broonie@kernel.org >
Cc: Alexander Gordeev <agordeev@linux.ibm.com >
Cc: Andreas Larsson <andreas@gaisler.com >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Ard Biesheuvel <ardb@kernel.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David S. Miller <davem@davemloft.net >
Cc: Dinh Nguyen <dinguyen@kernel.org >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Guo Ren (csky) <guoren@kernel.org >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com >
Cc: Johannes Berg <johannes@sipsolutions.net >
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com >
Cc: Vineet Gupta <vgupta@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2025-03-17 22:06:52 -07:00
Mike Rapoport (Microsoft)
8268af309d
arch, mm: set max_mapnr when allocating memory map for FLATMEM
...
max_mapnr is essentially the size of the memory map for systems that use
FLATMEM. There is no reason to calculate it in each and every architecture
when it's anyway calculated in alloc_node_mem_map().
Drop setting of max_mapnr from architecture code and set it once in
alloc_node_mem_map().
While at it, move the definitions of mem_map and max_mapnr to mm/mm_init.c so
there won't be two copies for the MMU and !MMU variants.
Link: https://lkml.kernel.org/r/20250313135003.836600-10-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org >
Acked-by: Dave Hansen <dave.hansen@linux.intel.com > [x86]
Tested-by: Mark Brown <broonie@kernel.org >
Cc: Alexander Gordeev <agordeev@linux.ibm.com >
Cc: Andreas Larsson <andreas@gaisler.com >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Ard Biesheuvel <ardb@kernel.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: David S. Miller <davem@davemloft.net >
Cc: Dinh Nguyen <dinguyen@kernel.org >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Guo Ren (csky) <guoren@kernel.org >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com >
Cc: Johannes Berg <johannes@sipsolutions.net >
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com >
Cc: Vineet Gupta <vgupta@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2025-03-17 22:06:52 -07:00
Mike Rapoport (Microsoft)
e74e2b8eb4
MIPS: make setup_zero_pages() use memblock
...
Allocating the zero pages from memblock is simpler because the memory is
already reserved.
This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Link: https://lkml.kernel.org/r/20250313135003.836600-6-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org >
Cc: Alexander Gordeev <agordeev@linux.ibm.com >
Cc: Andreas Larsson <andreas@gaisler.com >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Ard Biesheuvel <ardb@kernel.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Dave Hansen <dave.hansen@linux.intel.com >
Cc: David S. Miller <davem@davemloft.net >
Cc: Dinh Nguyen <dinguyen@kernel.org >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Guo Ren (csky) <guoren@kernel.org >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com >
Cc: Johannes Berg <johannes@sipsolutions.net >
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Brown <broonie@kernel.org >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com >
Cc: Vineet Gupta <vgupta@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2025-03-17 22:06:51 -07:00
Mike Rapoport (Microsoft)
67e7a60086
MIPS: consolidate mem_init() for NUMA machines
...
Both MIPS systems that support NUMA (loongson3 and sgi-ip27) have an
identical mem_init() for the NUMA case.
Move that into arch/mips/mm/init.c and drop duplicate per-machine
definitions.
Link: https://lkml.kernel.org/r/20250313135003.836600-5-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org >
Cc: Alexander Gordeev <agordeev@linux.ibm.com >
Cc: Andreas Larsson <andreas@gaisler.com >
Cc: Andy Lutomirski <luto@kernel.org >
Cc: Ard Biesheuvel <ardb@kernel.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Dave Hansen <dave.hansen@linux.intel.com >
Cc: David S. Miller <davem@davemloft.net >
Cc: Dinh Nguyen <dinguyen@kernel.org >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com >
Cc: Guo Ren (csky) <guoren@kernel.org >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com >
Cc: Johannes Berg <johannes@sipsolutions.net >
Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Brown <broonie@kernel.org >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com >
Cc: Vineet Gupta <vgupta@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2025-03-17 22:06:51 -07:00
Mike Rapoport (IBM)
0cc2dc4902
arch: make execmem setup available regardless of CONFIG_MODULES
...
execmem does not depend on modules; on the contrary, modules use
execmem.
To make execmem available when CONFIG_MODULES=n, for instance for
kprobes, split execmem_params initialization out from
arch/*/kernel/module.c and compile it when CONFIG_EXECMEM=y.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org >
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org >
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org >
2024-05-14 00:31:44 -07:00
Linus Torvalds
096f286ee3
Merge tag 'mips_6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
...
Pull MIPS updates from Thomas Bogendoerfer:
"Just cleanups and fixes"
* tag 'mips_6.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
MIPS: Alchemy: Fix an out-of-bound access in db1550_dev_setup()
MIPS: Alchemy: Fix an out-of-bound access in db1200_dev_setup()
MIPS: Fix typos
MIPS: Remove unused shadow GPR support from vector irq setup
MIPS: Allow vectored interrupt handler to reside everywhere for 64bit
mips: Set dump-stack arch description
mips: mm: add slab availability checking in ioremap_prot
mips: Optimize max_mapnr init procedure
mips: Fix max_mapnr being uninitialized on early stages
mips: Fix incorrect max_low_pfn adjustment
mips: dmi: Fix early remap on MIPS32
MIPS: compressed: Use correct instruction for 64 bit code
MIPS: SGI-IP27: hubio: fix nasid kernel-doc warning
MAINTAINERS: Add myself as maintainer of the Ralink architecture
2024-01-17 11:20:50 -08:00
Serge Semin
1c0150229f
mips: Optimize max_mapnr init procedure
...
max_mapnr defines the upper boundary of the pages space in the system.
Currently, when HIGHMEM is available, it is calculated based on the
upper high memory PFN limit. Since there is a case where that is not
fully correct, rework the max_mapnr variable initialization
procedure to cover all the cases handled in the paging_init() method:
1. If CPU has DC-aliases, then high memory is unavailable so the PFNs
upper boundary is determined by max_low_pfn.
2. Otherwise if high memory is available, use highend_pfn value
representing the upper high memory PFNs limit.
3. Otherwise no high memory is available so set max_mapnr with the
low-memory upper limit.
Signed-off-by: Serge Semin <fancer.lancer@gmail.com >
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
2023-12-21 15:32:23 +01:00
Serge Semin
e1a9ae4573
mips: Fix max_mapnr being uninitialized on early stages
...
max_mapnr variable is utilized in the pfn_valid() method in order to
determine the upper PFN space boundary. Having it uninitialized
effectively makes any PFN passed to that method invalid. That in its turn
causes the kernel mm-subsystem occasion malfunctions even after the
max_mapnr variable is actually properly updated. For instance,
pfn_valid() is called in the init_unavailable_range() method in the
framework of the calls-chain on MIPS:
setup_arch()
+-> paging_init()
+-> free_area_init()
+-> memmap_init()
+-> memmap_init_zone_range()
+-> init_unavailable_range()
Since pfn_valid() always returns "false" before max_mapnr is
initialized in the mem_init() method, any flatmem page holes will be left
in a poisoned/uninitialized state, including the IO-memory pages. Thus
any further attempts to map/remap the IO memory through the MMU may fail.
In particular it happened in my case on attempt to map the SRAM region.
The kernel bootup procedure just crashed on the unhandled unaligned access
bug raised in the __update_cache() method:
> Unhandled kernel unaligned access[#1]:
> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.7.0-rc1-XXX-dirty #2056
> ...
> Call Trace:
> [<8011ef9c>] __update_cache+0x88/0x1bc
> [<80385944>] ioremap_page_range+0x110/0x2a4
> [<80126948>] ioremap_prot+0x17c/0x1f4
> [<80711b80>] __devm_ioremap+0x8c/0x120
> [<80711e0c>] __devm_ioremap_resource+0xf4/0x218
> [<808bf244>] sram_probe+0x4f4/0x930
> [<80889d20>] platform_probe+0x68/0xec
> ...
Let's fix the problem by initializing the max_mapnr variable as soon as
the required data is available. In particular it can be done right in the
paging_init() method before free_area_init() is called since all the PFN
zone boundaries have already been calculated by that time.
Cc: stable@vger.kernel.org
Signed-off-by: Serge Semin <fancer.lancer@gmail.com >
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
2023-12-21 15:32:09 +01:00
Arnd Bergmann
e021227afb
mips: fix setup_zero_pages() prototype
...
setup_zero_pages() has a local declaration in a platform specific header,
but that is not seen in the file it is defined in:
arch/mips/mm/init.c:60:6: error: no previous prototype for 'setup_zero_pages' [-Werror=missing-prototypes]
Move it to the corresponding global header and include that where needed.
Link: https://lkml.kernel.org/r/20231204115710.2247097-11-arnd@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de >
Cc: Stephen Rothwell <sfr@rothwell.id.au >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-12-10 17:21:40 -08:00
Matthew Wilcox (Oracle)
15fa3e8e32
mips: implement the new page table range API
...
Rename _PFN_SHIFT to PFN_PTE_SHIFT. Convert a few places
to call set_pte() instead of set_pte_at(). Add set_ptes(),
update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio().
Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page
to per-folio.
Link: https://lkml.kernel.org/r/20230802151406.3735276-18-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org >
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
2023-08-24 16:20:22 -07:00
Kefeng Wang
23f917169e
mm: percpu: add generic pcpu_fc_alloc/free function
...
With the previous patch, we can add generic pcpu first chunk
allocate and free functions to clean up the duplicated definitions in each
architecture.
Link: https://lkml.kernel.org/r/20211216112359.103822-4-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org >
Cc: Paul Mackerras <paulus@samba.org >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Dave Hansen <dave.hansen@linux.intel.com >
Cc: "H. Peter Anvin" <hpa@zytor.com >
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Cc: Dennis Zhou <dennis@kernel.org >
Cc: Tejun Heo <tj@kernel.org >
Cc: Christoph Lameter <cl@linux.com >
Cc: Albert Ou <aou@eecs.berkeley.edu >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Paul Walmsley <paul.walmsley@sifive.com >
Cc: "Rafael J. Wysocki" <rafael@kernel.org >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2022-01-20 08:52:52 +02:00
Kefeng Wang
1ca3fb3abd
mm: percpu: add pcpu_fc_cpu_to_node_fn_t typedef
...
Add pcpu_fc_cpu_to_node_fn_t and pass it into pcpu_fc_alloc_fn_t; pcpu
first chunk allocation will call it to allocate memblock memory on the
corresponding node. This is preparation for the next patch.
Link: https://lkml.kernel.org/r/20211216112359.103822-3-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org >
Cc: Paul Mackerras <paulus@samba.org >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Dave Hansen <dave.hansen@linux.intel.com >
Cc: "H. Peter Anvin" <hpa@zytor.com >
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Cc: "Rafael J. Wysocki" <rafael@kernel.org >
Cc: Dennis Zhou <dennis@kernel.org >
Cc: Tejun Heo <tj@kernel.org >
Cc: Christoph Lameter <cl@linux.com >
Cc: Albert Ou <aou@eecs.berkeley.edu >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Paul Walmsley <paul.walmsley@sifive.com >
Cc: Will Deacon <will@kernel.org >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2022-01-20 08:52:52 +02:00
Mike Rapoport
4421cca0a3
memblock: use memblock_free for freeing virtual pointers
...
Rename memblock_free_ptr() to memblock_free() and use memblock_free()
when freeing a virtual pointer so that memblock_free() will be a
counterpart of memblock_alloc().
The callers are updated with the below semantic patch and manual
addition of (void *) casting to pointers that are represented by
unsigned long variables.
@@
identifier vaddr;
expression size;
@@
(
- memblock_phys_free(__pa(vaddr), size);
+ memblock_free(vaddr, size);
|
- memblock_free_ptr(vaddr, size);
+ memblock_free(vaddr, size);
)
[sfr@canb.auug.org.au: fixup]
Link: https://lkml.kernel.org/r/20211018192940.3d1d532f@canb.auug.org.au
Link: https://lkml.kernel.org/r/20210930185031.18648-7-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au >
Cc: Christophe Leroy <christophe.leroy@csgroup.eu >
Cc: Juergen Gross <jgross@suse.com >
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2021-11-06 13:30:41 -07:00
Mike Rapoport
3ecc68349b
memblock: rename memblock_free to memblock_phys_free
...
Since memblock_free() operates on a physical range, make its name
reflect it and rename it to memblock_phys_free(), so it will be a
logical counterpart to memblock_phys_alloc().
The callers are updated with the below semantic patch:
@@
expression addr;
expression size;
@@
- memblock_free(addr, size);
+ memblock_phys_free(addr, size);
Link: https://lkml.kernel.org/r/20210930185031.18648-6-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Cc: Christophe Leroy <christophe.leroy@csgroup.eu >
Cc: Juergen Gross <jgross@suse.com >
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2021-11-06 13:30:41 -07:00
Mike Rapoport
fa27717110
memblock: drop memblock_free_early_nid() and memblock_free_early()
...
memblock_free_early_nid() is unused and memblock_free_early() is an
alias for memblock_free().
Replace calls to memblock_free_early() with calls to memblock_free() and
remove memblock_free_early() and memblock_free_early_nid().
Link: https://lkml.kernel.org/r/20210930185031.18648-4-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Cc: Christophe Leroy <christophe.leroy@csgroup.eu >
Cc: Juergen Gross <jgross@suse.com >
Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2021-11-06 13:30:41 -07:00
Mike Rapoport
a9ee6cf5c6
mm: replace CONFIG_NEED_MULTIPLE_NODES with CONFIG_NUMA
...
After removal of DISCONTIGMEM the NEED_MULTIPLE_NODES and NUMA
configuration options are equivalent.
Drop CONFIG_NEED_MULTIPLE_NODES and use CONFIG_NUMA instead.
Done with
$ sed -i 's/CONFIG_NEED_MULTIPLE_NODES/CONFIG_NUMA/' \
$(git grep -wl CONFIG_NEED_MULTIPLE_NODES)
$ sed -i 's/NEED_MULTIPLE_NODES/NUMA/' \
$(git grep -wl NEED_MULTIPLE_NODES)
with manual tweaks afterwards.
[rppt@linux.ibm.com: fix arm boot crash]
Link: https://lkml.kernel.org/r/YMj9vHhHOiCVN4BF@linux.ibm.com
Link: https://lkml.kernel.org/r/20210608091316.3622-9-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Acked-by: Arnd Bergmann <arnd@arndb.de >
Acked-by: David Hildenbrand <david@redhat.com >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru >
Cc: Jonathan Corbet <corbet@lwn.net >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Richard Henderson <rth@twiddle.net >
Cc: Vineet Gupta <vgupta@synopsys.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2021-06-29 10:53:55 -07:00
Mike Rapoport
d3c251ab95
arch, mm: remove stale mentions of DISCONTIGMEM
...
There are several places that mention DISCONTIGMEM in comments or have
stale code guarded by CONFIG_DISCONTIGMEM.
Remove the dead code and update the comments.
Link: https://lkml.kernel.org/r/20210608091316.3622-7-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Acked-by: Arnd Bergmann <arnd@arndb.de >
Reviewed-by: David Hildenbrand <david@redhat.com >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru >
Cc: Jonathan Corbet <corbet@lwn.net >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Richard Henderson <rth@twiddle.net >
Cc: Vineet Gupta <vgupta@synopsys.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2021-06-29 10:53:55 -07:00
Kefeng Wang
1f9d03c5e9
mm: move mem_init_print_info() into mm_init()
...
mem_init_print_info() is called in mem_init() on each architecture, always
with a NULL argument, so change it to take no arguments and move it into
mm_init().
Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com >
Acked-by: Dave Hansen <dave.hansen@linux.intel.com > [x86]
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr > [powerpc]
Acked-by: David Hildenbrand <david@redhat.com >
Tested-by: Anatoly Pugachev <matorola@gmail.com > [sparc64]
Acked-by: Russell King <rmk+kernel@armlinux.org.uk > [arm]
Acked-by: Mike Rapoport <rppt@linux.ibm.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Richard Henderson <rth@twiddle.net >
Cc: Guo Ren <guoren@kernel.org >
Cc: Yoshinori Sato <ysato@users.osdn.me >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Jonas Bonn <jonas@southpole.se >
Cc: Palmer Dabbelt <palmer@dabbelt.com >
Cc: Heiko Carstens <hca@linux.ibm.com >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: "Peter Zijlstra" <peterz@infradead.org >
Cc: Ingo Molnar <mingo@redhat.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2021-04-30 11:20:42 -07:00
Thomas Bogendoerfer
a6e83acee2
MIPS: Remove empty prom_free_prom_memory functions
...
Most of the prom_free_prom_memory functions are empty. With
a new weak prom_free_prom_memory() we can remove all of them.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Acked-by: Florian Fainelli <f.fainelli@gmail.com >
2021-01-07 17:11:33 +01:00
Thomas Gleixner
a4c33e83bc
mips/mm/highmem: Switch to generic kmap atomic
...
No reason to have the same code in every architecture.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Arnd Bergmann <arnd@arndb.de >
Link: https://lore.kernel.org/r/20201103095857.885321106@linutronix.de
2020-11-06 23:14:56 +01:00
Joe Perches
33def8498f
treewide: Convert macro and uses of __section(foo) to __section("foo")
...
Use a more generic form for __section that requires quotes to avoid
complications with clang and gcc differences.
Remove the quote operator # from compiler_attributes.h __section macro.
Convert all unquoted __section(foo) uses to quoted __section("foo").
Also convert __attribute__((section("foo"))) uses to __section("foo")
even if the __attribute__ has multiple list entry forms.
Conversion done using the script at:
https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl
Signed-off-by: Joe Perches <joe@perches.com >
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com >
Reviewed-by: Miguel Ojeda <ojeda@kernel.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2020-10-25 14:51:49 -07:00
Kees Cook
3f649ab728
treewide: Remove uninitialized_var() usage
...
Using uninitialized_var() is dangerous as it papers over real bugs[1]
(or can in the future), and suppresses unrelated compiler warnings
(e.g. "unused variable"). If the compiler thinks it is uninitialized,
either simply initialize the variable or make compiler changes.
In preparation for removing[2] the[3] macro[4], remove all remaining
needless uses with the following script:
git grep '\buninitialized_var\b' | cut -d: -f1 | sort -u | \
xargs perl -pi -e \
's/\buninitialized_var\(([^\)]+)\)/\1/g;
s:\s*/\* (GCC be quiet|to make compiler happy) \*/$::g;'
drivers/video/fbdev/riva/riva_hw.c was manually tweaked to avoid
pathological white-space.
No outstanding warnings were found building allmodconfig with GCC 9.3.0
for x86_64, i386, arm64, arm, powerpc, powerpc64le, s390x, mips, sparc64,
alpha, and m68k.
[1] https://lore.kernel.org/lkml/20200603174714.192027-1-glider@google.com/
[2] https://lore.kernel.org/lkml/CA+55aFw+Vbj0i=1TGqCR5vQkCzWJ0QxK6CernOU6eedsudAixw@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CA+55aFwgbgqhbp1fkxvRKEpzyR5J8n1vKT1VZdz9knmPuXhOeg@mail.gmail.com/
[4] https://lore.kernel.org/lkml/CA+55aFz2500WfbKXAx8s67wrm9=yVJu65TpLgN_ybYNv0VEOKA@mail.gmail.com/
Reviewed-by: Leon Romanovsky <leonro@mellanox.com > # drivers/infiniband and mlx4/mlx5
Acked-by: Jason Gunthorpe <jgg@mellanox.com > # IB
Acked-by: Kalle Valo <kvalo@codeaurora.org > # wireless drivers
Reviewed-by: Chao Yu <yuchao0@huawei.com > # erofs
Signed-off-by: Kees Cook <keescook@chromium.org >
2020-07-16 12:35:15 -07:00
Mike Rapoport
e31cf2f4ca
mm: don't include asm/pgtable.h if linux/mm.h is already included
...
Patch series "mm: consolidate definitions of page table accessors", v2.
The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definitions of pgd_offset() for 25 supported
architectures.
Most of these definitions are actually identical and typically it boils
down to, e.g.
static inline unsigned long pmd_index(unsigned long address)
{
return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}
static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}
These definitions can be shared among 90% of the arches provided
XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.
For architectures that really need a custom version there is always the
possibility to override the generic version with the usual ifdef magic.
These patches introduce include/linux/pgtable.h that replaces
include/asm-generic/pgtable.h and add the definitions of the page table
accessors to the new header.
This patch (of 12):
The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So, there is no point to explicitly include <asm/pgtable.h>
in the files that include <linux/mm.h>.
The include statements in such cases are removed with a simple loop:
for f in $(git grep -l "include <linux/mm.h>") ; do
sed -i -e '/include <asm\/pgtable.h>/ d' $f
done
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Brian Cain <bcain@codeaurora.org >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Chris Zankel <chris@zankel.net >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Greentime Hu <green.hu@gmail.com >
Cc: Greg Ungerer <gerg@linux-m68k.org >
Cc: Guan Xuetao <gxt@pku.edu.cn >
Cc: Guo Ren <guoren@kernel.org >
Cc: Heiko Carstens <heiko.carstens@de.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Ley Foon Tan <ley.foon.tan@intel.com >
Cc: Mark Salter <msalter@redhat.com >
Cc: Matthew Wilcox <willy@infradead.org >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Mike Rapoport <rppt@kernel.org >
Cc: Nick Hu <nickhu@andestech.com >
Cc: Paul Walmsley <paul.walmsley@sifive.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Rich Felker <dalias@libc.org >
Cc: Russell King <linux@armlinux.org.uk >
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Cc: Vincent Chen <deanbo422@gmail.com >
Cc: Vineet Gupta <vgupta@synopsys.com >
Cc: Will Deacon <will@kernel.org >
Cc: Yoshinori Sato <ysato@users.sourceforge.jp >
Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2020-06-09 09:39:13 -07:00
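The shared accessor pattern quoted in the commit is plain shift-and-mask arithmetic; as a user-space sketch (the PMD_SHIFT and PTRS_PER_PMD values below are illustrative, not tied to any particular architecture):

```c
/* Illustrative constants: a 4K-page layout where a PMD entry covers
 * 2 MiB (shift 21) and each PMD table holds 512 entries. */
#define PMD_SHIFT     21
#define PTRS_PER_PMD  512

/* Same arithmetic as the generic pmd_index() quoted in the commit:
 * extract the bits of the address that select a PMD entry. */
static inline unsigned long pmd_index(unsigned long address)
{
	return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}
```

With these values pmd_index() simply selects bits 21..29 of the virtual address, which is why the definition can be shared once XYZ_SHIFT and PTRS_PER_XYZ are provided.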
Linus Torvalds
ee01c4d72a
Merge branch 'akpm' (patches from Andrew)
...
Merge more updates from Andrew Morton:
"More mm/ work, plenty more to come
Subsystems affected by this patch series: slub, memcg, gup, kasan,
pagealloc, hugetlb, vmscan, tools, mempolicy, memblock, hugetlbfs,
thp, mmap, kconfig"
* akpm: (131 commits)
arm64: mm: use ARCH_HAS_DEBUG_WX instead of arch defined
x86: mm: use ARCH_HAS_DEBUG_WX instead of arch defined
riscv: support DEBUG_WX
mm: add DEBUG_WX support
drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup
mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid()
powerpc/mm: drop platform defined pmd_mknotpresent()
mm: thp: don't need to drain lru cache when splitting and mlocking THP
hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs
sparc32: register memory occupied by kernel as memblock.memory
include/linux/memblock.h: fix minor typo and unclear comment
mm, mempolicy: fix up gup usage in lookup_node
tools/vm/page_owner_sort.c: filter out unneeded line
mm: swap: memcg: fix memcg stats for huge pages
mm: swap: fix vmstats for huge pages
mm: vmscan: limit the range of LRU type balancing
mm: vmscan: reclaim writepage is IO cost
mm: vmscan: determine anon/file pressure balance at the reclaim root
mm: balance LRU lists based on relative thrashing
mm: only count actual rotations as LRU reclaim cost
...
2020-06-03 20:24:15 -07:00
Mike Rapoport
9691a071aa
mm: use free_area_init() instead of free_area_init_nodes()
...
free_area_init() has effectively become a wrapper for
free_area_init_nodes() and there is no point in keeping it. Still, the
free_area_init() name is shorter and more general, as it does not imply
the necessity to initialize multiple nodes.
Rename free_area_init_nodes() to free_area_init(), update the callers and
drop old version of free_area_init().
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Tested-by: Hoan Tran <hoan@os.amperecomputing.com > [arm64]
Reviewed-by: Baoquan He <bhe@redhat.com >
Acked-by: Catalin Marinas <catalin.marinas@arm.com >
Cc: Brian Cain <bcain@codeaurora.org >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Greentime Hu <green.hu@gmail.com >
Cc: Greg Ungerer <gerg@linux-m68k.org >
Cc: Guan Xuetao <gxt@pku.edu.cn >
Cc: Guo Ren <guoren@kernel.org >
Cc: Heiko Carstens <heiko.carstens@de.ibm.com >
Cc: Helge Deller <deller@gmx.de >
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com >
Cc: Jonathan Corbet <corbet@lwn.net >
Cc: Ley Foon Tan <ley.foon.tan@intel.com >
Cc: Mark Salter <msalter@redhat.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Max Filippov <jcmvbkbc@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Hocko <mhocko@kernel.org >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Nick Hu <nickhu@andestech.com >
Cc: Paul Walmsley <paul.walmsley@sifive.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Rich Felker <dalias@libc.org >
Cc: Russell King <linux@armlinux.org.uk >
Cc: Stafford Horne <shorne@gmail.com >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Tony Luck <tony.luck@intel.com >
Cc: Vineet Gupta <vgupta@synopsys.com >
Cc: Yoshinori Sato <ysato@users.sourceforge.jp >
Link: http://lkml.kernel.org/r/20200412194859.12663-6-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2020-06-03 20:09:43 -07:00
Serge Semin
9ee195fd1b
mips: MAAR: Add XPA mode support
...
When XPA mode is enabled, the normally 32-bit MAAR pair registers
are extended to 64 bits, as in the pure 64-bit MIPS
architecture. In this case the MAAR registers can enable
speculative loads/stores for addresses of up to 39 bits in width.
But the process of MAAR initialization then changes a bit:
the upper 32 bits of the registers are accessed by means
of the dedicated instructions mfhc0/mthc0, and there is a CP0.MAAR.VH
bit which should be set together with CP0.MAAR.VL as an indication
that the boundary is valid. All of these peculiarities are taken into
account in this commit so that speculative loads/stores work
when XPA mode is enabled.
Co-developed-by: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru >
Signed-off-by: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru >
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru >
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
Cc: Paul Burton <paulburton@kernel.org >
Cc: Ralf Baechle <ralf@linux-mips.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: Rob Herring <robh+dt@kernel.org >
Cc: linux-pm@vger.kernel.org
Cc: devicetree@vger.kernel.org
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de >
2020-05-19 17:39:32 +02:00
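The mfhc0/mthc0 split described above amounts to writing one 64-bit MAAR value as two 32-bit halves; a hedged user-space sketch of just the split (the MAAR_VL/MAAR_VH bit positions follow the description in the commit — VL at bit 0 of the low word, VH at the top of the high word — but should be checked against the architecture manual):

```c
#include <stdint.h>

/* Validity bits, positions as described in the commit message. */
#define MAAR_VL  (1u << 0)   /* low-word valid bit, written via mtc0 */
#define MAAR_VH  (1u << 31)  /* high-word valid bit, written via mthc0 */

/* Split a 64-bit MAAR value into the two 32-bit halves a 32-bit CPU
 * with XPA would write, setting VL and VH together as required. */
static void maar_split(uint64_t maar, uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)(maar & 0xffffffffu) | MAAR_VL;
	*hi = (uint32_t)(maar >> 32) | MAAR_VH;
}
```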
Thomas Bogendoerfer
f3c560a61b
MIPS: mm: Place per_cpu on different nodes, if NUMA is enabled
...
Implement placing of per_cpu data into memory that is local to the CPU.
Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de >
Signed-off-by: Paul Burton <paulburton@kernel.org >
Cc: Ralf Baechle <ralf@linux-mips.org >
Cc: James Hogan <jhogan@kernel.org >
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
2020-01-09 09:54:30 -08:00
Mike Rapoport
31168f033e
mips: drop __pXd_offset() macros that duplicate pXd_index() ones
...
The __pXd_offset() macros are identical to the pXd_index() macros and there
is no point to keep both of them. All architectures define and use
pXd_index() so let's keep only those to make mips consistent with the rest
of the kernel.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Signed-off-by: Paul Burton <paulburton@kernel.org >
Cc: Ralf Baechle <ralf@linux-mips.org >
Cc: James Hogan <jhogan@kernel.org >
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@kernel.org >
2019-11-22 10:51:17 -08:00
Paul Burton
05d013a036
MIPS: Detect bad _PFN_SHIFT values
...
Two recent commits have fixed issues where _PFN_SHIFT grew too large due
to the introduction of too many pgprot bits in our PTEs for some MIPS32
kernel configurations. Tracking down such issues can be tricky, so add a
BUILD_BUG_ON() to help.
Signed-off-by: Paul Burton <paul.burton@mips.com >
Cc: linux-mips@vger.kernel.org
2019-09-20 14:55:07 -07:00
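A BUILD_BUG_ON() of this kind is a compile-time assertion; a minimal user-space analogue using C11 _Static_assert (the PAGE_SHIFT/_PFN_SHIFT values and the exact limit below are illustrative, not the kernel's real check):

```c
/* Hypothetical MIPS32-style PTE layout: pgprot bits at the bottom,
 * PFN starting at _PFN_SHIFT. If extra pgprot bits push _PFN_SHIFT
 * up, the PFN field can no longer cover all of physical memory; a
 * compile-time check catches this the moment a config grows too far. */
#define PAGE_SHIFT  12
#define _PFN_SHIFT  10   /* assumed: bits 0..9 hold pgprot flags */

/* BUILD_BUG_ON() analogue: refuses to compile if the PFN field has
 * been shifted past the page-offset boundary. */
_Static_assert(_PFN_SHIFT <= PAGE_SHIFT,
               "_PFN_SHIFT too large: pgprot bits overflow the PTE");
```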
Paul Burton
625cfb6f20
MIPS: mm: Fix highmem compile
...
Commit a5718fe8f7 ("MIPS: mm: Drop boot_mem_map") removed the
definition of a page variable for some reason, but that variable is
still used. Restore it to fix compilation with CONFIG_HIGHMEM enabled.
Signed-off-by: Paul Burton <paul.burton@mips.com >
2019-08-23 17:50:30 +01:00
Jiaxun Yang
a5718fe8f7
MIPS: mm: Drop boot_mem_map
...
Initialize the MAARs from the resource map and replace page_is_ram
with memblock_is_memory.
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com >
[paul.burton@mips.com:
- Fix bad MAAR address calculations.
- Use ALIGN() & define maar_align to make it clearer what's going on
with address manipulations.
- Drop the new used field from struct maar_config.
- Rework the RAM walk to avoid iterating over the cfg array needlessly
to find the first unused entry, then count used entries at the end.
Instead just keep the count as we go.]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: yasha.che3@gmail.com
Cc: aurelien@aurel32.net
Cc: sfr@canb.auug.org.au
Cc: fancer.lancer@gmail.com
Cc: matt.redfearn@mips.com
Cc: chenhc@lemote.com
2019-08-23 15:40:14 +01:00
Christoph Hellwig
f94f7434cb
initramfs: poison freed initrd memory
...
Various architectures including x86 poison the freed initrd memory. Do
the same in the generic free_initrd_mem implementation and switch a few
more architectures that are identical to the generic code over to it now.
Link: http://lkml.kernel.org/r/20190213174621.29297-9-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de >
Acked-by: Mike Rapoport <rppt@linux.ibm.com >
Cc: Catalin Marinas <catalin.marinas@arm.com > [arm64]
Cc: Geert Uytterhoeven <geert@linux-m68k.org > [m68k]
Cc: Steven Price <steven.price@arm.com >
Cc: Alexander Viro <viro@zeniv.linux.org.uk >
Cc: Guan Xuetao <gxt@pku.edu.cn >
Cc: Russell King <linux@armlinux.org.uk >
Cc: Will Deacon <will.deacon@arm.com >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2019-05-14 09:47:47 -07:00
Mike Rapoport
8a7f97b902
treewide: add checks for the return value of memblock_alloc*()
...
Add check for the return value of memblock_alloc*() functions and call
panic() in case of error. The panic message repeats the one used by
panicking memblock allocators, with parameters adjusted to include
only relevant ones.
The replacement was mostly automated with semantic patches like the one
below with manual massaging of format strings.
@@
expression ptr, size, align;
@@
ptr = memblock_alloc(size, align);
+ if (!ptr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);
[anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
[rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
[rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
[akpm@linux-foundation.org: fix xtensa printk warning]
Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com >
Signed-off-by: Anders Roxell <anders.roxell@linaro.org >
Reviewed-by: Guo Ren <ren_guo@c-sky.com > [c-sky]
Acked-by: Paul Burton <paul.burton@mips.com > [MIPS]
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com > [s390]
Reviewed-by: Juergen Gross <jgross@suse.com > [Xen]
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org > [m68k]
Acked-by: Max Filippov <jcmvbkbc@gmail.com > [xtensa]
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Christophe Leroy <christophe.leroy@c-s.fr >
Cc: Christoph Hellwig <hch@lst.de >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Dennis Zhou <dennis@kernel.org >
Cc: Greentime Hu <green.hu@gmail.com >
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Cc: Guan Xuetao <gxt@pku.edu.cn >
Cc: Guo Ren <guoren@kernel.org >
Cc: Mark Salter <msalter@redhat.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Petr Mladek <pmladek@suse.com >
Cc: Richard Weinberger <richard@nod.at >
Cc: Rich Felker <dalias@libc.org >
Cc: Rob Herring <robh+dt@kernel.org >
Cc: Rob Herring <robh@kernel.org >
Cc: Russell King <linux@armlinux.org.uk >
Cc: Stafford Horne <shorne@gmail.com >
Cc: Tony Luck <tony.luck@intel.com >
Cc: Vineet Gupta <vgupta@synopsys.com >
Cc: Yoshinori Sato <ysato@users.sourceforge.jp >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2019-03-12 10:04:02 -07:00
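The check-and-panic pattern the semantic patch inserts can be mirrored in user space; a sketch using posix_memalign in place of memblock_alloc (alloc_or_panic is a hypothetical helper for illustration, not a kernel API):

```c
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>

/* User-space analogue of the inserted pattern: check the returned
 * pointer and "panic" (here: abort with a message) on failure,
 * naming the caller and the failed request, as in the commit. */
static void *alloc_or_panic(size_t size, size_t align, const char *func)
{
	void *ptr = NULL;

	if (posix_memalign(&ptr, align, size) != 0 || !ptr) {
		fprintf(stderr, "%s: Failed to allocate %lu bytes align=0x%lx\n",
		        func, (unsigned long)size, (unsigned long)align);
		abort();
	}
	return ptr;
}
```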
Paul Burton
c8790d657b
MIPS: MemoryMapID (MMID) Support
...
Introduce support for using MemoryMapIDs (MMIDs) as an alternative to
Address Space IDs (ASIDs). The major difference between the two is that
MMIDs are global - ie. an MMID uniquely identifies an address space
across all coherent CPUs. In contrast ASIDs are non-global per-CPU IDs,
wherein each address space is allocated a separate ASID for each CPU
upon which it is used. This global namespace allows a new GINVT
instruction to be used to globally invalidate TLB entries associated with a
particular MMID across all coherent CPUs in the system, removing the
need for IPIs to invalidate entries with separate ASIDs on each CPU.
The allocation scheme used here is largely borrowed from arm64 (see
arch/arm64/mm/context.c). In essence we maintain a bitmap to track
available MMIDs, and MMIDs in active use at the time of a rollover to a
new MMID version are preserved in the new version. The allocation scheme
requires efficient 64 bit atomics in order to perform reasonably, so
this support depends upon CONFIG_GENERIC_ATOMIC64=n (ie. currently it
will only be included in MIPS64 kernels).
The first, and currently only, available CPU with support for MMIDs is
the MIPS I6500. This CPU supports 16 bit MMIDs, and so for now we cap
our MMIDs to 16 bits wide in order to prevent the bitmap growing to
absurd sizes if any future CPU does implement 32 bit MMIDs as the
architecture manuals suggest is recommended.
When MMIDs are in use we also make use of GINVT instruction which is
available due to the global nature of MMIDs. By executing a sequence of
GINVT & SYNC 0x14 instructions we can avoid the overhead of an IPI to
each remote CPU in many cases. One complication is that GINVT will
invalidate wired entries (in all cases apart from type 0, which targets
the entire TLB). In order to avoid GINVT invalidating any wired TLB
entries we set up, we make sure to create those entries using a reserved
MMID (0) that we never associate with any address space.
Also of note is that KVM will require further work in order to support
MMIDs & GINVT, since KVM is involved in allocating IDs for guests & in
configuring the MMU. That work is not part of this patch, so for now
when MMIDs are in use KVM is disabled.
Signed-off-by: Paul Burton <paul.burton@mips.com >
Cc: linux-mips@vger.kernel.org
2019-02-04 10:56:41 -08:00
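The version + bitmap allocation scheme borrowed from arm64 can be sketched in miniature: an ID is (version << ID_BITS) | index, and the version rolls over when the bitmap is exhausted. The toy allocator below omits the per-CPU reservation, preservation of active IDs across rollover, and the 64-bit atomics the real code needs; all names and sizes are illustrative:

```c
#include <string.h>

#define ID_BITS  4                 /* toy namespace: 16 IDs per version */
#define NUM_IDS  (1u << ID_BITS)

static unsigned char id_used[NUM_IDS];
static unsigned long version = 1;  /* bumped on rollover */

/* Hand out the first free index under the current version; on
 * exhaustion, start a fresh version (the real scheme would keep
 * IDs still live on other CPUs instead of clearing everything). */
static unsigned long alloc_id(void)
{
	for (unsigned int i = 0; i < NUM_IDS; i++) {
		if (!id_used[i]) {
			id_used[i] = 1;
			return (version << ID_BITS) | i;
		}
	}
	memset(id_used, 0, sizeof(id_used));
	version++;
	id_used[0] = 1;
	return version << ID_BITS;
}
```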
Mike Rapoport
57c8a661d9
mm: remove include/linux/bootmem.h
...
Move remaining definitions and declarations from include/linux/bootmem.h
into include/linux/memblock.h and remove the redundant header.
The includes were replaced with the semantic patch below and then
semi-automated removal of duplicated '#include <linux/memblock.h>'.
@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>
[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
[sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
[sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com >
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au >
Acked-by: Michal Hocko <mhocko@suse.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Chris Zankel <chris@zankel.net >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Greentime Hu <green.hu@gmail.com >
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Cc: Guan Xuetao <gxt@pku.edu.cn >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org >
Cc: Jonas Bonn <jonas@southpole.se >
Cc: Jonathan Corbet <corbet@lwn.net >
Cc: Ley Foon Tan <lftan@altera.com >
Cc: Mark Salter <msalter@redhat.com >
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@sifive.com >
Cc: Paul Burton <paul.burton@mips.com >
Cc: Richard Kuo <rkuo@codeaurora.org >
Cc: Richard Weinberger <richard@nod.at >
Cc: Rich Felker <dalias@libc.org >
Cc: Russell King <linux@armlinux.org.uk >
Cc: Serge Semin <fancer.lancer@gmail.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Cc: Vineet Gupta <vgupta@synopsys.com >
Cc: Yoshinori Sato <ysato@users.sourceforge.jp >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2018-10-31 08:54:16 -07:00
Mike Rapoport
c6ffc5ca8f
memblock: rename free_all_bootmem to memblock_free_all
...
The conversion is done using
sed -i 's@free_all_bootmem@memblock_free_all@' \
$(git grep -l free_all_bootmem)
Link: http://lkml.kernel.org/r/1536927045-23536-26-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com >
Acked-by: Michal Hocko <mhocko@suse.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Chris Zankel <chris@zankel.net >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Greentime Hu <green.hu@gmail.com >
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Cc: Guan Xuetao <gxt@pku.edu.cn >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org >
Cc: Jonas Bonn <jonas@southpole.se >
Cc: Jonathan Corbet <corbet@lwn.net >
Cc: Ley Foon Tan <lftan@altera.com >
Cc: Mark Salter <msalter@redhat.com >
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@sifive.com >
Cc: Paul Burton <paul.burton@mips.com >
Cc: Richard Kuo <rkuo@codeaurora.org >
Cc: Richard Weinberger <richard@nod.at >
Cc: Rich Felker <dalias@libc.org >
Cc: Russell King <linux@armlinux.org.uk >
Cc: Serge Semin <fancer.lancer@gmail.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Cc: Vineet Gupta <vgupta@synopsys.com >
Cc: Yoshinori Sato <ysato@users.sourceforge.jp >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2018-10-31 08:54:16 -07:00
Mike Rapoport
e8625dce71
memblock: replace alloc_bootmem_low_pages with memblock_alloc_low
...
The alloc_bootmem_low_pages() function allocates PAGE_SIZE aligned regions
from low memory. memblock_alloc_low() with alignment set to PAGE_SIZE does
exactly the same thing.
The conversion is done using the following semantic patch:
@@
expression e;
@@
- alloc_bootmem_low_pages(e)
+ memblock_alloc_low(e, PAGE_SIZE)
Link: http://lkml.kernel.org/r/1536927045-23536-19-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com >
Acked-by: Michal Hocko <mhocko@suse.com >
Cc: Catalin Marinas <catalin.marinas@arm.com >
Cc: Chris Zankel <chris@zankel.net >
Cc: "David S. Miller" <davem@davemloft.net >
Cc: Geert Uytterhoeven <geert@linux-m68k.org >
Cc: Greentime Hu <green.hu@gmail.com >
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org >
Cc: Guan Xuetao <gxt@pku.edu.cn >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org >
Cc: Jonas Bonn <jonas@southpole.se >
Cc: Jonathan Corbet <corbet@lwn.net >
Cc: Ley Foon Tan <lftan@altera.com >
Cc: Mark Salter <msalter@redhat.com >
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com >
Cc: Matt Turner <mattst88@gmail.com >
Cc: Michael Ellerman <mpe@ellerman.id.au >
Cc: Michal Simek <monstr@monstr.eu >
Cc: Palmer Dabbelt <palmer@sifive.com >
Cc: Paul Burton <paul.burton@mips.com >
Cc: Richard Kuo <rkuo@codeaurora.org >
Cc: Richard Weinberger <richard@nod.at >
Cc: Rich Felker <dalias@libc.org >
Cc: Russell King <linux@armlinux.org.uk >
Cc: Serge Semin <fancer.lancer@gmail.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Tony Luck <tony.luck@intel.com >
Cc: Vineet Gupta <vgupta@synopsys.com >
Cc: Yoshinori Sato <ysato@users.sourceforge.jp >
Signed-off-by: Andrew Morton <akpm@linux-foundation.org >
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org >
2018-10-31 08:54:15 -07:00
Alexandre Belloni
2f0b649b3b
MIPS: stop using _PTRS_PER_PGD
...
gcc 3.3 has been retired for a while; use PTRS_PER_PGD and remove the
asm-offsets.h inclusion.
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com >
Signed-off-by: Paul Burton <paul.burton@mips.com >
Patchwork: https://patchwork.linux-mips.org/patch/20814/
Cc: James Hogan <jhogan@kernel.org >
Cc: Ralf Baechle <ralf@linux-mips.org >
Cc: Arnd Bergmann <arnd@arndb.de >
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
2018-09-28 10:09:34 -07:00
Pravin Shedge
2fe69ede3e
MIPS: Remove duplicate includes
...
These duplicate includes have been found with scripts/checkincludes.pl
but they have been removed manually to avoid removing false positives.
Signed-off-by: Pravin Shedge <pravin.shedge4linux@gmail.com >
Cc: Ralf Baechle <ralf@linux-mips.org >
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17920/
Signed-off-by: James Hogan <jhogan@kernel.org >
2018-02-19 10:55:35 +00:00
Steven J. Hill
ac4f59f88a
MIPS: Remove unused variable 'lastpfn'
...
'lastpfn' is never used for anything. Remove it.
Signed-off-by: Steven J. Hill <steven.hill@cavium.com >
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17276/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org >
2017-10-09 14:53:57 +02:00
Paul Burton
2aa7687c3c
MIPS: Include linux/initrd.h for free_initrd_mem()
...
arch/mips/mm/init.c provides our implementation of free_initrd_mem(),
but doesn't include the linux/initrd.h header which declares them. This
leads to a warning from sparse:
arch/mips/mm/init.c:501:6: warning: symbol 'free_initrd_mem' was not
declared. Should it be static?
Fix this by including linux/initrd.h to get the declaration of
free_initrd_mem().
Signed-off-by: Paul Burton <paul.burton@imgtec.com >
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/17172/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org >
2017-08-29 15:21:54 +02:00
Linus Torvalds
ac3c4aa248
Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
...
Pull MIPS updates from James Hogan:
"math-emu:
- Add missing clearing of BLTZALL and BGEZALL emulation counters
- Fix BC1EQZ and BC1NEZ condition handling
- Fix BLEZL and BGTZL identification
BPF:
- Add JIT support for SKF_AD_HATYPE
- Use unsigned access for unsigned SKB fields
- Quit clobbering callee saved registers in JIT code
- Fix multiple problems in JIT skb access helpers
Loongson 3:
- Select MIPS_L1_CACHE_SHIFT_6
Octeon:
- Remove vestiges of CONFIG_CAVIUM_OCTEON_2ND_KERNEL
- Remove unused L2C types and macros.
- Remove unused SLI types and macros.
- Fix compile error when USB is not enabled.
- Octeon: Remove unused PCIERCX types and macros.
- Octeon: Clean up platform code.
SNI:
- Remove recursive include of cpu-feature-overrides.h
Sibyte:
- Export symbol periph_rev to sb1250-mac network driver.
- Fix Kconfig warning.
Generic platform:
- Enable Root FS on NFS in generic_defconfig
SMP-MT:
- Use CPU interrupt controller IPI IRQ domain support
UASM:
- Add support for LHU for uasm.
- Remove needless ISA abstraction
mm:
- Add 48-bit VA space and 4-level page tables for 4K pages.
PCI:
- Add controllers before the specified head
irqchip driver for MIPS CPU:
- Replace magic 0x100 with IE_SW0
- Prepare for non-legacy IRQ domains
- Introduce IPI IRQ domain support
MAINTAINERS:
- Update email-id of Rahul Bedarkar
NET:
- sb1250-mac: Add missing MODULE_LICENSE()
CPUFREQ:
- Loongson2: drop set_cpus_allowed_ptr()
Misc:
- Disable Werror when W= is set
- Opt into HAVE_COPY_THREAD_TLS
- Enable GENERIC_CPU_AUTOPROBE
- Use common outgoing-CPU-notification code
- Remove dead define of ST_OFF
- Remove CONFIG_ARCH_HAS_ILOG2_U{32,64}
- Stengthen IPI IRQ domain sanity check
- Remove confusing else statement in __do_page_fault()
- Don't unnecessarily include kmalloc.h into <asm/cache.h>.
- Delete unused definition of SMP_CACHE_SHIFT.
- Delete redundant definition of SMP_CACHE_BYTES"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (39 commits)
MIPS: Sibyte: Fix Kconfig warning.
MIPS: Sibyte: Export symbol periph_rev to sb1250-mac network driver.
NET: sb1250-mac: Add missing MODULE_LICENSE()
MAINTAINERS: Update email-id of Rahul Bedarkar
MIPS: Remove confusing else statement in __do_page_fault()
MIPS: Stengthen IPI IRQ domain sanity check
MIPS: smp-mt: Use CPU interrupt controller IPI IRQ domain support
irqchip: mips-cpu: Introduce IPI IRQ domain support
irqchip: mips-cpu: Prepare for non-legacy IRQ domains
irqchip: mips-cpu: Replace magic 0x100 with IE_SW0
MIPS: Remove CONFIG_ARCH_HAS_ILOG2_U{32,64}
MIPS: generic: Enable Root FS on NFS in generic_defconfig
MIPS: mach-rm: Remove recursive include of cpu-feature-overrides.h
MIPS: Opt into HAVE_COPY_THREAD_TLS
CPUFREQ: Loongson2: drop set_cpus_allowed_ptr()
MIPS: uasm: Remove needless ISA abstraction
MIPS: Remove dead define of ST_OFF
MIPS: Use common outgoing-CPU-notification code
MIPS: math-emu: Fix BC1EQZ and BC1NEZ condition handling
MIPS: r2-on-r6-emu: Clear BLTZALL and BGEZALL debugfs counters
...
2017-05-12 09:56:30 -07:00
Alex Belits
3377e227af
MIPS: Add 48-bit VA space (and 4-level page tables) for 4K pages.
...
Some users must have 4K pages while needing a 48-bit VA space size.
The cleanest way to do this is to go to a 4-level page table for this
case. Each page table level using order-0 pages adds 9 bits to the
VA size (at 4K pages), so for four levels we get 9 * 4 + 12 == 48 bits.
For the 4K page size case only we add support functions for the PUD
level of the page table tree, also the TLB exception handlers get an
extra level of tree walk.
[david.daney@cavium.com: Forward port to v4.10.]
[david.daney@cavium.com: Forward port to v4.11.]
Signed-off-by: Alex Belits <alex.belits@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Alex Belits <alex.belits@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15312/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org >
2017-04-10 11:56:06 +02:00
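The arithmetic in the commit message is easy to check: with 4K pages each table level resolves 9 bits (512 eight-byte entries per page) on top of the 12-bit page offset; a one-line sketch:

```c
/* VA bits resolvable with 4K pages: 9 bits per table level (512
 * entries per order-0 page) plus 12 bits of in-page offset. */
static unsigned int va_bits(unsigned int levels)
{
	return levels * 9 + 12;
}
```

Four levels give the 48-bit VA space the commit adds; the usual 3-level MIPS64 configuration gives 39.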
James Hogan
f359a11155
MIPS: Separate MAAR V bit into VL and VH for XPA
...
The MAAR V bit has been renamed VL since another bit called VH is added
at the top of the register when it is extended to 64-bits on a 32-bit
processor with XPA. Rename the V definition, fix the various users, and
add definitions for the VH bit. Also add a definition for the MAARI
Index field.
Signed-off-by: James Hogan <james.hogan@imgtec.com >
Acked-by: Ralf Baechle <ralf@linux-mips.org >
Cc: Paul Burton <paul.burton@imgtec.com >
Cc: Paolo Bonzini <pbonzini@redhat.com >
Cc: "Radim Krčmář" <rkrcmar@redhat.com >
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-03-28 14:49:01 +01:00
James Hogan
f700a42008
Merge tag 'mips_kvm_4.11_1' into mips-for-linux-next
...
MIPS dependencies for KVM
Miscellaneous MIPS architecture changes depended on by the MIPS KVM
changes in the KVM tree.
- Move pgd_alloc() out of header.
- Exports so KVM can access page table management and TLBEX functions.
- Add return errors to protected cache ops.
2017-02-13 18:57:31 +00:00
James Hogan
ccf015166d
MIPS: Export pgd/pmd symbols for KVM
...
Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
GPL kernel modules so that MIPS KVM can use the inline page table
management functions and switch between page tables:
- pmd_init() will be used directly by KVM to initialise newly allocated
pmd tables with invalid lower level table pointers.
- invalid_pmd_table is used by pud_present(), pud_none(), and
pud_clear(), which KVM will use to test and clear pud entries.
- tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
to the appropriate GVA page tables.
Signed-off-by: James Hogan <james.hogan@imgtec.com >
Acked-by: Ralf Baechle <ralf@linux-mips.org >
Cc: Ralf Baechle <ralf@linux-mips.org >
Cc: Paolo Bonzini <pbonzini@redhat.com >
Cc: "Radim Krčmář" <rkrcmar@redhat.com >
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2017-02-03 15:18:56 +00:00
Paul Burton
aa4089e6ce
MIPS: Export invalid_pte_table alongside its definition
...
It's unclear to me why this wasn't always the case, but move the
EXPORT_SYMBOL invocation for invalid_pte_table to be alongside its
definition.
Signed-off-by: Paul Burton <paul.burton@imgtec.com >
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14511/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org >
2017-01-03 16:34:49 +01:00