kernel: drop the 'MIPS: fix cache flushing for highmem pages' patch

This patch, in a variety of forms, has been around since the beginning of
2016 as e756c2bb07, ended up in its present form with 0aa6c7df60 (the
kernel 4.4.13 bump) and has been carried forward ever since.

There have been a number of MIPS kernel memory-handling changes since
then, including VDSO fixes that allowed other OpenWrt patches to be
dropped with no apparent fallout.

Simple tests (ntfs-3g) on a HIGHMEM 512MB mt7621 device have not turned
up the data corruption issues that would otherwise be expected.  Similarly,
running on other MIPS-based devices for the past 2 months hasn't turned
up anything obvious that would justify retaining this out-of-tree patch.

With thanks to Rosen Penev for testing on the known 'highmem' device and
to Felix Fietkau for testing advice.  Not adding an Acked-by as it's my
fault if it breaks :-)

Signed-off-by: Kevin Darbyshire-Bryant <ldir@darbyshire-bryant.me.uk>
parent c0248183a4
commit 8ff0dd57bf

@@ -1,30 +0,0 @@
From: Felix Fietkau <nbd@nbd.name>
Subject: MIPS: fix cache flushing for highmem pages
Most cache flush ops were no-op for highmem pages. This led to nasty
segfaults and (in the case of page_address(page) == NULL) kernel
crashes.
Fix this by always flushing highmem pages using kmap/kunmap_atomic
around the actual cache flush. This might be a bit inefficient, but at
least it's stable.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
---
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -116,6 +116,13 @@ void __flush_anon_page(struct page *page
 {
 	unsigned long addr = (unsigned long) page_address(page);
 
+	if (PageHighMem(page)) {
+		addr = (unsigned long)kmap_atomic(page);
+		flush_data_cache_page(addr);
+		__kunmap_atomic((void *)addr);
+		return;
+	}
+
 	if (pages_do_alias(addr, vmaddr)) {
 		if (page_mapcount(page) && !Page_dcache_dirty(page)) {
 			void *kaddr;
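
For anyone reviewing the drop, here is a rough sketch of what
__flush_anon_page() looked like with the patch applied, reconstructed
from the hunk above.  The includes and the body of the aliasing branch
are my own assumptions based on 4.x-era arch/mips/mm/cache.c and are
not taken from this commit.

/*
 * Sketch only: __flush_anon_page() with the now-dropped highmem
 * short-circuit applied.  Context is assumed, not verified against
 * any specific tree.
 */
#include <linux/mm.h>
#include <linux/highmem.h>
#include <asm/cacheflush.h>

void __flush_anon_page(struct page *page, unsigned long vmaddr)
{
	unsigned long addr = (unsigned long) page_address(page);

	if (PageHighMem(page)) {
		/*
		 * Highmem pages have no permanent kernel mapping, so
		 * page_address() can return NULL here.  The dropped
		 * patch set up a temporary mapping, flushed the dcache
		 * through it, then tore the mapping down again.
		 */
		addr = (unsigned long)kmap_atomic(page);
		flush_data_cache_page(addr);
		__kunmap_atomic((void *)addr);
		return;
	}

	if (pages_do_alias(addr, vmaddr)) {
		/* mainline aliasing handling continues unchanged ... */
	}
}

kmap_atomic() pins the page into a temporary per-CPU mapping, which is
why the flush has to happen before __kunmap_atomic() releases it.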
