Author: Menglong Dong <menglong8.dong@gmail.com>
Date: Wed Sep 17 14:09:13 2025 +0800
arch: Add the macro COMPILE_OFFSETS to all the asm-offsets.c
[ Upstream commit 35561bab768977c9e05f1f1a9bc00134c85f3e28 ]
The include/generated/asm-offsets.h is generated in Kbuild, during
compilation, from arch/SRCARCH/kernel/asm-offsets.c. When we want to
generate another similar offset header file, a circular dependency can
happen.
For example, say we want to generate an offset file
include/generated/test.h, which is included in include/sched/sched.h. If
we generate asm-offsets.h first, it will fail, as include/sched/sched.h
is included in asm-offsets.c and include/generated/test.h doesn't exist
yet; if we generate test.h first, it can't succeed either, as
include/generated/asm-offsets.h is included by it.
On x86_64, the macro COMPILE_OFFSETS is used to avoid such a circular
dependency: we can generate asm-offsets.h first, and if COMPILE_OFFSETS
is defined, we don't include "generated/test.h".
So define the macro COMPILE_OFFSETS in all the asm-offsets.c files for
this purpose.
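As a sketch of the pattern (paths from the example above; the consumer
header and its guard are illustrative):

    /* arch/SRCARCH/kernel/asm-offsets.c: define the macro first */
    #define COMPILE_OFFSETS

    /* include/sched/sched.h (the consumer from the example): skip the
     * not-yet-generated header while offsets themselves are compiled */
    #ifndef COMPILE_OFFSETS
    #include <generated/test.h>
    #endif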
Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Richard Guy Briggs <rgb@redhat.com>
Date: Wed Aug 6 17:04:07 2025 -0400
audit: record fanotify event regardless of presence of rules
[ Upstream commit ce8370e2e62a903e18be7dd0e0be2eee079501e1 ]
When no audit rules are in place, fanotify event results are
unconditionally dropped due to an explicit check for the existence of
any audit rules. Given this is a report from another security
sub-system, allow it to be recorded regardless of the existence of any
audit rules.
To test, install and run the fapolicyd daemon with default config. Then
as an unprivileged user, create and run a very simple binary that should
be denied. Then check for an event with
ausearch -m FANOTIFY -ts recent
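A minimal sketch of the intent, assuming the gate lives in the
audit_fanotify() wrapper in include/linux/audit.h (the exact replacement
condition is an assumption here):

    static inline void audit_fanotify(u32 response,
                      struct fanotify_response_info_audit_rule *friar)
    {
            /* was: if (!audit_dummy_context()), i.e. rules required */
            if (audit_enabled)
                    __audit_fanotify(response, friar);
    }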
Link: https://issues.redhat.com/browse/RHEL-9065
Signed-off-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Filipe Manana <fdmanana@suse.com>
Date: Wed Sep 3 17:43:04 2025 +0100
btrfs: abort transaction if we fail to update inode in log replay dir fixup
[ Upstream commit 5a0565cad3ef7cbf4cf43d1dd1e849b156205292 ]
If we fail to update the inode at link_to_fixup_dir(), we don't abort
the transaction; we just propagate the error up the call chain, which
makes it hard to pinpoint the error to the inode update. So abort the
transaction if the inode update call fails, so that when it happens we
know immediately.
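The resulting shape is the usual btrfs pattern; a sketch with assumed
local names:

    ret = btrfs_update_inode(trans, inode);
    if (ret)
            btrfs_abort_transaction(trans, ret); /* pinpoints this site */
    return ret;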
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Filipe Manana <fdmanana@suse.com>
Date: Wed Jul 16 15:49:31 2025 +0100
btrfs: abort transaction in the process_one_buffer() log tree walk callback
[ Upstream commit e6dd405b6671b9753b98d8bdf76f8f0ed36c11cd ]
In the process_one_buffer() log tree walk callback we return errors to
the log tree walk caller, and then the caller aborts the transaction, if
we have one, or turns the fs into error state if we don't. While this
reduces code, it makes it harder to figure out where exactly an error
came from. So add transaction aborts after every failure inside the
process_one_buffer() callback, to make it easier to figure out why
failures happen.
Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Filipe Manana <fdmanana@suse.com>
Date: Wed Jul 16 14:56:11 2025 +0100
btrfs: abort transaction on specific error places when walking log tree
[ Upstream commit 6ebd726b104fa99d47c0d45979e6a6109844ac18 ]
We do several things while walking a log tree (for replaying and for
freeing a log tree), like reading extent buffers and cleaning them up,
but we don't immediately abort the transaction, or turn the fs into an
error state, when one of these things fails. Instead we do the
transaction abort, or turn the fs into error state, in the caller of the
entry point function that walks a log tree - walk_log_tree() - which
means we don't get to know exactly where an error came from.
Improve on this by doing a transaction abort / turning the fs into error
state after each such failure, so that when it happens we have a better
understanding of where the failure comes from. This deliberately leaves
the transaction abort / fs error state transition in the callers of
walk_log_tree(), to ensure we don't get into an inconsistent state in
case we forget to do it deeper in the call chain. It also deliberately
does not do it after errors from the calls to the callback defined in
struct walk_control::process_func(), as we will do that later in another
patch.
Reviewed-by: Boris Burkov <boris@bur.io>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Filipe Manana <fdmanana@suse.com>
Date: Wed Aug 27 12:10:28 2025 +0100
btrfs: always drop log root tree reference in btrfs_replay_log()
[ Upstream commit 2f5b8095ea47b142c56c09755a8b1e14145a2d30 ]
Currently we have this odd behaviour:
1) At btrfs_replay_log() we drop the reference of the log root tree if
the call to btrfs_recover_log_trees() failed;
2) But if the call to btrfs_recover_log_trees() did not fail, we don't
drop the reference in btrfs_replay_log() - we expect that
btrfs_recover_log_trees() does it in case it returns success.
Let's simplify this and make btrfs_replay_log() always drop the
reference on the log root tree. Not only does this simplify the code,
it's also what makes sense, since it's btrfs_replay_log() that grabbed
the reference in the first place.
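A sketch of the simplified flow (setup and error handling abridged):

    /* btrfs_replay_log() grabbed the log_tree_root reference above */
    ret = btrfs_recover_log_trees(log_tree_root);
    btrfs_put_root(log_tree_root); /* dropped here on success and failure */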
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Thorsten Blum <thorsten.blum@linux.dev>
Date: Mon Sep 1 17:01:44 2025 +0200
btrfs: scrub: replace max_t()/min_t() with clamp() in scrub_throttle_dev_io()
[ Upstream commit a7f3dfb8293c4cee99743132d69863a92e8f4875 ]
Replace max_t() followed by min_t() with a single clamp().
As pointed out by David Laight in
https://lore.kernel.org/linux-btrfs/20250906122458.75dfc8f0@pumpkin/
the calculation may overflow u32 when the input value is too large, so
clamp_t() is not used. In practice the expected values are in the range
of megabytes to gigabytes (a throughput limit), so the bug would not
happen.
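The shape of the change, as a sketch (variable and bound names assumed):

    div = max_t(u32, 1, div);
    div = min_t(u32, 64, div);

    /* becomes a single call; unlike clamp_t(u32, ...), clamp() keeps
     * the operands' own types instead of force-casting, so a too-wide
     * input is not silently truncated */
    div = clamp(div, 1, 64);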
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: David Sterba <dsterba@suse.com>
[ Use clamp() and add explanation. ]
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Qu Wenruo <wqu@suse.com>
Date: Tue Sep 16 08:34:05 2025 +0930
btrfs: tree-checker: add inode extref checks
[ Upstream commit aab9458b9f0019e97fae394c2d6d9d1a03addfb3 ]
Like inode refs, inode extrefs have a variable length name, which means
we have to do a proper check to make sure neither the header nor the
name can exceed the item limits.
The check itself is very similar to check_inode_ref(), just a different
structure (btrfs_inode_extref vs btrfs_inode_ref).
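A sketch of the check's shape, mirroring check_inode_ref() (variable
names and error paths illustrative):

    unsigned long ptr = btrfs_item_ptr_offset(leaf, slot);
    unsigned long end = ptr + btrfs_item_size(leaf, slot);

    while (ptr < end) {
            struct btrfs_inode_extref *extref;
            u16 namelen;

            /* the fixed header must fit inside the item */
            if (unlikely(ptr + sizeof(*extref) > end))
                    return -EUCLEAN;
            extref = (struct btrfs_inode_extref *)ptr;
            namelen = btrfs_inode_extref_name_len(leaf, extref);
            /* ... and so must the variable length name behind it */
            if (unlikely(ptr + sizeof(*extref) + namelen > end))
                    return -EUCLEAN;
            ptr += sizeof(*extref) + namelen;
    }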
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Dan Carpenter <dan.carpenter@linaro.org>
Date: Wed Oct 8 18:08:58 2025 +0300
btrfs: tree-checker: fix bounds check in check_inode_extref()
commit e92c2941204de7b62e9c2deecfeb9eaefe54a22a upstream.
The parentheses for the unlikely() annotation were put in the wrong
place, which means the condition is basically never true and the bounds
checking is skipped.
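Illustrated with hypothetical names: unlikely() collapses its argument
to 0 or 1, so with the misplaced parenthesis the left-hand side of the
comparison is at most 1 and the check almost never fires.

    if (unlikely(ptr + namelen) > end)   /* buggy: compares 0/1 vs end */
    if (unlikely(ptr + namelen > end))   /* fixed: condition inside */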
Fixes: aab9458b9f00 ("btrfs: tree-checker: add inode extref checks")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Author: Filipe Manana <fdmanana@suse.com>
Date: Thu Aug 28 17:46:18 2025 +0100
btrfs: use level argument in log tree walk callback replay_one_buffer()
[ Upstream commit 6cb7f0b8c9b0d6a35682335fea88bd26f089306f ]
We already have the extent buffer's level in an argument, so there's no
need to first ensure the extent buffer's data is loaded (by calling
btrfs_read_extent_buffer()) and only then call btrfs_header_level() to
check the level. So use the level argument and do the check before
calling btrfs_read_extent_buffer().
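A sketch of the reordered callback (surrounding logic abridged; the
check argument is assumed):

    /* use the level argument up front; only leaves are replayed */
    if (level != 0)
            return 0;
    /* load the extent buffer's data only when we actually need it */
    ret = btrfs_read_extent_buffer(eb, &check);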
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Filipe Manana <fdmanana@suse.com>
Date: Mon Sep 22 12:09:14 2025 +0100
btrfs: use smp_mb__after_atomic() when forcing COW in create_pending_snapshot()
[ Upstream commit 45c222468d33202c07c41c113301a4b9c8451b8f ]
After setting the BTRFS_ROOT_FORCE_COW flag on the root we do a full
write barrier, smp_wmb(), but we don't need to - all we need is
smp_mb__after_atomic(). The use of smp_wmb() dates from the old days
when we didn't use a bit and instead used an int field in the root to
signal if cow is forced. After the int field was changed to a bit in the
root's state (flags field), we forgot to update the memory barrier in
create_pending_snapshot() to smp_mb__after_atomic(), but we did the
change in commit_fs_roots() after clearing BTRFS_ROOT_FORCE_COW. That
happened in commit 27cdeb7096b8 ("Btrfs: use bitfield instead of integer
data type for the some variants in btrfs_root"). On the reader side, in
should_cow_block(), we also use the counterpart smp_mb__before_atomic(),
which generates further confusion.
So change the smp_wmb() to smp_mb__after_atomic(). In fact we don't
even need any barrier at all, since create_pending_snapshot() is called
in the critical section of a transaction commit, and therefore no one
can concurrently join/attach the transaction, or start a new one, until
the transaction is unblocked. By the time someone starts a new
transaction and enters should_cow_block(), a lot of implicit memory
barriers have already taken place through the acquisition of several
locks, such as fs_info->trans_lock and extent buffer locks on the root
node at least. Nevertheless, for consistency use smp_mb__after_atomic()
after setting the force cow bit in create_pending_snapshot().
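The resulting pairing, as a sketch:

    /* writer, create_pending_snapshot() */
    set_bit(BTRFS_ROOT_FORCE_COW, &root->state);
    smp_mb__after_atomic();                 /* was smp_wmb() */

    /* reader, should_cow_block() */
    smp_mb__before_atomic();
    if (test_bit(BTRFS_ROOT_FORCE_COW, &root->state))
            return true;                    /* must COW this block */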
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Naohiro Aota <naohiro.aota@wdc.com>
Date: Wed Jul 16 11:13:15 2025 +0900
btrfs: zoned: refine extent allocator hint selection
[ Upstream commit 0d703963d297964451783e1a0688ebdf74cd6151 ]
The hint block group selection in the extent allocator is wrong in the
first place, as it can select the dedicated data relocation block group
for normal data allocations.
Since we separated the normal data space_info and the data relocation
space_info, we can easily identify whether a block group is for data
relocation. Do not choose it for normal data allocations.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Date: Tue Jul 22 13:39:11 2025 +0200
btrfs: zoned: return error from btrfs_zone_finish_endio()
[ Upstream commit 3c44cd3c79fcb38a86836dea6ff8fec322a9e68c ]
Now that btrfs_zone_finish_endio_workfn() is directly calling
do_zone_finish() the only caller of btrfs_zone_finish_endio() is
btrfs_finish_one_ordered().
btrfs_finish_one_ordered() already has error handling in place, so
btrfs_zone_finish_endio() can return an error if the block group lookup
fails.
Also, as btrfs_zone_finish_endio() already checks for zoned filesystems
and returns early, there's no need to do this in the caller.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Chen Ridong <chenridong@huawei.com>
Date: Fri Sep 19 01:12:27 2025 +0000
cpuset: Use new excpus for nocpu error check when enabling root partition
[ Upstream commit 59d5de3655698679ad8fd2cc82228de4679c4263 ]
A previous patch fixed a bug where new_prs should be assigned before
checking housekeeping conflicts. This patch addresses another potential
issue: the nocpu error check currently uses xcpus, which has not been
updated. Although no issue has been observed so far, the check should be
performed using the new effective exclusive cpus.
The comment has been removed because the function returns an error if
the nocpu check fails, which is unrelated to the parent.
Signed-off-by: Chen Ridong <chenridong@huawei.com>
Reviewed-by: Waiman Long <longman@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Kyle Manna <kyle@kylemanna.com>
Date: Tue Aug 19 09:17:39 2025 -0700
EDAC/ie31200: Add two more Intel Alder Lake-S SoCs for EDAC support
[ Upstream commit 71b69f817e91b588030d7d47ddbdc4857a92eb4e ]
Host Device IDs (DID0) correspond to:
* Intel Core i7-12700K
* Intel Core i5-12600K
See documentation:
* 12th Generation Intel® Core™ Processors Datasheet
* Volume 1 of 2, Doc. No.: 655258, Rev.: 011
* https://edc.intel.com/output/DownloadPdfDocument?id=8297 (PDF)
Signed-off-by: Kyle Manna <kyle@kylemanna.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Link: https://lore.kernel.org/r/20250819161739.3241152-1-kyle@kylemanna.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Avadhut Naik <avadhut.naik@amd.com>
Date: Tue Sep 16 20:30:17 2025 +0000
EDAC/mc_sysfs: Increase legacy channel support to 16
[ Upstream commit 6e1c2c6c2c40ce99e0d2633b212f43c702c1a002 ]
Newer AMD systems can support up to 16 channels per EDAC "mc" device.
These are detected by the EDAC module running on the device, and the
current EDAC interface is appropriately enumerated.
The legacy EDAC sysfs interface, however, provides device attributes
for channels 0 through 11 only. Consequently, the last four channels, 12
through 15, will not be enumerated and will not be visible through the
legacy sysfs interface.
Add additional device attributes to ensure that all 16 channels, if
present, are enumerated by and visible through the legacy EDAC sysfs
interface.
Signed-off-by: Avadhut Naik <avadhut.naik@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250916203242.1281036-1-avadhut.naik@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Kuan-Wei Chiu <visitorckw@gmail.com>
Date: Fri Aug 29 03:19:54 2025 +0800
EDAC: Fix wrong executable file modes for C source files
[ Upstream commit 71965cae7db394ff5ba3b2d2befe4e136ceec268 ]
Three EDAC source files were mistakenly marked as executable when adding the
EDAC scrub controls.
These are plain C source files and should not carry the executable bit.
Correcting their modes follows the principle of least privilege and avoids
unnecessary execute permissions in the repository.
[ bp: Massage commit message. ]
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250828191954.903125-1-visitorckw@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Charles Keepax <ckeepax@opensource.cirrus.com>
Date: Thu Oct 23 16:48:59 2025 +0100
genirq/chip: Add buslock back in to irq_set_handler()
[ Upstream commit 5d7e45dd670e42df4836afeaa9baf9d41ca4b434 ]
The locking was changed from a buslock to a plain lock, but the patch
description states there was no functional change. Assuming this was
accidental, revert to using the buslock.
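For reference, a sketch of the two helper pairs involved (as declared
in kernel/irq/internals.h):

    /* plain lock: only the descriptor's raw spinlock */
    desc = irq_get_desc_lock(irq, &flags, 0);
    irq_put_desc_unlock(desc, flags);

    /* buslock: additionally wraps chip_bus_lock()/chip_bus_sync_unlock(),
     * needed for irq chips sitting behind slow buses such as I2C */
    desc = irq_get_desc_buslock(irq, &flags, 0);
    irq_put_desc_busunlock(desc, flags);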
Fixes: 5cd05f3e2315 ("genirq/chip: Rework irq_set_handler() variants")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251023154901.1333755-2-ckeepax@opensource.cirrus.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Charles Keepax <ckeepax@opensource.cirrus.com>
Date: Thu Oct 23 16:49:00 2025 +0100
genirq/manage: Add buslock back in to __disable_irq_nosync()
[ Upstream commit 56363e25f79fe83e63039c5595b8cd9814173d37 ]
The locking was changed from a buslock to a plain lock, but the patch
description states there was no functional change. Assuming this was
accidental, revert to using the buslock.
Fixes: 1b7444446724 ("genirq/manage: Rework __disable_irq_nosync()")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251023154901.1333755-3-ckeepax@opensource.cirrus.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Charles Keepax <ckeepax@opensource.cirrus.com>
Date: Thu Oct 23 16:49:01 2025 +0100
genirq/manage: Add buslock back in to enable_irq()
[ Upstream commit ef3330b99c01bda53f2a189b58bed8f6b7397f28 ]
The locking was changed from a buslock to a plain lock, but the patch
description states there was no functional change. Assuming this was
accidental, revert to using the buslock.
Fixes: bddd10c55407 ("genirq/manage: Rework enable_irq()")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251023154901.1333755-4-ckeepax@opensource.cirrus.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Date: Sun Nov 2 22:18:05 2025 +0900
Linux 6.17.7
Link: https://lore.kernel.org/r/20251031140043.564670400@linuxfoundation.org
Tested-by: Ronald Warsow <rwarsow@gmx.de>
Tested-by: Peter Schneider <pschneider1968@googlemail.com>
Tested-by: Dileep Malepu <dileep.debian@gmail.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Pavel Machek (CIP) <pavel@denx.de>
Tested-by: Shuah Khan <skhan@linuxfoundation.org>
Tested-by: Achill Gilgenast <achill@achill.org>
Tested-by: Justin M. Forbes <jforbes@fedoraproject.org>
Tested-by: Linux Kernel Functional Testing <lkft@linaro.org>
Tested-by: Jeffrin Jose T <jeffrin@rajagiritech.edu.in>
Tested-by: Ron Economos <re@w6rz.net>
Tested-by: Brett A C Sheffield <bacs@librecast.net>
Tested-by: Miguel Ojeda <ojeda@kernel.org>
Tested-by: Takeshi Ogasawara <takeshi.ogasawara@futuring-girl.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Author: Dapeng Mi <dapeng1.mi@linux.intel.com>
Date: Wed Aug 20 10:30:31 2025 +0800
perf/x86/intel: Add ICL_FIXED_0_ADAPTIVE bit into INTEL_FIXED_BITS_MASK
[ Upstream commit 2676dbf9f4fb7f6739d1207c0f1deaf63124642a ]
ICL_FIXED_0_ADAPTIVE is missing from INTEL_FIXED_BITS_MASK, so add it.
With the help of this new INTEL_FIXED_BITS_MASK, intel_pmu_enable_fixed()
can be optimized: the old fixed counter control bits can be
unconditionally cleared with INTEL_FIXED_BITS_MASK, and the new control
bits then set based on the new configuration.
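A sketch of the resulting read-modify-write (each fixed counter owns a
4-bit control field in MSR_CORE_PERF_FIXED_CTR_CTRL; variable names
assumed):

    rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl);
    ctrl &= ~((u64)INTEL_FIXED_BITS_MASK << (idx * 4)); /* clear old */
    ctrl |= bits << (idx * 4);                          /* set new */
    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl);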
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Link: https://lore.kernel.org/r/20250820023032.17128-7-dapeng1.mi@linux.intel.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Josh Poimboeuf <jpoimboe@kernel.org>
Date: Wed Aug 20 14:03:40 2025 -0400
perf: Have get_perf_callchain() return NULL if crosstask and user are set
[ Upstream commit 153f9e74dec230f2e070e16fa061bc7adfd2c450 ]
get_perf_callchain() doesn't support cross-task unwinding for user
space stacks, so have it return NULL if both the crosstask and user
arguments are set.
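A sketch of the added guard:

    /* cross-task unwinding of user stacks is unsupported */
    if (crosstask && user)
            return NULL;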
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250820180428.426423415@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Josh Poimboeuf <jpoimboe@kernel.org>
Date: Wed Aug 20 14:03:43 2025 -0400
perf: Skip user unwind if the task is a kernel thread
[ Upstream commit 16ed389227651330879e17bd83d43bd234006722 ]
If the task is not a user thread, there's no user stack to unwind.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250820180428.930791978@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Steven Rostedt <rostedt@goodmis.org>
Date: Wed Aug 20 14:03:41 2025 -0400
perf: Use current->flags & PF_KTHREAD|PF_USER_WORKER instead of current->mm == NULL
[ Upstream commit 90942f9fac05702065ff82ed0bade0d08168d4ea ]
To determine if a task is a kernel thread or not, it is more reliable
to use (current->flags & (PF_KTHREAD|PF_USER_WORKER)) than to rely on
current->mm being NULL. That is because some kernel tasks (io_uring
helpers) may have an mm field.
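Side by side, as a sketch:

    /* less reliable: io_uring helper threads have a non-NULL mm */
    if (!current->mm)
            return;

    /* preferred: the flags identify kernel threads and user workers */
    if (current->flags & (PF_KTHREAD | PF_USER_WORKER))
            return;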
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250820180428.592367294@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Tejun Heo <tj@kernel.org>
Date: Wed Sep 3 11:33:28 2025 -1000
sched_ext: Keep bypass on between enable failure and scx_disable_workfn()
[ Upstream commit 4a1d9d73aabc8f97f48c4f84f936de3b265ffd6f ]
scx_enable() turns on the bypass mode while enable is in progress. If
enabling fails, it turns off the bypass mode and then triggers scx_error().
scx_error() will trigger scx_disable_workfn() which will turn on the bypass
mode again and unload the failed scheduler.
This moves the system out of bypass mode between the enable error path and
the disable path, which is unnecessary and can be brittle - e.g. the thread
running scx_enable() may already be on the failed scheduler and can be
switched out before it triggers scx_error() leading to a stall. The watchdog
would eventually kick in, so the situation isn't critical but is still
suboptimal.
There is nothing to be gained by turning off the bypass mode between
scx_enable() failure and scx_disable_workfn(). Keep bypass on.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Tejun Heo <tj@kernel.org>
Date: Tue Sep 23 09:03:26 2025 -1000
sched_ext: Make qmap dump operation non-destructive
[ Upstream commit d452972858e5cfa4262320ab74fe8f016460b96f ]
The qmap dump operation was destructively consuming queue entries while
displaying them. As dump can be triggered anytime, this can easily lead to
stalls. Add a temporary dump_store queue and modify the dump logic to pop
entries, display them, and then restore them back to the original queue.
This allows dump operations to be performed without affecting the
scheduler's queue state.
Note that if racing against new enqueues during dump, ordering can get
mixed up, but this is acceptable for debugging purposes.
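A sketch of the new dump loop in BPF C, modeled on the qmap example
scheduler (map names and element type assumed):

    u32 pid;

    /* pop each entry, print it, park it in the temporary queue */
    while (bpf_map_pop_elem(q, &pid) == 0) {
            scx_bpf_dump("%u ", pid);
            bpf_map_push_elem(&dump_store, &pid, 0);
    }
    /* then drain the temporary queue back into the original one */
    while (bpf_map_pop_elem(&dump_store, &pid) == 0)
            bpf_map_push_elem(q, &pid, 0);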
Acked-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Tejun Heo <tj@kernel.org>
Date: Wed Sep 3 11:33:28 2025 -1000
sched_ext: Move internal type and accessor definitions to ext_internal.h
[ Upstream commit 0c2b8356e430229efef42b03bd765a2a7ecf73fd ]
There currently isn't a place to put SCX-internal types and accessors
to be shared between ext.c and ext_idle.c. Create
kernel/sched/ext_internal.h
and move internal type and accessor definitions there. This trims ext.c a
bit and makes future additions easier. Pure code reorganization. No
functional changes.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Andrea Righi <arighi@nvidia.com>
Stable-dep-of: efeeaac9ae97 ("sched_ext: Sync error_irq_work before freeing scx_sched")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Tejun Heo <tj@kernel.org>
Date: Wed Sep 3 11:33:28 2025 -1000
sched_ext: Put event_stats_cpu in struct scx_sched_pcpu
[ Upstream commit bcb7c2305682c77a8bfdbfe37106b314ac10110f ]
scx_sched.event_stats_cpu holds the percpu counters that are used to
track stats. Introduce struct scx_sched_pcpu and move the counters
inside. This will ease adding more per-cpu fields. No functional
changes.
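A sketch of the new layout (field names assumed):

    struct scx_sched_pcpu {
            struct scx_event_stats  event_stats; /* was event_stats_cpu */
    };

    struct scx_sched {
            /* ... */
            struct scx_sched_pcpu __percpu *pcpu;
    };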
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Andrea Righi <arighi@nvidia.com>
Stable-dep-of: efeeaac9ae97 ("sched_ext: Sync error_irq_work before freeing scx_sched")
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Tejun Heo <tj@kernel.org>
Date: Thu Oct 9 13:56:23 2025 -1000
sched_ext: Sync error_irq_work before freeing scx_sched
[ Upstream commit efeeaac9ae9763f9c953e69633c86bc3031e39b5 ]
By the time scx_sched_free_rcu_work() runs, the scx_sched is no longer
reachable. However, a previously queued error_irq_work may still be pending or
running. Ensure it completes before proceeding with teardown.
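A sketch of the fix, placed at the top of scx_sched_free_rcu_work()
(field name assumed):

    /* wait until a previously queued error_irq_work has finished */
    irq_work_sync(&sch->error_irq_work);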
Fixes: bff3b5aec1b7 ("sched_ext: Move disable machinery into scx_sched")
Acked-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Jiri Olsa <jolsa@kernel.org>
Date: Sun Jul 20 13:21:30 2025 +0200
seccomp: passthrough uprobe systemcall without filtering
[ Upstream commit 89d1d8434d246c96309a6068dfcf9e36dc61227b ]
Add uprobe as another exception to the seccomp filter, alongside the
uretprobe syscall.
Like uretprobe, the uprobe syscall is installed by the kernel as a
replacement for the breakpoint exception; it is limited to the x86_64
arch and isn't expected to ever be supported on i386.
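A sketch of the exception, assuming it takes the same shape as the
existing uretprobe check in kernel/seccomp.c:

    switch (this_syscall) {
    #ifdef __NR_uretprobe
    case __NR_uretprobe:
    #endif
    #ifdef __NR_uprobe
    case __NR_uprobe:               /* new: treated like uretprobe */
    #endif
            return 0;               /* passed through without filtering */
    }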
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://lore.kernel.org/r/20250720112133.244369-21-jolsa@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: Haofeng Li <lihaofeng@kylinos.cn>
Date: Wed Oct 15 14:17:53 2025 +0800
timekeeping: Fix aux clocks sysfs initialization loop bound
[ Upstream commit 39a9ed0fb6dac58547afdf9b6cb032d326a3698f ]
The loop in tk_aux_sysfs_init() uses `i <= MAX_AUX_CLOCKS` as the
termination condition, which results in 9 iterations (i=0 to 8) when
MAX_AUX_CLOCKS is defined as 8. However, the kernel is designed to support
only up to 8 auxiliary clocks.
This off-by-one error causes the creation of a 9th sysfs entry that exceeds
the intended auxiliary clock range.
Fix the loop bound to use `i < MAX_AUX_CLOCKS` to ensure exactly 8
auxiliary clock entries are created, matching the design specification.
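The change, in short:

    for (i = 0; i <= MAX_AUX_CLOCKS; i++)  /* before: 9 entries (0..8) */
    for (i = 0; i <  MAX_AUX_CLOCKS; i++)  /* after: exactly 8 (0..7) */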
Fixes: 7b95663a3d96 ("timekeeping: Provide interface to control auxiliary clocks")
Signed-off-by: Haofeng Li <lihaofeng@kylinos.cn>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/tencent_2376993D9FC06A3616A4F981B3DE1C599607@qq.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: David Kaplan <david.kaplan@amd.com>
Date: Fri Sep 12 10:24:28 2025 -0500
x86/bugs: Add attack vector controls for VMSCAPE
[ Upstream commit 5799d5d8a6c877f03ad5b5a640977053be45059a ]
Use attack vector controls to select whether VMSCAPE requires mitigation,
similar to other bugs.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: David Kaplan <david.kaplan@amd.com>
Date: Mon Sep 15 08:47:05 2025 -0500
x86/bugs: Fix reporting of LFENCE retpoline
[ Upstream commit d1cc1baef67ac6c09b74629ca053bf3fb812f7dc ]
The LFENCE retpoline mitigation is not secure, but the kernel prints
inconsistent messages about this fact. The dmesg log says 'Mitigation:
LFENCE', implying the system is mitigated. But sysfs reports
'Vulnerable: LFENCE', implying the system (correctly) is not mitigated.
Fix this by printing a consistent 'Vulnerable: LFENCE' string everywhere
when this mitigation is selected.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250915134706.3201818-1-david.kaplan@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: David Kaplan <david.kaplan@amd.com>
Date: Fri Oct 3 12:19:36 2025 -0500
x86/bugs: Qualify RETBLEED_INTEL_MSG
[ Upstream commit 204ced4108f5d38f6804968fd9543cc69c3f8da6 ]
When retbleed mitigation is disabled, the kernel already prints an info
message that the system is vulnerable. Recent code restructuring also
inadvertently led to RETBLEED_INTEL_MSG being printed as an error, which is
unnecessary as retbleed mitigation was already explicitly disabled (by config
option, cmdline, etc.).
Qualify this print statement so the warning is not printed unless an actual
retbleed mitigation was selected and is being disabled due to incompatibility
with spectre_v2.
Fixes: e3b78a7ad5ea ("x86/bugs: Restructure retbleed mitigation")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220624
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://patch.msgid.link/20251003171936.155391-1-david.kaplan@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Author: David Kaplan <david.kaplan@amd.com>
Date: Mon Sep 15 08:47:06 2025 -0500
x86/bugs: Report correct retbleed mitigation status
[ Upstream commit 930f2361fe542a00de9ce6070b1b6edb976f1165 ]
On Intel CPUs, the default retbleed mitigation is IBRS/eIBRS but this
requires that a similar spectre_v2 mitigation is applied. If the user
selects a different spectre_v2 mitigation (like spectre_v2=retpoline) a
warning is printed but sysfs will still report 'Mitigation: IBRS' or
'Mitigation: Enhanced IBRS'. This is incorrect because retbleed is not
mitigated, and IBRS is not actually set.
Fix this by choosing RETBLEED_MITIGATION_NONE in this scenario so the
kernel correctly reports the system as vulnerable to retbleed.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250915134706.3201818-1-david.kaplan@amd.com
Stable-dep-of: 204ced4108f5 ("x86/bugs: Qualify RETBLEED_INTEL_MSG")
Signed-off-by: Sasha Levin <sashal@kernel.org>