List of changes in Linux 5.10.194

 
arm64: module-plts: inline linux/moduleloader.h [+ + +]
Author: Arnd Bergmann <[email protected]>
Date:   Tue May 16 18:06:37 2023 +0200

    arm64: module-plts: inline linux/moduleloader.h
    
    commit 60a0aab7463ee69296692d980b96510ccce3934e upstream.
    
    module_frob_arch_sections() is declared in moduleloader.h, but
    that is not included before the definition:
    
    arch/arm64/kernel/module-plts.c:286:5: error: no previous prototype for 'module_frob_arch_sections' [-Werror=missing-prototypes]
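
    The fix boils down to adding the missing include at the top of
    arch/arm64/kernel/module-plts.c, roughly:

        #include <linux/moduleloader.h>   /* declares module_frob_arch_sections() */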
    
    Signed-off-by: Arnd Bergmann <[email protected]>
    Reviewed-by: Kees Cook <[email protected]>
    Acked-by: Ard Biesheuvel <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Catalin Marinas <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

arm64: module: Use module_init_layout_section() to spot init sections [+ + +]
Author: James Morse <[email protected]>
Date:   Tue Aug 1 14:54:08 2023 +0000

    arm64: module: Use module_init_layout_section() to spot init sections
    
    commit f928f8b1a2496e7af95b860f9acf553f20f68f16 upstream.
    
    Today module_frob_arch_sections() spots init sections from their
    'init' prefix, and uses this to keep the init PLTs separate from the rest.
    
    module_emit_plt_entry() uses within_module_init() to determine if a
    location is in the init text or not, but this depends on whether
    core code thought this was an init section.
    
    Naturally the logic is different.
    
    module_init_layout_section() groups the init and exit text together if
    module unloading is disabled, as the exit code will never run. The result
    is that kernels with this configuration can't load all their modules because
    there are not enough PLTs for the combined init+exit section.
    
    This results in the following:
    | WARNING: CPU: 2 PID: 51 at arch/arm64/kernel/module-plts.c:99 module_emit_plt_entry+0x184/0x1cc
    | Modules linked in: crct10dif_common
    | CPU: 2 PID: 51 Comm: modprobe Not tainted 6.5.0-rc4-yocto-standard-dirty #15208
    | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
    | pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    | pc : module_emit_plt_entry+0x184/0x1cc
    | lr : module_emit_plt_entry+0x94/0x1cc
    | sp : ffffffc0803bba60
    [...]
    | Call trace:
    |  module_emit_plt_entry+0x184/0x1cc
    |  apply_relocate_add+0x2bc/0x8e4
    |  load_module+0xe34/0x1bd4
    |  init_module_from_file+0x84/0xc0
    |  __arm64_sys_finit_module+0x1b8/0x27c
    |  invoke_syscall.constprop.0+0x5c/0x104
    |  do_el0_svc+0x58/0x160
    |  el0_svc+0x38/0x110
    |  el0t_64_sync_handler+0xc0/0xc4
    |  el0t_64_sync+0x190/0x194
    
    A previous patch exposed module_init_layout_section(), use that so the
    logic is the same.
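
    Concretely, the PLT-counting loop in module_frob_arch_sections() stops keying
    off the raw ".init" name prefix and instead asks the core loader; a rough
    sketch of the shape of the change (not the verbatim diff):

        if (!module_init_layout_section(secstrings + dstsec->sh_name)) {
                /* was:  !strstarts(secstrings + dstsec->sh_name, ".init") */
                /* ... count the relocations toward the core PLT section ... */
        } else {
                /* ... count them toward the init PLT section ... */
        }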
    
    Reported-by: Adam Johnston <[email protected]>
    Tested-by: Adam Johnston <[email protected]>
    Fixes: 055f23b74b20 ("module: check for exit sections in layout_sections() instead of module_init_section()")
    Cc: <[email protected]> # 5.15.x: 60a0aab7463ee69 arm64: module-plts: inline linux/moduleloader.h
    Cc: <[email protected]> # 5.15.x
    Signed-off-by: James Morse <[email protected]>
    Acked-by: Catalin Marinas <[email protected]>
    Signed-off-by: Luis Chamberlain <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
ARM: module: Use module_init_layout_section() to spot init sections [+ + +]
Author: James Morse <[email protected]>
Date:   Tue Aug 1 14:54:09 2023 +0000

    ARM: module: Use module_init_layout_section() to spot init sections
    
    commit a6846234f45801441f0e31a8b37f901ef0abd2df upstream.
    
    Today module_frob_arch_sections() spots init sections from their
    'init' prefix, and uses this to keep the init PLTs separate from the rest.
    
    get_module_plt() uses within_module_init() to determine if a
    location is in the init text or not, but this depends on whether
    core code thought this was an init section.
    
    Naturally the logic is different.
    
    module_init_layout_section() groups the init and exit text together if
    module unloading is disabled, as the exit code will never run. The result
    is that kernels with this configuration can't load all their modules because
    there are not enough PLTs for the combined init+exit section.
    
    A previous patch exposed module_init_layout_section(), use that so the
    logic is the same.
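
    For reference, the core helper that get_module_plt() relies on is just a
    range check against the module's init layout, roughly (simplified from
    include/linux/module.h):

        static inline bool within_module_init(unsigned long addr,
                                              const struct module *mod)
        {
                return addr >= (unsigned long)mod->init_layout.base &&
                       addr <  (unsigned long)mod->init_layout.base +
                                               mod->init_layout.size;
        }

    Whatever the core loader groups into init_layout is therefore what the PLT
    code must have budgeted for, which is why both sides now use
    module_init_layout_section().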
    
    Fixes: 055f23b74b20 ("module: check for exit sections in layout_sections() instead of module_init_section()")
    Cc: [email protected]
    Signed-off-by: James Morse <[email protected]>
    Signed-off-by: Luis Chamberlain <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
Linux: Linux 5.10.194 [+ + +]
Author: Greg Kroah-Hartman <[email protected]>
Date:   Sat Sep 2 09:18:14 2023 +0200

    Linux 5.10.194
    
    Link: https://lore.kernel.org/r/[email protected]
    Tested-by: Florian Fainelli <[email protected]>
    Tested-by: Sudip Mukherjee <[email protected]>
    Tested-by: Linux Kernel Functional Testing <[email protected]>
    Tested-by: Shuah Khan <[email protected]>
    Tested-by: Jon Hunter <[email protected]>
    Tested-by: Guenter Roeck <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
mhi: pci_generic: Fix implicit conversion warning [+ + +]
Author: Loic Poulain <[email protected]>
Date:   Wed Dec 2 09:12:30 2020 +0100

    mhi: pci_generic: Fix implicit conversion warning
    
    commit 4ea6fa2cb921cb17812501a27c3761037d64a217 upstream.
    
    Fix the following warning with explicit cast:
    
    warning: implicit conversion from 'unsigned long long' to
    'dma_addr_t' (aka 'unsigned int')
    mhi_cntrl->iova_stop = DMA_BIT_MASK(info->dma_data_width);
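
    The fix is an explicit cast so that the truncation on configurations with a
    32-bit dma_addr_t is intentional rather than implicit; roughly:

        mhi_cntrl->iova_stop = (dma_addr_t)DMA_BIT_MASK(info->dma_data_width);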
    
    Fixes: 855a70c12021 ("bus: mhi: Add MHI PCI support for WWAN modems")
    Signed-off-by: Loic Poulain <[email protected]>
    Reported-by: kernel test robot <[email protected]>
    Reviewed-by: Nathan Chancellor <[email protected]>
    Reviewed-by: Manivannan Sadhasivam <[email protected]>
    Signed-off-by: Manivannan Sadhasivam <[email protected]>
    Cc: Guenter Roeck <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
module: Expose module_init_layout_section() [+ + +]
Author: James Morse <[email protected]>
Date:   Tue Aug 1 14:54:07 2023 +0000

    module: Expose module_init_layout_section()
    
    commit 2abcc4b5a64a65a2d2287ba0be5c2871c1552416 upstream.
    
    module_init_layout_section() chooses whether the core module loader
    considers a section as init or not. This affects the placement of the
    exit section when module unloading is disabled. This code will never run,
    so it can be free()d once the module has been initialised.
    
    arm and arm64 need to count the number of PLTs they need before applying
    relocations based on the section name. The init PLTs are stored separately
    so they can be free()d. arm and arm64 both use within_module_init() to
    decide which list of PLTs to use when applying the relocation.
    
    Because within_module_init()'s behaviour changes when module unloading
    is disabled, both architectures would need to take this into account when
    counting the PLTs.
    
    Today neither architecture does this, meaning when module unloading is
    disabled there are insufficient PLTs in the init section to load some
    modules, resulting in warnings:
    | WARNING: CPU: 2 PID: 51 at arch/arm64/kernel/module-plts.c:99 module_emit_plt_entry+0x184/0x1cc
    | Modules linked in: crct10dif_common
    | CPU: 2 PID: 51 Comm: modprobe Not tainted 6.5.0-rc4-yocto-standard-dirty #15208
    | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
    | pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    | pc : module_emit_plt_entry+0x184/0x1cc
    | lr : module_emit_plt_entry+0x94/0x1cc
    | sp : ffffffc0803bba60
    [...]
    | Call trace:
    |  module_emit_plt_entry+0x184/0x1cc
    |  apply_relocate_add+0x2bc/0x8e4
    |  load_module+0xe34/0x1bd4
    |  init_module_from_file+0x84/0xc0
    |  __arm64_sys_finit_module+0x1b8/0x27c
    |  invoke_syscall.constprop.0+0x5c/0x104
    |  do_el0_svc+0x58/0x160
    |  el0_svc+0x38/0x110
    |  el0t_64_sync_handler+0xc0/0xc4
    |  el0t_64_sync+0x190/0x194
    
    Instead of duplicating module_init_layout_section()'s logic, expose it.
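
    The helper itself is small; its logic is roughly the following sketch, with
    its declaration moved to include/linux/moduleloader.h so that arch code can
    call it:

        bool module_init_layout_section(const char *sname)
        {
        #ifndef CONFIG_MODULE_UNLOAD
                if (module_exit_section(sname))
                        return true;
        #endif
                return module_init_section(sname);
        }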
    
    Reported-by: Adam Johnston <[email protected]>
    Fixes: 055f23b74b20 ("module: check for exit sections in layout_sections() instead of module_init_section()")
    Cc: [email protected]
    Signed-off-by: James Morse <[email protected]>
    Signed-off-by: Luis Chamberlain <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
 
rcu-tasks: Add trc_inspect_reader() checks for exiting critical section [+ + +]
Author: Paul E. McKenney <[email protected]>
Date:   Wed Jul 28 11:32:28 2021 -0700

    rcu-tasks: Add trc_inspect_reader() checks for exiting critical section
    
    commit 18f08e758f34e6dfe0668bee51bd2af7adacf381 upstream.
    
    Currently, trc_inspect_reader() treats a task exiting its RCU Tasks
    Trace read-side critical section the same as being within that critical
    section.  However, this can fail because that task might have already
    checked its .need_qs field, which means that it might never decrement
    the all-important trc_n_readers_need_end counter.  Of course, for that
    to happen, the task would need to never again execute an RCU Tasks Trace
    read-side critical section, but this really could happen if the system's
    last trampoline was removed.  Note that exit from such a critical section
    cannot be treated as a quiescent state due to the possibility of nested
    critical sections.  This means that if trc_inspect_reader() sees a
    negative nesting value, it must set up to try again later.
    
    This commit therefore ignores tasks that are exiting their RCU Tasks
    Trace read-side critical sections so that they will be rechecked later.
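
    In effect, trc_inspect_reader() now refuses to mark a task with a negative
    ->trc_reader_nesting (i.e. one that is mid-exit from its critical section)
    as checked, and instead reports failure so the task is revisited; a rough
    sketch of the idea:

        int nesting = t->trc_reader_nesting;

        /* Only a task that is not mid-exit (nesting >= 0) may be marked as
         * checked; a mid-exit task stays on the holdout list. */
        t->trc_reader_checked = nesting >= 0;
        if (nesting <= 0)
                return !nesting;   /* in a QS: done; exiting: try again later */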
    
    [ paulmck: Apply feedback from Neeraj Upadhyay and Boqun Feng. ]
    
    Signed-off-by: Paul E. McKenney <[email protected]>
    Cc: Joel Fernandes <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

rcu-tasks: Fix IPI failure handling in trc_wait_for_one_reader [+ + +]
Author: Neeraj Upadhyay <[email protected]>
Date:   Fri Aug 27 13:43:35 2021 +0530

    rcu-tasks: Fix IPI failure handling in trc_wait_for_one_reader
    
    commit 46aa886c483f57ef13cd5ea0a85e70b93eb1d381 upstream.
    
    The trc_wait_for_one_reader() function is called at multiple stages
    of trace rcu-tasks GP function, rcu_tasks_wait_gp():
    
    - First, it is called as part of the per-task function,
      rcu_tasks_trace_pertask(), for all non-idle tasks. As part of per-task
      processing, this function adds the task to the holdout list and, if the
      task is currently running on a CPU, it sends an IPI to the task's CPU.
      The IPI handler takes action depending on whether the task is in a trace
      rcu-tasks read side critical section or not:
    
      - a. If the task is in a trace rcu-tasks read side critical section
           (t->trc_reader_nesting != 0), the IPI handler sets the task's
           ->trc_reader_special.b.need_qs, so that the task notifies the GP
           handling function when it exits its outermost read side critical
           section (by decrementing trc_n_readers_need_end).
           trc_wait_for_one_reader() also increments trc_n_readers_need_end,
           so that the trace rcu-tasks GP handler function waits for this
           task's read side exit notification. The IPI handler also sets
           t->trc_reader_checked to true, and no further IPIs are sent for
           this task, for this trace rcu-tasks grace period and this
           task can be removed from holdout list.
    
      - b. If the task is in the process of exiting its trace rcu-tasks
           read side critical section (t->trc_reader_nesting < 0), defer
           this task's processing to future calls to trc_wait_for_one_reader().
    
      - c. If the task is not in a trace rcu-tasks read side critical section
           (t->trc_reader_nesting == 0), ->trc_reader_checked is set for this
           task, so that it is removed from the holdout list.
    
    - Second, trc_wait_for_one_reader() is called as part of post scan, in
      function rcu_tasks_trace_postscan(), for all idle tasks.
    
    - Third, in check_all_holdout_tasks_trace(), trc_wait_for_one_reader() is
      called for each task in the holdout list, but only if there isn't
      a pending IPI for the task (->trc_ipi_to_cpu == -1). This removes the
      task from the holdout list once the IPI handler has completed the
      required work, ensuring that the current trace rcu-tasks grace period
      either waits for this task or the task is not in a trace rcu-tasks
      read side critical section.
    
    Now consider the scenario where smp_call_function_single() fails in the
    first case, inside rcu_tasks_trace_pertask(). In this case,
    ->trc_ipi_to_cpu is set to the current CPU for that task. This will
    result in trc_wait_for_one_reader() getting skipped in the third case,
    inside check_all_holdout_tasks_trace(), for this task. This further
    results in ->trc_reader_checked never getting set for this task,
    and the task not getting removed from the holdout list. This can cause
    the current trace rcu-tasks grace period to stall.
    
    Fix the above problem by resetting ->trc_ipi_to_cpu to -1 on
    smp_call_function_single() failure, so that future IPIs can
    be sent for this task.
    
    Note that all three of the trc_wait_for_one_reader() function's
    callers (rcu_tasks_trace_pertask(), rcu_tasks_trace_postscan(),
    check_all_holdout_tasks_trace()) hold cpus_read_lock().  This means
    that smp_call_function_single() cannot race with CPU hotplug, and thus
    should never fail.  Therefore, also add a warning in order to report
    any such failure in case smp_call_function_single() grows some other
    reason for failure.
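
    The failure path in trc_wait_for_one_reader() then looks roughly like this
    sketch (simplified):

        if (smp_call_function_single(cpu, trc_read_check_handler, t, 0)) {
                /* Should not happen while cpus_read_lock() is held, so warn,
                 * then clear the pending-IPI marker so that a later pass of
                 * check_all_holdout_tasks_trace() retries this task. */
                WARN_ONCE(1, "%s(): smp_call_function_single() failed for CPU: %d\n",
                          __func__, cpu);
                t->trc_ipi_to_cpu = -1;
        }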
    
    Signed-off-by: Neeraj Upadhyay <[email protected]>
    Signed-off-by: Paul E. McKenney <[email protected]>
    Cc: Joel Fernandes <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

rcu-tasks: Wait for trc_read_check_handler() IPIs [+ + +]
Author: Paul E. McKenney <[email protected]>
Date:   Fri Jul 30 12:17:59 2021 -0700

    rcu-tasks: Wait for trc_read_check_handler() IPIs
    
    commit cbe0d8d91415c9692fe88191940d98952b6855d9 upstream.
    
    Currently, RCU Tasks Trace initializes the trc_n_readers_need_end counter
    to the value one, increments it before each trc_read_check_handler()
    IPI, then decrements it within trc_read_check_handler() if the target
    task was in a quiescent state (or if the target task moved to some other
    CPU while the IPI was in flight), complaining if the new value was zero.
    The rationale for complaining is that the initial value of one must be
    decremented away before zero can be reached, and this decrement has not
    yet happened.
    
    Except that trc_read_check_handler() is initiated with an asynchronous
    smp_call_function_single(), which might be significantly delayed.  This
    can result in false-positive complaints about the counter reaching zero.
    
    This commit therefore waits for in-flight IPI handlers to complete before
    decrementing away the initial value of one from the trc_n_readers_need_end
    counter.
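
    Conceptually, the grace-period end now polls until no trc_read_check_handler()
    IPI is still marked in flight before removing the initial count of one from
    trc_n_readers_need_end; an illustrative sketch (not the exact upstream diff,
    and assuming the per-CPU trc_ipi_to_cpu flag that tracks in-flight IPIs):

        /* Wait for any lingering IPI handlers to complete. */
        for_each_online_cpu(cpu)
                while (smp_load_acquire(per_cpu_ptr(&trc_ipi_to_cpu, cpu)))
                        udelay(10);

        /* Only now is it safe to drop the initial count of one. */
        atomic_dec(&trc_n_readers_need_end);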
    
    Signed-off-by: Paul E. McKenney <[email protected]>
    Cc: Joel Fernandes <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
rcu: Prevent expedited GP from enabling tick on offline CPU [+ + +]
Author: Paul E. McKenney <[email protected]>
Date:   Wed Sep 29 09:21:34 2021 -0700

    rcu: Prevent expedited GP from enabling tick on offline CPU
    
    commit 147f04b14adde831eb4a0a1e378667429732f9e8 upstream.
    
    If an RCU expedited grace period starts just when a CPU is in the process
    of going offline, so that the outgoing CPU has completed its pass through
    stop-machine but has not yet completed its final dive into the idle loop,
    RCU will attempt to enable that CPU's scheduling-clock tick via a call
    to tick_dep_set_cpu().  For this to happen, that CPU has to have been
    online when the expedited grace period completed its CPU-selection phase.
    
    This is pointless:  The outgoing CPU has interrupts disabled, so it cannot
    take a scheduling-clock tick anyway.  In addition, the tick_dep_set_cpu()
    function's eventual call to irq_work_queue_on() will splat as follows:
    
    smpboot: CPU 1 is now offline
    WARNING: CPU: 6 PID: 124 at kernel/irq_work.c:95
    +irq_work_queue_on+0x57/0x60
    Modules linked in:
    CPU: 6 PID: 124 Comm: kworker/6:2 Not tainted 5.15.0-rc1+ #3
    Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
    +rel-1.14.0-0-g155821a-rebuilt.opensuse.org 04/01/2014
    Workqueue: rcu_gp wait_rcu_exp_gp
    RIP: 0010:irq_work_queue_on+0x57/0x60
    Code: 8b 05 1d c7 ea 62 a9 00 00 f0 00 75 21 4c 89 ce 44 89 c7 e8
    +9b 37 fa ff ba 01 00 00 00 89 d0 c3 4c 89 cf e8 3b ff ff ff eb ee <0f> 0b eb b7
    +0f 0b eb db 90 48 c7 c0 98 2a 02 00 65 48 03 05 91
     6f
    RSP: 0000:ffffb12cc038fe48 EFLAGS: 00010282
    RAX: 0000000000000001 RBX: 0000000000005208 RCX: 0000000000000020
    RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff9ad01f45a680
    RBP: 000000000004c990 R08: 0000000000000001 R09: ffff9ad01f45a680
    R10: ffffb12cc0317db0 R11: 0000000000000001 R12: 00000000fffecee8
    R13: 0000000000000001 R14: 0000000000026980 R15: ffffffff9e53ae00
    FS:  0000000000000000(0000) GS:ffff9ad01f580000(0000)
    +knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000000000000 CR3: 000000000de0c000 CR4: 00000000000006e0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Call Trace:
     tick_nohz_dep_set_cpu+0x59/0x70
     rcu_exp_wait_wake+0x54e/0x870
     ? sync_rcu_exp_select_cpus+0x1fc/0x390
     process_one_work+0x1ef/0x3c0
     ? process_one_work+0x3c0/0x3c0
     worker_thread+0x28/0x3c0
     ? process_one_work+0x3c0/0x3c0
     kthread+0x115/0x140
     ? set_kthread_struct+0x40/0x40
     ret_from_fork+0x22/0x30
    ---[ end trace c5bf75eb6aa80bc6 ]---
    
    This commit therefore avoids invoking tick_dep_set_cpu() on offlined
    CPUs to limit both futility and false-positive splats.
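
    The guard boils down to only poking the tick of a CPU that is still online;
    a rough sketch of the shape of the fix in the expedited-GP wakeup path:

        if (!rdp->rcu_forced_tick_exp) {
                rdp->rcu_forced_tick_exp = true;
                preempt_disable();
                /* An outgoing CPU has interrupts disabled and cannot take a
                 * scheduling-clock tick, so do not queue irq_work on it. */
                if (cpu_online(cpu))
                        tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
                preempt_enable();
        }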
    
    Signed-off-by: Paul E. McKenney <[email protected]>
    Signed-off-by: Joel Fernandes (Google) <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
Revert "drm/amdgpu: install stub fence into potential unused fence pointers" [+ + +]
Author: Greg Kroah-Hartman <[email protected]>
Date:   Thu Aug 31 12:27:47 2023 +0200

    Revert "drm/amdgpu: install stub fence into potential unused fence pointers"
    
    This reverts commit 04bd3a362d2f1788272776bd8b10e5a012e69b77 which is
    commit 187916e6ed9d0c3b3abc27429f7a5f8c936bd1f0 upstream.
    
    It is reported to cause lots of log spam, so it should be dropped for
    now.
    
    Reported-by: Guenter Roeck <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Reported-by: Chia-I Wu <[email protected]>
    Link: https://lore.kernel.org/r/CAPaKu7RTgAMBLHbwtp4zgiBSDrTFtAj07k5qMzkuLQy2Zr+sZA@mail.gmail.com
    Cc: Christian König <[email protected]>
    Cc: Lang Yu <[email protected]>
    Cc: Alex Deucher <[email protected]>
    Cc: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
Revert "MIPS: Alchemy: fix dbdma2" [+ + +]
Author: Greg Kroah-Hartman <[email protected]>
Date:   Thu Aug 31 12:30:23 2023 +0200

    Revert "MIPS: Alchemy: fix dbdma2"
    
    This reverts commit 619672bf2d04cdc1bc3925defd0312c2ef11e5d0 which is
    commit 2d645604f69f3a772d58ead702f9a8e84ab2b342 upstream.
    
    It breaks the build, so should be dropped.
    
    Reported-by: Guenter Roeck <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Cc: Thomas Bogendoerfer <[email protected]>
    Cc: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>