diff --git a/news/README.md b/news/README.md index 5a6570c70e5cf90825ce237934d751e082219568..b3d15c498d1751d76bc3a1e83be9688a7431a555 100644 --- a/news/README.md +++ b/news/README.md @@ -4,6 +4,980 @@ * [2022 年](2022.md) +## 20230205:第 32 期 + +### 内核动态 + +#### RISC-V 架构支持 + +* [v5: KVM perf support](http://lore.kernel.org/linux-riscv/20230205011515.1284674-1-atishp@rivosinc.com/) + + This series extends perf support for KVM. The KVM implementation relies + on the SBI PMU extension and trap n emulation of hpmcounter CSRs. + The KVM implementation exposes the virtual counters to the guest and internally + manage the counters using kernel perf counters. + +* [v4: Basic pinctrl support for StarFive JH7110 RISC-V SoC](http://lore.kernel.org/linux-riscv/20230203141801.59083-1-hal.feng@starfivetech.com/) + + This patch series adds basic pinctrl support for StarFive JH7110 SoC. + +* [v3: StarFive's SDIO/eMMC driver support](http://lore.kernel.org/linux-riscv/20230203081913.81968-1-william.qiu@starfivetech.com/) + + This patchset adds initial rudimentary support for the StarFive + designware mobile storage host controller driver. And this driver will + be used in StarFive's VisionFive 2 board. The main purpose of adding + this driver is to accommodate the ultra-high speed mode of eMMC. + +* [v4: RISC-V kasan rework](http://lore.kernel.org/linux-riscv/20230203075232.274282-1-alexghiti@rivosinc.com/) + + As described in patch 2, our current kasan implementation is intricate, + so I tried to simplify the implementation and mimic what arm64/x86 are doing. + +* [v1: Documentation: RISC-v: Define Xlinuxs{s,m}aia](http://lore.kernel.org/linux-riscv/20230203001201.14770-1-palmer@rivosinc.com/) + + The AIA specification was only partially frozen, but provides no way to + refer to the subset of behavior that has been frozen. It seems like + there's not a whole lot of interest in the non-frozen behavior, so let's + just define an extension that only consists of the frozen behavior + +* [v1: RISC-v: Only provide the single-letter extensions in HWCAP](http://lore.kernel.org/linux-riscv/20230202233832.11036-1-palmer@rivosinc.com/) + + The recent refactoring led to us leaking some HWCAP bits to userspace + that didn't make much sense. With any luck we'll have a better scheme + soon, but for now just mask off those bits to avoid polluting userspace. + +* [v3: spi: Add support for stacked/parallel memories](http://lore.kernel.org/linux-riscv/20230202152258.512973-1-amit.kumar-mahapatra@amd.com/) + + This patch is in the continuation to the discussions which happened on + 'commit f89504300e94 ("spi: Stacked/parallel memories bindings")' for + adding dt-binding support for stacked/parallel memories. + +* [v1: RESEND: dt-bindings: timer: sifive,clint: add comaptibles for T-Head's C9xx](http://lore.kernel.org/linux-riscv/20230202072814.319903-1-uwu@icenowy.me/) + + T-Head C906/C910 CLINT is not compliant to SiFive ones (and even not + compliant to the newcoming ACLINT spec) because of lack of mtime register. + +* [v1: clocksource: riscv: Patch riscv_clock_next_event() jump before first use](http://lore.kernel.org/linux-riscv/512FC581-4097-4433-9C3D-CBCB7CD61954@rivosinc.com/) + + A static key is used to select between SBI and Sstc timer usage in + riscv_clock_next_event(), but currently the direction is resolved + after cpuhp_setup_state() is called (which sets the next event). 
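+
+Relating to the HWCAP entry above ("RISC-V: Only provide the single-letter extensions in HWCAP"), here is a minimal userspace sketch of how those bits are typically consumed. It assumes the conventional riscv encoding in which bit (letter - 'a') of AT_HWCAP advertises single-letter extension 'letter'; it is an illustration, not code from the patch.
+
+```c
+#include <stdio.h>
+#include <sys/auxv.h>
+
+int main(void)
+{
+    unsigned long hwcap = getauxval(AT_HWCAP);
+
+    /* riscv reports single-letter ISA extensions as bit (letter - 'a');
+     * anything beyond those bits is what the patch masks off. */
+    for (int c = 'a'; c <= 'z'; c++)
+        if (hwcap & (1UL << (c - 'a')))
+            printf("extension '%c' advertised\n", c);
+    return 0;
+}
+```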
+ +* [v1: riscv: disable generation of unwind tables](http://lore.kernel.org/linux-riscv/mvmzg9xybqu.fsf@suse.de/) + + GCC 13 will enable -fasynchronous-unwind-tables by default on riscv. In + the kernel, we don't have any use for unwind tables yet, so disable them. + More importantly, the .eh_frame section brings relocations + (R_RISC_32_PCREL, R_RISCV_SET{6,8,16}, R_RISCV_SUB{6,8,16}) into modules + that we are not prepared to handle. + +* [v3: riscv: mm: hugetlb: Enable ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP](http://lore.kernel.org/linux-riscv/20230201015259.3222524-1-guoren@kernel.org/) + + Add HVO support for RISC-V; see commit 6be24bed9da3 ("mm: hugetlb: + introduce a new config HUGETLB_PAGE_FREE_VMEMMAP"). This patch is + similar to commit 1e63ac088f20 ("arm64: mm: hugetlb: enable + HUGETLB_PAGE_FREE_VMEMMAP for arm64"), and riscv's motivation is the same as arm64. + +* [v4: riscv: Allow to downgrade paging mode from the command line](http://lore.kernel.org/linux-riscv/20230131151115.1972740-1-alexghiti@rivosinc.com/) + + This new version gets rid of the limitation that prevented KASAN kernels + to use the newly introduced parameters. + + While looking into KASLR, I fell onto commit aacd149b6238 ("arm64: head: + avoid relocating the kernel twice for KASLR"): it allows to use the fdt + functions very early in the boot process with KASAN enabled by simply + compiling a new version of those functions without instrumentation. + +* [v1: Add basic ACPI support for RISC-V](http://lore.kernel.org/linux-riscv/20230130182225.2471414-1-sunilvl@ventanamicro.com/) + + This patch series enables the basic ACPI infrastructure for RISC-V. + Supporting external interrupt controllers is in progress and hence it is + tested using polling based HVC SBI console and RAM disk. + +* [v3: RISC-v: Apply Zicboz to clear_page](http://lore.kernel.org/linux-riscv/20230130120128.1349464-1-ajones@ventanamicro.com/) + + When the Zicboz extension is available we can more rapidly zero naturally + aligned Zicboz block sized chunks of memory. As pages are always page + aligned and are larger than any Zicboz block size will be, then + clear_page() appears to be a good candidate for the extension. + +* [v2: Change PWM-controlled LED pin active mode and algorithm](http://lore.kernel.org/linux-riscv/20230130093229.27489-1-nylon.chen@sifive.com/) + + According to the circuit diagram of User LEDs - RGB described in the + manual hifive-unleashed-a00.pdf[0] and hifive-unmatched-schematics-v3.pdf[1]. + The behavior of PWM is acitve-high. + +* [v2: riscv: mm: Implement pmdp_collapse_flush for THP](http://lore.kernel.org/linux-riscv/20230130074815.1694055-1-mchitale@ventanamicro.com/) + + When THP is enabled, 4K pages are collapsed into a single huge + page using the generic pmdp_collapse_flush() which will further + use flush_tlb_range() to shoot-down stale TLB entries. + +* [v2: mm, arch: add generic implementation of pfn_valid() for FLATMEM](http://lore.kernel.org/linux-riscv/20230129124235.209895-1-rppt@kernel.org/) + + Every architecture that supports FLATMEM memory model defines its own + version of pfn_valid() that essentially compares a pfn to max_mapnr. + +* [v1: riscv: Add header include guards to insn.h](http://lore.kernel.org/linux-riscv/20230129094242.282620-1-liaochang1@huawei.com/) + + Add header include guards to insn.h to prevent repeating declaration of + any identifiers in insn.h. 
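+
+  A quick sketch of the guard pattern being added (the macro name here is illustrative, not necessarily the one used in the patch):
+
+  ```c
+  /* insn.h: include guard, so repeated #include does not redeclare anything */
+  #ifndef _ASM_RISCV_INSN_H
+  #define _ASM_RISCV_INSN_H
+
+  /* ... declarations ... */
+
+  #endif /* _ASM_RISCV_INSN_H */
+  ```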
+ +* [v1: riscv: support arch_has_hw_pte_young()](http://lore.kernel.org/linux-riscv/20230129064956.143664-1-tjytimi@163.com/) + + The arch_has_hw_pte_young() is false for riscv by default. If it's + false, page table walk is almost skipped for MGLRU reclaim. And it + will also cause useless step in __wp_page_copy_user(). + +#### 进程调度 + +* [v1: sched/isolation: Prep work for pcp cache draining isolation](http://lore.kernel.org/lkml/20230203232409.163847-1-frederic@kernel.org/) + + For reference: https://lore.kernel.org/lkml/20230125073502.743446-1-leobras@redhat.com/ + And the latest proposal: https://lore.kernel.org/lkml/Y90mZQhW89HtYfT9@dhcp22.suse.cz/ + +* [v1: cpu,sched: Mark arch_cpu_idle_dead() __noreturn](http://lore.kernel.org/lkml/cover.1675461757.git.jpoimboe@kernel.org/) + + These are some minor changes to enable the __noreturn attribute for + arch_cpu_idle_dead(). (If there are no objections, I can merge the + entire set through the tip tree.) + + Until recently [1], in Xen, when a previously offlined CPU was brought + back online, it unexpectedly resumed execution where it left off in the + middle of the idle loop by returning from play_dead() and its caller + arch_cpu_idle_dead(). + +* [v1: sched/deadline: Add more reschedule cases to prio_changed_dl()](http://lore.kernel.org/lkml/20230202182854.3696665-1-vschneid@redhat.com/) + + On that kernel, it is quite easy to trigger using rt-tests's deadline_test + [1] with the test running on isolated CPUs (this reduces the chance of + something unrelated setting TIF_NEED_RESCHED on the idle tasks, making the + issue even more obvious as the hung task detector chimes in). + +* [v1: kernel/sched/core: adjust rt_priority accordingly when prio is changed](http://lore.kernel.org/lkml/1675245680-2811-1-git-send-email-chensong_2000@189.cn/) + + When a high priority process is acquiring a rtmutex which is held by a + low priority process, the latter's priority will be boosted up by calling + rt_mutex_setprio->__setscheduler_prio. + +* [v2: sched/numa: Enhance vma scanning](http://lore.kernel.org/lkml/cover.1675159422.git.raghavendra.kt@amd.com/) + + The patchset proposes one of the enhancements to numa vma scanning + suggested by Mel. This is continuation of [2]. Though I have removed + RFC, I do think some parts need more feedback and refinement. + + Existing mechanism of scan period involves, scan period derived from + per-thread stats. Process Adaptive autoNUMA [1] proposed to gather NUMA + fault stats at per-process level to capture aplication behaviour better. + +* [v1: sched: Consider capacity for certain load balancing decisions](http://lore.kernel.org/lkml/20230201012032.2874481-1-xii@google.com/) + + After load balancing was split into different scenarios, CPU capacity + is ignored for the "migrate_task" case, which means a thread can stay + on a softirq heavy cpu for an extended amount of time. + +* [v2: sched: pick_next_rt_entity(): checked list_entry](http://lore.kernel.org/lkml/20230128-list-entry-null-check-sched-v2-1-d8e010cce91b@diag.uniroma1.it/) + + Commit 326587b84078 ("sched: fix goto retry in pick_next_task_rt()") + removed any path which could make pick_next_rt_entity() return NULL. + However, BUG_ON(!rt_se) in _pick_next_task_rt() (the only caller of + pick_next_rt_entity()) still checks the error condition, which can + never happen, since list_entry() never returns NULL. 
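+
+  A small userspace illustration (struct names made up) of why list_entry() cannot return NULL here: it is just container_of(), i.e. constant pointer arithmetic on an already non-NULL list pointer, so the BUG_ON(!rt_se) can never fire:
+
+  ```c
+  #include <stddef.h>
+  #include <stdio.h>
+
+  struct list_head { struct list_head *next, *prev; };
+
+  #define container_of(ptr, type, member) \
+          ((type *)((char *)(ptr) - offsetof(type, member)))
+  #define list_entry(ptr, type, member) container_of(ptr, type, member)
+
+  struct rt_entity_demo {                  /* hypothetical stand-in for sched_rt_entity */
+          int prio;
+          struct list_head run_list;
+  };
+
+  int main(void)
+  {
+          struct rt_entity_demo e = { .prio = 5 };
+          struct list_head *pos = &e.run_list;   /* queue entries are never NULL */
+          struct rt_entity_demo *se = list_entry(pos, struct rt_entity_demo, run_list);
+
+          printf("entity at %p, prio %d\n", (void *)se, se->prio);
+          return 0;
+  }
+  ```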
+ +#### 内存管理 + +* [v1: bpf-next: bpf, mm: introduce cgroup.memory=nobpf](http://lore.kernel.org/linux-mm/20230205065805.19598-1-laoar.shao@gmail.com/) + + So let's give the user an option to disable bpf memory accouting. + + The idea of "cgroup.memory=nobpf" is originally by Tejun[1]. + + [1]. https://lwn.net/ml/linux-mm/YxjOawzlgE458ezL@slm.duckdns.org/ + +* [v9: cachestat: a new syscall for page cache state of files](http://lore.kernel.org/linux-mm/20230203190413.2559707-1-nphamcs@gmail.com/) + + There is currently no good way to query the page cache state of large + file sets and directory trees. There is mincore(), but it scales poorly: + the kernel writes out a lot of bitmap data that userspace has to + aggregate, when the user really doesn not care about per-page information + in that case. + +* [v3: folio based filemap_map_pages()](http://lore.kernel.org/linux-mm/20230203131636.1648662-1-fengwei.yin@intel.com/) + + Current filemap_map_pages() uses page granularity even when + underneath folio is large folio. Making it use folio based + granularity allows batched refcount, rmap and mm counter + update. Which brings performance gain. + +* [v1: mm/page_alloc: reduce fallbacks to (MIGRATE_PCPTYPES - 1)](http://lore.kernel.org/linux-mm/20230203100132.1627787-1-yajun.deng@linux.dev/) + + The commit 1dd214b8f21c ("mm: page_alloc: avoid merging non-fallbackable + pageblocks with others") has removed MIGRATE_CMA and MIGRATE_ISOLATE from + fallbacks list. so there is no need to add an element at the end of every type. + +* [v7: shoot lazy tlbs (lazy tlb refcount scalability improvement)](http://lore.kernel.org/linux-mm/20230203071837.1136453-1-npiggin@gmail.com/) + + (Sorry about the double send) + + This series improves scalability of context switching between user and + kernel threads on large systems with a threaded process spread across a lot of CPUs. + +* [v1: Ignore non-LRU-based reclaim in memcg reclaim](http://lore.kernel.org/linux-mm/20230202233229.3895713-1-yosryahmed@google.com/) + + Reclaimed pages through other means than LRU-based reclaim are tracked + through reclaim_state in struct scan_control, which is stashed in + current task_struct. These pages are added to the number of reclaimed + pages through LRUs. + +* [v1: mm: memcontrol: don't account swap failures not due to cgroup limits](http://lore.kernel.org/linux-mm/20230202155626.1829121-1-hannes@cmpxchg.org/) + + Upon closer examination, this is an ARM64 machine that doesn't support + swapping out THPs. In that case, the first get_swap_page() fails, and + the kernel falls back to splitting the THP and swapping the 4k + constituents one by one. /proc/vmstat confirms this with a high rate + of thp_swpout_fallback events. + +* [v2: Introduce cmpxchg128() -- aka. the demise of cmpxchg_double()](http://lore.kernel.org/linux-mm/20230202145030.223740842@infradead.org/) + + Since Linus hated on cmpxchg_double(), a few patches to get rid of it, as + proposed here: + + https://lkml.kernel.org/r/Y2U3WdU61FvYlpUh@hirez.programming.kicks-ass.net + + These patches are based on 6.2.0-rc6 + cryptodev-2.6, but also apply to next/master. + + Available here: + + git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git core/wip-u128 + +* [v10: Implement IOCTL to get and/or the clear info about PTEs](http://lore.kernel.org/linux-mm/20230202112915.867409-1-usama.anjum@collabora.com/) + + Historically, soft-dirty PTE bit tracking has been used in the CRIU + project. 
The procfs interface is enough for finding the soft-dirty bit + status and clearing the soft-dirty bit of all the pages of a process. + We have the use case where we need to track the soft-dirty PTE bit for + only specific pages on-demand. We need this tracking and clear mechanism + of a region of memory while the process is running to emulate the + getWriteWatch() syscall of Windows. + +* [v1: mm: introduce entrance for root_mem_cgroup's current](http://lore.kernel.org/linux-mm/1675312377-4782-1-git-send-email-zhaoyang.huang@unisoc.com/) + + Introducing memory.root_current for the memory charges on root_mem_cgroup. + +* [v1: mm/bpf/perf: Store build id in file object](http://lore.kernel.org/linux-mm/20230201135737.800527-1-jolsa@kernel.org/) + + This RFC patchset adds new config CONFIG_FILE_BUILD_ID option, which adds + build id object pointer to the file object when enabled. The build id is + read/populated when the file is mmap-ed. + +* [v4: mm/vmalloc: replace BUG_ON to a simple if statement](http://lore.kernel.org/linux-mm/20230201115142.GA7772@min-iamroot/) + + As per the coding standards, in the event of an abnormal condition that + should not occur under normal circumstances, the kernel should attempt + recovery and proceed with execution, rather than halting the machine. + +* [v4: mm/vmalloc.c: allow vread() to read out vm_map_ram areas](http://lore.kernel.org/linux-mm/20230201091339.61761-1-bhe@redhat.com/) + + Stephen reported vread() will skip vm_map_ram areas when reading out + /proc/kcore with drgn utility. Please see below link to get more details. + +* [v4: mm: hwposion: support recovery from ksm_might_need_to_copy()](http://lore.kernel.org/linux-mm/20230201074433.96641-1-wangkefeng.wang@huawei.com/) + + When the kernel copy a page from ksm_might_need_to_copy(), but runs + into an uncorrectable error, it will crash since poisoned page is + consumed by kernel, this is similar to the issue recently fixed by + Copy-on-write poison recovery. + +* [v1: kasan: use %zd format for printing size_t](http://lore.kernel.org/linux-mm/20230201071312.2224452-1-arnd@kernel.org/) + + The size_t type depends on the architecture, so %lu does not work + on most 32-bit ones: + + In file included from include/kunit/assert.h:13, + from include/kunit/test.h:12, + from mm/kasan/report.c:12: + mm/kasan/report.c: In function 'describe_object_addr': + include/linux/kern_levels.h:5:25: error: format '%lu' expects argument of type 'long unsigned int', but argument 5 has type 'size_t' {aka 'unsigned int'} [-Werror=format=] + mm/kasan/report.c:270:9: note: in expansion of macro 'pr_err' + 270 | pr_err("The buggy address is located %d bytes %s of\n" + | ^ + +* [v1: mm/khugepaged: skip shmem with armed userfaultfd](http://lore.kernel.org/linux-mm/20230201034137.2463113-1-stevensd@google.com/) + + Collapsing memory in a vma that has an armed userfaultfd results in + zero-filling any missing pages, which breaks user-space paging for those + filled pages. Avoid khugepage bypassing userfaultfd by not collapsing + pages in shmem reached via scanning a vma with an armed userfaultfd if + doing so would zero-fill any pages. + +* [v1: mm: move FOLL_PIN debug accounting under CONFIG_DEBUG_VM](http://lore.kernel.org/linux-mm/54b0b07a-c178-9ffe-b5af-088f3c21696c@kernel.dk/) + + which wasn't there before. The node page state counters are percpu, but + with a very low threshold. On my setup, every 108th update ends up + needing to punt to two atomic_lond_add()'s, which is causing this above regression. 
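+
+  As a userspace analogy (not the kernel implementation) of the per-CPU counter behaviour described above: updates accumulate in a cheap local batch until a small threshold forces an atomic update of the shared counter, so a low threshold means frequent atomics:
+
+  ```c
+  #include <stdatomic.h>
+  #include <stdio.h>
+
+  static atomic_long shared_counter;       /* stand-in for the node page state */
+  static _Thread_local long local_delta;   /* stand-in for the per-CPU batch */
+
+  #define BATCH_THRESHOLD 8                /* hypothetical; the kernel tunes this */
+
+  static void count_one_event(void)
+  {
+          if (++local_delta >= BATCH_THRESHOLD) {
+                  /* the costly shared update the cover letter is measuring */
+                  atomic_fetch_add(&shared_counter, local_delta);
+                  local_delta = 0;
+          }
+  }
+
+  int main(void)
+  {
+          for (int i = 0; i < 1000; i++)
+                  count_one_event();
+          atomic_fetch_add(&shared_counter, local_delta);   /* fold the remainder */
+          printf("counted %ld events\n", atomic_load(&shared_counter));
+          return 0;
+  }
+  ```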
+ +* [v1: mm,page_alloc,cma: configurable CMA utilization](http://lore.kernel.org/linux-mm/20230131071052.GB19285@hu-sbhattip-lv.qualcomm.com/) + + Commit 16867664936e ("mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations") + added support to use CMA pages when more than 50% of total free pages in + the zone are free CMA pages. + +* [v1: mm/gup: Add folio to list when folio_isolate_lru() succeed](http://lore.kernel.org/linux-mm/20230131063206.28820-1-Kuan-Ying.Lee@mediatek.com/) + + If we call folio_isolate_lru() successfully, we will get + return value 0. We need to add this folio to the movable_pages_list. + +* [v2: mm-unstable: Convert a couple migrate functions to use folios](http://lore.kernel.org/linux-mm/20230130214352.40538-1-vishal.moola@gmail.com/) + + This patch set introduces folio_movable_ops() and converts 3 functions + in mm/migrate.c to use folios. It also introduces + folio_get_nontail_page() for folio conversions which may want to + distinguish between head and tail pages. + +#### 文件系统 + +* [v2: Support negative dentries on case-insensitive ext4 and f2fs](http://lore.kernel.org/linux-fsdevel/20230203210039.16289-1-krisman@suse.de/) + + This patchset enables negative dentries for case-insensitive directories + in ext4/f2fs. It solves the corner cases for this feature, including + those already tested by fstests (generic/556). It also solves an + existing bug with the existing implementation where old negative + dentries are left behind after a directory conversion to case-insensitive. + +* [v2: fsdax: dax_unshare_iter() should return a valid length](http://lore.kernel.org/linux-fsdevel/1675388906-50-1-git-send-email-ruansy.fnst@fujitsu.com/) + + The copy_mc_to_kernel() will return 0 if it executed successfully. + Then the return value should be set to the length it copied. + +* [v1: RESEND: pipe: avoid creating empty pipe buffers](http://lore.kernel.org/linux-fsdevel/20230131121127.466443-1-wiktorg@google.com/) + + pipe_write cannot be called on notification pipes so + post_one_notification cannot race it. + Locking and second pipe_full check are thus redundant. + +* [v9: DEPT(Dependency Tracker)](http://lore.kernel.org/linux-fsdevel/1675154394-25598-1-git-send-email-max.byungchul.park@gmail.com/) + + Nevertheless, I apologize for the lack of document. I promise to add it + before it gets needed to use DEPT's APIs by users. For now, you can use + DEPT just with CONFIG_DEPT on. + +* [GIT PULL: iov_iter: Improve page extraction (pin or just list)](http://lore.kernel.org/linux-fsdevel/3351099.1675077249@warthog.procyon.org.uk/) + + Could you consider pulling this patchset into the block tree? I think that + Al's fears wrt to pinned pages being removed from page tables causing deadlock + have been answered. Granted, there is still the issue of how to handle + vmsplice and a bunch of other places to fix, not least skbuff handling. + +* [v11: iov_iter: Improve page extraction (pin or just list)](http://lore.kernel.org/linux-fsdevel/20230130074129.28120-1-naresh.kamboju@linaro.org/) + + Build test pass on arm, arm64, i386, mips, parisc, powerpc, riscv, s390, sh, + sparc and x86_64. + Boot and LTP smoke pass on qemu-arm64, qemu-armv7, qemu-i386 and qemu-x86_64. 
+ +* [v4: RESEND: fs: coredump: using preprocessor directives for dump_emit_page](http://lore.kernel.org/linux-fsdevel/20230130013347.17654-1-xiehongyu1@kylinos.cn/) + + When CONFIG_COREDUMP is set and CONFIG_ELF_CORE is not, you'll get warnings + like: + fs/coredump.c:841:12: error: ‘dump_emit_page’ defined but not used + [-Werror=unused-function] + 841 | static int dump_emit_page(struct coredump_params *cprm, struct + page *page) + +* [v1: fscrypt: Copy the memcg information to the ciphertext page](http://lore.kernel.org/linux-fsdevel/20230129121851.2248378-1-willy@infradead.org/) + + Both f2fs and ext4 end up passing the ciphertext page to + wbc_account_cgroup_owner(). At the moment, the ciphertext page appears + to belong to no cgroup, so it is accounted to the root_mem_cgroup instead of whatever cgroup the original page was in. + +* [v1: blk: optimization for classic polling](http://lore.kernel.org/linux-fsdevel/3578876466-3733-1-git-send-email-nj.shetty@samsung.com/) + + This removes the dependency on interrupts to wake up task. Set task + state as TASK_RUNNING, if need_resched() returns true, + while polling for IO completion. + Earlier, polling task used to sleep, relying on interrupt to wake it up. + This made some IO take very long when interrupt-coalescing is enabled in NVMe. + +#### 网络设备 + +* [v2: net-next: add support for per action hw stats](http://lore.kernel.org/netdev/20230205135525.27760-1-ozsh@nvidia.com/) + + This series provides the platform to query per action stats for in_hw flows. + + The first four patches are preparation patches with no functionality change. + The fifth patch re-uses the existing flow action stats api to query action + stats for both classifier and action dumps. + The rest of the patches add per action stats support to the Mellanox driver. + +* [v1: net-next: net: move more duplicate code of ovs and tc conntrack into nf_conntrack_ovs](http://lore.kernel.org/netdev/cover.1675548023.git.lucien.xin@gmail.com/) + + We've moved some duplicate code into nf_nat_ovs in: + + "net: eliminate the duplicate code in the ct nat functions of ovs and tc" + +* [v3: net-next: tuntap: correctly initialize socket uid](http://lore.kernel.org/netdev/20230131-tuntap-sk-uid-v3-0-81188b909685@diag.uniroma1.it/) + + sock_init_data() assumes that the `struct socket` passed in input is + contained in a `struct socket_alloc` allocated with sock_alloc(). + However, tap_open() and tun_chr_open() pass a `struct socket` embedded + in a `struct tap_queue` and `struct tun_file` respectively, both + allocated with sk_alloc(). + This causes a type confusion when issuing a container_of() with + SOCK_INODE() in sock_init_data() which results in assigning a wrong + sk_uid to the `struct sock` in input. + +* [v1: net-next: vxlan: Add MDB support](http://lore.kernel.org/netdev/20230204170801.3897900-1-idosch@nvidia.com/) + + This patchset implements MDB support in the VXLAN driver, allowing it to + selectively forward IP multicast traffic to VTEPs with interested + receivers instead of flooding it to all the VTEPs as BUM. + +* [v1: net-next:pull request: implement devlink reload in ice](http://lore.kernel.org/netdev/20230203211456.705649-1-anthony.l.nguyen@intel.com/) + + Michal Swiatkowski says: + + This is a part of changes done in patchset [0]. Resource management is + kind of controversial part, so I split it into two patchsets. + + It is the first one, covering refactor and implement reload API call. 
+ +* [v1: firmware: qcom_scm: Move qcom_scm.h to include/linux/firmware/qcom/](http://lore.kernel.org/netdev/20230203210956.3580811-1-quic_eberman@quicinc.com/) + + Move include/linux/qcom_scm.h to include/linux/firmware/qcom/qcom_scm.h. + This removes 1 of a few remaining Qualcomm-specific headers into a more + approciate subdirectory under include/. + +* [v1: net-next: ionic: rx buffers and on-chip descriptors](http://lore.kernel.org/netdev/20230203210016.36606-1-shannon.nelson@amd.com/) + + We start with a couple of house-keeping patches that were + originally presented for 'net', then we add support for on-chip + descriptor rings and Rx buffer page cacheing. + +* [v2: 9p/client: don't assume signal_pending() clears on recalc_sigpending()](http://lore.kernel.org/netdev/9422b998-5bab-85cc-5416-3bb5cf6dd853@kernel.dk/) + + signal_pending() really means that an exit to userspace is required to + clear the condition, as it could be either an actual signal, or it could + be TWA_SIGNAL based task_work that needs processing. The 9p client + does a recalc_sigpending() to take care of the former, but that still + leaves TWA_SIGNAL task_work. The result is that if we do have TWA_SIGNAL + task_work pending, then we'll sit in a tight loop spinning as + signal_pending() remains true even after recalc_sigpending(). + +* [v11: nvme-tcp receive offloads](http://lore.kernel.org/netdev/20230203132705.627232-1-aaptel@nvidia.com/) + + Here is the next iteration of our nvme-tcp receive offload series. + + The main changes are in patch 3 (netlink). + + Rebased on top of today net-next + + The changes are also available through git: + + Repo: https://github.com/aaptel/linux.git branch nvme-rx-offload-v11 + Web: https://github.com/aaptel/linux/tree/nvme-rx-offload-v11 + + The NVMeTCP offload was presented in netdev 0x16 (video now available): + - https://netdevconf.info/0x16/session.html?NVMeTCP-Offload-%E2%80%93-Implementation-and-Performance-Gains + - https://youtu.be/W74TR-SNgi4 + +* [v1: atm: eni: replace DPRINTK macro with pr_debug()](http://lore.kernel.org/netdev/00f95478-c9cc-1f4b-820e-d427a9113418@icloud.com/) + + The macro DPRINTK is in use in lots of different source files, varying in + their implementation. One of those files is drivers/atm/eni.c. + + Replacing them with pr_debug() and their counterparts makes it more + consistent and easier to read. + +* [v1: Bluetooth: Make sure LE create conn cancel is sent when timeout](http://lore.kernel.org/netdev/20230203173900.1.I9ca803e2f809e339da43c103860118e7381e4871@changeid/) + + When sending LE create conn command, we set a timer with a duration of + HCI_LE_CONN_TIMEOUT before timing out and calling + create_le_conn_complete. Additionally, when receiving the command + complete, we also set a timer with the same duration to call le_conn_timeout. + +* [v1: Bluetooth: Free potentially unfreed SCO connection](http://lore.kernel.org/netdev/20230203173024.1.Ieb6662276f3bd3d79e9134ab04523d584c300c45@changeid/) + + When it happens, hci_cs_setup_sync_conn won't be able to obtain the + reference to the SCO connection, so it will be stuck and potentially hinder subsequent connections to the same device. + + This patch prevents that by also deleting the SCO connection if it is + still not established when the corresponding ACL connection is deleted. 
+ +* [v3: net-next: Wangxun interrupt and RxTx support](http://lore.kernel.org/netdev/20230203091135.3294377-1-jiawenwu@trustnetic.com/) + + Configure interrupt, setup RxTx ring, support to receive and transmit packets. + +* [v1: net: ethernet: mtk_eth_soc: various enhancements](http://lore.kernel.org/netdev/cover.1675407169.git.daniel@makrotopia.org/) + + This series brings a variety of fixes and enhancements for mtk_eth_soc, + adds support for the MT7981 SoC and facilitates sharing the SGMII PCS + code between mtk_eth_soc and mt7530. + +* [v7: io_uring: add napi busy polling support](http://lore.kernel.org/netdev/20230203060850.3060238-1-shr@devkernel.io/) + + This adds the napi busy polling support in io_uring.c. It adds a new + napi_list to the io_ring_ctx structure. This list contains the list of + napi_id's that are currently enabled for busy polling. This list is + used to determine which napi id's enabled busy polling. For faster + access it also adds a hash table. + +* [v1: next: wifi: mwifiex: Replace one-element array with flexible-array member](http://lore.kernel.org/netdev/Y9xkjXeElSEQ0FPY@work/) + + One-element arrays are deprecated, and we are replacing them with flexible + array members instead. So, replace one-element array with flexible-array + member in struct mwifiex_ie_types_rates_param_set. + +* [v1: next: wifi: mwifiex: Replace one-element arrays with flexible-array members](http://lore.kernel.org/netdev/Y9xkECG3uTZ6T1dN@work/) + + One-element arrays are deprecated, and we are replacing them with flexible + array members instead. So, replace one-element arrays with flexible-array + members in multiple structures. + +* [v2: net-next: net: page_pool: use in_softirq() instead](http://lore.kernel.org/netdev/20230203011612.194701-1-dqfext@gmail.com/) + + We use BH context only for synchronization, so we don't care if it's + actually serving softirq or not. + + As a side node, in case of threaded NAPI, in_serving_softirq() will + return false because it's in process context with BH off, making + page_pool_recycle_in_cache() unreachable. + +#### 安全增强 + +* [v1: media: imx-jpeg: Bounds check sizeimage access](http://lore.kernel.org/linux-hardening/20230204183804.never.323-kees@kernel.org/) + + The call of mxc_jpeg_get_plane_size() from mxc_jpeg_dec_irq() sets + plane_no argument to 1. + +* [v1: scsi: mpi3mr: Replace 1-element array with flex-array](http://lore.kernel.org/linux-hardening/20230204183715.never.937-kees@kernel.org/) + + Nothing else defined MPI3_NVME_ENCAP_CMD_MAX, so the "command" + buffer was being defined as a fake flexible array of size 1. Replace + this with a proper flex array. + +* [v1: USB: ene_usb6250: Allocate enough memory for full object](http://lore.kernel.org/linux-hardening/20230204183546.never.849-kees@kernel.org/) + + The allocation of PageBuffer is 512 bytes in size, but the dereferencing + of struct ms_bootblock_idi (also size 512) happens at a calculated offset + within the allocation, which means the object could potentially extend + beyond the end of the allocation. Avoid this case by just allocating + enough space to catch any accesses beyond the end. + +* [v1: btrfs: sysfs: Handle NULL return values](http://lore.kernel.org/linux-hardening/20230204183510.never.909-kees@kernel.org/) + + Each of to_fs_info(), discard_to_fs_info(), and to_space_info() can + return NULL values. Check for these so it's not possible to perform + calculations against NULL pointers. 
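+
+Several entries above and below (mwifiex, mpi3mr, xen, xfs, bpf_lpm_trie_key) perform the same conversion from one-element or zero-length arrays to C99 flexible array members. A generic before/after sketch with made-up struct names:
+
+```c
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+struct msg_old {        /* old style: "fake" flexible array of size 1 */
+        unsigned short len;
+        char data[1];
+};
+
+struct msg_new {        /* new style: flexible array member */
+        unsigned short len;
+        char data[];    /* sizeof() no longer counts a bogus element, and
+                           fortified memcpy() can check the real bounds */
+};
+
+int main(void)
+{
+        const char payload[] = "hello";
+        struct msg_new *m = malloc(sizeof(*m) + sizeof(payload));
+
+        if (!m)
+                return 1;
+        m->len = sizeof(payload);
+        memcpy(m->data, payload, sizeof(payload));
+        printf("sizeof(old)=%zu sizeof(new)=%zu data=\"%s\"\n",
+               sizeof(struct msg_old), sizeof(struct msg_new), m->data);
+        free(m);
+        return 0;
+}
+```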
+ +* [v1: bpf: Replace bpf_lpm_trie_key 0-length array with flexible array](http://lore.kernel.org/linux-hardening/20230204183241.never.481-kees@kernel.org/) + + Replace deprecated 0-length array in struct bpf_lpm_trie_key with flexible array. + + This includes fixing the selftest which was incorrectly using a + variable length struct as a header, identified earlier[1]. Avoid this + by just explicitly including the prefixlen member instead of struct + bpf_lpm_trie_key. + + [1] https://lore.kernel.org/all/202206281009.4332AA33@keescook/ + +* [v2: lm85: Bounds check to_sensor_dev_attr()->index usage](http://lore.kernel.org/linux-hardening/20230203223250.gonna.713-kees@kernel.org/) + + The index into various register arrays was not bounds checked. Provide a + simple wrapper to bounds check the index, adding robustness in the face + of memory corruption, unexpected index manipulation, etc. + +* [v1: randstruct: temporarily disable clang support](http://lore.kernel.org/linux-hardening/20230203194201.92015-1-ebiggers@kernel.org/) + + Randstruct with clang is currently unsafe to use in any clang release + that supports it, due to a clang bug that is causing miscompilations: + "-frandomize-layout-seed inconsistently randomizes all-function-pointers + structs" (https://github.com/llvm/llvm-project/issues/60349). Disable + it temporarily until the bug is fixed and the fix is released in a clang + version that can be checked for. + +* [v1: uaccess: Add minimum bounds check on kernel buffer size](http://lore.kernel.org/linux-hardening/20230203193523.never.667-kees@kernel.org/) + + While there is logic about the difference between ksize and usize, + copy_struct_from_user() didn't check the size of the destination buffer + (when it was known) against ksize. Add this check so there is an upper + bounds check on the possible memset() call, otherwise lower bounds + checks made by callers will trigger bounds warnings under -Warray-bounds. + +* [v2: arm64: Support Clang UBSAN trap codes for better reporting](http://lore.kernel.org/linux-hardening/20230203173946.gonna.972-kees@kernel.org/) + + When building with CONFIG_UBSAN_TRAP=y on arm64, Clang encodes the UBSAN + check (handler) type in the esr. Extract this and actually report these + traps as coming from the specific UBSAN check that tripped. + +* [v1: pstore/blk: Export a method to implemente panic_write()](http://lore.kernel.org/linux-hardening/20230203113515.93540-1-victor@allwinnertech.com/) + + The panic_write() is necessary to write the pstore frontend message + to blk devices when panic. Here is a way to register panic_write when + we use "best_effort" way to register the pstore blk-backend. + +* [v1: next: xen: Replace one-element array with flexible-array member](http://lore.kernel.org/linux-hardening/Y9xjN6Wa3VslgXeX@work/) + + One-element arrays are deprecated, and we are replacing them with flexible + array members instead. So, replace one-element array with flexible-array + member in struct xen_page_directory. + + This helps with the ongoing efforts to tighten the FORTIFY_SOURCE + routines on memcpy() and help us make progress towards globally + enabling -fstrict-flex-arrays=3 [1]. + +* [v1: next: xfs: Replace one-element arrays with flexible-array members](http://lore.kernel.org/linux-hardening/Y9xiYmVLRIKdpJcC@work/) + + One-element arrays are deprecated, and we are replacing them with flexible + array members instead. 
So, replace one-element arrays with flexible-array + members in structures xfs_attr_leaf_name_local and + xfs_attr_leaf_name_remote. + +* [v2: 4.14: Backport oops_limit to 4.14](http://lore.kernel.org/linux-hardening/20230203003354.85691-1-ebiggers@kernel.org/) + + This series backports the patchset + "exit: Put an upper limit on how often we can oops" + (https://lore.kernel.org/linux-mm/20221117233838.give.484-kees@kernel.org/T/#u) + to 4.14, as recommended at + https://googleprojectzero.blogspot.com/2023/01/exploiting-null-dereferences-in-linux.html + +* [v2: 4.19: Backport oops_limit to 4.19](http://lore.kernel.org/linux-hardening/20230203002717.49198-1-ebiggers@kernel.org/) + + This series backports the patchset + "exit: Put an upper limit on how often we can oops" + (https://lore.kernel.org/linux-mm/20221117233838.give.484-kees@kernel.org/T/#u) + to 4.19, as recommended at + https://googleprojectzero.blogspot.com/2023/01/exploiting-null-dereferences-in-linux.html + +* [v1: 5.4: Backport oops_limit to 5.4](http://lore.kernel.org/linux-hardening/20230202044255.128815-1-ebiggers@kernel.org/) + + This series backports the patchset + "exit: Put an upper limit on how often we can oops" + (https://lore.kernel.org/linux-mm/20221117233838.give.484-kees@kernel.org/T/#u) + to 5.4, as recommended at + https://googleprojectzero.blogspot.com/2023/01/exploiting-null-dereferences-in-linux.html + This follows the backports to 5.10 and 5.15 which already released. + +* [v1: use canonical ftrace path whenever possible](http://lore.kernel.org/linux-hardening/20230130181915.1113313-1-zwisler@google.com/) + + The canonical location for the tracefs filesystem is at /sys/kernel/tracing. + + But, from Documentation/trace/ftrace.rst: + + Before 4.1, all ftrace tracing control files were within the debugfs + file system, which is typically located at /sys/kernel/debug/tracing. + +#### 异步 IO + +* [v7: liburing: add api for napi busy poll](http://lore.kernel.org/io-uring/20230205002424.102422-1-shr@devkernel.io/) + + This adds two new api's to set/clear the napi busy poll settings. The two + new functions are called: + - io_uring_register_napi + - io_uring_unregister_napi + + The patch series also contains the documentation for the two new functions + and two example programs. The client program is called napi-busy-poll-client + and the server program napi-busy-poll-server. The client measures the + roundtrip times of requests. + +* [v2: io_uring,audit: don't log IORING_OP_MADVISE](http://lore.kernel.org/io-uring/b5dfdcd541115c86dbc774aa9dd502c964849c5f.1675282642.git.rgb@redhat.com/) + + fadvise and madvise both provide hints for caching or access pattern for + file and memory respectively. Skip them. + +* [GIT PULL: Upgrade to clang-17 (for liburing's CI)](http://lore.kernel.org/io-uring/a9aac5c7-425d-8011-3c7c-c08dfd7d7c2f@gnuweeb.org/) + + clang-17 is now available. Upgrade the clang version in the liburing's + CI to clang-17. + + Two prep patches to address `-Wextra-semi-stmt` warnings: + + - Remove unnecessary semicolon (Alviro) + + - Wrap the CHECK() macro with a do-while statement (Alviro) + +#### Rust For Linux + +* [v1: rust: sync: Arc: Implement Debug and Display](http://lore.kernel.org/rust-for-linux/20230201232244.212908-1-boqun.feng@gmail.com/) + + I found that our Arc doesn't implement `Debug` or `Display` when I tried + to play with them, therefore add these implementation. 
+ + Wedson, I know that you are considering to get rid of `ArcBorrow`, so + the patch #3 may have some conflicts with what you may be working on. + +* [v3: rust: MAINTAINERS: Add the zulip link](http://lore.kernel.org/rust-for-linux/20230201184525.272909-1-boqun.feng@gmail.com/) + + Zulip organization "rust-for-linux" was created 2 years ago[1] and has + proven to be a great place for Rust related discussion, therefore + add the information in MAINTAINERS file so that newcomers have more + options to find guide and help. + +* [v1: rust: add this_module macro](http://lore.kernel.org/rust-for-linux/20230131130841.318301-1-yakoyoku@gmail.com/) + + Adds a Rust equivalent to the handy THIS_MODULE macro from C. + +#### BPF + +* [v2: bpf-next: Add support for tracing programs in BPF_PROG_RUN](http://lore.kernel.org/bpf/20230203182812.20657-1-grantseltzer@gmail.com/) + + This patch changes the behavior of how BPF_PROG_RUN treats tracing + (fentry/fexit) programs. Previously only a return value is injected + but the actual program was not run. New behavior mirrors that of + running raw tracepoint BPF programs which actually runs the + instructions of the program via `bpf_prog_run()` + +* [v1: uapi: add missing ip/ipv6 header dependencies for linux/stddef.h](http://lore.kernel.org/bpf/20230203160448.1314205-1-herton@redhat.com/) + + Since commit 58e0be1ef6118 ("net: use struct_group to copy ip/ipv6 + header addresses"), ip and ipv6 headers started to use the __struct_group + definition, which is defined at include/uapi/linux/stddef.h. However, + linux/stddef.h isn't explicitly included in include/uapi/linux/{ip,ipv6}.h, + +* [v3: bpf-next: Document kfunc lifecycle / stability expectations](http://lore.kernel.org/bpf/20230203155727.793518-1-void@manifault.com/) + + This is v3 of the proposal for documenting BPF kfunc lifecycle and + stability. + +* [v1: bpf-next: libbpf: allow users to set kprobe/uprobe attach mode](http://lore.kernel.org/bpf/20230203031742.1730761-1-imagedong@tencent.com/) + + By default, libbpf will attach the kprobe/uprobe eBPF program in the + latest mode that supported by kernel. In this series, we add the support + to let users manually attach kprobe/uprobe in legacy or perf mode in the + 1th patch. + + And in the 2th patch, we add the selftests for it. + + *** BLURB HERE *** + +* [v2: perf lock contention: Improve aggr x filter combination](http://lore.kernel.org/bpf/20230203021324.143540-1-namhyung@kernel.org/) + + The callstack filter can be useful to debug lock issues but it has a + limitation that it only works with caller aggregation mode (which is the + default setting). IOW it cannot filter by callstack when showing tasks + or lock addresses/names. + +* [v1: bpf-next: selftests/bpf: Initialize tc in xdp_synproxy](http://lore.kernel.org/bpf/20230202235335.3403781-1-iii@linux.ibm.com/) + + xdp_synproxy/xdp fails in CI with: + + Error: bpf_tc_hook_create: File exists + + The XDP version of the test should not be calling bpf_tc_hook_create(); + the reason it's happening anyway is that if we don't specify --tc on the + command line, tc variable remains uninitialized. + +* [v1: tools/resolve_btfids: Tidy HOST_OVERRIDES](http://lore.kernel.org/bpf/20230202224253.40283-1-irogers@google.com/) + + Don't set EXTRA_CFLAGS to HOSTCFLAGS, ensure CROSS_COMPILE isn't + passed through. 
+ + This patch is based on top of: + https://lore.kernel.org/bpf/20230202112839.1131892-1-jolsa@kernel.org/ + +* [v1: net: virtio-net: Keep stop() to follow mirror sequence of open()](http://lore.kernel.org/bpf/20230202163516.12559-1-parav@nvidia.com/) + + Cited commit in fixes tag frees rxq xdp info while RQ NAPI is + still enabled and packet processing may be ongoing. + + Follow the mirror sequence of open() in the stop() callback. + This ensures that when rxq info is unregistered, no rx + packet processing is ongoing. + +* [v1: bpf-next: tools/resolve_btfids: Compile resolve_btfids as host program](http://lore.kernel.org/bpf/20230202112839.1131892-1-jolsa@kernel.org/) + + Making resolve_btfids to be compiled as host program so + we can avoid cross compile issues as reported by Nathan. + + Also we no longer need HOST_OVERRIDES for BINARY target, + just for 'prepare' targets. + +* [v1: virtio-net: support AF_XDP zero copy](http://lore.kernel.org/bpf/20230202110058.130695-1-xuanzhuo@linux.alibaba.com/) + + XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero + copy feature of xsk (XDP socket) needs to be supported by the driver. The + performance of zero copy is very good. mlx5 and intel ixgbe already support + this feature, This patch set allows virtio-net to support xsk's zerocopy xmit feature. + +* [v2: bpf-next: libbpf: Add wakeup_events to creation options](http://lore.kernel.org/bpf/20230202062549.632425-1-arilou@gmail.com/) + + Add option to set when the perf buffer should wake up, by default the + perf buffer becomes signaled for every event that is being pushed to it. + +* [v1: virtio-net: close() to follow mirror of open()](http://lore.kernel.org/bpf/20230202050038.3187-1-parav@nvidia.com/) + + This two small patches improves ndo_close() callback to follow + the mirror sequence of ndo_open() callback. This improves the code auditing + and also ensure that xdp rxq info is not unregistered while NAPI on RXQ is ongoing. + +* [v1: bpf-next: bpf, mm: bpf memory usage](http://lore.kernel.org/bpf/20230202014158.19616-1-laoar.shao@gmail.com/) + + Currently we can't get bpf memory usage reliably. bpftool now shows the + bpf memory footprint, which is difference with bpf memory usage. + +* [v2: tools/resolve_btfids: Tidy host CFLAGS forcing](http://lore.kernel.org/bpf/20230201213743.44674-1-irogers@google.com/) + + Avoid passing CROSS_COMPILE to submakes and ensure CFLAGS is forced to + HOSTCFLAGS for submake builds. This fixes problems with cross + compilation. + + Tidy to not unnecessarily modify/export CFLAGS, make the override for + prepare and build clearer. + +* [v3: Documentation/bpf: Document API stability expectations for kfuncs](http://lore.kernel.org/bpf/20230201174449.94650-1-toke@redhat.com/) + + Following up on the discussion at the BPF office hours (and subsequent + discussion), this patch adds a description of API stability expectations + for kfuncs. The goal here is to manage user expectations about what kind of + stability can be expected for kfuncs exposed by the kernel. + +* [v1: Add ftrace direct call for arm64](http://lore.kernel.org/bpf/20230201163420.1579014-1-revest@chromium.org/) + + This series adds ftrace direct call support to arm64. + This makes BPF tracing programs (fentry/fexit/fmod_ret/lsm) work on arm64. + + It is meant to apply on top of the arm64 tree which contains Mark Rutland's + series on CALL_OPS [1] under the for-next/ftrace tag. 
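+
+  For context, this is the kind of fentry program (BPF side only) that the direct-call support makes attachable on arm64; a minimal sketch assuming libbpf headers and a generated vmlinux.h, with do_unlinkat chosen purely as an illustrative attach point:
+
+  ```c
+  /* fentry_demo.bpf.c: build with clang -target bpf, load and attach via libbpf */
+  #include "vmlinux.h"
+  #include <bpf/bpf_helpers.h>
+  #include <bpf/bpf_tracing.h>
+
+  char LICENSE[] SEC("license") = "GPL";
+
+  SEC("fentry/do_unlinkat")
+  int BPF_PROG(trace_unlink_entry, int dfd, struct filename *name)
+  {
+          bpf_printk("do_unlinkat entered, dfd=%d", dfd);
+          return 0;
+  }
+  ```
+
+  Where the architecture supports ftrace direct calls, such programs attach through a BPF trampoline; the series above is what enables that path on arm64.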
+ +* [v1: bpf-next: bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64](http://lore.kernel.org/bpf/1675254998-4951-1-git-send-email-yangtiezhu@loongson.cn/) + + The intention of this patchset is to make the code more readable, + no functional changes, based on bpf-next. + + If this patchset makes no sense, please ignore it and sorry for that. + +* [v5: bpf-next: xdp: introduce xdp-feature support](http://lore.kernel.org/bpf/cover.1675245257.git.lorenzo@kernel.org/) + + Introduce the capability to export the XDP features supported by the NIC. + Introduce a XDP compliance test tool (xdp_features) to check the features + exported by the NIC match the real features supported by the driver. + Allow XDP_REDIRECT of non-linear XDP frames into a devmap. + +* [v1: bpf-next: ice: add XDP mbuf support](http://lore.kernel.org/bpf/20230131204506.219292-1-maciej.fijalkowski@intel.com/) + + although this work started as an effort to add multi-buffer XDP support + to ice driver, as usual it turned out that some other side stuff needed to be addressed, so let me give you an overview. + +* [v3: bpf-next: BPF rbtree next-gen datastructure](http://lore.kernel.org/bpf/20230131180016.3368305-1-davemarchevsky@fb.com/) + + This series adds a rbtree datastructure following the "next-gen + datastructure" precedent set by recently-added linked-list [0]. This is + a reimplementation of previous rbtree RFC [1] to use kfunc + kptr + instead of adding a new map type. + +* [v2: bpf-next: bpf: Refactor release_regno searching logic](http://lore.kernel.org/bpf/20230131171038.2648165-1-davemarchevsky@fb.com/) + + Kfuncs marked KF_RELEASE indicate that they release some + previously-acquired arg. The verifier assumes that such a function will + only have one arg reg w/ ref_obj_id set, and that that arg is the one to + be released. Multiple kfunc arg regs have ref_obj_id set is considered + an invalid state. + +* [v1: dwarves: dwarves: sync with libbpf-1.1](http://lore.kernel.org/bpf/1675169241-32559-1-git-send-email-alan.maguire@oracle.com/) + + This will pull in BTF dedup improvements + + de048b6 libbpf: Resolve enum fwd as full enum64 and vice versa + f3c51fe libbpf: Btf dedup identical struct test needs check for nested structs/arrays + +* [v2: net-next: vsock: add support for sockmap](http://lore.kernel.org/bpf/20230118-support-vsock-sockmap-connectible-v2-0-58ffafde0965@bytedance.com/) + + Add support for sockmap to vsock. + + We're testing usage of vsock as a way to redirect guest-local UDS requests to + the host and this patch series greatly improves the performance of such a setup. + +* [v3: net: ixgbe: allow to increase MTU to 3K with XDP enabled](http://lore.kernel.org/bpf/20230131032357.34029-1-kerneljasonxing@gmail.com/) + + Recently I encountered one case where I cannot increase the MTU size + directly from 1500 to a much bigger value with XDP enabled if the + server is equipped with IXGBE card, which happened on thousands of + servers in production environment. After appling the current patch, + we can set the maximum MTU size to 3K. + +* [v1: bpf-next: selftests/bpf: Try to address xdp_metadata crashes](http://lore.kernel.org/bpf/20230130215137.3473320-1-sdf@google.com/) + + Commit e04ce9f4040b ("selftests/bpf: Make crashes more debuggable in + test_progs") hasn't uncovered anything interesting besides + confirming that the test passes successfully, but crashes eventually [0]. 
+ +* [v1: bpf: add bpf_link support for BPF_NETFILTER programs](http://lore.kernel.org/bpf/20230130150432.24924-1-fw@strlen.de/) + + Doesn't apply, doesn't work -- there is no BPF_NETFILTER program type. + + nf_hook_run_bpf() (c-function that creates the program context and + calls the real bpf prog) would be "updated" to use the bpf dispatcher to + avoid the indirect call overhead. + + Does that seem ok to you? I'd ignore the bpf dispatcher for now and would work on the needed verifier changes first. + +### 周边技术动态 + +#### Qemu + +* [v10: riscv: Allow user to set the satp mode](http://lore.kernel.org/qemu-devel/20230203055812.257458-1-alexghiti@rivosinc.com/) + + This introduces new properties to allow the user to set the satp mode, + see patch 3 for full syntax. In addition, it prevents cpus to boot in a satp mode they do not support (see patch 4). + +* [v10: hw/riscv: handle kernel_entry high bits with 32bit CPUs](http://lore.kernel.org/qemu-devel/20230202135810.1657792-1-dbarboza@ventanamicro.com/) + + This new version removed the translate_fn() from patch 1 because it + wasn't removing the sign-extension for pentry as we thought it would. + A more detailed explanation is given in the commit msg of patch 1. + + We're now retrieving the 'lowaddr' value from load_elf_ram_sym() and + using it when we're running a 32-bit CPU. This worked with 32 bit 'virt' machine booting with the -kernel option. + +* [v1: Add RISC-V vector cryptography extensions](http://lore.kernel.org/qemu-devel/20230202124230.295997-1-lawrence.hunter@codethink.co.uk/) + + This patch series introduces an implementation for the six instruction sets + of the draft RISC-V vector cryptography extensions specification. + + This patch set implements the instruction sets as per the 20221202 + version of the specification (1). We plan to update to the latest spec + once stabilised. + +* [v1: Add basic ACPI support for risc-v virt](http://lore.kernel.org/qemu-devel/20230202045223.2594627-1-sunilvl@ventanamicro.com/) + + This series adds the basic ACPI support for the RISC-V virt machine. + Currently only INTC interrupt controller specification is approved by the + UEFI forum. External interrupt controller support in ACPI is in progress. + + The basic infrstructure changes are mostly leveraged from ARM. + +* [v1: target/riscv: Add RVV registers to log](http://lore.kernel.org/qemu-devel/20230201142454.109260-1-ivan.klokov@syntacore.com/) + + Added QEMU option 'rvv' to add RISC-V RVV registers to log like regular regs. + +* [v2: target/riscv: set tval for triggered watchpoints](http://lore.kernel.org/qemu-devel/20230131170955.752743-1-geomatsi@gmail.com/) + + According to priviledged spec, if [sm]tval is written with a nonzero + value when a breakpoint exception occurs, then [sm]tval will contain + the faulting virtual address. Set tval to hit address when breakpoint exception is triggered by hardware watchpoint. + +#### U-Boot + +* [v2: Migrate to split config](http://lore.kernel.org/u-boot/20230204002619.938387-1-sjg@chromium.org/) + + U-Boot uses an SPL prefix on CONFIG options to indicate when an option + relates to SPL. For example, while CONFIG_TEXT_BASE is the text base for + U-Boot proper, CONFIG_SPL_TEXT_BASE is the text base for SPL. + +* [v1: RFC: Migrate to split config](http://lore.kernel.org/u-boot/20230131152702.249197-1-sjg@chromium.org/) + + U-Boot uses an SPL prefix on CONFIG options to indicate when an option + relates to SPL. 
For example, while CONFIG_TEXT_BASE is the text base for + U-Boot proper, CONFIG_SPL_TEXT_BASE is the text base for SPL. + + Within the code it is possible do things like CONFIG_VAL(TEXT_BASE) to + get that value. It returns the appropriate option, depending on the phase being built. + +* [v2: riscv: cpu: ax25: Simplify cache enabling logic in harts_early_init()](http://lore.kernel.org/u-boot/20230131094034.12423-1-peterlin@andestech.com/) + + This patch improves the cache enabling operation in harts_early_init(), + also moves the CSR definition to include/asm/arch-andes/csr.h and drops + unnecessary i/d-cache disable functions from cleanup_before_linux(). + ## 20230129:第 31 期 ### 内核动态