
llama.cpp 5298

Implementation of large language model (LLM) inference in C++

Section: runtime-creativity

Depends: gcc-runtime, libcl, vulkan-loader

Depends (build): glslang, opencl-registry-api, shaderc, vulkan-headers

Depends (library): bitwarden, curl, discord, electron-netease-cloud-music, element-desktop, feishin, gcc-13, gcc-15, gcc-runtime, glibc, google-chrome, libcl, llvm-20, obs-studio, teams-for-linux, vscodium, vulkan-loader, yesplaymusic


Upstream: source (git) 5298

Available versions

Versions: 5298, 4879
amd64 15.4 MiB
arm64 14.2 MiB
loongarch64 14.2 MiB
loongson3 13.7 MiB
ppc64el 15.3 MiB
riscv64 15.0 MiB
Copyleft 2011–2024, Members of the community.