| Message ID | 7f9fe9d92aff101a12057e8571772428028dc6ae.1712202385.git.john@fremlin.org |
|---|---|
| State | New |
| Series | [bug#70175] gnu: llama-cpp: support OpenBLAS for faster prompt processing |
John Fremlin via Guix-patches via <guix-patches@gnu.org> writes:

> OpenBLAS is recommended by https://github.com/ggerganov/llama.cpp
>
> Change-Id: Iaf6f22252da13e2d6f503992878b35b0da7de0aa
> ---
>  gnu/packages/machine-learning.scm | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)

Looks good to me, I tweaked the commit message a bit and pushed this to
master as d8a63bbcee616f224c10462dbfb117ec009c50d8.

Chris
```diff
diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 225bff0ca2..ea3674ce3e 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -542,6 +542,8 @@ (define-public llama-cpp
     (build-system cmake-build-system)
     (arguments
      (list
+      #:configure-flags
+      '(list "-DLLAMA_BLAS=ON" "-DLLAMA_BLAS_VENDOR=OpenBLAS")
       #:modules '((ice-9 textual-ports)
                   (guix build utils)
                   ((guix build python-build-system) #:prefix python:)
@@ -576,8 +578,9 @@ (define-public llama-cpp
            (lambda _
              (copy-file "bin/main"
                         (string-append #$output "/bin/llama")))))))
     (inputs (list python))
+    (native-inputs (list pkg-config))
     (propagated-inputs
-     (list python-numpy python-pytorch python-sentencepiece))
+     (list python-numpy python-pytorch python-sentencepiece openblas))
     (home-page "https://github.com/ggerganov/llama.cpp")
     (synopsis "Port of Facebook's LLaMA model in C/C++")
     (description "This package provides a port to Facebook's LLaMA collection
```
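For readers outside Guix, the `#:configure-flags` the patch adds map directly onto a plain CMake invocation. A minimal sketch, assuming a llama.cpp checkout contemporary with this patch (the `LLAMA_BLAS*` options have since been renamed upstream) and that OpenBLAS and pkg-config are installed on the host:

```shell
# Configure llama.cpp against OpenBLAS, mirroring the flags the
# Guix package now passes (hypothetical local build, not the Guix build).
cmake -B build \
  -DLLAMA_BLAS=ON \
  -DLLAMA_BLAS_VENDOR=OpenBLAS

# Compile with the BLAS-accelerated matrix kernels enabled.
cmake --build build --config Release
```

With `LLAMA_BLAS_VENDOR=OpenBLAS`, CMake locates OpenBLAS (via pkg-config, which is why the patch also adds `pkg-config` to `native-inputs`) and links the prompt-processing matrix multiplications against it.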