[bug#40002] WIP: Add mxnet.

Message ID 20200309182508.14144-1-mail@ambrevar.xyz
State Work in progress

Commit Message

Pierre Neidhardt March 9, 2020, 6:25 p.m. UTC
* gnu/packages/machine-learning.scm (mxnet): New variable.
---
 gnu/packages/machine-learning.scm | 62 +++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

Comments

Pierre Neidhardt March 9, 2020, 6:38 p.m. UTC | #1
This definition builds the library.

One obstacle: the tests take a very long time to complete, typically
30+ minutes on one core, while CTest has a default timeout of 1500
seconds (25 minutes).  I've passed "--timeout 6000" to `ctest` to work
around it.
What do you think?  This seems to be the first package that needs this.
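
For reference, this is what the check phase looks like in the patch
below:

--8<---------------cut here---------------start------------->8---
         (replace 'check
           (lambda _
             ;; Raise the test timeout above CTest's 1500-second default.
             (invoke "ctest" "--timeout" "6000")))
--8<---------------cut here---------------end--------------->8---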

openexr sets

--8<---------------cut here---------------start------------->8---
             (setenv "CTEST_TEST_TIMEOUT" "2000")
--8<---------------cut here---------------end--------------->8---

but that does not work for MXNet for some reason.
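
I've kept an equivalent phase commented out in the patch; if the
environment-variable approach can be made to work, it would look
roughly like this (the 6000 figure just mirrors the ctest flag and is
a guess rather than a measured bound):

--8<---------------cut here---------------start------------->8---
         (add-before 'check 'increase-test-timeout
           (lambda _
             (setenv "CTEST_TEST_TIMEOUT" "6000")
             #t))
--8<---------------cut here---------------end--------------->8---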

Thoughts?

Patch

diff --git a/gnu/packages/machine-learning.scm b/gnu/packages/machine-learning.scm
index 50c41dc113..9408c4b8e0 100644
--- a/gnu/packages/machine-learning.scm
+++ b/gnu/packages/machine-learning.scm
@@ -60,6 +60,8 @@ 
   #:use-module (gnu packages graphviz)
   #:use-module (gnu packages gstreamer)
   #:use-module (gnu packages image)
+  #:use-module (gnu packages image-processing)
+  #:use-module (gnu packages jemalloc)
   #:use-module (gnu packages linux)
   #:use-module (gnu packages lisp-xyz)
   #:use-module (gnu packages maths)
@@ -2121,3 +2123,63 @@  These include a barrier, broadcast, and allreduce.")
 technique that can be used for visualisation similarly to t-SNE, but also for
 general non-linear dimension reduction.")
     (license license:bsd-3)))
+
+(define-public mxnet
+  (package
+    (name "mxnet")
+    (version "1.6.0")
+    (source
+     (origin
+       (method git-fetch)
+       (uri (git-reference
+             (url "https://github.com/apache/incubator-mxnet")
+             (commit version)
+             ;; TODO: Test if possible to include system version of those deps:
+             ;; mkldnn
+             ;; openmp
+             (recursive? #t)))
+       (file-name (git-file-name name version))
+       (sha256
+        (base32
+         "1jlk0a9kls4fxxq4sap21hk6k3vhqhlflx5jm8i2amwh1z22sj09"))))
+    (build-system cmake-build-system)
+    (arguments
+     `(#:configure-flags '("-DUSE_CUDA=OFF")
+       #:parallel-build? #f             ; TODO: Try rebuilding in parallel.
+       #:phases
+       (modify-phases %standard-phases
+         (replace 'check
+           (lambda _
+             ;; Raise the test timeout above CTest's 1500-second default.
+             (invoke "ctest" "--timeout" "6000")))
+         ;; (add-before 'check 'increase-test-timeout
+         ;;   (lambda _
+         ;;     ;; TODO: Set right timeout.
+         ;;     (setenv "CTEST_TEST_TIMEOUT" "6000")
+         ;;     #t))
+         )))
+    (native-inputs
+     `(("pkg-config" ,pkg-config)
+       ("perl" ,perl)
+       ;; TODO: Use our gtest
+       ;; ("googletest" ,googletest)
+       ))
+    (inputs
+     `(("lapack" ,lapack)
+       ("openblas" ,openblas)
+       ("opencv" ,opencv)
+       ("jemalloc" ,jemalloc)))
+    (home-page "https://mxnet.apache.org/")
+    (synopsis "Distributed deep learning framework")
+    (description
+     "Apache MXNet (incubating) is a deep learning framework.  It allows you
+to mix symbolic and imperative programming.  At its core, MXNet contains a
+dynamic dependency scheduler that automatically parallelizes both symbolic and
+imperative operations on the fly.  A graph optimization layer on top of that
+makes symbolic execution fast and memory efficient.  MXNet is portable and
+lightweight, scaling effectively to multiple GPUs and multiple machines.
+
+MXNet is more than a deep learning project.  It is also a collection of
+blueprints and guidelines for building deep learning systems, along with
+interesting insights into deep learning systems for hackers.")
+    (license license:asl2.0)))