57 posts in 'Deep Learning/setup_related'

  1. 2023.03.02 How to install cusignal when 'pip install cusignal' gives an error
  2. 2023.03.02 When 'pip install cupy' does not install
  3. 2021.03.16 Error saying cusolver64_10.dll could not be found
  4. 2021.03.16 Error while installing tensorflow-gpu
  5. 2020.08.26 Hello, I'm a complete beginner. MacOS, Windows, or Linux? I want to study AI properly; which operating system is best to start with, in terms of performance...
  6. 2020.06.05 https://lambdalabs.com (not sure whether they sell to Korea...) You can put together various configurations on this site and check the approximate prices
  7. 2017.07.25 #How to install caffe in windows in just 5 minutes !YouTube - Jun 3, 2017
  8. 2017.07.25 Installing Caffe on Windows
  9. 2017.05.13 Installing PyCharm: Install PyCharm to develop TensorFlow project
  10. 2017.03.28 How to Install TensorFlow | NVIDIA
  11. 2017.03.25 tensorflow install & pycharm install & setting
  12. 2017.03.25 Ubuntu 16.04LTS + TensorFlow + Cuda + Cudnn + Pycharm + Anaconda
  13. 2017.03.24 How to build TensorFlow (adding my own code to TensorFlow)
  14. 2017.03.21 [Caffe] Installing Caffe on Windows (as of 161102)
  15. 2017.03.21 Installing Caffe / CUDA / cuDNN / Python on Windows 10
  16. 2017.03.21 Installing Keras, Theano and Dependencies on Windows 10
  17. 2017.03.18 Installing TensorFlow on Windows
  18. 2017.03.18 Installing TensorFlow
  19. 2017.03.18 Tensorflow installation on Windows
  20. 2017.03.18 Installing TensorFlow on Windows with Anaconda
  21. 2017.03.18 Installing TensorFlow 1.0.0 on Windows 10 64-bit
  22. 2017.03.01 ssl verification error ssl certificate_verify_failed
  23. 2017.03.01 Installing Python Keras+Tensorflow on Windows 7 64-bit
  24. 2017.03.01 Installing TensorFlow-v1.0.0 + Keras (Windows/Linux/macOS)
  25. 2017.02.07 How to use Tensorflow with GPU on Windows
  26. 2017.02.07 Windows 10 + Keras (tensorflow backend) + Anaconda: <Installing Keras>
  27. 2017.01.31 Intro to Machine Learning using Tensorflow – Part 1
  28. 2017.01.17 GPU-accelerated Theano & Keras on Windows 10 native
  29. 2017.01.12 Watch 'How to Install Tensorflow on Windows10 with GPU support' on YouTube
  30. 2017.01.12 Installing Theano in Windows 7 64-bit
Posted by uniqueone
,

When I tried 'pip install cupy', the package would not install; pip just kept reporting that the build was still in progress.

Searching turned up https://twitter.com/mitmul/status/986171511873523712?lang=en, https://github.com/cupy/cupy/issues/1643#issuecomment-420896839, and https://docs.cupy.dev/en/latest/install.html#install-cupy, which all suggest that if 'pip install cupy' does not work, you should install the prebuilt wheel that matches your CUDA version, e.g. 'pip install cupy-cuda112' for CUDA 11.2. That worked.

To check your CUDA version on Windows, run 'nvcc --version' in a command prompt.
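
As a small illustration (my addition, not in the original post), the following sketch assumes nvcc is on PATH and prints the pip command for the matching prebuilt CuPy wheel; the cupy-cudaXYZ naming follows the CuPy install docs linked above:

# Detect the local CUDA toolkit version and print the matching CuPy install command.
import re
import subprocess

out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
match = re.search(r"release (\d+)\.(\d+)", out)
if match:
    major, minor = match.groups()
    print("pip install cupy-cuda{}{}".format(major, minor))   # e.g. cupy-cuda112 for CUDA 11.2
else:
    print("Could not detect the CUDA version; is the CUDA toolkit installed?")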

Posted by uniqueone
,

TensorFlow complained that it could not find cusolver64_10.dll, so I followed the advice at

stackoverflow.com/questions/65608713/tensorflow-gpu-could-not-load-dynamic-library-cusolver64-10-dll-dlerror-cuso

which says to 'Rename file cusolver64_11.dll To cusolver64_10.dll'. The log below shows the error:

2021-03-16 19:26:14.435563: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-03-16 19:26:14.498628: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-03-16 19:26:14.499003: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-03-16 19:26:14.527712: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-03-16 19:26:14.532245: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-03-16 19:26:14.535978: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
2021-03-16 19:26:14.585485: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
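
A minimal sketch of that workaround (my addition; the CUDA path below is the default CUDA 11.x location and is an assumption, so adjust it to your install). It copies rather than renames, which keeps the original DLL intact:

# Make cusolver64_11.dll visible to TensorFlow under the name it is looking for.
import shutil
from pathlib import Path

cuda_bin = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin")  # adjust version
src = cuda_bin / "cusolver64_11.dll"
dst = cuda_bin / "cusolver64_10.dll"
if src.exists() and not dst.exists():
    shutil.copy(src, dst)   # run from an elevated prompt; Program Files needs admin rights
    print("Copied", src.name, "->", dst.name)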

Posted by uniqueone
,

While installing tensorflow-gpu I got the following error:

Installing collected packages: tensorflow-gpu

ERROR: Could not install packages due to an EnvironmentError: [WinError 5] 액세스가 거부되었습니다 (Access is denied): 'C:\ProgramData\Anaconda3\envs\venv20\Lib\site-packages\tensorflow\lite\experimental\microfrontend\python\ops\_audio_microfrontend_op.so'

Consider using the `--user` option or check the permissions.

It happened even when running cmd as administrator. Following the advice in github.com/pypa/pip/issues/6068, I ran

pip install --user tensorflow-gpu

and that installed successfully.
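
For reference (my addition, not part of the original post), you can check where --user installs actually go, which is a per-user site-packages directory rather than the Anaconda env folder that raised WinError 5:

# Show the per-user site-packages directory used by pip --user.
import site
print(site.USER_SITE)
print(site.ENABLE_USER_SITE)   # must be True for user-site packages to be importable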

Posted by uniqueone
,

https://www.facebook.com/groups/TensorFlowKR/permalink/1282565068751215/


Hello, I'm a complete beginner.
MacOS, Windows, or Linux?
I want to study AI properly, and I'm wondering which operating system is best to start with,
in terms of performance (assuming the same hardware)
and in terms of community (for Q&A later on).
Thanks in advance. ^____^
MacOS and Windows are familiar to me, and I'm willing to get more familiar with Linux if needed.

I strongly (x10000) recommend a Linux machine... Windows has cmd, but the terminal is genuinely convenient, and most tutorials published online are easier to run on Linux...

Personally, the best setup for me was to build a good Linux machine and ssh into it from a Mac to run training: development on the Mac, training on the Linux box.

Linux is more convenient in many respects, both for performance and for community support.
ssh access from a Mac is convenient as well.

Since you're already comfortable with a Mac, I'm sure you'll adapt to Linux quickly without much friction.

Posted by uniqueone
,

https://www.facebook.com/groups/TensorFlowKR/permalink/1215140582160331/


Hello TensorFlow Korea!

I'm building a deep learning computer setup and would like your help.
We're planning to build a setup worth about 50 million KRW for our lab, and I'd appreciate any advice on how to put together a setup of this scale!

https://lambdalabs.com

(I don't know whether they sell to Korea...) You can put together various configurations on this site and get a rough idea of the prices.

In fact, with that budget each of the four of you could buy a machine with about four desktop-class GPUs; but if you have to buy a single machine, you could configure it with four Teslas or eight Titan RTXs.



Posted by uniqueone
,

Installing Caffe ๑•‿•๑ (1)
http://kimering.blogspot.kr/2017/03/caffe-1.html?m=1
Posted by uniqueone
,

https://www.youtube.com/watch?v=TFpStiC2wDg

https://youtu.be/TFpStiC2wDg

[cs-hcmup-2016] Install PyCharm to develop TensorFlow project


Posted by uniqueone
,

How to Install TensorFlow | NVIDIA
http://www.nvidia.com/object/gpu-accelerated-applications-tensorflow-installation.html
Posted by uniqueone
,

http://yeramee.tistory.com/1

Posted by uniqueone
,

Ubuntu 16.04LTS + TensorFlow + Cuda + Cudnn + Pycharm + Anaconda

http://yeramee.tistory.com/3


Posted by uniqueone
,

https://deeptensorflow.github.io/2017/02/21/how-to-build-tensorflow-with-pip/



Why build with bazel yourself?

According to information posted in the tensorflow community, a build you compile yourself runs slightly faster.
I plan to run my own experiments later and post a speed comparison.
Also, if you modify the internal source code and build it yourself, you can create 'your own TensorFlow'.

Setting up the environment before the build

pip (the Python package manager) and the Java JDK must be installed.
The installation method differs per OS but is very simple.
Most people probably already have them installed, so I'll skip this part.

Downloading bazel

You can download it from https://bazel.build/.
Downloading with curl works as follows:

export BAZELRC=/home/<yourid>/.bazelrc
export BAZEL_VERSION=0.4.2
mkdir /home/<yourid>/bazel
cd /home/<yourid>/bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh
curl -fSsL -o /home/<yourid>/bazel/LICENSE.txt https://raw.githubusercontent.com/bazelbuild/bazel/master/LICENSE.txt
chmod +x bazel-*.sh
sudo ./bazel-$BAZEL_VERSION-installer-linux-x86_64.sh
cd /home/<yourid>/
rm -f /home/<yourid>/bazel/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh

Downloading TensorFlow from GitHub

You need the source in order to build, of course.
You also need the wheel, six, and numpy packages.

git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout r1.0 # check out whichever version you want to build
sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel # Ubuntu only
brew install python3 # macOS only
sudo pip3 install six numpy wheel # macOS only

Before building, this is the point where you can modify the source code to add your own code to TensorFlow.
The official site also covers this:
https://www.tensorflow.org/extend/adding_an_op
I'll go over that in detail in the next post.

tensorflow provides an easy way to configure the build.

Running ./configure in the tensorflow folder walks you through the build settings.

$ cd tensorflow # cd to the top-level directory created
$ ./configure
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python2.7
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n]
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] N
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] Y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the Cuda SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to use system default]: 5
Please specify the location where cuDNN 5 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 3.0
Setting up Cuda include
Setting up Cuda lib
Setting up Cuda bin
Setting up Cuda nvvm
Setting up CUPTI include
Setting up CUPTI lib64
Configuration finished

Choose the Python version, CUDA support, and so on to match your own environment when prompted.
The prompts are clearly laid out, so it shouldn't be difficult.

Building with bazel

Configuration is done, so now it's time to build:

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package # CPU version
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package # GPU version

Package it as a pip wheel so it is easy to manage:

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

Install it with pip:

sudo pip install /tmp/tensorflow_pkg/tensorflow-1.0.0-py2-none-any.whl # python2
sudo pip3 install /tmp/tensorflow_pkg/tensorflow-1.0.0-cp36-cp36m-macosx_10_12_x86_64.whl # python3

The exact wheel filename depends on your configuration, so type sudo pip (or pip3 for Python 3) install /tmp/tensorflow_pkg/ and press Tab to complete the filename, then Enter, or use the small sketch below.
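
As an alternative to tab completion, here is a minimal Python sketch (my addition) that finds whatever wheel build_pip_package produced in the /tmp/tensorflow_pkg directory used above and installs it:

# Find the built TensorFlow wheel and install it, whatever its exact filename is.
import glob
import subprocess
import sys

wheels = glob.glob("/tmp/tensorflow_pkg/tensorflow-*.whl")
if wheels:
    subprocess.check_call([sys.executable, "-m", "pip", "install", wheels[0]])
else:
    print("No wheel found in /tmp/tensorflow_pkg - did the bazel build finish?")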

Posted by uniqueone
,

[Caffe] Installing Caffe on Windows (as of 161102)
http://jangjy.tistory.com/m/249
Posted by uniqueone
,

Installing Caffe / CUDA / cuDNN / Python on Windows 10
http://hanmaruj.tistory.com/m/15
Posted by uniqueone
,

Installing Keras, Theano and Dependencies on Windows 10 – Ankivil
http://ankivil.com/installing-keras-theano-and-dependencies-on-windows-10/
Posted by uniqueone
,

http://blog.daum.net/buillee/1513

 

1. System type: 64-bit operating system (Windows)


2. Download Anaconda3 4.2.0, which ships Python 3.5.

(Download Anaconda3-4.2.0-Windows-x86_64.exe from https://repo.continuum.io/archive/index.html)


3. Run the Anaconda3 4.2.0 installer.


4. Control Panel --> System --> Advanced --> Environment Variables --> User variables --> add the Anaconda3 install path to Path

(In my case:

C:\Users\buil\Anaconda3;C:\Users\buil\Anaconda3\Scripts;C:\Users\buil\Anaconda3\Library\bin;)


5. Control Panel --> System --> Advanced --> Environment Variables --> System variables --> add Anaconda3 to Path as follows

(C:\Users\buil\Anaconda3\Scripts;)


6. Restart the computer.


7. Run --> cmd


8. In the command window, enter conda create -n <env_name> python=3.5

(I used conda create -n tf3 python=3.5)

Answer y when prompted.


9. In the command window, enter activate followed by the environment name from step 8 (activate tf3 in my case).


10. In the command window, enter pip install tensorflow.

If no error occurs, it is installed correctly.


11. In the command window, enter jupyter notebook.

A browser will open, and you can practice TensorFlow by typing commands there.
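
Before moving on to the notebook, a quick sanity check (my addition, not in the original post) can confirm the install, using the TF 1.x API that was current when this post was written:

# Run inside the activated environment; prints b'Hello, TensorFlow!' if the install works.
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:   # TF 1.x session API
    print(sess.run(hello))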

 

 

Posted by uniqueone
,

1. Installing TensorFlow
http://aileen93.tistory.com/58
Posted by uniqueone
,

Installing TensorFlow on Windows (Tensorflow installation in window)
http://lifestudying.tistory.com/9
Posted by uniqueone
,

Installing TensorFlow on Windows with Anaconda : Naver Blog
http://blog.naver.com/windpriest/220937771369
Posted by uniqueone
,

Installing TensorFlow 1.0.0 on Windows 10 64-bit
http://blog.ggaman.com/1000
Posted by uniqueone
,

https://github.com/conda/conda/issues/1979

 

 

The response to this issue is

conda config --set ssl_verify false
conda update requests
conda config --set ssl_verify true
Posted by uniqueone
,

http://skyer9.tistory.com/11

Installing Python Keras+Tensorflow on Windows 7 64-bit


1. Install Python 3.5 64-bit

https://www.python.org/downloads/release/python-352/

--> Download the Windows x86-64 executable installer.

Check "Add Python 3.5.2 to PATH" and choose "Install Now".

Open a command prompt and enter the following to confirm the install:

C:\Users\skyer9>python -V
Python 3.5.2




2. Install tensorflow

Enter the following in the command prompt:

C:\> pip3 install --upgrade tensorflow-gpu

The release version as of now (2017-02-25) has a bug.

If an error occurs when testing tensorflow, install it with the command below instead:

C:\> pip3 install --upgrade http://ci.tensorflow.org/view/Nightly/job/nightly-win/85/DEVICE=gpu,OS=windows/artifact/cmake_build/tf_python/dist/tensorflow_gpu-1.0.0rc2-cp35-cp35m-win_amd64.whl




3. Install CUDA 8.0

Download it from the following site:

https://developer.nvidia.com/cuda-downloads

For some reason... it seemed to require installing, uninstalling, and reinstalling several times before it would install properly.

If needed, also install the Visual Studio 2015 Community edition.


4. Install cuDNN

https://developer.nvidia.com/cudnn

Download the following file from the site above:

cuDNN v5.1 Library for Windows 7

After extracting, paste the contents into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0. (A small copy sketch follows below.)
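
Since the paste step above is easy to get wrong, here is a small Python sketch (my addition) that copies the extracted cuDNN bin, include, and lib folders into the CUDA 8.0 directory; the extraction path C:\Downloads\cuda is a hypothetical example, so adjust it to wherever you unzipped cuDNN:

# Copy the cuDNN files into the CUDA install so TensorFlow can find them.
# Run from an elevated (administrator) prompt; Program Files needs admin rights.
import shutil
from pathlib import Path

cudnn_dir = Path(r"C:\Downloads\cuda")  # hypothetical extraction folder
cuda_dir = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0")

for sub in ("bin", "include", "lib"):
    for f in (cudnn_dir / sub).rglob("*"):
        if f.is_file():
            target = cuda_dir / f.relative_to(cudnn_dir)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            print("copied", f.relative_to(cudnn_dir))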




5. Run a test program

Create a file named hello.py with the following contents.

(It is easier to just add TF_CPP_MIN_LOG_LEVEL as an environment variable.)

# ------------------------------------------------------------------------------
from __future__ import print_function

# disable tensorflow logging
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')

# Start tf session
sess = tf.Session()

print(str(sess.run(hello).strip(), 'utf-8'))
# ------------------------------------------------------------------------------

C:\> python hello.py
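
As an optional follow-up (my addition, not in the original post), you can list the devices TensorFlow sees to confirm the GPU is actually available; device_lib is part of the TF 1.x Python client:

# Prints the CPU and, if the CUDA/cuDNN setup above worked, a GPU device as well.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name)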




6. Install keras

Download numpy-1.12.0+mkl-cp35-cp35m-win_amd64.whl from the following site:

http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy

Install it.

(Even if an error appears during installation, it is installed correctly as long as numpy (1.12.0+mkl) shows up in the list.)

C:\> pip3 install --upgrade numpy-1.12.0+mkl-cp35-cp35m-win_amd64.whl

C:\> pip3 list

Download scipy-0.19.0rc2-cp35-cp35m-win_amd64.whl from the following site:

http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy

Install scipy:

C:\> pip3 install scipy-0.19.0rc2-cp35-cp35m-win_amd64.whl

Install keras:

C:\> pip3 install --upgrade keras




7. Run a test program

Create a file named hello2.py with the following contents.

(An error will occur if neither Microsoft Visual C++ 2015 Redistributable nor Visual Studio 2015 Community is installed.)

# ------------------------------------------------------------------------------
import os

# disable tensorflow logging
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
os.environ['KERAS_BACKEND'] = 'tensorflow'

import tensorflow as tf
sess = tf.Session()

from keras import backend as K
K.set_session(sess)
# ------------------------------------------------------------------------------

C:\> python hello2.py

Posted by uniqueone
,

http://tmmse.xyz/2017/03/01/tensorflow-keras-installation-windows-linux-macos/

Reference: https://groups.google.com/forum/#!topic/keras-users/_hXfBOjXow8

Summary first:

# export PATH=~/anaconda/bin:$PATH # MAC
conda create -n tf python=3.5  # only Python 3.5 is supported by TensorFlow/Keras right now  
activate tf # Windows  
# source activate tf  : Linux/macOS

# From here on you are inside the (tf) environment. The install order matters.
pip install tensorflow   # pip install tensorflow-gpu : GPU version  
conda install pandas matplotlib scikit-learn  
pip install keras  
conda install jupyter notebook  

jupyter notebook # try it out  

First of all, you need Python 3.5.x for the current TensorFlow-v1.0.0 and Keras to be supported. I normally prefer installing from source, but that is a bit painful on Windows, so Anaconda makes the install relatively easy. I also confirmed that everything works on Linux/macOS.

For convenience I'll use a plain, direct style. Download the Anaconda Python 3.x version for your platform from the Anaconda download page.

Mine is Python 3.6.

On Windows, there is a checkbox during installation to add Anaconda to PATH; make sure it is checked.

After installation, the conda --v command should work in a terminal (Linux/macOS) or a CMD window (Windows).

If the conda command is not found on Linux/macOS, add anaconda to your path with export PATH=~/anaconda/bin:$PATH. Your anaconda path may differ, so add it as anaconda or anaconda3 as appropriate. For convenience, put it in your .bashrc: echo 'export PATH=~/anaconda/bin:$PATH' >> ~/.bashrc

Confirm the command works. The exact Anaconda version does not seem to matter.

Then create a conda environment and activate it:

conda create -n tf python=3.5 # keep answering y to continue  
activate tf # Windows  
# source activate tf   # Linux/macOS

This is similar to Python's virtualenv: it safely isolates things so the system Python libraries don't get tangled, for example when you want to use a different TensorFlow version.

The important part is python=3.5. If Anaconda uses Python 3.6, TensorFlow and Keras cannot be installed. Also note that the activation command differs between Windows and Linux/macOS.

The shell/terminal prompt changes to show the (tf) environment.

Then install everything in this order:

pip install tensorflow  # pip install tensorflow-gpu  
conda install pandas matplotlib scikit-learn  
pip install keras  
conda install jupyter notebook  

If you can use the GPU version of TensorFlow, install it with pip install tensorflow-gpu.
Keras seems to pull in Theano during installation, but the default backend is TensorFlow.

Note that if jupyter notebook is installed before Keras, there is a bug where the keras module cannot be imported; so install jupyter notebook after Keras.

Check briefly that Keras imports.
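
A minimal import check (my addition, not in the original post) from inside the activated (tf) environment:

# Importing keras prints "Using TensorFlow backend." and backend() confirms which backend is active.
import keras
print(keras.backend.backend())   # expected output: tensorflow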

Finally...

I haven't read it closely, but the following blog post, which covers installing Keras+TensorFlow on Windows 7 64-bit including CUDA and CuDNN, may also be helpful: Installing Python Keras+Tensorflow on Windows 7 64-bit

 

 

------------------------------------------------------------

Kyung Mo Kweon: To add to this, the 3.5 restriction is only on Windows;
the other platforms already have 3.6 builds, so you don't strictly have to use 3.5.
https://pypi.python.org/pypi/tensorflow/1.0.0

 

Posted by uniqueone
,
http://www.heatonresearch.com/2017/01/01/tensorflow-windows-gpu.html

Here is how to use Tensorflow with the GPU on Windows. Sorry if this is something you already know. It is definitely faster once you try it. Now I'm tempted to sell the MBP I just bought....

Posted by uniqueone
,
Windows 10 + Keras (tensorflow backend) + Anaconda: <Installing Keras>
I wrote this down so I don't forget it.

http://www.modulabs.co.kr/DeepLAB_free/11368
Posted by uniqueone
,
https://blog.openshift.com/intro-machine-learning-using-tensorflow-part-1/

 

 

Intro to Machine Learning using Tensorflow – Part 1


 

Think about this: what's something that exists today that will still exist 100 years from now? Better yet, what do you use on a daily basis today that you think will be utilized as frequently 100 years from now? Suffice it to say, there isn't a whole lot out there with that kind of longevity. But there is at least one thing that will stick around: data. In fact, mankind is estimated to create 44 zettabytes (that's 44 trillion gigabytes, ladies and gentlemen) of data by 2020. While impressive, data is useless unless you actually do something with it. So now, the question is, how do we work with all this information and how do we create value from it? Through machine learning and artificial intelligence, you – yes you – can tap into data and generate genuine, insightful value from it. Over the course of this series, you'll learn the basics of Tensorflow, machine learning, neural networks, and deep learning in a container-based environment.

Before we get started, I need to call out one of my favorite things about OpenShift. When using OpenShift, you get to skip all the hassle of building, configuring or maintaining your application environment. When I’m learning something new, I absolutely hate spending several hours of trial and error just to get the environment ready. I’m from the Nintendo generation; I just want to pick up a controller and start playing. Sure, there’s still some setup with OpenShift, but it’s much less. For the most part with OpenShift, you get to skip right to the fun stuff and learn about the important environment fundamentals along the way.

And that’s where we’ll start our journey to machine learning(ML), by deploying Tensorflow & Jupyter container on OpenShift Online. Tensorflow is an open-source software library created by Google for Machine Intelligence. And Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text with others. Throughout this series, we’ll be using these two applications primarily, but we’ll also venture into other popular frameworks as well. By the end of this post, you’ll be able to run a linear regression (the “hello world” of ML) inside a container you built running in a cloud. Pretty cool right? So let’s get started.

Machine Learning Setup

The first thing you need to do is sign up for OpenShift Online Dev Preview. That will give you access to an environment where you can deploy a machine learning app.  We also need to make sure that you have the “oc” tools and docker installed on your local machine. Finally, you’ll need to fork the Tensorshift Github repo and clone it to your machine. I’ve gone ahead and provided the links here to make it easier.

  1. Sign up for the OpenShift Online Developer Preview
  2. Install the OpenShift command line tool
  3. Install the Docker Engine on your local machine
  4. Fork this repo on GitHub and clone it to your machine
  5. Sign into the OpenShift Console and create your first project called “<yourname>-tensorshift”

Building & Tagging the Tensorflow docker image: “TensorShift”

Once you’ve got everything installed to the latest and greatest, change over to the directory where you cloned the repo and then run:

docker build -t registry.preview.openshift.com/<your_project_name>/tensorshift ./

You want to make sure to replace the stuff in “<>” with your environment information mine looked like this

docker build -t registry.preview.openshift.com/nick-tensorflow/tensorshift ./

Since we’ll be uploading our tensorshift image to the OpenShift Online docker registry in our next step. We needed to make sure it was tag it appropriately so it ends up in the right place, hence the -t registry.preview.openshift.com/nick-tensorflow/tensorshift we appended to our docker build ./ command.

Once you hit enter, you'll see docker start to build the image from the Dockerfile included in your repo (feel free to take a look at it to see what's going on there). Once that's complete you should be able to run docker images and see that it has been added.

Example output of `docker images` to show the newly built tensorflow image

 

Pushing TensorShift to the OpenShift Online Docker Registry

Now that we have the image built and tagged we need to upload it to the OpenShift Online Registry. However, before we do that we need to authenticate to the OpenShift Docker Registry:

docker login -u `oc whoami` -p `oc whoami -t` registry.preview.openshift.com

All that’s left is to push it

docker push registry.preview.openshift.com/<your_project_name>/<your_image_name>

Deploying Tensorflow (TensorShift)

So far you’ve built your own Tensorflow docker image and published to the OpenShift Online Docker registry, well done!

Next, we’ll tell OpenShift to deploy our app using our Tensorflow image we built earlier.

oc new-app <image_name> --name=<appname>

You should now have a running containerized Tensorflow instance orchestrated by OpenShift and Kubernetes! How rad is that!

There’s one more thing that we need to be able to access it through the browser. Admittedly, this next step is because I haven’t gotten around to fully integrating the Tensorflow docker image into the complete OpenShift workflow, but it’ll take all of 5 seconds for you to fix.

You need to go to your app in OpenShift and delete the service that’s running. Here’s an example on how to use the web console to do it.

Example of how to delete the preconfigured services created by the TensorShift Image

 
Because we’re using both Jupyter and Tensorboard in the same container for this tutorial we need to actually create the two services so we can access them individually.

Run these two oc commands to knock that out:

oc expose dc <appname> --port=6006 --name=tensorboard

oc expose dc <appname> --port=8888 --name=jupyter

Lastly, just create two routes so you can access them in the browser:

oc expose svc/tensorboard

oc expose svc/jupyter

That’s it for the setup! You should be all set to access your Tensorflow environment and Jupyter through the browser. just run oc status to find the url

$ oc status
 In project Nick TensorShift (nick-tensorshift) on server https://api.preview.openshift.com:443
 
 http://jupyter-nick-tensorshift.44fs.preview.openshiftapps.com to pod port 8888 (svc/jupyter)
 dc/mlexample deploys istag/tensorshift:latest
 deployment #1 deployed 14 hours ago - 1 pod
 
 http://tensorboard-nick-tensorshift.44fs.preview.openshiftapps.com to pod port 6006 (svc/tensorboard)
 dc/mlexample deploys istag/tensorshift:latest
 deployment #1 deployed 14 hours ago - 1 pod
 
 1 warning identified, use 'oc status -v' to see details.

On To The Fun Stuff

Get ready to pick up your Nintendo controller. Open <Linktoapp>:8888 and log into Jupyter using “Password” then create a new notebook like so:

Example of how to create a jupyter notebook

 

Now paste in the following code into your newly created notebook:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

learningRate = 0.01
trainingEpochs = 100

# Return evenly spaced numbers over a specified interval
xTrain = np.linspace(-2, 1, 200)

# Return a random matrix with data from the standard normal distribution.
yTrain = 2 * xTrain + np.random.randn(*xTrain.shape) * 0.33

# Create a placeholder for a tensor that will be always fed.
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Define model and construct a linear model
def model(X, w):
    return tf.mul(X, w)  # on TensorFlow >= 1.0 this call is tf.multiply(X, w)

# Set model weights
w = tf.Variable(0.0, name="weights")

y_model = model(X, w)

# Define our cost function
costfunc = tf.square(Y - y_model)

# Use gradient descent to fit a line to the data
train_op = tf.train.GradientDescentOptimizer(learningRate).minimize(costfunc)

# Launch a tensorflow session
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

# Execute everything
for epoch in range(trainingEpochs):
    for (x, y) in zip(xTrain, yTrain):
        sess.run(train_op, feed_dict={X: x, Y: y})
    w_val = sess.run(w)

sess.close()

# Plot the data
plt.scatter(xTrain, yTrain)
y_learned = xTrain * w_val
plt.plot(xTrain, y_learned, 'r')
plt.show()

Once you’ve pasted it in, hit ctrl + a (cmd + a for you mac users) to select it and then ctrl + enter  (cmd + enter for mac) And you should see a graph similar to the following:

Let’s Review

That’s it! You just fed a machine a bunch of information and then told it to plot a line that fit’s the dataset. This line shows the “prediction” of what the value of a variable should be based on a single parameter. In other words, you just taught a machine to PREDICT something. You’re one step closer to Skynet – uh, I mean creating your own AI that won’t take over the world. How rad is that!

In the next blog, we'll dive deeper into linear regression and I'll go over how it all works. We'll also feed our program a CSV file of actual data to try and predict house prices.

Posted by uniqueone
,

 

https://github.com/philferriere/dlwin

GPU-accelerated Theano & Keras on Windows 10 native

>> LAST UPDATED JANUARY, 2017 <<

There are certainly a lot of guides to assist you build great deep learning (DL) setups on Linux or Mac OS (including with Tensorflow which, unfortunately, as of this posting, cannot be easily installed on Windows), but few care about building an efficient Windows 10-native setup. Most focus on running an Ubuntu VM hosted on Windows or using Docker, unnecessary - and ultimately sub-optimal - steps.

We also found enough misguiding/deprecated information out there to make it worthwhile putting together a step-by-step guide for the latest stable versions of Theano and Keras. Used together, they make for one of the simplest and fastest DL configurations to work natively on Windows.

If you must run your DL setup on Windows 10, then the information contained here may be useful to you.

Dependencies

Here's a summary list of the tools and libraries we use for deep learning on Windows 10 (Version 1607 OS Build 14393.222):

  1. Visual Studio 2015 Community Edition Update 3 w. Windows Kit 10.0.10240.0
    • Used for its C/C++ compiler (not its IDE) and SDK
  2. Anaconda (64-bit) w. Python 2.7 (Anaconda2-4.2.0) or Python 3.5 (Anaconda3-4.2.0)
    • A Python distro that gives us NumPy, SciPy, and other scientific libraries
  3. CUDA 8.0.44 (64-bit)
    • Used for its GPU math libraries, card driver, and CUDA compiler
  4. MinGW-w64 (5.4.0)
    • Used for its Unix-like compiler and build tools (g++/gcc, make...) for Windows
  5. Theano 0.8.2
    • Used to evaluate mathematical expressions on multi-dimensional arrays
  6. Keras 1.1.0
    • Used for deep learning on top of Theano
  7. OpenBLAS 0.2.14 (Optional)
    • Used for its CPU-optimized implementation of many linear algebra operations
  8. cuDNN v5.1 (August 10, 2016) for CUDA 8.0 (Conditional)
    • Used to run vastly faster convolution neural networks

For an older setup using VS2013 and CUDA 7.5, please refer to README-2016-07.md (July, 2016 setup)

Hardware

  1. Dell Precision T7900, 64GB RAM
    • Intel Xeon E5-2630 v4 @ 2.20 GHz (1 processor, 10 cores total, 20 logical processors)
  2. NVIDIA GeForce Titan X, 12GB RAM
    • Driver version: 372.90 / Win 10 64

Installation steps

We like to keep our toolkits and libraries in a single root folder boringly called c:\toolkits, so whenever you see a Windows path that starts with c:\toolkits below, make sure to replace it with whatever you decide your own toolkit drive and folder ought to be.

Visual Studio 2015 Community Edition Update 3 w. Windows Kit 10.0.10240.0

You can download Visual Studio 2015 Community Edition from here:

Select the executable and let it decide what to download on its own:

Run the downloaded executable to install Visual Studio, using whatever additional config settings work best for you:

  1. Add C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin to your PATH, based on where you installed VS 2015.
  2. Define sysenv variable INCLUDE with the value C:\Program Files (x86)\Windows Kits\10\Include\10.0.10240.0\ucrt
  3. Define sysenv variable LIB with the value C:\Program Files (x86)\Windows Kits\10\Lib\10.0.10240.0\um\x64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.10240.0\ucrt\x64

Reference Note: We couldn't run any Theano python files until we added the last two env variables above. We would get a c:\program files (x86)\microsoft visual studio 14.0\vc\include\crtdefs.h(10): fatal error C1083: Cannot open include file: 'corecrt.h': No such file or directory error at compile time and missing kernel32.lib uuid.lib ucrt.lib errors at link time. True, you could probably run C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvars64.bat (with proper params) every single time you open a MINGW cmd prompt, but, obviously, none of the sysenv vars would stick from one session to the next.

Anaconda (64-bit)

This tutorial was created with Python 2.7, but if you prefer to use Python 3.5 it should work too.

Depending on your installation use c:\toolkits\anaconda3-4.2.0 instead of c:\toolkits\anaconda2-4.2.0.

Download the appropriate Anaconda version from here:

Run the downloaded executable to install Anaconda in c:\toolkits\anaconda2-4.2.0:

Warning: Below, we enabled Register Anaconda as the system Python 2.7 because it works for us, but that may not be the best option for you!

  1. Define sysenv variable PYTHON_HOME with the value c:\toolkits\anaconda2-4.2.0
  2. Add %PYTHON_HOME%, %PYTHON_HOME%\Scripts, and %PYTHON_HOME%\Library\bin to PATH

After anaconda installation open a command prompt and execute:

$ cd $PYTHON_HOME; conda install libpython

Note: The version of MinGW above is old (gcc 4.7.0). Instead, we will use MinGW 5.4.0, as shown below.

CUDA 8.0.44 (64-bit)

Download CUDA 8.0 (64-bit) from the NVidia website

Select the proper target platform:

Download the installer:

Run the downloaded installer. Install the files in c:\toolkits\cuda-8.0.44:

After completion, the installer should have created a system environment (sysenv) variable named CUDA_PATH and added %CUDA_PATH%\bin as well as%CUDA_PATH%\libnvvp to PATH. Check that it is indeed the case. If, for some reason, the CUDA env vars are missing, then:

  1. Define a system environment (sysenv) variable named CUDA_PATH with the value c:\toolkits\cuda-8.0.44
  2. Add%CUDA_PATH%\libnvvp and %CUDA_PATH%\bin to PATH

MinGW-w64 (5.4.0)

Download MinGW-w64 from here:

Install it to c:\toolkits\mingw-w64-5.4.0 with the following settings (second wizard screen):

  1. Define the sysenv variable MINGW_HOME with the value c:\toolkits\mingw-w64-5.4.0
  2. Add %MINGW_HOME%\mingw64\bin to PATH

Run the following to make sure all necessary build tools can be found:

$ where gcc; where g++; where cl; where nvcc; where cudafe; where cudafe++
$ gcc --version; g++ --version
$ cl
$ nvcc --version; cudafe --version; cudafe++ --version

You should get results similar to:

Theano 0.8.2

Version 0.8.2? Why not just install the latest bleeding-edge version of Theano since it obviously must work better, right? Simply put, because it makes reproducible research harder. If your work colleagues or Kaggle teammates install the latest code from the dev branch at a different time than you did, you will most likely be running different code bases on your machines, increasing the odds that even though you're using the same input data (the same random seeds, etc.), you still end up with different results when you shouldn't. For this reason alone, we highly recommend only using point releases, the same one across machines, and always documenting which one you use if you can't just use a setup script.

Clone a stable Theano release (0.8.2) from GitHub into c:\toolkits\theano-0.8.2 using the following commands:

$ cd /c/toolkits
$ git clone https://github.com/Theano/Theano.git theano-0.8.2 --branch rel-0.8.2

Install Theano as follows:

$ cd /c/toolkits/theano-0.8.2
$ python setup.py install --record installed_files.txt

The list of files installed can be found here

Verify Theano was installed by querying Anaconda for the list of installed packages:

$ conda list | grep -i theano

Note: We also tried installing Theano with the following command:

$ pip install git+https://github.com/Theano/Theano.git@rel-0.8.2

In our case, this resulted in conflicts between 32-bit and 64-bit DLL when trying to run Theano code.

OpenBLAS 0.2.14 (Optional)

If we're going to use the GPU, why install a CPU-optimized linear algebra library? With our setup, most of the deep learning grunt work is performed by the GPU, that is correct, but the CPU isn't idle. An important part of image-based Kaggle competitions is data augmentation. In that context, data augmentation is the process of manufacturing additional input samples (more training images) by transformation of the original training samples, via the use of image processing operators. Basic transformations such as downsampling and (mean-centered) normalization are also needed. If you feel adventurous, you'll want to try additional pre-processing enhancements (noise removal, histogram equalization, etc.). You certainly could use the GPU for that purpose and save the results to file. In practice, however, those operations are often executed in parallel on the CPU while the GPU is busy learning the weights of the deep neural network and the augmented data discarded after use. For this reason, we highly recommend installing the OpenBLAS library.

According to the Theano documentation, the multi-threaded OpenBLAS library performs much better than the un-optimized standard BLAS (Basic Linear Algebra Subprograms) library, so that's what we use.

Download OpenBLAS from here and extract the files to c:\toolkits\openblas-0.2.14-int32

  1. Define sysenv variable OPENBLAS_HOME with the value c:\toolkits\openblas-0.2.14-int32
  2. Add %OPENBLAS_HOME%\bin to PATH

Switching between CPU and GPU mode

Next, create the two following sysenv variables:

  • sysenv variable THEANO_FLAGS_CPU with the value:

floatX=float32,device=cpu,lib.cnmem=0.8,blas.ldflags=-LC:/toolkits/openblas-0.2.14-int32/bin -lopenblas

  • sysenv variable THEANO_FLAGS_GPU with the value:

floatX=float32,device=gpu,dnn.enabled=False,lib.cnmem=0.8,blas.ldflags=-LC:/toolkits/openblas-0.2.14-int32/bin -lopenblas

Theano only cares about the value of the sysenv variable named THEANO_FLAGS. All we need to do to tell Theano to use the CPU or GPU is to set THEANO_FLAGS to either THEANO_FLAGS_CPU or THEANO_FLAGS_GPU. You can verify those variables have been successfully added to your environment with the following command:

$ env | grep -i theano

Validating our OpenBLAS install (Optional)

We can use the following program from the Theano documentation:

import numpy as np
import time
import theano

print('blas.ldflags=', theano.config.blas.ldflags)

A = np.random.rand(1000, 10000).astype(theano.config.floatX)
B = np.random.rand(10000, 1000).astype(theano.config.floatX)
np_start = time.time()
AB = A.dot(B)
np_end = time.time()
X, Y = theano.tensor.matrices('XY')
mf = theano.function([X, Y], X.dot(Y))
t_start = time.time()
tAB = mf(A, B)
t_end = time.time()
print("numpy time: %f[s], theano time: %f[s] (times should be close when run on CPU!)" % (
np_end - np_start, t_end - t_start))
print("Result difference: %f" % (np.abs(AB - tAB).max(), ))

Save the code above to a file named openblas_test.py in the current directory (or download it from this GitHub repo) and run the next commands:

$ THEANO_FLAGS=$THEANO_FLAGS_CPU
$ python openblas_test.py

Note: If you get a failure of the kind NameError: global name 'CVM' is not defined, it may be because, like us, you've messed with the value of THEANO_FLAGS_CPU and switched back and forth between floatX=float32 and floatX=float64 several times. Cleaning your C:\Users\username\AppData\Local\Theano directory (replace username with your login name) will fix the problem (See here, for reference)

Checking our PATH sysenv var

At this point, the PATH environment variable should look something like:

%MINGW_HOME%\mingw64\bin;
%CUDA_PATH%\bin;
%CUDA_PATH%\libnvvp;
%OPENBLAS_HOME%\bin;
%PYTHON_HOME%;
%PYTHON_HOME%\Scripts;
%PYTHON_HOME%\Library\bin;
C:\ProgramData\Oracle\Java\javapath;
C:\WINDOWS\system32;
C:\WINDOWS;
C:\WINDOWS\System32\Wbem;
C:\WINDOWS\System32\WindowsPowerShell\v1.0\;
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin;
C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;
C:\Program Files\Git\cmd;
C:\Program Files\Git\mingw64\bin;
C:\Program Files\Git\usr\bin
...

Validating our GPU install with Theano

We'll run the following program from the Theano documentation to compare the performance of the GPU install vs using Theano in CPU-mode. Save the code to a file named cpu_gpu_test.py in the current directory (or download it from this GitHub repo):

from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')

First, let's see what kind of results we get running Theano in CPU mode:

$ THEANO_FLAGS=$THEANO_FLAGS_CPU
$ python cpu_gpu_test.py

Next, let's run the same program on the GPU:

$ THEANO_FLAGS=$THEANO_FLAGS_GPU
$ python cpu_gpu_test.py

Note: If you get a c:\program files (x86)\microsoft visual studio 14.0\vc\include\crtdefs.h(10): fatal error C1083: Cannot open include file: 'corecrt.h': No such file or directory with the above, please see the Reference Note at the end of the Visual Studio 2015 Community Edition Update 3 section.

Almost a 68:1 improvement. It works! Great, we're done with setting up Theano 0.8.2.

Keras 1.1.0

Clone a stable Keras release (1.1.0) to your local machine from GitHub using the following commands:

$ cd /c/toolkits
$ git clone https://github.com/fchollet/keras.git keras-1.1.0 --branch 1.1.0

This should clone Keras 1.1.0 in c:\toolkits\keras-1.1.0:

Install it as follows:

$ cd /c/toolkits/keras-1.1.0
$ python setup.py install --record installed_files.txt

The list of files installed can be found here

Verify Keras was installed by querying Anaconda for the list of installed packages:

$ conda list | grep -i keras

Recent builds of Keras can either use Tensorflow or Theano as a backend. At the time of this writing, TensorFlow supports only 64-bit Python 3.5 on Windows. This doesn't work for us, but if you are using Python 3.5, then by all means, feel free to give it a try. By default, we will use Theano as our backend, using the commands below:

$ cp ~/.keras/keras.json ~/.keras/keras.json.bak
$ echo -e '{\n\t"image_dim_ordering": "th",\n\t"epsilon": 1e-07,\n\t"floatx": "float32",\n\t"backend": "theano"\n}' >> ~/.keras/keras_theano.json
$ echo -e '{\n\t"image_dim_ordering": "tf",\n\t"epsilon": 1e-07,\n\t"floatx": "float32",\n\t"backend": "tensorflow"\n}' >> ~/.keras/keras_tensorflow.json
$ cp -f ~/.keras/keras_theano.json ~/.keras/keras.json

Validating our GPU install with Keras

We can train a simple convnet (convolutional neural network) on the MNIST dataset by using one of the example scripts provided with Keras. The file is called mnist_cnn.py and can be found in the examples folder:

$ THEANO_FLAGS=$THEANO_FLAGS_GPU
$ cd /c/toolkits/keras-1.1.0/examples
$ python mnist_cnn.py

Without cuDNN, each epoch takes about 20s. If you install TechPowerUp's GPU-Z, you can track how well the GPU is being leveraged. Here, in the case of this convnet (no cuDNN), we max out at 92% GPU usage on average:

cuDNN v5.1 (August 10, 2016) for CUDA 8.0 (Conditional)

If you're not going to train convnets then you might not really benefit from installing cuDNN. Per NVidia's website, "cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers," hallmarks of convolution network architectures. Theano is mentioned in the list of frameworks that support cuDNN v5 for GPU acceleration.

If you are going to train convnets, then download cuDNN from here. Choose the cuDNN Library for Windows10 dated August 10, 2016:

The downloaded ZIP file contains three directories (bin, include, lib). Extract those directories and copy the files they contain to the identically named folders in C:\toolkits\cuda-8.0.44.

To enable cuDNN, create a new sysenv variable named THEANO_FLAGS_GPU_DNN with the following value:

floatX=float32,device=gpu,optimizer_including=cudnn,lib.cnmem=0.8,dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic,blas.ldflags=-LC:/toolkits/openblas-0.2.14-int32/bin -lopenblas

Then, run the following commands:

$ THEANO_FLAGS=$THEANO_FLAGS_GPU_DNN
$ cd /c/toolkits/keras-1.1.0/examples
$ python mnist_cnn.py

Note: If you get a cuDNN not available message after this, try cleaning your C:\Users\username\AppData\Local\Theano directory (replace username with your login name). If you get an error similar to cudnn error: Mixed dnn version. The header is from one version, but we link with a different version (5010, 5005), try cuDNN v5.0 instead of cuDNN v5.1. Windows will sometimes also helpfully block foreign .dll files from running on your computer. If that is the case, right click and unblock the files to allow them to be used.

Here's the (cleaned up) execution log for the simple convnet Keras example, using cuDNN:

Now, each epoch takes about 3s, instead of 20s, a large improvement in speed, with slightly lower GPU usage:

The Your cuDNN version is more recent than the one Theano officially supports message certainly sounds ominous but a test accuracy of 0.9899 would suggest that it can be safely ignored. So...

...we're done!

References

Setup a Deep Learning Environment on Windows (Theano & Keras with GPU Enabled), by Ayse Elvan Aydemir

Installation of Theano on Windows, by Theano team

A few tips to install theano on Windows, 64 bits, by Kagglers

How do I install Keras and Theano in Anaconda Python 2.7 on Windows?, by S.O. contributors

Additional Thanks Go To...

Kaggler Vincent L. for recommending adding dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic to THEANO_FLAGS_GPU_DNN in order to improve reproducibility with no observable impact on performance.

If you'd rather use Python3, conda's built-in MinGW package, or pip, please refer to @stmax82's note here.

Suggested viewing/reading

Intro to Deep Learning with Python, by Alec Radford

@ https://www.youtube.com/watch?v=S75EdAcXHKk

@ http://slidesha.re/1zs9M11

@ https://github.com/Newmu/Theano-Tutorials

About the Author

For information about the author, please visit:

https://www.linkedin.com/in/philferriere

Posted by uniqueone
,

https://youtu.be/cF7tIo6Njo4
Posted by uniqueone
,

http://www.gergltd.com/home/2015/04/installing-theano-in-windows-7-64-bit/

Installing Theano in Windows 7 64-bit

My instructions for installing Theano 0.6 with

  • Windows 7-64 bit
  • Anaconda 2.1.0 (Python 2.7).  This tutorial only works with 2.1.0.  I tested it with 2.2.0 and it did not work.  I have no plans to fix this issue.
  • CUDA 7.0

Steps

  1. Download Anaconda 2.1.0 from here.
  2. Install Theano from the command line using pip: “pip install https://pypi.python.org/packages/source/T/Theano/Theano-0.6.0.zip#md5=0a2211b250c358809014adb945dd0ba7”
  3. Create a .theanorc.txt file in your user area (C:\Users\username\.theanorc.txt) with the specified text listed below.
  4. Open Anaconda
  5. Import and test/build theano by typing import theano and then theano.test() (see the sketch after this list).
  6. Sit back and relax while everything builds.
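
A quick sanity sketch (my addition, not in the original post) for steps 4-5, confirming that the .theanorc settings below were picked up before running the full test suite:

# Import Theano and print the device/float settings coming from .theanorc.txt.
import theano
print(theano.config.device, theano.config.floatX)   # expect: gpu0 float32
theano.test()   # builds and tests everything; this takes a while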

.theanorc.txt file contents (you must create at %USERDIR%/.theanorc.txt, for me this is c:\users\username\.theanorc.txt)

[global]
openmp=False
device = gpu0
floatX = float32

[blas]
ldflags=

Notes

If you get an error about “CVM,” you must delete the cache files that are in C:\Users\MyUsername\AppData\Local\Theano. Once you delete everything, start python again and continue from there.

If you have path issues when trying to import theano, try using the Visual Studio 64-bit command prompt if you have it.  It sets a bunch of paths for you and “just works” for me.  For reference, the path I use is:

PATH=C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;C:\Windows\Microsoft.NET\Framework64\v3.5;C:\Program Files (x86)\Microsoft Visual
Studio 10.0\VC\VCPackages;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\Tools;C:\Program Files (x86)\HTML Help Workshop;C:
\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools\x64;C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\x64;C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin;C:\Program
 Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\libnvvp;C:\Python34\Lib\site-packages\PyQt5;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Co
mmon;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v6.5\libnvvp;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS C
lient\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel
(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\MATL
AB\R2014a\bin;C:\Program Files\TortoiseHg\;C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\D
TS\Binn\;C:\Users\username\AppData\Local\Continuum\Anaconda;C:\Users\username\AppData\Local\Continuum\Anaconda\Scripts

Update June 11, 2015
Added link to Anaconda download

Posted by uniqueone
,