A suggested fix: 'Rename file cusolver64_11.dll to cusolver64_10.dll'
2021-03-16 19:26:14.435563: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-03-16 19:26:14.498628: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-03-16 19:26:14.499003: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-03-16 19:26:14.527712: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-03-16 19:26:14.532245: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-03-16 19:26:14.535978: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
2021-03-16 19:26:14.585485: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
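If TensorFlow expects cusolver64_10.dll but your CUDA 11.x install only ships cusolver64_11.dll, the rename tip above is the usual workaround; copying is safer than renaming so the original DLL stays intact. A sketch, assuming the default CUDA v11.0 install path (adjust to your version):

copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\cusolver64_11.dll" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\cusolver64_10.dll"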
ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'C:\ProgramData\Anaconda3\envs\venv20\Lib\site-packages\tensorflow\lite\experimental\microfrontend\python\ops\_audio_microfrontend_op.so'
Consider using the `--user` option or check the permissions.
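As the message suggests, one workaround is installing into your per-user site-packages instead of the protected Anaconda directory (alternatives: re-run the shell as Administrator, or close any Python process still holding that .so open):

pip install --user tensorflow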
Hello, I'm a complete beginner. MacOS, Windows, or Linux? I want to study AI properly, and I'm asking which operating system is the better starting point, both performance-wise (assuming identical hardware) and community-wise (for later Q&A). Thanks in advance! ^____^ I'm already comfortable with MacOS and Windows, and willing to get more comfortable with Linux if needed.
I strongly (x10000) recommend working on a Linux machine... Windows has cmd, but a proper terminal is just so much more convenient, and most tutorials you find online are easiest to run on Linux...
Personally, the best setup I've had was putting together a good Linux box and SSHing into it from a Mac to run training: develop on the Mac, train on Linux.
Linux is the more convenient choice in terms of both performance and community. SSH access from a Mac is easy, too.
# export PATH=~/anaconda/bin:$PATH # MAC
conda create -n tf python=3.5 # only Python 3.5 is supported by TensorFlow/Keras
activate tf # Windows; use `source activate tf` on Linux/macOS. From here on we're in the (tf) env; the install order matters
pip install tensorflow # use `pip install tensorflow-gpu` for the GPU version
conda install pandas matplotlib scikit-learn
pip install keras
conda install jupyter notebook
jupyter notebook # test it out
First of all, you need Python 3.5.x: that's what TensorFlow v1.0.0 and Keras currently support. I normally prefer installing from source, but that's rather awkward on Windows, so Anaconda makes the install comparatively easy. I've confirmed everything works on Linux/macOS as well.
For brevity, I'll switch to an imperative style. Download Anaconda and install the Anaconda Python 3.x version for your platform.
I'm on Python 3.6.
On Windows, the installer has a checkbox to add Anaconda to PATH; make sure it is checked.
After installation, the conda --version command should work in a terminal (Linux/macOS) or a CMD window (Windows).
If the conda command isn't recognized on Linux/macOS, add Anaconda to your path with export PATH=~/anaconda/bin:$PATH. Your install path may differ, so use anaconda or anaconda3 as appropriate. For convenience, put it in your .bashrc: echo 'export PATH=~/anaconda/bin:$PATH' >> ~/.bashrc
Confirm that the command works. The exact Anaconda version doesn't seem to matter.
Next, create a conda environment, then activate it.
conda create -n tf python=3.5 # answer y to proceed
activate tf # Windows; use `source activate tf` on Linux/macOS
This is similar to Python's virtualenv: it creates an isolated environment so the system's Python libraries don't get tangled up, for example when you want to use a different TensorFlow version.
The important part here is python=3.5: if Anaconda uses Python 3.6, TensorFlow and Keras cannot be installed. Also note that the activation command differs between Windows and Linux/macOS.
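For example, a minimal sketch of keeping a second TensorFlow version side by side in its own environment (the env name and the version pin are illustrative):

conda create -n tf-legacy python=3.5
activate tf-legacy # Windows; use `source activate tf-legacy` on Linux/macOS
pip install tensorflow==1.0.0 # pin whichever version this env is for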
Think about this: what’s something that exists today that will still exist 100 years from now? Better yet, what do you use on a daily basis today that you think will be utilized as frequently 100 years from now? Suffice to say, there isn’t a whole lot out there with that kind of longevity. But there is at least one thing that will stick around: data. In fact, mankind is estimated to create 44 zettabytes (that’s 44 trillion gigabytes, ladies and gentlemen) of data by 2020. While impressive, data is useless unless you actually do something with it. So now, the question is, how do we work with all this information and how do we create value from it? Through machine learning and artificial intelligence, you – yes you – can tap into data and generate genuine, insightful value from it. Over the course of this series, you’ll learn the basics of Tensorflow, machine learning, neural networks, and deep learning in a container-based environment.
Before we get started, I need to call out one of my favorite things about OpenShift. When using OpenShift, you get to skip all the hassle of building, configuring or maintaining your application environment. When I’m learning something new, I absolutely hate spending several hours of trial and error just to get the environment ready. I’m from the Nintendo generation; I just want to pick up a controller and start playing. Sure, there’s still some setup with OpenShift, but it’s much less. For the most part with OpenShift, you get to skip right to the fun stuff and learn about the important environment fundamentals along the way.
And that’s where we’ll start our journey to machine learning (ML): by deploying a Tensorflow & Jupyter container on OpenShift Online. Tensorflow is an open-source software library created by Google for Machine Intelligence. And Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. Throughout this series, we’ll be using these two applications primarily, but we’ll also venture into other popular frameworks as well. By the end of this post, you’ll be able to run a linear regression (the “hello world” of ML) inside a container you built, running in a cloud. Pretty cool, right? So let’s get started.
Machine Learning Setup
The first thing you need to do is sign up for OpenShift Online Dev Preview. That will give you access to an environment where you can deploy a machine learning app. We also need to make sure that you have the “oc” tools and docker installed on your local machine. Finally, you’ll need to fork the Tensorshift Github repo and clone it to your machine. I’ve gone ahead and provided the links here to make it easier.
Since we’ll be uploading our tensorshift image to the OpenShift Online docker registry in the next step, we need to make sure it’s tagged appropriately so it ends up in the right place, hence the -t registry.preview.openshift.com/nick-tensorflow/tensorshift we appended to our docker build ./ command.
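Putting that together, the build command looks something like this (run from the repo root; the registry host and project name mirror the example tag above, so substitute your own):

docker build -t registry.preview.openshift.com/nick-tensorflow/tensorshift ./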
Once you hit enter, you’ll see docker start to build the image from the Dockerfile included in your repo (feel free to take a look at it to see what’s going on there). Once that’s complete, you should be able to run docker images and see that it’s been added.
Example output of `docker images` to show the newly built tensorflow image
Pushing TensorShift to the OpenShift Online Docker Registry
Now that we have the image built and tagged we need to upload it to the OpenShift Online Registry. However, before we do that we need to authenticate to the OpenShift Docker Registry:
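A typical way to do that is to log in with your OpenShift session token and then push the tagged image (the registry host and tag here are carried over from the build step above):

docker login -u $(oc whoami) -p $(oc whoami -t) registry.preview.openshift.com
docker push registry.preview.openshift.com/nick-tensorflow/tensorshift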
So far you’ve built your own Tensorflow docker image and published to the OpenShift Online Docker registry, well done!
Next, we’ll tell OpenShift to deploy our app using our Tensorflow image we built earlier.
oc new-app <image_name> --name=<appname>
You should now have a running containerized Tensorflow instance orchestrated by OpenShift and Kubernetes! How rad is that!
There’s one more thing we need to do before we can access it through the browser. Admittedly, this next step is needed because I haven’t gotten around to fully integrating the Tensorflow docker image into the complete OpenShift workflow, but it’ll take all of 5 seconds for you to fix.
You need to go to your app in OpenShift and delete the service that’s running. Here’s an example of how to use the web console to do it.
Example of how to delete the preconfigured services created by the TensorShift Image
Because we’re using both Jupyter and Tensorboard in the same container for this tutorial we need to actually create the two services so we can access them individually.
Run these two oc commands to knock that out:
oc expose dc <appname> --port=6006 --name=tensorboard
oc expose dc <appname> --port=8888 --name=jupyter
Lastly, just create two routes so you can access them in the browser:
oc expose svc/tensorboard
oc expose svc/jupyter
That’s it for the setup! You should be all set to access your Tensorflow environment and Jupyter through the browser. Just run oc status to find the URLs:
$ oc status
In project Nick TensorShift (nick-tensorshift) on server https://api.preview.openshift.com:443
http://jupyter-nick-tensorshift.44fs.preview.openshiftapps.com to pod port 8888 (svc/jupyter)
dc/mlexample deploys istag/tensorshift:latest
deployment #1 deployed 14 hours ago - 1 pod
http://tensorboard-nick-tensorshift.44fs.preview.openshiftapps.com to pod port 6006 (svc/tensorboard)
dc/mlexample deploys istag/tensorshift:latest
deployment #1 deployed 14 hours ago - 1 pod
1 warning identified, use 'oc status -v' to see details.
On To The Fun Stuff
Get ready to pick up your Nintendo controller. Open <Linktoapp>:8888 and log into Jupyter using “Password”, then create a new notebook like so:
Example of how to create a jupyter notebook
Now paste the following code into your newly created notebook:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

learningRate = 0.01
trainingEpochs = 100

# Return evenly spaced numbers over a specified interval
xTrain = np.linspace(-2, 1, 200)

# Generate noisy labels: y = 2x plus Gaussian noise drawn from the standard normal distribution
yTrain = 2 * xTrain + np.random.randn(*xTrain.shape) * 0.33

# Create placeholders for tensors that will always be fed
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Define and construct a linear model
def model(X, w):
    return tf.multiply(X, w)  # tf.mul was renamed tf.multiply in TF 1.0

# Set model weights
w = tf.Variable(0.0, name="weights")
y_model = model(X, w)

# Define our cost function
costfunc = tf.square(Y - y_model)

# Use gradient descent to fit a line to the data
train_op = tf.train.GradientDescentOptimizer(learningRate).minimize(costfunc)

# Launch a tensorflow session and initialize the variables
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

# Execute everything
for epoch in range(trainingEpochs):
    for (x, y) in zip(xTrain, yTrain):
        sess.run(train_op, feed_dict={X: x, Y: y})
w_val = sess.run(w)
sess.close()

# Plot the data and the learned line
plt.scatter(xTrain, yTrain)
y_learned = xTrain * w_val
plt.plot(xTrain, y_learned, 'r')
plt.show()
Once you’ve pasted it in, hit ctrl + a (cmd + a for you mac users) to select it, then ctrl + enter (cmd + enter for mac), and you should see a graph similar to the following:
Let’s Review
That’s it! You just fed a machine a bunch of information and then told it to plot a line that fits the dataset. This line shows the “prediction” of what the value of a variable should be based on a single parameter. In other words, you just taught a machine to PREDICT something. You’re one step closer to Skynet – uh, I mean creating your own AI that won’t take over the world. How rad is that!
In the next blog, we’ll dive deeper into linear regression and I’ll go over how it all works. We’ll also feed our program a CSV file of actual data to try to predict house prices.
GPU-accelerated Theano & Keras on Windows 10 native
>> LAST UPDATED JANUARY, 2017 <<
There are certainly a lot of guides to help you build great deep learning (DL) setups on Linux or Mac OS (including with Tensorflow which, unfortunately, as of this posting, cannot be easily installed on Windows), but few care about building an efficient Windows 10-native setup. Most focus on running an Ubuntu VM hosted on Windows or using Docker, both unnecessary and ultimately sub-optimal steps.
We also found enough misleading or outdated information out there to make it worthwhile to put together a step-by-step guide for the latest stable versions of Theano and Keras. Used together, they make for one of the simplest and fastest DL configurations to work natively on Windows.
If you must run your DL setup on Windows 10, then the information contained here may be useful to you.
Dependencies
Here's a summary list of the tools and libraries we use for deep learning on Windows 10 (Version 1607 OS Build 14393.222):
Visual Studio 2015 Community Edition Update 3 w. Windows Kit 10.0.10240.0
Used for its C/C++ compiler (not its IDE) and SDK
Anaconda (64-bit) w. Python 2.7 (Anaconda2-4.2.0) or Python 3.5 (Anaconda3-4.2.0)
A Python distro that gives us NumPy, SciPy, and other scientific libraries
CUDA 8.0.44 (64-bit)
Used for its GPU math libraries, card driver, and CUDA compiler
MinGW-w64 (5.4.0)
Used for its Unix-like compiler and build tools (g++/gcc, make...) for Windows
Theano 0.8.2
Used to evaluate mathematical expressions on multi-dimensional arrays
Keras 1.1.0
Used for deep learning on top of Theano
OpenBLAS 0.2.14 (Optional)
Used for its CPU-optimized implementation of many linear algebra operations
cuDNN v5.1 (August 10, 2016) for CUDA 8.0 (Conditional)
Used to run vastly faster convolution neural networks
For an older setup using VS2013 and CUDA 7.5, please refer to README-2016-07.md (July, 2016 setup)
We like to keep our toolkits and libraries in a single root folder boringly called c:\toolkits, so whenever you see a Windows path that starts with c:\toolkits below, make sure to replace it with whatever you decide your own toolkit drive and folder ought to be.
Visual Studio 2015 Community Edition Update 3 w. Windows Kit 10.0.10240.0
You can download Visual Studio 2015 Community Edition from here:
Select the executable and let it decide what to download on its own:
Run the downloaded executable to install Visual Studio, using whatever additional config settings work best for you:
Add C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin to your PATH, based on where you installed VS 2015.
Define sysenv variable INCLUDE with the value C:\Program Files (x86)\Windows Kits\10\Include\10.0.10240.0\ucrt
Define sysenv variable LIB with the value C:\Program Files (x86)\Windows Kits\10\Lib\10.0.10240.0\um\x64;C:\Program Files (x86)\Windows Kits\10\Lib\10.0.10240.0\ucrt\x64
Reference Note: We couldn't run any Theano python files until we added the last two env variables above. We would get a c:\program files (x86)\microsoft visual studio 14.0\vc\include\crtdefs.h(10): fatal error C1083: Cannot open include file: 'corecrt.h': No such file or directory error at compile time and missing kernel32.lib uuid.lib ucrt.lib errors at link time. True, you could probably run C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvars64.bat (with proper params) every single time you open a MINGW cmd prompt, but, obviously, none of the sysenv vars would stick from one session to the next.
Anaconda (64-bit)
This tutorial was created with Python 2.7, but if you prefer to use Python 3.5 it should work too.
Depending on your installation use c:\toolkits\anaconda3-4.2.0 instead of c:\toolkits\anaconda2-4.2.0.
Download the appropriate Anaconda version from here:
Run the downloaded executable to install Anaconda in c:\toolkits\anaconda2-4.2.0:
Warning: Below, we enabled Register Anaconda as the system Python 2.7 because it works for us, but that may not be the best option for you!
Define sysenv variable PYTHON_HOME with the value c:\toolkits\anaconda2-4.2.0
Add %PYTHON_HOME%, %PYTHON_HOME%\Scripts, and %PYTHON_HOME%\Library\bin to PATH
After the Anaconda installation, open a command prompt and execute:
$ cd $PYTHON_HOME; conda install libpython
Note: The version of MinGW above is old (gcc 4.7.0). Instead, we will use MinGW 5.4.0, as shown below.
CUDA 8.0.44 (64-bit)
Run the downloaded installer. Install the files in c:\toolkits\cuda-8.0.44:
After completion, the installer should have created a system environment (sysenv) variable named CUDA_PATH and added %CUDA_PATH%\bin as well as %CUDA_PATH%\libnvvp to PATH. Check that this is indeed the case. If, for some reason, the CUDA env vars are missing, then:
Define a system environment (sysenv) variable named CUDA_PATH with the value c:\toolkits\cuda-8.0.44
Add %CUDA_PATH%\libnvvp and %CUDA_PATH%\bin to PATH
MinGW-w64 (5.4.0)
Install it to c:\toolkits\mingw-w64-5.4.0 with the following settings (second wizard screen):
Define the sysenv variable MINGW_HOME with the value c:\toolkits\mingw-w64-5.4.0
Add %MINGW_HOME%\mingw64\bin to PATH
Run the following to make sure all necessary build tools can be found:
$ where gcc; where g++; where cl; where nvcc; where cudafe; where cudafe++
$ gcc --version; g++ --version
$ cl
$ nvcc --version; cudafe --version; cudafe++ --version
You should get results similar to:
Theano 0.8.2
Version 0.8.2? Why not just install the latest bleeding-edge version of Theano since it obviously must work better, right? Simply put, because it makes reproducible research harder. If your work colleagues or Kaggle teammates install the latest code from the dev branch at a different time than you did, you will most likely be running different code bases on your machines, increasing the odds that even though you're using the same input data (the same random seeds, etc.), you still end up with different results when you shouldn't. For this reason alone, we highly recommend only using point releases, the same one across machines, and always documenting which one you use if you can't just use a setup script.
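One lightweight way to document the exact environment alongside your results (an illustrative habit, not something this guide mandates) is to snapshot the installed package versions:

$ conda list --export > environment-snapshot.txt # or: pip freeze > requirements.txt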
Clone a stable Theano release (0.8.2) from GitHub into c:\toolkits\theano-0.8.2 using the following commands:
$ cd /c/toolkits
$ git clone https://github.com/Theano/Theano.git theano-0.8.2 --branch rel-0.8.2
Install Theano as follows:
$ cd /c/toolkits/theano-0.8.2
$ python setup.py install --record installed_files.txt
In our case, this resulted in conflicts between 32-bit and 64-bit DLLs when trying to run Theano code.
OpenBLAS 0.2.14 (Optional)
If we're going to use the GPU, why install a CPU-optimized linear algebra library? It's true that, with our setup, most of the deep learning grunt work is performed by the GPU, but the CPU isn't idle. An important part of image-based Kaggle competitions is data augmentation. In that context, data augmentation is the process of manufacturing additional input samples (more training images) by transforming the original training samples via image processing operators. Basic transformations such as downsampling and (mean-centered) normalization are also needed. If you feel adventurous, you'll want to try additional pre-processing enhancements (noise removal, histogram equalization, etc.). You certainly could use the GPU for that purpose and save the results to file. In practice, however, those operations are often executed in parallel on the CPU while the GPU is busy learning the weights of the deep neural network, with the augmented data discarded after use. For this reason, we highly recommend installing the OpenBLAS library.
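As a rough illustration of the kind of CPU-side work we're talking about, here is a minimal NumPy sketch of two basic steps, horizontal flipping and mean-centered normalization (the batch shape is hypothetical):

import numpy as np

batch = np.random.rand(32, 64, 64)  # hypothetical batch of 32 grayscale 64x64 images
flipped = batch[:, :, ::-1]  # horizontal flip: one extra sample per original
centered = batch - batch.mean(axis=(1, 2), keepdims=True)  # mean-center each image
augmented = np.concatenate([batch, flipped])  # combined set, discarded after training use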
According to the Theano documentation, the multi-threaded OpenBLAS library performs much better than the un-optimized standard BLAS (Basic Linear Algebra Subprograms) library, so that's what we use.
Download OpenBLAS from here and extract the files to c:\toolkits\openblas-0.2.14-int32
Define sysenv variable OPENBLAS_HOME with the value c:\toolkits\openblas-0.2.14-int32
Theano only cares about the value of the sysenv variable named THEANO_FLAGS. All we need to do to tell Theano to use the CPU or GPU is to set THEANO_FLAGS to either THEANO_FLAGS_CPU or THEANO_FLAGS_GPU. You can verify those variables have been successfully added to your environment with the following command:
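$ env | grep -i theano

That check assumes you've already defined THEANO_FLAGS_CPU and THEANO_FLAGS_GPU as sysenv variables. A minimal sketch of plausible values, using standard Theano flags (floatX, device, lib.cnmem); the exact combination is our assumption:

THEANO_FLAGS_CPU=floatX=float32,device=cpu
THEANO_FLAGS_GPU=floatX=float32,device=gpu,lib.cnmem=0.8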
Note: If you get a failure of the kind NameError: global name 'CVM' is not defined, it may be because, like us, you've messed with the value of THEANO_FLAGS_CPU and switched back and forth between floatX=float32 and floatX=float64 several times. Cleaning your C:\Users\username\AppData\Local\Theano directory (replace username with your login name) will fix the problem (See here, for reference)
Checking our PATH sysenv var
At this point, the PATH environment variable should look something like:
We'll run the following program from the Theano documentation to compare the performance of the GPU install vs using Theano in CPU-mode. Save the code to a file named cpu_gpu_test.py in the current directory (or download it from this GitHub repo):
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
First, let's see what kind of results we get running Theano in CPU mode:
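Assuming the helper variables sketched earlier, that run is:

$ THEANO_FLAGS=$THEANO_FLAGS_CPU python cpu_gpu_test.py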
Note: If you get a c:\program files (x86)\microsoft visual studio 14.0\vc\include\crtdefs.h(10): fatal error C1083: Cannot open include file: 'corecrt.h': No such file or directory with the above, please see the Reference Note at the end of the Visual Studio 2015 Community Edition Update 3 section.
Almost a 68:1 improvement. It works! Great, we're done with setting up Theano 0.8.2.
Keras 1.1.0
Clone a stable Keras release (1.1.0) to your local machine from GitHub using the following commands:
$ cd /c/toolkits
$ git clone https://github.com/fchollet/keras.git keras-1.1.0 --branch 1.1.0
This should clone Keras 1.1.0 in c:\toolkits\keras-1.1.0:
Install it as follows:
$ cd /c/toolkits/keras-1.1.0
$ python setup.py install --record installed_files.txt
Verify Keras was installed by querying Anaconda for the list of installed packages:
$ conda list | grep -i keras
Recent builds of Keras can use either Tensorflow or Theano as a backend. At the time of this writing, TensorFlow supports only 64-bit Python 3.5 on Windows. This doesn't work for us, but if you are using Python 3.5, then by all means, feel free to give it a try. By default, we will use Theano as our backend, using the commands below:
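One way to select the backend in Keras 1.x is the KERAS_BACKEND environment variable (it overrides the backend entry in ~/.keras/keras.json); a quick sanity check:

$ KERAS_BACKEND=theano python -c "import keras"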
We can train a simple convnet (convolutional neural network) on the MNIST dataset by using one of the example scripts provided with Keras. The file is called mnist_cnn.py and can be found in the examples folder:
$ THEANO_FLAGS=$THEANO_FLAGS_GPU
$ cd /c/toolkits/keras-1.1.0/examples
$ python mnist_cnn.py
Without cuDNN, each epoch takes about 20s. If you install TechPowerUp's GPU-Z, you can track how well the GPU is being leveraged. Here, in the case of this convnet (no cuDNN), we max out at 92% GPU usage on average:
cuDNN v5.1 (August 10, 2016) for CUDA 8.0 (Conditional)
If you're not going to train convnets then you might not really benefit from installing cuDNN. Per NVidia's website, "cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers," hallmarks of convolution network architectures. Theano is mentioned in the list of frameworks that support cuDNN v5 for GPU acceleration.
If you are going to train convnets, then download cuDNN from here. Choose the cuDNN Library for Windows10 dated August 10, 2016:
The downloaded ZIP file contains three directories (bin, include, lib). Extract those directories and copy the files they contain to the identically named folders in C:\toolkits\cuda-8.0.44.
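A sketch of that copy from a MinGW shell, assuming you extracted the ZIP to c:\toolkits\cudnn-5.1 (the folder name is illustrative):

$ cp -r /c/toolkits/cudnn-5.1/bin/. /c/toolkits/cuda-8.0.44/bin/
$ cp -r /c/toolkits/cudnn-5.1/include/. /c/toolkits/cuda-8.0.44/include/
$ cp -r /c/toolkits/cudnn-5.1/lib/. /c/toolkits/cuda-8.0.44/lib/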
To enable cuDNN, create a new sysenv variable named THEANO_FLAGS_GPU_DNN with the following value:
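A plausible value is the GPU flags from earlier with cuDNN enabled (optimizer_including=cudnn is a standard Theano flag; the exact combination is our assumption):

floatX=float32,device=gpu,optimizer_including=cudnn,lib.cnmem=0.8

Then rerun the convnet example with the new flags: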
$ THEANO_FLAGS=$THEANO_FLAGS_GPU_DNN
$ cd /c/toolkits/keras-1.1.0/examples
$ python mnist_cnn.py
Note: If you get a cuDNN not available message after this, try cleaning your C:\Users\username\AppData\Local\Theano directory (replace username with your login name). If you get an error similar to cudnn error: Mixed dnn version. The header is from one version, but we link with a different version (5010, 5005), try cuDNN v5.0 instead of cuDNN v5.1. Windows will sometimes also helpfully block foreign .dll files from running on your computer. If that is the case, right click and unblock the files to allow them to be used.
Here's the (cleaned up) execution log for the simple convnet Keras example, using cuDNN:
Now, each epoch takes about 3s, instead of 20s, a large improvement in speed, with slightly lower GPU usage:
The "Your cuDNN version is more recent than the one Theano officially supports" message certainly sounds ominous, but a test accuracy of 0.9899 suggests that it can be safely ignored. So...
Thanks to Kaggler Vincent L. for recommending adding dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic to THEANO_FLAGS_GPU_DNN in order to improve reproducibility with no observable impact on performance.
If you'd rather use Python3, conda's built-in MinGW package, or pip, please refer to @stmax82's note here.
Suggested viewing/reading
Intro to Deep Learning with Python, by Alec Radford
If you get an error about “CVM,” you must delete the cache files that are in C:\Users\MyUsername\AppData\Local\Theano. Once you delete everything, start python again and continue from there.
If you have path issues when trying to import theano, try using the Visual Studio 64-bit command prompt if you have it. It sets a bunch of paths for you and “just works” for me. For reference, the path I use is: