How to use TensorFlow from one Docker container in another container with Django? Linking containers?


I need to link two containers: my main container runs Django and the secondary one runs TensorFlow. I am creating both containers with docker-compose and both are created correctly, but I need to be able to enter the Django container, run python, and do `import tensorflow as tf` there, since the project that will run in the Django container uses TensorFlow. I have had problems building a single container that has both Django and TensorFlow; that is why I am creating two containers, but I need to link them so the Django project can use TensorFlow.

This is my Dockerfile, based on python:3

 FROM python:3
 MAINTAINER Eduardo Barrios
 ENV PYTHONUNBUFFERED 1
 RUN mkdir /code
 WORKDIR /code
 ADD requirements.txt /code/     
 RUN pip install -r requirements.txt
 ADD . /code/

My requirements.txt file

 Django==2.1
 psycopg2
 numpy
 opencv-python
 opencv-contrib-python

And the docker-compose.yml

 version: '3'

 services:
   web:  
     image: ebarrioscode/django_python
     container_name: django_python
     build: .
     command: python manage.py runserver 0.0.0.0:8081
     volumes:
       - .:/code
     ports:
       - "8081:8081"   
     depends_on:
       - tensorflow
     links:
       - tensorflow
       - tensorflow:8888

   tensorflow:
     image: tensorflow/tensorflow:1.0.0-gpu
     container_name: tensorflow_python
     ports:
       - 8888:8888

Update

I have changed my Dockerfile and docker-compose. Now I use only one container, which contains TensorFlow, and I install Django in it with `RUN pip install django` in the Dockerfile. To start the Django project I access the container and, from inside, run `python manage.py runserver 0.0.0.0:8081`.

I am exposing port 8081 in the Dockerfile so that my Django application is served on that port; the same port is also published in docker-compose.yml.
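As a note, `EXPOSE` in the Dockerfile is essentially documentation between images; what actually makes the port reachable from the host is the `ports:` mapping in docker-compose.yml:

```yaml
ports:
  - "8081:8081"   # host:container; the publishing happens here, not in EXPOSE
```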

So this is my Dockerfile now

 FROM nvidia/cuda:8.0-devel-ubuntu16.04
 MAINTAINER Eduardo Barrios

 COPY ./keyboard /etc/default/keyboard
 RUN apt-get update 

 RUN pip install --upgrade python
 RUN apt-get --yes --force-yes install python-pip
 RUN pip install numpy
 RUN pip install scipy
 RUN pip install plotly
 RUN pip install tflearn
 RUN apt-get install python-opencv -y
 RUN apt-get update
 RUN pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0-cp27-none-linux_x86_64.whl
 RUN python -m pip install jupyter
 RUN pip install django

 ENV CUDA_VERSION 8.0.61

 ENV CUDA_PKG_VERSION 8-0=$CUDA_VERSION-1
 RUN apt-get update && apt-get install -y --no-install-recommends \
    cuda-nvrtc-$CUDA_PKG_VERSION \
    cuda-nvgraph-$CUDA_PKG_VERSION \
    cuda-cusolver-$CUDA_PKG_VERSION \
    cuda-cublas-8-0=8.0.61.2-1 \
    cuda-cufft-$CUDA_PKG_VERSION \
    cuda-curand-$CUDA_PKG_VERSION \
    cuda-cusparse-$CUDA_PKG_VERSION \
    cuda-npp-$CUDA_PKG_VERSION \
    cuda-cudart-$CUDA_PKG_VERSION && \
ln -s cuda-8.0 /usr/local/cuda && \
rm -rf /var/lib/apt/lists/*


 LABEL com.nvidia.volumes.needed="nvidia_driver"
 LABEL com.nvidia.cuda.version="${CUDA_VERSION}"

 RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
     echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

 ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
 ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64

 EXPOSE 6006
 EXPOSE 8081 
 EXPOSE 8886 
 EXPOSE 8888

 CMD jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root

This is the docker-compose.yml file

 version: '3'

 services:
   djangotf:
     image: ebarrioscode/tensorflow
     container_name: ia_project
     build: .
     volumes:
       - ./files:/notebooks
     ports:
       - "8081:8081"
       - "8888:8888"

The project is now working. I had some problems with port 8081, but they were solved by deleting the previously generated image: the cached image makes the build stage faster, but in this case the stale cache caused errors when exposing port 8081.

asked by Eduardo Barrios 06.08.2018 at 20:37

1 answer


Although this is more of a comment, it is too long to post as one, so I am writing it as an answer.

The Dockerfile you show in the update is not very efficient, because it contains many RUN commands. Each RUN command creates a new layer in the final image. The total number of layers is limited, so it is generally a good idea to minimize them; that is why several commands are often chained with && inside a single RUN.
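For example, the consecutive pip installs from the Dockerfile above could be collapsed into a single layer:

```dockerfile
# One RUN (one layer) instead of five separate ones:
RUN pip install numpy scipy plotly tflearn django
```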

For installing Python packages, the usual technique is to write a file called requirements.txt that contains the names of the packages to install (one per line), ideally pinning the exact version of each one. If you already have everything installed locally in a virtual environment, the pip freeze command will give you the list of installed packages and their versions; that way you ensure the container image is 100% compatible with the environment you used, even if the image is built later, when newer versions of the packages have been released.
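A minimal sketch of that workflow (the pinned versions pip freeze writes out will be whatever your local environment actually has; "Django==2.1" below is illustrative):

```shell
# Inside the local virtualenv where the project already works:
python3 -m pip freeze > requirements.txt
# requirements.txt now lists exact versions, one per line, e.g. "Django==2.1"
```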

The RUN apt-get update that you added is repeated later. I would put all the apt commands in a single RUN, placed before the pip install commands.

Finally, if you do not want to have to enter the container to launch the Django server by hand, you can create (outside the container) a script called lanzar-todo.sh with the following contents:

#!/bin/sh
# Launch jupyter in the background, otherwise the Django server never starts
jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root &
python manage.py runserver 0.0.0.0:8081

and copy that file into the container so that it is launched by the final CMD. In short, the Dockerfile would look like this:

FROM nvidia/cuda:8.0-devel-ubuntu16.04
MAINTAINER Eduardo Barrios

COPY ./keyboard /etc/default/keyboard
COPY ./requirements.txt /
COPY ./lanzar-todo.sh /

# Define the versions before they are used in the RUN below
ENV CUDA_VERSION 8.0.61
ENV CUDA_PKG_VERSION 8-0=$CUDA_VERSION-1

# Install cuda first
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    cuda-nvrtc-$CUDA_PKG_VERSION \
    cuda-nvgraph-$CUDA_PKG_VERSION \
    cuda-cusolver-$CUDA_PKG_VERSION \
    cuda-cublas-8-0=8.0.61.2-1 \
    cuda-cufft-$CUDA_PKG_VERSION \
    cuda-curand-$CUDA_PKG_VERSION \
    cuda-cusparse-$CUDA_PKG_VERSION \
    cuda-npp-$CUDA_PKG_VERSION \
    cuda-cudart-$CUDA_PKG_VERSION && \
    ln -s cuda-8.0 /usr/local/cuda && \
    rm -rf /var/lib/apt/lists/*

LABEL com.nvidia.volumes.needed="nvidia_driver"
LABEL com.nvidia.cuda.version="${CUDA_VERSION}"

RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
    echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64

# Then install python
RUN apt-get update && apt-get install --yes --force-yes \
    python-opencv python-pip  && \
    rm -rf /var/lib/apt/lists/*

# Install all the required python packages (django, etc.)
RUN pip install -r /requirements.txt

# Expose ports
EXPOSE 6006
EXPOSE 8081
EXPOSE 8886
EXPOSE 8888

# Launch jupyter and django (running it through sh avoids needing the exec bit)
CMD sh /lanzar-todo.sh

On the other hand, since you start from a base image that already has CUDA installed, I am not sure how much of the part that installs and configures CUDA is really necessary.

    
answered 07.08.2018 at 17:06