I need to link two containers: my main container runs Django and the secondary one runs TensorFlow. I am creating both containers with docker-compose and both are created correctly, but I need to enter the Django container and, when running python, be able to do import tensorflow as tf inside that container, since the project I am going to run in the Django container uses TensorFlow. I have had problems building a single container that has both Django and TensorFlow, which is why I am creating two containers, but I need to link them so I can use TensorFlow in the Django project.
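Concretely, the check I want to pass is roughly this (a sketch using the container name django_python from the compose file below):

docker exec -it django_python python -c "import tensorflow as tf; print(tf.__version__)"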
This is my Dockerfile, based on the python:3 image:
FROM python:3
MAINTAINER Eduardo Barrios
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
My requirements.txt file
Django==2.1
psycopg2
numpy
opencv-python
opencv-contrib-python
And the docker-compose.yml
version: '3'
services:
  web:
    image: ebarrioscode/django_python
    container_name: django_python
    build: .
    command: python manage.py runserver 0.0.0.0:8081
    volumes:
      - .:/code
    ports:
      - "8081:8081"
    depends_on:
      - tensorflow
    links:
      - tensorflow
      - tensorflow:8888
  tensorflow:
    image: tensorflow/tensorflow:1.0.0-gpu
    container_name: tensorflow_python
    ports:
      - 8888:8888
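As far as I can tell, the link only makes the tensorflow service reachable from the web container by its hostname on the compose network (it does not share a Python environment); I can check that with something like:

docker-compose exec web python -c "import socket; print(socket.gethostbyname('tensorflow'))"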
Update
I have changed my Dockerfile and docker-compose. Now I use only one container, which contains TensorFlow, and I install Django in it with RUN pip install django in the Dockerfile. The way to start the Django project is to enter the container and, from inside, run python manage.py runserver 0.0.0.0:8081.
I am exposing port 8081 in the Dockerfile so that my Django application is served on that port; the same port is also published in the docker-compose.yml.
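The manual start I am describing is roughly this (the container name ia_project comes from the compose file further below; the path to manage.py depends on where the project is mounted under /notebooks):

docker exec -it ia_project bash
# inside the container, from the directory that contains manage.py
python manage.py runserver 0.0.0.0:8081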
This is my Dockerfile now:
FROM nvidia/cuda:8.0-devel-ubuntu16.04
MAINTAINER Eduardo Barrios
COPY ./keyboard /etc/default/keyboard
RUN apt-get update
RUN apt-get --yes --force-yes install python-pip
RUN pip install --upgrade pip
RUN pip install numpy
RUN pip install scipy
RUN pip install plotly
RUN pip install tflearn
RUN apt-get install python-opencv -y
RUN apt-get update
RUN pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.0-cp27-none-linux_x86_64.whl
RUN python -m pip install jupyter
RUN pip install django
ENV CUDA_VERSION 8.0.61
ENV CUDA_PKG_VERSION 8-0=$CUDA_VERSION-1
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-nvrtc-$CUDA_PKG_VERSION \
cuda-nvgraph-$CUDA_PKG_VERSION \
cuda-cusolver-$CUDA_PKG_VERSION \
cuda-cublas-8-0=8.0.61.2-1 \
cuda-cufft-$CUDA_PKG_VERSION \
cuda-curand-$CUDA_PKG_VERSION \
cuda-cusparse-$CUDA_PKG_VERSION \
cuda-npp-$CUDA_PKG_VERSION \
cuda-cudart-$CUDA_PKG_VERSION && \
ln -s cuda-8.0 /usr/local/cuda && \
rm -rf /var/lib/apt/lists/*
LABEL com.nvidia.volumes.needed="nvidia_driver"
LABEL com.nvidia.cuda.version="${CUDA_VERSION}"
RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64
EXPOSE 6006
EXPOSE 8081
EXPOSE 8886
EXPOSE 8888
CMD jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root
This is the docker-compose.yml file
version: '3'
services:
  djangotf:
    image: ebarrioscode/tensorflow
    container_name: ia_project
    build: .
    volumes:
      - ./files:/notebooks
    ports:
      - "8081:8081"
      - "8888:8888"
The project is now working. I had some problems with port 8081, but they were resolved by deleting the previously generated image: the old image is cached, which makes the build stage faster, but that cache was causing errors when exposing port 8081.
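For reference, the clean rebuild I used to get rid of the cached image looks roughly like this (image name taken from the compose file above):

docker-compose down
docker rmi ebarrioscode/tensorflow
docker-compose build --no-cache
docker-compose up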