Welcome to OGeek Q&A Community for programmers and developers: Open, Learning and Share

php - Multi-container Docker with PHP 7 fpm and nginx

I am having trouble setting up a multi-container Docker environment. The idea is pretty standard:

  • One container runs php-fpm
  • The other is an nginx proxy

My php-fpm Dockerfile is as simple as:

FROM php:7.0-fpm

# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd mysqli opcache

# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=2'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

VOLUME /var/www/html

CMD ["php-fpm"]

and the nginx one is even simpler:

FROM nginx

COPY conf.d/* /etc/nginx/conf.d/

Inside the conf.d folder there is a single file, default.conf:

server {
    listen 80;
    server_name priz-local.com;
    root /var/www/html;

    index index.php;

    location / {
        proxy_pass  http://website:9000;
        proxy_set_header   Connection "";
        proxy_http_version 1.1;
        proxy_set_header        Host            $host;
        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

And docker-compose.yml

website:
  build: ./website/
  ports:
   - "9000:9000"
  container_name: website
  external_links:
     - mysql:mysql
nginx-proxy:
  build: ./proxy/
  ports:
    - "8000:80"
  container_name: proxy
  links:
       - website:website

This exact setup works perfectly on AWS Elastic Beanstalk. However, in my local Docker environment I am getting errors such as:

2016/11/17 09:55:36 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: priz-local.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:9000/", host: "priz-local.com:8888"
172.17.0.1 - - [17/Nov/2016:09:55:36 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36" "-"

UPDATE: If I log into the proxy container and try to curl the other one, I get this:

root@4fb46a4713a8:/# curl http://website
curl: (7) Failed to connect to website port 80: Connection refused
root@4fb46a4713a8:/# curl http://website:9000
curl: (56) Recv failure: Connection reset by peer

Another thing I tried is:

server {
    listen 80;
    server_name priz-local.com;
    root /var/www/html;

    #index index.php;
    #charset UTF-8;

    #gzip on;
    #gzip_http_version 1.1;
    #gzip_vary on;
    #gzip_comp_level 6;
    #gzip_proxied any;
    #gzip_types text/plain text/xml text/css application/x-javascript;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /nginx_status {
        stub_status on;
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {

        set $nocache "";
        if ($http_cookie ~ (comment_author_.*|wordpress_logged_in.*|wp-postpass_.*)) {
           set $nocache "Y";
        }

        fastcgi_pass  website:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        include fastcgi_params;

        #fastcgi_cache_use_stale error timeout invalid_header http_500;
        #fastcgi_cache_key $host$request_uri;
        #fastcgi_cache example;
        #fastcgi_cache_valid 200 1m;
        #fastcgi_cache_bypass $nocache;
        #fastcgi_no_cache $nocache;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        allow all;
        expires max;
        log_not_found off;

        fastcgi_pass  wordpress:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        include fastcgi_params;
    }
}

The site started to work, but all the resources (js|css|png|jpg|jpeg|gif|ico) now return 403.

What am I missing?

1 Reply


After a very long chat with R0MANARMY and a lot of help from him, I think I finally understand the root of the problem.

The main issue here is that I was not using Docker as it was intended to be used.

Another cause is that fpm is not a web server, and the only way to proxy to it is through FastCGI (or at least, a plain proxy_pass does not work in this case).
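
The difference can be shown side by side in nginx terms (a sketch, assuming the container name website from the compose file):

```nginx
# proxy_pass speaks HTTP; php-fpm on port 9000 speaks the FastCGI
# protocol instead, so this handoff fails (connection reset / 502):
#     proxy_pass http://website:9000;
#
# fastcgi_pass speaks the FastCGI protocol that fpm expects:
location ~ \.php$ {
    fastcgi_pass  website:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include       fastcgi_params;
}
```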

So, the correct way to set it up is:

  1. Mount the code volume into both containers.
  2. Configure nginx to pass PHP scripts to the php container over FastCGI.
  3. Configure the virtual host to serve static assets directly from nginx.
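
Those three steps can be sketched in compose form (the ./code and ./default.conf paths are placeholders, not from my actual setup):

```yaml
website:                    # php-fpm container
  build: ./website/
  volumes:
    - ./code:/var/www/html                           # step 1: same code volume...
proxy:                      # nginx container
  image: nginx
  links:
    - website:website
  ports:
    - "8000:80"
  volumes:
    - ./code:/var/www/html                           # step 1: ...mounted in both containers
    - ./default.conf:/etc/nginx/conf.d/default.conf  # steps 2 and 3 live in this vhost
```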

Here are a couple of examples of how to do it:

http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/

https://ejosh.co/de/2015/08/wordpress-and-docker-the-correct-way/

UPDATE: Adding the actual solution that worked for me.

For faster turnaround, I decided to use docker-compose, and my docker-compose.yml looks like this:

website:
  build: ./website/
  container_name: website
  external_links:
    - mysql:mysql
  volumes:
    - ~/Dev/priz/website:/var/www/html
  environment:
    WORDPRESS_DB_USER: **
    WORDPRESS_DB_PASSWORD: ***
    WORDPRESS_DB_NAME: ***
    WORDPRESS_DB_HOST: ***
proxy:
  image: nginx
  container_name: proxy
  links:
    - website:website
  ports:
    - "9080:80"
  volumes:
    - ~/Dev/priz/website:/var/www/html
    - ./deployment/proxy/conf.d/default.conf:/etc/nginx/conf.d/default.conf

Now, the most important piece of information here is that I am mounting exactly the same code into both containers. The reason is that FastCGI cannot serve static files (at least as far as I understand), so the idea is to serve them directly through nginx.
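
One way to make that explicit in the vhost is a dedicated static-asset location (a sketch of mine; the extensions list is illustrative):

```nginx
# Static assets come straight from the shared volume; FastCGI is
# never involved, which avoids the 403s from the earlier attempt:
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    try_files $uri =404;
    expires max;
    log_not_found off;
    access_log off;
}
```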

My default.conf file looks like this:

server {
    listen 80;
    server_name localhost;
    root /var/www/html;

    index index.php;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location /nginx_status {
        stub_status on;
        access_log off;
    }

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass website:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_intercept_errors on;
        include fastcgi_params;
    }
}

So, this config proxies PHP requests to be handled by the fpm container, while everything else is served from the locally mounted volume.
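
To trace how requests flow through this config (an illustrative walkthrough, not extra configuration):

```nginx
# GET /about/    -> location /        -> try_files finds no file -> rewritten to /index.php?q=/about/
# GET /index.php -> location ~ \.php$ -> fastcgi_pass website:9000 (fpm executes the script)
# GET /logo.png  -> location /        -> try_files finds the file; nginx serves it from the volume
```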

That's it. I hope it will help someone.

There are only a couple of remaining issues with it:

  1. Sometimes http://localhost:9080 downloads the index.php file instead of executing it.
  2. cURL'ing from a PHP script to the outside world takes a really long time, and I am not sure how to even start debugging this.
