How do I run a linux daemon in a container linked to another container with docker-compose?

I have the following docker-compose.yml file which runs nginx with PHP support:

version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always

The PHP application inside /var/www/my-app needs to communicate with a linux daemon (let's call it myappd).

The way I see it, I need to either:

  1. Copy myappd into the nginx container at /usr/local/bin, make it executable with chmod +x, and run it in the background.
  2. Create a separate container, copy myappd to /usr/local/bin, make it executable with chmod +x, and run it in the foreground.

Now, I'm new to Docker and still researching and learning about it, but my best guess, given that I'm using Docker Compose, is that option 2 is probably the recommended one. With my limited knowledge of Docker, I'd guess that this container would need some sort of linux-based image (like Ubuntu) to run the binary. So maybe option 1 is preferred? Or maybe option 2 is possible with a minimal Ubuntu image, or even without such an image?

Either way, I have no idea how I would implement that in the compose file. Especially for option 2: how would the PHP application communicate with the daemon in a different container? Would just "sharing" a volume (where the binary is located), as I did for the nginx/php services, suffice? Or is something else required?

The simple answer is to add a command entry to the php service in docker-compose.yml.

Given that myappd is at ./my-app/ on the host machine and at /var/www/my-app/ inside the container, the updated docker-compose.yml is something like the following.

version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    command: ["/bin/sh", "-c", "/var/www/my-app/myappd & php-fpm"]
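
Note that command replaces the default command of the php:7.1-fpm image, so php-fpm has to be started explicitly; here myappd is put in the background and php-fpm stays in the foreground as the container's main process. Assuming you bring the stack up with docker-compose, you can check that both started by following the service logs:

docker-compose up -d
docker-compose logs -f php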

A better answer is to create a third container which runs the linux daemon.

The new Dockerfile is something like the following.

FROM debian:jessie

# Copy the daemon binary into the image
COPY ./myappd /usr/src/app/

# Port the daemon listens on
EXPOSE 44444

# Exec form needs a double-quoted JSON array; sh -c runs the CMD string
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/usr/src/app/myappd"]

Build the image and tag it myapp/myappd.
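
Assuming the Dockerfile and the myappd binary are in the same directory, the build command would be something like:

docker build -t myapp/myappd .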

The updated docker-compose.yml is something like the following.

version: '3'
services:
  nginx:
    container_name: my-app-nginx
    image: nginx:1.13.6
    ports:
      - 8080:80
    volumes:
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf
      - ./my-app:/var/www/my-app
    restart: always
    networks:
      - network1
    depends_on:
      - php
  php:
    container_name: my-app-php
    image: php:7.1-fpm
    volumes:
      - ./my-app:/var/www/my-app
    restart: always
    networks:
      - network1
    depends_on:
      - daemon
  daemon:
    container_name: my-app-daemon
    image: myapp/myappd
    restart: always
    networks:
      - network1
networks:
  network1:

You can send requests to the hostname daemon from inside php: a Docker container can resolve the hostname of another container on the same network.
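
For example, if myappd listens on TCP port 44444 (the port exposed in the Dockerfile above), a minimal sketch of the PHP side could look like this; the "PING" request and line-based response are only placeholders for whatever protocol your daemon actually speaks:

<?php
// "daemon" is the compose service name; Docker's DNS resolves it to the
// container's IP address on network1.
$fp = fsockopen('daemon', 44444, $errno, $errstr, 5);
if ($fp === false) {
    die("Could not connect to myappd: $errstr ($errno)");
}

// Placeholder request/response exchange; replace with your daemon's protocol.
fwrite($fp, "PING\n");
$response = fgets($fp);
fclose($fp);

echo $response;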