A lot of things must be considered when creating a Docker image, which can be overwhelming for beginners.
Here are some Docker best practices and gotchas I have collected over time, along with the reasoning behind them.

Avoid internal state

A Docker container should not rely on internal state. That means it should be possible to destroy a container and recreate it with minimal configuration effort.
This guideline originates from the use of Docker containers for web applications and microservices, where containers are spawned and destroyed regularly.
For development applications (e.g. dockerized development tools: compiler, linker, etc.), the container becomes easier to use if there is no internal state: everyone who runs the container works with the same environment.
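
The sketch below illustrates the idea; the image, container and volume names (my-app, app, app-data) are made up for illustration. Any data worth keeping lives in a volume, so the container itself can be destroyed and recreated at any time.

$ docker volume create app-data
$ docker run -d --name app -v app-data:/var/lib/app my-app
$ docker rm -f app        # destroy the container ...
$ docker run -d --name app -v app-data:/var/lib/app my-app        # ... and recreate it; the data in the volume survives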

Keep the build context slim

Keep the build context slim. When building a Docker image, the entire build context (a folder or URL; usually the folder in which the Dockerfile resides) is sent to the Docker daemon, and every file copied from it into the image adds to the final image size. Look for messages like the following when running the docker build command to find out how large the build context is.

$ docker build .
...
Sending build context to Docker daemon  133.7MB
...

To exclude files from the build context, a .dockerignore file can be used. It works similarly to the .gitignore file found in git repositories.
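
For example, a .dockerignore file placed next to the Dockerfile could look like this (the entries are purely illustrative):

# exclude version control data, local build output and logs from the build context
.git
build/
node_modules/
*.log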

Do not install unnecessary packages

Do not install any unnecessary packages. Every package you add to your Docker image will bloat the final image.
When using an Ubuntu base image, consider the --no-install-recommends flag for apt to install only the necessary dependencies. Otherwise apt will install additional “recommended” packages you may not want or need.

RUN apt update -qq \
    && apt install --no-install-recommends -y -qq gnat

Utilize multi-stage builds

Use a first stage Docker image to build your application, then copy the build artefacts into a second stage image. All packages that are only required for building remain in the first stage (build) image; only the packages and artefacts required to run the application end up in the second stage image.
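
A minimal sketch of such a Dockerfile, assuming a Go application purely for illustration (base images, paths and the build command are placeholders):

# Stage 1: build the application with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: only the runtime image and the build artefact
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]

Only the last stage ends up in the final image; the build stage with its toolchain is discarded after the build.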

Get, make and clean up in a single RUN instruction

Whenever possible, do everything you need to do with a dependency in a single RUN instruction. Each RUN instruction creates a new image layer, and files deleted in a later instruction still take up space in the layers created before; cleaning up in the same RUN instruction keeps temporary files out of the image entirely.
For a dependency which is installed via a package manager, an example could look like this:

RUN apt update -q \
    && apt install --no-install-recommends -y -q ${BUILD_DEPS} \
    && make foo \
    && apt purge -y --autoremove ${BUILD_DEPS}

For a dependency which is not installed via a package manager, an example could look like this:

RUN git clone https://github.com/some/repo /path/to/repo \
    && cd /path/to/repo \
    && ./configure \
    && make \
    && make install \
    && cd / \
    && rm -rf /path/to/repo
