submitted 1 year ago by Dirk@lemmy.ml to c/selfhosted@lemmy.world

Currently I’m planning to dockerize some web applications, but I haven’t found a reasonably easy way to create the images, host them in my repository, and then pull them onto my server.

What I currently have is:

  1. A local computer with a directory where the application that I want to dockerize is located
  2. A “docker server” running Portainer without shell/ssh access
  3. A place where I can upload/host the Docker images and where I can pull the images from on the “Docker server”
  4. Basic knowledge of how to write the needed Dockerfile

What I now need is a sane way to build the images WITHOUT setting up a fully featured Docker environment on the local computer.

Ideally something where I can build and upload the images without it littering Docker-related files all over my system.

Something like a VM that resets on every start maybe? So … build the image, upload to repository, close the terminal window, and forget that anything ever happened.
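
Roughly, the workflow I have in mind looks something like this (registry URL and image name are just placeholders, and the sketch assumes a plain docker CLI for illustration):

```
# Build the image from the Dockerfile in the current directory,
# tagging it directly with the registry it will be pushed to.
docker build -t registry.example.com/myapp:latest .

# Upload the image to the repository.
docker push registry.example.com/myapp:latest

# Remove the local copy and any leftover build cache.
docker rmi registry.example.com/myapp:latest
docker builder prune -f
```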

What is YOUR solution to create and upload Docker images in a clean and sane way?

[-] Dirk@lemmy.ml 1 points 1 year ago

I have no problem with Docker creating several images and containers and volumes for building a single-image application. The problem is that it does not clean up afterwards and leaves me with multiple things I don’t need for anything else.

I also don’t care about caching or any “magic” stuff. I just ideally want to run one command (or a script doing it for me) to build an image, resulting in just this one image without any other traces left. … I just like a clean environment, and ideally the build process should be self-contained.

But I’ll look into your suggestions, thanks!

[-] agressivelyPassive@feddit.de 1 points 1 year ago

I seriously don't understand what leftovers you're talking about.

You essentially have a Dockerfile that describes how you want to build your image; you run docker build with the path of your Dockerfile and the path of the context, and the rest is completely up to you. Docker does not leave that many traces around: only the built images within Docker itself, but as I said, that's the point of building them.
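
For example, a typical invocation might look like this (the paths and image tag are just placeholders):

```
# -f points at the Dockerfile, -t names the resulting image,
# and the final argument is the build context directory.
docker build -f docker/Dockerfile -t myapp:latest .
```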

You can even export the image into a tar file and run docker system prune afterwards; that should leave only the exported tar file.
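
Roughly like this, assuming the image is called myapp (the name is just a placeholder):

```
# Write the built image to a tar archive.
docker save -o myapp.tar myapp:latest

# Remove unused containers, networks, build cache and, with -a,
# all images not used by a container, including the one just exported.
docker system prune -a -f
```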

[-] Dirk@lemmy.ml 1 points 1 year ago

When I built an image last time, there were several other unused images named only with hashes, two unused volumes, and multiple cache files and other files in various subfolders of the user’s home directory.

[-] adora@kbin.social 1 points 1 year ago

It's very possible they weren't unused.
Docker builds its images out of layers, and all the layers are used at runtime:
https://sweetcode.io/understanding-docker-image-layers/

The idea is that you can essentially change PARTS of an image, without rebuilding it entirely, which saves space and bandwidth.
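
If you want to see those layers for yourself, something like this shows them (image name is a placeholder):

```
# Show every layer of the image, the command that created it, and its size.
docker history myapp:latest
```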
