To prune the Docker build cache and reclaim disk space, use the `docker builder prune` command.
Quick Commands
- Remove dangling build cache:

```bash
docker builder prune
```

This removes cache for builds that are no longer associated with a tagged image.

- Remove ALL build cache:

```bash
docker builder prune -a
```

This wipes the entire build cache, including cache for images you are currently using.

- Force without confirmation: Add the `-f` (or `--force`) flag to bypass the prompt.
Alternative: System-wide Cleanup
If you want to clean up more than just the build cache, you can use `docker system prune`.
- What it removes: All stopped containers, unused networks, dangling images, and unused build cache.
- To include volumes: Use `docker system prune --volumes`.
Advanced Cache Management
- Check usage: Run `docker buildx du` to see exactly how much space the build cache is using before you delete it.
- Filter by age: Use `--filter "until=24h"` to only remove cache older than a specific timeframe.
- Set a budget: Use `--keep-storage <size>` (e.g., `10gb`) with `docker buildx prune` to keep the most recent cache while staying under a specific disk limit.
To see how much space you can reclaim by removing unused objects (such as dangling images or stopped containers), use the `docker system df` command.
Understanding the Output
Running `docker system df` provides a summary table:
- TYPE: The resource (Images, Containers, Volumes, Build Cache).
- TOTAL: Total number of that resource.
- ACTIVE: Number currently being used by running containers.
- SIZE: Total disk space occupied.
- RECLAIMABLE: The space you will get back if you run a prune command.
Detailed Breakdowns
- See specific containers: To identify exactly which stopped containers are taking up the most space, run:

```bash
docker system df -v
```

The `-v` (verbose) flag lists every container, its status, and its size.

- Check writable layers: If you want to see the size of all containers (including running ones) and their writable layers, use:

```bash
docker ps -as
```

The `-s` flag adds a “Size” column showing how much data has been written to that specific container’s layer.
Pro-Tip: Watch for “Hidden” Space
Note that `docker system df` might not account for container logs, which can sometimes grow into gigabytes. If the reported “Reclaimable” space seems low but your disk is full, check `/var/lib/docker/containers/<id>/` for large `.log` files.
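To surface those files quickly, a small helper can list the largest logs first. This is a sketch (the function name `largest_logs` is my own); point it at `/var/lib/docker/containers` as root to inspect container logs:

```shell
# largest_logs: print the biggest *.log files under a directory, largest first.
# Hypothetical helper name; works on any directory tree.
largest_logs() {
  local dir="$1" count="${2:-10}"
  find "$dir" -name "*.log" -exec du -h {} + | sort -hr | head -n "$count"
}
```

For example: `sudo bash -c '. ./largest_logs.sh; largest_logs /var/lib/docker/containers 10'`.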
To automatically limit log sizes and prevent them from filling up your disk, you should configure Log Rotation. This can be done globally for all new containers or individually for specific ones.
1. Global Limit (Recommended)
You can set a default limit for all newly created containers by editing or creating the Docker daemon configuration file (usually at `/etc/docker/daemon.json` on Linux):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
- `max-size`: The maximum size of a single log file before it rotates (e.g., `10m` for 10MB).
- `max-file`: The maximum number of rotated files to keep. In this case, Docker will keep up to 30MB of logs per container (3 files × 10MB).
- Apply changes: Run `sudo systemctl restart docker`. Note: This only affects containers created after the restart.
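A malformed `daemon.json` will prevent the Docker daemon from starting at all, so it is worth validating the file before restarting. A minimal check, assuming `python3` is on the host (the `validate_json` helper name is my own):

```shell
# Validate JSON syntax before restarting Docker; a parse error here means
# the daemon would refuse to start with this config file.
validate_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1 && echo "OK" || echo "INVALID"
}
```

Usage: `validate_json /etc/docker/daemon.json` should print `OK` before you restart.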
2. Docker Compose Limit
To limit logs for a specific service in a `docker-compose.yml` file:

```yaml
services:
  app:
    image: my-app
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
```
3. Single Container (Docker Run)
If you are starting a container manually, use the `--log-opt` flags:

```bash
docker run --log-opt max-size=10m --log-opt max-file=3 my-image
```
4. Emergency: Truncate Existing Logs
If a container is already running and its logs are too big,
you cannot change the limit without recreating it.
However, you can safely clear the current log file without stopping the container:
```bash
truncate -s 0 "$(docker inspect --format='{{.LogPath}}' <container_name_or_id>)"
```
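To see why this is safe, reproduce the behavior on a throwaway file: `truncate -s 0` empties the file in place without deleting it, so the inode (and Docker's open file handle on it) stays valid:

```shell
# Demonstration on a scratch file: truncation zeroes the size, but the
# file itself (and any handle a process holds on it) survives.
f=$(mktemp)
echo "old log data" > "$f"
truncate -s 0 "$f"
ls -l "$f"   # still present, now 0 bytes
```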
When a container reaches its log size limit, it performs a rotation rather than stopping the container or cutting off future logs.
The Rotation Process
- Creation of a New File: When the current log file (e.g., `container-id-json.log`) hits the `max-size` you defined, Docker renames it (e.g., `container-id-json.log.1`) and starts a fresh, empty log file for new entries.
- Deleting the Oldest Logs: If the number of files exceeds your `max-file` setting, Docker permanently deletes the oldest log file to make room for the new one.
- Continuous Cycle: This creates a “rolling window” of logs. Your total disk usage for that container will never exceed `max-size` multiplied by `max-file`.
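The rolling window above can be sketched in plain shell. This is a toy illustration of the json-file driver's behavior, not Docker's actual implementation, and the file names are hypothetical:

```shell
# Toy sketch of log rotation: the current file becomes .1, older files
# shift up one suffix, and the file beyond max_file is deleted.
rotate() {
  local log="$1" max_file="$2" i
  rm -f "$log.$((max_file - 1))"            # delete the oldest rotated file
  for ((i = max_file - 2; i >= 1; i--)); do
    if [ -f "$log.$i" ]; then
      mv "$log.$i" "$log.$((i + 1))"        # shift: .1 -> .2, .2 -> .3, ...
    fi
  done
  mv "$log" "$log.1"                        # current log becomes .1
  : > "$log"                                # start a fresh, empty current file
}
```

With `max_file=3`, at most `app.log`, `app.log.1`, and `app.log.2` ever exist, so disk use stays bounded by max-size × max-file.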
Key Behaviors to Note
- No Service Interruption: The application inside the container continues to run and log as normal; it is unaware that rotation is happening.
- Retention: Once the oldest file is deleted, those logs are gone forever. If you need to keep logs long-term, you should use a log shipping tool (like Fluentd or ELK) before they rotate out.
- “Dangling” Logs: Changing global settings in `daemon.json` does not affect containers that are already running. You must recreate existing containers (stop, remove, and start again) for them to pick up the new rotation limits.
- Blocking vs. Non-blocking: By default, Docker uses blocking mode. If the disk is so full that Docker cannot even write the small rotated file, the application might experience “backpressure” and slow down while waiting to write its output.
This script takes the logic you provided, extracts the container ID from the file path, and uses docker inspect to pull the specific metadata you’re looking for.
The Bash Script
```bash
#!/bin/bash
# Ensure the script is run with sudo/root to access /var/lib/docker
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root or with sudo"
  exit 1
fi

printf "%-10s %-20s %-30s %-s\n" "SIZE" "ID" "IMAGE" "MOUNTS"
echo "------------------------------------------------------------------------------------------------"

for log_path in $(find /var/lib/docker/containers/ -name "*.log"); do
  # Get the human-readable size
  size=$(du -sh "$log_path" | cut -f1)

  # Extract the container ID from the path (the directory name)
  container_id=$(echo "$log_path" | cut -d'/' -f6)

  # Inspect the container for Image and Mounts
  # We use --format to get clean, pipe-separated strings
  info=$(docker inspect --format '{{.Config.Image}} | {{range .Mounts}}{{.Source}} -> {{.Destination}}, {{end}}' "$container_id" 2>/dev/null)

  if [ $? -eq 0 ]; then
    image_name=$(echo "$info" | cut -d'|' -f1 | xargs)
    mounts=$(echo "$info" | cut -d'|' -f2 | xargs)
    # Output the formatted line
    printf "%-10s %-20s %-30s %-s\n" "$size" "${container_id:0:12}" "$image_name" "$mounts"
  else
    # If the container no longer exists (race condition)
    printf "%-10s %-20s %-s\n" "$size" "${container_id:0:12}" "Container not found (dangling log)"
  fi
done
```
How it works:
- Path Parsing: It uses `cut -d'/' -f6` to grab the long ID string from the directory path in `/var/lib/docker/containers/`.
- Docker Inspect: It queries Docker directly for the Image name and a list of all Mounts (Volumes and Bind Mounts).
- Formatting:
  - It shortens the Container ID to 12 characters for readability.
  - It formats the Mounts as `Source -> Destination` so you can see exactly where the data is mapped on your host.
- Error Handling: If a log file exists but the container was recently deleted (a “dangling” log), it will flag it as “not found” so you know you can safely delete that log file.
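The path-parsing step can be verified in isolation: splitting the default log path on `/` puts the long container ID in field 6 (the ID below is a made-up example):

```shell
# Field 1 is the empty string before the leading slash, so:
#   f2=var  f3=lib  f4=docker  f5=containers  f6=<container id>
log_path="/var/lib/docker/containers/4f5e6d7c8b9a4f5e6d7c8b9a4f5e6d7c/4f5e6d7c8b9a-json.log"
container_id=$(echo "$log_path" | cut -d'/' -f6)
echo "${container_id:0:12}"   # the short ID, as shown by docker ps
```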
Quick Tip
If you want to sort this list to see the largest logs at the bottom, pipe the script output to `sort`:

```bash
sudo ./your_script_name.sh | sort -h
```
Updated script with a running total:

```bash
#!/bin/bash
# Ensure the script is run with sudo/root
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root or with sudo"
  exit 1
fi

# Accumulator for total size in KB
total_kb=0

printf "%-10s %-20s %-30s %-s\n" "SIZE" "ID" "IMAGE" "MOUNTS"
echo "------------------------------------------------------------------------------------------------"

for log_path in $(find /var/lib/docker/containers/ -name "*.log"); do
  # Get size in KB for calculation and human-readable for display
  kb_size=$(du -k "$log_path" | cut -f1)
  human_size=$(du -sh "$log_path" | cut -f1)

  # Add to total
  total_kb=$((total_kb + kb_size))

  # Extract ID
  container_id=$(echo "$log_path" | cut -d'/' -f6)

  # Get metadata
  info=$(docker inspect --format '{{.Config.Image}} | {{range .Mounts}}{{.Source}} -> {{.Destination}}, {{end}}' "$container_id" 2>/dev/null)

  if [ $? -eq 0 ]; then
    image_name=$(echo "$info" | cut -d'|' -f1 | xargs)
    mounts=$(echo "$info" | cut -d'|' -f2 | xargs | sed 's/,$//')  # Remove trailing comma
    printf "%-10s %-20s %-30s %-s\n" "$human_size" "${container_id:0:12}" "$image_name" "$mounts"
  else
    printf "%-10s %-20s %-s\n" "$human_size" "${container_id:0:12}" "Container not found (dangling log)"
  fi
done

# Convert total KB to MB or GB for the final summary
total_human=$(numfmt --to=iec --from-unit=1024 "$total_kb")
echo "------------------------------------------------------------------------------------------------"
echo "TOTAL LOG DISK USAGE: $total_human"
```
What changed:
- `total_kb` variable: It now tracks the running sum using `du -k` (which forces the output to kilobytes, making the math easy).
- `numfmt`: This utility converts the large kilobyte sum back into a readable format like 1.2G or 450M at the very end.
- Clean Mounts: A small `sed` command strips the trailing comma from the list of mounts for a cleaner look.
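The `numfmt` step can be checked standalone (this assumes GNU coreutils, which ships `numfmt`): `--from-unit=1024` tells it each input unit is one KiB, and `--to=iec` renders the result with K/M/G suffixes:

```shell
# 983040 KiB is exactly 960 MiB, so numfmt renders it as "960M"
numfmt --to=iec --from-unit=1024 983040
```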
How to use it:
- Save it as `check_logs.sh`.
- Make it executable: `chmod +x check_logs.sh`.
- Run it: `sudo ./check_logs.sh`
Here is a version that sorts the table by size:

```bash
#!/bin/bash
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root or with sudo"
  exit 1
fi

total_kb=0

# Create a temporary string to hold the body of our table
body=""

# Header
printf "%-10s %-20s %-30s %-s\n" "SIZE" "ID" "IMAGE" "MOUNTS"
echo "------------------------------------------------------------------------------------------------"

for log_path in $(find /var/lib/docker/containers/ -name "*.log"); do
  kb_size=$(du -k "$log_path" | cut -f1)
  human_size=$(du -sh "$log_path" | cut -f1)
  total_kb=$((total_kb + kb_size))
  container_id=$(echo "$log_path" | cut -d'/' -f6)

  info=$(docker inspect --format '{{.Config.Image}} | {{range .Mounts}}{{.Source}} -> {{.Destination}}, {{end}}' "$container_id" 2>/dev/null)

  if [ $? -eq 0 ]; then
    image_name=$(echo "$info" | cut -d'|' -f1 | xargs)
    mounts=$(echo "$info" | cut -d'|' -f2 | xargs | sed 's/,$//')
    # Append data to the body variable
    body+="$(printf "%-10s %-20s %-30s %-s" "$human_size" "${container_id:0:12}" "$image_name" "$mounts")\n"
  else
    body+="$(printf "%-10s %-20s %-s" "$human_size" "${container_id:0:12}" "Container not found (dangling log)")\n"
  fi
done

# Print the body, sorted by the first column (size)
echo -e "$body" | sort -h | grep -v '^$'

# Final Footer
total_human=$(numfmt --to=iec --from-unit=1024 "$total_kb")
echo "------------------------------------------------------------------------------------------------"
echo "TOTAL LOG DISK USAGE: $total_human"
```
Key Changes:
- Body Accumulation: Instead of printing directly inside the `for` loop, the script saves each line into a variable called `$body`.
- `sort -h`: This sorts the collected lines numerically while respecting human-readable suffixes (K, M, G).
- `grep -v '^$'`: A small cleanup step to ensure no empty lines are printed if the loop generates any.
- Formatting: The total remains at the very bottom, unaffected by the sort, providing a clear summary.
Quick Usage Tip:
If you want to see only the top 5 largest logs, you can modify the sort line to:

```bash
echo -e "$body" | sort -hr | head -n 5
```
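The difference between `sort -h` and a plain lexical sort is easy to demonstrate on a few made-up sizes:

```shell
# sort -hr orders human-readable sizes largest first: G before M before K,
# where a plain sort would compare the strings character by character.
printf '10K small.log\n9.0M medium.log\n1.2G huge.log\n' | sort -hr | head -n 1
```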
To clean up those 961M of logs and manage the impact of a system prune, here is the strategy:
1. Cleaning up the 961M in Logs
Since logs are stored in the json-file format on the host, `docker system prune` does not actually clear the logs of containers that are still “Active” (running).
- The “Safe” Way (Truncate): To reclaim that 961M immediately without stopping any services, you can run this command to empty the log files:

```bash
find /var/lib/docker/containers/ -name "*.log" -exec truncate -s 0 {} +
```

This keeps the files but resets their size to 0 bytes.

- The “Permanent” Way: Apply the `max-size` and `max-file` limits to your `daemon.json` (as discussed earlier). However, this will only take effect when you recreate the containers.
2. Impact of `docker system prune`
Looking at your `docker system df` output, here is exactly what will happen:
- Containers (4.81GB Reclaimable):
- Impact: It will delete 169 containers (220 total – 51 active).
- Warning: If you have “stopped” containers that you intended to start later (e.g., a one-off migration script or a database you toggled off), they will be deleted. You will lose any data inside their writable layers (files not in a volume).
- Local Volumes (33.66GB Reclaimable):
- Impact: This is your biggest win. It will remove 251 unused volumes.
- Warning: `system prune` only removes volumes if you add the `--volumes` flag. If you do, ensure no “dangling” volume contains old database backups or user data you might need later.
- Images (40.54GB Reclaimable):
- Impact: It will remove “dangling” images (layers with no tags).
- Warning: Generally safe. If you use `-a`, it removes all unused images, meaning the next time you start a service, Docker will have to re-download the image from the registry.
Suggested Action Plan
- Run the log truncate command above to instantly get your 961M back.
- Preview the Cleanup first. `docker system prune` has no dry-run flag, so list the candidates yourself before confirming:

```bash
docker ps -a --filter "status=exited"   # containers a prune would remove
docker volume ls -f dangling=true       # volumes removed only with --volumes
```

- Execute the Cleanup:

```bash
# Removes stopped containers and dangling images (Safe-ish)
docker system prune

# Removes unused volumes (High impact, high reward: 33GB)
docker volume prune
```
Caution: Before pruning volumes, check if any “Active” containers rely on “Anonymous” volumes that might look unused to the system.
Truncating all logs is technically “safe” in that it won’t crash your containers or delete your data; Docker just sees a file that suddenly became 0 bytes and continues writing new logs to it.
However, updating the script for selective truncation is definitely worth it for two reasons:
- Troubleshooting: If a container is currently crashing, you need those last 100 lines of logs to see why. Truncating everything wipes your evidence.
- Compliance/Audits: Some apps (like database or security tools) might require log retention for a certain window.
Recommendation: The “Interactive” Script
Instead of a complex “select 1, 3, and 5” menu, a threshold-based approach or a confirmation prompt is usually more efficient.
Here is how you can update `check_logs.sh` to include an interactive “Clean up?” step:

```bash
# ... (Keep the previous script logic until the Total line) ...

echo "TOTAL LOG DISK USAGE: $total_human"
echo "------------------------------------------------------------------------------------------------"

read -p "Would you like to truncate (clear) logs? [all / large / none]: " choice

case "$choice" in
  all)
    echo "Truncating all logs..."
    find /var/lib/docker/containers/ -name "*.log" -exec truncate -s 0 {} +
    echo "Done."
    ;;
  large)
    read -p "Enter threshold size in MB (e.g., 50): " threshold
    echo "Truncating logs larger than ${threshold}MB..."
    # Find files, check size in MB, truncate if over threshold
    find /var/lib/docker/containers/ -name "*.log" | while read -r log; do
      size_mb=$(du -m "$log" | cut -f1)
      if [ "$size_mb" -gt "$threshold" ]; then
        truncate -s 0 "$log"
        echo "Cleared: $log"
      fi
    done
    echo "Done."
    ;;
  *)
    echo "No logs were cleared."
    ;;
esac
```
Why the “Large” option is the sweet spot:
- Safety: It leaves small, recent logs alone (keeping your debug history).
- Impact: It targets the 8.9M+ files you saw in your previous output, which are the real “space hogs.”
- Speed: You don’t have to manually pick IDs from a list of 220 containers.
One final “System Prune” warning:
Since your `docker system df` output shows 33.66GB of reclaimable volumes, truncating 961M of logs is a great start, but it’s “small potatoes” compared to your volume bloat.
To see exactly which containers will be deleted by a prune,
use the following command. It lists only stopped (exited) containers, their names, and the disk space taken by their writable layers:
```bash
docker ps -a --filter "status=exited" --size --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"
```
What a system prune will remove:
Based on your previous `docker system df` output, running `docker system prune` will have the following impact:
- 169 Stopped Containers (4.81GB): These are the containers you’ll see in the list from the command above.
- Dangling Images (40.54GB): These are layers that have no name/tag (often appearing as `<none>:<none>`). They are usually safe to delete unless you are mid-build.
- Build Cache (2.289GB): Temporary files from previous `docker build` commands.
⚠️ The “33GB Risk”
Your Local Volumes show 33.66GB reclaimable.
- Standard `system prune`: Will NOT touch these volumes.
- `system prune --volumes`: Will delete every volume not currently attached to a running container.
- Risk: If you have a database container that is currently stopped, its data volume will be permanently deleted.
Recommended Safety Check
Before you prune, check which volumes are considered “unattached” (reclaimable) with this command:
```bash
docker volume ls -f dangling=true
```
If you see volumes there that belong to stopped databases or important apps, do not use the `--volumes` flag.
