Go Hosting · 2025

Go Application Hosting: Deploy Go Apps to Production with Git Push

Updated April 2025 · 8 min read

Go binary deployment with automatic builds, environment variables, and custom domains.


Go (Golang) Cloud Hosting: Deploying Go Applications to Production

Go applications have characteristics that make them unusually well-suited to containerized cloud hosting: they compile to static binaries with minimal runtime dependencies, start in milliseconds, and have predictable memory usage. A Go HTTP server that would require 512MB on Node.js or Python often runs comfortably in 64-128MB. This guide covers deploying Go applications to production correctly.

Why Go Applications Are Easy to Deploy

Other languages require managing runtime versions, virtual environments, or complex dependency trees at the system level. Go's deployment story is simpler:

# Compile for Linux from any OS
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o server ./cmd/server

# Result: a single binary with no external dependencies
# That binary is your entire application

The compiled binary includes the Go runtime, your application code, and all dependencies. No separate installation step on the server. No version manager. No npm install or pip install equivalent at deploy time.

This single-binary model is also why Go services often start 10-50x faster than equivalent Node.js or Python services and use significantly less memory: there is no interpreter to boot and no module tree to load at startup.

The Minimal Dockerfile

# Multi-stage build
FROM golang:1.22-alpine AS builder

WORKDIR /app

# Download dependencies first (cache layer)
COPY go.mod go.sum ./
RUN go mod download

# Copy source and build
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build \
    -ldflags="-w -s" \
    -o server \
    ./cmd/server

# Production image
FROM scratch

# Copy SSL certificates for HTTPS client calls
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

# Copy the binary
COPY --from=builder /app/server /server

EXPOSE 8080

ENTRYPOINT ["/server"]

The scratch base image contains nothing — no shell, no OS utilities, no package manager. The production container is the binary and SSL certs only. Image size is typically 5-20MB.

The -ldflags="-w -s" flags strip debug information and symbol table from the binary, reducing its size by 30-40%.

When you need a base image instead of scratch:
- Your application uses cgo (links against C libraries)
- You need timezone data (time.LoadLocation beyond UTC)
- You need to run external programs (exec.Command)

For pure-Go binaries that just need CA certificates and timezone data, use gcr.io/distroless/static-debian11 — a minimal Debian-based image with no shell and no package manager. Binaries built with cgo need gcr.io/distroless/base-debian11, which adds glibc. And if you must run external programs, use a small full distribution such as alpine, since distroless images ship no shell.

FROM gcr.io/distroless/static-debian11

COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]

Writing a Production-Ready HTTP Server

package main

import (
    "context"
    "encoding/json"
    "log/slog"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    mux := http.NewServeMux()
    mux.HandleFunc("/health", healthHandler)
    mux.HandleFunc("/api/v1/", apiHandler)

    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }

    server := &http.Server{
        Addr:         ":" + port,
        Handler:      mux,
        ReadTimeout:  15 * time.Second,
        WriteTimeout: 15 * time.Second,
        IdleTimeout:  60 * time.Second,
    }

    // Start server in goroutine
    go func() {
        logger.Info("server starting", "port", port)
        if err := server.ListenAndServe(); err != http.ErrServerClosed {
            logger.Error("server error", "err", err)
            os.Exit(1)
        }
    }()

    // Graceful shutdown on signal
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGTERM, syscall.SIGINT)
    <-quit

    logger.Info("shutting down server")
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    if err := server.Shutdown(ctx); err != nil {
        logger.Error("shutdown error", "err", err)
    }
    logger.Info("server stopped")
}

func healthHandler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}

Key production patterns here:

Explicit timeouts: Without ReadTimeout and WriteTimeout, slow clients can hold connections open indefinitely, eventually exhausting your connection pool.

Graceful shutdown: When the container receives SIGTERM (what orchestration platforms send when stopping a container), the server stops accepting new connections and waits up to 30 seconds for in-flight requests to complete. Without this, active requests are killed abruptly.

PORT from environment: Cloud platforms inject PORT dynamically. Never hardcode port numbers.

Structured JSON logging: slog with JSON handler produces machine-parseable logs that work with any log aggregation system.

Database Configuration

PostgreSQL with pgx

import (
    "context"
    "os"
    "time"

    "github.com/jackc/pgx/v5/pgxpool"
)

func newDB() (*pgxpool.Pool, error) {
    config, err := pgxpool.ParseConfig(os.Getenv("DATABASE_URL"))
    if err != nil {
        return nil, err
    }

    // Connection pool sizing
    config.MaxConns = 10
    config.MinConns = 2
    config.MaxConnLifetime = time.Hour
    config.MaxConnIdleTime = 30 * time.Minute

    return pgxpool.NewWithConfig(context.Background(), config)
}

pgxpool manages a pool of database connections. Connections are expensive to create — a pool reuses them across goroutines. Size the pool based on your database's max_connections setting and how many application instances you run.

Internal Database Hostname

When your database runs on the same cloud platform as your Go application:

DATABASE_URL=postgresql://user:password@db:5432/myapp

The hostname db resolves over the internal network. Query round trips are sub-millisecond. This matters for Go applications especially — Go's efficiency means the database is often the bottleneck, not the application code.

Redis with go-redis

import (
    "os"

    "github.com/redis/go-redis/v9"
)

func newRedis() *redis.Client {
    return redis.NewClient(&redis.Options{
        Addr:         os.Getenv("REDIS_ADDR"),  // e.g., "redis:6379"
        Password:     os.Getenv("REDIS_PASSWORD"),
        DB:           0,
        PoolSize:     10,
        MinIdleConns: 2,
    })
}

Configuration Management

Go applications should read all configuration from environment variables:

package config

import (
    "fmt"
    "os"
    "strconv"
    "time"
)

type Config struct {
    Port            string
    DatabaseURL     string
    RedisAddr       string
    JWTSecret       string
    Environment     string
    RequestTimeout  time.Duration
}

func Load() (*Config, error) {
    cfg := &Config{
        Port:        getEnv("PORT", "8080"),
        DatabaseURL: requireEnv("DATABASE_URL"),
        RedisAddr:   getEnv("REDIS_ADDR", "redis:6379"),
        JWTSecret:   requireEnv("JWT_SECRET"),
        Environment: getEnv("APP_ENV", "production"),
    }

    timeoutSecs, err := strconv.Atoi(getEnv("REQUEST_TIMEOUT_SECS", "30"))
    if err != nil {
        return nil, fmt.Errorf("invalid REQUEST_TIMEOUT_SECS: %w", err)
    }
    cfg.RequestTimeout = time.Duration(timeoutSecs) * time.Second

    return cfg, nil
}

func requireEnv(key string) string {
    val := os.Getenv(key)
    if val == "" {
        panic(fmt.Sprintf("required environment variable %s is not set", key))
    }
    return val
}

func getEnv(key, defaultValue string) string {
    if val := os.Getenv(key); val != "" {
        return val
    }
    return defaultValue
}

requireEnv panics at startup if a required variable isn't set. This is intentional — it's better to fail immediately at startup with a clear error than to fail at runtime when a user's request first touches the missing value.

Health Check Endpoint

type HealthResponse struct {
    Status   string            `json:"status"`
    Checks   map[string]string `json:"checks"`
    Version  string            `json:"version"`
    Uptime   string            `json:"uptime"`
}

var startTime = time.Now()

func healthHandler(db *pgxpool.Pool, redis *redis.Client) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        checks := map[string]string{}
        healthy := true

        // Check database
        ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
        defer cancel()
        if err := db.Ping(ctx); err != nil {
            checks["database"] = "disconnected: " + err.Error()
            healthy = false
        } else {
            checks["database"] = "connected"
        }

        // Check Redis, reusing the same 2-second timeout
        if err := redis.Ping(ctx).Err(); err != nil {
            checks["redis"] = "disconnected: " + err.Error()
            healthy = false
        } else {
            checks["redis"] = "connected"
        }

        resp := HealthResponse{
            Checks:  checks,
            Version: os.Getenv("APP_VERSION"),
            Uptime:  time.Since(startTime).Round(time.Second).String(),
        }

        w.Header().Set("Content-Type", "application/json")
        if healthy {
            resp.Status = "healthy"
            w.WriteHeader(http.StatusOK)
        } else {
            resp.Status = "degraded"
            w.WriteHeader(http.StatusServiceUnavailable)
        }

        json.NewEncoder(w).Encode(resp)
    }
}

Background Workers

Go goroutines are cheap — running background jobs in the same process as your HTTP server is common and correct:

func startWorkers(ctx context.Context, db *pgxpool.Pool) {
    // Email sender — processes queued emails every 30 seconds
    go func() {
        ticker := time.NewTicker(30 * time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
                if err := processEmailQueue(ctx, db); err != nil {
                    slog.Error("email queue processing failed", "err", err)
                }
            }
        }
    }()

    // Data cleanup — runs daily at midnight
    go func() {
        for {
            now := time.Now()
            next := time.Date(now.Year(), now.Month(), now.Day()+1, 0, 0, 0, 0, now.Location())
            select {
            case <-ctx.Done():
                return
            case <-time.After(time.Until(next)):
                if err := cleanupExpiredData(ctx, db); err != nil {
                    slog.Error("cleanup failed", "err", err)
                }
            }
        }
    }()
}

Pass a context.Context derived from the shutdown signal to workers — when the server receives SIGTERM, canceling the context signals all workers to stop cleanly.

Environment Variables for Production

# Application
PORT=8080
APP_ENV=production
APP_VERSION=1.2.3

# Database (internal network)
DATABASE_URL=postgresql://user:password@db:5432/myapp

# Redis (internal network)
REDIS_ADDR=redis:6379

# Secrets
JWT_SECRET=64-char-random-string-here
API_KEY=your-api-key

# Optional tuning
GOMAXPROCS=2          # Limit to container CPU allocation
GOMEMLIMIT=900MiB     # Soft memory limit (set to ~90% of container RAM)

GOMEMLIMIT (Go 1.19+) sets a soft memory limit. The Go runtime will try to keep memory usage below this value by running GC more aggressively. Set it slightly below your container's RAM limit to prevent OOM kills.

GOMAXPROCS defaults to the number of CPUs the runtime detects. In a container, that is often the host's CPU count rather than the container's CPU quota. Setting it explicitly to the container's allocation keeps the runtime from running more OS threads than the container can execute in parallel, avoiding scheduler overhead and CPU throttling.

Monitoring Go Applications

Go exposes runtime metrics through expvar and profiling through net/http/pprof:

import "net/http/pprof"

// Register pprof handlers on your own mux (protect these routes in production)
mux.HandleFunc("/debug/pprof/", pprof.Index)
mux.HandleFunc("/debug/pprof/profile", pprof.Profile)

Key metrics to watch in production:
- Memory (RSS): Should be stable over time. Gradual growth indicates a memory leak.
- Goroutine count: Should correlate with request concurrency. Unbounded growth indicates goroutine leak.
- GC pause time: Available via runtime metrics. High GC pause time (>10ms) under sustained load indicates memory pressure.
- Error rate: Track 5xx responses from your reverse proxy.

Go applications rarely need complex monitoring — the binary is stable, starts fast, and handles failures gracefully. What you're watching for are the edge cases: connection pool exhaustion under unexpected load, memory growth from a caching bug, or goroutine leaks in long-running background workers.

The Go Production Advantage

Go's production profile is genuinely different from other languages: tiny container images, fast startup, predictable memory usage, and no runtime version management. A Go API service that would require 512MB RAM on Node.js often runs in 64MB. That resource efficiency translates directly to lower hosting costs and more predictable scaling behavior.

The deployment model — compile once, run anywhere the Linux ABI matches — means Go containers are among the most portable and reproducible production artifacts in common use. What runs in your CI/CD test is exactly what runs in production.

Deploy Your App with Git Push

Automatic builds, environment variables, live logs, rollback, and custom domains. No server management required.

