
Scaling Go Applications with Efficient Database Caching


Nowadays, it’s rare to find a service that doesn’t use a database in some capacity. Databases are essential for preserving information, and there are countless implementations to help you store structured, time-series, or other types of data. The usual way of using a database involves connecting to it from your programming language of choice, creating an API, and interacting with the data as needed.

However, database requirements can vary significantly from project to project. In some cases, the demand for data may become so great that you need to consider scaling. There are many ways to scale a database, such as increasing resources, adding primary-replica setups, or creating shards. While these methods are reliable, they can be costly.

A practical first step is to implement a simple caching layer in your application. This can delay the need for more expensive infrastructure upgrades, optimizing both performance and cost.

In this article, we’ll build a simulated financial application that tracks transaction volume information for various assets. We will consume a simulated stream of messages and save the information to a database and cache, which will help optimize repeated reads. To demonstrate the performance benefits, we’ll:

  1. Implement a simulated high-frequency message stream in Go
  2. Create a service for updating objects in a simulated database
  3. Integrate a caching middleware to optimize read operations

Initial Setup

To get started, fire up your terminal and initialize your Go project with the go mod init command (the exact command is shown right after the layout below). Just like in our previous article on speeding up HTTP responses using cache, we will follow the same golang-standards/project-layout project structure to maintain a tidy project. Create a database code file in internal/db/db.go, a message stream in internal/order/stream/stream.go, and a service that utilizes both in internal/order/manager.go.

├── go.mod
├── go.sum
└── internal
    ├── db
    │   └── db.go
    └── order
        ├── manager.go
        └── stream
            └── stream.go
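
The import paths used throughout this article assume the module is named dbcache, so the init command looks like this:

$ go mod init dbcache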

Before we begin with the streaming service, let’s quickly visit the manager.go file and create the order object that will be used throughout the project. For demonstration purposes, the order only needs the asset and quantity properties.

// manager.go

// Order represents an order in the system.
type Order struct {
	// Asset is the identifier for the asset.
	Asset string `json:"asset"`

	// Quantity is the amount of the asset in the order.
	Quantity int64 `json:"quantity"`
}

Mocking a Streaming Service

To mock our streaming service, we will create a simple structure with a single Consume method that streams a predefined list of order objects over a Go channel. Keeping the order data identical across runs lets us reliably compare processing speed with different caching setups.

// stream.go

import (
	"context"
	"dbcache/internal/order"
)

// _orders is a slice of predefined orders that will be streamed.
// This is just a mock data source for demonstration purposes.
var _orders = []order.Order{
	{
		Asset:    "BTC",
		Quantity: 11,
	},
	{
		Asset:    "ETH",
		Quantity: 42,
	},
	{
		Asset:    "ETH",
		Quantity: 33,
	},
	{
		Asset:    "BTC",
		Quantity: 15,
	},
	{
		Asset:    "BTC",
		Quantity: 8,
	},
	{
		Asset:    "ETH",
		Quantity: 29,
	},
	{
		Asset:    "BTC",
		Quantity: 34,
	},
	{
		Asset:    "BTC",
		Quantity: 65,
	},
	{
		Asset:    "ETH",
		Quantity: 5,
	},
	{
		Asset:    "BTC",
		Quantity: 71,
	},
}

// Streamer is responsible for streaming orders.
type Streamer struct{}

// NewStreamer creates a new streamer instance.
func NewStreamer() *Streamer {
	return &Streamer{}
}

// Consume returns a channel that streams orders. The channel
// is closed when all orders are sent or when the context
// is done.
func (s *Streamer) Consume(ctx context.Context) <-chan order.Order {
	// Most message broker APIs provide a way to stream messages
	// using channels. Here we simulate that by creating a channel
	// and sending predefined orders to it.
	ch := make(chan order.Order)

	go func() {
		defer close(ch)

		// Simulate streaming orders by sending them to the channel
		// one by one.
		for _, ord := range _orders {
			select {
			case <-ctx.Done():
				return
			case ch <- ord:
			}
		}
	}()

	return ch
}
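
If you want to sanity-check the streamer in isolation, a throwaway main (purely illustrative, not part of the final project) can simply range over the returned channel:

// a temporary main.go, for illustration only

import (
	"context"
	"fmt"

	"dbcache/internal/order/stream"
)

func main() {
	// The channel closes once all predefined orders have been sent,
	// so the loop ends on its own.
	for ord := range stream.NewStreamer().Consume(context.Background()) {
		fmt.Printf("order: asset=%s quantity=%d\n", ord.Asset, ord.Quantity)
	}
}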

Mocking a Database

To mock our database, once again we will create a simple Go structure, this time with two methods for fetching and saving the data. Since an in-memory implementation doesn’t reflect network traffic, we will manually delay the operations by using time.After. Depending on the location of your database and the size of the payload, a real operation can take anywhere from milliseconds to seconds, but for this example we will use a static 100 millisecond delay. It’s important to monitor your performance, as caching and other optimization decisions should be driven by actual statistics and metrics.

// db.go

import (
	"context"
	"time"
)

// DB is a mock database that stores asset volumes
// in memory.
type DB struct {
	volumes map[string]int64 // Mock in-memory storage for volumes
}

// NewDB creates a new instance of the DB.
func NewDB() *DB {
	return &DB{
		volumes: make(map[string]int64),
	}
}

// Close stops the database and releases resources.
// In a real application, this would close database connections.
func (db *DB) Close() error {
	db.volumes = nil // Clear the in-memory storage

	return nil
}

// FetchAssetVolume retrieves the volume for a given asset.
func (db *DB) FetchAssetVolume(ctx context.Context, asset string) (int64, error) {
	// In a real application, this would query the database.
	select {
	case <-ctx.Done():
		return 0, ctx.Err()
	case <-time.After(100 * time.Millisecond):
	}

	return db.volumes[asset], nil
}

// UpsertAssetVolume updates or inserts the volume for a given asset.
func (db *DB) UpsertAssetVolume(ctx context.Context, asset string, volume int64) error {
	// In a real application, this would perform an upsert operation in the database.
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(100 * time.Millisecond):
	}

	db.volumes[asset] = volume

	return nil
}
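
For comparison, a real implementation of FetchAssetVolume would run an actual query instead of sleeping. A rough sketch using the standard database/sql package (the conn field and the volumes table are assumptions made for illustration and are not part of this project) could look like this:

// a hypothetical variant of FetchAssetVolume backed by database/sql;
// it requires the "database/sql" and "errors" imports.
func (db *DB) FetchAssetVolume(ctx context.Context, asset string) (int64, error) {
	var volume int64

	// db.conn is assumed to be a *sql.DB and volumes(asset, volume)
	// a hypothetical table; neither exists in the mock above.
	err := db.conn.QueryRowContext(
		ctx,
		"SELECT volume FROM volumes WHERE asset = $1",
		asset,
	).Scan(&volume)
	if errors.Is(err, sql.ErrNoRows) {
		// Unknown assets are treated as having zero volume.
		return 0, nil
	}
	if err != nil {
		return 0, err
	}

	return volume, nil
}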

Creating an Order Manager

Now that we have our mocked services out of the way, let’s create a service that could actually exist in a real-life application. We will define Streamer and DB interfaces so we can easily drop in other implementations to replace the mocks. To process the data, the Manager structure setup should look as follows:

// manager.go

// Manager is responsible for managing orders,
// consuming them from a stream, and updating the
// asset volumes in the database.
type Manager struct {
	log      *slog.Logger
	db       DB
	streamer Streamer
}

// NewManager creates a new order manager instance.
func NewManager(streamer Streamer, db DB) *Manager {
	return &Manager{
		log:      slog.Default().With("component", "order-manager"),
		db:       db,
		streamer: streamer,
	}
}

// Streamer is an interface that defines methods for
// consuming orders from a stream.
type Streamer interface {
	// Consume should return a channel that streams orders.
	Consume(ctx context.Context) <-chan Order
}

// DB is an interface that defines methods for
// interacting with a database.
type DB interface {
	io.Closer

	// FetchAssetVolume should retrieve the volume
	// for a given asset.
	FetchAssetVolume(ctx context.Context, asset string) (int64, error)

	// UpsertAssetVolume should update or insert the volume
	// for a given asset.
	UpsertAssetVolume(ctx context.Context, asset string, volume int64) error
}

Finally, we need to combine all the components and create a method that consumes orders from the streamer service, fetches the latest volume from the database, and saves the newly updated value back to it.

// Run starts the order manager, consuming orders from the streamer
// and updating the asset volumes in the database.
func (m *Manager) Run(ctx context.Context) {
	for ord := range m.streamer.Consume(ctx) {
		vol, err := m.db.FetchAssetVolume(ctx, ord.Asset)
		if err != nil {
			m.log.With("error", err).Error("failed to fetch asset volume")
			continue
		}

		vol += ord.Quantity

		if err := m.db.UpsertAssetVolume(ctx, ord.Asset, vol); err != nil {
			m.log.With("error", err).Error("failed to upsert asset volume")
			continue
		}
	}
}

The final file implementation should look as follows:

// manager.go

import (
	"context"
	"io"
	"log/slog"
)

// Manager is responsible for managing orders,
// consuming them from a stream, and updating the
// asset volumes in the database.
type Manager struct {
	log      *slog.Logger
	db       DB
	streamer Streamer
}

// NewManager creates a new order manager instance.
func NewManager(streamer Streamer, db DB) *Manager {
	return &Manager{
		log:      slog.Default().With("component", "order-manager"),
		db:       db,
		streamer: streamer,
	}
}

// Run starts the order manager, consuming orders from the streamer
// and updating the asset volumes in the database.
func (m *Manager) Run(ctx context.Context) {
	for ord := range m.streamer.Consume(ctx) {
		vol, err := m.db.FetchAssetVolume(ctx, ord.Asset)
		if err != nil {
			m.log.With("error", err).Error("failed to fetch asset volume")
			continue
		}

		vol += ord.Quantity

		if err := m.db.UpsertAssetVolume(ctx, ord.Asset, vol); err != nil {
			m.log.With("error", err).Error("failed to upsert asset volume")
			continue
		}
	}
}

// Streamer is an interface that defines methods for
// consuming orders from a stream.
type Streamer interface {
	// Consume should return a channel that streams orders.
	Consume(ctx context.Context) <-chan Order
}

// DB is an interface that defines methods for
// interacting with a database.
type DB interface {
	io.Closer

	// FetchAssetVolume should retrieve the volume
	// for a given asset.
	FetchAssetVolume(ctx context.Context, asset string) (int64, error)

	// UpsertAssetVolume should update or insert the volume
	// for a given asset.
	UpsertAssetVolume(ctx context.Context, asset string, volume int64) error
}

// Order represents an order in the system.
type Order struct {
	// Asset is the identifier for the asset.
	Asset string `json:"asset"`

	// Quantity is the amount of the asset in the order.
	Quantity int64 `json:"quantity"`
}

The last thing we need to do to make this application work is to create the main file. Following our project structure, create a file at cmd/main.go. In main.go, we’ll start up the manager service and set things up so it gracefully shuts down once the streaming is done or a termination signal is received. We will also add duration logging so we can see how long it takes to process our orders with and without cache.

// main.go

import (
	"context"
	"dbcache/internal/db"
	"dbcache/internal/order"
	"dbcache/internal/order/stream"
	"log/slog"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	ctx, cancel := signal.NotifyContext(
		context.Background(),
		syscall.SIGINT,
		syscall.SIGTERM,
	)
	defer cancel()

	tstamp := time.Now()
	
	dbc := db.NewDB()

	order.NewManager(
		stream.NewStreamer(),
		dbc,
	).Run(ctx)
	
	dbc.Close()

	slog.With("duration", time.Since(tstamp)).Info("shutdown complete")
}

Here’s what’s happening: we use signal.NotifyContext to listen for process termination signals like CTRL+C, which means that when you tell the app to quit, it doesn’t just vanish; it wraps things up neatly. The manager service runs while staying alert for shutdown signals through the context. When it’s time to terminate the process, the context is cancelled and the manager gracefully exits.

That’s it. It’s time to test our application. Since we have a finite number of orders for the program to process, it will exit automatically once it finishes. Run it using:

$ go run cmd/main.go
2025/07/05 20:07:50 INFO shutdown complete duration=2.004040484s

Now we have a baseline for our application: ten orders, each triggering one fetch and one upsert at 100 milliseconds apiece, add up to roughly 2 seconds. In the next section, we’ll build a simple database wrapper to cache our recently used asset volumes.

Making the Database Faster with Caching

To make our database faster, let’s use a proxy pattern and create a cache for the database layer. This way, we can avoid fetching asset volumes from the database when they were recently updated. Since the cache will wrap the database, let’s add a new package named cache to the db folder. Add a file named cache.go, so the final structure of the project looks like this:

├── cmd
│   └── main.go
├── go.mod
├── go.sum
└── internal
    ├── db
    │   ├── cache
    │   │   └── cache.go
    │   └── db.go
    └── order
        ├── manager.go
        └── stream
            └── stream.go

Let’s create a Cache structure type, a constructor function, and a method to stop the cache’s background work. For this example, we’ll use an automatic cleaner to clear out expired cache items, provided by the github.com/jellydator/ttlcache package. You can install it with:
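
$ go get github.com/jellydator/ttlcache/v3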

Since this structure is a wrapper around our database package, we want to accept a database object and an expiration time for our cache items as parameters in the NewCache function.

// cache.go

import (
	"context"
	"dbcache/internal/order"
	"time"

	"github.com/jellydator/ttlcache/v3" // import the ttlcache package
)

// Cache is a caching wrapper around
// the database layer.
type Cache struct {
	db          order.DB
	volumeCache *ttlcache.Cache[string, int64]
}

// NewCache creates a new instance of the cache.
func NewCache(
	db order.DB,
	expiration time.Duration,
) *Cache {
	c := ttlcache.New(
		ttlcache.WithTTL[string, int64](expiration),
	)

	go c.Start()

	return &Cache{
		db:          db,
		volumeCache: c,
	}
}

// Close stops the cache and releases resources.
func (c *Cache) Close() error {
	c.volumeCache.Stop()

	return nil
}

Now for the main caching logic. Let’s create the same FetchAssetVolume and UpsertAssetVolume methods as in the database package so that we implement the order.DB interface.
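
Once both methods are in place, an optional compile-time assertion (a small addition that isn’t part of the original project) can verify that the Cache type satisfies order.DB:

// cache.go

// Compile-time check that *Cache implements the order.DB interface.
var _ order.DB = (*Cache)(nil)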

In the FetchAssetVolume method, check the cache for the asset volume first; only if it isn’t there do we fall back to the underlying database and store the response in the cache.

// cache.go

// FetchAssetVolume retrieves the volume for a given asset.
func (c *Cache) FetchAssetVolume(ctx context.Context, asset string) (int64, error) {
	if item := c.volumeCache.Get(asset); item != nil {
		return item.Value(), nil
	}

	volume, err := c.db.FetchAssetVolume(ctx, asset)
	if err != nil {
		return 0, err
	}

	c.volumeCache.Set(asset, volume, ttlcache.DefaultTTL)

	return volume, nil
}

In the UpsertAssetVolume method, save the provided value to the database first, and only if there are no errors, store it in the cache. This ensures that our in-memory cache stays in sync with the database.

// cache.go

// UpsertAssetVolume updates or inserts the volume for a given asset.
func (c *Cache) UpsertAssetVolume(ctx context.Context, asset string, volume int64) error {
	err := c.db.UpsertAssetVolume(ctx, asset, volume)
	if err != nil {
		return err
	}

	// We set the volume after the upsert operation to ensure
	// that cache is in sync with the database.
	c.volumeCache.Set(asset, volume, ttlcache.DefaultTTL)

	return nil
}

We’re done. The final cache.go file should look like this:

// cache.go

import (
	"context"
	"dbcache/internal/order"
	"time"

	"github.com/jellydator/ttlcache/v3"
)

// Cache is a caching wrapper around
// the database layer.
type Cache struct {
	db          order.DB
	volumeCache *ttlcache.Cache[string, int64]
}

// NewCache creates a new instance of the cache.
func NewCache(
	db order.DB,
	expiration time.Duration,
) *Cache {
	c := ttlcache.New(
		ttlcache.WithTTL[string, int64](expiration),
	)

	go c.Start()

	return &Cache{
		db:          db,
		volumeCache: c,
	}
}

// Close stops the cache and releases resources.
func (c *Cache) Close() error {
	c.volumeCache.Stop()

	return nil
}

// FetchAssetVolume retrieves the volume for a given asset.
func (c *Cache) FetchAssetVolume(ctx context.Context, asset string) (int64, error) {
	if item := c.volumeCache.Get(asset); item != nil {
		return item.Value(), nil
	}

	volume, err := c.db.FetchAssetVolume(ctx, asset)
	if err != nil {
		return 0, err
	}

	c.volumeCache.Set(asset, volume, ttlcache.DefaultTTL)

	return volume, nil
}

// UpsertAssetVolume updates or inserts the volume for a given asset.
func (c *Cache) UpsertAssetVolume(ctx context.Context, asset string, volume int64) error {
	err := c.db.UpsertAssetVolume(ctx, asset, volume)
	if err != nil {
		return err
	}

	// We set the volume after the upsert operation to ensure
	// that cache is in sync with the database.
	c.volumeCache.Set(asset, volume, ttlcache.DefaultTTL)

	return nil
}

Now, let’s get back to the main.go file and use the cache implementation. To make our cache testing easier, let’s add a flag for setting the expiration time that our Cache accepts as a parameter. If the expiration time is not zero, we will wrap our database object using the newly created cache.NewCache function and pass it to the manager as the database.

// main.go

import (
	"context"
	"dbcache/internal/db"
	"dbcache/internal/db/cache"
	"dbcache/internal/order"
	"dbcache/internal/order/stream"
	"flag"
	"log/slog"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	var rawExpiration string

	flag.StringVar(&rawExpiration, "exp", "0ms", "Cache expiration duration")
	flag.Parse()

	expiration, err := time.ParseDuration(rawExpiration)
	if err != nil {
		slog.With("error", err).Error("failed to parse expiration duration")
		return
	}

	ctx, cancel := signal.NotifyContext(
		context.Background(),
		syscall.SIGINT,
		syscall.SIGTERM,
	)
	defer cancel()

	tstamp := time.Now()

	var dbc order.DB = db.NewDB()

	// In case expiration is not provided, we assume
	// that caching is disabled.
	if expiration > 0 {
		dbc = cache.NewCache(
			dbc,
			expiration,
		)
	}

	order.NewManager(
		stream.NewStreamer(),
		dbc,
	).Run(ctx)

	dbc.Close()

	slog.With("duration", time.Since(tstamp)).Info("shutdown complete")
}

And that’s it! It’s time to test. Let’s first run it as we did before to confirm that the completion time hasn’t changed:

$ go run cmd/main.go
2025/07/05 20:36:44 INFO shutdown complete duration=2.003807307s

Since we didn’t provide an expiration time, the execution duration remains the same. Now, let’s try a 200 millisecond cache expiration.

$ go run cmd/main.go -exp=200ms
2025/07/05 20:37:57 INFO shutdown complete duration=1.703427644s

As expected, we skipped some of the database reads by hitting the cache: the roughly 300 millisecond improvement corresponds to three reads served from memory instead of the database. If we increase the expiration even more, the time continues to decrease.

$ go run cmd/main.go -exp=500ms
2025/07/05 20:38:26 INFO shutdown complete duration=1.202475988s

With a 500 millisecond expiration, nearly every repeated read is served from the cache, and only the first read per asset reaches the database. However, we must not use an extremely high value, as it comes at the expense of memory usage. While this tutorial isn’t going to go into that, it’s important to measure the performance of your application and find the middle ground between performance and memory usage.
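
If memory does become a concern, ttlcache can also bound the number of stored entries. The sketch below assumes the WithCapacity option is available in your ttlcache v3 version, so check the package documentation before relying on it:

c := ttlcache.New(
	ttlcache.WithTTL[string, int64](expiration),
	// Cap the cache at 1,000 entries; assuming WithCapacity is
	// available, the oldest entries are evicted once the limit is hit.
	ttlcache.WithCapacity[string, int64](1_000),
)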

Conclusion

You’ve just built a simple yet powerful cache for your Go database layer. What was once an expensive database call for every read operation is now much faster and more efficient, thanks to ttlcache’s cache implementation.

This isn’t just about speed - it’s about working smarter. With caching, you reduce database load, save resources, and keep things running smoothly as your app scales. Plus, your code remains clean and easy to extend.

This is just the tip of the iceberg. You can experiment with cache duration, add invalidation, or expand caching to other parts of your app. The choice is yours.
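
As one example, a minimal invalidation helper on the Cache wrapper (a sketch, not part of the example repository) could drop a single asset so that the next read goes back to the database:

// InvalidateAsset removes a single asset's cached volume, forcing
// the next FetchAssetVolume call to hit the underlying database.
func (c *Cache) InvalidateAsset(asset string) {
	c.volumeCache.Delete(asset)
}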

Want to see the full code or try it out yourself? Check it out here: https://github.com/jellydator/ttlcache/tree/v3/examples/dbcache

Thanks for reading and happy coding!