Add production features: slog adapter, scan helpers, slow query logging, pool stats, tracer passthrough, test tx isolation

- slog.go: SlogLogger adapts *slog.Logger to dbx.Logger interface
- scan.go: Collect[T] and CollectOne[T] generic helpers using pgx.RowToStructByName
- cluster.go: slow query logging via Config.SlowQueryThreshold (Warn level in queryEnd)
- stats.go: PoolStats with Cluster.Stats() aggregating pool stats across all nodes
- config.go/node.go: NodeConfig.Tracer passthrough for pgx.QueryTracer (OpenTelemetry)
- options.go: WithSlowQueryThreshold and WithTracer functional options
- dbxtest/tx.go: RunInTx runs callback in always-rolled-back transaction for test isolation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 00:19:26 +03:00
parent 7d25e1b73e
commit 2c9af28548
16 changed files with 495 additions and 29 deletions


@@ -50,6 +50,9 @@ cluster.RunTx(ctx, func(ctx context.Context, tx pgx.Tx) error {
| `healthChecker` | Background goroutine that pings all nodes on an interval. |
| `Querier` injection | `InjectQuerier` / `ExtractQuerier` — context-based Querier for service layers. |
| `MetricsHook` | Optional callbacks: query start/end, retry, node up/down, replica fallback. |
| `SlogLogger` | Adapts `*slog.Logger` to the `dbx.Logger` interface. |
| `Collect`/`CollectOne` | Generic scan helpers — read rows directly into structs via `pgx.RowToStructByName`. |
| `PoolStats` | Aggregate pool statistics across all nodes via `cluster.Stats()`. |
## Routing
@@ -159,6 +162,60 @@ dbx.PgErrorCode(err) // extract raw PG error code
Sentinel errors: `ErrNoHealthyNode`, `ErrClusterClosed`, `ErrRetryExhausted`.
## slog integration
```go
cluster, _ := dbx.NewCluster(ctx, dbx.Config{
Master: dbx.NodeConfig{DSN: "postgres://..."},
Logger: dbx.NewSlogLogger(slog.Default()),
})
```
## Scan helpers
Generic functions that eliminate row scanning boilerplate:
```go
type User struct {
ID int `db:"id"`
Name string `db:"name"`
}
users, err := dbx.Collect[User](ctx, cluster, "SELECT id, name FROM users WHERE active = $1", true)
user, err := dbx.CollectOne[User](ctx, cluster, "SELECT id, name FROM users WHERE id = $1", 42)
// returns pgx.ErrNoRows if not found
```
## Slow query logging
```go
cluster, _ := dbx.NewCluster(ctx, dbx.Config{
Master: dbx.NodeConfig{DSN: "postgres://..."},
Logger: dbx.NewSlogLogger(slog.Default()),
SlowQueryThreshold: 100 * time.Millisecond,
})
// queries exceeding threshold are logged at Warn level
```
## Pool stats
```go
stats := cluster.Stats()
fmt.Println(stats.TotalConns, stats.IdleConns, stats.AcquireCount)
// per-node stats: stats.Nodes["master"], stats.Nodes["replica-1"]
```
## OpenTelemetry / pgx tracer
Pass any `pgx.QueryTracer` (e.g., `otelpgx.NewTracer()`) to instrument all queries:
```go
dbx.ApplyOptions(&cfg, dbx.WithTracer(otelpgx.NewTracer()))
```
Or set per-node via `NodeConfig.Tracer`.
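A sketch of the per-node form, reusing the `otelpgx` tracer from the example above (field layout beyond `DSN` and `Tracer` is assumed):

```go
cfg := dbx.Config{
	Master: dbx.NodeConfig{
		DSN:    "postgres://...",
		Tracer: otelpgx.NewTracer(), // any pgx.QueryTracer works here
	},
}
```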
## dbxtest helpers
The `dbxtest` package provides test helpers:
@@ -171,6 +228,18 @@ func TestMyRepo(t *testing.T) {
}
```
### Transaction isolation for tests
```go
func TestCreateUser(t *testing.T) {
c := dbxtest.NewTestCluster(t)
dbxtest.RunInTx(t, c, func(ctx context.Context, q dbx.Querier) {
// all changes are rolled back after fn returns
_, _ = q.Exec(ctx, "INSERT INTO users (name) VALUES ($1)", "test")
})
}
```
Set the `DBX_TEST_DSN` env var to override the default DSN (`postgres://postgres:postgres@localhost:5432/dbx_test?sslmode=disable`).
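For example, to point the test helpers at a different database for a single run (hypothetical DSN):

```shell
DBX_TEST_DSN="postgres://app:secret@localhost:5433/app_test?sslmode=disable" go test ./...
```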
## Requirements