Library API: Using the Engine Programmatically
Use pgsquash-engine as a Go library to integrate CAPYSQUASH's consolidation technology into your custom tools and systems.
Most users should start with CAPYSQUASH Platform for the easiest experience with one-click squashing and team features. This library API is for developers who need to build custom tooling or integrate migration consolidation into existing systems. For command-line usage, see capysquash-cli.
🎯 Quick Start
Installation
go get github.com/CAPYSQUASH/pgsquash-engine
Basic Usage
package main
import (
"fmt"
"log"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/engine"
)
func main() {
// Squash migrations with default configuration
result, err := engine.SquashDirectory("./migrations", nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Squashed %d files\n", result.FilesProcessed)
fmt.Printf("Consolidated %d objects\n", result.ObjectsConsolidated)
fmt.Println(result.SQL)
}
📦 Available Packages
pgsquash-engine exports 4 public packages:
| Package | Purpose | Import Path |
|---|---|---|
| pkg/cli | CLI execution | github.com/CAPYSQUASH/pgsquash-engine/pkg/cli |
| pkg/engine | Library API | github.com/CAPYSQUASH/pgsquash-engine/pkg/engine |
| pkg/plugins | Plugin management | github.com/CAPYSQUASH/pgsquash-engine/pkg/plugins |
| pkg/utils | Logging utilities | github.com/CAPYSQUASH/pgsquash-engine/pkg/utils |
🔧 pkg/engine - Core Library API
Functions
SquashDirectory
Squash all migrations in a directory.
func SquashDirectory(directory string, config *Config) (*SquashResult, error)
Parameters:
- directory: Path to migrations directory
- config: Configuration options (use nil for defaults)
Returns:
- SquashResult: Contains optimized SQL and statistics
- error: Any errors encountered
Example:
result, err := engine.SquashDirectory("./migrations", &engine.Config{
SafetyLevel: engine.Standard,
EnableStreaming: true,
Verbose: true,
})
SquashFiles
Squash specific migration files.
func SquashFiles(migrations map[int]string, config *Config) (*SquashResult, error)
Parameters:
- migrations: Map of migration number to SQL content
- config: Configuration options
Example:
migrations := map[int]string{
1: "CREATE TABLE users (id SERIAL PRIMARY KEY);",
2: "ALTER TABLE users ADD COLUMN email TEXT;",
3: "CREATE INDEX idx_users_email ON users(email);",
}
result, err := engine.SquashFiles(migrations, nil)
AnalyzeDirectory
Analyze migrations without modifying them.
func AnalyzeDirectory(directory string, config *Config) (*AnalysisResult, error)
Returns:
- AnalysisResult: Analysis statistics and redundancy detection
- error: Any errors encountered
Example:
analysis, err := engine.AnalyzeDirectory("./migrations", nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Files: %d\n", analysis.TotalFiles)
fmt.Printf("Objects: %d\n", analysis.TotalObjects)
fmt.Printf("Redundancies: %d\n", len(analysis.Redundancies))
for _, r := range analysis.Redundancies {
fmt.Printf("- %s: %s\n", r.ObjectName, r.Description)
}
DefaultConfig
Get default configuration.
func DefaultConfig() *Config
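Example (a minimal sketch: start from the defaults and override only the fields you need):
cfg := engine.DefaultConfig()
cfg.SafetyLevel = engine.Standard // override individual fields before use
cfg.EnableStreaming = true
result, err := engine.SquashDirectory("./migrations", cfg)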
Types
Config
Configuration for squashing operations.
type Config struct {
SafetyLevel SafetyLevel // Conservative, Standard, Aggressive, Paranoid
OutputFormat OutputFormat // SQL, JSON, YAML
EnableStreaming bool // Enable streaming for large migrations
MemoryLimitMB int // Memory limit in MB
EnableAI bool // Enable AI-powered analysis
Verbose bool // Verbose logging
}
Safety Levels:
const (
Conservative SafetyLevel = "conservative" // Safest, ~50-60% reduction
Standard SafetyLevel = "standard" // Recommended, ~70-80% reduction
Aggressive SafetyLevel = "aggressive" // Maximum optimization, ~85-95% reduction
Paranoid SafetyLevel = "paranoid" // Requires DB connection, ~20-30% reduction
)
SquashResult
Result of a squashing operation.
type SquashResult struct {
SQL string // Optimized SQL output
Warnings []string // Any warnings generated
FilesProcessed int // Number of files processed
ObjectsConsolidated int // Number of objects consolidated
ProcessingTime string // Time taken to process
}
AnalysisResult
Result of an analysis operation.
type AnalysisResult struct {
TotalFiles int // Total migration files
TotalStatements int // Total SQL statements
TotalObjects int // Total database objects
Redundancies []Redundancy // Detected redundancies
ObjectsByType map[string]int // Objects grouped by type
Warnings []string // Analysis warnings
}
Redundancy
Detected redundancy in migrations.
type Redundancy struct {
ObjectName string // Name of the redundant object
ObjectType string // Type (TABLE, INDEX, etc.)
Description string // Description of the redundancy
FileNumbers []int // Migration files involved
}
🎨 Usage Patterns
Pattern 1: Basic Squashing
package main
import (
"fmt"
"log"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/engine"
)
func main() {
result, err := engine.SquashDirectory("./migrations", nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("☑ Squashed %d files into optimized SQL\n", result.FilesProcessed)
fmt.Println(result.SQL)
}
Pattern 2: Custom Configuration
config := &engine.Config{
SafetyLevel: engine.Conservative,
EnableStreaming: true,
MemoryLimitMB: 512,
EnableAI: true,
Verbose: true,
}
result, err := engine.SquashDirectory("./migrations", config)
if err != nil {
log.Fatal(err)
}
// Handle warnings
for _, warning := range result.Warnings {
fmt.Printf("⚠️ %s\n", warning)
}
Pattern 3: Analysis Only
analysis, err := engine.AnalyzeDirectory("./migrations", nil)
if err != nil {
log.Fatal(err)
}
fmt.Printf("📊 Analysis Results:\n")
fmt.Printf(" Files: %d\n", analysis.TotalFiles)
fmt.Printf(" Statements: %d\n", analysis.TotalStatements)
fmt.Printf(" Objects: %d\n", analysis.TotalObjects)
if len(analysis.Redundancies) > 0 {
fmt.Printf("\n🔍 Found %d redundancies:\n", len(analysis.Redundancies))
for _, r := range analysis.Redundancies {
fmt.Printf(" - %s (%s): %s\n", r.ObjectName, r.ObjectType, r.Description)
}
}
Pattern 4: Specific Files
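This pattern assumes a small readFile helper that loads each migration file as a string; a minimal sketch using the standard library might look like:
func readFile(path string) string {
data, err := os.ReadFile(path) // read the migration file from disk
if err != nil {
log.Fatalf("failed to read %s: %v", path, err)
}
return string(data)
}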
migrations := map[int]string{
1: readFile("001_initial.sql"),
2: readFile("002_add_users.sql"),
5: readFile("005_add_indexes.sql"),
}
result, err := engine.SquashFiles(migrations, &engine.Config{
SafetyLevel: engine.Standard,
})
Pattern 5: Custom Migration Tool
package main
import (
"fmt"
"os"
"path/filepath"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/engine"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/utils"
)
func main() {
// Setup logging
logger := utils.NewLogger(utils.LogLevelInfo, os.Stdout)
utils.SetDefaultLogger(logger)
// Analyze first
logger.Info("Analyzing migrations...")
analysis, err := engine.AnalyzeDirectory("./migrations", nil)
if err != nil {
logger.Error("Analysis failed: %v", err)
os.Exit(1)
}
logger.Info("Found %d files with %d objects", analysis.TotalFiles, analysis.TotalObjects)
// Squash with custom config
logger.Info("Squashing migrations...")
result, err := engine.SquashDirectory("./migrations", &engine.Config{
SafetyLevel: engine.Standard,
Verbose: true,
})
if err != nil {
logger.Error("Squashing failed: %v", err)
os.Exit(1)
}
// Write output
outputPath := filepath.Join("./squashed", "001_consolidated.sql")
if err := os.WriteFile(outputPath, []byte(result.SQL), 0644); err != nil {
logger.Error("Failed to write output: %v", err)
os.Exit(1)
}
logger.Info("☑ Success! Processed %d files in %s", result.FilesProcessed, result.ProcessingTime)
}
🔌 pkg/cli - CLI Execution API
Execute the full CLI programmatically.
package main
import (
"os"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/cli"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/plugins"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/utils"
)
func main() {
// Setup logging
logger := utils.NewLogger(utils.LogLevelInfo, os.Stdout)
utils.SetDefaultLogger(logger)
// Configure CLI
cli.SetVersionInfo("0.9.7", "2025-10-21", "abc123")
cli.SetBrandName("capysquash")
// Register plugins
if err := plugins.RegisterDefault(); err != nil {
logger.Warn("Plugin registration warning: %v", err)
}
// Execute CLI
if err := cli.Execute(); err != nil {
logger.Error("CLI execution failed: %v", err)
os.Exit(1)
}
}
Use cases:
- Build custom branded CLIs (like capysquash-cli)
- Integrate CLI into larger applications
- CI/CD automation with full CLI features
🔧 pkg/plugins - Plugin Management
import "github.com/CAPYSQUASH/pgsquash-engine/pkg/plugins"
// Register all built-in plugins
if err := plugins.RegisterDefault(); err != nil {
log.Printf("Warning: %v", err)
}
Built-in plugins:
- Supabase: Auth schema, RLS policies, function patterns
- Clerk: JWT v2 table detection and preservation
- Prisma: ORM-specific patterns
- Drizzle: ORM-specific patterns
📝 pkg/utils - Logging
import "github.com/CAPYSQUASH/pgsquash-engine/pkg/utils"
// Create logger
logger := utils.NewLogger(utils.LogLevelInfo, os.Stdout)
// Set as default
utils.SetDefaultLogger(logger)
// Use logger
logger.Info("Processing migrations...")
logger.Warn("Found potential issue: %s", issue)
logger.Error("Failed: %v", err)Log Levels:
const (
LogLevelDebug utils.LogLevel = iota
LogLevelInfo
LogLevelWarn
LogLevelError
LogLevelFatal
)
🎯 Complete Example: Custom Migration Optimizer
package main
import (
"fmt"
"os"
"path/filepath"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/engine"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/plugins"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/utils"
)
func main() {
// Setup
logger := utils.NewLogger(utils.LogLevelInfo, os.Stdout)
utils.SetDefaultLogger(logger)
// Register plugins
if err := plugins.RegisterDefault(); err != nil {
logger.Warn("Plugin registration: %v", err)
}
// Configuration
config := &engine.Config{
SafetyLevel: engine.Standard,
EnableStreaming: true,
MemoryLimitMB: 512,
Verbose: true,
}
// Step 1: Analyze
logger.Info("Step 1: Analyzing migrations...")
analysis, err := engine.AnalyzeDirectory("./migrations", config)
if err != nil {
logger.Error("Analysis failed: %v", err)
os.Exit(1)
}
logger.Info("Found %d files, %d objects, %d redundancies",
analysis.TotalFiles,
analysis.TotalObjects,
len(analysis.Redundancies))
// Step 2: Squash
logger.Info("Step 2: Squashing migrations...")
result, err := engine.SquashDirectory("./migrations", config)
if err != nil {
logger.Error("Squashing failed: %v", err)
os.Exit(1)
}
// Step 3: Handle warnings
if len(result.Warnings) > 0 {
logger.Warn("Generated %d warnings:", len(result.Warnings))
for _, w := range result.Warnings {
logger.Warn(" - %s", w)
}
}
// Step 4: Write output
outputDir := "./squashed"
if err := os.MkdirAll(outputDir, 0755); err != nil {
logger.Error("Failed to create output directory: %v", err)
os.Exit(1)
}
outputPath := filepath.Join(outputDir, "001_consolidated.sql")
if err := os.WriteFile(outputPath, []byte(result.SQL), 0644); err != nil {
logger.Error("Failed to write output: %v", err)
os.Exit(1)
}
// Step 5: Report
logger.Info("☑ Success!")
logger.Info(" Files processed: %d", result.FilesProcessed)
logger.Info(" Objects consolidated: %d", result.ObjectsConsolidated)
logger.Info(" Processing time: %s", result.ProcessingTime)
logger.Info(" Output: %s", outputPath)
// Calculate reduction
reduction := float64(analysis.TotalFiles-1) / float64(analysis.TotalFiles) * 100
logger.Info(" File reduction: %.1f%%", reduction)
}
🚀 Advanced Use Cases
Batch Processing Multiple Projects
projects := []string{"./project1/migrations", "./project2/migrations", "./project3/migrations"}
for _, project := range projects {
logger.Info("Processing %s...", project)
result, err := engine.SquashDirectory(project, &engine.Config{
SafetyLevel: engine.Standard,
})
if err != nil {
logger.Error("Failed %s: %v", project, err)
continue
}
logger.Info("☑ %s: %d files → 1 file", project, result.FilesProcessed)
}
CI/CD Integration
package main
import (
"fmt"
"os"
"github.com/CAPYSQUASH/pgsquash-engine/pkg/engine"
)
func main() {
// Analyze migrations in CI
analysis, err := engine.AnalyzeDirectory("./migrations", nil)
if err != nil {
fmt.Fprintf(os.Stderr, "Analysis failed: %v\n", err)
os.Exit(1)
}
// Fail CI if too many redundancies
if len(analysis.Redundancies) > 10 {
fmt.Fprintf(os.Stderr, "☒ Too many redundancies: %d (max 10)\n", len(analysis.Redundancies))
os.Exit(1)
}
fmt.Printf("☑ Migration health check passed\n")
}
Custom Reporting
result, err := engine.SquashDirectory("./migrations", nil)
if err != nil {
log.Fatal(err)
}
// Generate custom report
report := struct {
FilesProcessed int
ObjectsConsolidated int
ProcessingTime string
Warnings int
SQLSize int
}{
FilesProcessed: result.FilesProcessed,
ObjectsConsolidated: result.ObjectsConsolidated,
ProcessingTime: result.ProcessingTime,
Warnings: len(result.Warnings),
SQLSize: len(result.SQL),
}
// Output as JSON
json.NewEncoder(os.Stdout).Encode(report)
📚 API Comparison
| Feature | CLI API (pkg/cli) | Library API (pkg/engine) |
|---|---|---|
| Purpose | Run full CLI | Programmatic squashing |
| Entry Point | cli.Execute() | engine.SquashDirectory() |
| Configuration | Cobra flags | Go structs |
| Output | Terminal | In-memory results |
| Flexibility | All CLI features | Specific functions |
| Best For | CLI wrappers | Custom tools |
👨‍💻 When to Use the Library API
Use CAPYSQUASH Platform when:
- You want the easiest setup with one-click squashing
- You need team collaboration and project sharing
- You want GitHub integration and PR analysis
- You prefer a visual interface
Use capysquash-cli when:
- You need command-line automation
- You're working on local development
- You prefer terminal workflows
- You want quick checks in your dev process
Use the pgsquash-engine library when:
- You're building custom migration tools
- You're integrating consolidation into existing Go applications
- You're creating specialized automation workflows
- You need programmatic control over consolidation
Next Steps
- CAPYSQUASH Platform - Easiest way to get started
- capysquash-cli - Command-line usage
- API Reference - Complete API documentation
- Configuration - Configuration options
- Examples - More code examples
Ready to build custom tooling? Install pgsquash-engine and integrate CAPYSQUASH's consolidation technology into your systems.
go get github.com/CAPYSQUASH/pgsquash-engine