- Merged buffer pooling optimizations into buffer_pool.go
- Integrated string interning and tag detection into zero_alloc_tokenizer.go
- Removed duplicate and superseded optimization implementations
- Added optimized expression parsing to expr.go
- Ensured all tests pass with consolidated implementation
- Maintained the zero-allocation tokenization implementation
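A minimal sketch of the buffer pooling pattern consolidated into buffer_pool.go (names like `getBuffer`/`putBuffer` are illustrative, not the actual API): a `sync.Pool` of `*bytes.Buffer`, reset before reuse so rendering recycles buffers instead of allocating per call.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable *bytes.Buffer values.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// getBuffer fetches a buffer from the pool and clears any prior contents.
func getBuffer() *bytes.Buffer {
	b := bufPool.Get().(*bytes.Buffer)
	b.Reset()
	return b
}

// putBuffer returns a buffer to the pool, dropping oversized ones so the
// pool does not pin large allocations.
func putBuffer(b *bytes.Buffer) {
	if b.Cap() <= 64<<10 {
		bufPool.Put(b)
	}
}

// render is a stand-in for a template render call that borrows a pooled buffer.
func render(s string) string {
	b := getBuffer()
	defer putBuffer(b)
	b.WriteString("rendered: ")
	b.WriteString(s)
	return b.String()
}

func main() {
	fmt.Println(render("hello"))
}
```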
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Added specialized tag detection with direct byte operations
- Implemented a hybrid tokenizer that selects its strategy based on template size
- Created comprehensive benchmarks for tag detection and tokenization
- Documented results in TOKENIZER_V2_BENCHMARK_RESULTS.md
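The direct-byte tag detection mentioned above can be sketched roughly as follows (a hypothetical `nextTag` helper, not the tokenizer's real function): scan for `'{'` and inspect the next byte, rather than searching with a substring routine that may allocate.

```go
package main

import "fmt"

// nextTag returns the offset and kind of the next template tag opener,
// or (-1, 0) if none remains. It compares raw bytes directly instead of
// using strings.Index with a two-byte needle.
func nextTag(src []byte, from int) (pos int, kind byte) {
	for i := from; i < len(src)-1; i++ {
		if src[i] != '{' {
			continue
		}
		switch src[i+1] {
		case '{', '%', '#': // {{ output, {% block tag, {# comment
			return i, src[i+1]
		}
	}
	return -1, 0
}

func main() {
	src := []byte("Hello {{ name }}, {% if ok %}yes{% endif %}")
	pos, kind := nextTag(src, 0)
	fmt.Printf("first tag at %d, kind %q\n", pos, kind)
	pos, kind = nextTag(src, pos+2)
	fmt.Printf("next tag at %d, kind %q\n", pos, kind)
}
```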
Performance improvements:
- Large template tokenization: 154× faster (306,697 ns/op vs 47,226,410 ns/op)
- Zero allocations for all template sizes
- Memory usage for large templates: 0 B/op vs 37,477 B/op
- Small templates: Similar performance to original implementation
- Implemented pooled slices for function arguments
- Added specialized pooling for variable node and literal node objects
- Modified array and hash node evaluation to reduce allocations
- Optimized test and filter evaluation with pooled resources
- Added comprehensive benchmarks to validate improvements
- Updated node pool implementation to remove duplicate declarations
- Fixed memory allocations in the merge filter to correctly handle array manipulation