- Modified ExtendsNode, IncludeNode, ImportNode, and FromImportNode to fall back only when the error is ErrTemplateNotFound
- When a syntax error or other functional error occurs in templates, the real error is now displayed instead of silently falling back to other templates
- Added proper error type checking with errors.Is() to ensure correct fallback behavior
- Improved ignoreMissing behavior in IncludeNode so it applies only to not-found errors
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixed the issue with relative path resolution by:
1. Adding a directory field to Template struct to store template's path
2. Adding lastLoadedTemplate field to RenderContext to track template context
3. Updating Clone method to preserve lastLoadedTemplate reference
4. Setting lastLoadedTemplate in all render contexts to maintain path context
This ensures that when templates include/extend/import other templates
using relative paths (starting with ./ or ../), the paths are properly
resolved relative to the original template's location.
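The resolution rule can be sketched as below. This is a simplified illustration under the stated rule ("./" and "../" resolve against the referencing template's directory); the function name is illustrative, not the engine's actual API.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// resolveTemplatePath is a minimal sketch of the rule described above: names
// starting with "./" or "../" are joined onto the directory of the template
// that references them; all other names are left for the loader to resolve.
func resolveTemplatePath(currentTemplate, name string) string {
	if strings.HasPrefix(name, "./") || strings.HasPrefix(name, "../") {
		// path.Join also cleans "." and ".." segments out of the result.
		return path.Join(path.Dir(currentTemplate), name)
	}
	return name
}

func main() {
	fmt.Println(resolveTemplatePath("pages/blog/post.twig", "./sidebar.twig")) // pages/blog/sidebar.twig
	fmt.Println(resolveTemplatePath("pages/blog/post.twig", "../layout.twig")) // pages/layout.twig
	fmt.Println(resolveTemplatePath("pages/blog/post.twig", "base.twig"))      // base.twig
}
```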
- Deleted benchmark result files that have been consolidated
- Removed optimization plan files that are no longer needed
- Documentation is now centralized in BUFFER_OPTIMIZATION.md
- Created comprehensive documentation of all optimizations
- Described implementation details and performance benefits
- Provided overview of the zero allocation approach
- Documented key techniques and their locations in the codebase
- Merged buffer pooling optimizations into buffer_pool.go
- Integrated string interning and tag detection into zero_alloc_tokenizer.go
- Removed duplicate and superseded optimization implementations
- Added optimized expression parsing to expr.go
- Ensured all tests pass with consolidated implementation
- Maintained zero allocation implementation for tokenization
- Added specialized tag detection with direct byte operations
- Implemented hybrid tokenizer approach for template size optimization
- Created comprehensive benchmarks for tag detection and tokenization
- Documented results in TOKENIZER_V2_BENCHMARK_RESULTS.md
Performance improvements:
- Large template tokenization: 154× faster (306,697 ns/op vs 47,226,410 ns/op)
- Zero allocations for all template sizes
- Memory usage for large templates: 0 B/op vs 37,477 B/op
- Small templates: Similar performance to original implementation
- Added global string cache for efficient string interning (5.2x faster)
- Implemented optimized tokenizer with object pooling
- Created comprehensive benchmarks and documentation
- Cleaned up old optimization files and experiments
Performance improvements:
- String interning: 5.2x faster (1,492 ns/op vs 7,746 ns/op)
- Zero allocations for common strings
- Same memory efficiency as original (36 B/op, 9 allocs/op)
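A global string cache for interning can be sketched as below. This is a minimal illustration of the technique, assuming a `sync.Map`-backed cache; the real implementation may differ.

```go
package main

import (
	"fmt"
	"sync"
)

// internCache is a sketch of a global string cache: strings the tokenizer
// produces repeatedly (tag names, common identifiers) are canonicalized so
// later lookups reuse the already-stored copy instead of allocating anew.
var internCache sync.Map // string -> string

func intern(s string) string {
	if v, ok := internCache.Load(s); ok {
		return v.(string) // fast path: already interned
	}
	v, _ := internCache.LoadOrStore(s, s) // first writer wins under contention
	return v.(string)
}

func main() {
	a := intern("endblock")
	b := intern("endblock")
	fmt.Println(a == b) // true: both refer to the cached entry
}
```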
Track progress of memory optimization efforts with performance metrics and next steps
- Implemented pooled slices for function arguments
- Added specialized pooling for variable node and literal node objects
- Modified array and hash node evaluation to reduce allocations
- Optimized test and filter evaluation with pooled resources
- Added comprehensive benchmarks to validate improvements
- Updated node pool implementation to remove duplicate declarations
- Fixed memory allocations in merge filter to correctly handle array manipulations
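The pooled-slice idea for function arguments can be sketched like this. Names here are illustrative, not the engine's real API; it shows a `sync.Pool` of argument slices borrowed per call and returned afterward.

```go
package main

import (
	"fmt"
	"sync"
)

// argPool holds reusable argument slices so evaluating a function call does
// not allocate a fresh slice every time.
var argPool = sync.Pool{
	New: func() any { return make([]any, 0, 8) },
}

func getArgs() []any  { return argPool.Get().([]any)[:0] }
func putArgs(a []any) { argPool.Put(a[:0]) }

// callFunc borrows a pooled slice, copies the evaluated arguments into it,
// invokes the function, and releases the slice on return.
func callFunc(fn func(...any) any, args ...any) any {
	buf := getArgs()
	defer putArgs(buf)
	buf = append(buf, args...)
	return fn(buf...)
}

func main() {
	sum := func(xs ...any) any {
		total := 0
		for _, x := range xs {
			total += x.(int)
		}
		return total
	}
	fmt.Println(callFunc(sum, 1, 2, 3)) // 6
}
```

Pooled slices must be truncated (not just reused) before being returned, so stale argument values do not leak between calls.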
- Create component-specific memory profiling tool to identify allocation hotspots
- Generate memory usage analysis for key template engine components
- Develop comprehensive implementation plan for zero-allocation rendering
- Prioritize optimization targets based on empirical measurements
- Provide detailed code examples for major optimization strategies
This plan will guide the implementation of a zero-allocation rendering path,
aiming to make Twig one of the most memory-efficient template engines available.
- Create comprehensive benchmarks for memory allocation analysis
- Implement a memory analysis utility to identify allocation hotspots
- Add command-line profiling tool for different template complexities
- Create analysis script to generate detailed memory reports
- Add benchmarking documentation with optimization strategy
These tools will help identify critical areas for optimization
in implementing a zero-allocation rendering path.
- Add detailed README sections for sandbox mode, parent function, and whitespace control
- Create dedicated wiki pages for each new feature
- Update release notes for v1.0.3
- Update Home wiki page to include links to new documentation
- Implement SecurityPolicy interface with function/filter/tag restrictions
- Add DefaultSecurityPolicy with sensible defaults for common operations
- Add sandboxed option to include tag for secure template inclusion
- Implement context-level sandbox flag and methods
- Add engine-level sandbox control methods
- Create comprehensive tests for sandbox functionality
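The shape of the policy interface can be sketched as below. Method names and fields are illustrative, not necessarily the engine's exact signatures: a `SecurityPolicy` answers whether a function, filter, or tag is permitted, and `DefaultSecurityPolicy` implements it with whitelists.

```go
package main

import "fmt"

// SecurityPolicy sketches the interface described above: the sandbox asks it
// before executing any function, filter, or tag.
type SecurityPolicy interface {
	IsFunctionAllowed(name string) bool
	IsFilterAllowed(name string) bool
	IsTagAllowed(name string) bool
}

// DefaultSecurityPolicy allows only whitelisted operations; missing map keys
// default to false, so anything unlisted is denied.
type DefaultSecurityPolicy struct {
	Functions map[string]bool
	Filters   map[string]bool
	Tags      map[string]bool
}

func (p *DefaultSecurityPolicy) IsFunctionAllowed(name string) bool { return p.Functions[name] }
func (p *DefaultSecurityPolicy) IsFilterAllowed(name string) bool   { return p.Filters[name] }
func (p *DefaultSecurityPolicy) IsTagAllowed(name string) bool      { return p.Tags[name] }

func main() {
	policy := &DefaultSecurityPolicy{
		Functions: map[string]bool{"range": true, "date": true},
		Filters:   map[string]bool{"upper": true, "escape": true},
		Tags:      map[string]bool{"if": true, "for": true},
	}
	fmt.Println(policy.IsFunctionAllowed("range")) // true
	fmt.Println(policy.IsTagAllowed("include"))    // false: blocked in the sandbox
}
```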
- Add sandboxed option to include tag
- Implement SecurityPolicy interface and DefaultSecurityPolicy
- Support restricting function calls, filters, and more in sandbox mode
- Add tests for sandbox functionality
Tests confirm proper behavior of {{- and -}} whitespace trimming functionality for both variable and block tags.
This commit implements the parent() function in the Twig template engine,
which allows child templates to access parent template block content when
overriding blocks. The implementation supports:
- Basic single-level parent() calls (child accessing parent content)
- Tracking of original block content through template inheritance chain
- Prevention of infinite recursion in parent() function calls
- Comprehensive test suite for parent() functionality
Future work may include full multi-level inheritance support for nested
parent() calls (child -> parent -> grandparent).
1. Added spaceless filter that removes whitespace between HTML tags
2. Implemented {% apply filter %} ... {% endapply %} tag
3. Updated spaceless tag to use the spaceless filter internally
4. Fixed endverbatim tag handling
5. Added tests for all new functionality
The apply tag allows applying filters to blocks of content, which is the
modern recommended approach in Twig, replacing the deprecated spaceless tag.
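The core rule of the spaceless filter can be approximated as below. This is a hedged sketch, not the engine's implementation: it drops whitespace that sits between a closing `>` and the next opening `<`, which is the behavior the apply/spaceless tags route block content through.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// betweenTags matches runs of whitespace sandwiched between HTML tags.
var betweenTags = regexp.MustCompile(`>\s+<`)

// spaceless removes whitespace between HTML tags, leaving whitespace inside
// text content untouched.
func spaceless(s string) string {
	return betweenTags.ReplaceAllString(strings.TrimSpace(s), "><")
}

func main() {
	fmt.Println(spaceless("<div>\n    <strong>hi</strong>\n</div>"))
	// <div><strong>hi</strong></div>
}
```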
Implements proper resolution of relative paths in templates.
When a template in a subdirectory uses a relative path (starting with "./" or "../")
in an include, extend, import, or from-import directive, the engine now correctly
resolves the path relative to the current template's directory first.
This fixes issues where templates in subdirectories couldn't properly
include/extend templates using relative paths.
- Move parsing functions for tags into dedicated files
- Add comprehensive tests for the from tag
- Fix the implementation of parseFrom to correctly handle imports
- Improve test coverage for macros and imports
- Test basic verbatim tag usage with variables and blocks
- Test error cases like unclosed verbatim tags
- Ensure HTML content with Twig-like syntax is preserved
- Added VerbatimNode with Render method to output content without processing
- Implemented parseVerbatim function to collect all content until endverbatim tag
- Registered verbatim tag in parser block handlers
- Added comprehensive tests for verbatim tag functionality
- Properly preserves Twig syntax in output without processing it
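The content-collection step can be sketched like this. It is a simplified stand-in for `parseVerbatim`, assuming a fixed `{% endverbatim %}` spelling (the real parser works on tokens and tolerates whitespace variants): everything up to the closing tag is taken as raw text, and an unclosed tag is an error.

```go
package main

import (
	"fmt"
	"strings"
)

// collectVerbatim returns everything before the matching {% endverbatim %}
// as raw text, plus the remainder of the source. Twig syntax inside the body
// is deliberately left unprocessed.
func collectVerbatim(src string) (body, rest string, err error) {
	const end = "{% endverbatim %}"
	i := strings.Index(src, end)
	if i < 0 {
		return "", "", fmt.Errorf("unclosed verbatim tag")
	}
	return src[:i], src[i+len(end):], nil
}

func main() {
	body, _, err := collectVerbatim("{{ raw }} stays literal {% endverbatim %} after")
	fmt.Println(body, err) // the {{ raw }} syntax survives untouched
}
```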
- Fix whitespace indentation in do tag implementation
- Remove debug prints from range function
- Delete temporary update_tokenizer file
- Fix newline at end of file
- Add DoNode type and rendering implementation
- Update parser to properly handle do tag assignments
- Add special tokenization for do tag expressions
- Add comprehensive test cases for do tag functionality
- Fix numeric variable validation in do tag assignments
- Enable using do tag for expressions and assignments
Latest benchmark runs show dramatic performance improvements:
- Twig is now 57x faster than Go's html/template for complex templates
- Memory usage reduced by 90% compared to standard Go templates
- Medium template rendering improved from 0.35 µs/op to 0.14 µs/op
- Simple template rendering improved from 0.47 µs/op to 0.28 µs/op
These improvements reflect the object pooling and filter chain handling
optimizations from recent commits.
This commit dramatically improves memory efficiency and performance by:
1. Adding object pooling for frequently allocated node types:
- TextNode, PrintNode, RootNode, LiteralNode, VariableNode, IfNode, ForNode
- Released nodes properly after rendering is complete
2. Implementing token pooling to reduce allocations during parsing:
- Added token pool for all token creation sites
- Optimized token slice allocation with capacity hints
3. Improved RenderContext and StringBuffer handling:
- Better cleanup and resource management
Benchmark results show:
- 91% reduction in memory usage compared to Go's templates
- 60x performance improvement in rendering speed
- Consistently 2 allocations per operation for all node types
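The pooling pattern for node types can be sketched as below. This is a minimal illustration using one node type; the field and pool names are illustrative. The key points are getting nodes from a `sync.Pool` at parse time and clearing references before returning them.

```go
package main

import (
	"fmt"
	"sync"
)

// TextNode stands in for the pooled node types listed above.
type TextNode struct{ Text string }

var textNodePool = sync.Pool{New: func() any { return new(TextNode) }}

// getTextNode borrows a node from the pool and initializes it.
func getTextNode(text string) *TextNode {
	n := textNodePool.Get().(*TextNode)
	n.Text = text
	return n
}

// releaseTextNode clears the node's references so pooled nodes don't pin
// memory, then returns it to the pool for the next parse.
func releaseTextNode(n *TextNode) {
	n.Text = ""
	textNodePool.Put(n)
}

func main() {
	n := getTextNode("hello")
	fmt.Println(n.Text) // hello
	releaseTextNode(n)  // reusable by the next parse/render cycle
}
```

Releasing only after rendering completes (as the commit notes) is essential; releasing early would let the pool hand the same node to a concurrent render.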
- Improved HTML escape filter using strings.Builder for single-pass escaping
- Optimized filter chain building to avoid O(n²) prepending operations
- Added benchmarks for filter chain operations with different depths
- Enhanced escape filter performance with pre-allocation strategy
- Filter chain collection now has O(n) complexity instead of O(n²)
- Added tracking of last access time and access count to cache entries
- Implemented eviction policy to remove least recently/frequently used entries
- The cache now evicts 10% of entries when full, prioritized by usage
- Added benchmarks and tests to verify eviction strategy
- Fixed the previously ineffective eviction strategy
- Improved cache efficiency for applications with many diverse object types
- Added byteBufferPool to reuse buffers during rendering
- Implemented WriteString utility function to eliminate allocations
- Updated all node Render methods to use the optimized function
- Reduced memory allocations during template rendering
- Added benchmarks showing significant improvement with io.Writer
This commit fixes potential goroutine leaks by ensuring render contexts
are properly released even in error paths. The changes include:
1. Using defer for context release in ExtendsNode
- Ensures parent context is always released even when errors occur
- Simplifies control flow with unified return path
2. Using defer for context release in MacroNode.CallMacro
- Removes manual error handling with context release calls
- Ensures macro context is always properly returned to the pool
3. Using defer for context release in ImportNode
- Prevents resource leakage when template rendering fails
- Removes duplicate context release code in error paths
4. Using defer for context release in FromImportNode
- Ensures consistent resource cleanup in all code paths
- Simplifies error handling logic
These changes improve stability and resource management in the template
engine, especially under error conditions that previously could cause
context objects to be leaked.
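The defer pattern the four nodes adopt can be sketched as below. Types and names are illustrative; the point is that `defer` guarantees the release runs on the error path as well as the success path, so pooled contexts never leak.

```go
package main

import (
	"errors"
	"fmt"
)

// renderCtx stands in for the pooled render context.
type renderCtx struct{ released bool }

// Release would return the context to its pool; here it just records the call.
func (c *renderCtx) Release() { c.released = true }

// render acquires a context and defers its release immediately, so every
// return path (including early error returns) cleans up exactly once.
func render(fail bool) (ctx *renderCtx, err error) {
	ctx = &renderCtx{}
	defer ctx.Release()
	if fail {
		return ctx, errors.New("render failed") // Release still runs
	}
	return ctx, nil
}

func main() {
	ctx, err := render(true)
	fmt.Println(err, ctx.released) // the context was released despite the error
}
```

Compared with manual release calls before each `return`, this removes the duplicated cleanup code the commit mentions and makes new error paths safe by construction.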
This commit adds:
1. A new benchmark for comparing serialization performance
- Creates a benchmark for the new binary serialization format
- Compares with previous gob-based serialization
- Measures serialization, deserialization, and round-trip performance
- Reports memory usage improvements
2. Updated benchmark results in BENCHMARK_RESULTS.md
- Documents the 53.8% size reduction
- Shows 22.4x overall performance improvement
- Highlights memory efficiency benefits
The serialization_benchmark.go file provides a comprehensive test suite
for validating and demonstrating the optimization benefits in different
scenarios and with different template complexity levels.
This commit improves the template serialization format by:
1. Replacing gob encoding with binary encoding for CompiledTemplate
- Reduces serialized data size by ~55%
- Improves serialization/deserialization speed by ~22x
- Adds fallback path for backward compatibility
2. Adding buffer pooling to reduce allocations
- Reuses buffers during serialization
- Minimizes garbage collection pressure
3. Adding version marker to serialized data
- Enables future format changes with backward compatibility
- Provides clean upgrade path
4. Adding Size() method to report compiled template size
- Helps with monitoring memory usage
This optimization significantly reduces both memory usage and CPU
overhead when using compiled templates.
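The framing idea can be sketched as follows. This is a toy two-field format, not the actual CompiledTemplate layout: a leading version byte, then length-prefixed fields via `encoding/binary`, which is the kind of direct encoding that beats gob's self-describing stream.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

const formatVersion = 2 // version marker enables future format changes

// serialize writes a version byte followed by length-prefixed string fields.
func serialize(name, source string) []byte {
	var buf bytes.Buffer
	buf.WriteByte(formatVersion)
	for _, field := range []string{name, source} {
		var lenBuf [4]byte
		binary.LittleEndian.PutUint32(lenBuf[:], uint32(len(field)))
		buf.Write(lenBuf[:])
		buf.WriteString(field)
	}
	return buf.Bytes()
}

// deserialize checks the version marker first; an unknown version is where a
// fallback decoder (e.g. legacy gob) could be tried for backward compatibility.
func deserialize(data []byte) (name, source string, err error) {
	if len(data) < 1 || data[0] != formatVersion {
		return "", "", fmt.Errorf("unknown format version")
	}
	rest := data[1:]
	fields := make([]string, 0, 2)
	for i := 0; i < 2; i++ {
		if len(rest) < 4 {
			return "", "", fmt.Errorf("truncated data")
		}
		n := binary.LittleEndian.Uint32(rest[:4])
		rest = rest[4:]
		if uint32(len(rest)) < n {
			return "", "", fmt.Errorf("truncated field")
		}
		fields = append(fields, string(rest[:n]))
		rest = rest[n:]
	}
	return fields[0], fields[1], nil
}

func main() {
	data := serialize("index.twig", "Hello {{ name }}")
	name, source, _ := deserialize(data)
	fmt.Println(name, source) // index.twig Hello {{ name }}
}
```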
This commit improves the template loading mechanism by:
1. Minimizing lock-holding time
- Uses short read locks for initial cache check
- Avoids holding locks during template loading and parsing operations
- Acquires write lock only when necessary to update cache
2. Restructuring the template loading logic
- Cleaner control flow with early returns
- Better error handling
- Prevents unnecessary lock acquisitions
These changes significantly reduce contention in concurrent environments
where many templates are loaded simultaneously, improving performance
for high-throughput scenarios.
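The locking strategy can be sketched as below. Names are illustrative; it shows the classic pattern the commit describes: a short read lock for the cache check, no lock held during the expensive load/parse, and a write lock only to publish, with a re-check in case another goroutine loaded the template first.

```go
package main

import (
	"fmt"
	"sync"
)

type templateCache struct {
	mu    sync.RWMutex
	cache map[string]string // name -> parsed template (string as a stand-in)
}

func (c *templateCache) Load(name string, load func(string) string) string {
	// Fast path: short read lock, released before any slow work.
	c.mu.RLock()
	tmpl, ok := c.cache[name]
	c.mu.RUnlock()
	if ok {
		return tmpl
	}

	tmpl = load(name) // expensive load + parse runs with no lock held

	// Publish under the write lock, re-checking in case we lost the race.
	c.mu.Lock()
	defer c.mu.Unlock()
	if cached, ok := c.cache[name]; ok {
		return cached
	}
	c.cache[name] = tmpl
	return tmpl
}

func main() {
	c := &templateCache{cache: map[string]string{}}
	load := func(name string) string { return "parsed:" + name }
	fmt.Println(c.Load("index.twig", load)) // parsed:index.twig
	fmt.Println(c.Load("index.twig", load)) // served from cache
}
```

The trade-off is that two goroutines may occasionally parse the same template concurrently, but only one result is published, which is usually cheaper than serializing all loads behind one lock.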
This commit introduces two major performance improvements:
1. Memory optimization in string handling
- Pre-allocates slice capacities to reduce reallocations
- Uses string builders with pre-allocated capacity
- Avoids unnecessary string concatenations and conversions
- Optimizes pattern matching code to reduce allocations
- Reuses memory for string slices where possible
2. Fixed unbounded attribute cache growth
- Added maximum cache size limit (1000 entries)
- Implemented simple cache eviction strategy
- Tracks current cache size with counter
- Prevents potential memory leaks in long-running applications
These changes significantly reduce memory allocations and improve
performance, especially for large templates.
- Created comprehensive macro examples that demonstrate:
- Scope handling in macros
- Self-referencing macros with _self
- Template context usage
- Macro libraries and organization
- Performance optimization techniques
- Implementation note about from...import syntax
This provides concrete examples of the concepts described in the README.
- Added specific benchmark for macro performance (direct, imported, nested)
- Added memory usage benchmark comparing Twig with Go templates
- Created MEMORY_RESULTS.md to track memory efficiency
- Updated benchmark results with latest performance metrics
- Enhanced documentation for macros in README with examples
- Fixed badges in README for better rendering on GitHub
- Added comprehensive information about variable scope in macros
- Updated example code to demonstrate nested macros
- Improved organization of benchmark directory
- Updated roadmap with completed items and future plans
- Fixed macro parameter parsing with improved tokenization handling
- Added support for proper module.function() syntax in templates
- Enhanced filter handling within macro variables
- Added support for variable interpolation in macro body
- Added comprehensive test suite for macros with various scenarios
- Improved overall test coverage from 43.8% to 45.7%