- Added explicit support and usage examples for Mistral and LLaMA architectures in both root and llm/ READMEs
- Updated directory structure and naming (datasets, tokenizers, mistral, hf-proxy)
- Clarified quickstart and experiments usage, including config locations and the CLI
- Documented the HuggingFace integration via hf-proxy and marked it as experimental
- Highlighted the differences and specifics of all supported architectures
- Improved guide for launching training/generation/experiments
- Made project scope and architecture more transparent for new contributors
- Expanded module-level and function/class docstrings in optimizer.py, scheduler.py, and trainer.py
- Described the mathematical foundations and theoretical motivations, and provided detailed usage examples for students
- All docstrings are written in Russian, in a clear scientific style
test(training): add comprehensive tests for optimizer, scheduler, and trainer modules
- Added new test files for get_optimizer, get_linear_schedule_with_warmup, and Trainer
- Tests cover parameter handling, edge cases, and expected learning dynamics (lr schedules and loss behavior)
- Trainer now logs average epoch losses to self.loss_history for testability and analysis
refactor(training/trainer): log epoch loss to loss_history for downstream analysis and tests
BREAKING CHANGE: Trainer.loss_history is a new attribute consolidating average losses per epoch, enabling robust learning dynamics assertions in tests
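A self-contained sketch of the learning-rate dynamics these tests assert. The schedule below is an illustrative re-implementation of linear warmup followed by linear decay; the repository's get_linear_schedule_with_warmup may differ in signature and edge-case handling.

```python
# Illustrative linear warmup-then-decay schedule; not necessarily the repo's exact helper.
import torch
from torch.optim.lr_scheduler import LambdaLR


def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps):
    # Linear ramp from 0 up to the base lr, then linear decay back to 0.
    def lr_lambda(step):
        if step < num_warmup_steps:
            return step / max(1, num_warmup_steps)
        return max(0.0, (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps))
    return LambdaLR(optimizer, lr_lambda)


model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=10, num_training_steps=100)

lrs = []
for _ in range(100):
    optimizer.step()
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])

assert lrs[9] <= 1e-3 and lrs[-1] < lrs[10]  # warmup phase, then monotone decay
```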
- docs: add comprehensive docstrings for the Mistral class (in Russian) and its methods (forward, generate)
- docs: explain model architecture (GQA, Sliding Window Attention, SwiGLU, RMSNorm, RoPE), arguments, constraints, generation modes, usage examples, and references (Mistral, nucleus sampling)
- strictly documentation improvements, no logic/API changes
This commit makes Mistral model documentation clear and user-friendly for LLM engineering and inference.
- docs: add full, detailed Russian-language docstring for LLaMA.generate (sampling, top-k/top-p, examples, all parameter constraints and references)
- docs: bring LLaMA class header in line with modern LLM doc practices (motivation, architecture, references)
- no changes to logic, API, or tests
This makes the LLaMA model documentation fully transparent for all generation and inference modes.
- docs: add detailed Russian-language docstring for generate method (args, nuances, sampling modes, error handling, usage examples, references to nucleus sampling and GPT-2 paper)
- strictly doc improvements, no logic or API changes
The updated documentation helps users clearly understand all generation options, constraints, and usage modes of GPT2 LLMs.
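For context on the sampling modes referenced in the generate docstrings above, here is a generic sketch of top-k and nucleus (top-p) filtering over next-token logits; it is illustrative only and not the exact filtering code used by the GPT2/LLaMA/Mistral generate methods.

```python
# Generic top-k / top-p (nucleus) filtering over a 1-D logits vector.
import torch


def filter_logits(logits: torch.Tensor, top_k: int = 0, top_p: float = 1.0) -> torch.Tensor:
    logits = logits.clone()
    if top_k > 0:
        # Keep only the k highest logits.
        kth = torch.topk(logits, top_k).values[..., -1, None]
        logits[logits < kth] = float("-inf")
    if top_p < 1.0:
        # Keep the smallest prefix of sorted tokens whose cumulative probability reaches top_p.
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        probs_sorted = torch.softmax(sorted_logits, dim=-1)
        cumprobs = probs_sorted.cumsum(dim=-1)
        remove = cumprobs - probs_sorted > top_p   # exclusive cumsum, so the top token is always kept
        logits[sorted_idx[remove]] = float("-inf")
    return logits


logits = torch.randn(50_000)                       # next-token logits over the vocabulary
probs = torch.softmax(filter_logits(logits, top_k=50, top_p=0.9), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```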
- docs: narrow the References in the GPT model docstring to the original GPT-1 paper (Radford et al., 2018) plus the BPE and "Attention Is All You Need" papers, removing GPT-2/HuggingFace links
- no changes to logic, API, or tests
This makes the documentation accurate for the vanilla GPT architecture and research lineage.
- docs: update and clarify docstrings for BpeTokenizer class and main methods (encode, decode)
- explain BPE algorithm, motivation, architecture, detailed usage examples, implementation details, references to original papers and major LLMs
- strictly doc improvements, no logic/API changes
This update makes tokenizer code easier to understand and use for language modeling research and engineering.
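To make the documented algorithm concrete, a toy illustration of a single BPE merge step (count adjacent symbol pairs, merge the most frequent pair). The real BpeTokenizer performs many such merges and handles vocabulary management and special tokens not shown here.

```python
# Toy illustration of one BPE merge step.
from collections import Counter

# A tiny "corpus": words split into symbols, with their frequencies.
words = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("n", "e", "w"): 6}

# 1. Count adjacent symbol pairs, weighted by word frequency.
pairs = Counter()
for symbols, freq in words.items():
    for a, b in zip(symbols, symbols[1:]):
        pairs[(a, b)] += freq

best = max(pairs, key=pairs.get)   # most frequent pair, here ("l", "o")

# 2. Merge that pair everywhere it occurs.
def merge(symbols, pair):
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return tuple(out)

words = {merge(s, best): f for s, f in words.items()}
print(best, words)  # ('l', 'o') {('lo', 'w'): 5, ('lo', 'w', 'e', 'r'): 2, ('n', 'e', 'w'): 6}
```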
- docs: expand, clarify, and modernize docstrings for TokenEmbeddings class and its methods (__init__, forward, properties)
- explain layer purpose, motivation, math, parameter details, usage examples, and references
- no logic/API changes
This makes the input embedding code more accessible and maintainable for transformer and LLM development.
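A minimal sketch of what a token-embedding layer does: an nn.Embedding lookup from token ids to d_model-dimensional vectors. The sqrt(d_model) scaling shown below is the original-Transformer convention and is only illustrative; whether TokenEmbeddings applies it is described in its docstring.

```python
# Illustrative token-embedding lookup; the scaling step is an assumption.
import math
import torch
from torch import nn

vocab_size, d_model = 32_000, 512
emb = nn.Embedding(vocab_size, d_model)

token_ids = torch.randint(0, vocab_size, (2, 16))   # (batch, seq_len)
x = emb(token_ids) * math.sqrt(d_model)             # (batch, seq_len, d_model)
print(x.shape)  # torch.Size([2, 16, 512])
```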
- docs: rewrite and expand docstrings for SwiGLU class and forward method (motivation, math, architecture, usage, references to LLaMA/Mistral/PaLM)
- test: add unit tests for SwiGLU (shape, dtype, gradients, output range, fp16 support, reproducibility)
- strictly doc/tests, no logic or API changes
This improves transparency and reliability for gated FFN blocks in transformer architectures.
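A compact sketch of the gated FFN described above, following the SwiGLU formulation referenced in the LLaMA/PaLM papers: down-project the elementwise product of a SiLU-gated branch and a linear branch. Layer names, dimensions, and the absence of biases are assumptions that may differ from the repository's SwiGLU class.

```python
# SwiGLU gated feed-forward block, LLaMA/Mistral style; details are illustrative.
import torch
from torch import nn
import torch.nn.functional as F


class SwiGLU(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_hidden, bias=False)
        self.w_up = nn.Linear(d_model, d_hidden, bias=False)
        self.w_down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated linear unit: silu(x W_gate) elementwise-multiplied by x W_up.
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))


x = torch.randn(2, 16, 512)
print(SwiGLU(512, 1376)(x).shape)  # torch.Size([2, 16, 512])
```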
- docs: expand and clarify docstrings for SiLU class and its method (mathematical formula, motivation, properties vs ReLU/GELU, usage, and references to Swish/LLM papers)
- test: add unit tests for SiLU (shape/dtype, behavior on large/small values, PyTorch reference, gradients, broadcast)
- no logic/API changes
This update improves reliability and usability of the SiLU activation module.
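A hedged sketch of the kind of reference check such tests perform, comparing a hand-written SiLU (x · sigmoid(x)) against PyTorch's built-in silu; the repository's actual tests may be structured differently.

```python
# SiLU(x) = x * sigmoid(x); checked against torch.nn.functional.silu as a reference.
import torch


def silu(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)


def test_silu_matches_torch_reference():
    x = torch.randn(4, 8, requires_grad=True)
    y = silu(x)
    assert torch.allclose(y, torch.nn.functional.silu(x), atol=1e-6)
    # Gradients flow through both factors of x * sigmoid(x).
    y.sum().backward()
    assert x.grad is not None and x.grad.shape == x.shape
```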
- docs: update and expand the docstrings for the RMSNorm class and its methods (motivation, formula, architecture, usage, references to LLaMA/PaLM/GPT)
- test: add comprehensive unit tests for RMSNorm (shape/type preservation, rms scaling, gradients for input and weights, fp16, large eps stability)
No code/API changes beyond docs and new tests.
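A reference-style sketch of the formula these docs and tests describe: RMSNorm divides by the root-mean-square of the features (no mean subtraction) and rescales with a learned weight. The eps value and placement are assumptions and may differ from the repository's class.

```python
# RMSNorm: weight * x / rms(x), with rms(x) = sqrt(mean(x^2) + eps).
import torch
from torch import nn


class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x / rms)


x = torch.randn(2, 16, 512)
y = RMSNorm(512)(x)
print(y.shape, y.pow(2).mean(-1).sqrt().mean())  # per-token RMS is ~1 after normalization
```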
- docs: update and clarify docstrings for PositionalEmbeddings class and methods (__init__, forward)
- explain motivation, mathematical formulas, usage examples, architectural options (learned vs sinusoidal), external references
- no API or code changes
This makes the positional encoding component easier to understand and use for all transformer practitioners.
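A short sketch of the sinusoidal option mentioned above, PE[pos, 2i] = sin(pos / 10000^(2i/d)) and PE[pos, 2i+1] = cos(·); whether PositionalEmbeddings uses this fixed table or a learned embedding is the architectural choice its docstring discusses.

```python
# Sinusoidal positional-encoding table; illustrative of the "sinusoidal" option.
import torch

def sinusoidal_table(max_len: int, d_model: int) -> torch.Tensor:
    pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)    # (max_len, 1)
    two_i = torch.arange(0, d_model, 2, dtype=torch.float32)         # even feature indices
    angles = pos / torch.pow(10_000.0, two_i / d_model)              # (max_len, d_model/2)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe                                                         # added to token embeddings

print(sinusoidal_table(1024, 512).shape)  # torch.Size([1024, 512])
```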
- docs: rewrite and expand docstrings for GELU class and method (motivation, math formula, smoother ReLU for Transformers, usage, references)
- test: add dedicated tests for GELU (output shape, dtype, comparison with torch GELU, monotonicity, gradients, large/small value behavior)
- fix: align numerical test to allow for minor approximation difference vs PyTorch gelu
This update makes the GELU module more transparent and robust for deep learning practitioners and researchers.
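To illustrate the tolerance fix: the tanh approximation of GELU differs slightly from the exact erf-based form PyTorch uses by default, so a strict element-wise comparison needs a small tolerance. The sketch assumes the module under test uses the tanh approximation.

```python
# Tanh approximation of GELU vs. PyTorch's exact (erf-based) GELU.
import math
import torch


def gelu_tanh(x: torch.Tensor) -> torch.Tensor:
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x.pow(3))))


x = torch.linspace(-4, 4, steps=101)
exact = torch.nn.functional.gelu(x)               # erf-based by default
approx = gelu_tanh(x)
print((exact - approx).abs().max())               # small but nonzero approximation gap
assert torch.allclose(exact, approx, atol=1e-2)   # loose tolerance, as in the adjusted test
```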
- docs: rewrite and clarify docstrings for FeedForward class and its methods (__init__, forward) with architectural explanation, pseudocode, motivation, parameter details, usage example, and key references (GELU, SwiGLU, Transformer)
- no changes to logic or APIs
This makes the feed-forward block more transparent for users and researchers working with transformer models.
- docs: rewrite and expand docstrings for Decoder class and its methods (__init__, forward)
- clarify the block’s architecture, pre-LN logic, flow with residual connections, and attention masking
- add mathematical pseudocode, motivation, feature list, usage example, and external references (papers, blog)
- no logic or behavior changes
This improves readability and makes the codebase easier to understand for transformer/LLM practitioners.
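A schematic sketch of the pre-LN flow described above: normalize before each sub-layer, add the residual after it. It uses torch.nn.MultiheadAttention for brevity, whereas the repository's Decoder composes its own attention and feed-forward modules.

```python
# Pre-LN decoder block: x + Attn(LN(x)), then x + FFN(LN(x)). Schematic only.
import torch
from torch import nn


class PreLNBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

    def forward(self, x, attn_mask=None):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)[0]  # residual 1
        x = x + self.ffn(self.norm2(x))                                         # residual 2
        return x


x = torch.randn(2, 16, 512)
causal = torch.triu(torch.ones(16, 16, dtype=torch.bool), diagonal=1)  # True = masked out
print(PreLNBlock(512, 8, 2048)(x, attn_mask=causal).shape)
```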
- docs: Add detailed docstrings for CachedDecoder class and its methods (__init__, forward); explain autoregressive caching, architecture, math, usage, and links to GPT-2/LLM references
- test: Add comprehensive unit tests for CachedDecoder (initialization, forward with and without cache, cache chaining, output shape, error on long input, backward pass)
- These changes improve code clarity, reliability, and testing for decoder blocks with KV cache.
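The essence of the caching these docs and tests cover, shown standalone: keys/values computed at earlier steps are concatenated with those of the new token, so each decoding step attends over the full prefix without recomputing it. The (batch, heads, seq, head_dim) layout is an assumption; CachedDecoder's actual cache format may differ.

```python
# Autoregressive KV-cache idea: append this step's K/V to the cached prefix.
import torch

batch, heads, head_dim = 1, 8, 64
k_cache = torch.empty(batch, heads, 0, head_dim)    # empty cache before the first step
v_cache = torch.empty(batch, heads, 0, head_dim)

for step in range(4):
    k_new = torch.randn(batch, heads, 1, head_dim)  # K/V for the single new token
    v_new = torch.randn(batch, heads, 1, head_dim)
    k_cache = torch.cat([k_cache, k_new], dim=2)    # grow along the sequence axis
    v_cache = torch.cat([v_cache, v_new], dim=2)
    q = torch.randn(batch, heads, 1, head_dim)      # query for the new token only
    attn = torch.softmax(q @ k_cache.transpose(-2, -1) / head_dim**0.5, dim=-1)
    out = attn @ v_cache                            # (batch, heads, 1, head_dim)

print(k_cache.shape)  # torch.Size([1, 8, 4, 64])
```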
- docs: expanded docstrings for MistralDecoder class and methods (__init__, forward); explained architecture, key parameters, usage, and links to relevant papers (Mistral, Llama 2)
- test: add comprehensive unit tests for MistralDecoder (init, forward, cache handling, output shape, shape errors, backward)
- These changes improve explainability, reliability, and test coverage for the decoder module.
- docs: Rewrite and expand docstrings for the GroupedQueryAttention class and all main methods (__init__, forward, _repeat_kv_heads, _create_sliding_window_mask):
- explained GQA architecture and motivation
- included mathematical formulas, step-by-step algorithms, usage examples
- added references to relevant scientific papers (Mistral, Llama 2, etc.)
- test: Add dedicated unit tests for GQA (output shape correctness, mask/window logic, KV head replication, RoPE processing, error and edge-cases)
- docs/test: Documentation and tests now fully reflect modern GQA usage and best practices for LLM architectures
This commit makes the implementation, usage, and theoretical underpinnings of GQA transparent and reproducible for researchers and engineers.
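Two of the documented mechanisms shown in isolation: replicating each KV head across its query-head group (the _repeat_kv_heads step) and building a causal sliding-window mask (the _create_sliding_window_mask step). Shapes and the True-means-masked convention are illustrative assumptions.

```python
# GQA building blocks in isolation.
import torch

# 1. KV-head replication: 2 KV heads serving 8 query heads -> repeat each 4 times.
batch, n_kv_heads, seq, head_dim = 1, 2, 6, 16
n_query_heads = 8
kv = torch.randn(batch, n_kv_heads, seq, head_dim)
kv_repeated = kv.repeat_interleave(n_query_heads // n_kv_heads, dim=1)
print(kv_repeated.shape)  # torch.Size([1, 8, 6, 16])

# 2. Causal sliding-window mask (window = 3): position i may attend to j
#    only if i - window < j <= i. True marks a forbidden position.
window = 3
i = torch.arange(seq).unsqueeze(1)
j = torch.arange(seq).unsqueeze(0)
mask = (j > i) | (j <= i - window)
print(mask.int())
```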
- Covers inference with and without cache and with sampling (top-k, top-p)
- Includes test for max sequence length (should raise ValueError)
- Verifies output shape and absence of dtype errors for the mask logic
- Minimal config and random data ensure tests are fast and robust
Motivation: Regression and integration protection for Llama decoding and sampling logic.
- Covers forward pass with and without KV-cache
- Verifies correct sequence generation for greedy, top-k, and top-p sampling
- Adds ValueError test for exceeding max sequence length
- Uses small random toy config and minimal setup for fast test feedback
Motivation: Prevent regressions in decoding, sampling, and KV-cache logic in GPT2 implementation.
- Refactored the token-selection logic in the generation methods of the GPT2 and Llama classes.
- Masks are now created with dtype=torch.bool (or torch.uint8 for legacy PyTorch).
- Used True/False for mask/scatter instead of 1/0, ensuring correctness across PyTorch versions.
- Fixed RuntimeError: masked_fill_ only supports boolean masks, previously raised by uint8-masks in new PyTorch.
- Backward compatibility maintained: code works on PyTorch >=1.2 and for old clusters (via the else branch).
Motivation: Fixes sampling errors for all modern PyTorch users while keeping research code usable on old infra.
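The pattern behind this fix, in isolation: masked_fill_ on recent PyTorch accepts only boolean masks, so masks are built as torch.bool, with uint8 kept only as a legacy fallback. The version check below is a simplified stand-in for the actual compatibility branch.

```python
# masked_fill_ in modern PyTorch requires a torch.bool mask; uint8 masks now raise
# "RuntimeError: masked_fill_ only supports boolean masks". Simplified illustration.
import torch

logits = torch.randn(1, 10)
top_k = 3
kth_value = torch.topk(logits, top_k).values[..., -1, None]

if hasattr(torch, "bool"):                       # PyTorch >= 1.2
    mask = logits < kth_value                    # comparisons already yield torch.bool
else:                                            # legacy fallback for very old installs
    mask = (logits < kth_value).to(torch.uint8)

logits = logits.masked_fill(mask, float("-inf"))  # True marks tokens to drop
next_token = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
```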
- refactor: improved and unified the RoPE implementation, which now enforces strict input-dimension checks; cleaned up and restructured MultiHeadAttention (clearer logic, strict input/output specification)
- docs: fully rewrote the docstrings for RoPE and MultiHeadAttention, including mathematical formulas, references to the underlying papers, detailed explanations of the algorithm, input format, and constraints, and usage examples
- test: added dedicated unit tests for RoPE (shape correctness, errors on invalid dimensionality, norm preservation, backward/gradients, handling of start_pos and batched inputs)
- chore: removed the unused core/head_attention.py module
- fix: RoPE now raises an AssertionError on inputs with the wrong dimensionality, which allowed the error test cases to be covered completely
This commit brings the core attention implementation in line with modern LLM practice, strengthens the documentation for engineers and researchers, and makes the library's automated tests more reliable.
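A minimal sketch of the rotation RoPE applies, including the strict dimensionality assert and the start_pos offset mentioned above; the expected 4-D layout and the exact start_pos semantics are assumptions that the real implementation documents precisely.

```python
# Rotary positional embedding applied to a (batch, heads, seq_len, head_dim) tensor.
import torch


def apply_rope(x: torch.Tensor, start_pos: int = 0, theta: float = 10_000.0) -> torch.Tensor:
    assert x.dim() == 4, "expected (batch, heads, seq_len, head_dim)"
    _, _, seq_len, head_dim = x.shape
    pos = torch.arange(start_pos, start_pos + seq_len, dtype=torch.float32)   # absolute positions
    inv_freq = 1.0 / theta ** (torch.arange(0, head_dim, 2).float() / head_dim)
    angles = torch.outer(pos, inv_freq)                                       # (seq_len, head_dim/2)
    cos, sin = angles.cos(), angles.sin()
    x_even, x_odd = x[..., 0::2], x[..., 1::2]                                # rotate feature pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x_even * cos - x_odd * sin
    out[..., 1::2] = x_even * sin + x_odd * cos
    return out


q = torch.randn(1, 8, 5, 64)
q_rot = apply_rope(q, start_pos=12)   # e.g. 12 tokens already held in the KV cache
assert torch.allclose(q.norm(dim=-1), q_rot.norm(dim=-1), atol=1e-4)  # rotation preserves norm
```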
- implement Mistral model in llm/models/mistral/mistral.py with GroupedQueryAttention, SwiGLU, RoPE, sliding window attention
- add __init__.py for module export
- add config files for mistral training and generation
- update universal experiment runner to support Mistral model
- add notebook for Mistral experiments
- Fixed the position calculation for RoPE (Rotary Positional Embeddings) during cached autoregressive generation.
- HeadAttention now passes start_pos to RoPE, computed from the cache length.
- Updated the signature and logic of RoPE.forward.
- Updated the llama.ipynb notebook to match the new interfaces and outputs.
BREAKING CHANGE: the forward method of RoPE has been redefined; code that called RoPE directly must be updated.
- add universal runner run_llm_experiment.py with JSON-config driven LLM training / generation
- add configs for gpt, gpt2, llama (training/generation)
- remove individual train/generate scripts for each model
- update README with simple how-to for experiments block
BREAKING CHANGE: all llm_only experiments now run only through run_llm_experiment.py; legacy scripts removed
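A hedged sketch of what a JSON-config-driven run might look like. The config keys and the CLI flag are hypothetical placeholders; the real schema is defined by run_llm_experiment.py and the configs shipped in the repository.

```python
# Hypothetical example of preparing a JSON config for the universal runner.
# Key names and paths are placeholders; consult the shipped gpt/gpt2/llama configs.
import json

config = {
    "model": "llama",                     # which architecture to build
    "mode": "train",                      # "train" or "generate"
    "dataset": "datasets/tiny.txt",       # illustrative paths
    "tokenizer": "tokenizers/bpe.json",
    "training": {"epochs": 3, "batch_size": 16, "lr": 3e-4},
}

with open("llama_train.json", "w") as f:
    json.dump(config, f, indent=2)

# then, roughly: python run_llm_experiment.py --config llama_train.json
```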
- Improved and extended the docstrings of the core components (decoder, cached_decoder, multi_head_attention, head_attention, feed_forward, token_embeddings, positional_embeddings, gelu, silu, swi_glu, rope, rms_norm)
- Written in Russian: the architectural algorithms are explained, with formulas and references to the papers
- Added detailed descriptions of the classes, the forward/generate methods, and the input/output formats for all models (GPT, GPT2, LLaMA)
- Added usage examples to every key class
- Described the underlying scientific concepts, architectural differences, and the reasoning behind design choices
- Completely removed duplicate CachedDecoder from llama.py
- Modified core CachedDecoder to support dependency injection:
- Added feed_forward_layer parameter (required)
- Added norm_layer parameter with LayerNorm default
- Added rope parameter for RoPE support
- Removed unused activation parameter
- Updated GPT2 to use new CachedDecoder with FeedForward
- Updated LLaMA to use new CachedDecoder with SwiGLU and RMSNorm
- Fixed the constructor parameter order so that required parameters precede those with defaults, as Python requires
This eliminates all code duplication while maintaining architectural specificities through dependency injection.
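A sketch of the dependency-injection shape described above: the decoder block receives its feed-forward, normalization, and optional RoPE modules from the caller instead of hard-coding them. The class below is simplified and its attention interface is assumed, not the repository's exact CachedDecoder.

```python
# Dependency-injected decoder block: GPT2 injects FeedForward + LayerNorm,
# LLaMA injects SwiGLU + RMSNorm; an optional RoPE module can also be passed in.
import torch
from torch import nn


class CachedDecoderSketch(nn.Module):
    def __init__(self, attention: nn.Module, feed_forward_layer: nn.Module,
                 norm_layer: type = nn.LayerNorm, d_model: int = 512, rope=None):
        super().__init__()
        self.attention = attention              # assumed to consume/update the KV cache
        self.feed_forward = feed_forward_layer  # FeedForward (GPT2) or SwiGLU (LLaMA)
        self.norm1 = norm_layer(d_model)        # LayerNorm (GPT2) or RMSNorm (LLaMA)
        self.norm2 = norm_layer(d_model)
        self.rope = rope                        # applied inside attention when provided

    def forward(self, x: torch.Tensor, cache=None):
        h, cache = self.attention(self.norm1(x), cache=cache)  # pre-LN + residual (interface assumed)
        x = x + h
        x = x + self.feed_forward(self.norm2(x))
        return x, cache
```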
- Removed duplicate HeadAttention and MultiHeadAttention implementations from llama.py
- Now importing MultiHeadAttention from core module
- Added RoPE support parameter to core HeadAttention constructor
- Kept LLaMA-specific CachedDecoder implementation (uses SwiGLU and RMSNorm)
- Core CachedDecoder uses different components (FeedForward and LayerNorm)
- Improved code reuse for attention components while maintaining LLaMA-specific decoder
This is a partial refactor - attention components are now shared, but decoder remains LLaMA-specific due to different normalization and activation requirements.
- Moved GELU, RMSNorm, RoPE, SiLU, and SwiGLU implementations from llama.py to dedicated files in core/
- Updated feed_forward.py to use new modular components
- Modified llama.py to import components from core modules instead of local definitions
- Improved code organization and reusability of activation functions and normalization layers
This refactor enables better code reuse across different model architectures and follows the single responsibility principle.
- Added LLaMA model architecture with RMSNorm and SwiGLU activation
- Implemented Rotary Positional Embeddings (RoPE) for better positional encoding
- Created training script for LLaMA with BPE tokenizer
- Fixed matplotlib dependency version in uv.lock
- Added LLaMA module initialization
The implementation includes:
- TokenEmbeddings, HeadAttention, MultiHeadAttention with RoPE support
- RMSNorm normalization layer
- SwiGLU feed-forward activation
- Cached decoder implementation for efficient generation
- Implement Rotary Positional Embeddings (RoPE) with separate cosine/sine components
- Add vectorized computation of inverse frequencies for RoPE
- Include tensor slicing utilities for even/odd column separation
- Update dependencies in pyproject.toml and uv.lock
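Sketched briefly, the precomputation this commit describes: vectorized inverse frequencies 1/θ^(2i/d) and even/odd column slicing. The base θ = 10000 follows the RoPE paper; helper names and shapes here are illustrative.

```python
# Vectorized inverse frequencies and even/odd column slicing for RoPE.
import torch

head_dim, max_seq_len, theta = 64, 1024, 10_000.0

inv_freq = 1.0 / theta ** (torch.arange(0, head_dim, 2).float() / head_dim)  # (head_dim/2,)
positions = torch.arange(max_seq_len).float()
angles = torch.outer(positions, inv_freq)                                    # (max_seq_len, head_dim/2)
cos_table, sin_table = angles.cos(), angles.sin()                            # separate cos/sin components

x = torch.randn(2, 8, 16, head_dim)
x_even, x_odd = x[..., 0::2], x[..., 1::2]                                   # even/odd column separation
print(cos_table.shape, x_even.shape)  # torch.Size([1024, 32]) torch.Size([2, 8, 16, 32])
```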