# KrystalineX Anomaly Analyzer
A fine-tuned language model for analyzing performance anomalies in distributed crypto exchange systems. Built for the KrystalineX observability platform.
## Model Details
| Property | Value |
|---|---|
| Base Model | meta-llama/Llama-3.2-1B-Instruct |
| Method | LoRA (Low-Rank Adaptation) |
| Trainable Parameters | 1.56M / 1.24B (0.13%) |
| Training Framework | Axolotl |
| Precision | BF16 with 8-bit quantized base |
| License | Apache 2.0 |
## Intended Use
This model analyzes OpenTelemetry trace data and correlated system metrics to identify root causes of performance anomalies in microservice architectures. Given an anomaly report containing span attributes, latency deviations, and system metrics (CPU, memory, error rates), the model produces:
- Summary of the likely cause
- Root causes with reasoning based on actual metric values
- Actionable recommendations for remediation
- Confidence level assessment
### Example Input

```
Analyze this performance anomaly:
- Service: kx-exchange
- Operation: pg-pool.connect
- Duration: 286.94ms (expected: 0.44ms ± 10.66ms)
- Deviation: 26.88σ
- CPU Usage: 0.5%, Memory: 142MB, Error Rate: 0.0%
```
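The deviation figure is a standard z-score over the operation's latency baseline. A quick sanity check, using the values from the example above:

```python
# Deviation in standard deviations (z-score) for the example anomaly.
# Values are taken from the example input above.
duration_ms = 286.94   # observed latency
expected_ms = 0.44     # baseline mean
std_ms = 10.66         # baseline standard deviation

deviation = (duration_ms - expected_ms) / std_ms
print(f"{deviation:.2f}σ")  # 26.88σ, matching the reported deviation
```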
### Example Output

```
SUMMARY: The pg-pool.connect operation experienced extreme latency due to
connection pool exhaustion requiring a new TCP connection to PostgreSQL.
CAUSES:
- Connection pool was empty, forcing a new connection establishment
- TCP connect span of 264ms confirms network-level connection setup
- Idle timeout (30s) likely evicted pooled connections
RECOMMENDATIONS:
- Increase minimum pool size to maintain warm connections
- Reduce idle timeout or implement connection keepalive
- Add connection pool metrics to monitoring
CONFIDENCE: high
```
## Training Details

### Dataset
222 training examples (22 real + 200 synthetic) of anomaly analysis from a production crypto exchange platform:
- 22 expert-curated examples from real OpenTelemetry traces, including hallucination corrections (e.g., teaching the model NOT to cite "high CPU usage" when CPU is at 0.5%)
- 200 synthetic examples generated across 15 anomaly scenario templates (connection pool exhaustion, cold start, query lock contention, DNS cache miss, network jitter, GC pause, message queue backlog, cascading timeout, retry storm, etc.)
- ~40% dismissal training: examples explicitly teaching the model to dismiss low metrics as irrelevant rather than hallucinating problems
- Mixed prompt formats: both short-form (`Analyze anomaly: service:operation`) and detailed structured prompts with full span attributes and metrics
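The two prompt formats can be sketched roughly as follows. The field names and helper functions are illustrative, not the exact training pipeline:

```python
def short_prompt(anomaly: dict) -> str:
    # Short-form: "Analyze anomaly: service:operation"
    return f"Analyze anomaly: {anomaly['service']}:{anomaly['operation']}"

def detailed_prompt(anomaly: dict) -> str:
    # Structured form with span attributes and correlated system metrics.
    return (
        "Analyze this performance anomaly:\n"
        f"- Service: {anomaly['service']}\n"
        f"- Operation: {anomaly['operation']}\n"
        f"- Duration: {anomaly['duration_ms']}ms "
        f"(expected: {anomaly['expected_ms']}ms ± {anomaly['std_ms']}ms)\n"
        f"- CPU Usage: {anomaly['cpu_pct']}%, "
        f"Memory: {anomaly['mem_mb']}MB, Error Rate: {anomaly['error_rate']}%"
    )

anomaly = {
    "service": "kx-exchange", "operation": "pg-pool.connect",
    "duration_ms": 286.94, "expected_ms": 0.44, "std_ms": 10.66,
    "cpu_pct": 0.5, "mem_mb": 142, "error_rate": 0.0,
}
print(short_prompt(anomaly))
```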
### LoRA Configuration
| Parameter | Value |
|---|---|
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
### Training Hyperparameters
| Parameter | Value |
|---|---|
| Epochs | 3 |
| Learning Rate | 2e-4 |
| Scheduler | Cosine |
| Warmup Ratio | 0.1 |
| Batch Size | 2 (micro) × 4 (grad accum) = 8 effective |
| Optimizer | AdamW |
| Sequence Length | 2048 |
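Together, the two tables above correspond to an Axolotl config along these lines. This is a sketch, not the exact file used for this run; fields such as the optimizer name and quantization flags are assumptions:

```yaml
base_model: meta-llama/Llama-3.2-1B-Instruct
load_in_8bit: true               # 8-bit quantized base
bf16: true

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

num_epochs: 3
learning_rate: 2e-4
lr_scheduler: cosine
warmup_ratio: 0.1
micro_batch_size: 2
gradient_accumulation_steps: 4   # 2 × 4 = 8 effective
optimizer: adamw_torch
sequence_len: 2048
```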
### Training Results (v2, 222 examples)
| Epoch | Loss | Learning Rate |
|---|---|---|
| 0.36 | 2.5158 | 1.83e-4 |
| 1.07 | 1.6355 | 1.00e-4 |
| 1.79 | 0.5071 | 2.66e-5 |
| 2.50 | 0.3211 | 4.88e-6 |
| 3.0 | 0.2422 | 0 |
- Final Training Loss: 0.24 (down from 2.41 on 22 examples)
- Training Time: ~56 minutes (84 steps) on NVIDIA Turing GPU (sm_75)
- VRAM Usage: ~1.9GB training, ~4.4GB cache
- Throughput: 0.2 samples/s, 0.025 steps/s
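The 84-step figure follows directly from the dataset size, the effective batch size, and the epoch count:

```python
import math

examples = 222            # training examples
effective_batch = 2 * 4   # micro batch × gradient accumulation
epochs = 3

steps_per_epoch = math.ceil(examples / effective_batch)  # 28
total_steps = steps_per_epoch * epochs
print(total_steps)  # 84, matching the reported run
```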
## Usage

### With Ollama
```shell
ollama run anomaly-analyzer "Analyze: service latency 500ms, expected 50ms, CPU 0.1%"
```
### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("XavierThibaudon/anomaly-analyzer")
tokenizer = AutoTokenizer.from_pretrained("XavierThibaudon/anomaly-analyzer")

prompt = "Analyze anomaly: kx-exchange GET 500ms, expected 50ms, CPU 0.1%"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
- Trained on 222 examples (22 real + 200 synthetic); results continue to improve with more real-world data
- Optimized for the KrystalineX platform's specific service topology (kx-exchange, kx-wallet, api-gateway, order-matcher)
- Best results when prompts include correlated system metrics alongside trace data
- Small 1B model may not always follow strict output formatting; the parser handles free-form responses gracefully
- May hallucinate metric interpretations for scenarios not represented in training data
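The parser mentioned above is not included in this repo; a minimal sketch of a tolerant parser for the SUMMARY/CAUSES/RECOMMENDATIONS/CONFIDENCE layout follows. The section names come from the example output; everything else is illustrative:

```python
import re

SECTIONS = ("SUMMARY", "CAUSES", "RECOMMENDATIONS", "CONFIDENCE")

def parse_analysis(text: str) -> dict:
    """Split a free-form model response into labeled sections.

    Tolerant of sloppy formatting: text before the first header is
    attributed to SUMMARY, and missing sections yield empty strings.
    """
    result = {name: "" for name in SECTIONS}
    current = "SUMMARY"
    for line in text.splitlines():
        header = re.match(
            r"^(SUMMARY|CAUSES|RECOMMENDATIONS|CONFIDENCE)\s*:\s*(.*)$",
            line.strip(),
        )
        if header:
            current = header.group(1)
            result[current] = header.group(2)
        else:
            # Continuation line: append to the current section.
            result[current] = (result[current] + "\n" + line).strip()
    return result
```

For instance, `parse_analysis("SUMMARY: pool exhausted\nCONFIDENCE: high")` yields `"high"` under the `"CONFIDENCE"` key even when other sections are absent.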
## Citation

```bibtex
@misc{krystalinex-anomaly-analyzer,
  title={KrystalineX Anomaly Analyzer},
  author={Xavier Thibaudon},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/XavierThibaudon/anomaly-analyzer}
}
```