---
license: gpl-2.0
---
# Collinear Scaling Models
Checkpoint repository for scaling law experiments comparing collinear (CO) and non-collinear (NC) experimental designs.
## Directory Structure

```
{dataset}/{design}/N_{param_count}/
```

- **Dataset:** `wikipedia`, `pes2o`, `cosmopedia`, `redpajama`, `c4` (plus `_fp16` and `_bigtpp` variants)
- **Design:** `colinear` or `non_colinear`
- **N:** model parameter count (one of 14 canonical sizes from ~5M to ~70M)
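As a minimal illustration of the layout, a helper that assembles the relative path of a checkpoint directory. This is a sketch under assumptions: it assumes `N_{param_count}` uses the raw integer count, so verify the formatting against the actual repository tree before relying on it.

```python
# Sketch: build the relative path {dataset}/{design}/N_{param_count}/.
# ASSUMPTION: N_{param_count} is the plain integer parameter count.

DESIGNS = ("colinear", "non_colinear")

def checkpoint_dir(dataset: str, design: str, param_count: int) -> str:
    """Return the relative directory path for one checkpoint family."""
    if design not in DESIGNS:
        raise ValueError(f"design must be one of {DESIGNS}, got {design!r}")
    return f"{dataset}/{design}/N_{param_count}/"

print(checkpoint_dir("wikipedia", "colinear", 5_000_000))
```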
## Experimental Designs
**Collinear (CO):** models are trained along a line in (N, D) space, where D = TPP × N for varying TPP (tokens-per-parameter) values; a single model size N is swept across many TPP values.

**Non-collinear (NC):** models are trained on a grid over (N, D) space (`NxD_GRID`), varying both N and D independently.
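The contrast between the two designs can be sketched as follows. The specific TPP values and the N/D grids below are illustrative placeholders, not the settings used in the actual experiments.

```python
# Sketch of the two experimental designs over (N, D) space.
# ASSUMPTION: the TPP list and N/D grids here are made-up examples.

def collinear_points(n: int, tpps: list[int]) -> list[tuple[int, int]]:
    """CO: fix a model size N and sweep tokens-per-parameter, so D = TPP * N."""
    return [(n, tpp * n) for tpp in tpps]

def grid_points(ns: list[int], ds: list[int]) -> list[tuple[int, int]]:
    """NC: take every (N, D) combination from independent N and D lists."""
    return [(n, d) for n in ns for d in ds]

co = collinear_points(5_000_000, tpps=[5, 10, 20])        # 3 points on one line
nc = grid_points([5_000_000, 70_000_000],
                 [1_000_000_000, 2_000_000_000])          # 2x2 grid, 4 points
```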
## Holdout Sets

Some checkpoints include `HOLDOUT` in the filename. These were held out from scaling law fitting and are used to evaluate the extrapolation and interpolation accuracy of the fitted scaling laws. Both the CO and NC designs have holdout checkpoints:

- `COLINEAR_HOLDOUT_*` → collinear holdout (held-out TPP values)
- `*_HOLDOUT_*` (without `COLINEAR`) → non-collinear holdout (held-out (N, D) pairs)
## Filename Convention

```
{PREFIX}{DESIGN}N{approx_size}[TPP{val}]D{tokens}_{dataset}_m{exact_N}_token{exact_D}lr{lr}..._completedAt{timestamp}.pt
```

The `m{exact_N}` and `token{exact_D}` fields contain the exact parameter count and token count used for training.
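Those two fields can be pulled back out of a filename with a small regex. This is a sketch that matches only the `_m{exact_N}_token{exact_D}` portion and ignores the rest of the name; the sample filename below is invented.

```python
import re

# Sketch: extract the exact parameter count (m{exact_N}) and token count
# (token{exact_D}) from a checkpoint filename. ASSUMPTION: the sample
# filename is made up and only these two fields are parsed.

_COUNTS = re.compile(r"_m(\d+)_token(\d+)")

def parse_counts(filename: str) -> tuple[int, int]:
    match = _COUNTS.search(filename)
    if match is None:
        raise ValueError(f"no _m/_token fields in {filename!r}")
    return int(match.group(1)), int(match.group(2))

n, d = parse_counts(
    "COLINEAR_N5M_TPP20_D100M_wikipedia_m4982720_token99654400lr0.001_completedAt2024-01-01.pt"
)
```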