Upload folder using huggingface_hub

- README.md +13 -51
- config.json +1 -1
- mergekit_config.yml +1 -1
- model-00427-of-00481.safetensors +3 -0
- model-00433-of-00481.safetensors +3 -0
- tokenizer_config.json +1 -1
README.md CHANGED
@@ -1,39 +1,29 @@
 ---
 base_model:
-
+- mlabonne/BigLlama-3.1-681B-Instruct
 library_name: transformers
 tags:
 - mergekit
 - merge
----
-
-# 🦙✨ BigLlama-3.1-1T-Instruct
-
-![image]
-
-<center>🦙⛰️ <i><a href="https://huggingface.co/mlabonne/BigLlama-3.1-681B-Instruct">mlabonne/BigLlama-3.1-681B-Instruct</a></i></center>
-
-This is the direct successor of [Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct), a self-merge of Llama 3 70B that produced a decent 120B model for tasks like creative writing.
-
-I tweaked the range of duplicated layers to hopefully make a sensible model. Use it at your own risk!
-
-## 🔍 Applications
-
-##
-
-##
-
-##
-
+
+---
+# BigLlama-3.1-1T-Instruct
+
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+## Merge Details
+### Merge Method
+
+This model was merged using the passthrough merge method.
+
+### Models Merged
+
+The following models were included in the merge:
+* [mlabonne/BigLlama-3.1-681B-Instruct](https://huggingface.co/mlabonne/BigLlama-3.1-681B-Instruct)
+
+### Configuration
+
+The following YAML configuration was used to produce this model:
+
 ```yaml
 slices:
@@ -48,33 +38,5 @@
     model: mlabonne/BigLlama-3.1-681B-Instruct
 merge_method: passthrough
 dtype: bfloat16
-```
-
-Here is the code I've used to generate the config and calculate the number of layers/parameters after passthrough:
-
-```python
-def generate_yaml_config(range_size, total_layers, nb_parameters):
-    new_size = total_layers + total_layers - range_size
-    new_param = (nb_parameters / total_layers) * new_size
-    print(f"New size = {new_size} layers")
-    print(f"New parameters = {new_param:.2f}B")
-    yaml_str = "slices:\n"
-
-    for i in range(0, round(total_layers - range_size + 1), range_size // 2):
-        start = i
-        end = min(start + range_size, total_layers)
-        yaml_str += f"- sources:\n"
-        yaml_str += f"  - layer_range: [{start}, {end}]\n"
-        yaml_str += f"    model: meta-llama/Meta-Llama-3.1-405B-Instruct\n"
-
-    yaml_str += "dtype: bfloat16\n"
-
-    print(yaml_str)
-
-    return new_size, new_param
-
-# Example usage
-new_size, new_param = generate_yaml_config(42, 126, 410)
-new_size, new_param = generate_yaml_config(105, new_size, new_param)
-```
+```
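For reference, the arithmetic in the removed example works out as follows. Llama 3.1 405B has 126 layers and roughly 410B parameters, so `generate_yaml_config(42, 126, 410)` gives 126 + 126 - 42 = 210 layers and about 683B parameters by this estimate (released as the 681B intermediate model), and feeding that result into the second call with a 105-layer window gives 210 + 210 - 105 = 315 layers and about 1,025B parameters, hence the ~1T name. That second pass emits three overlapping 105-layer slices, [0, 105], [52, 157] and [104, 209]; the last of these is the slice visible in the mergekit_config.yml hunk further down.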
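Since the card keeps `library_name: transformers` and config.json records `transformers_version` 4.44.0, the merged checkpoint should load through the standard `transformers` API. Below is a minimal sketch, not an official usage snippet, assuming hardware with enough aggregate memory for a ~1T-parameter bfloat16 model (on the order of 2 TB of weights, so in practice heavy sharding or offloading):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/BigLlama-3.1-1T-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" lets accelerate spread the 481 bfloat16 shards across
# whatever GPUs, CPU RAM, and disk offload space are available.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```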
config.json CHANGED
@@ -33,7 +33,7 @@
   "rope_theta": 500000.0,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.
+  "transformers_version": "4.44.0",
   "use_cache": true,
   "vocab_size": 128256
 }
mergekit_config.yml CHANGED
@@ -9,4 +9,4 @@ slices:
   - layer_range: [104, 209]
     model: mlabonne/BigLlama-3.1-681B-Instruct
 merge_method: passthrough
-dtype: bfloat16
+dtype: bfloat16
model-00427-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7a93419dcc4485d05ba631b082c8a5c9ecd819ac1beb63db5b841ebc1137674
+size 4697687008
model-00433-of-00481.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5decda5f7406f249ad0489d85729d204f66ad75ccdcb9f5f23ec3b5b88f0862
+size 4697687008
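The two new shards are tracked with Git LFS, so the diff only records pointer files: the LFS spec version, each shard's SHA-256 (`oid`), and its size in bytes (about 4.7 GB apiece). As a small sketch, assuming a shard has already been downloaded next to the script under its original filename, it can be checked against the recorded oid:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# oid recorded in the pointer file for model-00427-of-00481.safetensors above
expected = "a7a93419dcc4485d05ba631b082c8a5c9ecd819ac1beb63db5b841ebc1137674"
print(sha256_of("model-00427-of-00481.safetensors") == expected)
```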
tokenizer_config.json CHANGED
@@ -2059,4 +2059,4 @@
   ],
   "model_max_length": 131072,
   "tokenizer_class": "PreTrainedTokenizerFast"
-}
+}