
Guitar Single-Note Recordings

A dataset of 390 single-note guitar recordings spanning 6 strings and frets 0-12, recorded by two players on acoustic and electric guitars.

Dataset Summary

This dataset contains isolated single-note recordings from a standard-tuned guitar. Each recording captures one note played on a specific string and fret combination, covering frets 0-12 on all 6 strings (78 recordings per source, spanning 37 unique pitches, since positions overlap across strings). The recordings are raw, unprocessed 44100 Hz / 32-bit float WAV files suitable for training note classification and pitch detection models.

Five recording sources from two players provide variety in playing style and instrument timbre. Player 1 (deb) recorded on an acoustic guitar. Player 2 recorded on both acoustic (eqm, eqm2) and electric guitar (ele, ele_natural), giving the dataset a range of tonal characteristics from warm acoustic to clean electric.

The labeling scheme derives entirely from the filename convention {source}_{string}_{fret}.wav. Each sample carries 12 metadata columns including MIDI number, frequency (Hz), pitch class, octave, and string/fret coordinates, making it straightforward to set up classification or regression tasks without any manual annotation.
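
As a sketch of how that convention can be decoded, the hypothetical helper below (not shipped with the dataset) recovers the labels from a filename; note that both the source (`ele_natural`) and the string name (`low_E`, `high_E`) contain underscores, so a naive `split("_")` would break:

```python
# Hypothetical helper (not part of the dataset): recover labels from a
# "{source}_{string}_{fret}.wav" filename. The fret is peeled off the end,
# then the string name is matched against the known set of six names so
# that underscores inside source/string names are handled correctly.
STRING_NAMES = ["low_E", "high_E", "A", "D", "G", "B"]

def parse_filename(name: str) -> dict:
    stem = name.removesuffix(".wav")
    rest, fret = stem.rsplit("_", 1)
    for string_name in STRING_NAMES:
        if rest.endswith("_" + string_name):
            source = rest[: -len(string_name) - 1]
            return {"source": source, "string_name": string_name, "fret": int(fret)}
    raise ValueError(f"unrecognized filename: {name!r}")

parse_filename("ele_natural_low_E_0.wav")
# -> {'source': 'ele_natural', 'string_name': 'low_E', 'fret': 0}
```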

Quick Start

```python
from datasets import load_dataset

ds = load_dataset("collegefishiesd/guitar-fretboard-notes")
```

Dataset Structure

Splits

| Split | Samples | Sources |
|---|---|---|
| train | 234 | ele, eqm, eqm2 |
| test | 78 | deb |
| validation | 78 | ele_natural |
| Total | 390 | |

Splits are assigned by recording source rather than randomly, to prevent data leakage. The train set uses Player 2's acoustic and electric recordings, the test set uses Player 1's acoustic recordings, and the validation set uses Player 2's electric guitar with a natural (clean) tone.
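
Because the guarantee is structural, it is easy to verify. A minimal sketch using the split-to-source assignment documented above (with a loaded dataset, the literal sets could instead be built with `set(ds[split]["source"])`):

```python
from itertools import combinations

# Split -> recording sources, as documented in the split table above.
split_sources = {
    "train": {"ele", "eqm", "eqm2"},
    "test": {"deb"},
    "validation": {"ele_natural"},
}

# No source appears in more than one split, so no player/session leaks
# between train, test, and validation.
for a, b in combinations(split_sources, 2):
    assert split_sources[a].isdisjoint(split_sources[b]), (a, b)
```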

Columns

| Column | Type | Description |
|---|---|---|
| `audio` | Audio | WAV audio (44100 Hz, 32-bit float, mono) |
| `source` | string | Recording source identifier (deb, eqm, eqm2, ele, ele_natural) |
| `guitar_type` | string | Instrument type (acoustic or electric) |
| `player_id` | int64 | Player identifier (1 or 2) |
| `string_name` | string | Guitar string name (low_E, A, D, G, B, high_E) |
| `string_number` | int64 | String number (1 = high E through 6 = low E) |
| `fret` | int64 | Fret number (0 = open through 12) |
| `note_name` | string | Scientific pitch notation (e.g., E2, A4) |
| `midi_number` | int64 | MIDI note number (40-76) |
| `frequency` | float64 | Fundamental frequency in Hz (82.41-659.26) |
| `pitch_class` | string | Note name without octave (e.g., E, A, C#) |
| `octave` | int64 | Octave number (2-5) |
| `duration` | float64 | Recording duration in seconds |
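
The pitch columns are fully determined by `string_number` and `fret`: in standard tuning each open string has a fixed MIDI number, and each fret raises the pitch by one semitone. A small sketch of that relation (the open-string values below come from standard tuning, not from the dataset itself):

```python
# Open-string MIDI numbers in standard tuning, keyed by string_number
# (1 = high E ... 6 = low E).
OPEN_STRING_MIDI = {1: 64, 2: 59, 3: 55, 4: 50, 5: 45, 6: 40}

def midi_number(string_number: int, fret: int) -> int:
    # Each fret raises the pitch by one semitone.
    return OPEN_STRING_MIDI[string_number] + fret

midi_number(6, 0)   # low E, open       -> 40 (E2)
midi_number(1, 12)  # high E, 12th fret -> 76 (E5)
```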

Statistics


Source x String Recording Counts

Each cell shows the number of recordings for that source and string combination. Every source covers frets 0-12 on each string (13 recordings per cell).

| Source | low_E | A | D | G | B | high_E | Total |
|---|---|---|---|---|---|---|---|
| deb | 13 | 13 | 13 | 13 | 13 | 13 | 78 |
| ele | 13 | 13 | 13 | 13 | 13 | 13 | 78 |
| ele_natural | 13 | 13 | 13 | 13 | 13 | 13 | 78 |
| eqm | 13 | 13 | 13 | 13 | 13 | 13 | 78 |
| eqm2 | 13 | 13 | 13 | 13 | 13 | 13 | 78 |

Visualizations

Representative waveforms from across the pitch range (all from deb source, acoustic guitar):

Low E Open (E2, 82 Hz)

Waveform: Low E Open

A String Fret 5 (D3, 147 Hz)

Waveform: A String Fret 5

High E Fret 12 (E5, 659 Hz)

Waveform: High E Fret 12

Usage Examples

Load the Dataset

```python
from datasets import load_dataset

ds = load_dataset("collegefishiesd/guitar-fretboard-notes")
print(ds)
# DatasetDict({
#     train: Dataset({...features...num_rows: 234})
#     test: Dataset({...features...num_rows: 78})
#     validation: Dataset({...features...num_rows: 78})
# })
```

Filter by String and Fret

```python
# Get all open-string recordings from the test set
open_strings = ds["test"].filter(lambda x: x["fret"] == 0)
print(f"Open string samples: {len(open_strings)}")

# Get all A-string recordings from the train set
a_string = ds["train"].filter(lambda x: x["string_name"] == "A")
print(f"A string training samples: {len(a_string)}")
```

Basic Preprocessing

```python
import torch
import torchaudio

def preprocess(example):
    audio = example["audio"]
    waveform = torch.tensor(audio["array"], dtype=torch.float32).unsqueeze(0)
    sr = audio["sampling_rate"]

    # Resample to 16 kHz if needed
    if sr != 16000:
        resampler = torchaudio.transforms.Resample(sr, 16000)
        waveform = resampler(waveform)

    # Extract a mel spectrogram
    mel_transform = torchaudio.transforms.MelSpectrogram(
        sample_rate=16000, n_mels=64, n_fft=1024, hop_length=512
    )
    mel_spec = mel_transform(waveform)

    # Log scale for numerical stability
    log_mel = torch.log(mel_spec + 1e-9)
    return {"mel_spectrogram": log_mel, "label": example["midi_number"]}
```

Minimal Training Loop

```python
import torch
import torch.nn as nn
import torchaudio
from torch.utils.data import DataLoader, Dataset

class NoteDataset(Dataset):
    def __init__(self, hf_split, n_fft=1024):
        self.data = hf_split
        self.mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=44100, n_mels=64, n_fft=n_fft, hop_length=512
        )

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data[idx]
        wav = torch.tensor(row["audio"]["array"], dtype=torch.float32)
        spec = self.mel(wav.unsqueeze(0)).squeeze(0)  # (n_mels, time)
        label = row["midi_number"] - 40  # Shift to 0-based (MIDI 40-76 -> 0-36)
        return spec.mean(dim=-1), label  # Average over time -> (n_mels,)

train_ds = NoteDataset(ds["train"])
loader = DataLoader(train_ds, batch_size=16, shuffle=True)

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 37))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    total_loss = 0
    for features, labels in loader:
        logits = model(features)
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch + 1}: loss={total_loss / len(loader):.4f}")
```
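
A matching evaluation helper, sketched under the same assumptions as the loop above (batches of `(features, labels)`); `evaluate` is a hypothetical name, not part of any library:

```python
import torch

def evaluate(model, loader):
    """Top-1 accuracy of `model` over a DataLoader of (features, labels) batches."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for features, labels in loader:
            preds = model(features).argmax(dim=-1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```

After training, something like `evaluate(model, DataLoader(NoteDataset(ds["test"]), batch_size=16))` would report held-out accuracy on Player 1's recordings.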

Intended Uses

  • Note classification: Train models to identify which of the 37 unique notes (MIDI 40-76) is being played
  • Pitch detection: Build frequency estimation models using the known fundamental frequencies as ground truth
  • Audio ML benchmarks: Use as a small, well-labeled audio classification benchmark for quick experimentation
  • String/fret prediction: Train models that predict not just the note but the specific string and fret position (useful for guitar tablature generation)
  • Transfer learning: Fine-tune pretrained audio models on a domain-specific guitar task
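
For the pitch-detection use case, note that the `frequency` column follows twelve-tone equal temperament with A4 = 440 Hz, so the ground-truth frequency can be recomputed from `midi_number` alone:

```python
def midi_to_hz(midi: int) -> float:
    # Twelve-tone equal temperament, A4 (MIDI 69) = 440 Hz.
    return 440.0 * 2 ** ((midi - 69) / 12)

midi_to_hz(40)  # low E, open       -> ~82.41 Hz
midi_to_hz(76)  # high E, 12th fret -> ~659.26 Hz
```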

Limitations

  • Limited fret range: Only frets 0-12 are covered. Notes above the 12th fret (which exist on most guitars up to fret 19-24) are absent.
  • Two players only: Recordings come from just two people. Models trained on this data may not generalize well to other playing styles, finger techniques, or pick types.
  • Mostly acoustic: Three of five sources are acoustic guitar. Electric guitar representation is limited to one player.
  • Single notes only: No chords, arpeggios, hammer-ons, pull-offs, bends, or other techniques. Each recording is a single cleanly-played note.
  • Recording environment variation: Sources were recorded in different rooms and setups. While this adds some natural variation, it also means acoustic conditions are not controlled.
  • No noise augmentation: The recordings are clean studio-ish takes, not noisy real-world captures. Models may need augmentation for deployment in noisy environments.
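
One simple mitigation for the last point is additive-noise augmentation. A minimal sketch, assuming waveforms arrive as NumPy float arrays (e.g. the `audio["array"]` field); `add_noise` is a hypothetical helper and the target SNR is in dB:

```python
import numpy as np

def add_noise(waveform: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Mix white Gaussian noise into `waveform` at the target SNR (in dB)."""
    rng = rng if rng is not None else np.random.default_rng()
    signal_power = float(np.mean(waveform ** 2))
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return (waveform + noise).astype(waveform.dtype)
```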

Contributors and Recording Details

| Source | Player | Guitar | Notes |
|---|---|---|---|
| deb | Player 1 | Acoustic guitar | 78 recordings (6 strings x 13 frets) |
| eqm | Player 2 | Acoustic guitar | 78 recordings (6 strings x 13 frets) |
| eqm2 | Player 2 | Acoustic guitar | 78 recordings (second session) |
| ele | Player 2 | Electric guitar | 78 recordings (6 strings x 13 frets) |
| ele_natural | Player 2 | Electric guitar | 78 recordings (natural/clean tone) |

All recordings use standard guitar tuning (E-A-D-G-B-E), 44100 Hz sample rate, 32-bit float WAV format.

License

This dataset is released under the Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA-4.0).

You are free to share and adapt the data for any purpose, including commercial use, as long as you give appropriate credit and distribute any derivative work under the same license.

Citation

```bibtex
@dataset{guitar_fretboard_notes_2026,
  title={Guitar Single-Note Recordings},
  author={collegefishiesd},
  year={2026},
  url={https://huggingface.co/datasets/collegefishiesd/guitar-fretboard-notes},
  license={CC-BY-SA-4.0},
  note={390 single-note guitar recordings, 6 strings, frets 0-12, 44100 Hz}
}
```