PyTorch Dataset and DataLoader usage
dataset & dataloader
Reference: the official PyTorch documentation: https://pytorch.org/docs/stable/data.html
DataLoader
TODO: the sampler argument
I originally meant to write up how to use this argument, but it turns out I almost never need it.
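Still, a minimal sketch of one typical use of the sampler argument may be useful: oversampling rare classes with WeightedRandomSampler. This is only an illustration; it assumes labels is a 1-D LongTensor of class indices and dataset is any map-style dataset, and shuffle must be left at False when a sampler is passed.

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

class_counts = torch.bincount(labels)                 # samples per class
sample_weights = 1.0 / class_counts[labels].float()  # rarer classes get larger weights
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)  # shuffle stays at its default False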
pin_memory
For a detailed explanation of pin_memory and non_blocking in PyTorch, see https://zhuanlan.zhihu.com/p/477870660
Moving data onto CUDA
pin_memory (bool, optional) – If True, the data loader will copy Tensors into device/CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the example below.
If the batch elements are a custom type, the pin_memory argument will not take effect on them by itself, but you can call tensor.pin_memory() yourself (the SimpleCustomBatch example further down does exactly this).
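A tiny sketch of pinning a tensor by hand (the tensor here is only illustrative):

t = torch.randn(8, 4)
t = t.pin_memory()      # returns a copy of the tensor placed in page-locked memory
print(t.is_pinned())    # True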
Page locking (a pinned page) is a standard operating-system mechanism: it lets hardware peripherals access CPU memory directly, avoiding unnecessary extra copies.
According to the article linked above: memory is either pageable (unpinned) or pinned. Copies between pinned memory and GPU memory run at roughly 6 GB/s, while copies between pageable memory and GPU memory run at roughly 3 GB/s; GPU-internal memory bandwidth is about 30 GB/s and CPU-internal memory bandwidth about 10 GB/s.
The GPU can only read data that sits in pinned memory, yet host (CPU) allocations are pageable by default. To process pageable data, it first has to be copied into a temporary pinned buffer before the GPU can read it. Setting pin_memory=True puts the batch in pinned memory from the start, so that extra host-side copy and its overhead are avoided.
train_sampler = None
train_loader = torch.utils.data.DataLoader(train_dataset, ..., pin_memory=True)

for data, labels in train_loader:
    # non_blocking=True only makes the host-to-device copy asynchronous
    # when the source tensor is in pinned memory
    data = data.to('cuda:0', non_blocking=True)
collate_fn
If you do not pass collate_fn, automatic batching uses the default collate_fn: it converts NumPy arrays into PyTorch tensors, stacks the samples of a batch along a new first dimension, and leaves everything else untouched. Dict samples are handled as well; each value in the dict is collated separately.
When you want to customize how a batch is assembled, pass your own collate_fn to the DataLoader.
The input to collate_fn is a list of length batch_size whose elements are individual samples.
def collate_fn_func(data):
    feature, label = zip(*data)
    return torch.stack(feature, dim=0), torch.stack(label, dim=0)
Whatever collate_fn returns is exactly what the DataLoader iterator yields.
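A common custom operation is padding variable-length samples inside collate_fn. A minimal sketch, assuming each sample is a (sequence_tensor, label) pair where the sequences have different lengths:

import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    seqs, labels = zip(*batch)
    lengths = torch.tensor([len(s) for s in seqs])
    padded = pad_sequence(list(seqs), batch_first=True, padding_value=0)  # (batch, max_len, ...)
    return padded, lengths, torch.tensor(labels)

# loader = DataLoader(dataset, batch_size=32, collate_fn=pad_collate)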
Dataset
Splitting into training and validation sets
torch.utils.data.random_split
trainlen = int(0.9 * len(dataset))
lengths = [trainlen, len(dataset) - trainlen]
trainset, validset = random_split(dataset, lengths)

# simplified form: newer PyTorch versions also accept fractions directly
trainset, validset = random_split(dataset, [0.7, 0.3])
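If you want the split to be reproducible, random_split also accepts a generator; a small sketch:

g = torch.Generator().manual_seed(42)
trainset, validset = random_split(dataset, [0.7, 0.3], generator=g)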
In practice, though, what you use most of the time is a custom Dataset class (a full example is in the BERT section at the end).
TensorDataset
class torch.utils.data.TensorDataset(*tensors)
Dataset wrapping tensors. Each sample will be retrieved by indexing tensors along the first dimension.
Parameters: *tensors – tensors that have the same size of the first dimension.
Sometimes, when your data is already a set of aligned tensors or lists, TensorDataset is quite convenient. The snippet below is copied from the official demo; see the official documentation if you are interested. You can ignore the SimpleCustomBatch class and just look at how TensorDataset is used.
import torch
from torch.utils.data import TensorDataset, DataLoader

class SimpleCustomBatch:
    def __init__(self, data):
        transposed_data = list(zip(*data))
        self.inp = torch.stack(transposed_data[0], 0)
        self.tgt = torch.stack(transposed_data[1], 0)

    # custom memory pinning method on a custom type
    def pin_memory(self):
        self.inp = self.inp.pin_memory()
        self.tgt = self.tgt.pin_memory()
        return self

def collate_wrapper(batch):
    return SimpleCustomBatch(batch)

inps = torch.arange(10 * 5, dtype=torch.float32).view(10, 5)
tgts = torch.arange(10 * 5, dtype=torch.float32).view(10, 5)
dataset = TensorDataset(inps, tgts)

loader = DataLoader(dataset, batch_size=2, collate_fn=collate_wrapper,
                    pin_memory=True)

for batch_ndx, sample in enumerate(loader):
    print(sample.inp.is_pinned())
    print(sample.tgt.is_pinned())
A typical example
BERT
An example of preprocessing with a Hugging Face tokenizer. I wrote this code myself, so no guarantee that it is the most efficient way.
import torch
from torch.utils.data import Dataset, DataLoader, random_split

You may need random_split; fill in the train/validation split yourself:
# train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
# tokenizer is assumed to be a Hugging Face tokenizer,
# e.g. tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
def collate_fn(data):
    features, labels = zip(*data)
    encoded = tokenizer(
        list(features),
        add_special_tokens=True,
        max_length=64,
        padding='max_length',   # replaces the deprecated pad_to_max_length=True
        truncation=True,
        return_attention_mask=True,
        return_tensors='pt',
    )
    return encoded, torch.tensor(labels)

class SampleDataset(Dataset):
    def __init__(self, sentences, labels=None, is_train=True):
        self.sentences = sentences
        self.labels = labels
        self.is_train = is_train

    def __len__(self):
        return len(self.sentences)

    def __getitem__(self, idx):
        if not self.is_train:
            return self.sentences[idx]
        return self.sentences[idx], self.labels[idx]
sample_dataset = SampleDataset(sentences, labels)
sample_dataloader = DataLoader(
    sample_dataset,
    batch_size=batch_size,
    shuffle=False,
    collate_fn=collate_fn,
    pin_memory=True,
)

for d, label in sample_dataloader:
    input_id = d['input_ids']
    attention_mask = d['attention_mask']
    print(input_id.shape, attention_mask.shape, label.shape)
    break
# torch.Size([32, 64]) torch.Size([32, 64]) torch.Size([32])