PyTorch: CUDA tries to allocate memory but it is not available

If you see an error similar to this one:

    RuntimeError: CUDA out of memory. Tried to allocate 11.88 MiB (GPU 4; 15.75 GiB total capacity; 10.50 GiB already allocated; 1.88 MiB free; 3.03 GiB cached)

then try reducing the batch size so each training step fits in the free GPU memory:

    train_loader = DataLoader(
        train_dataset,
        shuffle=False,
        pin_memory=False,
        batch_size=2,   # reduced from 16 to fit in GPU memory
        num_workers=12,
    )
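If a smaller batch hurts convergence, one common workaround is gradient accumulation: run several small micro-batches and call `optimizer.step()` once, so the effective batch size stays the same while peak memory drops. Below is a minimal sketch on a hypothetical tiny model (the model, data, and `accum_steps` value are illustrative, not from the original post):

```python
import torch
from torch import nn

# Hypothetical toy model and optimizer, for illustration only.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

accum_steps = 8  # micro-batch of 2 x 8 steps = effective batch of 16

optimizer.zero_grad()
for step in range(accum_steps):
    inputs = torch.randn(2, 10)            # micro-batch of 2 samples
    targets = torch.randint(0, 2, (2,))
    # Scale the loss so the accumulated gradients average over the
    # effective batch instead of summing.
    loss = loss_fn(model(inputs), targets) / accum_steps
    loss.backward()                        # gradients accumulate in .grad
optimizer.step()                           # one update for the whole effective batch
optimizer.zero_grad()
```

Only one micro-batch of activations is alive at a time, which is usually where most of the memory in the traceback above goes.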