Recent comments in /f/deeplearning

International_Deer27 OP t1_j6x0tpy wrote

I've simplified my model a lot so that it only takes 2000x1 tensors as input for X, and the prediction is either 0 or 1 as before. I've built it using nn.Sequential with only a few layers so it's easier to follow:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.utils.data import Dataset, DataLoader
    from sklearn.model_selection import train_test_split
    import numpy as np
    import matplotlib.pyplot as plt

    # Keep only the first element of each df_X_MACE entry and stack into an array
    df_Y_MACE = np.array(df_Y_MACE)
    df_X_MACE1 = []
    for i in range(len(df_X_MACE)):
        df_X_MACE1.append(df_X_MACE[i][0])
    df_X_MACE1 = np.array(df_X_MACE1)

    X = torch.from_numpy(df_X_MACE1).float()
    Y = torch.from_numpy(df_Y_MACE).float()

    # Define the dataset
    class ECGDataset(Dataset):
        def __init__(self, data, labels):
            self.data = data
            self.labels = labels

        def __len__(self):
            return len(self.data)

        def __getitem__(self, idx):
            return self.data[idx], self.labels[idx]

    # Split the data into training and testing sets
    # (note: test_size=0.8 keeps only 20% of the samples for training)
    train_data, test_data, train_labels, test_labels = train_test_split(X, Y, test_size=0.8)

    # Create the datasets and data loaders
    train_dataset = ECGDataset(train_data, train_labels)
    test_dataset = ECGDataset(test_data, test_labels)
    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

    # Define the CNN
    class ECGClassifier(nn.Module):
        def __init__(self):
            super(ECGClassifier, self).__init__()
            # Conv over the 2000-sample signal, pool down to length 193, then collapse that axis
            self.ECG_seq = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=50, stride=5),
                nn.ReLU(),
                nn.MaxPool1d(7, 2),
                nn.Linear(193, 1),
            )
            self.fc = nn.Linear(32, 1)
            self.sigmoid = nn.Sigmoid()

        def forward(self, x):
            x = x.unsqueeze(1)               # (batch, 2000) -> (batch, 1, 2000)
            out = self.ECG_seq(x)            # (batch, 32, 1)
            out = self.fc(out.view(-1, 32))  # (batch, 1)
            out = self.sigmoid(out)
            return out

    # Define the model and move it to the device
    device = torch.device('cpu')
    model = ECGClassifier()
    model = model.to(device)
    model = model.float()

    # Define the loss function and optimizer
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.01)

    total_loss = []

    # Train the model
    for epoch in range(5):
        for i, (data, labels) in enumerate(train_loader):
            data, labels = data.to(device), labels.to(device)

            # Forward pass
            with torch.set_grad_enabled(True):
                outputs = model(data)
                labels = labels.unsqueeze(1)    # (batch,) -> (batch, 1) to match the output shape
                loss = criterion(outputs, labels)
                total_loss.append(loss.item())  # store the scalar so the graph isn't kept alive

                # Backward and optimize
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, 5, loss.item()))
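For reference, here's a quick sanity check of the shapes going through those layers for a hypothetical batch of 4 signals of length 2000 (same layer settings as above), which is where the 193 in the Linear and the view(-1, 32) come from:

    import torch
    import torch.nn as nn

    x = torch.randn(4, 2000)                           # hypothetical batch of 4 ECG signals
    x = x.unsqueeze(1)                                  # (4, 1, 2000)
    x = nn.Conv1d(1, 32, kernel_size=50, stride=5)(x)   # (4, 32, 391): (2000 - 50) // 5 + 1
    x = nn.MaxPool1d(7, 2)(x)                           # (4, 32, 193): (391 - 7) // 2 + 1
    x = nn.Linear(193, 1)(x)                            # (4, 32, 1)
    x = nn.Linear(32, 1)(x.view(-1, 32))                # (4, 1), one logit per signal
    print(x.shape)                                      # torch.Size([4, 1])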

1

redditorhaveatit t1_j6w9mau wrote

Used it and talked to Joseph for a bit. I really like it! Answers were to the point and incorporated stuff we had talked about before. It also knew exactly what I was talking about without me having to provide context.

What's different between this and ChatGPT? Have you got it searching a domain-specific corpus?

1

AwkwardlyPure t1_j6w7x6m wrote

What other Python packages do I need to install? Sometimes I get a warning about not having TensorRT, and when I install it, it shows version 0.0.1 even though it's apparently at version 8? There is a guide on Nvidia's website with some snippets of code, but I don't fully understand it.

I have already installed TensorFlow and Keras.
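This is roughly how I'm checking the version, in case I'm looking at the wrong thing (just the Python bindings from the tensorrt package):

    # print the TensorRT version that Python actually picks up
    import tensorrt
    print(tensorrt.__version__)   # shows 0.0.1 for me instead of 8.x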

1

Some-Assistance-7812 t1_j6w1l5z wrote

Reply to comment by AbCi16 in Using Jupyter via GPU by AbCi16

Yeah, it's CUDA capable! RTX is much better than GTX for deep learning, and the 2080 Super is powerful! I did plenty of deep learning on my laptop with a 1050 Ti using PyTorch. If you need any help, we can connect and I'll help you out on a Zoom call. It won't take more than 20-30 minutes.

2

Appropriate_Ant_4629 t1_j6vmpm8 wrote

Reply to comment by FastestLearner in Using Jupyter via GPU by AbCi16

> Being able to use the GPU doesn’t have anything to do with Jupyter.

It's certainly not required....

.. but Nvidia makes it extremely convenient through the notebooks they provide:

https://catalog.ngc.nvidia.com/resources

>> The NGC catalog offers step-by-step instructions and scripts through Jupyter Notebooks for various use cases, including machine learning, computer vision, and conversational AI. These resources help you examine, understand, customize, test, and build AI faster, while taking advantage of best practices.

4

BellyDancerUrgot t1_j6v9w0p wrote

Reply to comment by FastestLearner in Using Jupyter via GPU by AbCi16

Last I checked, a conda install of tensorflow-gpu didn't pull in the correct CUDA version for some reason, and it was annoying to roll back and then reinstall the correct CUDA and cuDNN versions. PyTorch is fking clean tho.
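If anyone wants to see what their install actually picked up, something like this prints the CUDA/cuDNN versions each framework was built against (assuming TF 2.x for the build-info call and that both frameworks are installed):

    # CUDA / cuDNN versions the frameworks were built against
    import torch
    import tensorflow as tf

    print(torch.version.cuda)                  # e.g. '11.7', or None for a CPU-only build
    print(torch.backends.cudnn.version())      # e.g. 8500, or None
    info = tf.sysconfig.get_build_info()       # TF 2.x
    print(info.get('cuda_version'), info.get('cudnn_version'))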

0

FastestLearner t1_j6v8nsz wrote

Being able to use the GPU doesn't have anything to do with Jupyter. It's the packages (TensorFlow, PyTorch, etc.) that must be installed with CUDA support, and you must also have the correct drivers installed. My recommendation would be to simply use a conda environment, which automatically installs the correct CUDA packages during a PyTorch or TensorFlow install.

8

agentfuzzy999 t1_j6uzju1 wrote

You do not need docker. Open a notebook, import torch or tensorflow, check if GPU is available. If true, profit. If false, you have python/framework/CUDA problems.
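Something like this in the first cell is enough (drop whichever framework you're not using):

    # quick GPU visibility check in a notebook cell
    import torch
    print(torch.cuda.is_available())                 # True -> profit

    import tensorflow as tf
    print(tf.config.list_physical_devices('GPU'))    # non-empty list -> profit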

5