How to handle regression tasks in Torch?
In Torch, handling a regression task typically involves defining a loss function and an optimizer to train the model. The loss function measures the difference between the model's predictions and the actual target values; common choices include Mean Squared Error (MSE) and Mean Absolute Error (MAE). The optimizer then adjusts the model's parameters to minimize that loss; common choices include Stochastic Gradient Descent (SGD) and Adam.
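For instance, the MAE loss and the Adam optimizer mentioned above are available in PyTorch as nn.L1Loss and optim.Adam. A minimal sketch of swapping them in (the model and learning rate here are illustrative choices, not part of the example below):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(1, 1)  # placeholder model for illustration

criterion = nn.L1Loss()                               # Mean Absolute Error
optimizer = optim.Adam(model.parameters(), lr=0.001)  # Adam optimizer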
Here is a simple, complete example for handling a regression task:
import torch
import torch.nn as nn
import torch.optim as optim
# Define the training data
X = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

# Define a simple linear model
model = nn.Linear(1, 1)

# Define the loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Train the model
for epoch in range(100):
    optimizer.zero_grad()          # clear gradients from the previous step
    outputs = model(X)             # forward pass
    loss = criterion(outputs, y)   # compute the loss
    loss.backward()                # backpropagate
    optimizer.step()               # update the parameters
    if (epoch+1) % 10 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, 100, loss.item()))

# Test the model
with torch.no_grad():
    test_input = torch.tensor([[4.0]])
    predicted = model(test_input)
    print('Predicted value: {:.2f}'.format(predicted.item()))
In the code above, we first define the data X and y, then define a simple linear model with mean squared error as the loss function and stochastic gradient descent as the optimizer. We then train the model, computing the loss and updating the parameters in each epoch, and finally test the trained model under torch.no_grad() and print its prediction.
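The same pattern extends to nonlinear regression: replace the single linear layer with a small multi-layer network and, if desired, use Adam instead of SGD. Below is a minimal sketch; the hidden-layer size, epoch count, learning rate, and synthetic target function are all illustrative assumptions, not values from the example above.

import torch
import torch.nn as nn
import torch.optim as optim

# Small MLP for nonlinear regression (hidden size chosen arbitrarily)
model = nn.Sequential(
    nn.Linear(1, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Synthetic data: learn y = x^2 on 100 points in [-3, 3]
X = torch.linspace(-3, 3, 100).unsqueeze(1)  # shape (100, 1)
y = X ** 2

for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    # Prediction for x = 2.0 should move toward 4.0 as training converges
    print(model(torch.tensor([[2.0]])).item())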