Forecasting stock and cryptocurrency prices is an ambitious dream for many investors and finance enthusiasts. It may sound like chasing a fantasy, but building a price-prediction model is a tractable problem. In this guide, we will use PyTorch, a powerful machine learning library, to build a learning algorithm capable of predicting the price of a cryptocurrency, in particular ADA, the native cryptocurrency of the Cardano blockchain.
Objective
You’ll learn how to leverage PyTorch to build a machine learning model that uses not only prices but also trading volume and the number of trades as input. We will implement the sliding window method and introduce a prediction gap (an “outlook gap”), a less common but effective technique. We will also explore different model architectures and optimizers to improve model performance.
Loading Data
We will use historical ADA data provided by Kraken, a cryptocurrency exchange that publishes an extensive archive of historical market data. The data will be loaded into a pandas DataFrame.
Python
import pandas as pd

# Load hourly ADA/EUR OHLCV data exported from Kraken
df = pd.read_csv("data/ADAEUR_60.csv")

# Convert the Unix timestamp into a proper datetime index
df['date'] = pd.to_datetime(df['timestamp'], unit='s', errors='coerce')
df.set_index('date', inplace=True)

df.head()
Data Visualization
Before proceeding, let’s visualize the data to better understand its structure. We will create a chart showing the closing price and the trading volume.
Python
import matplotlib.pyplot as plt

# Downsample the hourly data to daily averages to keep the plot readable
downsampled_df = df.resample('1D').mean()

# Closing price on the primary y-axis
plt.plot(downsampled_df.index, downsampled_df['close'], label='Close', color='blue')
plt.ylabel('Close', color='blue')
plt.tick_params(axis='y', labelcolor='blue')

# Trading volume on a secondary y-axis
ax2 = plt.twinx()
ax2.plot(downsampled_df.index, downsampled_df['volume'], label='Volume', color='red')
ax2.set_ylabel('Volume', color='red')
ax2.tick_params(axis='y', labelcolor='red')

plt.title('Close Price vs. Volume')
plt.show()
Data Preparation
We will set up the essential hyperparameters for model training and normalize the data to improve the quality and speed of training.
Python
from sklearn.preprocessing import StandardScaler

# Model and training hyperparameters
hidden_units = 64
num_layers = 4
learning_rate = 0.001
num_epochs = 100
batch_size = 32
window_size = 14        # length of each input sequence
prediction_steps = 7    # gap between the input window and the predicted value
dropout_rate = 0.2

# Input features and prediction target
features = ['close', 'volume', 'trades']
target = 'close'

# Work on a subset of the data and normalize the selected features
df_sampled = df[features].head(1000).copy()
scaler = StandardScaler()
selected_features = df_sampled[features].values.reshape(-1, len(features))
scaled_features = scaler.fit_transform(selected_features)
df_sampled[features] = scaled_features
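Since the target is scaled together with the other features, any prediction the model produces lives in the scaled space. As a small side note, the helper below is not part of the original walkthrough: it only illustrates how the fitted scaler’s per-column statistics could map the ‘close’ column back to euro prices.

Python
# Hypothetical helper: StandardScaler stores per-column mean_ and scale_
close_idx = features.index('close')

def inverse_scale_close(scaled_values):
    # Undo the z-score normalization for the 'close' column only
    return scaled_values * scaler.scale_[close_idx] + scaler.mean_[close_idx]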
Sliding Window Method
To avoid introducing bias and improve model learning, we will use the sliding window method and introduce a prediction gap. Each input window spans window_size consecutive observations, and the target is the closing price prediction_steps intervals beyond the end of the window, so the model learns to look ahead rather than simply echo the most recent value.
Python
import numpy as np

def create_sequences(data, window_size, prediction_steps, features, label):
    X, y = [], []
    for i in range(len(data) - window_size - prediction_steps + 1):
        # Input: a window of consecutive observations
        sequence = data.iloc[i:i + window_size][features]
        # Target: the label `prediction_steps` ahead of the window
        target = data.iloc[i + window_size + prediction_steps - 1][label]
        X.append(sequence)
        y.append(target)
    return np.array(X), np.array(y)

X, y = create_sequences(df_sampled, window_size, prediction_steps, features, target)
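As a quick sanity check (a sketch assuming the 1,000-row sample and the hyperparameters defined above), the resulting arrays should look like this:

Python
# 1000 rows - window_size (14) - prediction_steps (7) + 1 = 980 sequences
print(X.shape)  # expected: (980, 14, 3) -> (samples, window_size, num_features)
print(y.shape)  # expected: (980,)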
Data Splitting and Batching
We’ll split the data into training and test sets and organize them into batches. Note that we keep shuffle=False so the chronological order of the series is preserved.
Python
from sklearn.model_selection import train_test_split
import torch
from torch.utils.data import TensorDataset, DataLoader

# Chronological split: shuffle=False keeps the temporal order intact
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

# Convert the NumPy arrays to PyTorch tensors
X_train_tensor = torch.Tensor(X_train)
y_train_tensor = torch.Tensor(y_train)
X_test_tensor = torch.Tensor(X_test)
y_test_tensor = torch.Tensor(y_test)

# Wrap the tensors in datasets and batch them with DataLoaders
train_dataset = TensorDataset(X_train_tensor, y_train_tensor)
test_dataset = TensorDataset(X_test_tensor, y_test_tensor)
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
Creating the LSTM Model
We’ll start with an LSTM (Long Short-Term Memory) model, a type of recurrent neural network (RNN).
Python
import torch.nn as nn

class PricePredictionLstm(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size=1):
        super(PricePredictionLstm, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initialize the hidden and cell states with zeros
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.lstm(x, (h0, c0))
        # Use only the output of the last time step for the prediction
        out = self.fc(out[:, -1, :])
        return out
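The loss function and optimizer defined in the next step need an instantiated model. Here is a minimal sketch, assuming the input size equals the number of selected features and reusing the hyperparameters from earlier:

Python
# One input per feature column ('close', 'volume', 'trades')
model = PricePredictionLstm(input_size=len(features),
                            hidden_size=hidden_units,
                            num_layers=num_layers)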
Loss Function and Optimizer
We will use the Mean Squared Error (MSE) as a loss function and the AdamW optimizer to update the model parameters.
Python
# Mean Squared Error loss and AdamW optimizer
loss_fn = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
Training Loop
The training loop is the heart of the optimization process. At each epoch, we compute the predictions and the loss, update the model parameters, and then evaluate the model on the test set.
Python
import time
import math
from tqdm import tqdm
from sklearn.metrics import mean_squared_error

start = time.time()

for epoch in tqdm(range(num_epochs)):
    # Training phase
    model.train()
    total_train_loss = 0.0
    all_train_targets, all_train_outputs = [], []

    for inputs, targets in train_dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs.squeeze(), targets)
        loss.backward()
        optimizer.step()

        total_train_loss += loss.item()
        all_train_targets.extend(targets.numpy())
        all_train_outputs.extend(outputs.detach().numpy())

    # Evaluation phase
    model.eval()
    total_test_loss = 0.0
    all_test_targets, all_test_outputs = [], []

    for inputs, targets in test_dataloader:
        with torch.no_grad():
            outputs = model(inputs)
            loss = loss_fn(outputs.squeeze(), targets)

        total_test_loss += loss.item()
        all_test_targets.extend(targets.numpy())
        all_test_outputs.extend(outputs.detach().numpy())

    # Epoch metrics
    average_epoch_train_loss = total_train_loss / len(train_dataloader)
    average_epoch_test_loss = total_test_loss / len(test_dataloader)
    train_rmse = math.sqrt(mean_squared_error(all_train_targets, all_train_outputs))
    test_rmse = math.sqrt(mean_squared_error(all_test_targets, all_test_outputs))
    print(f"Epoch [{epoch + 1}/{num_epochs}], "
          f"Train Loss: {average_epoch_train_loss:.4f}, Test Loss: {average_epoch_test_loss:.4f}, "
          f"Train RMSE: {train_rmse:.4f}, Test RMSE: {test_rmse:.4f}")

duration = time.time() - start
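To get a visual feel for how the model tracks the test set, here is a minimal plotting sketch. It is not part of the original walkthrough; it assumes the all_test_targets and all_test_outputs lists collected in the final epoch, and the values are still in scaled space.

Python
import numpy as np
import matplotlib.pyplot as plt

# Compare scaled actual vs. predicted closing prices on the test set
plt.plot(np.array(all_test_targets), label='Actual (scaled)', color='blue')
plt.plot(np.array(all_test_outputs).squeeze(), label='Predicted (scaled)', color='red')
plt.title('Test Set: Actual vs. Predicted Close')
plt.legend()
plt.show()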
Final Thoughts
We introduced the use of LSTM (and its GRU cousin) for predicting cryptocurrency prices, following a methodical, step-by-step approach; a minimal GRU variant is sketched below for comparison. Remember that the quality of the model depends on the available computing power and the selection of hyperparameters. Keep experimenting with different models and techniques to improve your forecasts.
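The GRU variant follows the same structure as the LSTM above; this is only a sketch, since the article walks through the LSTM in detail. A GRU keeps a single hidden state instead of the LSTM’s hidden/cell pair.

Python
import torch
import torch.nn as nn

class PricePredictionGru(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size=1):
        super(PricePredictionGru, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Single hidden state, no cell state
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.gru(x, h0)
        # Predict from the last time step
        out = self.fc(out[:, -1, :])
        return out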