
Profitability trend prediction in crypto financial markets using Fibonacci technical indicator and hybrid CNN model

Abstract

Cryptocurrency has become a popular trading asset due to its security, anonymity, and decentralization. However, predicting the direction of the financial market can be challenging, leading to difficult financial decisions and potential losses. The purpose of this study is to gain insights into the impact of the Fibonacci technical indicator (TI) and of multi-class classification based on trend direction and price strength (trend-strength) on the performance and profitability of artificial intelligence (AI) models, particularly a hybrid convolutional neural network (CNN) incorporating long short-term memory (LSTM), which is also modified to reduce its complexity. The main contribution of this paper lies in its introduction of the Fibonacci TI, demonstrating its impact on financial prediction, and its incorporation of a multi-classification technique focusing on trend strength, thereby enhancing the depth and accuracy of predictions. Lastly, profitability analysis sheds light on the tangible benefits of utilizing Fibonacci and multi-classification. The profitability analysis is based on a hybrid investment strategy (combining trend direction and strength) and employs a six-stage predictive system: data collection, preprocessing, sampling, training and prediction, investment simulation, and evaluation. Empirical findings show that the Fibonacci TI improved the performance (in 44% of configurations) and profitability (in 68% of configurations) of the AI models. Hybrid CNNs showed the most performance improvements, particularly the C-LSTM model for trend (binary: 0.0023) and trend-strength (4-class: 0.0020; 6-class: 0.0099) prediction. Hybrid CNNs also showed improved profitability, particularly CLSTM, and improved performance, particularly the modified CLSTM (CLSTM mod). Trend-strength prediction showed the maximum improvement in long-strategy ROI (6.89%) and in average ROIs for the long-short strategy. Regarding the choice between hybrid CNNs, the CLSTM mod is a viable option for trend-strength prediction at the 4-class and 6-class levels due to its better performance and profitability.

Introduction

Cryptocurrency is a newer form of digital asset that has become a global phenomenon among investors and enthusiasts alike. The security, anonymity, and decentralized nature of its operation are a few of the key attractions behind its popularity. Forecasting financial markets is a difficult task, and it becomes extremely challenging given the highly volatile nature of cryptocurrencies, which leads investors to make financially unsound decisions resulting in financial difficulties and loss of capital. Popular techniques to study financial data for market prediction include technical analysis (TA) and fundamental analysis (FA). TA is an approach to predict the future action of an asset from the past actions of the market with the help of technical indicators (TIs), whereas FA studies investor sentiment, behavior, and changes in these factors due to changing information (news or financial data). Nowadays, both approaches are used. However, the effects of information are sometimes not easily extractable in the short term [1]. Moreover, TIs are more accurate forecasters of financial assets than fundamental indicators [2]. Thus, TA is very suitable for the application of artificial intelligence (AI) techniques to extract meaningful patterns and forecast either future trends or prices.

In the traditional sense, the TA of financial markets is based on asset price or chart data, which uses several TIs to infer market behavior and generate potential actionable decisions based on interpretations of such inferences. The most common categories of technical indicators include trend, momentum, and oscillator indicators; however, support/resistance TIs have not been given attention in the context of AI modeling. In this context, the Fibonacci TI (named after the famous mathematician Fibonacci) is one of the most popular support/resistance TIs, which gives thresholds of particular significance for the movement of the market (trend) or price.

Various studies have been carried out where neural networks (NN) have been used to assess an asset’s price to predict the future trends of the market. These NNs can be broadly categorized into recurrent neural networks (RNN), deep neural networks (DNN), and convolutional neural networks (CNN). Among RNNs and DNNs, long short-term memory (LSTM) and the multilayer perceptron (MLP) are very popular and show good accuracy [3,4,5,6,7,8]. On the other hand, CNN is not frequently used, due to its dimensional input structure, complexity, cost, or response time, but its effectiveness in extracting patterns, as shown in a few studies, is comparable to other NN architectures [9,10,11]. It also performs sufficiently well in hybrid models, where its output is used as features for RNNs and DNNs [12]. Other studies have also been reviewed to assess this growing market based on its efficiency, volatility, liquidity, diversification, and the transactional cost of trading for the financial market in particular [13, 14].

To facilitate decision-making, trend prediction based on a single output feature is the most common type of prediction technique in AI. Very few studies adopt a multi-class or multi-output approach, which emphasizes predicting the trend, price, volatility, or similar market metrics simultaneously [15,16,17,18]. In the financial markets, traders strive to achieve accurate predictions and generate profits. The ultimate aim of processing financial data is to forecast future actions and make well-informed decisions that enhance monetary gains. Thus, prediction is meaningful only if it leads to profitable decisions and is evaluated with metrics that show the profitability of an asset or market. The research objectives are as under:

  i) To gain insights into the impact of technical indicators and multi-classification,

  ii) To improve the hybrid CNN model by reducing model complexity,

  iii) To simulate a hybrid investment strategy based on the direction and strength of predicted trends, and

  iv) To evaluate the AI models based on performance and profitability.

This research provides a means for traders to focus on profitability as a viable measure for model efficacy by implementing a multi-class trend prediction method, featuring the Fibonacci retracement level to enhance accuracy and profitability in predicting near-future market trends. Hence, the research questions addressed in our study are outlined as:

  • RQ1: What is the extent of performance improvement achievable through the integration of multi-classification and Fibonacci?

  • RQ2: To what degree does an investment strategy, considering both trend direction and strength, impact Return on Investment (ROI)?

  • RQ3: What modifications can enhance the performance of the C-LSTM model?

The rest of this study is organized as follows: relevant state-of-the-art works on financial market prediction are presented in the “Literature review and background” section. The theoretical background of AI in the financial domain, related to multi-class problems and trading strategies, is also discussed in that section. “The proposed methodology” section presents the research methodology and empirical framework for this study, while analysis and discussion of the results are presented in the “Results and discussion” section. In addition, limitations and future directions are also discussed in the same section. The “Conclusions” section concludes this study.

Literature review and background

AI and DL as data processing techniques have attracted the attention of researchers worldwide. Specifically, in the field of stock prediction, a survey covering 1993 to 2017 shows that more than 80 percent of studies used ML and DL models [19, 20]. The crypto market has been analyzed from various perspectives, such as sentiment analysis [21, 22], trading recommender systems [23], the impact of the pandemic [24], and cost estimation [25]. In addition, algorithmic trading [26] has emerged as a lucrative option for investors to effectively forecast stock market prices. Despite the challenges of market volatility, complexity, and the involvement of multiple factors (political, social, etc.), automated approaches for assisting investors in making appropriate and timely decisions have become widespread. At present, more than 50 percent of studies use these models [27]. With the increased use of Fintech, DL has become predominant [28]. AI work in financial markets falls into three major domains:

  i) predict (stock, forex, commodity, and cryptocurrency prediction),

  ii) manage portfolios, and

  iii) trade (algorithms, strategies, and optimization).

Timely buying or selling of assets in the financial market is crucial for any investor, which makes the application of AI in this area an interesting field. In this regard, the following groups of studies emerge depending on the type of AI architecture implemented. Figure 1 shows the simplified structure of each AI architecture.

Fig. 1. Simplified structure of AI architectures implemented in this study: a multi-layer perceptron, b convolutional neural network, c long short-term memory, and d hybrid CNN (C-LSTM)

Multi-layer perceptron

MLP demonstrates remarkable capabilities in approximating arbitrary functions through effective mapping from inputs to outputs. It incorporates one or more hidden layers containing \(m\) hidden neurons, while the input layer accommodates \(n\) neurons, corresponding to the number of input values in an input vector [29]. In a study [5] examining the S&P-500, a feed-forward MLP with three hidden layers was trained to predict closing prices from 10 to 30 days ahead with nominal accuracy. Another MLP implementation [3] uses simple TIs on Bitcoin-US Dollar (USD) data to forecast returns; this architecture has been found to give good results, although the study uses a small feature set that could potentially be improved by adding extra features. In [30], a 3-state labeling scheme is employed for crypto trend prediction. The empirical framework implements comprehensive backtesting over different market conditions (bull, bear, and flat) to validate three major cryptocurrencies. Moreover, the contribution of each feature is also analyzed using SHAP values to improve the transparency/explainability of the AI model.
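For illustration, a minimal MLP classifier of the kind described above could be set up as sketched below; the hidden-layer widths, optimizer, and class count are assumptions made for the sketch (TensorFlow/Keras assumed), not the configuration tuned later in this study.

    # Minimal MLP sketch for multi-class trend prediction (illustrative only; the
    # hidden-layer widths and class count are assumptions, not the tuned configuration).
    from tensorflow.keras import Input, Sequential
    from tensorflow.keras.layers import Dense

    n_features = 19   # e.g., 18 TIs plus the Fibonacci retracement feature
    n_classes = 4     # trend-strength classes

    model = Sequential()
    model.add(Input(shape=(n_features,)))            # n input neurons
    model.add(Dense(100, activation="relu"))         # hidden layer with m = 100 neurons (assumed)
    model.add(Dense(100, activation="relu"))
    model.add(Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)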

Convolutional neural networks

A convolutional neural network (CNN) is a feed-forward NN for processing two-dimensional matrices; it typically comprises successive convolutional and subsampling layers, along with one or more hidden layers. The extracted high-level vectors are processed stepwise in the hidden and output layers, similar to an MLP [31]. Simultaneously, pooling layers minimize the dimensionality of the obtained features, which reduces the impact of noise. In a study [9], a CNN-sliding window architecture is compared with LSTM and RNN for three companies in the NIFTY-Pharma and NIFTY-IT indices. In another study [32], a novel trading algorithm is proposed where the CNN-TA model is applied to a 2-D matrix (15 × 15) generated from 15 TIs. Another study proposed an algorithm, called the random sampling method (RSM) [10], for the prediction of trends for Bitcoin and Litecoin. Based on DL, it delivers better performance than the reference models (MLP and LSTM).

Long short-term memory

LSTM was introduced in 1997 by Hochreiter and Schmidhuber [33]. It retains contiguous temporal information while demonstrating an exceptional capacity for long-term memory. In other words, it is an extension of recurrent neural network architectures, with a feedback loop and memory. Each memory unit, commonly referred to as a cell, comprises three controlling gates: the input, forget, and output gates [34]. In a study [4], a DL framework was used to study six market indices, 12 TIs, and interest/exchange rates for predicting relevant prices. Similarly, LSTM has also been used to forecast out-of-sample directional movements of stocks in the S&P 500 [6]. Model performance optimization by fine-tuning hyperparameters is also an important research direction in stock price prediction, as carried out in [35]. It is reported that combining hyperparameter fine-tuning and preprocessing could potentially improve a model’s performance. Along the same lines, the study [36] proposes a novel cuckoo search optimization (CSO) approach to predict stock prices based on user sentiments about the stock market. The proposed CSO-based LSTM model shows superior performance.
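For reference, the three gates mentioned above follow the standard LSTM cell formulation (standard notation, not taken verbatim from [33]):

\[
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad
h_t = o_t \odot \tanh(c_t),
\end{aligned}
\]

where \(x_t\) is the input, \(h_t\) the hidden state, \(c_t\) the cell state at time \(t\), \(\sigma\) the logistic sigmoid, and \(\odot\) element-wise multiplication.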

Hybrid CNN (CNN-LSTM)

This architectural framework is known as the long-term recurrent convolutional network (LRCN) [37], also referred to as the “CNN-LSTM” model. This architectural paradigm has been utilized in diverse domains, including speech recognition, natural language processing (NLP) [38], and the generation of textual descriptions for images, whereby CNNs are employed to extract relevant features for LSTMs. A salient facet of this model is the incorporation of a CNN, initially trained for intricate pattern recognition, which is subsequently repurposed as a feature extractor. This architecture is beneficial for problems where the input data has a spatial or temporal structure. LSTM is frequently employed in time series and financial market predictions due to its ability to capture complex temporal dependencies, handle long sequences of data, and cope with the inherent noise and non-linearity. In the hybrid CNN model, LSTM layers are integrated with CNNs to leverage the strengths of both architectures for financial time series forecasting. CNNs excel at capturing spatial patterns in data, while LSTM networks are proficient at modeling temporal dependencies. By combining these two architectures, the model can capture both spatial and temporal features in financial time series data for effective, accurate, and robust predictions.

The CNN-LSTM (CLSTM) configuration employed in this study draws inspiration from the model introduced by Stoye [12, 39]. This version encompasses an initial section comprising five convolutional layers equipped with 1 × 1 kernels so that the feature maps maintain the same dimensions as the original input. Subsequently, a single LSTM layer with 150 units processes these learned features, followed by an MLP with eight layers (200 neurons in the first layer and 100 neurons in each of the remaining layers).
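A minimal Keras sketch of this C-LSTM layout is given below. It follows the layer counts described above (five 1 × 1 convolutional layers, one LSTM layer with 150 units, and an eight-layer MLP head), but the filter count, activations, and output size are assumptions rather than the authors' exact settings.

    # C-LSTM sketch: 1x1 convolutions -> LSTM(150) -> 8-layer MLP head
    # (filter count, activations, and class count assumed for illustration).
    from tensorflow.keras import Input, Sequential
    from tensorflow.keras.layers import Conv2D, Reshape, LSTM, Dense

    l, i = 15, 19          # time steps x technical indicators (input matrix)
    n_filters = 32         # assumed number of 1x1 filters per convolutional layer
    n_classes = 4

    model = Sequential()
    model.add(Input(shape=(l, i, 1)))
    for _ in range(5):                                 # five convolutional layers
        model.add(Conv2D(n_filters, kernel_size=(1, 1),
                         padding="same", activation="relu"))   # 1x1 kernels keep the l x i shape
    model.add(Reshape((l, i * n_filters)))             # one feature vector per time step
    model.add(LSTM(150))                               # single LSTM layer with 150 units
    model.add(Dense(200, activation="relu"))           # first MLP layer: 200 neurons
    for _ in range(7):                                 # remaining seven MLP layers: 100 neurons
        model.add(Dense(100, activation="relu"))
    model.add(Dense(n_classes, activation="softmax"))  # output layer
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])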

Fibonacci technical indicator

The Fibonacci sequence is formed by adding the two preceding terms to obtain the next one. The ratio of consecutive terms approaches \(1.618\), commonly referred to as the golden ratio (\(\phi\)). Dividing each new member of the sequence by its predecessor yields ratios that approach, but never exactly reach, the \(\phi\) value. Notably, humans are said to subconsciously seek out the golden ratio, perceiving objects that deviate from this proportion as unappealing and disproportional. Similarly, market traders draw parallels to this psychological trait during their analysis.

Fibonacci retracement is employed to discern trends and retracements, enabling investors to accurately predict entry and exit points in approximately 70% of cases [40]. This hypothesis is rooted in technical analysis, offering insights into forecasting asset prices [41]. The financial industry has leveraged Fibonacci’s sequence to develop five distinct trading tools: arcs, fans, retracements, extensions, and time zones. These tools utilize lines derived from the Fibonacci sequence to indicate potential trend changes. However, the scope of this study is limited to gaining insights into Fibonacci retracement levels only. The Fibonacci retracement algorithm, based on [42] and utilized throughout this study, is given as Algorithm 1.

Algorithm 1. Fibonacci retracement level
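Since Algorithm 1 is reproduced only as a figure, a compact Python sketch of a windowed Fibonacci retracement computation is given below. It uses the conventional retracement ratios and a 15-bar window; the exact level set and window handling in [42] may differ.

    # Windowed Fibonacci retracement levels (illustrative sketch; the ratio set and
    # windowing follow common practice and may differ from the exact algorithm in [42]).
    import numpy as np

    RATIOS = (0.0, 0.236, 0.382, 0.5, 0.618, 0.786, 1.0)

    def fib_levels(high, low, close, window=15):
        """Return, for each bar, the retracement levels of the last `window` bars."""
        levels = np.full((len(close), len(RATIOS)), np.nan)
        for t in range(window - 1, len(close)):
            hi = np.max(high[t - window + 1:t + 1])
            lo = np.min(low[t - window + 1:t + 1])
            diff = hi - lo
            # Uptrend window: retrace down from the high; downtrend: retrace up from the low.
            if close[t] >= close[t - window + 1]:
                levels[t] = [hi - r * diff for r in RATIOS]
            else:
                levels[t] = [lo + r * diff for r in RATIOS]
        return levels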

Profitability metrics for financial predictive systems

To evaluate a financial prediction system, both performance metrics and profitability metrics are important. However, researchers tend to focus only on performance metrics, possibly due to their interest in evaluating the performance and implementation of hybrid and ensemble approaches. Profitability metrics can be categorized based on the financial predictive system being evaluated: risk-based (Sharpe ratio and Sortino ratio), return-based (ROI and annualized ROI), and trade/bet-based metrics (max profit per trade and percentage of profitable trades). Among these, the Sharpe ratio [8, 16, 43,44,45,46,47,48], return on investment (ROI) [8, 16, 43, 45,46,47,48,49,50], and Sortino ratio [8, 45, 46] are the most prominent. To assess the viability of the predictions, backtesting on historical data is used to ascertain how a financial prediction system would have performed. Two common strategies have been implemented in the literature: (i) the Long strategy and (ii) the Long-Short strategy. However, other trading strategies do exist that require a deeper understanding of the market to implement, such as portfolio optimization [51], asset pair-wise strategies [8], and trading agents [43].
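For clarity, the return- and risk-based metrics mentioned above can be computed from an equity curve or a series of period returns as sketched below; the risk-free rate and annualization factor are assumptions.

    # Return- and risk-based profitability metrics from period returns
    # (illustrative; risk-free rate and annualization factor are assumptions).
    import numpy as np

    def roi(equity_curve):
        """Return on investment from an equity curve (start vs. end capital)."""
        return (equity_curve[-1] - equity_curve[0]) / equity_curve[0]

    def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
        excess = np.asarray(returns) - risk_free / periods_per_year
        return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

    def sortino_ratio(returns, risk_free=0.0, periods_per_year=252):
        excess = np.asarray(returns) - risk_free / periods_per_year
        downside = excess[excess < 0]          # only downside deviation in the denominator
        return np.sqrt(periods_per_year) * excess.mean() / downside.std(ddof=1)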

Related works

In price prediction, models are trained on a particular asset or group of assets to later predict its price. In this regard, Zoumpekas [11] carried out a study on Ethereum (ETH), taking training data from August 8, 2015, to May 28, 2018, while test data was randomly selected from time frames between May 29, 2018, and April 26, 2020. The data has a 5-min interval and is sourced from a single exchange (Poloniex). The study uses the open, high, low, close, and volume (OHLCV) values and weighted averages (Wavg) for each interval. Deep learning models such as CNN, LSTM, s-LSTM, bi-LSTM, and GRU have been used. Results show that LSTM and GRU perform best in terms of directional accuracy (71%). It has been observed that the performance of bi-LSTM degrades on the validation and test sets compared to LSTM and GRU, which suggests some degree of over-fitting in these models. Two investment strategies (buy–sell and buy–hold) have also been used to evaluate the models, which validates the performance, but the results vary with the type of trend observed in the market.

Another comparative analysis of three recurrent neural networks (RNN) achieved successful close price prediction [7]. LSTM, bi-LSTM, and GRU were implemented on daily cryptocurrency data retrieved from finance.yahoo.com. The authors carried out a comparison with other RNN-related studies and achieved better performance. However, the study did not apply an investment strategy or evaluate the models in terms of profit-cost benefit. In [52], a CNN was used to extract future price trends for various popular stock indices, and a novel deep trend-following investment strategy was applied. The investment strategy proved to be profitable, with maximum accumulated returns of 33% for NASDAQ and 31% for DJIA. Trend prediction of crypto assets has been successfully implemented using C-LSTM, CNN, MLP, and RBFN in [12], where C-LSTM (CNN-LSTM) and CNN show good accuracy and statistical significance. The suitability of NNs for intraday trading using high-frequency data (HFT), i.e., a 1-min interval, of six crypto coins based on USD exchange rates, namely Bitcoin (BTC-USD), Ethereum (ETH-USD), Dash (DASH-USD), Litecoin (LTC-USD), Ripple (XRP-USD), and Monero (XMR-USD), is validated, where 18 TIs are selected, inspired by a previous study [53]. The results also confirm that C-LSTM can be successfully implemented and that accurate trend prediction can be achieved on most cryptocurrencies with high volume and liquidity. A literature analysis of related models is presented in Table 1.

Table 1 Literature analysis

Based on the analysis of the above-discussed research works, the following limitations are found in the existing literature. Table 2 shows the details of shortcomings of existing works that are resolved in this study.

Table 2 Limitations found in the existing literature and resolved in this study

The proposed methodology

In this section, the research design and methods are presented, giving an overview of a generic financial market prediction system with the help of a flow diagram and a detailed description of the predictive framework followed in this study.

System overview

A comprehensive predictive framework for the stock market is based on six rudimentary steps: collection of input features (data), selection of features, preprocessing of selected features, application of an AI model, model evaluation with suitable metrics, and trading decisions; a review of different input indicators for AI models is given in [54]. Similarly, the financial predictive system employed in this paper comprises six stages. Initially, the data is retrieved or collected from freely available data sources (Note 1). Afterward, pre-processing and sampling of the dataset are carried out to prepare the data for training the AI model on the training and validation sets. A test set is used to predict the required output in the prediction stage. Another aspect that is rarely considered is the profitability of the model. The investment simulation stage is included to simulate the model under a trading strategy and evaluate whether the model performs robustly in terms of profitability. In the last stage, evaluation is carried out based on metrics that assess predictive performance (accuracy) and profitability (ROI). The diagrammatic view of the conceptual framework of a financial predictive system and its detailed implementation are shown in Figs. 2 and 3.

Fig. 2. Flow diagram for research design and method

Fig. 3. Detailed research methodology and architectural framework of the proposed approach

Dataset

The dataset contains Bitcoin stock data retrieved from eatradingacademy.com at 1-min intervals and consists of six columns: date-time, open, high, low, close, and volume. The dataset covers the period from 7 October to 13 December 2022. The number of BTC/USDT samples is 97,929, as shown in Table 3.

Table 3 Analysis of the dataset

Preprocessing

The pre-processing module is responsible for processing the dataset to extract 18 TIs and the Fibonacci retracement TI. This list of TIs spans the trend-following, momentum, and oscillator categories, as shown in Table 4. Two categories of indicators are used by practitioners: moving averages are often used to define trading rules that generate buy or sell signals based on price movements, while momentum indicators track the rate of price change. These categories of indicators are used by investors to measure the trend direction and reversals to define trading signals [55], and they are computed with the Python package TA-Lib (Note 2). The Fibonacci TI, on the other hand, combines buy-sell signal points with the likely trend reversal/direction based on support and resistance. Due to the high computational cost of the algorithms, the current selection has been limited to these indicators.

Table 4 Categories of technical indicators

After the calculation of the TIs, all numerical columns are normalized using the min–max scaler from the scikit-learn library. Afterward, sparse labels are encoded for the target variable, covering the trend direction and the strength of the price change (0.05%, 0.10%, 0.15%, 0.23%, and 0.23%+). The target label is an integer value from zero to \(n-1\), where \(n\) represents the number of classes being encoded. The number of columns is increased from six (five stock features + one target label) to 20 (19 TIs + one target label).
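A condensed sketch of this preprocessing step is given below; only three of the TIs are shown, and the class thresholds are illustrative assumptions rather than the exact labeling rules used in the study.

    # Preprocessing sketch: TI computation, min-max scaling, and trend-strength label
    # encoding (indicator subset and class thresholds shown here are assumptions).
    import numpy as np
    import pandas as pd
    import talib
    from sklearn.preprocessing import MinMaxScaler

    def preprocess(df, horizon=1):
        close = df["close"].to_numpy(dtype=float)
        feats = pd.DataFrame(index=df.index)
        feats["rsi"] = talib.RSI(close, timeperiod=14)      # momentum indicator
        feats["sma"] = talib.SMA(close, timeperiod=10)      # trend-following indicator
        feats["mom"] = talib.MOM(close, timeperiod=10)
        # ... the remaining TIs and the Fibonacci retracement feature would be added here

        # Future price change over `horizon` steps, used to build the class label
        ret = pd.Series(close, index=df.index).pct_change(horizon).shift(-horizon)

        # Drop warm-up rows (NaN indicators) and trailing rows without a future return
        data = feats.assign(ret=ret).dropna()

        # Min-max normalization of the feature columns only
        X = MinMaxScaler().fit_transform(data[feats.columns])

        # Example 4-class scheme (thresholds assumed): 0 strong-down, 1 weak-down,
        # 2 weak-up, 3 strong-up, with a 0.10% price-change boundary
        y = np.digitize(data["ret"].to_numpy(), bins=[-0.0010, 0.0, 0.0010])
        return X, y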

Sampling

In this module, data is sampled using stratified random sampling over the whole dataset. The underlying hypothesis is that randomization of the data sample will not affect the results, since the models will be sufficiently trained and generalized on the available dataset to extract meaningful patterns without affecting their efficiency, and over-fitting will be prevented. Due to the requirement of lagged information for the computation of the TIs, the dataset samples are slightly extended back to the last minutes of 6 October 2022 so that the 19 required features are available from the very beginning.

Data samples have been constructed where each pattern consists of \(i\) indicators over \(l\) time steps along with the multi-class label to be predicted. The two-dimensional attributes, including the class label, are input into the CNN and hybrid CNN in matrix form (\(l \times i\)). The generated patterns are split into three datasets: train (70%), validation (15%), and test (15%). Moreover, another set of data is also created for the investment simulator module, covering 1 to 13 December 2022.
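A sketch of the pattern construction and stratified 70/15/15 split is given below; the window length and the reuse of X and y from the preprocessing sketch are assumptions.

    # Pattern construction (l x i windows) and stratified 70/15/15 split
    # (sketch; window length and the use of X, y from the preprocessing sketch are assumptions).
    import numpy as np
    from sklearn.model_selection import train_test_split

    def make_patterns(X, y, l=15):
        """Stack l consecutive feature rows into one l x i pattern per sample."""
        P = np.stack([X[t - l + 1:t + 1] for t in range(l - 1, len(X))])
        return P, y[l - 1:]

    P, labels = make_patterns(X, y, l=15)
    P_train, P_tmp, y_train, y_tmp = train_test_split(
        P, labels, test_size=0.30, stratify=labels, random_state=42)
    P_val, P_test, y_val, y_test = train_test_split(
        P_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=42)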

Training and prediction

AI models are independently trained on the training dataset, after which predictions are made on the validation and test sets. The predictions obtained on the investment set are fed into the investment simulation module to simulate a real trading environment.

Investment simulation

This module simulates two investment strategies, Long and Long-Short, on the data and the predicted values. The profits accrued on the trades are registered and a running tally of net profit is maintained, which is later used by the evaluation module for analysis. Each AI model is simulated with each type of investment strategy on the investment set.
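A sketch of how such a simulation can be driven by the predicted trend direction is given below; the signal mapping and full-capital position sizing are assumptions, and transaction costs are ignored.

    # Long and Long-Short simulation driven by predicted trend direction
    # (illustrative; position sizing and signal mapping are assumptions, no fees).
    import numpy as np

    def simulate(prices, signals, strategy="long"):
        """signals: +1 = predicted up-trend, -1 = predicted down-trend, 0 = flat."""
        capital = 1.0
        equity = [capital]
        for t in range(len(signals) - 1):
            ret = prices[t + 1] / prices[t] - 1.0          # realized next-step return
            if signals[t] > 0:                             # long position
                capital *= (1.0 + ret)
            elif signals[t] < 0 and strategy == "long-short":
                capital *= (1.0 - ret)                     # short position
            equity.append(capital)
        return np.array(equity)                            # running tally of net worth

    # Example usage with the roi() helper sketched earlier:
    # long_roi = roi(simulate(prices, signals, "long"))
    # long_short_roi = roi(simulate(prices, signals, "long-short"))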

Evaluation

Performance and profitability evaluation of the model is carried out on the test and investment dataset results. The evaluation is based on factors such as accuracy and ROI.

Empirical framework

Two phases of preliminary tests are carried out to select the best configuration for final experimentation. In the first phase, appropriate network structures and parameters are identified for each model using the validation set based on accuracy and ROI, over three independent experiments on Bitcoin at 1-min granularity. In the second phase, the best configurations for each model are evaluated on the test set over 20 independent experiments. The same procedure is used to modify the structure and configuration of the C-LSTM model to reduce its complexity in terms of parameters while improving the performance and profitability of the model.

Phase 1—exploratory experiments

Exploratory experiments for MLP are carried out where 15 combinations of hidden layers and hidden neurons per layer have been tested. For CNN, 12 configurations were tested in this stage, which included the following architectures based on a convolutional matrix and number/type of blocks:

  • Vertical filters: Five blocks, each consisting of a convolutional layer, batch normalization, dropout (first four blocks), and ReLU activation, followed by a global average pooling layer to minimize overfitting. The network’s last layer consists of a single neuron that performs the final prediction. The shape of the convolutional kernel is vertical (K × 1), allowing each kernel to work on a single indicator over time; the focus is thus on a particular TI.

  • Horizontal filters: Four blocks, similar to the vertical filters except that the dropout layer is in the first three blocks. The shape of the convolutional kernel is horizontal (1 × K), allowing each kernel to work on a single time instance across indicators.

  • DNN: a deep neural network that consists of three blocks similar in construction to the vertical and horizontal variants, except that the convolutional kernel is of shape K × K (a sketch of the three kernel shapes is given after this list).
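The three kernel geometries can be expressed as follows (the filter count and K are assumed; only the kernel shapes matter for the comparison):

    # Kernel geometries of the three CNN variants (sketch; 32 filters and K = 3 assumed).
    from tensorflow.keras.layers import Conv2D

    K = 3
    vertical   = Conv2D(32, kernel_size=(K, 1), padding="same")  # one indicator over K time steps
    horizontal = Conv2D(32, kernel_size=(1, K), padding="same")  # K indicators at one time step
    square_dnn = Conv2D(32, kernel_size=(K, K), padding="same")  # "DNN": K x K over time and indicators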

The best results are obtained from the DNN variant as compared to the vertical and horizontal filters. For LSTM, only four combinations of LSTM units are tested due to the computational complexity, whereas exploratory analysis of two configurations of the CLSTM architecture, with four and five blocks, is carried out on the basis of training and validation samples. In all cases, average accuracy and ROI are calculated on the validation set. This leads to the selection of several sets of best parameters based on validation accuracy and ROI, as reported in Table 5. The LSTM and CLSTM architectures each provide a single best model in terms of accuracy and ROI.

Table 5 Screening of models based on stage one of exploratory experiments

In stage two, the best configurations for MLP and CNN are tested on the validation set through 20 independent experiments to select the best-performing configuration based on validation accuracy, and on Long ROI and Long-Short ROI over the investment set. This stage is essential to reach a single best and most suitable configuration; it screens the candidates down to the single best configuration for each of these two NN architectures, as reported in Table 6.

Table 6 Selection of models based on stage two of exploratory models

Phase 2—evaluation experiments

In the second phase, the single best configurations of each model are compared based on performance and profitability for the cryptocurrency and granularity considered, in line with the defined research questions. Results are based on test accuracy for performance and on Long ROI and Long-Short ROI for profitability. The following stages of experiments, based on the research questions defined earlier, are described below:

  • Stage 1—Fibonacci retracement indicator and multi-class problem: In the initial stage, two separate experiments are conducted to obtain performance results (test-set accuracy) with and without Fibonacci retracement levels for each class of output (binary, 4-class, 6-class, 8-class, and 10-class). The reported results are obtained from 20 independent experiments and given as test accuracy for BTC at 1-min data granularity.

  • Stage 2—Profitability analysis of trading/investment strategy: Similar to stage one, separate experiments are conducted to get profitability results for ROI (Long and Long-Short) based on investment set with and without Fibonacci retracement levels for each class of output. The experiments are conducted for BTC at 1-min data granularity.

  • Stage 3—C-LSTM modification: To address RQ3, modifications to the structure of the C-LSTM network have been carried out in the following sequence: (i) number of CNN blocks, (ii) number of LSTM units, (iii) number of dense layers (Dlayers), and (iv) number of dense-layer neurons (Dneurons). The results have been optimized for the 4-class output over 20 independent experiments. Configurations have been tested for each parameter and the best parameters are selected, as shown in Table 7.

Table 7 Parameter tuning for modification of CLSTM model configurations tested to select the best parameters

Results and discussion

In this section, the outcome of the evaluation phase on the test and investment set has been discussed and reported in Table 8.

Table 8 Improvement in performance/profitability vis-à-vis Fib/non-Fib and multi-class output for BTC 1-min

RQ 1—performance improvement due to Fibonacci TI and multi-class prediction

Considering the NN architectures implemented with Fibonacci/non-Fibonacci sets for BTC 1-min, 25 combinations of NN models and binary/multi-class outputs have been implemented. Results for performance improvements are presented in Fig. 4. In the case of Fibonacci/non-Fibonacci sets, performance improvements have been observed in only 11 combinations when Fibonacci is employed. Among these NNs, CLSTM (four out of five configurations) and CLSTM mod (three out of five configurations) performed better. Only a few configurations showed performance improvements in the case of CNN, LSTM, and MLP.

Fig. 4. Performance improvements of Fibonacci TI in trend and trend-strength prediction as compared to multi-class output for BTC 1-min

Trend prediction showed significant improvements in performance, especially for CLSTM mod, CLSTM, CNN, and MLP. Only in the case of LSTM did performance depreciate, by a negligible − 0.0002%. Performance improvements have also been observed in trend-strength prediction, but with increased risk, as models with depreciating performance showed an average decrease of − 0.0065% in the case of MLP (4-class). Performance improvements have been scarce, with the maximum improvement of + 0.0099% observed for CLSTM mod. Overall, out of the five configurations for each class of outputs (binary, 4-class, 6-class, 8-class, and 10-class), performance improvements were observed in four, two, two, two, and one NN architectures, respectively.

RQ 2—investment strategy based on trend and trend-strength direction

With regard to the investment strategies and Fibonacci/non-Fibonacci sets for BTC 1-min, 25 configurations of NN models and binary/multi-class outputs are implemented over 20 runs of the experiment. These experiments can be categorized as: (a) Long strategy, and (b) Long-Short strategy. Results for the profitability metrics are presented in Figs. 5 and 6, and Table 8.

Fig. 5. Profitability improvements of Fibonacci TI in trend prediction and trend-strength prediction as compared to multi-class output based on the long strategy for BTC 1-min

Fig. 6. Profitability improvements of Fibonacci TI in trend prediction and trend-strength prediction as compared to multi-class output based on the long-short strategy for BTC 1-min

Long strategy

In the case of the long strategy for BTC 1-min and Fibonacci/non-Fibonacci sets, profitability (ROI) improved in 15 combinations out of 25 (60% of configurations) when Fibonacci sets are employed, with a maximum improvement of 6.89% for LSTM (8-class) and a maximum depreciation of − 4.29% for CNN (10-class). Among these, CLSTM (five out of five configurations) and LSTM (four out of five configurations) showed increased profits. These were followed by CLSTM mod, CNN, and MLP, which showed increased profits in two out of five configurations. Results for profitability improvements are presented in Fig. 5.

In the case of trend prediction and trend-strength direction vis-à-vis Fibonacci/non-Fibonacci sets, trend prediction showed improvements in profitability for all NN architectures, especially CLSTM mod, CLSTM, CNN, and LSTM. Only in the case of MLP did profitability depreciate, by − 0.49%. Profitability improvements are also observed in trend-strength prediction, but with increased risk, as models with diminishing profits had an average decrease of − 4.29% in the case of CNN (10-class); however, the profitability improvements are significant, with the maximum improvement of + 6.89% observed for LSTM. Overall, out of the five configurations for each class of outputs (binary, 4-class, 6-class, 8-class, and 10-class), profitability improvements are observed in four, two, three, four, and two NN architectures, respectively. It is also important to highlight that CLSTM did not incur any losses, although ROI neither increased nor decreased for three out of five configurations.

Long short strategy

In the case of the long-short strategy for BTC 1-min and Fibonacci/non-Fibonacci sets, profitability (ROI) improvements are observed in 17 combinations out of 25 (increased ROI in 68% of configurations) when Fibonacci sets are employed, with a maximum improvement of 2.83% for LSTM (binary) and a maximum depreciation of − 1.12% for CNN (6-class). Among these, CLSTM (five out of five configurations), CLSTM mod (four out of five configurations), and LSTM (four out of five configurations) showed increased long-short ROI. These were followed by CNN and MLP, which showed increased profits in two out of five configurations. Results for profitability improvements are presented in Fig. 6.

In the case of trend prediction and trend-strength direction vis-à-vis Fibonacci/non-Fibonacci sets, trend prediction showed improvements in profitability for all NN architectures, with no depreciation in long-short ROI. Profitability improvements are also observed in trend-strength prediction, but with increased risk, as models with diminishing profits depreciated by as much as − 1.12% in the case of CNN (6-class); however, the profitability improvements are significant, with the maximum improvement of 2.83% observed for LSTM. Overall, out of the five configurations for each class of outputs (binary, 4-class, 6-class, 8-class, and 10-class), profitability improvements are observed in five (all configurations), four, three, two, and one NN architectures, respectively. Notably, CLSTM suffered no losses, and four out of five configurations produced profitable results.

RQ 3—CLSTM modification

Results related to modifications in the hybrid CNN, along with a summarized conclusion, are discussed here. Results for the performance and profitability metrics are presented in Figs. 7 and 8 and Table 9.

Fig. 7. Performance improvements of Fibonacci TI in trend prediction and trend-strength prediction as compared to multi-class output for BTC 1-min for CLSTM mod and CLSTM

Fig. 8. Profitability improvements of Fibonacci TI in trend prediction and trend-strength prediction as compared to multi-class output based on the long and long-short strategy for BTC 1-min for CLSTM mod and CLSTM

Table 9 Comparison of experimental results for CLSTM mod multi-class output for BTC 1-min

In terms of the performance (accuracy) impact of the Fibonacci TI (refer to Fig. 7), CLSTM mod performed better for trend prediction and for lower levels of trend-strength prediction, while at higher levels performance degradation occurred. Similarly, the performance of CLSTM was better for trend prediction, but in the case of trend-strength prediction, performance varied across levels: 4-class (degradation), 6-class (improvement), 8-class (no change), and 10-class (no change). Moreover, the performance improvements in CLSTM mod are negligible in absolute terms but significant when compared to CLSTM. As can be seen in Fig. 7, CLSTM mod has shown improved performance for the 4-class and 6-class levels of trend-strength prediction. Thus, CLSTM mod is a viable option for trend-strength prediction at these levels.

In terms of the profitability (long ROI) impact of the Fibonacci TI (refer to Fig. 8), CLSTM mod performed better for trend prediction and for 6-class trend-strength prediction, while depreciating in all other cases of trend-strength prediction. The profitability of CLSTM was better for both trend and trend-strength prediction, with stable but smaller ROI margins. In contrast, the long-short ROI performed better for CLSTM than the long ROI, but still registered depreciation in profitability at higher levels of trend-strength. Thus, CLSTM mod is a viable option for trend-strength prediction at these levels.

Summary and discussions

Based on the results discussed in the preceding subsections, summarized findings are presented below:

Bitcoin 1-min

  • Fibonacci enhanced performance and profitability of hybrid CNNs.

  • CNN, LSTM, and MLP generally decreased performance.

  • Traditional NNs improved but with higher risk.

CLSTM modification

  • The incorporation of Fibonacci enhances profitability in trend prediction and mid-level trend-strength prediction.

  • CLSTM proves to be a stable option for profitable trading, while

  • CLSTM mod exhibits the potential for profitability with the implementation of effective risk management.

Multi-classification-based trend-strength prediction provides effective and easy-to-interpret trading signals that can be incorporated into trading strategies. Traders can easily employ these trading signals in simulations and backtesting of trading strategies, as well as in real-world scenarios, for effective and profitable trading. Binary (BUY–SELL) or 3-state (BUY–HOLD–SELL) prediction provides only the direction of the trend but no useful information on the degree of the expected price change. This handicap can be removed by using trend-strength prediction (a multi-classification problem) to predict the expected price change, thus ensuring better utilization of the investment amount and better decision-making. Moreover, it has been observed that the 4-, 6-, and 8-class settings provide better performance or profitability than the 10-class setting, which can be attributed to shrinking price changes affecting the magnitude/degree of trend strength of a particular class.

Research on high-frequency trading (HFT) data is scarce due to high volatility, which is particularly true for cryptocurrencies. A performance comparison has been carried out for Bitcoin with the study by Alonso-Monsalve [12]. The models have been implemented on the dataset under consideration in this paper and the obtained results have been compared. Improvements have been observed in MLP, CNN, C-LSTM, and C-LSTM mod. Notably, CLSTM mod (50.81%) also showed improved performance as compared to CLSTM (50.60%) without modification, as shown in Table 10.

Table 10 Comparison with similar existing work [12]

Statistical analysis

Statistical significance testing of the relative predictive outcomes of the AI models on the investment samples has been carried out to provide more soundness to the study. In this regard, the McNemar test evaluates the null hypothesis that the two forecasts have the same performance (no improvement in the predictive outcome). The p-values compare AI models with and without the Fibonacci TI for each class of output, as reported in Table 11. It is observed that the hybrid CNNs seem to be more predictable than the rest for all classes of experiments. All AI models except CNN are better for binary (2-class) trend prediction. The 8-class setting has emerged as statistically better than the 4-class, 6-class, and 10-class settings in trend-strength prediction.
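A sketch of how such a McNemar comparison can be set up with statsmodels is given below; the exact contingency construction used by the authors is not specified, so this is an illustrative arrangement.

    # McNemar test sketch comparing two models' predictions against the true labels
    # (statsmodels-based; an illustrative setup, not the authors' exact procedure).
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    def mcnemar_pvalue(y_true, pred_a, pred_b):
        correct_a = np.asarray(pred_a) == np.asarray(y_true)
        correct_b = np.asarray(pred_b) == np.asarray(y_true)
        # 2x2 table of agreement/disagreement in correctness between the two models
        table = [[np.sum( correct_a &  correct_b), np.sum( correct_a & ~correct_b)],
                 [np.sum(~correct_a &  correct_b), np.sum(~correct_a & ~correct_b)]]
        return mcnemar(table, exact=False, correction=True).pvalue

    # Example: p = mcnemar_pvalue(y_test, preds_with_fib, preds_without_fib)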

Table 11 Statistical significance of the differences in the relative outcome of AI models on investment sample

Limitations and future work

For this study, we conducted investment simulations to assess profitability using 13 days of data; however, the data duration should ideally be based on its economic and financial significance, such as quarterly or fiscal-year data. Evaluating additional granularities and other assets or financial markets, such as stocks, commodities, and index funds, would provide a more comprehensive understanding of the impact of data granularity on performance and profitability.

This study solely incorporates ROI within the domain of return-based metrics. However, it is worth noting that implementing different categories of metrics, including risk-based and trade/bet-based metrics, could be more beneficial, enabling a comprehensive evaluation of profitability optimization through risk minimization and return maximization. Another aspect that requires attention is the overfitting problem, which is particularly prevalent in LSTM and hybrid CNNs. Further hyperparameter tuning and data preprocessing are intended to improve model generalization in future work. Lastly, the window size for the Fibonacci TI has been set to 15 for all experiments, which can be optimized to gain insights into its effect on the performance or profitability of the model.

Conclusions

This study investigates the effects of multi-classification, as well as the implications of Fibonacci as an input feature, based on profitability metrics, specifically return-based ROI, with contributions to the investment and finance sector within the realm of algorithmic trading.

Traditional neural networks (MLP, CNN, and LSTM) and hybrid CNN algorithms based on the CNN-LSTM architecture are employed for the prediction of trend direction along with a price-change strength component to help the financial predictive system or investor in decision-making. The research findings presented here offer pragmatic insights for the practical implementation of high-frequency algorithmic trading. The Fibonacci TI is effective in improving the performance and profitability of hybrid CNNs such as CLSTM mod and CLSTM. Traditional neural network architectures such as CNN, LSTM, and MLP depreciated, while hybrid CNNs performed better for trend and trend-strength prediction in most configurations.

For this study, we conducted investment simulations to assess profitability using 13 days of data; however, the data duration should ideally be based on its economic and financial significance, such as quarterly or fiscal-year data. Evaluating additional granularities and other assets or financial markets, such as stocks, commodities, and index funds, would provide a more comprehensive understanding of the impact of data granularity on performance and profitability.

This study solely incorporates ROI within the domain of return-based metrics. However, it is worth noting that implementing different categories of metrics, including risk-based and trade/bet-based metrics, could be more beneficial, enabling a comprehensive evaluation of profitability optimization through risk minimization and return maximization. Secondly, the overfitting problem, which is particularly prevalent in LSTM and hybrid CNNs, requires attention; further hyperparameter tuning and data preprocessing are intended to improve model generalization. Thirdly, the window size for the Fibonacci TI has been set to 15 for all experiments, which can be optimized to gain insights into its effect on the performance and profitability of the model. Other future avenues of research include increasing the dataset time duration based on economic and financial sector significance, adding other financial assets, assessing re-training time, and employing different categories of profitability metrics.

Data availability

The dataset used in this study is downloaded from https://eatradingacademy.com/ and can be made available from the authors upon reasonable request.

Notes

  1. Yahoo Finance (https://finance.yahoo.com/), Investing (https://www.investing.com), and EA Trading Academy (https://www.eatradingacademy.com). The most commonly used data granularity is daily; this study focuses on high-frequency trading, thus a very short granularity is considered, i.e., 1-min.

  2. Technical analysis library (http://ta-lib.org/).

References

  1. Agrawal J, Chourasia DV, Mittra A. State-of-the-art in stock prediction techniques. Int J Adv Res Electr Electron Instrum Eng. 2013;2:1360–6.


  2. Huang J-Z, Huang W, Ni J. Predicting bitcoin returns using high-dimensional technical indicators. J Finance Data Sci. 2019;5:140–55.


  3. Adcock R, Gradojevic N. Non-fundamental, non-parametric bitcoin forecasting. Physica A. 2019;531: 121727. https://doi.org/10.1016/j.physa.2019.121727.


  4. Bao W, Yue J, Rao Y. A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLoS ONE. 2017;12: e0180944. https://doi.org/10.1371/journal.pone.0180944.


  5. Das S, Mokashi K, Culkin R. Are markets truly efficient? Experiments using deep learning algorithms for market movement prediction. Algorithms. 2018;11:138. https://doi.org/10.3390/a11090138.


  6. Fischer T, Krauss C. Deep learning with long short-term memory networks for financial market predictions. Eur J Oper Res. 2018;270:654–69. https://doi.org/10.1016/j.ejor.2017.11.054.


  7. Hansun S, Wicaksana A, Khaliq AQM. Multivariate cryptocurrency prediction: comparative analysis of three recurrent neural networks approaches. J Big Data. 2022;9:50. https://doi.org/10.1186/s40537-022-00601-7.


  8. Lin Y, Liu S, Yang H, Wu H. Stock trend prediction using candlestick charting and ensemble machine learning techniques with a novelty feature engineering scheme. IEEE Access. 2021;9:101433–46. https://doi.org/10.1109/ACCESS.2021.3096825.


  9. Selvin S, Vinayakumar R, Gopalakrishnan EA, Menon VK, Soman KP. Stock price prediction using LSTM, RNN and CNN-sliding window model. In: 2017 international conference on advances in computing, communications and informatics (ICACCI). Udupi: IEEE; 2017. p. 1643–7. https://doi.org/10.1109/ICACCI.2017.8126078.

  10. Shintate T, Pichl L. Trend prediction classification for high frequency bitcoin time series with deep learning. J Risk Financ Manag. 2019;12:17. https://doi.org/10.3390/jrfm12010017.


  11. Zoumpekas T, Houstis E, Vavalis M. ETH analysis and predictions utilizing deep learning. Expert Syst Appl. 2020;162: 113866. https://doi.org/10.1016/j.eswa.2020.113866.


  12. Alonso-Monsalve S, Suàirez-Cetrulo AL, Cervantes A, Quintana D. Convolution on neural networks for high-frequency trend prediction of cryptocurrency exchange rates using technical indicators. Expert Syst Appl. 2020;149: 113250. https://doi.org/10.1016/j.eswa.2020.113250.


  13. Corbet S, Lucey B, Urquhart A, Yarovaya L. Cryptocurrencies as a financial asset: a systematic analysis. Int Rev Financ Anal. 2019;62:182–99. https://doi.org/10.1016/j.irfa.2018.09.003.


  14. Li AW, Bastos GS. Stock market forecasting using deep learning and technical analysis: a systematic review. IEEE Access. 2020;8:185232–42. https://doi.org/10.1109/ACCESS.2020.3030226.


  15. Critien JV, Gatt A, Ellul J. Bitcoin price change and trend prediction through twitter sentiment and data volume. Financ Innov. 2022;8:45. https://doi.org/10.1186/s40854-022-00352-7.


  16. Dezhkam A, et al. A Bayesian-based classification framework for financial time series trend prediction. J Supercomput. 2023;79:4622–59. https://doi.org/10.1007/s11227-022-04834-4.


  17. Lee M-C, et al. Applying attention-based BiLSTM and technical indicators in the design and performance analysis of stock trading strategies. Neural Comput Appl. 2022;34:13267–79. https://doi.org/10.1007/s00521-021-06828-4.


  18. Nti IK, Adekoya AF, Weyori BA. A comprehensive evaluation of ensemble learning for stock-market prediction. J Big Data. 2020;7:20. https://doi.org/10.1186/s40537-020-00299-5.


  19. Henrique BM, Sobreiro VA, Kimura H. Literature review: machine learning techniques applied to financial market prediction. Expert Syst Appl. 2019;124:226–51. https://doi.org/10.1016/j.eswa.2019.01.012.


  20. Strader TJ, Rozycki JJ, Root TH, Huang Y-HJ. Machine learning stock market prediction studies: review and research directions. J Int Technol Inf Manag. 2020;28:63–83.


  21. Aslam N, Rustam F, Lee E, Washington PB, Ashraf I. Sentiment analysis and emotion detection on cryptocurrency related tweets using ensemble LSTM-GRU model. IEEE Access. 2022;10:39313–24.


  22. Swathi T, Kasiviswanath N, Rao AA. An optimal deep learning-based LSTM for stock price prediction using twitter sentiment analysis. Appl Intell. 2022;52:13675–88.


  23. Rahaman A, et al. Bitcoin trading indicator: a machine learning driven real time bitcoin trading indicator for the crypto market. Bull Electr Eng Inform. 2023;12:1762–72.


  24. Washington PB, Gali P, Rustam F, Ashraf I. Analyzing influence of covid-19 on crypto & financial markets and sentiment analysis using deep ensemble model. PLoS ONE. 2023;18: e0286541.


  25. Rashid CH, et al. Software cost and effort estimation: current approaches and future trends. IEEE Access. 2023. https://doi.org/10.1109/ACCESS.2023.3312716.


  26. Kumbhare P, Kolhe L, Dani S, Fandade P, Theng D. Algorithmic trading strategy using technical indicators. In: 2023 11th international conference on emerging trends in engineering & technology-signal and information processing (ICETET-SIP). IEEE; 2023. p. 1–6.

  27. Yun KK, Yoon SW, Won D. Prediction of stock price direction using a hybrid GA-XGBoost algorithm with a three-stage feature engineering process. Expert Syst Appl. 2021;186: 115716. https://doi.org/10.1016/j.eswa.2021.115716.


  28. Huang J, Chai J, Cho S. Deep learning in finance and banking: a literature review and classification. Front Bus Res China. 2020;14:13. https://doi.org/10.1186/s11782-020-00082-6.


  29. Nayak SC, Misra BB. Estimating stock closing indices using a GA-weighted condensed polynomial neural network. Financ Innov. 2018;4:21. https://doi.org/10.1186/s40854-018-0104-2.


  30. Parente M, Rizzuti L, Trerotola M. A profitable trading algorithm for cryptocurrencies using a neural network model. Expert Syst Appl. 2024;238: 121806.


  31. Shafi I, Aziz A, Din S, Ashraf I. Reduced features set neural network approach based on high-resolution time-frequency images for cardiac abnormality detection. Comput Biol Med. 2022;145: 105425.


  32. Sezer OB, Ozbayoglu AM. Algorithmic financial trading with deep convolutional neural networks: time series to image conversion approach. Appl Soft Comput. 2018;70:525–38. https://doi.org/10.1016/j.asoc.2018.04.024.


  33. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9:1735–80. https://doi.org/10.1162/neco.1997.9.8.1735.


  34. Le XH, Ho HV, Lee G, Jung S. Application of long short-term memory (LSTM) neural network for flood forecasting. Water. 2019;11:1387. https://doi.org/10.3390/w11071387.


  35. Swathi T, Kasiviswanath N, Rao AA. Modelling of hyperparameter tuned bidirectional long short-term memory with TLBO for stock price prediction model. Int J Intell Syst Appl Eng. 2023;11:753–65.


  36. Swathi T, Kasiviswanath N, Rao AA. Cuckoo search optimization and long-term memory-based stock price prediction model with improved classification performance. In: 2022 IEEE 4th international conference on cybernetics, cognition and machine learning applications (ICCCMLA). IEEE; 2022. p. 397–402.

  37. Donahue J, et al. Long-term recurrent convolutional networks for visual recognition and description. 2016. arXiv:1411.4389.

  38. Umer M, et al. Sentiment analysis of tweets using a unified convolutional neural network-long short-term memory network model. Comput Intell. 2021;37:409–34.


  39. Stoye M, CMS collaboration. Deep learning in jet reconstruction at CMS. J Phys Conf Ser. 2018;1085: 042029. https://doi.org/10.1088/1742-6596/1085/4/042029.


  40. Vonko D. Understanding Fibonacci numbers and their value as a research tool. 2022.

  41. Sethi N, Bhateja N, Singh J, Mor P. Fibonacci retracement in stock market. SSRN Electron J. 2020. https://doi.org/10.2139/ssrn.3701439.


  42. Malato G. Fibonacci retracements in Python. 2021.

  43. Alsubaie Y, Hindi KE, Alsalman H. Cost-sensitive prediction of stock price direction: selection of technical indicators. IEEE Access. 2019;7:146876–92. https://doi.org/10.1109/ACCESS.2019.2945907.


  44. Fister D, Perc M, Jagrič T. Two robust long short-term memory frameworks for trading stocks. Appl Intell. 2021;51:7177–95. https://doi.org/10.1007/s10489-021-02249-x.


  45. Guarino A, Grilli L, Santoro D, Messina F, Zaccagnino R. To learn or not to learn? Evaluating autonomous, adaptive, automated traders in cryptocurrencies financial bubbles. Neural Comput Appl. 2022;34:20715–56. https://doi.org/10.1007/s00521-022-07543-4.


  46. Lu J-Y, et al. Structural break-aware pairs trading strategy using deep reinforcement learning. J Supercomput. 2022;78:3843–82. https://doi.org/10.1007/s11227-021-04013-x.


  47. Sebastião H, Godinho P. Forecasting and trading cryptocurrencies with machine learning under changing market conditions. Financ Innov. 2021;7:3. https://doi.org/10.1186/s40854-020-00217-x.


  48. Wu JM-T, Li Z, Herencsar N, Vo B, Lin JC-W. A graph-based CNN-LSTM stock price prediction algorithm with leading indicators. Multimed Syst. 2021. https://doi.org/10.1007/s00530-021-00758-w.


  49. Li Y, Jiang S, Li X, Wang S. Hybrid data decomposition-based deep learning for bitcoin prediction and algorithm trading. Financ Innov. 2022;8:31. https://doi.org/10.1186/s40854-022-00336-7.


  50. Touzani Y, Douzi K. An LSTM and GRU based trading strategy adapted to the Moroccan market. J Big Data. 2021;8:126. https://doi.org/10.1186/s40537-021-00512-z.


  51. Shah A, Gor M, Sagar M, Shah M. A stock market trading framework based on deep learning architectures. Multimed Tools Appl. 2022;81:14153–71. https://doi.org/10.1007/s11042-022-12328-x.


  52. Chakole J, Kurhekar MP. Convolutional neural network-based a novel deep trend following strategy for stock market trading. In: Cong G, Ramanath M, editors. Proceedings of the CIKM 2021 workshops co-located with 30th ACM international conference on information and knowledge management (CIKM 2021), Gold Coast, Queensland, Australia, November 1–5, 2021, vol. 3052 of CEUR workshop proceedings (CEUR-WS.org, 2021).

  53. Kara Y, Acar Boyacioglu M, Baykan ÖK. Predicting direction of stock price index movement using artificial neural networks and support vector machines: the sample of the Istanbul stock exchange. Expert Syst Appl. 2011;38:5311–9. https://doi.org/10.1016/j.eswa.2010.10.027.


  54. Verma S, Sahu SP, Sahu TP. Stock market forecasting with different input indicators using machine learning and deep learning techniques: a review. Eng Lett. 2023;31.

  55. Neely CJ, Rapach DE, Tu J, Zhou G. Forecasting the equity risk premium: the role of technical indicators. Manag Sci. 2014;60:1772–91. https://doi.org/10.1287/mnsc.2013.1838.



Acknowledgements

The authors extend their appreciation to King Saud University for funding this research through Researchers Supporting Project Number (RSPD2024R890), King Saud University, Riyadh, Saudi Arabia.

Funding

This research is funded by the Researchers Supporting Project Number (RSPD2024R890), King Saud University, Riyadh, Saudi Arabia.

Author information


Contributions

BHAK conceived the idea, performed data curation and wrote the original manuscript. IS conceived the idea, performed formal analysis and wrote the original manuscript. CHR performed data curation and formal analysis and designed the methodology. MS designed methodology, dealt with software and performed visualization. SA acquired funding, performed investigation and visualization. IA supervised this work, performed validation and the write-review and editing. All authors reviewed the manuscript.

Corresponding authors

Correspondence to Sultan Alfarhood or Imran Ashraf.

Ethics declarations

Competing interests

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Khattak, B.H.A., Shafi, I., Rashid, C.H. et al. Profitability trend prediction in crypto financial markets using Fibonacci technical indicator and hybrid CNN model. J Big Data 11, 58 (2024). https://doi.org/10.1186/s40537-024-00908-7


  • DOI: https://doi.org/10.1186/s40537-024-00908-7

Keywords