Article

A Method for Maintaining a Unique Kurume Kasuri Pattern of Woven Textile Classified by EfficientNet by Means of LightGBM-Based Prediction of Misalignments

1 Information Science Department, Science and Engineering Faculty, Saga University, Saga 840-8502, Japan
2 Applied AI Laboratory, Kurume Institute of Technology, Kurume 830-0052, Japan
* Author to whom correspondence should be addressed.
Information 2024, 15(8), 434; https://doi.org/10.3390/info15080434
Submission received: 11 June 2024 / Revised: 22 July 2024 / Accepted: 24 July 2024 / Published: 26 July 2024
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

Abstract

Methods for evaluating the fluctuation of texture patterns that are essentially regular have been proposed in the past, but no best method has been established. As an attempt at this, we propose a method that applies AI technology (EfficientNet, a widely used image classification model) to determine when the fluctuation exceeds the tolerable limit and what the acceptable range is. We also apply this to clarify the tolerable limit of fluctuation in the “Kurume Kasuri” pattern, which is unique to the Chikugo region of Japan, and devise a method to evaluate the fluctuation in real time while weaving the Kasuri and to keep it within the acceptable range. This study proposes a method for maintaining the unique faded pattern of woven textiles by utilizing EfficientNet for classification, fine-tuned with Optuna, and LightGBM for predicting subtle misalignments. Our experiments show that EfficientNet achieves high performance in classifying the quality of unique faded patterns in woven textiles. Additionally, LightGBM demonstrates near-perfect accuracy in predicting subtle misalignments within the acceptable range for high-quality faded patterns by controlling the weaving thread tension. Consequently, this method effectively maintains the quality of Kurume Kasuri patterns within the desired criteria.

1. Introduction

Methods for evaluating the fluctuation of texture patterns that are essentially regular have been proposed in the past, but no best method has been established. As an attempt at this, we propose a method that applies AI technology (EfficientNet, a widely used image classification model) to determine when the fluctuation exceeds the tolerable limit and what the acceptable range is. We also apply this to clarify the tolerable limit of fluctuation in the “Kurume Kasuri” [1] pattern, which is unique to the Chikugo region of Japan, and devise a method to evaluate the fluctuation in real time while weaving the Kasuri and to keep it within the acceptable range.
Kurume Kasuri is a traditional cotton fabric from the Chikugo region in Kyushu, Japan, crafted through more than 30 steps, including design, tying, dyeing, and weaving. This process, involving yarn-dyed weaving where threads are pre-dyed and patterns are matched during weaving, results in subtle misalignments that create a unique faded pattern, which is the charm of Kurume Kasuri. Although Kurume Kasuri is woven by machine, it is extremely difficult for inexperienced craftsmen to adjust the patterns accurately. Skilled craftsmen rely on their years of experience and intuition, making it challenging to transfer this tacit knowledge to younger workers.
In this study, we first use a Convolutional Neural Network (CNN) to evaluate the texture (quality) of Kurume Kasuri and build a pattern misalignment classification model to determine whether the texture is acceptable (i.e., whether the pattern misalignment is within an acceptable range). This model clarifies the evaluation criteria for pattern deviation, which vary depending on the weaver. By classifying Kurume Kasuri patterns based on weavers’ subjective evaluations, we train the CNN to distinguish between acceptable and unacceptable pattern deviations, thereby defining the evaluation criteria for pattern misalignment.
Adjusting patterns appropriately during the weaving of Kurume Kasuri is extremely challenging for inexperienced craftsmen. Additionally, the reliance on skilled craftsmen’s experience and intuition makes it difficult to transfer this knowledge, resulting in incomplete skill transfer to younger workers. To address this, we propose a pattern misalignment prediction model for Kurume Kasuri, aiming to optimize pattern misalignment by automatically adjusting warp threads based on the model.
Specifically, we treated the height of the circumscribed rectangle of the pattern as time-series data and constructed a one-step-ahead prediction model. Due to the limited availability of actual pattern data, we created simulation data that closely resembled the actual patterns. Using the constructed model, we performed one-step-ahead predictions. If the predicted value exceeded the threshold, we adjusted the black pixels on the top and bottom sides of the circumscribed rectangle by 1 pixel to maintain the pattern deviation within an acceptable range. The threshold value was determined by the pattern shift prediction model. We then compared the results of the adjustments made with and without predictions to verify the effectiveness of the prediction.
This research involves creating a pattern misalignment prediction model using LightGBM to optimize pattern misalignment through predictions and adjustments. We defined the boundary conditions and application limits of this method and verified it against several patterns. The following sections describe related research, the proposed classification and prediction methods, experimental results, and a conclusion with discussions.

2. Related Research Works

As part of the research on Kurume Kasuri, an analysis of the demand structure for this traditional textile was conducted through an online questionnaire survey [2]. To evaluate the quality of Kurume Kasuri, we proposed a method that assesses the pattern deviation by considering it as 1/f fluctuation [3]. Additionally, a visual evaluation model for lace patterns was constructed using a neural network with a Kalman filter as a learning algorithm [4], though this did not evaluate pattern quality.
Notable image recognition models using Convolutional Neural Networks (CNNs) include AlexNet [5], which significantly outperformed previous records in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2012. VGGNet [6], which ranked second in 2014, and GoogLeNet [7], which ranked first that year, further demonstrated the effectiveness of CNNs.
Hyperparameter optimization methods such as Hyperopt [8], GPyOpt [9], PyCaret [10], and Optuna [11] have been proposed. Optuna, which uses the Tree-structured Parzen Estimator (TPE) algorithm, is notable for its capabilities in parallel processing and resuming mid-optimization by saving results in a database. We previously proposed a hyperparameter tuning method for image classification using PyCaret and verified its effectiveness [12]. Research comparing hyperparameter optimization tools has shown that Optuna performs best in searching for optimal machine learning algorithms and corresponding hyperparameters (CASH) [13].
In time series prediction research, models like XGBoost [14] and LightGBM [15] have been used, with LightGBM showing the highest performance [16] in tasks such as predicting subway passenger numbers and greenhouse temperature. LightGBM’s short learning times and accurate predictions have been demonstrated in various applications [17], including the Kaggle M5 Forecasting Accuracy competition, where it was used by most of the top 50 teams and the winning team [18].
For optimal threshold determination, the Otsu method is commonly used [19]. Other approaches include mixup for empirical risk minimization [20] and Generative Adversarial Nets (GANs) [21]. Recently, research on classifying pattern deviations in Kurume Kasuri using CNN has been reported [22].
To build a high-accuracy pattern misalignment classification model, we used pretrained EfficientNetV2, which achieves both learning efficiency and high classification accuracy through Neural Architecture Search (NAS) and model scaling. Hyperparameter tuning for EfficientNetV2 [23] was performed using Optuna. The implementation involved using TensorFlow 2.0 in Python, then adding Global Average Pooling (GAP) [24] and dropout to the final layer of the pretrained EfficientNetV2 to adapt it for binary classification. GAP reduces the number of parameters compared to fully connected layers. We propose a method for hyperparameter tuning of EfficientNetV2-based image classification by modifying the results tuned by Optuna deliberately [25].

3. Proposed Method

3.1. Example of Kurume Kasuri

Figure 1 shows a typical Kurume Kasuri pattern, consisting of 900 warp threads (there are two types of weaving threads, warp and weft). During the weaving process, variations in the tension of these vertical threads can lead to the creation of unique faded patterns, contributing to the fabric’s charm. However, if the tension variations exceed a certain limit, the resulting irregularities in the faded patterns can render the fabric unsellable, as they fall outside the acceptable range. On the loom or weaving machine, tension is applied to the warp threads in the downward direction shown in Figure 1.
Figure 2 illustrates typical “Good” and “Bad” patterns of Kurume Kasuri. It is essential to classify and detect these patterns accurately. If a “Bad” pattern is predicted, the tension of the vertical threads must be adjusted to bring it within the allowable range.

3.2. Classification Method

To classify whether the pattern deviation is within an acceptable range, we used the deep learning library Keras in Python 3.12.4 to build a CNN model, commonly used in image recognition. A CNN mainly consists of convolutional layers and pooling layers, typically structured so that these layers are repeated several times before passing through a fully connected layer to produce the output. The convolutional layer applies a filter (kernel) to the input image, performing a convolution operation. A pooling layer generally follows the convolutional layer to downsample the convolved image. Drawing inspiration from notable architectures such as AlexNet, which set a record in ILSVRC 2012, and VGGNet, which enhances expressiveness through multi-layering and reduces the number of parameters by using 3 × 3 convolution filters, we incorporated dropout and ReLU, as in existing CNN-based image recognition models, and performed data augmentation.
In this study, we employed pretrained EfficientNet for classification, alongside Optuna for hyperparameter tuning. EfficientNet is renowned for its high classification performance, making it suitable for our study. For implementation, we used TensorFlow 2.0 in Python 3.12.4 to add Global Average Pooling (GAP) and dropout to the final layer of EfficientNetV2, a model pretrained on ImageNet, adapting it for binary classification. GAP is a layer that averages each feature map obtained in the previous layer, reducing the number of parameters compared to fully connected layers. The network configuration using EfficientNetV2 with GAP and dropout is shown in Figure 3. The model classifies patterns as good or bad based on overall criteria obtained from the questionnaire.
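The network configuration described above can be sketched in TensorFlow/Keras roughly as follows. This is a minimal sketch, not the authors' exact code: the input size and the choice of the EfficientNetV2B0 variant are our assumptions, and the default dropout rate shown is the Optuna-tuned value reported in Table 1.

```python
import tensorflow as tf

def build_classifier(input_shape=(224, 224, 3), dropout_rate=0.129,
                     weights="imagenet"):
    """EfficientNetV2 backbone + GAP + dropout + sigmoid head for the
    binary good/bad classification (dropout_rate defaults to the
    Optuna-tuned value in Table 1)."""
    base = tf.keras.applications.EfficientNetV2B0(
        include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False  # transfer learning: freeze the pretrained backbone
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),  # GAP instead of dense layers
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # good (1) / bad (0)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

For fine-tuning after transfer learning, `base.trainable` would subsequently be set to `True` and training resumed with a small learning rate.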

3.3. Prediction Method

We used LightGBM [15] as a one-period prediction model. The learning data consisted of the heights of the circumscribed rectangles from the simulation data. Additionally, hyperparameter tuning was performed using Optuna during the training process. The proposed method was applied to several other texture patterns. One of the examples is shown in Figure 4.
The top three patterns (plus signs) exceeded the acceptable range, while the bottom two patterns did not exceed the acceptable range. The data used were pattern images extracted from scanned Kurume Kasuri and image data that had been subjected to data augmentation (noise, skew) and sorted into acceptable range (good) and unacceptable range (bad) (training: 180 images; test: 30 images). For the image recognition model, transfer learning and fine-tuning were performed using a model in which a GAP layer and dropout were added to the final layer of EfficientNetV2 that had been trained on ImageNet. As a result, classification results were similar to those of the Kasuri patterns in Figure 1.

4. Experiment

4.1. Data Used

The data used consisted of 70 pattern images extracted from scanned images of Kurume Kasuri referencing the Kasuri pattern in Figure 1. Figure 1 shows 72 of the scanned texture patterns (essentially forming a rectangular texture pattern). By adding salt-and-pepper noise and skewing, we expanded this dataset to a total of 210 images (180 for training and 30 for testing). These images were used for classification.
Since the dataset was too small to use actual patterns as time series data, we created simulation data. To generate patterns that closely resemble the actual ones, we needed information about the real patterns. Known information included the fact that the actual pattern consists of 16 threads, and that deviations generally increase over time unless adjustments are made. One of the texture patterns also consisted of 16 warp threads. Additional necessary information, such as the pattern’s width, height, and thread thickness, was obtained by processing the ideal pattern image using OpenCV (Figure 5a). The specific steps are as follows:
(1) Tilt correction using affine transformation (Figure 5b);
(2) Contrast enhancement (Figure 5c);
(3) Binarization (Figure 5d);
(4) Closing to remove noise (Figure 5e);
(5) Opening to eliminate gaps in the pattern (Figure 5f);
(6) Counting the number of black pixels in each column and row, using the most frequent values as the width and height of the pattern;
(7) Determining the thread thickness by dividing the pattern width by 16 (the number of threads).
As a result, the pattern width was determined to be 48 pixels, the height 36 pixels, and the thread thickness 3 pixels. Although the mode of the number of black pixels per row obtained in step (6) was 47 pixels, we opted for 48 pixels because it was the second most frequent value, and it aligned with the 16-thread structure. Based on this information, we generated simulation data by adding a discrete uniform random number (0 or 1) to the starting position of the pattern, causing the deviation to gradually increase. A total of 500 simulated patterns were created (Figure 6). Note that the seed value for random number generation was not specified.
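The simulation can be sketched in plain Python as follows. This is a minimal stand-in, not the authors' generator: we track only each thread's start offset, model the bounding-rectangle height as the base height plus the spread of those offsets, and fix a seed for reproducibility (the paper left the seed unspecified).

```python
import random

def simulate_heights(n_patterns=500, n_threads=16, base_height=36, seed=0):
    """Simulate the circumscribed-rectangle height of successive patterns.

    Each warp thread's start position drifts by a discrete uniform random
    offset (0 or 1 pixel) per pattern, so the spread of start positions --
    and hence the bounding-rectangle height -- gradually grows.
    """
    rng = random.Random(seed)
    starts = [0] * n_threads
    heights = []
    for _ in range(n_patterns):
        starts = [s + rng.randint(0, 1) for s in starts]  # per-thread drift
        heights.append(base_height + (max(starts) - min(starts)))
    return heights
```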

4.2. Results

Regarding transfer learning with pretrained EfficientNet, Table 1 presents the hyperparameters and prediction accuracy for test data when optimized manually and using Optuna. The two hyperparameters optimized were the dropout rate and batch size. The results indicate that using the optimal hyperparameters obtained through Optuna resulted in better accuracy.
Figure 7 illustrates the importance of hyperparameters as determined by Optuna optimization. The importance score for the dropout rate is 0.77, while for the batch size it is 0.23, indicating that the dropout rate has a higher impact on model performance.
Table 2 presents the hyperparameters and prediction accuracy for test data when optimized manually and using Optuna in the context of transfer learning and fine-tuning.
The hyperparameters searched included the dropout rate, learning rate, number of epochs, and batch size for transfer learning, as well as the batch size for fine-tuning. Like the scenario where only transfer learning was performed, using the optimal hyperparameters identified by Optuna resulted in improved accuracy. Figure 8 shows the importance of hyperparameters obtained through Optuna optimization. The dropout rate was the most important with an importance score of 0.49, while the batch size for transfer learning had the lowest importance at 0.05.
Figure 9a illustrates the change in accuracy when only the highly important dropout rate was adjusted. Unlike in the case of transfer learning alone, the accuracy changed significantly. When the dropout rate was set to 0.4, the accuracy reached 90%, surpassing the Optuna-optimized accuracy of 80%. Figure 9b shows the change in accuracy when only the batch size for transfer learning, which was of low importance, was adjusted. Unlike in the case of transfer learning alone, the accuracy varied to some extent. The accuracy with the Optuna-optimized batch size exceeded 80%, matching the result obtained when only the dropout rate was adjusted.
In both the case of transfer learning only and the case of performing fine-tuning after transfer learning, the dropout rate was the most important hyperparameter, while the other hyperparameters had categorical search ranges. This importance is likely due to the dropout rate’s continuous and wide search range. The irregular accuracy changes observed when only the dropout rate was adjusted can be attributed to the random nature of dropout, which may occasionally remove nodes with significant features for classification. To measure accuracy more reliably, K-fold cross-validation is necessary.
In transfer learning with fine-tuning, adjusting the hyperparameters with the highest and lowest importance improved accuracy compared to the Optuna results. This improvement is likely because the Tree-structured Parzen Estimator (TPE) algorithm used by Optuna, based on Bayesian optimization, does not exhaustively search all hyperparameters the way grid search does, and can therefore settle on a locally optimal solution.
We proposed a method of intentionally altering the hyperparameters obtained through Optuna optimization and selecting those that yield greater accuracy through trial and error. In both transfer learning alone and fine-tuning after transfer learning, using the hyperparameters optimized by Optuna significantly improved accuracy compared to manual settings. In transfer learning with fine-tuning, adjusting the most important hyperparameters resulted in better accuracy than the original Optuna results. Ultimately, the highest accuracy achieved in transfer learning was 90% when the dropout rate, a crucial hyperparameter, was adjusted to 0.4. Similarly, in the case of fine-tuning after transfer learning, changing the dropout rate to 0.4 also resulted in an accuracy of 90%.
We used LightGBM [15] as a one-period prediction model. The learning data consisted of the height of the circumscribed rectangle from the simulation data created in the previous section, determined using the same procedure discussed earlier. Figure 10 shows the transition of the height of the bounding rectangle. Additionally, hyperparameter tuning during learning was performed using Optuna.
Using the constructed LightGBM model, we performed one-period predictions on newly created simulation data (100 patterns). The results demonstrated high prediction accuracy, with an RMSE of 0.689 and an R2 of 0.986 (Figure 11).
To effectively use the LightGBM model for one-period predictions and make adjustments when the predicted value exceeds a certain threshold, it is necessary to determine this threshold value. In this study, the threshold was determined using the constructed LightGBM model.
The revised steps are as follows:
(1) Set the initial threshold value to the height of the actual pattern obtained from the data used: 36 pixels + 10 = 46 pixels;
(2) Create a pattern, determine the height of the circumscribed rectangle, and perform one-step-ahead prediction using the LightGBM model;
(3) If the predicted value is greater than or equal to the threshold, adjust the pattern by increasing black pixels on the top side of the circumscribed rectangle by 1 pixel and decreasing black pixels on the bottom side by 1 pixel;
(4) If the height of the circumscribed rectangle of the pattern created in step (3) exceeds the current threshold three times in total, lower the threshold by 1 pixel and repeat from step (2);
(5) Repeat the above process 500 times and use the threshold value at which the threshold no longer decreases as the final threshold.
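The control flow of these steps can be sketched as follows, with the pattern source, forecaster, and tension adjustment passed in as callables (in the paper the forecaster is the trained LightGBM model; the stubs in the usage below are purely illustrative).

```python
def determine_threshold(next_height, predict, adjust, n_iter=500,
                        init_threshold=46):
    """Control-flow sketch of the threshold-determination steps.
    next_height() returns the measured bounding-rectangle height of the
    next woven pattern, predict(h) is the one-step-ahead forecast, and
    adjust(h) applies the 1 px top / 1 px bottom tension correction."""
    threshold = init_threshold       # step (1): 36 px + 10 px = 46 px
    exceeded = 0
    for _ in range(n_iter):
        h = next_height()            # step (2): weave and measure
        if predict(h) >= threshold:  # step (3): forecast exceeds -> adjust
            h = adjust(h)
        if h >= threshold:           # step (4): still exceeds after adjustment
            exceeded += 1
            if exceeded == 3:        # three exceedances -> lower the threshold
                threshold -= 1
                exceeded = 0
    return threshold                 # step (5): final, no-longer-decreasing value
```

With a perfect forecaster every tall pattern is corrected in time and the threshold never drops; an underestimating forecaster lets patterns slip through and drives the threshold down, which is why the procedure converges to a value reflecting the model's prediction error.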
As a result of the above steps, the height of the circumscribed rectangle of the created pattern stabilized at 40 pixels, and the final threshold was set to 41 pixels (Figure 12).
By using the determined threshold and comparing the degree of pattern deviation when adjusting with and without prediction, we can evaluate the effectiveness of the proposed method in suppressing pattern deviation. The specific steps are as follows:
For adjustment without prediction:
(1) Set the threshold to 41;
(2) Create a pattern and determine the height of the circumscribed rectangle;
(3) If the height of the circumscribed rectangle is greater than or equal to the threshold, adjust the pattern by increasing black pixels on the top side of the circumscribed rectangle by 1 pixel and decreasing black pixels on the bottom side by 1 pixel, then repeat step (2);
(4) Repeat the above process 100 times.
For adjustment with prediction:
(1) Set the threshold to 41;
(2) Create a pattern, determine the height of the circumscribed rectangle, and perform one-step-ahead prediction using the LightGBM model;
(3) If the predicted value is greater than or equal to the threshold, adjust the pattern by increasing black pixels on the top side of the circumscribed rectangle by 1 pixel and decreasing black pixels on the bottom side by 1 pixel, then repeat step (2);
(4) Repeat the above process 100 times.
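The two comparison loops can be sketched on a one-dimensional stand-in as follows. The upward height drift, the 2 px effect of an adjustment, and the worst-case "+1 px" forecast are all our simplifications of the pattern simulation and the LightGBM model; the sketch only illustrates why acting on a forecast keeps heights below the threshold while reacting to measurements lets them touch it.

```python
import random

def run_adjustment(n_iter=100, threshold=41, use_prediction=True, seed=0):
    """Record bounding-rectangle heights over n_iter patterns, adjusting
    warp tension either on a forecast (use_prediction=True) or only once
    the measured height reaches the threshold."""
    rng = random.Random(seed)
    height = 36
    recorded = []
    for _ in range(n_iter):
        height += rng.randint(0, 1)  # weave and measure the next pattern
        recorded.append(height)
        # With prediction, act on the forecast height of the next pattern;
        # without, act only once the measured height reaches the threshold.
        trigger = height + 1 if use_prediction else height
        if trigger >= threshold:
            height -= 2              # 1 px top + 1 px bottom adjustment
    return recorded
```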
Upon analyzing the height of the circumscribed rectangle of each pattern created, it was observed that when adjustments are made with prediction, there are almost no patterns that reach the threshold, indicating that deviation can be effectively suppressed (Figure 13).
Additionally, all created patterns were classified using the EfficientNetV2 model, which achieved 90% accuracy when the dropout rate, the most important hyperparameter identified by Optuna in transfer learning, was set to 0.4. Upon classification, two patterns were classified as bad when adjusted without prediction, whereas all patterns were classified as good when adjusted with prediction (Table 3).
We proposed a method for optimizing pattern alignment by automatically adjusting warp threads based on a pattern misalignment prediction model. Through experiments, we confirmed the effectiveness of LightGBM’s one-step-ahead prediction in optimizing pattern misalignment using the constructed model.
Regarding the method used to determine the threshold, it relies on the prediction accuracy of the LightGBM model and may not always result in an accurate threshold. Additionally, the EfficientNetV2 model used for comparison, which achieved 90% accuracy, may not always produce correct classification results.

5. Conclusions

In this paper, we propose a pattern shift classification method for Kurume Kasuri using an image recognition model, along with a pattern shift prediction and adjustment method using LightGBM. Through experiments, we demonstrate the capability to classify pattern misalignment using image recognition and optimize pattern alignment by predicting and adjusting pattern misalignment.
We describe a method to enhance the accuracy of pattern misalignment classification models through hyperparameter tuning using Optuna. Additionally, we illustrate the potential to optimize pattern alignment by automatically adjusting warp threads based on a one-step-ahead prediction model using LightGBM.
The proposed method shown in the first half of this paper can be applied to any pattern as a way of determining whether the fluctuation of a regular texture pattern exceeds the tolerance limit. The Kasuri weaving method shown in the second half, which maintains pattern fluctuation within the tolerance limit, is specific to this fabric and does not directly generalize to other textiles. When generating textile patterns over time, however, the approach presented here, combining Optuna hyperparameter tuning and LightGBM, can be applied. In other words, the proposed method can detect pattern fluctuation in real time and suppress it before it exceeds the tolerance limit.

6. Future Research Works

For quality evaluation, we think it is necessary to verify patterns other than rectangular patterns (polka dots, etc.) and to consider other optimization methods to automate hyperparameter search. In pattern shift prediction, the challenge is to improve the accuracy of the constructed model.

Author Contributions

Methodology, K.A.; Software, J.S.; Resources, M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank Hiroshi Okumura and Osamu Fukuda of Saga University for their valuable discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nakamura, K. History of Kurume Kasuri. J. Jpn. Inst. Text. Technol. 2005, 61, 152–156. [Google Scholar]
  2. Uchiyama, T. Demand structure analysis of traditional craft Kurume Kasuri. Economics 2020, 24, 33–52. [Google Scholar]
  3. Shimazoe, J.; Arai, K.; Oda, M.; Oh, J. Method for 1/f Fluctuation Component Extraction from Images and Its Application to Improve Kurume Kasuri Quality Estimation. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 465–471. [Google Scholar] [CrossRef]
  4. Mori, T. Visual evaluation of lace patterns using neural networks. J. Jpn. Soc. Home Econ. 2000, 51, 147–156. [Google Scholar]
  5. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 25 (NIPS 2012), Lake Tahoe, NV, USA, 3–6 December 2012. [Google Scholar]
  6. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  7. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  8. James, B.; Yamins, D.; Cox, D.D. Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science Conference, Austin, TX, USA, 9–24 June 2013. [Google Scholar]
  9. The GPyOpt Authors. GPyOpt: A Bayesian Optimization Framework in Python. 2016. Available online: http://github.com/SheffieldML/GPyOpt (accessed on 23 July 2024).
  10. Ali, M. PyCaret: An Open Source, Low-Code Machine Learning Library in Python. Available online: https://www.pycaret.org (accessed on 23 July 2024).
  11. Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A next generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019. [Google Scholar]
  12. Arai, K.; Shimazoe, J.; Oda, M. Method for Hyperparameter Tuning of Image Classification with PyCaret. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 276–282. [Google Scholar] [CrossRef]
  13. Shekhar, S.; Bansode, A.; Salim, A. A comparative study of hyper-parameter optimization tools. In Proceedings of the 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Brisbane, Australia, 8–10 December 2021. [Google Scholar]
  14. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
  15. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. Lightgbm: A highly efficient gradient boosting decision tree. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  16. Zhang, Y.; Zhu, C.; Wang, Q. LightGBM-based model for metro passenger volume forecasting. IET Intell. Transp. Syst. 2020, 14, 1815–1823. [Google Scholar] [CrossRef]
  17. Cao, Q.; Wu, Y.; Yang, J.; Yin, J. Greenhouse Temperature Prediction Based on Time-Series Features and LightGBM. Appl. Sci. 2023, 13, 1610. [Google Scholar] [CrossRef]
  18. Makridakis, S.; Spiliotis, E.; Assimakopoulos, V. M5 accuracy competition: Results, findings, and conclusions. Int. J. Forecast. 2022, 38, 1346–1364. [Google Scholar] [CrossRef]
  19. Otsu, N. Automatic threshold selection method based on discriminant and least squares criterion. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 1980, 63, 349–356. [Google Scholar]
  20. Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv 2017, arXiv:1710.09412. [Google Scholar]
  21. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
  22. Shimazoe, J.; Arai, K.; Oda, M.; Oh, J. Classification of pattern deviation in Kurume Kasuri using convolutional neural network. Kurume Inst. Technol. Res. Rep. 2022, 45, 87–94. [Google Scholar]
  23. Tan, M.; Le, Q. EfficientnetV2: Smaller models and faster training. Int. Conf. Mach. Learn. PMLR 2021, 139, 10096–10106. [Google Scholar]
  24. Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv 2013, arXiv:1312.4400. [Google Scholar]
  25. Shimazoe, J.; Arai, K.; Oda, M. Method for Hyperparameter Tuning of EfficientNetV2-based Image Classification by Deliberately Modifying Optuna Tuned Result. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 463–468. [Google Scholar] [CrossRef]
Figure 1. A typical Kurume Kasuri pattern (the warp yarn is black cotton, while the weft is white).
Figure 2. Examples of good patterns and bad patterns.
Figure 3. Network configuration of the classification method used (EfficientNetV2 with GAP and dropout).
Figure 4. Another example of Kasuri pattern fluctuations.
Figure 5. Procedures of image processing.
Figure 6. Generated simulation patterns.
Figure 7. Importance of hyperparameters when transfer learning was performed.
Figure 8. Importance of hyperparameters when transfer learning and fine-tuning were performed.
Figure 9. Change in accuracy when changing hyperparameters of high versus low importance among the hyperparameters obtained by optimization with Optuna.
Figure 10. Transition of height of bounding rectangle.
Figure 11. Accuracy of trained LightGBM models.
Figure 12. Transition of height of bounding rectangle by decreasing threshold.
Figure 13. The difference in the height of the pattern’s bounding rectangle with and without prediction.
Table 1. Hyperparameters and prediction accuracy for test data when optimized manually and with Optuna.

                Manual     Optuna 1
Dropout Rate    0.5        0.129 [0~0.5]
Batch Size      16         32 [16, 32, 64]
Accuracy        76.67%     90%

1 The hyperparameter search range is in [ ].
Table 2. Hyperparameters and prediction accuracy for test data when optimized manually and with Optuna in the context of transfer learning and fine-tuning.

                                Manual     Optuna 1
Dropout Rate                    0.5        0.124 [0~0.5]
Learning Rate                   0.001      0.001 [0.001, 0.0005, 0.0001]
Epoch (Transfer Learning)       10         15 [10, 15, 20]
Batch Size (Transfer Learning)  16         32 [16, 32]
Batch Size (Fine-Tuning)        16         32 [16, 32]
Accuracy                        50%        80%

1 The hyperparameter search range is in [ ].
Table 3. Pattern shift classification results using EfficientNetV2 model.

                 Good    Bad
Non-prediction   98      2
Prediction       100     0


