research-article

A Factor Marginal Effect Analysis Approach and Its Application in E-Commerce Search System

Published: 01 January 2023 Publication History

Abstract

Feature explanation plays an increasingly essential role in e-commerce search platforms. Most existing studies focus on modeling users' interests to estimate the click-through rate (CTR). However, a good e-commerce system needs not only precise ranking to inspire users' shopping desire but also feature explanation to meet the demands of shop owners, for whom the health of online traffic is critical. How to effectively achieve shop owners' multiple goals remains an open problem. In industrial search systems, merchants' key demands fall into two aspects. On the one hand, merchants want an analysis of the rules governing online traffic distribution, to help them understand how traffic flows. On the other hand, they need tools that tell them how to participate in and influence that traffic. To address these issues, we propose a factor marginal effect analysis approach (FMEA) based on game theory, which computes the contribution of each one-dimensional feature to the growth of online traffic. First, we use machine learning to model the business target. Then, we improve the SHAP value algorithm so that it yields clear business insights. Finally, we calculate the marginal effect of each feature on the business outcome. In this way, we provide a traffic analysis guidance method and address merchants' participation challenges. FMEA has been deployed in the App search system of a real-world large Internet company, where it serves online e-commerce traffic to hundreds of millions of consumers. Our approach guides operational decisions effectively, bringing +10.05% revenue for the flow index, +7.54% for the user feedback index, and +2.46% for the service index.
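The attribution step the abstract describes rests on the Shapley value: a feature's contribution is its marginal effect on the modeled business outcome, averaged over all coalitions of the other features. A minimal self-contained sketch of that computation is below; the feature names and the toy traffic-scoring function are illustrative assumptions, not the paper's actual model or data.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: each feature's weighted average marginal
    contribution value(S | {i}) - value(S) over all coalitions S."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k out of n players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Toy "business outcome" model: a traffic score driven by three
# hypothetical shop-side levers, with one interaction effect.
def traffic_score(active):
    base = {"price_discount": 4.0, "ad_spend": 2.0, "review_rate": 1.0}
    score = sum(base[f] for f in active)
    if {"price_discount", "ad_spend"} <= active:
        score += 3.0  # discounting and advertising reinforce each other
    return score

phi = shapley_values(["price_discount", "ad_spend", "review_rate"], traffic_score)
# Efficiency property: the contributions sum to the full-coalition outcome.
assert abs(sum(phi.values())
           - traffic_score({"price_discount", "ad_spend", "review_rate"})) < 1e-9
```

Exact enumeration is exponential in the number of features, which is why practical systems (including SHAP itself) rely on model-specific or sampling-based approximations rather than this brute-force form.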



Published In

International Journal of Intelligent Systems, Volume 2023 (3189 pages)
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Publisher

John Wiley and Sons Ltd., United Kingdom
