 Retail


Fast Multi-Resolution Transformer Fine-tuning for Extreme Multi-label Text Classification

Neural Information Processing Systems

Extreme multi-label text classification (XMC) seeks to find relevant labels from an extremely large label collection for a given text input. Many real-world applications can be formulated as XMC problems, such as recommendation systems, document tagging and semantic search. Recently, transformer-based XMC methods, such as X-Transformer and LightXML, have shown significant improvement over other XMC methods. Despite leveraging pre-trained transformer models for text representation, fine-tuning transformer models on a large label space still requires lengthy computational time even with powerful GPUs. In this paper, we propose a novel recursive approach, XR-Transformer, to accelerate the procedure by recursively fine-tuning transformer models on a series of multi-resolution objectives related to the original XMC objective function. Empirical results show that XR-Transformer takes significantly less training time compared to other transformer-based XMC models while yielding new state-of-the-art results. In particular, on the public Amazon-3M dataset with 3 million labels, XR-Transformer is not only 20x faster than X-Transformer but also improves the Precision@1 from 51% to 54%. Our code is publicly available at https://github.com/amzn/pecos.
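To make the multi-resolution idea concrete, the following is a minimal sketch (not the released XR-Transformer implementation) of recursively fine-tuning one encoder against coarse-to-fine label clusterings. It assumes single-label targets for brevity, builds the hierarchy with plain k-means over label feature vectors, and leaves the per-stage training step as a hypothetical `fit_head` callback.

```python
# Minimal sketch of multi-resolution fine-tuning (not the official XR-Transformer code).
# The encoder is any text model with a swappable linear head; cluster assignments are
# built with plain k-means over label feature vectors for illustration only.
import numpy as np
from sklearn.cluster import KMeans

def build_label_resolutions(label_features, sizes=(64, 1024, None)):
    """Map each of L labels to a cluster id at increasingly fine resolutions.

    `None` means the finest resolution: the original labels themselves.
    """
    L = label_features.shape[0]
    resolutions = []
    for k in sizes:
        if k is None or k >= L:
            resolutions.append(np.arange(L))          # finest level: identity mapping
        else:
            resolutions.append(KMeans(n_clusters=k).fit_predict(label_features))
    return resolutions

def recursive_finetune(encoder, train_texts, train_labels, resolutions, fit_head):
    """Fine-tune the same encoder repeatedly against coarse-to-fine objectives.

    `fit_head(encoder, texts, targets, num_classes)` is a placeholder for one round of
    supervised fine-tuning with a fresh output layer of the given size.
    """
    for cluster_of_label in resolutions:
        num_classes = int(cluster_of_label.max()) + 1
        # Relabel every training example by the cluster its true label falls into.
        coarse_targets = [cluster_of_label[y] for y in train_labels]
        # Warm start: the encoder weights carry over, only the head is re-initialized.
        fit_head(encoder, train_texts, coarse_targets, num_classes)
    return encoder
```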


Maximizing Revenue under Market Shrinkage and Market Uncertainty

Neural Information Processing Systems

A shrinking market is a ubiquitous challenge faced by various industries. In this paper we formulate the first formal model of shrinking markets in multi-item settings, and study how mechanism design and machine learning can help preserve revenue in an uncertain, shrinking market. Via a sample-based learning mechanism, we prove the first guarantees on how much revenue can be preserved by truthful multi-item, multi-bidder auctions (for limited supply) when only a random unknown fraction of the population participates in the market. We first present a general reduction that converts any sufficiently rich auction class into a randomized auction robust to market shrinkage. Our main technique is a novel combinatorial construction called a winner diagram that concisely represents all possible executions of an auction on an uncertain set of bidders. Via a probabilistic analysis of winner diagrams, we derive a general possibility result: a sufficiently rich class of auctions always contains an auction that is robust to market shrinkage and market uncertainty. Our result has applications to important practically constrained settings such as auctions with a limited number of winners. We then show how to efficiently learn an auction that is robust to market shrinkage by leveraging practically efficient routines for solving the winner determination problem.
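As a loose illustration of the sample-based flavor of the result (not the paper's winner-diagram construction, and restricted to a single-item setting), the sketch below estimates by simulation how much revenue each auction in a small class of second-price auctions with reserves preserves when each bidder independently drops out of the market at an unknown rate, and then picks the best candidate.

```python
# Illustrative sketch only: the auction class here is single-item second-price auctions
# with different reserve prices; the paper's multi-item, winner-diagram machinery is far
# more general. Bid profiles and shrinkage levels are made-up toy data.
import random

def revenue_second_price_with_reserve(bids, reserve):
    qualified = sorted((b for b in bids if b >= reserve), reverse=True)
    if not qualified:
        return 0.0
    # Winner pays max(second-highest qualified bid, reserve).
    return max(qualified[1] if len(qualified) > 1 else 0.0, reserve)

def estimate_shrinkage_revenue(bid_profiles, reserve, participation_rates, trials=2000):
    """Average revenue when each bidder independently participates at a random rate."""
    total = 0.0
    for _ in range(trials):
        rate = random.choice(participation_rates)      # unknown market-shrinkage level
        bids = random.choice(bid_profiles)
        surviving = [b for b in bids if random.random() < rate]
        total += revenue_second_price_with_reserve(surviving, reserve)
    return total / trials

if __name__ == "__main__":
    random.seed(0)
    samples = [[random.uniform(0, 1) for _ in range(20)] for _ in range(200)]
    rates = [0.3, 0.5, 0.8]                             # candidate shrinkage levels
    best = max((estimate_shrinkage_revenue(samples, r, rates), r)
               for r in [0.0, 0.2, 0.4, 0.6])
    print("estimated revenue %.3f at reserve %.1f" % best)
```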


Optimal Rates for Random Order Online Optimization

Neural Information Processing Systems

We study online convex optimization in the random order model, recently proposed by Garber et al. [8], where the loss functions may be chosen by an adversary, but are then presented to the online algorithm in a uniformly random order. Focusing on the scenario where the cumulative loss function is (strongly) convex, yet individual loss functions are smooth but might be non-convex, we give algorithms that achieve the optimal bounds and significantly outperform the results of Garber et al. [8], completely removing the dimension dependence and improving their scaling with respect to the strong convexity parameter. Our analysis relies on novel connections between algorithmic stability and generalization for sampling without replacement, analogous to those studied in the with-replacement i.i.d. setting.
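For readers unfamiliar with the random order model, here is a toy simulation (not the paper's algorithm): an adversary fixes smooth quadratic losses whose sum is strongly convex, the losses arrive in a uniformly random order, and plain online gradient descent with a 1/(mu*t) step size processes them while regret is tracked against the offline minimizer.

```python
# Toy illustration of the random-order model: fixed quadratic losses, random arrival
# order, online gradient descent with the standard strongly-convex step size.
import numpy as np

rng = np.random.default_rng(0)
d, T, mu = 5, 500, 1.0                      # dimension, rounds, strong convexity of the sum

# Each round's loss is f_t(x) = 0.5 * ||x - a_t||^2: individually smooth, strongly convex in sum.
targets = rng.normal(size=(T, d))
order = rng.permutation(T)                  # uniformly random arrival order

x = np.zeros(d)
x_star = targets.mean(axis=0)               # offline minimizer of the cumulative loss
regret_terms = []
for t, idx in enumerate(order, start=1):
    a = targets[idx]
    loss = 0.5 * np.sum((x - a) ** 2)
    regret_terms.append(loss - 0.5 * np.sum((x_star - a) ** 2))
    grad = x - a
    x -= grad / (mu * t)                    # step size 1/(mu * t)

print("average regret after %d rounds: %.4f" % (T, np.mean(regret_terms)))
```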


John Lewis is using AI to scan shoppers' faces to check they are old enough to buy KNIVES

Daily Mail - Science & tech

Using a technology called facial age estimation, John Lewis' online store can now check whether shoppers are over 18 without needing to see their ID.


Robot packers and AI cameras: UK retail embraces automation to cut staff costs

The Guardian

Electronic shelf labels, returns machines, robot bag packers and yet more self-service tills – just some of the many technologies that UK retailers are embracing as they try to solve the problem of rising labour costs. Investment in automation was a constant drumbeat amid the flurry of festive trading updates from big retailers in the past few weeks, as they face higher staffing bills from April after the rise in the national minimum wage and employers' national insurance contributions (NICs). The investments could improve productivity – a key government aim – in an industry long reliant on cheap labour. However, they will also replace entry-level jobs and reduce the number of roles in a sector that is the UK's biggest employer. When the British Retail Consortium asked leading retailers' finance directors how they would be responding to the impending increase in employers' NICs, almost a third said they would be using more automation, although this sat behind raising prices, cutting head office jobs and reducing working hours.


Large-Scale Price Optimization via Network Flow

Neural Information Processing Systems

This paper deals with price optimization: finding the pricing strategy that maximizes revenue or profit on the basis of demand forecasting models. Though recent advances in regression technologies have made it possible to reveal the price-demand relationships of a large number of products, most existing price optimization methods, such as mixed integer programming formulations, cannot handle tens or hundreds of products because of their high computational costs. To cope with this problem, this paper proposes a novel approach based on network flow algorithms. We reveal a connection between supermodularity of the revenue and cross elasticity of demand. On the basis of this connection, we propose an efficient algorithm that employs network flow algorithms. The proposed algorithm can handle hundreds or thousands of products, and returns an exact optimal solution under an assumption regarding cross elasticity of demand. Even if the assumption does not hold, the proposed algorithm can efficiently find approximate solutions as good as those of other state-of-the-art methods, as empirical results show.
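The reduction can be illustrated on a stylized special case: if each product chooses between two candidate prices and revenue is supermodular in those binary choices (a toy stand-in for the paper's cross-elasticity condition), then maximizing revenue is a pairwise-submodular minimization, which an s-t minimum cut solves exactly. The sketch below uses the standard graph-cut construction for such energies with networkx; it is not the paper's algorithm, just the underlying reduction in miniature, and the products, prices and cross terms are invented.

```python
# Hedged toy sketch: binary price choices solved exactly via an s-t minimum cut.
import networkx as nx

def solve_binary_prices(unary, pairwise):
    """Minimize E(x) = sum_i unary[i][x_i] + sum_{(i,j)} pairwise[(i,j)][x_i][x_j], x_i in {0,1}.

    Each pairwise table ((A, B), (C, D)) must satisfy B + C >= A + D (submodularity).
    Returns the optimal assignment as a dict node -> 0/1.
    """
    n = len(unary)
    # Per-node "extra cost of choosing 1", accumulated from unary and pairwise terms.
    delta = [unary[i][1] - unary[i][0] for i in range(n)]
    G = nx.DiGraph()
    G.add_node("s"); G.add_node("t")
    for (i, j), ((A, B), (C, D)) in pairwise.items():
        assert B + C >= A + D - 1e-9, "pairwise term must be submodular"
        delta[i] += C - A                          # linear-in-x_i part of the decomposition
        delta[j] += D - C                          # linear-in-x_j part
        G.add_edge(i, j, capacity=B + C - A - D)   # paid when x_i = 0 and x_j = 1
    for i in range(n):
        if delta[i] >= 0:
            G.add_edge("s", i, capacity=delta[i])  # cut when x_i = 1
        else:
            G.add_edge(i, "t", capacity=-delta[i]) # cut when x_i = 0
    _, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    return {i: int(i in sink_side) for i in range(n)}

# Toy instance: 3 products, x_i = 1 means "discount". Revenue is supermodular,
# so we minimize its negation.
solo = [(10, 8), (6, 7), (9, 9)]                     # revenue at (full price, discount)
cross = {(0, 1): 3, (1, 2): 2}                       # extra revenue when both i and j discount
unary = [(-a, -b) for a, b in solo]
pairwise = {e: ((0, 0), (0, -bonus)) for e, bonus in cross.items()}
print("discount decisions:", solve_binary_prices(unary, pairwise))
```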


Assortment Optimization Under the Mallows model

Neural Information Processing Systems

We consider the assortment optimization problem when customer preferences follow a mixture of Mallows distributions. The assortment optimization problem focuses on determining the revenue/profit maximizing subset of products from a large universe of products; it is an important decision that is commonly faced by retailers in determining what to offer their customers. There are two key challenges: (a) the Mallows distribution lacks a closed-form expression (and requires summing an exponential number of terms) to compute the choice probability and, hence, the expected revenue/profit per customer; and (b) finding the best subset may require an exhaustive search. Our key contributions are an efficiently computable closed-form expression for the choice probability under the Mallows model and a compact mixed integer linear program (MIP) formulation for the assortment problem.
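To make the objects concrete without reproducing the paper's closed form, the sketch below samples Mallows rankings with the standard repeated insertion method, estimates choice probabilities and expected revenue by Monte Carlo, and brute-forces the best assortment on a toy universe; the products, revenues and dispersion parameter are made up for illustration.

```python
# Hedged illustration only: Monte Carlo in place of the paper's exact closed form.
import itertools
import random

def sample_mallows(reference, phi, rng=random):
    """Sample a ranking from a Mallows model (Kendall-tau) via repeated insertion."""
    ranking = []
    for i, item in enumerate(reference):
        # Insert the i-th reference item at position j with weight phi**(i - j);
        # staying near its reference slot (the bottom of the partial list) is likeliest.
        weights = [phi ** (i - j) for j in range(i + 1)]
        r, pos = rng.random() * sum(weights), 0
        while r > weights[pos]:
            r -= weights[pos]
            pos += 1
        ranking.insert(pos, item)
    return ranking

def expected_revenue(assortment, revenues, reference, phi, trials=5000, rng=random):
    """Monte Carlo expected revenue per customer; item 0 is the no-purchase option."""
    offered = set(assortment) | {0}
    total = 0.0
    for _ in range(trials):
        ranking = sample_mallows(reference, phi, rng)
        chosen = next(item for item in ranking if item in offered)
        total += revenues.get(chosen, 0.0)
    return total / trials

if __name__ == "__main__":
    random.seed(0)
    products = [1, 2, 3, 4]
    revenues = {1: 8.0, 2: 6.0, 3: 5.0, 4: 3.0}      # profit per unit; 0 = no purchase
    reference = [4, 3, 0, 2, 1]                      # central preference order
    best = max(
        (expected_revenue(S, revenues, reference, 0.7), S)
        for r in range(1, len(products) + 1)
        for S in itertools.combinations(products, r)
    )
    print("best assortment %s with expected revenue %.2f" % (best[1], best[0]))
```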


Efficient Second Order Online Learning by Sketching

Neural Information Processing Systems

We propose Sketched Online Newton (SON), an online second-order learning algorithm that enjoys substantially improved regret guarantees for ill-conditioned data. SON is an enhanced version of the Online Newton Step which, via sketching techniques, enjoys a running time linear in the dimension and sketch size. We further develop sparse forms of the sketching methods (such as Oja's rule), making the computation linear in the sparsity of features. Together, these developments eliminate all computational obstacles in previous second-order online learning approaches.
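For context, here is a toy full-matrix Online Newton Step on online linear regression, showing the second-order update that SON approximates; SON's contribution is to replace the d-by-d curvature matrix with a low-rank sketch so each step costs time linear in the dimension and sketch size, a step omitted here for brevity.

```python
# Toy full-matrix Online Newton Step (not SON itself) on streaming linear regression.
import numpy as np

rng = np.random.default_rng(1)
d, T, gamma, eps = 10, 2000, 1.0, 1.0

w_true = rng.normal(size=d)
w = np.zeros(d)
A = eps * np.eye(d)                     # running curvature estimate: A_t = eps*I + sum g g^T
total_loss = 0.0
for t in range(T):
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    pred = x @ w
    total_loss += 0.5 * (pred - y) ** 2
    g = (pred - y) * x                  # gradient of the squared loss at w
    A += np.outer(g, g)
    # Newton-style step: solving against the full d x d matrix is what sketching avoids.
    w -= np.linalg.solve(A, g) / gamma

print("average loss over %d rounds: %.4f" % (T, total_loss / T))
```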


AI isn't the future of online shopping - here's what is

ZDNet

I think AI is amazing, and nothing will come between my ChatGPT subscription and me. RELAX and stay out of my DMs. Now that I've got my disclaimer out of the way, I NEED to tell you about the future of online shopping. If you've ever thought about starting an online business or are already selling stuff online and looking for an edge, I've got you. So kick back and put the pitchfork down as we dive into what I believe could be a game-changer for online shopping in 2025.


Using Pre-trained LLMs for Multivariate Time Series Forecasting

arXiv.org Artificial Intelligence

Time series forecasting refers to a class of techniques for the prediction of events through a sequence of time, typically to inform strategic or tactical decision making. Going beyond strategic forecasting problems (e.g., those commonly used historically in statistics and econometrics [1]), operational forecasting problems are increasingly important. For example, at large internet retail companies, this includes demand forecasting for products at an online retailer, workforce cohorts of a company in its locations, compute capacity needs per region and server type, etc.; in scientific machine learning, this includes prediction of extreme events in, e.g., climate and weather models; and so on. In particular, MQCNN [2] and MQTransformer [3] are state-of-the-art (SOTA) neural network (NN) based multivariate time series forecasting models that are used to predict future demand at the product level for hundreds of millions of products.
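As a rough sketch of the generic "frozen pre-trained LM as a forecasting backbone" recipe that this line of work builds on (not necessarily this paper's exact architecture), the snippet below patches a multivariate series into tokens, projects them into GPT-2's embedding space, runs the frozen transformer, and decodes the last hidden state into a forecast. It assumes PyTorch and Hugging Face transformers with the public gpt2 checkpoint; the patch length, horizon and layer sizes are illustrative.

```python
# Sketch: frozen pre-trained LM as a multivariate forecasting backbone.
import torch
import torch.nn as nn
from transformers import GPT2Model

class PatchedLLMForecaster(nn.Module):
    def __init__(self, n_series, patch_len=16, horizon=24, backbone="gpt2"):
        super().__init__()
        self.patch_len, self.horizon = patch_len, horizon
        self.backbone = GPT2Model.from_pretrained(backbone)
        for p in self.backbone.parameters():        # keep the pre-trained weights frozen
            p.requires_grad = False
        d_model = self.backbone.config.n_embd
        self.embed = nn.Linear(patch_len * n_series, d_model)   # patches -> token embeddings
        self.head = nn.Linear(d_model, horizon * n_series)      # last token -> forecast

    def forward(self, history):
        # history: (batch, time, n_series); time must be a multiple of patch_len.
        b, t, n = history.shape
        patches = history.reshape(b, t // self.patch_len, self.patch_len * n)
        hidden = self.backbone(inputs_embeds=self.embed(patches)).last_hidden_state
        return self.head(hidden[:, -1]).reshape(b, self.horizon, n)

model = PatchedLLMForecaster(n_series=3)
forecast = model(torch.randn(2, 96, 3))              # 2 examples, 96 past steps, 3 series
print(forecast.shape)                                # -> torch.Size([2, 24, 3])
```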