
DOI: 10.1145/3539618.3591888

The Information Retrieval Experiment Platform

Published: 18 July 2023

Abstract

We integrate ir_datasets, ir_measures, and PyTerrier with TIRA in the Information Retrieval Experiment Platform (TIREx) to promote more standardized, reproducible, scalable, and even blinded retrieval experiments. Standardization is achieved when a retrieval approach implements PyTerrier's interfaces and the input and output of an experiment are compatible with ir_datasets and ir_measures. However, none of this is a must for reproducibility and scalability, as TIRA can run any dockerized software locally or remotely in a cloud-native execution environment. Version control and caching ensure efficient (re)execution. TIRA allows for blind evaluation when an experiment runs on a remote server or cloud not under the control of the experimenter. The test data and ground truth are then hidden from public access, and the retrieval software has to process them in a sandbox that prevents data leaks.
We currently host an instance of TIREx with 15 corpora (1.9 billion documents) on which 32 shared retrieval tasks are based. Using Docker images of 50 standard retrieval approaches, we automatically evaluated all approaches on all tasks (50 ⋅ 32 = 1,600 runs) in less than a week on a midsize cluster (1,620 cores and 24 GPUs). This instance of TIREx is open for submissions and will be integrated with the IR Anthology, as well as released open source.
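
To make the standardization claim in the abstract concrete, the following is a minimal sketch (not taken from the paper) of a retrieval experiment built on the interfaces TIREx relies on: a PyTerrier pipeline, topics and qrels loaded via ir_datasets, and ir_measures-style evaluation metrics. The dataset id "vaswani", the index path, and the metric selection are illustrative assumptions, not choices made by the authors.

```python
# Sketch of a PyTerrier experiment whose inputs and outputs are compatible
# with ir_datasets and ir_measures. Dataset id, index path, and metrics are
# illustrative assumptions, not taken from the paper.
import os
import pyterrier as pt

if not pt.started():
    pt.init()

# The "irds:" prefix loads the corpus, topics, and qrels via ir_datasets.
dataset = pt.get_dataset("irds:vaswani")

# Index the corpus into a new Terrier index on disk.
index_dir = os.path.abspath("./vaswani-index")
indexer = pt.IterDictIndexer(index_dir)
index_ref = indexer.index(dataset.get_corpus_iter())

# A standard BM25 baseline implementing PyTerrier's transformer interface.
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")

# Evaluate with ir_measures/pytrec_eval-style metric names.
results = pt.Experiment(
    [bm25],
    dataset.get_topics(),
    dataset.get_qrels(),
    eval_metrics=["map", "recip_rank", "ndcg_cut_10"],
    names=["BM25"],
)
print(results)
```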
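On the evaluation side, run files produced by dockerized systems can be scored against qrels loaded via ir_datasets using ir_measures, which is the kind of step an automatic evaluation over many runs repeats. The sketch below shows that step in isolation under assumptions: the dataset id, the run file name, and the chosen measures are placeholders, not prescribed by TIREx.

```python
# Sketch: score a TREC-style run file against qrels provided by ir_datasets.
# "vaswani", "run.txt", and the chosen measures are illustrative placeholders.
import ir_datasets
import ir_measures
from ir_measures import AP, RR, nDCG

dataset = ir_datasets.load("vaswani")        # exposes qrels via qrels_iter()
run = ir_measures.read_trec_run("run.txt")   # six-column TREC run format

# Aggregate scores over all topics, returned as a measure -> value mapping.
aggregate = ir_measures.calc_aggregate([AP, RR, nDCG @ 10], dataset.qrels_iter(), run)
print(aggregate)
```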

Supplemental Material

MP4 File
Presentation for the paper on The Information Retrieval Experiment Platform (TIREx).



Published In

SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2023
3567 pages
ISBN: 9781450394086
DOI: 10.1145/3539618
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 July 2023


Badges

  • Best Paper

Author Tags

  1. reproducibility
  2. retrieval evaluation
  3. shared tasks
  4. tirex

Qualifiers

  • Research-article

Funding Sources

  • European Union's Horizon Europe research and innovation programme (OpenWebSearch.EU)

Conference

SIGIR '23

Acceptance Rates

Overall Acceptance Rate 792 of 3,983 submissions, 20%


Article Metrics

  • Downloads (last 12 months): 496
  • Downloads (last 6 weeks): 23

Reflects downloads up to 18 Sep 2024.


Cited By

  • (2024) Report on the 1st International Workshop on Open Web Search (WOWS 2024) at ECIR 2024. ACM SIGIR Forum 58:1, 1-13. DOI: 10.1145/3687273.3687290. Online publication date: 1-Jun-2024
  • (2024) ReNeuIR at SIGIR 2024: The Third Workshop on Reaching Efficiency in Neural Information Retrieval. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 3051-3054. DOI: 10.1145/3626772.3657994. Online publication date: 10-Jul-2024
  • (2024) Resources for Combining Teaching and Research in Information Retrieval Coursework. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1115-1125. DOI: 10.1145/3626772.3657886. Online publication date: 10-Jul-2024
  • (2024) Browsing and Searching Metadata of TREC. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 313-323. DOI: 10.1145/3626772.3657873. Online publication date: 10-Jul-2024
  • (2024) The First International Workshop on Open Web Search (WOWS). Advances in Information Retrieval, 426-431. DOI: 10.1007/978-3-031-56069-9_58. Online publication date: 24-Mar-2024
  • (2024) The Open Web Index. Advances in Information Retrieval, 130-143. DOI: 10.1007/978-3-031-56069-9_10. Online publication date: 24-Mar-2024
  • (2024) Investigating the Effects of Sparse Attention on Cross-Encoders. Advances in Information Retrieval, 173-190. DOI: 10.1007/978-3-031-56027-9_11. Online publication date: 24-Mar-2024
