Research article | Open access | DOI: 10.1145/3626772.3657886

Resources for Combining Teaching and Research in Information Retrieval Coursework

Published: 11 July 2024

Abstract

The first International Workshop on Open Web Search (WOWS) was held on Thursday, March 28th, at ECIR 2024 in Glasgow, UK. The full-day workshop had two calls for contributions: the first solicited scientific contributions on building, operating, and evaluating search engines cooperatively, and on the cooperative use of the web as a resource for researchers and innovators; the second solicited implementations of retrieval components, with the aim of gaining practical experience with joint, cooperative evaluation of search engines and their components. In total, 2 papers were accepted for the first call and 11 software components were submitted for the second. The workshop ended with breakout sessions on how the OpenWebSearch.eu project can incorporate collaborative evaluations and a hub of search engines.

Cited By

  • Report on the 1st International Workshop on Open Web Search (WOWS 2024) at ECIR 2024. ACM SIGIR Forum 58(1), 1--13. Online publication date: 1 June 2024. https://doi.org/10.1145/3687273.3687290

      Published In

      SIGIR '24: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval
      July 2024
      3164 pages
      ISBN:9798400704314
      DOI:10.1145/3626772
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 11 July 2024

      Author Tags

      1. retrieval evaluation
      2. shared tasks
      3. teaching ir
      4. test collections

      Qualifiers

      • Research-article

      Funding Sources

      • OpenWebSearch.eu project

      Conference

      SIGIR 2024

      Acceptance Rates

      Overall acceptance rate: 792 of 3,983 submissions (20%)
