
VIAL: a unified process for visual interactive labeling

Published: 01 September 2018

Abstract

The assignment of labels to data instances is a fundamental prerequisite for many machine learning tasks. Moreover, labeling is a frequently applied process in visual interactive analysis approaches and visual analytics. However, the strategies for creating labels usually differ between these two fields, which raises the question of whether synergies between the two approaches can be attained. In this paper, we study the process of labeling data instances with the user in the loop, from both the machine learning and the visual interactive perspective. Based on a review of differences and commonalities, we propose the "visual interactive labeling" (VIAL) process that unifies both approaches. We describe the six major steps of the process and discuss their specific challenges. Additionally, we present two heterogeneous usage scenarios from the novel VIAL perspective, one on metric distance learning and one on object detection in videos. Finally, we discuss general challenges to VIAL and point out the work necessary to realize future VIAL approaches.
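To make the user-in-the-loop idea concrete, the sketch below shows one labeling iteration that alternates between model-centered candidate suggestion (uncertainty sampling, as in active learning) and user-assigned labels that are fed back into the model. This is a minimal, hypothetical illustration, not the authors' VIAL implementation; all names (uncertainty_sampling, labeling_loop, ask_user, retrain) are assumptions introduced for this example.

# Minimal sketch of a VIAL-style labeling loop (hypothetical illustration,
# not the authors' implementation): a learning model suggests uncertain
# candidates, the user labels them, and the labels are fed back to the model.

from typing import Callable, List, Sequence, Tuple

def uncertainty_sampling(probs: Sequence[Sequence[float]], k: int) -> List[int]:
    # Model-centered candidate suggestion (active learning): pick the k
    # instances whose most likely class has the lowest predicted probability.
    confidences = [(max(p), i) for i, p in enumerate(probs)]
    confidences.sort()  # least confident first
    return [i for _, i in confidences[:k]]

def labeling_loop(
    unlabeled: List[dict],
    predict_proba: Callable[[List[dict]], List[List[float]]],
    ask_user: Callable[[dict], str],
    retrain: Callable[[List[Tuple[dict, str]]], None],
    iterations: int = 10,
    batch_size: int = 5,
) -> List[Tuple[dict, str]]:
    # Alternate between algorithmic suggestion and user labeling. In a full
    # VIAL system the user could also select instances directly from a
    # visualization instead of (or in addition to) the suggested batch.
    labeled: List[Tuple[dict, str]] = []
    for _ in range(iterations):
        if not unlabeled:
            break
        probs = predict_proba(unlabeled)
        for idx in sorted(uncertainty_sampling(probs, batch_size), reverse=True):
            instance = unlabeled.pop(idx)          # remove from the unlabeled pool
            labeled.append((instance, ask_user(instance)))
        retrain(labeled)                           # feed labels back into the model

    return labeled

The choice of which instances get labeled, whether by the model's query strategy, by the user's selection in a visualization, or by a mix of both, is exactly where the machine learning and visual interactive strategies discussed in the abstract differ.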

    Published In

    The Visual Computer: International Journal of Computer Graphics, Volume 34, Issue 9
    September 2018
    136 pages

    Publisher

    Springer-Verlag

    Berlin, Heidelberg


    Author Tags

    1. Active learning
    2. Classification
    3. Information visualization
    4. Labeling
    5. Machine learning
    6. Regression
    7. Similarity search
    8. Visual analytics
    9. Visual interactive labeling

