Carlos Alario-Hoyos
María Jesús Rodríguez-Triana
Maren Scheffel
Inmaculada Arnedillo-Sánchez
Sebastian Maximilian Dennerlein (Eds.)
LNCS 12315

Addressing Global Challenges and Quality Education
15th European Conference
on Technology Enhanced Learning, EC-TEL 2020
Heidelberg, Germany, September 14–18, 2020
Proceedings
Lecture Notes in Computer Science 12315
Founding Editors
Gerhard Goos
Karlsruhe Institute of Technology, Karlsruhe, Germany
Juris Hartmanis
Cornell University, Ithaca, NY, USA
Editors
Carlos Alario-Hoyos, Universidad Carlos III de Madrid, Leganés (Madrid), Spain
María Jesús Rodríguez-Triana, Tallinn University, Tallinn, Estonia
Maren Scheffel, Open University Netherlands, Heerlen, The Netherlands
Inmaculada Arnedillo-Sánchez, Trinity College Dublin, Dublin, Ireland
Sebastian Maximilian Dennerlein, Graz University of Technology and Know-Center, Graz, Austria
LNCS Sublibrary: SL3 – Information Systems and Applications, incl. Internet/Web, and HCI
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Program Committee
Marie-Helene Abel Université de Technologie de Compiègne, France
Andrea Adamoli Università della Svizzera Italiana, Switzerland
Nora Ayu Ahmad Uzir University of Edinburgh, UK
Gokce Akcayir University of Alberta, Canada
Carlos Alario-Hoyos Universidad Carlos III de Madrid, Spain
Patricia Albacete University of Pittsburgh, USA
Dietrich Albert University of Graz, Austria
Laia Albó Universitat Pompeu Fabra, Spain
Liaqat Ali Simon Fraser University, Canada
Ishari Amarasinghe Universitat Pompeu Fabra, Spain
Anne Boyer LORIA – KIWI, France
Alessandra Antonaci European Association of Distance Teaching
Universities, The Netherlands
Roberto Araya Universidad de Chile, Chile
Inmaculada Arnedillo-Sánchez Trinity College Dublin, Ireland
Juan I. Asensio-Pérez Universidad de Valladolid, Spain
Antonio Balderas University of Cádiz, Spain
Nicolas Ballier Université de Paris, France
Jordan Barria-Pineda University of Pittsburgh, USA
Jason Bernard University of Saskatchewan, Canada
Anis Bey University of Paul Sabatier, IRIT, France
Miguel L. Bote-Lorenzo Universidad de Valladolid, Spain
François Bouchet Sorbonne Université, France
Yolaine Bourda LRI, CentraleSupélec, France
Bert Bredeweg University of Amsterdam, The Netherlands
Andreas Breiter Universität Bremen, Germany
Julien Broisin Université Toulouse 3 Paul Sabatier, IRIT, France
Tom Broos KU Leuven, Belgium
Armelle Brun LORIA, Université Nancy 2, France
Daniela Caballero Universidad de Chile, Chile
Manuel Caeiro Rodríguez University of Vigo, Spain
Sven Charleer KU Leuven, Belgium
Mohamed Amine Chatti University of Duisburg-Essen, Germany
John Cook University of West of England, UK
Audrey Cooke Curtin University, Australia
Alessia Coppi SFIVET, Switzerland
Mihai Dascalu University Politehnica of Bucharest, Romania
Additional Reviewers
Anwar, Muhammad
Berns, Anke
Ebner, Markus
Ehlenz, Matthias
Halbherr, Tobias
Koenigstorfer, Florian
Kothiyal, Aditi
Liaqat, Daniyal
Liaqat, Salaar
Müllner, Peter
Ponce Mendoza, Ulises
Raffaghelli, Juliana
Rodriguez, Indelfonso
Zeiringer, Johannes
Contents
User Assistance for Serious Games Using Hidden Markov Model . . . . . 380
Vivek Yadav, Alexander Streicher, and Ajinkya Prabhune

Exploring the Design and Impact of Online Exercises for Teacher Training About Dynamic Models in Mathematics . . . . . 398
Charlie ter Horst, Laura Kubbe, Bart van de Rotten, Koen Peters, Anders Bouwer, and Bert Bredeweg
1 Motivation
Suppose you want to connect to your workplace network from home. Your workplace, however, has a security policy that does not allow “outside” IP addresses to access essential internal resources. How do you proceed, without leasing a dedicated telecommunications line to your workplace?

A virtual private network, or VPN, provides a solution; it supports creation of virtual links that join far-flung nodes via the Internet. Your home computer creates an ordinary Internet connection (TCP or UDP) …

Fig. 1. A textbook chapter excerpt (above) and the generated discussion prompt (in italics): “Is the following statement true/false? Please discuss briefly why it is true or false: VPNs have the disadvantage of requiring the VPN tunnel to be established before the Internet can be accessed.” The student then discusses the answer.
readers is posing questions about what they have read [1,25]. Yet, posing good questions consumes time and money, and thus many texts encountered by learners either contain only a few questions at the end of a chapter or lack questions altogether.
Educational automatic question generation investigates approaches to gen-
erate meaningful questions about texts automatically, reducing the necessity for
manually generated questions. It hereby relies either on machine learning-based approaches that excel in question variety and expressiveness but pose mostly factual questions [6], or on rule-based approaches that lack expressiveness and variety [32] but have the capability to pose comprehension questions depending on their purpose (e.g. [17]).
This article investigates a novel machine learning-based question generation
approach seeking to generate comprehension questions with a high variety and
expressiveness. We hereby rely on two main ideas. First, research in the educa-
tional domain has investigated learning from errors [19] indicating that explain-
ing why a statement or solution is faulty may foster learning, conceptual under-
standing, and far transfer [10]. Second, we rely on the artificial jabbering of
state-of-the-art neural text generators that are capable of extrapolating a given
text with high structural consistency and in a way that often looks deceptively
real for humans. We seek to explore whether this jabbering can be conditioned
in such a way that it generates erroneous examples from textbook paragraphs.
Presented with such a statement, learners need to justify whether it is true or false (see Fig. 1). This work comprises three main contributions:
1. We present the idea of leveraging artificial jabbering for automatic text com-
prehension question generation and introduce a prototypical generator.
2. We provide a quantitative and qualitative evaluation of the strengths and
weaknesses of such an approach.
3. We distill the main challenges for future work based on an in-depth error
analysis of our prototypical generator.
Automatic Question Generation via Neural Text Generators 3
2 Related Work
2.1 Learning from Erroneous Examples
When learning with erroneous examples, students are confronted with a task and
its faulty solution and have to explain why it is wrong (e.g. [30]). The underlying
theoretical assumptions are that erroneous examples induce a cognitive conflict
in students and thus support conceptual change [24] e.g. by pointing out typical
misconceptions [29]. It has been shown that erroneous examples are beneficial
for learning in a variety of domains such as mathematics [10], computer science
[4] or medicine [14]. Also, learners confronted with erroneous examples especially
improve deeper measures of learning such as conceptual understanding and far
transfer [24]. However, some studies have found that erroneous examples only
foster learning when learners receive enough feedback [14,30] and have sufficient
prior knowledge [30].
With the rise of high capacity machine-learning models, language generation has
shifted towards pretraining [27]. Trained on huge datasets, these models provide
state-of-the-art results on a wide variety of natural language generation tasks
[5,23] such as dialog response generation tasks [22] or abstractive summariza-
tion tasks [26]. Novel models like GPT-2 [23] are capable of extrapolating a given
text with high structural consistency and in a way that looks deceptively real for
humans. They copy the given text’s writing style and compose texts which seem
to make sense at first glance. Fine-tuning the model even increased the human-
ness of the generated texts [28]. Research in the credibility of such generated
texts found that hand-picked generated news texts were found to be credible
around 66% of the time, even when the model was not fine-tuned on news arti-
cles [28]. Another study found that human raters could detect generated texts in
71.4% of the cases with two raters often disagreeing if the text is fake or not [13].
These findings started a debate in the natural language generation community if
the model’s generation capabilities are to easy to misuse and therefore the mod-
els should not be released anymore [28]. Furthermore, such models are able to
generate poems [16] and to rewrite stories to incorporate counterfactual events
[21]. Besides of these open text generation models, special models for question
generation exist. They evolved from baseline sequence to sequence architectures
[6] into several advanced neural architectures (e.g. [5,33]) with different facets
such as taking the desired answers into account [34] or being difficulty-aware [8].
Although these systems work well in the general case, they mainly focus on the generation of factual questions [6,20,35]. Thus, although their expressiveness and domain independence are impressive, the educational domain still most often uses template-based generators [15]. These template-based approaches are often able to generate comprehension questions but lack expressiveness and rely on expert rules, limiting them to a specific purpose in a specific domain.
4 T. Steuer et al.
To experiment with the idea of using artificial jabbering for improving text
comprehension, we propose the following text generation task. The input is a
text passage of a learning resource from an arbitrary domain, with a length of 500–1000 words, as this length has been used in psychological studies that found text-accompanying questions to be helpful [1,31]. The output is a generated
text comprehension question about the given text passage, asking learners to
explain why a given statement is true or false. We aim to generate high-quality
questions of good grammaticality, containing educationally valuable claims, and
having the right difficulty for discussion. Some technical challenges are inherent
in the described task. Every approach must tackle discussion candidate selection
as this determines what the main subject of the generated text will be. Also,
every approach must provide the neural text generator with a conditioning con-
text to ensure that the generated text is in the intended domain. Finally, every
approach must render the actual text with some sort of open domain genera-
tor. These subtasks are active fields of research and a huge variety of possible
approaches with different strengths and weaknesses exists. Yet, our first aim is
to evaluate the general viability of such an approach. Thus, we do not exper-
iment with different combinations of sub-components but our generator relies
on well-tested domain-independent general-purpose algorithms for the different
subtasks (see Fig. 2).
Fig. 2. Architecture of the automatic text comprehension question generator. The final
output is a justification statement that is combined with a prompt to form the actual
text comprehension question.
First, for the discussion candidate selection, we make the simplifying assump-
tion that good discussion candidates are the concepts that are characteristic of
the text. To understand why this assumption is simplified consider a text about
Newtonian physics where a few sentences discuss the common misconception
that heavier objects fall faster than lighter objects. This discussion is unlikely to
involve any special keywords and thus will not be selected as input to the gen-
erator. Yet, it might be very fruitful to generate erroneous examples based on
these misconceptions. However, to test our general idea of generating erroneous
examples the simplification should be sufficient because we might select fewer
inputs but the one we select should be important. Furthermore, this assumption
allows us to rely on state-of-the-art keyphrase extraction algorithms. Considering
that the inputs are texts from a variety of domains, the keyphrase selection step
needs to be unsupervised and relatively robust to domain changes. Therefore,
we apply the YAKE keyphrase extraction algorithm [3] which has been shown
to perform consistently on a large variety of different datasets and domains [2].
Stopwords are removed before running keyphrase extraction, and the algorithm's configured window size is two.
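The keyphrase-selection step can be illustrated with a deliberately simplified, frequency-based stand-in for YAKE. Everything here (the stopword list, the scoring) is illustrative only; the actual system uses the unsupervised YAKE algorithm, mirrored here only in its interface (window size two, a handful of top candidates):

```python
import re
from collections import Counter

# Tiny illustrative stopword list; the paper removes stopwords before
# extraction (the real system uses YAKE, not this frequency heuristic).
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "are", "in", "that", "it"}

def candidate_keyphrases(text, window=2, top_k=3):
    """Score contiguous word windows (up to `window` words) by frequency.

    A simplified stand-in for YAKE: characteristic concepts of a text
    tend to recur, so frequent phrases serve as discussion candidates.
    """
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    counts = Counter()
    for n in range(1, window + 1):
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return [phrase for phrase, _ in counts.most_common(top_k)]
```

For a physics paragraph that repeatedly mentions thermal equilibrium, the phrase "thermal equilibrium" surfaces among the top candidates.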
Second, for selecting the conditioning context, a short text that already com-
prises statements about the subject is needed. Suppose the discussion subject
is “Thermal Equilibrium” in a text about physics. For the generator to produce interesting statements, it must receive sentences from the text discussing thermal equilibria. Thus, we extract up to three sentences in the text comprising the keyphrase, by sentence tokenizing the text¹ and concatenating the sentences containing the keyphrase.
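The context-selection step can be sketched as follows. This is a minimal approximation: it uses a regex-based sentence split in place of the NLTK tokenizer the paper mentions, and the function name is illustrative:

```python
import re

def conditioning_context(text, keyphrase, max_sentences=3):
    """Collect up to three sentences containing the keyphrase.

    The described system tokenizes sentences with NLTK; this sketch
    uses a naive split on terminal punctuation followed by whitespace,
    so it only approximates that step.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = [s for s in sentences if keyphrase.lower() in s.lower()]
    return " ".join(hits[:max_sentences])
```

Given a text about physics and the keyphrase "thermal equilibrium", only the sentences mentioning the phrase are concatenated into the conditioning context.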
Third, we need to generate a justification statement as the core for the text
comprehension question. We use the pretrained GPT-2 774M² parameter model
and apply it similarly to Radford et al. [23] by using plain text for the model conditioning. The plain text starts with the sentences from the conditioning context; to generate the actual justification statement, a discussion starter is appended. It begins with the pluralized discussion subject followed by a predefined phrase, allowing us to choose the type of justification statement the model will generate. For instance, let “Thermal Equilibrium” be our discussion subject; our to-be-completed discussion starter may be “Thermal equilibria are defined as” or “Thermal equilibria can be used to”, depending on the type of faulty statement we aim for. The resulting plain text is given to GPT-2 for completion. To
prevent the model from sampling degenerated text, we apply nucleus sampling
[12] with top-p=0.55 and restrict the output length to 70 words. Finally, we
extract the justification statement from the generated text and combine it with
a generic prompt to discuss it, resulting in the final text comprehension question. Note that we do not know whether the generated question actually comprises a true or false justification statement.
¹ Using NLTK-3.4.5.
² https://github.com/openai/gpt-2.
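The nucleus (top-p) sampling rule [12] applied to the generator's output distribution can be sketched in isolation. This shows only the sampling rule itself with the reported top-p = 0.55, not the GPT-2 model that produces the distribution; the function name is illustrative:

```python
import random

def nucleus_sample(token_probs, top_p=0.55, rng=None):
    """Sample from the smallest set of tokens whose cumulative
    probability exceeds top_p, renormalized (nucleus sampling).

    `token_probs` maps tokens to probabilities; top_p=0.55 follows the
    setting reported in the text.
    """
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, p in ranked:
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break  # keep only the high-probability "nucleus"
    total = sum(p for _, p in nucleus)
    rng = rng or random.Random()
    r = rng.random() * total
    for token, p in nucleus:
        r -= p
        if r <= 0:
            return token
    return nucleus[-1][0]
```

With a distribution like {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05} and top-p = 0.55, the nucleus contains only "a" and "b", so low-probability degenerate continuations ("c", "d") can never be sampled.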
4.2 Methodology
Quantitatively, a total of 120 text comprehension questions coming from ten
educational texts are annotated by ten domain experts who have been teaching
at least one university lecture in a similar domain. Texts are equally distributed
across five different domains: Computer Science, Machine Learning, Networking,
Physics and Psychology. Twelve text comprehension questions are generated for
every text. They are based on three extracted discussion candidates and four
different discussion starters, which we hypothesized to represent intermediate or deep questions according to Graesser et al. [9]. The discussion starters are: “X has the disadvantage”, “X has the advantage”, “X is defined as” and “X is used to”, where X is the discussion candidate. Every question is rated by
two experts who first read the educational text that was used to generate the
question and then rate it on five five-point Likert items regarding grammatical
correctness, relatedness to the source material, factual knowledge involved when
answering the question, conceptual knowledge involved when answering the ques-
tions and overall usefulness for learning. Before annotating every expert saw a
short definition of every scale, clarifying their meaning. Additionally, experts
can provide qualitative remarks for every question through a free-text field. For
the quantitative analysis the ratings where averaged across experts.
We use the quantitatively collected data to guide our qualitative analysis of
the research questions. To carry out our in-depth error analysis, we consider a
statement useless for learning if it scores lower than three on the usefulness scale.
This choice was made after qualitatively reviewing a number of examples. We use inductive qualitative content analysis [18] to deduce meaningful error categories for the statements and to categorize the statements accordingly. Our
search for meaningful error categories is hereby guided by the given task for-
mulation and its sub-components. Furthermore, the useful generated statements
(usefulness ≥ 3) are analyzed. We look at the effects of the different discus-
sion starters and how they influence the knowledge involved in answering the
generated questions.
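The usefulness cut-off that routes statements either into the error analysis or into the analysis of useful statements can be expressed as a small helper (hypothetical names; a sketch of the described threshold, not the authors' code):

```python
def split_by_usefulness(rated, threshold=3):
    """Partition statements by mean usefulness rating: scores at or
    above the threshold are analyzed as useful statements, scores
    below it enter the qualitative error analysis.

    `rated` maps a statement identifier to its expert-averaged rating.
    """
    useful = {s for s, r in rated.items() if r >= threshold}
    errors = {s for s, r in rated.items() if r < threshold}
    return useful, errors
```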
5 Results
5.1 Quantitative Overview
The quantitative survey results indicate that many of the generated statements are grammatically sound and connected to the text, but only slightly useful for learning (see Fig. 3). Furthermore, most questions involve some factual knowledge and deeper comprehension, yet both scores vary greatly. Breaking down the rating scores by domain or discussion starter did not reveal large differences. By looking at various examples with different ratings (see Table 1) we
found that a usefulness score of three or larger is indicative of some pedagogi-
cal value. With minor changes, such questions could be answered and discussed
by experts, although their discussion is probably often not the perfect learning
opportunity. In total, 39 of the 120 statements have a usefulness rating of 3 or
larger (32.5%), in contrast to 81 statements rated lower (67.5%).
Fig. 3. Overview of the quantitative ratings for the generated statements without any human filtering. Scores are between 1 and 5, where 5 is the best achievable rating. The whiskers indicate the 1.5 interquartile range and the black bar is the median.
While conducting the qualitative error analysis, the following main error categories were deduced. Keyword inappropriate means that the discussion candidate was not appropriate for the text because the keyword extraction algorithm selected a misleading or very general key term. Keyword incomplete means that
The distribution of the different error categories is heavily skewed
towards keyword errors (see Table 2). The two keyword-based errors account for
49 or roughly 60% of the errors. Furthermore, statements generated by faulty
keyword selection mostly have a usefulness rating of one. The other error cate-
gories are almost equally distributed and are most often rated with a usefulness
score of two. The platitude case mostly comes from unnaturally combining a dis-
cussion candidate with a discussion starter resulting in very generic completion of
the sentence inside the generator. For instance, if the generator has to complete
the sentence “Classical conditionings have the disadvantage ...” it continues with
“...of being costly and slow to develop”. The remaining error categories have no
clear cause.
Besides the error analysis, annotators left some remarks about the erroneous
statements. Two annotators remarked on various occasions that the first part of
Fig. 4. Overview of the quantitative ratings for the generated statements with a usefulness rating greater than or equal to three. Scores are between 1 and 5, where 5 is the best achievable rating. The whiskers indicate the 1.5 interquartile range and the black bar is the median.
leads to statements that force learners to transfer the knowledge learnt into new
situations (see Table 3). The usage that is described in the generated statements
is normally not mentioned in the text, but can often be deduced by the knowl-
edge provided in the text. The advantage and disadvantage discussion starter
requires learners to think about the discussion candidate but also about similar
concepts and solutions and to compare them (see Table 3). Otherwise, learners
cannot tell if the stated advantage or disadvantage is one that is specific to the
discussed concept.
Table 3. Highly rated examples of different types of statements resulting from different discussion starters.
Finally, one annotator provided qualitative remarks for the good statements
as well. This includes remarks that the generated statements are helpful but
often could be improved by using different discussion starters depending on the
domain (e.g. speaking of the advantage of a physical concept is odd). Also,
it was highlighted that the statements cannot simply be answered by copying
information from the text and that thinking about the definition discussion
starter sometimes resulted in the annotator checking a textbook to refresh some
rusty knowledge.
of the well-generated statements. First, they are not the typical factual Wh-
questions that ask for a simple fact or connection directly stated in the text.
Therefore, they often need a deeper understanding of the subject matter to be
answered correctly. While this can be a benefit, we have to keep in mind that our
annotators were experts and thus drawing connections between the text inherent
knowledge and previously learned subject knowledge might be too difficult for
some learners, as the annotators also remarked. Second, depending on the used
discussion starter, we can generate different kinds of useful questions. Our four
different discussion starters generate questions requiring three different types of
thinking. Depending on the discussion starter, the text comprehension questions
involve comparison with previous knowledge, transfer of learned knowledge to
new situations or implicit differentiation from similar concepts. This is an encouraging result, because it shows that the generator's expressiveness can be harnessed to create different types of tasks. Moreover, it provides evidence for the annotators' remark that the questions in some domains could be improved by using different discussion starters, and that this is a worthwhile direction for
future research. Third, although we work with a variety of domains and input texts from different authors, we were able to generate some valuable questions in every domain. Furthermore, the distribution of the different quality scores did not change much from domain to domain. Hence, our approach seems, at least to a degree, domain-independent. Yet, as currently only a third of the generated statements are usable, this should be reevaluated once the general quality of the statements improves, because there might be a trade-off between domain independence and statement quality. In summary, our qualitative analysis of the well-generated questions provided evidence for their adaptability through different discussion starters, and showed that they are well suited for text comprehension below the surface level, when learners have to think not only about facts but also
Our error analysis allowed us to identify why we fail to generate interesting
questions. The five different error categories are promising starting points for
future work. Most often, the approach failed because the keyword extraction
step did not find a meaningful discussion candidate or extracted only parts of
it. This is not surprising as our goal was to test the general idea without fine-
tuning any of the intermediate steps. General-purpose keyword extraction is
similar but not identical to discussion candidate extraction. Hence, future work
might explore specific educational keyword extraction algorithms and their effect
on the generation approach. We assume that a fine-tuned educational keyword
extraction algorithm will yield much more valuable statements if adaptable to
different domains. Furthermore, as discussed in the results section the platitude
errors can be alleviated by not combining discussion starters and discussion can-
didates in an odd manner. Future work should, therefore, investigate the optimal
use of discussion starters taking into account different domains and discussion
candidates. Finally, we have the hardly discussable and statement too easy error
categories. While no clear cause of these errors could be identified, we assume
that a fine-tuning of the neural generator with discussion specific texts would