Deciphering Egyptian Hieroglyphs: Towards a New Strategy for Navigation in Museums
Figure 1. (a) Original image; (b) monochrome image; (c) Sobel edge detection; (d) regions by frontier; (e) SUSAN algorithm; (f) Circle Hough Transform to obtain most-probable circles; (g) curvature salience after 20 iterations.
Figure 2. (a,b) AAM method applied to hieroglyph images; Active Contour Algorithm applied to obtain the edge of the Egyptian hieroglyphs (c) initially; (d) after 1000 iterations; (e) Active Contour Algorithm applied to obtain the edge of the Egyptian cartouche after 300 iterations.
Figure 3. The proposed approach consists of three stages.
Figure 4. A set of cartouches from the Abydos King List: (a) Djedefre; (b) Menes; (c) Shepseskaf; (d) Neferkara I; (e) Mentuhotep II; (f) Raneb; (g) Sneferu; (h) Khafra; (i) Semerkhet; (j) Userkare; (k) Djoser; (l) Sekhemkhet.
Figure 5. (a) Original image; (b) grayscale image; (c) median filter; (d) Canny edges; (e) threshold ISODATA inverted.
Figure 6. (a) Original image; (b) longest border of the cartouche; (c) cartouche after orientation correction; (d) recognition of the cartouche borders; (e) extraction of the regions of interest.
Figure 7. Scheme of the object localization process.
Figure 8. (a) Hieroglyph from a database to be used as a reference; (b–d) hieroglyphs extracted from several cartouches; (e–g) overlapped images.
Figure 9. Distance between points P_h (a) and points P_c (b).
Figure 10. Scheme of the ROI extraction process.
Figure 11. (a) Hieroglyphs extraction; (b) hieroglyphs identification; (c) reading sequence.
Figure 12. Database of hieroglyphs obtained from the Abydos King List.
Figure 13. The location process applied to search for stonemason's marks: (a–d) original images; (e–h) localization process results.
Figure 14. The effects of time, exposure and even vandalism make the recognition process difficult: (a) Khufu's cartouche; (b) Neferkahor's cartouche; (c) Qakare Ibi's cartouche.
Figure 15. Components of the system.
Abstract
1. Introduction
2. Overview of the Proposed Method to Interpret Hieroglyphs
- Texts were written to be read from left to right or from right to left.
- Egyptian scribes wrote on different materials: stone, wood, faience, papyrus, gold, bronze, etc. Hieroglyphs were even painted.
- Differences between hieroglyphs across texts or materials are not remarkable, since a similar model was used.
- Texts were carved in both low relief and high relief. In low relief, the hieroglyphs were incised into the stone. In high relief, the surrounding surface of the cartouche was incised, leaving the hieroglyphs raised.
- Most texts preserved until the present day have suffered the effects of time, exposure and even vandalism.
2.1. Localization of Cartouches in Images
- (a) A threshold T is set to a random value.
- (b) The image is binarized using T.
- (c) The mean values μ1 and μ2 of the two sub-regions (objects and background) generated with T are obtained: μ1 is the mean of all values less than or equal to T, and μ2 is the mean of all values above T.
- (d) A new threshold is calculated as T = (μ1 + μ2)/2.
- (e) Steps (b) to (d) are repeated until T stops changing its value.
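The iterative thresholding scheme above can be sketched in a few lines of Python. This is a minimal illustration of the standard ISODATA iteration, not the authors' implementation; the toy image and tolerance are assumptions for the example.

```python
import numpy as np

def isodata_threshold(image, tol=0.5):
    """Iteratively refine a threshold T until it stabilizes (ISODATA)."""
    values = image.astype(float).ravel()
    # (a) Start from an arbitrary threshold, here the mid-range value.
    t = (values.min() + values.max()) / 2.0
    while True:
        # (b)-(c) Split values into the two sub-regions and take each mean.
        low = values[values <= t]
        high = values[values > t]
        m1 = low.mean() if low.size else t
        m2 = high.mean() if high.size else t
        # (d) The new threshold is the midpoint of the two means.
        t_new = (m1 + m2) / 2.0
        # (e) Stop when T no longer changes (within a small tolerance).
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Binarize a toy image with two clear intensity populations.
img = np.array([[10, 12, 11], [200, 210, 205], [12, 208, 11]])
t = isodata_threshold(img)
binary = img > t  # four bright pixels separated from five dark ones
```

The returned threshold settles between the dark and bright populations regardless of the starting value, which is why the random initialization in step (a) is harmless.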
2.2. Extraction and Identification of Hieroglyphs
- (a) The Chamfer distance between each point of the hieroglyph's contour (p_h) and each point of the cartouche's contour (p_c) is lower than d, where d is set to 3 pixels for a cartouche 100 pixels wide. In Figure 9, where the hieroglyph's contour is marked in orange and the cartouche's contour in blue, the distance between p_h(2,3) and the image is 2, because p_c(3,1) and p_c(4,2) are the closest points of the cartouche and the distance to them is 2. The minimum distance from p_h(2,6) to the cartouche is higher than 3, because the closest points are p_c(5,3), p_c(5,4) and p_c(5,5). To calculate the distance, a convolution mask can be applied over the central point and enlarged at each iteration.
- (b) The angle of the contour line at p_c minus the angle of the contour line at p_h is less than Max_Angle. The contours were obtained using the Canny algorithm, which performs non-maximum suppression (the width of the contour is 1 pixel), so the contours that include p_c and p_h are one pixel wide. Max_Angle is calculated as in (4):
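Criterion (a) can be checked efficiently by precomputing a distance map of the cartouche contour, so that each hieroglyph contour point is tested in constant time. The sketch below is an assumption-laden illustration, not the authors' method: it uses SciPy's Euclidean distance transform in place of the iterative convolution-mask scheme described above, and the contour masks are synthetic.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def contour_points_near_cartouche(hiero_contour, cart_contour, d=3):
    """Boolean mask: which hieroglyph contour points lie within
    distance d of the cartouche contour."""
    # distance_transform_edt gives, for each pixel, the distance to the
    # nearest zero-valued pixel, so the contour mask is inverted first.
    dist_to_cart = distance_transform_edt(~cart_contour)
    ys, xs = np.nonzero(hiero_contour)
    return dist_to_cart[ys, xs] <= d

# Toy example: a vertical cartouche border and two hieroglyph points,
# one within 3 pixels of the border and one farther away.
cart = np.zeros((10, 10), dtype=bool)
cart[:, 0] = True                 # cartouche contour along column 0
hiero = np.zeros((10, 10), dtype=bool)
hiero[5, 2] = True                # 2 pixels from the border
hiero[5, 8] = True                # 8 pixels from the border
near = contour_points_near_cartouche(hiero, cart, d=3)
```

Only the first point passes the test, matching the behaviour described for p_h(2,3) versus p_h(2,6) in Figure 9.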
2.3. Interpretation of Cartouches
3. Results
4. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Gardiner, A. Egyptian Grammar. Being an Introduction to the Study of Hieroglyphs, 3rd ed.; Oxford University Press: London, UK, 1957.
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008.
- Papari, G.; Petkov, N. Edge and line oriented contour detection: State of the art. Image Vis. Comput. 2011, 29, 79–103.
- Sánchez-González, M.; Cabrera, M.; Herrera, P.J.; Vallejo, R.; Cañellas, I.; Montes, F. Basal Area and Diameter Distribution Estimation Using Stereoscopic Hemispherical Images. Photogramm. Eng. Remote Sens. 2016, 82, 605–616.
- Franken, M.; van Gemert, J.C. Automatic Egyptian Hieroglyph Recognition by Retrieving Images as Texts. In Proceedings of the 21st ACM International Conference on Multimedia, Barcelona, Spain, 21–25 October 2013.
- Roman-Rangel, E.; Pallan, C.; Odobez, J.M.; Gatica-Perez, D. Retrieving Ancient Maya Glyphs with Shape Context. In Proceedings of the IEEE 12th International Conference on Computer Vision Workshops, Kyoto, Japan, 27 September–4 October 2009.
- Roman-Rangel, E.; Pallan, C.; Odobez, J.M.; Gatica-Perez, D. Analyzing Ancient Maya Glyph Collections with Contextual Shape Descriptors. Int. J. Comput. Vis. 2011, 94, 101–117.
- Belongie, S.; Malik, J.; Puzicha, J. Shape Context: A New Descriptor for Shape Matching and Object Recognition. In Advances in Neural Information Processing Systems 13; Leen, T.K., Dietterich, T.G., Tresp, V., Eds.; MIT Press: Cambridge, MA, USA, 2001; pp. 831–837.
- Nederhof, M.J. OCR of handwritten transcriptions of Ancient Egyptian hieroglyphic text. In Proceedings of the Altertumswissenschaften in a Digital Age: Egyptology, Papyrology and Beyond (DHEgypt15), Leipzig, Germany, 4–6 November 2015.
- Iglesias-Franjo, E.; Vilares, J. Searching Four-Millenia-Old Digitized Documents: A Text Retrieval System for Egyptologists. In Proceedings of the 10th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH), Berlin, Germany, 7–12 August 2016.
- Hu, M.K. Visual Pattern Recognition by Moment Invariants. IRE Trans. Inf. Theory 1962, 8, 179–187.
- Herrera, P.J.; Pajares, G.; Guijarro, M.; Ruz, J.J.; Cruz, J.M.; Montes, F. A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments. Sensors 2009, 9, 9468–9492.
- Herrera, P.J.; Dorado, J.; Ribeiro, A. A Novel Approach for Weed Type Classification Based on Shape Descriptors and a Fuzzy Decision-Making Method. Sensors 2014, 14, 15304–15324.
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698.
- Smith, S.M.; Brady, J.M. SUSAN—A new approach to low level image processing. Int. J. Comput. Vis. 1997, 23, 45–78.
- Ballard, D.H. Generalizing the Hough Transform to Detect Arbitrary Shapes. Pattern Recognit. 1981, 13, 111–122.
- Sha'ashua, A.; Ullman, S. Structural saliency: The detection of globally salient structures using a locally connected network. In Proceedings of the Second International Conference on Computer Vision, Tampa, FL, USA, 5–8 December 1988.
- Cootes, T.F.; Edwards, G.J.; Taylor, C.J. Active Appearance Models. In Proceedings of the 5th European Conference on Computer Vision, Freiburg, Germany, 2–6 June 1998.
- Castro-Mateos, I.; Pozo, J.M.; Cootes, T.F.; Wilkinson, J.M.; Eastell, R.; Frangi, A.F. Statistical Shape and Appearance Models in Osteoporosis. Curr. Osteoporos. Rep. 2014, 12, 163–173.
- Trinh, N.H.; Kimia, B.B. Skeleton Search: Category-Specific Object Recognition and Segmentation Using a Skeletal Shape Model. Int. J. Comput. Vis. 2011, 94, 215–240.
- Toshev, A.; Taskar, B.; Daniilidis, K. Shape-Based Object Detection via Boundary Structure Segmentation. Int. J. Comput. Vis. 2012, 99, 123–146.
- Roman-Rangel, E.; Marchand-Maillet, S. Shape-based detection of Maya hieroglyphs using weighted bag representations. Pattern Recognit. 2015, 48, 1161–1173.
- Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance Regularized Level Set Evolution and its Application to Image Segmentation. IEEE Trans. Image Process. 2010, 19, 3243–3254.
- Zhang, K.; Zhang, L.; Song, H.; Zhou, W. Active contours with selective local or global segmentation: A new formulation and level set method. Image Vis. Comput. 2010, 28, 668–676.
- McIlhagga, W. The Canny edge detector revisited. Int. J. Comput. Vis. 2011, 91, 251–261.
- El-Zaart, A. Images thresholding using ISODATA technique with gamma distribution. Pattern Recognit. Image Anal. 2010, 20, 29–41.
- Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15.
- Liu, M.Y.; Tuzel, O.; Veeraraghavan, A.; Chellappa, R. Fast Directional Chamfer Matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010.
- Rucklidge, W.J. Efficiently Locating Objects Using the Hausdorff Distance. Int. J. Comput. Vis. 1997, 24, 251–270.
- Wagner, R.A.; Fischer, M.J. The String-to-String Correction Problem. J. ACM 1974, 21, 168–173.
- Duque, J.; Cerrada, C.; Valero, E.; Cerrada, J.A. Indoor Positioning System Using Depth Maps and Wireless Networks. J. Sens. 2016, 2016, 1–8.
| Hieroglyphs Compared | Chamfer Distance | Hausdorff Distance |
|---|---|---|
| A-B | 36888 (1) | 75 (0.52) |
| A-C | 21485 (0.58) | 144 (1) |
| A-D | 16069 (0.43) | 35 (0.24) |
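The Hausdorff distances in the table above can be computed directly from contour point sets. A minimal sketch using SciPy's `directed_hausdorff` (the point sets and the 3-unit shift are illustrative, not data from the paper):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    # directed_hausdorff returns (distance, index_a, index_b); take
    # the maximum of the two directed distances for symmetry.
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy contours: a copy shifted by 3 units differs by exactly that offset.
a = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
b = a + np.array([3.0, 0.0])   # shift every point 3 units along x
h = hausdorff(a, b)            # 3.0 for this rigid shift
```

Unlike the Chamfer distance, which sums nearest-neighbour distances over all contour points, the Hausdorff distance keeps only the worst-case mismatch, which is why the two columns of the table rank the pairs differently.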
| Abydos Number | Cartouche | Phonetic Transliteration | Royal Name (Dynasty) |
|---|---|---|---|
| 17 | | T-t-i | Sekhemkhet (III Dynasty) |
| 22 | | Djed-f-ra | Djedefre (IV Dynasty) |
| 70 | | Mn-kheper-ra | Thutmose III (XVIII Dynasty) |
| 72 | | Mn-kheper-u-ra | Thutmose IV (XVIII Dynasty) |
| 73 | | Nb-maat-ra | Amenhotep III (XVIII Dynasty) |
| 75 | | Mn-peht-y-ra | Ramesses I (XIX Dynasty) |
Table: results of each processing stage (original picture, edge detection, cartouche localization, hieroglyphs extraction, hieroglyphs identification, corresponding cartouche) for the cartouches of Sekhemkhet, Djedefre, Thutmose III, Thutmose IV, Amenhotep III and Ramesses I (images not reproduced).
| | % | σ |
|---|---|---|
| Stage 1. Localization process | 95.4 | 0.4 |
| Abydos King List dataset | 100 | - |
| Egyptian Hieroglyph Dataset [5] | 96.3 | 0.3 |
| Rest of images | 89.5 | 0.9 |
| Stage 2. Extraction and identification process | 87.1 | 1.4 |
| Abydos King List dataset | 92.8 | 1.1 |
| Egyptian Hieroglyph Dataset [5] | 84.2 | 1.6 |
| Rest of images | 83.4 | 1.5 |
| Stage 3. Recognition process | 92.3 | 0.9 |
| Abydos King List dataset | 100 | - |
| Egyptian Hieroglyph Dataset [5] | 92.7 | 1.2 |
| Rest of images | 84.1 | 1.5 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Duque-Domingo, J.; Herrera, P.J.; Valero, E.; Cerrada, C. Deciphering Egyptian Hieroglyphs: Towards a New Strategy for Navigation in Museums. Sensors 2017, 17, 589. https://doi.org/10.3390/s17030589