KJO Korean Journal of Orthodontics

Open Access

pISSN 2234-7518
eISSN 2005-372X

Original Article

Korean J Orthod 2021; 51(2): 77-85

Published online March 25, 2021 https://doi.org/10.4041/kjod.2021.51.2.77

Copyright © The Korean Association of Orthodontists.

Evaluation of a multi-stage convolutional neural network-based fully automated landmark identification system using cone-beam computed tomography-synthesized posteroanterior cephalometric images

Min-Jung Kima, Yi Liub, Song Hee Ohc, Hyo-Won Ahna, Seong-Hun Kima, Gerald Nelsond

aDepartment of Orthodontics, Graduate School, Kyung Hee University, Seoul, Korea
bDepartment of Orthodontics, Peking University School of Stomatology, Beijing, China
cDepartment of Oral and Maxillofacial Radiology, Graduate School, Kyung Hee University, Seoul, Korea
dDivision of Orthodontics, Department of Orofacial Science, University of California San Francisco, CA, USA

Correspondence to: Seong-Hun Kim.
Professor and Head, Department of Orthodontics, Graduate School, Kyung Hee University, 26 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Korea.
Tel +82-2-958-9392 e-mail bravortho@gmail.com

Received: September 8, 2020; Revised: October 7, 2020; Accepted: October 7, 2020

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Objective: To evaluate the accuracy of a multi-stage convolutional neural network (CNN) model-based automated identification system for posteroanterior (PA) cephalometric landmarks.
Methods: The multi-stage CNN model was implemented on a personal computer. A total of 430 PA cephalograms synthesized from cone-beam computed tomography scans (CBCT-PA) were selected as samples. Twenty-three landmarks used for Tweemac analysis were manually identified on all CBCT-PA images by a single examiner. Intra-examiner reproducibility was confirmed by repeating the identification, at a two-week interval before training, on 85 randomly selected images, which were subsequently set aside as test data. For the initial learning stage of the multi-stage CNN model, the data from 345 of the 430 CBCT-PA images were used, after which the model was tested with the 85 set-aside images. The first manual identification on these 85 images was set as the ground truth. The mean radial error (MRE) and successful detection rate (SDR) were calculated to evaluate the errors in manual identification and artificial intelligence (AI) prediction.
Results: The AI showed an average MRE of 2.23 ± 2.02 mm with an SDR of 60.88% for errors of 2 mm or lower. However, in a comparison of the repetitive task, the AI predicted landmarks at the same position every time, while the MRE for the repeated manual identification was 1.31 ± 0.94 mm.
Conclusions: Automated identification of CBCT-synthesized PA cephalometric landmarks did not sufficiently achieve the clinically favorable error range of less than 2 mm. However, AI landmark identification on PA cephalograms showed better consistency than manual identification.

Keywords: Artificial intelligence, Convolutional neural networks, Posteroanterior cephalometrics, Cone-beam computed tomography

INTRODUCTION

Posteroanterior (PA) cephalometric analysis is a useful tool for evaluating the cranial-dentofacial structure and growth pattern in the transverse plane.1-3 PA cephalometric analysis in combination with lateral cephalometric analysis provides a substantial amount of diagnostic data for a comprehensive three-dimensional assessment in daily practice.4-6 Therefore, PA cephalograms have proven to be indispensable diagnostic and evaluation tools in planning comprehensive orthodontic treatment.

Nevertheless, some studies have debated the reliability and reproducibility of landmark identification on PA cephalograms. In general, inter-examiner error was reported to be significantly higher than intra-examiner error in assessments based on PA cephalograms.1,3-5 The effect of head positioning on the accuracy of landmark identification is greater with PA cephalometric analysis than with lateral cephalometric analysis.6 Therefore, examiner-dependent landmark identification errors, which generate major errors in cephalometric analysis, are inevitable.7,8 Moreover, conventional two-dimensional radiograms translate a stereoscopic anatomic structure into a planar one, generating layered images that obscure the landmarks,9,10 and rotation of the head position can generate distortion that interferes with landmark identification.11

The application of artificial intelligence (AI) techniques in automated landmark identification is a new approach for cephalometric analysis that aims to facilitate landmark identification, since this identification can be performed rapidly with high consistency. Although various machine learning-based automatic landmark identification systems using lateral cephalograms have been proposed to date, we found no corresponding studies that used PA cephalograms.

This study is the first trial to evaluate the validity of a multi-stage convolutional neural network (CNN)-based automatic landmark prediction system using cone-beam computed tomography (CBCT)-synthesized PA cephalograms. The null hypothesis is that there are no differences in the reproducibility of landmark identification between AI prediction and manual identification.

MATERIALS AND METHODS

This retrospective study was performed under approval from the Institutional Review Board of Kyung Hee University Dental Hospital (IRB number: IRB-KH DT19013).

A total of 430 CBCT scans were selected from patients who met the following inclusion criteria: 1) patients who had visited Kyung Hee University Dental Hospital; 2) patients with growth potential, with orthodontic appliances and/or dental prostheses or surgical screws and/or plates, and with or without skeletal asymmetry; 3) no missing upper or lower permanent incisors, no missing upper or lower permanent first molars, and no craniofacial syndromes or dentofacial traumas.

The CBCT scans were taken at a 0.39-mm voxel size, with a 16 × 13-cm field of view and a 30-second scan time at 10 mA and 80 kV (Alphard Vega; Asahi Roentgen, Kyoto, Japan). The obtained data were imported as Digital Imaging and Communications in Medicine (DICOM) files into Dolphin software 11.95 Premium (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA). All CBCT images were reoriented according to the reference anatomic structures. The horizontal plane was established with reference to the right porion, left orbitale, and right orbitale. The sagittal plane was perpendicular to the frontozygomatic suture line and the horizontal plane, passing through the nasion. The coronal plane, also passing through the nasion, was made perpendicular to the horizontal plane. The PA cephalogram was synthesized from the reoriented CBCT (CBCT-PA) images and saved in JPG format with a width of 2,048 pixels and heights ranging from 1,755 to 1,860 pixels (Figure 1).

Figure 1. Flow diagram showing the processing of the cone-beam computed tomography (CBCT)-synthesized posteroanterior (PA) cephalograms. The raw CBCT data were imported using Dolphin software 11.95 Premium (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA). The head position was adjusted to reduce layered bilateral structures. The ‘Build X-ray’ button in the software was used to synthesize the CBCT-PA with orthogonal X-ray exposure while eliminating virtual magnification, which causes image distortion.

Twenty-three skeletal landmarks used for Tweemac analysis were selected and manually identified on the 345 images used for model training. The landmarks on the remaining 85 images were manually identified twice, two weeks apart, for testing the model and validating intra-examiner consistency. For comparison with the examiner's consistency, the AI assessments were also performed twice. All manual identification procedures were completed by a single examiner with more than five years of orthodontic experience. A detailed description of the PA landmark definitions is given in Table 1. The See-through ceph software (See-through Tech Inc., Seoul, Korea) was used for landmark identification, with the nasion as the origin of the coordinate system. The coordinates of each landmark were recorded in Microsoft Excel (version 2010; Microsoft, Redmond, WA, USA); for every landmark, the X and Y values were recorded with reference to the nasion origin (X0, Y0).
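As a concrete illustration of this coordinate convention, converting a landmark's image pixel position into millimeters relative to the nasion origin (X0, Y0) can be sketched as follows. This is a minimal sketch: the function name and data layout are hypothetical, and the 10-pixels-per-millimeter scale is the one stated later in the deep learning description.

```python
# Hypothetical sketch: express a landmark's pixel coordinates in mm
# relative to the nasion origin (X0, Y0). The 10 px/mm scale follows
# the unit conversion described in the Methods; names are illustrative.

PIXELS_PER_MM = 10.0  # 1 mm corresponds to 10 pixels in these images

def to_nasion_relative_mm(landmark_px, nasion_px):
    """Return the (x, y) position of a landmark in mm relative to nasion."""
    dx = (landmark_px[0] - nasion_px[0]) / PIXELS_PER_MM
    dy = (landmark_px[1] - nasion_px[1]) / PIXELS_PER_MM
    return (dx, dy)

# Example: a landmark 25 pixels right of and 50 pixels below the nasion
print(to_nasion_relative_mm((1049, 950), (1024, 900)))  # → (2.5, 5.0)
```

Each landmark row in the spreadsheet would then hold one such (x, y) pair per identification session.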

Table 1. Landmark definitions

Bilateral skeletal landmarks
  Lateral orbit right (LOR): The most anterior point at the intersection of the frontozygomatic suture on the right inner rim of the orbit
  Lateral orbit left (LOL): The most anterior point at the intersection of the frontozygomatic suture on the left inner rim of the orbit
  Condyle point right (COR): The most superior (sagittal perspective) and middle (frontal perspective) point on the contour of the right condyle head
  Condyle point left (COL): The most superior (sagittal perspective) and middle (frontal perspective) point on the contour of the left condyle head
  Jugal point right (JR): The intersection of the outline of the right maxillary tuberosity and the zygomatic buttress
  Jugal point left (JL): The intersection of the outline of the left maxillary tuberosity and the zygomatic buttress
  Right antegonial notch (AGR): The deepest point on the curvature of the right antegonial notch
  Left antegonial notch (AGL): The deepest point on the curvature of the left antegonial notch
Midline skeletal landmarks
  Crista galli (CG): The most superior and anterior point on the median ridge of bone that projects upward from the cribriform plate of the ethmoid bone
  Anterior nasal spine (ANS): Center of the intersection of the nasal septum and the palate
  Menton (Me): Midpoint on the inferior border of the mental protuberance
Bilateral dentoalveolar landmarks
  Upper first molar axis right (U6AR): Furcation of the upper right first molar
  Upper first molar axis left (U6AL): Furcation of the upper left first molar
  Alveolar crest right (ACR): The right side of the most cervical rim of the alveolar bone proper
  Alveolar crest left (ACL): The left side of the most cervical rim of the alveolar bone proper
  Upper first molar cusp right (U6MCR): The upper right first molar mesiobuccal cusp tip
  Upper first molar cusp left (U6MCL): The upper left first molar mesiobuccal cusp tip
  Upper first molar central fossa right (U6CFR): The upper right first molar central fossa
  Upper first molar central fossa left (U6CFL): The upper left first molar central fossa
  Lower first molar mesiobuccal cusp tip right (L6MBR): The lower right first molar mesiobuccal cusp tip
  Lower first molar mesiobuccal cusp tip left (L6MBL): The lower left first molar mesiobuccal cusp tip
  Lower first molar axis right (L6AR): Furcation of the lower right first molar
  Lower first molar axis left (L6AL): Furcation of the lower left first molar


The multi-stage CNN model used in this study was developed on a personal computer with Keras (https://keras.io/), a Python deep learning application programming interface. The model consisted of six convolution layers followed by two dense layers (Figure 2). Deep learning was performed using a GeForce GTX 1080ti GPU (Nvidia Co., Santa Clara, CA, USA) on the Ubuntu 14.04 platform (https://releases.ubuntu.com/14.04/). All images were preprocessed for training by the examiner, and the landmarks were identified manually on the raw-size images. To optimize the data for model training, we first resized all images to 400 × 400 pixels. We then trained the model on the entire target dataset with the corresponding landmarks for each image in training phase 1. In phase 2, we continued training the model with images cropped to local areas that included the landmarks; the model was trained for each landmark individually with five different image crop sizes: 250, 200, 150, 100, and 50 pixels. Briefly, the proposed model was constructed in five stages using the deep CNN model. In the training stage, the learning rate was 0.01, the batch size was 100, and training ran for a total of 200 epochs. A scale of 10 pixels per millimeter was used for unit conversion. The trained model automatically identified each landmark on the 85 test images twice. Figure 2 illustrates the summarized experimental flow, and the visualized effect of each convolutional layer during the first-stage training is illustrated in Figure 3.
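The coarse-to-fine procedure above (predict on the resized image, then re-predict on successively smaller crops around the current estimate) can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the authors' code: the crop-window clamping and the stand-in predictor `predict_in_patch` (which replaces the stage-specific trained CNNs) are assumptions for the sketch.

```python
import numpy as np

# Sketch of the multi-stage coarse-to-fine refinement: each stage crops a
# smaller patch (250, 200, 150, 100, then 50 pixels) around the current
# landmark estimate and re-predicts the landmark inside that patch.

CROP_SIZES = (250, 200, 150, 100, 50)

def refine_landmark(image, initial_xy, predict_in_patch):
    """Refine an (x, y) landmark estimate through the five crop stages.

    predict_in_patch(patch) must return the landmark position in patch
    coordinates; the result is mapped back to full-image coordinates.
    """
    h, w = image.shape[:2]
    x, y = initial_xy
    for size in CROP_SIZES:
        half = size // 2
        # Clamp the crop window so it stays inside the image bounds.
        x0 = int(np.clip(x - half, 0, w - size))
        y0 = int(np.clip(y - half, 0, h - size))
        patch = image[y0:y0 + size, x0:x0 + size]
        px, py = predict_in_patch(patch)
        x, y = x0 + px, y0 + py  # back to full-image coordinates
    return x, y
```

With a toy stand-in predictor that simply locates the brightest pixel in each patch, an initial guess roughly 100 pixels away from the true landmark is recovered in the first stage and retained through the finer crops, which conveys why each successive stage can afford a smaller search window.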

Figure 2. Schematic experimental design summary of the multi-stage convolutional neural network (CNN) model.
Conv, convolution; FC, fully connected; AI, artificial intelligence.

Figure 3. The visualized effect of each convolutional layer during the first stage training.

The absolute value of the mean distance difference was calculated in millimeters with the following formulas:

$$\mathrm{MRE} = \frac{1}{n}\sum_{i=1}^{n} R_i,$$
$$\mathrm{SD} = \sqrt{\frac{\sum_{i=1}^{n} \left(R_i - \mathrm{MRE}\right)^2}{n-1}},$$
$$R = \sqrt{\Delta x^2 + \Delta y^2}.$$

Microsoft Excel was used for all calculations.
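The metrics above are straightforward to implement outside a spreadsheet as well; the sketch below (illustrative names, not the authors' actual workbook) computes the radial error R, the MRE, its SD, and the 2-mm SDR for paired ground-truth and predicted landmark coordinates.

```python
import math

# Error metrics from the formulas above: radial error R per landmark pair,
# mean radial error (MRE), its standard deviation (SD, with n-1 in the
# denominator), and the successful detection rate (SDR) within 2 mm.

def radial_error(p, q):
    """R = sqrt(dx^2 + dy^2) between two (x, y) points, in mm."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mre_sd_sdr(truth, predicted, threshold_mm=2.0):
    errors = [radial_error(t, p) for t, p in zip(truth, predicted)]
    n = len(errors)
    mre = sum(errors) / n
    sd = math.sqrt(sum((r - mre) ** 2 for r in errors) / (n - 1))
    sdr = 100.0 * sum(r <= threshold_mm for r in errors) / n
    return mre, sd, sdr

# Example with three hypothetical landmark pairs (coordinates in mm):
truth = [(0.0, 0.0), (10.0, 5.0), (-3.0, 4.0)]
pred  = [(1.0, 0.0), (10.0, 8.0), (-3.0, 4.0)]
mre, sd, sdr = mre_sd_sdr(truth, pred)  # errors 1, 3, 0 -> MRE ≈ 1.33 mm, SDR ≈ 66.7%
```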

RESULTS

The multi-stage CNN-based AI achieved an MRE of 2.23 ± 2.02 mm. The highest accuracy was obtained for the alveolar crest left (error, 1.2 mm), while the lowest accuracy was obtained for the condyle point left (COL) (error, 4.24 mm). The AI showed errors of less than 2 mm for 10 of the 23 landmarks. The mean intra-examiner error in the repeated manual identification on the 85 images was 1.31 ± 0.94 mm. The repeated measurements of the menton and left antegonial notch showed an average error of 0.05 mm, indicating the highest consistency. In contrast, the average error for the lateral orbit left was 4.91 mm, representing the lowest consistency in repeated measurements (Figure 4 and Table 2). The prediction outcomes are illustrated in Figure 5.

Table 2. The MRE and SD for intra-examiner and AI identification

Landmark        Manual identification 1 vs. 2    Manual identification 1 vs. AI
                (intra-examiner)                 (AI prediction)
                MRE        SD                    MRE        SD
Bilateral
LOR             4.71       1.07                  1.81       2.01
LOL             4.92       1.08                  1.62       2.22
COR             1.97       0.99                  4.24       2.21
COL             0.91       1.02                  4.05       2.44
JR              0.42       0.96                  1.81       2.32
JL              0.67       0.98                  1.79       1.61
AGR             0.19       0.79                  1.65       1.91
AGL             0.05       0.85                  1.84       2.42
Midline
CG              1.33       1.10                  1.33       1.59
ANS             1.77       0.99                  1.45       2.08
Me              0.05       0.83                  2.14       1.83
Dentoalveolar
U6AR            1.08       0.90                  2.75       2.48
U6AL            0.84       0.92                  2.86       2.11
ACR             0.12       0.93                  1.03       1.68
ACL             0.09       0.96                  1.20       2.82
U6MCR           1.63       0.90                  2.63       2.09
U6MCL           1.25       0.92                  2.04       1.86
U6CFR           1.67       0.89                  2.36       2.09
U6CFL           1.27       0.91                  2.09       1.74
L6MBR           1.28       0.89                  3.48       3.23
L6MBL           1.31       0.90                  2.13       1.55
L6AR            1.48       0.85                  3.19       2.46
L6AL            1.17       0.88                  2.78       2.60
Average         1.31       0.94                  2.23       2.02

Unit of measurement: millimeter.

MRE, mean radial error; SD, standard deviation; AI, artificial intelligence.

See Table 1 for definitions of the other landmarks.



Figure 4. Mean radial error (MRE) for each landmark, and the average MRE between manual identification 1 and artificial intelligence (AI) (black) and manual identifications 1 and 2 (white).
See Table 1 for definitions of the other landmarks.

Figure 5. Accuracy of the convolutional neural network based on the automatic landmark identification system for cone-beam computed tomography using the synthesized posteroanterior cephalograms. The black dot represents the manually identified landmark and the white dot indicates the automatically identified landmark.
N, nasion.
See Table 1 for definitions of the other landmarks.

DISCUSSION

Although PA cephalometric analysis provides valuable information for comprehensive cranial-dentofacial evaluation, it generates more superimposed and layered anatomical structure images than lateral cephalograms. These superimposed structures affect the accuracy of landmark identification. Landmarks with poor reproducibility are difficult to identify, so accurate identification strongly depends on the examiner's experience and skill level. This may be the reason why PA cephalometric analysis is not routinely performed in orthodontic practice.1,2,4,12,13 This study examined the accuracy of an automated identification system based on a multi-stage CNN model for identification of cephalometric landmarks on CBCT-PA images.

Major et al.1 indicated that intra-examiner errors ranged from 0.28 mm to 2.23 mm when identification was performed five times. The inter-examiner range of errors for identifying landmarks on PA cephalograms was 0.31 to 4.79 mm, which represents significantly wide variation. Ulkur et al.2 assessed the intra- and inter-examiner consistencies in identifying landmarks on PA cephalograms and noted higher consistency among trained examiners. On the basis of their findings, only one examiner performed landmark identification in this study to minimize inter-examiner error. Consistent with the findings of previous studies, the intra-examiner assessments in the present study showed a large range of errors, from 0.05 to 4.91 mm. The examiner's relatively limited experience in PA landmark identification could be the reason for this large range of intra-examiner errors.

Shokri et al.14 assessed the errors in identification of landmarks in seven different head positions on PA cephalograms and observed significant differences in assessments of the antegonion, condyle, and zygomaticofrontal suture, which are farther from the midline and readily generate identification errors. Thus, we used CBCT-synthesized PA cephalograms with a consistent head orientation protocol that was more reliable than the conventional protocols for landmark identification. Pirttiniemi et al.5 described the inter-examiner reproducibility of landmark identification on PA cephalograms and found that the mean distance error was higher for the frontozygomatic suture point, pogonion, condyle, and upper and lower incisal midpoints. Previous studies have also noted that the condyle, gonion, and zygomatic process points were difficult to locate consistently.1,4,12,15 In this study, we likewise found relatively higher errors for the condyle and frontozygomatic areas (Figure 6).

Figure 6. Many structures are layered in the vertical dimension; the condyle and dentoalveolar areas of the posteroanterior (PA) cephalograms interfere with the artificial intelligence image recognition ability. A, Cone-beam computed tomography-synthesized PA cephalogram. B, Vertically layered images at the intersection of the nasal septum and palatal area (white arrow) and vertically layered and horizontally superimposed images in the dental area. C, D, Vertically layered images in the condyle area.

The emergence of AI in cephalometry has allowed orthodontists to reduce working time and identify landmarks more consistently. Many previous studies have introduced various approaches to improve the accuracy of automated identification. Arık et al.16 applied a deep CNN architecture-based fully automated landmark identification system for the first time. They trained the system with 400 conventional lateral cephalograms including 19 landmarks, and the result showed a successful detection rate (SDR) of 75.58% for a range of 2 mm. Park et al.17 trained You-Only-Look-Once version 3 (YOLOv3, https://pjreddie.com/darknet/yolo/) to construct an automated identification system and compared it with a previously introduced model using 1,311 conventional lateral cephalograms with 80 landmarks. They observed an SDR of 80.4% for a range of 2 mm, approximately 5% higher than the SDRs reported in previous studies. Unfortunately, no previous study has used PA cephalograms with an automated identification system. In comparison with similar studies that used lateral cephalograms, our model may have a relatively lower accuracy rate, but it still showed promising feasibility and potential.

Hwang et al.18 used their automatic landmark identification system to compare the stability of manual and AI-assisted landmark identification, and they clearly observed that while AI demonstrated consistency in repetitive identification, the intra-examiner manual landmark identification demonstrated an error of 0.97 ± 1.03 mm. In this study, AI prediction for PA cephalograms also showed consistent results, while the mean intra-examiner error was 1.31 ± 0.94 mm. Thus, AI-based assessments are not affected by subjectivity or external conditions, unlike human performance. Therefore, the AI prediction system for PA cephalograms outlined in this study might offer some advantage because of its ability to identify landmarks consistently in comparison with manual identification.

Deep learning approaches have recently been recommended as a superior technology for automatic location of anatomical landmarks in radiographs, and the CNN model shows outstanding image recognition ability with the application of AI.16-23 This study presented a fully automatic landmark identification system for PA cephalometric analysis based on the multi-stage CNN model. This deep CNN learning model emulates the human examiner's landmark identification pattern when performing prediction; thus, AI prediction is affected by the human examiner's identification pattern. If the examiner has difficulties in some areas, the AI predictions will reflect these difficulties. In our present study, AI prediction showed the lowest accuracy in the condyle area, while the repeated manual identifications showed the lowest consistency in the frontozygomatic suture area. Intra-examiner assessments can vary markedly across repetitions; here, the first manual identification showed greater variability in the condyle area than the second. However, the human examiner and AI differ in decision-making. For example, the human examiner can make exclusive decisions based on clinical knowledge, e.g., when a bilateral anatomic structure shows layered images. Thus, an experienced human examiner might outperform AI on complicated PA cephalograms that require comprehensive, subjective consideration.

A limitation of this study was that we could not conclusively compare the prediction accuracy of a model trained by a more experienced clinician (who might have shown smaller variations). Additional information on this aspect should be obtained through further investigation. In this study, we developed a feasible automated landmark identification system using PA cephalograms based on a multi-stage CNN model with a personal computer. AI might offer the advantage of consistency in repetitive tasks in comparison with a trained human examiner.

CONCLUSION

  • We used CBCT-synthesized PA cephalograms to reduce intra-examiner errors and to enhance the AI image recognition ability.

  • The null hypothesis was rejected. Our multi-stage CNN model for CBCT-synthesized PA cephalograms did not adequately achieve the clinically acceptable error range of less than 2 mm, but it showed better consistency than manual identification for repetitive landmark identification on PA cephalograms.

  • The skeletal landmarks condyle point right and left and most dentoalveolar landmarks showed significant differences between AI prediction and manual identification.

This article is partly based on the PhD thesis of M.J.K. The final form of the machine-learning system was developed by computer engineers at See-through Tech Inc. (Seoul, Korea), which is expected to own the patent in the future.


No potential conflict of interest relevant to this article was reported.

REFERENCES

  1. Major PW, Johnson DE, Hesse KL, Glover KE. Landmark identification error in posterior anterior cephalometrics. Angle Orthod 1994;64:447-54.
  2. Ulkur F, Ozdemir F, Germec-Cakan D, Kaspar EC. Landmark errors on posteroanterior cephalograms. Am J Orthod Dentofacial Orthop 2016;150:324-31.
  3. Na ER, Aljawad H, Lee KM, Hwang HS. A comparative study of the reproducibility of landmark identification on posteroanterior and anteroposterior cephalograms generated from cone-beam computed tomography scans. Korean J Orthod 2019;49:41-8.
  4. Sicurezza E, Greco M, Giordano D, Maiorana F, Leonardi R. Accuracy of landmark identification on postero-anterior cephalograms. Prog Orthod 2012;13:132-40.
  5. Pirttiniemi P, Miettinen J, Kantomaa T. Combined effects of errors in frontal-view asymmetry diagnosis. Eur J Orthod 1996;18:629-36.
  6. Major PW, Johnson DE, Hesse KL, Glover KE. Effect of head orientation on posterior anterior cephalometric landmark identification. Angle Orthod 1996;66:51-60.
  7. Smektała T, Jędrzejewski M, Szyndel J, Sporniak-Tutak K, Olszewski R. Experimental and clinical assessment of three-dimensional cephalometry: a systematic review. J Craniomaxillofac Surg 2014;42:1795-801.
  8. Kamoen A, Dermaut L, Verbeeck R. The clinical significance of error measurement in the interpretation of treatment results. Eur J Orthod 2001;23:569-78.
  9. Gribel BF, Gribel MN, Frazão DC, McNamara JA Jr, Manzi FR. Accuracy and reliability of craniometric measurements on lateral cephalometry and 3D measurements on CBCT scans. Angle Orthod 2011;81:26-35.
  10. Gribel BF, Gribel MN, Manzi FR, Brooks SL, McNamara JA Jr. From 2D to 3D: an algorithm to derive normal values for 3-dimensional computerized assessment. Angle Orthod 2011;81:3-10.
  11. Meiyappan N, Tamizharasi S, Senthilkumar KP, Janardhanan K. Natural head position: an overview. J Pharm Bioallied Sci 2015;7(Suppl 2):S424-7.
  12. El-Mangoury NH, Shaheen SI, Mostafa YA. Landmark identification in computerized posteroanterior cephalometrics. Am J Orthod Dentofacial Orthop 1987;91:57-61.
  13. Leonardi R, Annunziata A, Caltabiano M. Landmark identification error in posteroanterior cephalometric radiography. A systematic review. Angle Orthod 2008;78:761-5.
  14. Shokri A, Miresmaeili A, Farhadian N, Falah-Kooshki S, Amini P, Mollaie N. Effect of changing the head position on accuracy of transverse measurements of the maxillofacial region made on cone beam computed tomography and conventional posterior-anterior cephalograms. Dentomaxillofac Radiol 2017;46:20160180.
  15. Oshagh M, Shahidi SH, Danaei SH. Effects of image enhancement on reliability of landmark identification in digital cephalometry. Indian J Dent Res 2013;24:98-103.
  16. Arık SÖ, Ibragimov B, Xing L. Fully automated quantitative cephalometry using convolutional neural networks. J Med Imaging (Bellingham) 2017;4:014501.
  17. Park JH, Hwang HW, Moon JH, Yu Y, Kim H, Her SB, et al. Automated identification of cephalometric landmarks: part 1-comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod 2019;89:903-9.
  18. Hwang HW, Park JH, Moon JH, Yu Y, Kim H, Her SB, et al. Automated identification of cephalometric landmarks: part 2-might it be better than human? Angle Orthod 2019;90:69-76.
  19. Takahashi R, Matsubara T, Uehara K. Multi-stage convolutional neural networks for robustness to scale transformation. Paper presented at: 2017 International Symposium on Nonlinear Theory and Its Applications; 2017 Dec 4-7; Cancun, Mexico. NOLTA; 2017. p. 692-5.
  20. Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK. Medical image analysis using convolutional neural networks: a review. J Med Syst 2018;42:226.
  21. Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data 2019;6:113.
  22. Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent 2019;91:103226.
  23. Park JJ, Kim KA, Nam Y, Choi MH, Choi SY, Rhie J. Convolutional-neural-network-based diagnosis of appendicitis via CT scans in patients with acute abdominal pain presenting in the emergency department. Sci Rep 2020;10:9556.

Article

Original Article

Korean J Orthod 2021; 51(2): 77-85

Published online March 25, 2021 https://doi.org/10.4041/kjod.2021.51.2.77

Copyright © The Korean Association of Orthodontists.

Evaluation of a multi-stage convolutional neural network-based fully automated landmark identification system using cone-beam computed tomographysynthesized posteroanterior cephalometric images

Min-Jung Kima , Yi Liub, Song Hee Ohc, Hyo-Won Ahna, Seong-Hun Kima , Gerald Nelsond

aDepartment of Orthodontics, Graduate School, Kyung Hee University, Seoul, Korea
bDepartment of Orthodontics, Peking University School of Stomatology, Beijing, China
cDepartment of Oral and Maxillofacial Radiology, Graduate School, Kyung Hee University, Seoul, Korea
dDivision of Orthodontics, Department of Orofacial Science, University of California San Francisco, CA, USA

Correspondence to:Seong-Hun Kim.
Professor and Head, Department of Orthodontics, Graduate School, Kyung Hee University, 26 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Korea.
Tel +82-2-958-9392 e-mail bravortho@gmail.com

Received: September 8, 2020; Revised: October 7, 2020; Accepted: October 7, 2020

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Objective: To evaluate the accuracy of a multi-stage convolutional neural network (CNN) model-based automated identification system for posteroanterior (PA) cephalometric landmarks.
Methods: The multi-stage CNN model was implemented with a personal computer. A total of 430 PA-cephalograms synthesized from cone-beam computed tomography scans (CBCT-PA) were selected as samples. Twenty-three landmarks used for Tweemac analysis were manually identified on all CBCT-PA images by a single examiner. Intra-examiner reproducibility was confirmed by repeating the identification on 85 randomly selected images, which were subsequently set as test data, with a two-week interval before training. For initial learning stage of the multi-stage CNN model, the data from 345 of 430 CBCT-PA images were used, after which the multi-stage CNN model was tested with previous 85 images. The first manual identification on these 85 images was set as a truth ground. The mean radial error (MRE) and successful detection rate (SDR) were calculated to evaluate the errors in manual identification and artificial intelligence (AI) prediction.
Results: The AI showed an average MRE of 2.23 ± 2.02 mm with an SDR of 60.88% for errors of 2 mm or lower. However, in a comparison of the repetitive task, the AI predicted landmarks at the same position, while the MRE for the repeated manual identification was 1.31 ± 0.94 mm.
Conclusions: Automated identification for CBCT-synthesized PA cephalometric landmarks did not sufficiently achieve the clinically favorable error range of less than 2 mm. However, AI landmark identification on PA cephalograms showed better consistency than manual identification.

Keywords: Artificial intelligence, Convolutional neural networks, Posteroanterior cephalometrics, Cone-beam computed tomography

INTRODUCTION

Posteroanterior (PA) cephalometric analysis is a useful tool for evaluating the cranial-dentofacial structure and growth pattern in the transverse plane.1-3 PA cephalometric analysis in combination with lateral cephalometric analysis provides a substantial amount of diagnostic data for a comprehensive three-dimensional assessment in daily practice.4-6 Therefore, PA cephalograms have proven to be indispensable diagnostic and evaluation tools in planning comprehensive orthodontic treatment.

Nevertheless, some studies have debated the reliability and reproducibility of landmark identification on PA cephalograms. In general, inter-examiner error has been reported to be significantly higher than intra-examiner error in assessments based on PA cephalograms.1,3-5 The effect of head positioning on the accuracy of landmark identification is greater with PA cephalometric analysis than with lateral cephalometric analysis.6 Therefore, landmark identification errors stemming from differences in examiner experience are inevitable and constitute a major source of error in cephalometric analysis.7,8 Moreover, conventional two-dimensional radiographs translate stereoscopic anatomic structures into a planar image, generating layered images that obscure the landmarks,9,10 and rotation of the head position can cause distortion that interferes with landmark identification.11

The application of artificial intelligence (AI) techniques in automated landmark identification is a new approach for cephalometric analysis that aims to facilitate landmark identification, since this identification can be performed rapidly with high consistency. Although various machine learning-based automatic landmark identification systems using lateral cephalograms have been proposed to date, we found no corresponding studies that used PA cephalograms.

This study is the first trial to evaluate the validity of a multi-stage convolutional neural network (CNN)-based automatic landmark prediction system using cone-beam computed tomography (CBCT)-synthesized PA cephalograms. The null hypothesis is that there are no differences in the reproducibility of landmark identification between AI prediction and manual identification.

MATERIALS AND METHODS

This retrospective study was performed under approval from the Institutional Review Board of Kyung Hee University Dental Hospital (IRB number: IRB-KH DT19013).

A total of 430 CBCT scans were selected from patients who met the following inclusion criteria: 1) patients who had visited Kyung Hee University Dental Hospital; 2) patients with growth potential, orthodontic appliances and/or dental prostheses, or surgical screws and/or plates, and patients with or without skeletal asymmetry; 3) no missing upper or lower permanent incisors, no missing upper or lower permanent first molars, no craniofacial syndromes, and no dentofacial trauma.

The CBCT scans were acquired at a voxel size of 0.39 mm, with a 16 × 13-cm field of view and a 30-second scan time at 10 mA and 80 kV (Alphard Vega; Asahi Roentgen, Kyoto, Japan). The obtained data were imported as Digital Imaging and Communications in Medicine (DICOM) files into Dolphin software 11.95 Premium (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA). All CBCT images were reoriented according to reference anatomic structures. The horizontal plane was established with reference to the right porion, left orbitale, and right orbitale. The sagittal plane was set perpendicular to the frontozygomatic suture line and the horizontal plane, passing through the nasion. The coronal plane, also passing through the nasion, was made perpendicular to the horizontal plane. The PA cephalograms were synthesized from the reoriented CBCT images (CBCT-PA) and saved in JPG format with a width of 2,048 pixels and heights ranging from 1,755 to 1,860 pixels (Figure 1).
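As a rough illustration of this synthesis step (not the Dolphin implementation, whose internals are not described in the text), an orthogonal PA projection can be approximated by collapsing a reoriented CBCT volume along the posteroanterior axis; the array layout and grayscale rescaling below are assumptions:

```python
import numpy as np

def synthesize_pa_projection(volume: np.ndarray) -> np.ndarray:
    """Collapse a reoriented CBCT volume into a 2-D PA-cephalogram-like image.

    `volume` is assumed to be indexed as (superoinferior, posteroanterior,
    mediolateral); axis 1 is taken as the posteroanterior direction here.
    """
    # Orthogonal (parallel-ray) projection: summing attenuation along the
    # PA axis avoids the virtual magnification of a point X-ray source,
    # which is the distortion the 'Build X-ray' step is said to eliminate.
    projection = volume.sum(axis=1).astype(np.float64)
    # Rescale to an 8-bit grayscale range for export (e.g., to JPG).
    projection -= projection.min()
    if projection.max() > 0:
        projection /= projection.max()
    return (projection * 255).astype(np.uint8)

# Toy example with a random 64-voxel cube standing in for a CBCT volume.
pa = synthesize_pa_projection(np.random.rand(64, 64, 64))
print(pa.shape)  # (64, 64)
```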

Figure 1. Flow diagram showing the processing of the cone-beam computed tomography (CBCT)-synthesized posteroanterior (PA) cephalograms. The raw CBCT data were imported using Dolphin software 11.95 Premium (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA). The head position was adjusted to reduce layered bilateral structures. The ‘Build X-ray’ button in the software was used to synthesize the CBCT-PA with orthogonal X-ray exposure while eliminating virtual magnification, which causes image distortion.

Twenty-three skeletal landmarks used for Tweemac analysis were selected and manually identified on the 345 images used for model training. The landmarks on the remaining 85 images were manually identified twice, two weeks apart, for testing the model and validating intra-examiner consistency. For comparison with the examiner's consistency, the AI assessments were also performed twice. All manual identification procedures were completed by a single examiner with more than five years of orthodontic experience. A detailed description of the PA landmark definitions is given in Table 1. The See-through Ceph software (See-through Tech Inc., Seoul, Korea) was used for landmark identification, with the nasion as the origin of the coordinate system. The coordinates of each landmark were recorded in Microsoft Excel (version 2010; Microsoft, Redmond, WA, USA). Two numeric values (x and y) were recorded for each landmark with reference to the origin (X0, Y0).
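By way of illustration (the function name and sample values are hypothetical), pixel coordinates can be re-expressed in the nasion-origin frame described above and converted to millimeters with the 10-pixels-per-millimeter scale stated later in the Methods:

```python
# Scale stated in the Methods: 1 mm corresponds to 10 pixels.
PIXELS_PER_MM = 10.0

def to_nasion_frame(landmarks_px, nasion_px):
    """Shift landmark pixel coordinates so the nasion becomes (0, 0),
    then convert to millimeters. Inputs are {name: (x, y)} in pixels."""
    x0, y0 = nasion_px
    return {name: ((x - x0) / PIXELS_PER_MM, (y - y0) / PIXELS_PER_MM)
            for name, (x, y) in landmarks_px.items()}

# Hypothetical example: menton 1,250 px directly below the nasion.
coords = to_nasion_frame({"Me": (1024.0, 1650.0)}, nasion_px=(1024.0, 400.0))
print(coords["Me"])  # (0.0, 125.0)
```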

Table 1. Landmark definitions.

Bilateral skeletal landmarks
Lateral orbit right (LOR): The most anterior point at the intersection of the frontozygomatic suture on the right inner rim of the orbit
Lateral orbit left (LOL): The most anterior point at the intersection of the frontozygomatic suture on the left inner rim of the orbit
Condyle point right (COR): The most superior (sagittal perspective) and middle (frontal perspective) point on the contour of the right condylar head
Condyle point left (COL): The most superior (sagittal perspective) and middle (frontal perspective) point on the contour of the left condylar head
Jugal point right (JR): The intersection of the outline of the right maxillary tuberosity and the zygomatic buttress
Jugal point left (JL): The intersection of the outline of the left maxillary tuberosity and the zygomatic buttress
Right antegonial notch (AGR): The deepest point on the curvature of the right antegonial notch
Left antegonial notch (AGL): The deepest point on the curvature of the left antegonial notch

Midline skeletal landmarks
Crista galli (CG): The most superior and anterior point on the median ridge of bone that projects upward from the cribriform plate of the ethmoid bone
Anterior nasal spine (ANS): Center of the intersection of the nasal septum and the palate
Menton (Me): Midpoint on the inferior border of the mental protuberance

Bilateral dentoalveolar landmarks
Upper first molar axis right (U6AR): Furcation of the upper right first molar
Upper first molar axis left (U6AL): Furcation of the upper left first molar
Alveolar crest right (ACR): The most cervical rim of the alveolar bone proper on the right side
Alveolar crest left (ACL): The most cervical rim of the alveolar bone proper on the left side
Upper first molar cusp right (U6MCR): The upper right first molar mesiobuccal cusp tip
Upper first molar cusp left (U6MCL): The upper left first molar mesiobuccal cusp tip
Upper first molar central fossa right (U6CFR): The upper right first molar central fossa
Upper first molar central fossa left (U6CFL): The upper left first molar central fossa
Lower first molar mesiobuccal cusp tip right (L6MBR): The lower right first molar mesiobuccal cusp tip
Lower first molar mesiobuccal cusp tip left (L6MBL): The lower left first molar mesiobuccal cusp tip
Lower first molar axis right (L6AR): Furcation of the lower right first molar
Lower first molar axis left (L6AL): Furcation of the lower left first molar


The multi-stage CNN model used in this study was developed on a personal computer with Keras (https://keras.io/), a Python deep learning application programming interface. The model consisted of six convolution layers followed by two dense layers (Figure 2). Deep learning was performed using a GeForce GTX 1080 Ti GPU (Nvidia Co., Santa Clara, CA, USA) on the Ubuntu 14.04 platform (https://releases.ubuntu.com/14.04/). All images were preprocessed for training by the examiner. The landmarks were identified manually on the images at their original size. To optimize the data for model training, all images were resized to 400 × 400 pixels in the first step. Subsequently, the entire target dataset, with the corresponding landmarks for each image, was used for training in phase 1. In phase 2, the model was further trained with images cropped to local areas that included the landmarks. The model was trained for each landmark individually with five different crop sizes: 250, 200, 150, 100, and 50 pixels. In short, the proposed model was constructed in five stages using the deep CNN model. In the training stage, the learning rate was 0.01, the batch size was 100, and training ran for a total of 200 epochs. A scale of 10 pixels per millimeter was used for unit conversion. The trained model automatically identified each landmark on the 85 test images twice. Figure 2 illustrates the summarized experimental flow. The visualized effect of each convolutional layer during the first-stage training is illustrated in Figure 3.
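The architecture described above (six convolution layers followed by two dense layers, trained with a 0.01 learning rate) can be sketched in Keras. This is not the authors' code: the filter counts, kernel sizes, pooling, activations, and choice of SGD optimizer are all assumptions, and the output is taken to be one (x, y) regression pair per landmark.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_stage_model(input_size: int = 400, n_landmarks: int = 23) -> tf.keras.Model:
    """Illustrative six-conv / two-dense landmark regressor (assumed details)."""
    model = models.Sequential([tf.keras.Input(shape=(input_size, input_size, 1))])
    # Six convolution layers; filter counts here are illustrative only.
    for filters in (16, 32, 32, 64, 64, 128):
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    # Two dense layers; the last one regresses x/y coordinates per landmark.
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dense(n_landmarks * 2))
    # Learning rate 0.01 as stated in the text; SGD and MSE loss are assumed.
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss="mse")
    return model

# Stage 1 trains on whole 400 x 400 images; subsequent stages would retrain
# per landmark on progressively smaller crops (250, 200, 150, 100, 50 px).
stage1 = build_stage_model(400, 23)
print(stage1.output_shape)  # (None, 46)
```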

Figure 2. Schematic experimental design summary of the multi-stage convolutional neural network (CNN) model.
Conv, convolution; FC, fully connected; AI, artificial intelligence.

Figure 3. The visualized effect of each convolutional layer during the first stage training.

The mean distance difference was calculated in millimeters using the following formulas:

mean radial error (MRE) = (1/n) Σᵢ₌₁ⁿ Rᵢ,
standard deviation (SD) = √[Σᵢ₌₁ⁿ (Rᵢ − MRE)² / (n − 1)],
Rᵢ = √(Δxᵢ² + Δyᵢ²),

where Δxᵢ and Δyᵢ are the horizontal and vertical coordinate differences for the i-th measurement pair.

Microsoft Excel was used for all calculations.
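For readers who prefer a scripted computation, the same quantities, plus the SDR reported in the Results (taken here as the percentage of radial errors at or below a threshold), can be sketched in Python; the sample coordinates are invented for illustration:

```python
import math

def radial_errors(points_a, points_b):
    """R_i = sqrt(dx^2 + dy^2) for paired landmark coordinates (in mm)."""
    return [math.hypot(ax - bx, ay - by)
            for (ax, ay), (bx, by) in zip(points_a, points_b)]

def mre(errors):
    """Mean radial error: (1/n) * sum of R_i."""
    return sum(errors) / len(errors)

def sd(errors):
    """Sample standard deviation of the radial errors (n - 1 denominator)."""
    m = mre(errors)
    return math.sqrt(sum((r - m) ** 2 for r in errors) / (len(errors) - 1))

def sdr(errors, threshold_mm=2.0):
    """Successful detection rate: % of errors at or below the threshold."""
    return 100.0 * sum(r <= threshold_mm for r in errors) / len(errors)

# Invented manual vs. AI coordinates for three landmarks (mm).
manual = [(0.0, 0.0), (10.0, 5.0), (3.0, 4.0)]
ai = [(1.0, 0.0), (10.0, 8.0), (3.0, 4.0)]
errs = radial_errors(manual, ai)  # [1.0, 3.0, 0.0]
print(round(mre(errs), 2), round(sdr(errs), 1))  # 1.33 66.7
```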

RESULTS

The multi-stage CNN-based AI achieved an MRE of 2.23 ± 2.02 mm. The highest accuracy was obtained for the alveolar crest right (error, 1.03 mm), while the lowest accuracy was obtained for the condyle point right (COR) (error, 4.24 mm). The AI showed errors of less than 2 mm for 10 of the 23 landmarks. The mean intra-examiner error in the repeated manual identification on the 85 images was 1.31 ± 0.94 mm. The repeated measurements of the menton and the left antegonial notch showed an average error of 0.05 mm, indicating the highest consistency. In contrast, the average error for the lateral orbit left was 4.92 mm, representing the lowest consistency in repeated measurements (Figure 4 and Table 2). The prediction outcomes are illustrated in Figure 5.

Table 2. The MRE and SD for intra-examiner and AI identification (unit: mm).

Landmark    Manual identification 1 vs. 2    Manual identification 1 vs. AI
            (intra-examiner)                 (AI prediction)
            MRE      SD                      MRE      SD
Bilateral
LOR         4.71     1.07                    1.81     2.01
LOL         4.92     1.08                    1.62     2.22
COR         1.97     0.99                    4.24     2.21
COL         0.91     1.02                    4.05     2.44
JR          0.42     0.96                    1.81     2.32
JL          0.67     0.98                    1.79     1.61
AGR         0.19     0.79                    1.65     1.91
AGL         0.05     0.85                    1.84     2.42
Midline
CG          1.33     1.10                    1.33     1.59
ANS         1.77     0.99                    1.45     2.08
Me          0.05     0.83                    2.14     1.83
Dentoalveolar
U6AR        1.08     0.90                    2.75     2.48
U6AL        0.84     0.92                    2.86     2.11
ACR         0.12     0.93                    1.03     1.68
ACL         0.09     0.96                    1.20     2.82
U6MCR       1.63     0.90                    2.63     2.09
U6MCL       1.25     0.92                    2.04     1.86
U6CFR       1.67     0.89                    2.36     2.09
U6CFL       1.27     0.91                    2.09     1.74
L6MBR       1.28     0.89                    3.48     3.23
L6MBL       1.31     0.90                    2.13     1.55
L6AR        1.48     0.85                    3.19     2.46
L6AL        1.17     0.88                    2.78     2.60
Average     1.31     0.94                    2.23     2.02

MRE, mean radial error; SD, standard deviation; AI, artificial intelligence.

See Table 1 for definitions of the landmarks.



Figure 4. Mean radial error (MRE) for each landmark, and the average MRE between manual identification 1 and artificial intelligence (AI) (black) and manual identifications 1 and 2 (white).
See Table 1 for definitions of the other landmarks.

Figure 5. Accuracy of the convolutional neural network based on the automatic landmark identification system for cone-beam computed tomography using the synthesized posteroanterior cephalograms. The black dot represents the manually identified landmark and the white dot indicates the automatically identified landmark.
N, nasion.
See Table 1 for definitions of the other landmarks.

DISCUSSION

Although PA cephalometric analysis provides valuable information for a comprehensive cranial-dentofacial evaluation, it generates more superimposed and layered images of anatomical structures than lateral cephalography. These superimposed structures affect the accuracy of landmark identification. Landmarks with poor reproducibility are difficult to identify, so accurate identification depends strongly on the examiner's experience and skill. This may be why PA cephalometric analysis is not routinely performed in orthodontic practice.1,2,4,12,13 This study examined the accuracy of an automated identification system based on a multi-stage CNN model for the identification of cephalometric landmarks on CBCT-PA images.

Major et al.1 reported that intra-examiner errors ranged from 0.28 mm to 2.23 mm when identification was performed five times. The inter-examiner range of errors for identifying landmarks on PA cephalograms was 0.31 to 4.79 mm, representing a significantly wider variation. Ulkur et al.2 assessed intra- and inter-examiner consistency in identifying landmarks on PA cephalograms and noted higher consistency among trained examiners. On the basis of their findings, only one examiner performed landmark identification in this study to eliminate inter-examiner error. Consistent with the findings of previous studies, the intra-examiner assessments in the present study showed a large range of errors, from 0.05 to 4.92 mm. The examiner's relatively limited experience in PA landmark identification could explain this large range of errors in the intra-examiner measurements.

Shokri et al.14 assessed landmark identification errors in seven different head positions on PA cephalograms and observed significant differences for the antegonion, condyle, and zygomaticofrontal suture, which lie farther from the midline and readily generate identification errors. Thus, we used CBCT-synthesized PA cephalograms with a consistent head orientation protocol, which is more reliable for landmark identification than conventional protocols. Pirttiniemi et al.5 described the inter-examiner reproducibility of landmark identification on PA cephalograms and found that the mean distance error was higher for the frontozygomatic suture point, pogonion, condyle, and the upper and lower incisal midpoints. Previous studies have also noted that the condyle, gonion, and zygomatic process points are difficult to locate consistently.1,4,12,15 In this study, we likewise found relatively higher errors in the condyle and frontozygomatic areas (Figure 6).

Figure 6. Many structures are layered in the vertical dimension; the condyle and dentoalveolar areas of posteroanterior (PA) cephalograms interfere with artificial intelligence image recognition. A, Cone-beam computed tomography-synthesized PA cephalogram. B, Vertically layered images at the intersection of the nasal septum and palatal area (white arrow) and vertically layered, horizontally superimposed images in the dental area. C, D, Vertically layered images in the condyle area.

The emergence of AI in cephalometry has allowed orthodontists to reduce working time and identify landmarks more consistently. Many previous studies have introduced various approaches to improve the accuracy of automated identification. Arık et al.16 were the first to apply a fully automated landmark identification system based on a deep CNN architecture. They trained the system with 400 conventional lateral cephalograms including 19 landmarks and reported a successful detection rate (SDR) of 75.58% within a range of 2 mm. Park et al.17 trained You-Only-Look-Once version 3 (YOLOv3, https://pjreddie.com/darknet/yolo/) to construct an automated identification system and compared it with a previously introduced model using 1,311 conventional lateral cephalograms with 80 landmarks. They observed an SDR of 80.4% within a range of 2 mm, approximately 5% higher than the SDRs reported in previous studies. Unfortunately, no previous study has applied an automated identification system to PA cephalograms. Compared with similar studies that used lateral cephalograms, our model may have a relatively lower accuracy rate, but it still showed promising feasibility and potential.

Hwang et al.18 used their automatic landmark identification system to compare the stability of manual and AI-assisted landmark identification and observed that while the AI was perfectly consistent in repeated identification, intra-examiner manual landmark identification showed an error of 0.97 ± 1.03 mm. In this study, AI prediction for PA cephalograms also showed consistent results, while the mean intra-examiner error was 1.31 ± 0.94 mm. Unlike human performance, AI-based assessments are not affected by subjectivity or external conditions. Therefore, the AI prediction system for PA cephalograms outlined in this study might offer an advantage through its ability to identify landmarks more consistently than manual identification.

Deep learning approaches have recently been recommended as a superior technology for the automatic location of anatomical landmarks on radiographs, and CNN models show outstanding image recognition ability in AI applications.16-23 This study presented a fully automatic landmark identification system for PA cephalometric analysis based on a multi-stage CNN model. This deep CNN learning model emulates the human examiner's landmark identification pattern when performing prediction. Thus, the AI prediction is affected by the human examiner's identification pattern: if the examiner has difficulty in some areas, the AI predictions will reflect those difficulties. In the present study, AI prediction showed the lowest accuracy in the condyle area, while the repeated manual identifications showed the lowest consistency in the frontozygomatic suture area. Intra-examiner assessments can show large variability in repeated tasks; manual identification produced more variability in the condyle area in the first measurement but not in the second. However, the human examiner and the AI differ in decision-making. For example, the human examiner can make judgments based on clinical knowledge, such as when a bilateral anatomic structure appears as layered images. Thus, an experienced human examiner might outperform the AI on complicated PA cephalograms that require comprehensive, subjective consideration.

A limitation of this study is that we could not compare the prediction accuracy against a model trained by a more experienced clinician, who might have shown smaller variation. This aspect should be addressed in further investigations. In this study, we developed a feasible automated landmark identification system for PA cephalograms based on a multi-stage CNN model implemented on a personal computer. AI might offer the advantage of consistency in repetitive tasks in comparison with a trained human examiner.

CONCLUSION

  • We used CBCT-synthesized PA cephalograms to reduce intra-examiner errors and to enhance AI image recognition.

  • The null hypothesis was rejected. Our multi-stage CNN model for CBCT-synthesized PA cephalograms did not adequately achieve the clinically acceptable error range of less than 2 mm, but it showed better consistency than manual identification for repetitive landmark identification on PA cephalograms.

  • The skeletal landmarks condyle point right and left, as well as most dentoalveolar landmarks, showed significant differences between AI prediction and manual identification.

ACKNOWLEDGEMENTS

This article is based in part on the PhD thesis of M.J.K. The final form of the machine-learning system was developed by computer engineers of See-through Tech Inc. (Seoul, Korea), which is expected to own the patent in the future.

CONFLICTS OF INTEREST


No potential conflict of interest relevant to this article was reported.


References

  1. Major PW, Johnson DE, Hesse KL, Glover KE. Landmark identification error in posterior anterior cephalometrics. Angle Orthod 1994;64:447-54.
  2. Ulkur F, Ozdemir F, Germec-Cakan D, Kaspar EC. Landmark errors on posteroanterior cephalograms. Am J Orthod Dentofacial Orthop 2016;150:324-31.
  3. Na ER, Aljawad H, Lee KM, Hwang HS. A comparative study of the reproducibility of landmark identification on posteroanterior and anteroposterior cephalograms generated from cone-beam computed tomography scans. Korean J Orthod 2019;49:41-8.
  4. Sicurezza E, Greco M, Giordano D, Maiorana F, Leonardi R. Accuracy of landmark identification on postero-anterior cephalograms. Prog Orthod 2012;13:132-40.
  5. Pirttiniemi P, Miettinen J, Kantomaa T. Combined effects of errors in frontal-view asymmetry diagnosis. Eur J Orthod 1996;18:629-36.
  6. Major PW, Johnson DE, Hesse KL, Glover KE. Effect of head orientation on posterior anterior cephalometric landmark identification. Angle Orthod 1996;66:51-60.
  7. Smektała T, Jędrzejewski M, Szyndel J, Sporniak-Tutak K, Olszewski R. Experimental and clinical assessment of three-dimensional cephalometry: a systematic review. J Craniomaxillofac Surg 2014;42:1795-801.
  8. Kamoen A, Dermaut L, Verbeeck R. The clinical significance of error measurement in the interpretation of treatment results. Eur J Orthod 2001;23:569-78.
  9. Gribel BF, Gribel MN, Frazäo DC, McNamara JA Jr, Manzi FR. Accuracy and reliability of craniometric measurements on lateral cephalometry and 3D measurements on CBCT scans. Angle Orthod 2011;81:26-35.
  10. Gribel BF, Gribel MN, Manzi FR, Brooks SL, McNamara JA Jr. From 2D to 3D: an algorithm to derive normal values for 3-dimensional computerized assessment. Angle Orthod 2011;81:3-10.
  11. Meiyappan N, Tamizharasi S, Senthilkumar KP, Janardhanan K. Natural head position: an overview. J Pharm Bioallied Sci 2015;7(Suppl 2):S424-7.
  12. El-Mangoury NH, Shaheen SI, Mostafa YA. Landmark identification in computerized posteroanterior cephalometrics. Am J Orthod Dentofacial Orthop 1987;91:57-61.
  13. Leonardi R, Annunziata A, Caltabiano M. Landmark identification error in posteroanterior cephalometric radiography. A systematic review. Angle Orthod 2008;78:761-5.
  14. Shokri A, Miresmaeili A, Farhadian N, Falah-Kooshki S, Amini P, Mollaie N. Effect of changing the head position on accuracy of transverse measurements of the maxillofacial region made on cone beam computed tomography and conventional posterior-anterior cephalograms. Dentomaxillofac Radiol 2017;46:20160180.
  15. Oshagh M, Shahidi SH, Danaei SH. Effects of image enhancement on reliability of landmark identification in digital cephalometry. Indian J Dent Res 2013;24:98-103.
  16. Arık SÖ, Ibragimov B, Xing L. Fully automated quantitative cephalometry using convolutional neural networks. J Med Imaging (Bellingham) 2017;4:014501.
  17. Park JH, Hwang HW, Moon JH, Yu Y, Kim H, Her SB, et al. Automated identification of cephalometric landmarks: part 1-comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod 2019;89:903-9.
  18. Hwang HW, Park JH, Moon JH, Yu Y, Kim H, Her SB, et al. Automated identification of cephalometric landmarks: part 2-might it be better than human? Angle Orthod 2019;90:69-76.
  19. Takahashi R, Matsubara T, Uehara K. Multi-stage convolutional neural networks for robustness to scale transformation. Paper presented at: 2017 International Symposium on Nonlinear Theory and Its Applications (NOLTA); 2017 Dec 4-7; Cancun, Mexico. p. 692-5.
  20. Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK. Medical image analysis using convolutional neural networks: a review. J Med Syst 2018;42:226.
  21. Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data 2019;6:113.
  22. Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent 2019;91:103226.
  23. Park JJ, Kim KA, Nam Y, Choi MH, Choi SY, Rhie J. Convolutional-neural-network-based diagnosis of appendicitis via CT scans in patients with acute abdominal pain presenting in the emergency department. Sci Rep 2020;10:9556.