- Scientists used a data-driven approach to facial phenotyping that exploits both the partitioned and the integrated information contained in 3D facial images.
- The study has a wide range of applications, from early diagnosis and treatment planning in craniofacial surgery to biometrics and forensics.
The human face consists of distinct features like the mouth, chin, nose and eyes, whose shape, size and composition are heritable. Forensic examiners could sketch the face of a perpetrator from DNA obtained at a crime scene, doctors could use DNA to plan facial and skull reconstructive surgery, and historians could recreate facial features from ancient DNA.
However, we still don't fully understand which genetic variants are responsible for variation in the human face. Facial features can therefore be reconstructed from a DNA sample, but not accurately.
Previously, researchers selected certain features, such as the width of the mouth or the distance between the eyes, and then looked for links between each feature and specific genes. This approach has already led to the identification of several genes, but its outcomes are limited by the small set of features tested.
New Data-Driven Approach
Now researchers at KU Leuven and the universities of Pittsburgh, Penn State and Stanford have used a data-driven approach to facial phenotyping that takes advantage of both the partitioned and the integrated data contained in 3D facial images. This enables the identification of genetic effects on facial structure at different levels, from the face as a whole down to its local parts.
Using this technique, they performed a nested series of multivariate genome-wide association studies with controlled multiple testing and low computational overhead. They applied it to a cohort of 2,329 people of European ancestry and integrated the results with previous genome-wide association studies.
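To make "multivariate GWAS" concrete, here is a minimal Python sketch of one way such a test can be set up; it is an illustration under simplifying assumptions, not the authors' actual pipeline. For every SNP, the genotype (coded 0/1/2) is tested against a handful of shape features describing one facial segment using canonical correlation analysis, and a Bonferroni correction is applied across all SNP-segment tests. The function name, the toy data and the number of segments are all hypothetical.

```python
# Illustrative sketch only: multivariate association test for one SNP against
# the shape of one facial segment, via canonical correlation analysis (CCA).
import numpy as np
from scipy import stats

def cca_snp_test(genotypes, segment_shape):
    """genotypes: length-n array of 0/1/2 allele counts.
    segment_shape: n x p matrix of shape features (e.g. PCA scores of a segment).
    Returns the canonical correlation and its p-value."""
    n, p = segment_shape.shape
    x = genotypes - genotypes.mean()
    Y = segment_shape - segment_shape.mean(axis=0)
    # With a single predictor, the first canonical correlation equals the
    # multiple correlation of the genotype regressed on the shape features.
    beta, *_ = np.linalg.lstsq(Y, x, rcond=None)
    r2 = 1.0 - np.sum((x - Y @ beta) ** 2) / np.sum(x ** 2)
    # Bartlett's chi-square approximation for one canonical correlation (q = 1):
    # -(n - 1 - (p + q + 1)/2) * ln(1 - r^2) ~ chi2 with p degrees of freedom.
    chi2_stat = -(n - 1 - (p + 2) / 2.0) * np.log(1.0 - r2)
    p_value = stats.chi2.sf(chi2_stat, df=p)
    return np.sqrt(r2), p_value

# Toy numbers: 2,000 individuals, 5 shape features, 1,000 SNPs, 63 segments.
rng = np.random.default_rng(0)
n, p, n_snps, n_segments = 2000, 5, 1000, 63
shape_pcs = rng.standard_normal((n, p))
snps = rng.integers(0, 3, size=(n_snps, n))
pvals = np.array([cca_snp_test(snp, shape_pcs)[1] for snp in snps])
alpha = 0.05 / (n_snps * n_segments)   # Bonferroni over SNP x segment tests
print("genome-wide significant hits:", np.sum(pvals < alpha))
```

Because the predictor is a single SNP, the test collapses to one least-squares fit per SNP and segment, which is what keeps the computational overhead low in this kind of setup.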
Simply put, each face was subdivided into smaller parts, and the researchers evaluated whether any particular regions of the DNA were associated with these parts. This subdivision allowed them, for the first time, to test a large number of facial features.
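The subdivision can be pictured as a recursive bipartition of the facial surface into a nested hierarchy of segments, from the whole face down to ever smaller parts. The sketch below uses a plain k-means split on 3D vertex coordinates purely for illustration; the segmentation procedure used in the study differs in its details, and the point cloud and function name here are made up.

```python
# Illustrative sketch only: build a nested hierarchy of facial segments by
# recursively splitting the vertices of a 3D face scan into two clusters.
import numpy as np
from sklearn.cluster import KMeans

def hierarchical_segments(vertices, depth):
    """vertices: (n_vertices, 3) array of 3D facial points.
    Returns a dict mapping segment labels ('0', '00', '01', ...) to vertex indices."""
    segments = {"0": np.arange(len(vertices))}
    frontier = ["0"]
    for _ in range(depth):
        next_frontier = []
        for label in frontier:
            idx = segments[label]
            if len(idx) < 2:
                continue
            split = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vertices[idx])
            for child in (0, 1):
                child_label = label + str(child)
                segments[child_label] = idx[split == child]
                next_frontier.append(child_label)
        frontier = next_frontier
    return segments

# Toy usage: a random 3D point cloud standing in for a facial mesh.
rng = np.random.default_rng(1)
face = rng.standard_normal((5000, 3))
segs = hierarchical_segments(face, depth=3)   # 1 + 2 + 4 + 8 = 15 nested segments
print({label: len(indices) for label, indices in sorted(segs.items())})
```

Each segment in such a hierarchy can then be tested against the genome separately, which is how local genetic effects become visible alongside effects on the face as a whole.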
Reference: Nature Genetics | doi:10.1038/s41588-018-0057-4 | KU Leuven
Results
The researchers identified 15 different regions of the DNA. They discovered that the genomic loci linked to these modular facial features are active when the face develops in the womb. Moreover, these genetic variants lie in regions of the genome that influence where, when and how much genes are expressed.
Seven of the 15 identified regions are linked to the nose, which is especially valuable because a skull retains no trace of the nose, made as it is only of cartilage and soft tissue. In the future, if a skull also yields DNA, it could become much easier for scientists to determine what the nose looked like.
What’s Next?
It is very difficult, arguably impossible, to predict a face with 100 percent accuracy from DNA alone. We are not even close to identifying all the genes that shape our face, and our environment, lifestyle and age also play a very important role in what our face looks like.
The same technique could be used to connect other medical images, such as brain scans and X-rays, to genes. In the more distant future, this could provide genetic insights into brain function as well as neurodegenerative diseases like Alzheimer's.
Read: Google Develops AI That Predicts Heart Disease By Scanning Your Eyes
Moreover, a deeper understanding of the genetic basis of facial features could improve our knowledge of craniofacial morphogenesis and of the complicated relationship between genotype and phenotype in craniofacial syndromes and birth defects. It could also provide a basis for determining facial features in a wide range of applications, from early diagnosis and treatment planning in craniofacial surgery to biometrics and forensics.