Dr. Ciro Donalek

CTO & Founder, Virtualitics;

former Computational Scientist at Caltech

Ciro Donalek is a former Computational Scientist at Caltech, where he successfully applied Machine Learning techniques to many different scientific fields, co-authoring over a hundred scientific and technical publications (e.g., Nature, Neural Networks, IEEE Big Data, Bioinformatics). Dr. Donalek has also pioneered some of the uses of Mixed Reality for immersive data visualization, exploration, and machine learning, leading the iViz project at Caltech; he also holds a patent on “Systems and Methods for Data Visualization Using Three-Dimensional Display”.

During his 15-year career as a data scientist, Dr. Donalek has been awarded several research fellowships, served as a reviewer for numerous major scientific journals, and given many invited talks on Machine Learning, Virtual Reality, and Data Visualization. He has a Minor Planet named after him in recognition of his work on the automatic classification of celestial bodies, and was part of the group that built the Big Picture, the single largest real astronomical image in the world, 152 feet wide and 20 feet tall, currently installed at the Griffith Observatory in Los Angeles, the most-visited public observatory in the world (with 1.5 million visitors a year).

Dr. Donalek holds a PhD in Computational Science (University Federico II of Naples, Italy) and an MS in Computer Science and Artificial Intelligence (University of Salerno, Italy). He is married with two children.

WATCH LIVE: November 3rd at 11:30 am

Dr. Ciro Donalek

Recent advances in Machine Learning (ML) have led to the widespread adoption of Artificial Intelligence in both the public and private sectors. Although these models can produce powerful predictions and provide useful insights into large quantities of data, they are often opaque, leaving users, especially non-technical users, in the difficult position of having to trust a model they cannot understand and therefore cannot explain to other stakeholders. This challenge, faced by users and data scientists alike, has led to growing interest in Explainable AI (XAI): methods that explain to a user how predictions were made, which factors most influence the model, and when the model is likely to fail. In this talk we will go through how to make the black box of AI more transparent with XAI.
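
As a simplified illustration of the kind of explanation XAI aims to provide (not an example drawn from the talk itself), the sketch below uses one common technique, permutation feature importance via scikit-learn, to surface which input factors a trained "black box" model relies on most. The dataset and model are hypothetical placeholders.

# Illustrative sketch only: permutation feature importance as one XAI technique.
# Dataset, model, and feature names are placeholders, not from the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a sample dataset and fit an opaque ("black box") model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# large drops indicate features the model depends on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential factors in a form a non-technical user can read.
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")

Rankings like this are only one piece of XAI; the talk also touches on explaining individual predictions and identifying when a model is likely to fail.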