The efficiency and effectiveness of any AI algorithm depend on the rigor of the underlying data: high-quality data enables the training procedure to achieve the best results in terms of accuracy. To produce reliable, complete data, we take on this challenge by applying state-of-the-art techniques that address the problem, while also exploring and developing new methodologies to handle it. Our work revolves around data imputation using advanced statistical approaches and machine learning algorithms. The goal is a reliable, precise, and efficient solution that generalizes readily to most kinds of missing data from various sources, facilitating the training process of any machine learning algorithm.
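To make the imputation idea concrete, the sketch below shows the simplest statistical strategy: replacing each missing value with its column mean. The helper name `mean_impute` is hypothetical and chosen for illustration; the approaches discussed here would extend this to richer statistical and model-based methods.

```python
import numpy as np

def mean_impute(X):
    """Fill missing values (NaN) in each column with that column's mean.

    A minimal statistical-imputation sketch; realistic pipelines would
    use richer strategies (median, kNN, or iterative model-based imputation).
    """
    X = np.asarray(X, dtype=float).copy()
    col_means = np.nanmean(X, axis=0)           # per-column mean, ignoring NaNs
    nan_rows, nan_cols = np.where(np.isnan(X))  # positions of missing entries
    X[nan_rows, nan_cols] = col_means[nan_cols] # substitute the column mean
    return X

data = [[1.0, 2.0],
        [np.nan, 4.0],
        [3.0, np.nan]]
print(mean_impute(data))
```

Mean imputation preserves each column's average but shrinks its variance, which is one reason more advanced model-based imputers are preferred when the downstream task is sensitive to the data distribution.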