Even in areas where AI methods perform to high standards of correctness, challenges remain. AI decisions are often not explained to users, do not always appear to adhere to social norms and conventions, can be distorted by bias in their data or algorithms, and, at times, cannot be understood even by the engineers who built them.
The overarching aim of the CDT is to train the first generation of AI scientists and engineers in methods of safe and trusted AI. An AI system is considered safe when we can provide some assurance about the correctness of its behavior, and trusted when its users can have confidence in the system and its decision-making.