AI is Built on Trust
The development of artificial intelligence (AI) is accelerating, enabling profound progress in every field of human endeavor, from healthcare and education to climate change and crop yields. By combining AI with human ingenuity, we can maximize the potential of individuals and enable them to accomplish remarkable things.
For example, our research labs around the world are now trying to address one of humanity’s deadliest diseases – cancer. Not through test tubes and medical equipment, but with AI and machine learning.
By combining machine learning with natural language processing, we are helping the world’s leading oncologists identify the most effective, individualized cancer treatment for their patients, providing an intuitive way to sort through all the available research data. In another initiative, we are pairing machine learning with computer vision to give radiologists a more detailed understanding of how their patients’ tumors are progressing and, more importantly, how to treat them.
AI Raises Complex Questions on Trust and Ethics
It is evident that AI has the potential to help society overcome some of its most daunting challenges. But its potential can only be maximized if it can collect, aggregate and share data at massive scale.
And that raises ethical questions around universal access, security, privacy and transparency. To some extent, AI has upended our relationship with technology, and the level of trust we place in it needs to be re-examined.
In addition, at the societal level, as AI continues to augment our decision-making, how can we ensure that it treats everyone fairly? And how can we ensure that people and organizations remain accountable for AI-driven systems that are becoming not only more pervasive but also more intelligent and powerful?
These are some of the key questions that individuals, businesses and governments need to ponder, analyze and untangle as the advancement and proliferation of AI continue to accelerate.
We believe that for the full potential of AI to be unleashed, we must build a solid foundation of trust. People will not adopt a new AI-enabled solution unless they trust that it meets the highest standards for security, privacy and safety. To realize the full benefits of AI, we will need to work together to answer these questions and create systems that people trust.
Building Trust in AI
Ultimately, for AI to be trustworthy, Microsoft believes that it should not only be transparent, secure and inclusive but also maintain the highest degree of privacy protection. And we have drawn up six principles that we believe should be at the heart of any development and deployment of AI-powered solutions:
- Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from misuse or theft.
- Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.
- Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems; a minimal sketch of one such fairness check follows this list.
- Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.
- Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people.
- Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other sectors, such as privacy in healthcare, and they must be observed both during system design and on an ongoing basis as systems operate in the world.
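To make the fairness principle concrete, here is a minimal sketch of one common audit: checking whether a system makes positive recommendations at similar rates across groups, sometimes called a demographic parity gap. The function names, group labels and data below are illustrative assumptions for this post, not a description of any Microsoft product or method.

```python
# A minimal, illustrative fairness audit; the data and group labels are made up.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of positive recommendations (1s) for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups.

    A gap near 0 means the system recommends positively at similar
    rates across groups; a large gap flags potential disparate impact.
    """
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes: 1 = recommended, 0 = not recommended.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(predictions, groups))        # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(predictions, groups)) # 0.5
```

Demographic parity is only one of many possible fairness criteria; which definition is appropriate depends on the application, which is why understanding how bias enters data and models matters so much.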
These six principles guide the design of Microsoft’s AI products and services, and we are institutionalizing them by forming an internal advisory committee to help ensure our products adhere to these values.
To learn more about these six AI principles, I would encourage you to read ‘The Future Computed: Artificial Intelligence and its Role in Society’, by Brad Smith, President and Chief Legal Officer, and Harry Shum, Executive Vice President of the Microsoft AI and Research Group.
This book examines how we can prepare for an AI future and is available to download here free of charge.
In addition, we are deeply involved in efforts across the AI community and have co-founded the Partnership on AI, which aims to develop best practices for AI, increase public understanding of the technology and examine its influence on people and society.
All these initiatives reflect what we aspire to achieve: Responsive and Responsible AI Leadership. Our approach is grounded in, and consistent with, our company mission to empower every person and every organization on the planet to achieve more.
At the end of the day, we believe the progress of AI can address many more challenges than it presents, and that the AI tools and services we create must assist humanity and augment our capabilities. We are optimistic about the future of AI and the opportunities it provides to create a better future for all. But to ensure that we realize this future, it will be essential for governments, businesses, academics and civil organizations to work together to create trustworthy AI systems.