AI technology is an essential part of industry’s efforts to revisit business strategies and keep pace with a world increasingly defined by AI-based intelligent systems. It helps improve efficiency, create effective decision systems, reduce cycle time, and enhance the customer experience. The AI technology stack is becoming more standardized, and automation is now mature enough to create complete end-to-end digital experiences. However, AI adoption and integration with legacy systems are more complicated in highly regulated industries such as aerospace, fintech, autonomous vehicles and healthcare. With data security, privacy and customer safety paramount, businesses in these industries need to understand the rapidly evolving regulatory structures that can make or break AI initiatives.
This leads to more questions about AI’s trustworthiness, even as companies seek quicker integration with their digital and automation initiatives. As AI becomes a requirement in every industrial sector, it has moved from technology-oriented initiatives to framework-based solutions with multiple derivative modules. This means that each industry must account for human values when building AI systems and ensure that they align with regulatory principles. Adaptive frameworks have custom elements that ensure successful implementations in each industry, and each sector is working toward its strategic objective of authenticated decision systems. The common outcome of these frameworks is to optimize resources, increase efficiency and create reliable decision support systems with human augmentation as appropriate. Such a framework must ensure that the technology aligns with end-user expectations without compromising the in-person experience of products, solutions and services. Thus, sound data privacy and processing, intelligent analytics and alignment with standards will lead to quicker AI adoption.
Industry-specific value chains can be built without bias by applying AI trustworthiness principles on a mature technology stack. Before fully autonomous operations can be realized, derivative AI services that factor in risk, authentication and guarantees can be developed so that human-augmented frameworks mature in the right way. Moreover, the AI market is expected to grow at a compound annual growth rate (CAGR) of 42.2% from 2020 to 2027 (source). Such market potential gives companies an incentive to follow through on a quicker AI adoption strategy. For adoption to succeed, AI trustworthiness must provide assurance, security, risk and safety layers with guaranteed services at various levels of industry-specific value chains.
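As a quick sanity check on that figure, compounding 42.2% annually over the seven years from 2020 to 2027 implies the market grows to roughly 11.8 times its 2020 size. A minimal sketch of that arithmetic (the compounding formula is standard; the variable names are illustrative):

```python
# Implied market growth multiple from a 42.2% CAGR over 2020-2027.
# The multiple M after n years at annual rate r is M = (1 + r) ** n.
cagr = 0.422
years = 2027 - 2020  # 7 compounding years
multiple = (1 + cagr) ** years
print(f"Implied market size multiple: {multiple:.1f}x")  # roughly 11.8x
```

Even if the precise rate varies by forecast, growth of this order explains the pressure on regulated industries to adopt AI quickly without compromising trustworthiness.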
Here is a quick glimpse of AI trustworthiness, roadmap and regulations across aerospace, autonomous vehicles, fintech and healthcare industries.
Aerospace
· AI trustworthiness: risk, safety and security; decision systems
· Roadmap: three-level approach – 1) assistance to humans, 2) human/machine collaboration, 3) more autonomous machines; augmenting autonomy with adaptive learning
· Regulations: global AI regulatory

Autonomous vehicles
· AI trustworthiness: function and service; security and safety
· Roadmap: five-level approach – Levels 1 & 2 are driver support features, Levels 3, 4 & 5 are automated driving features; human control to machine control; hybrid automation with intelligent analytics
· Regulations: global AI consortium

Fintech
· AI trustworthiness: security and service; validation and safety
· Roadmap: four-level approach – cognitive to self-resilient systems, collaborative and cross-platform
· Regulations: geography-specific AI consortium

Healthcare
· AI trustworthiness: transparency and security; credibility, privacy and consent; explainable and reliable
· Roadmap: an evolving strategy with no specific level approach – descriptive to prescriptive AI; collaborative, human-augmented intelligent analytics for expert decisions
· Regulations: geography-specific AI consortium
AI technology, products, solutions and services have accelerated digital transformation, automation, and autonomous initiatives.
Businesses must work closely with regulators, certification agencies and professional bodies such as NIST, ISO, IEEE, ISA and SAE International to develop AI standards, guidelines and best practices, and should collaborate with research labs, partners and universities to build an AI trustworthiness ecosystem. AI trustworthiness can play a vital role in tomorrow’s industry, unleashing human potential and enabling comprehensive, sustainable growth.
For a detailed perspective on AI trustworthiness in aerospace, autonomous vehicles, fintech and healthcare industries, please refer to the Infosys white paper.
About the Author:
Dr. Ravi Kumar G. V. V. is an Associate Vice President and Head of the Advanced Engineering Group (AEG) of Engineering Services, Infosys. He has led many innovation and applied research projects for more than 20 years. His areas of expertise include mechanical structures and systems, knowledge-based engineering, composites, artificial intelligence, robotics, autonomous systems, AR, VR and Industry 4.0. He has been involved in the development of commercial products such as AUTOLAY (CADDS-COMPOSITES), a spin-off of the Indian LCA (Tejas) program; Nia Knowledge, a knowledge-based engineering platform; and KRTI 4.0, an operational excellence framework. He contributed to many Industry 4.0 implementation projects and played a crucial role in the development of the Industry 4.0 maturity index under the umbrella of Acatech, Germany. He is also involved in various World Economic Forum (WEF) initiatives on fourth industrial revolution technologies in production. He is a member of the HM 1 and Chair of the G31 technical committees of SAE International. Dr. Ravi Kumar has published over fifty technical papers, holds three patents and has developed many aerospace standards. He has a Ph.D. and an M.Tech from IIT Delhi and a B.E. (Honors) from BITS Pilani, India.
LinkedIn: Dr Ravi Kumar G. V. V