Chennai, Feb 22 : Even as Artificial Intelligence (AI) deepens its reach into everyday risk and compliance work, the transition will not happen overnight, finds a survey by Moody’s.
The slow uptake of AI technologies, according to the respondents, is due to a combination of practical, cultural, and regulatory challenges.
The survey, ‘Navigating the AI landscape,’ conducted among 550 international compliance and risk experts, shows that 80 per cent of respondents expect widespread adoption of AI in risk and compliance within the next five years.
However, only 5 per cent of those surveyed anticipate the transition happening within the next 12 months.
Among professionals who are not currently using or testing AI, there is a lack of understanding about its potential applications in managing risk and compliance, Moody’s said.
The survey points out that “only a quarter of respondents rate their knowledge as high or very high.
One challenge around AI is the sheer variety of terminology in use, as AI in terms of risk management and compliance doesn’t mean the same to everyone — there was no consistent answer from those surveyed.”
According to Moody’s, despite these challenges, experts widely agree on two key points: AI offers significant advantages, and new legislation is necessary to regulate its use.
Survey Highlights:
Almost 70 per cent of firms expect AI will exert a transformative or major impact on risk and compliance.
Early adopters identify benefits of AI usage in five key areas: Efficiency gains, enhanced risk identification, tighter fraud detection, cost saving and error reduction, and data processing and quality gains.
Among respondents already using or trialing AI, 63 per cent are applying it to data analysis and the interpretation of results.
By sector, almost three in four bank and fintech professionals expect a high impact from AI, compared with asset and wealth managers and insurers, who both perceive a lower impact.
Only 9 per cent of those surveyed are active users and 21 per cent are in the pilot or trial stage.
Two-thirds of respondents rate their firm’s data quality as low (inconsistent and fragmented).
Five in ten respondents say ensuring the transparency and explainability of AI decision-making is the most important safeguard.
AI regulation is poorly understood, but agreement that it is needed is almost unanimous.
Over-reliance on AI is a top concern among respondents (17 per cent), followed by lack of transparency (15 per cent).
vj/rad