Grant Thornton | 1 CPE | Internal audit’s role in assessing AI programs and related risks
Artificial intelligence (AI) refers to the simulation of human intelligence processes by machines, typically involving tasks such as learning, reasoning, problem-solving, and decision-making. AI has transitioned from an emerging technology a few years ago to a critical technology resource today. In fact, an expected 53% of businesses will use AI to increase productivity in 2023, and many will look to leverage AI for other core processes such as data aggregation, internal communication, and writing code for sensitive applications. Further, regulatory oversight and requirements for companies using AI continue to build momentum across many industries and in political arenas.
The IIA has defined internal audit’s role in AI as “helping an organization evaluate, understand, and communicate the degree to which artificial intelligence will have an effect (negative or positive) on the organization’s ability to create value in the short, medium, or long term.” However, many internal audit leaders have not defined AI as a risk area in the annual audit planning process, let alone included an AI engagement in the audit plan.
Join Grant Thornton’s IT Internal Audit team on September 14th to discuss how internal audit leaders can help organizations adopt AI effectively while minimizing risk, guiding the process through an understanding of AI’s potential benefits and challenges. We’ll also share how IA teams can collaborate effectively with data scientists and IT professionals to ensure proper controls are implemented to manage AI-related risks.
The discussion will focus on:
- Current trends in AI adoption
- Successful AI implementation strategies and leading practices
- Common challenges of AI adoption and how to mitigate them
- The maturing AI regulatory landscape
- Overview of NIST’s Artificial Intelligence Risk Management Framework (AI RMF 1.0)
- Examples of successful adaptation of the IIA’s AI Auditing Framework