AI's Role as an Agent: Understanding Agentic Artificial Intelligence and Human Involvement
Agentic AI is on the horizon, poised to revolutionize the way we approach Quality Assurance (QA) testing. Currently, AI is being hailed as the missing link, a catalyst for productivity gains through its ability to collaborate with other forms of AI, sharing skills and streamlining processes.
So, what exactly is agentic AI? It means handing tasks and processes that typically require human decision-making, the kind long considered too judgment-heavy to automate, to GenAI, robotic process automation (RPA), and other automation tools. Various agents, each specializing in a distinct task, work together in this setup: some focus on compliance and adherence to standards, while others fulfill user requests, collect and redistribute data, or identify workflows.
In our testing realm, agentic AI comes alive when AI agents make decisions independently and in real time. For instance, when the code behind a button changes, these agents can determine on their own whether the change requires a fix or whether testing should proceed uninterrupted, as the sketch below illustrates.
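To make that concrete, here is a deliberately simple Python sketch of how such an agent might weigh a change; the CodeChange fields, thresholds, and decision labels are illustrative assumptions, not any particular tool's API.

```python
from dataclasses import dataclass

@dataclass
class CodeChange:
    component: str           # e.g. the button whose handler changed
    lines_touched: int
    affects_public_api: bool

def assess_change(change: CodeChange, failed_smoke_tests: int) -> str:
    """Hypothetical agent policy: decide whether a change needs a fix,
    deeper testing, or can proceed uninterrupted."""
    if failed_smoke_tests > 0 or change.affects_public_api:
        return "fix-required"      # block the pipeline and flag a human
    if change.lines_touched > 50:
        return "expand-testing"    # schedule extra regression coverage
    return "proceed"               # low risk: let the release continue

decision = assess_change(CodeChange("checkout-button", 12, False), failed_smoke_tests=0)
print(decision)  # -> "proceed"
```

A production agent would replace these hard-coded rules with learned models and live telemetry, but the decision loop is the same: observe the change, score the risk, act without waiting for a human.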
Humans will still be indispensable, but their roles will shift and become less siloed. Alongside this change, the number of human employees will gradually decrease, which makes effectively supervising AI agents, training AI models, and overseeing critical decision-making all the more important.
What might these new roles look like?
Agentic AI Workflow Designer
The workflow designer orchestrates dynamic testing workflows using agentic AI, improving efficiency by optimizing test paths in real time and reducing redundancies.
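As a rough illustration of what optimizing test paths in real time can mean, the sketch below ranks tests by their overlap with the files just changed and prunes redundant coverage; the scoring heuristic and field names are assumptions made for this example.

```python
def prioritize_tests(tests, changed_files):
    """Rank tests by how likely they are to catch a regression in the current
    change, then drop entries that cover exactly the same paths."""
    def score(test):
        overlap = len(set(test["covers"]) & set(changed_files))
        return overlap * 10 + test["recent_failures"]

    ranked = sorted(tests, key=score, reverse=True)

    seen_paths, pruned = set(), []
    for test in ranked:
        paths = frozenset(test["covers"])
        if paths not in seen_paths:   # skip tests with identical coverage
            seen_paths.add(paths)
            pruned.append(test["name"])
    return pruned

tests = [
    {"name": "test_login", "covers": ["auth.py"], "recent_failures": 2},
    {"name": "test_checkout", "covers": ["cart.py", "pay.py"], "recent_failures": 0},
    {"name": "test_login_alt", "covers": ["auth.py"], "recent_failures": 0},
]
print(prioritize_tests(tests, changed_files=["auth.py"]))
# test_login runs first; the test with duplicate coverage is pruned
```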
AI Model Validation Engineer
This role ensures that AI models are accurate, fair, and reliable, addressing issues unique to AI, such as model drift and bias, so the overall process stays efficient.
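One common way to surface model drift, offered here only as an example, is to compare a feature's distribution at training time with what the model sees in production; this sketch uses SciPy's two-sample Kolmogorov-Smirnov test on synthetic data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, size=5_000)    # feature values seen at training time
production_scores = rng.normal(0.4, 1.0, size=5_000)  # same feature observed in production

statistic, p_value = ks_2samp(training_scores, production_scores)
if p_value < 0.01:
    print(f"Possible drift (KS={statistic:.3f}, p={p_value:.1e}); retraining may be needed.")
else:
    print("No significant drift detected.")
```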
AI Ethics Specialist
This specialist ensures AI systems comply with ethical standards, such as fairness, transparency, and consistency.
Agentic AI Trainer And Configurator
This professional trains and configures agentic AI systems to adapt to domain-specific requirements, ensuring testing processes are efficient.
AI Bug Prediction Specialist
This specialist uses AI to predict potential bugs, focusing testing efforts on high-risk areas and reducing rework.
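A toy version of the idea might train a classifier on historical change metadata and score incoming changes by defect risk; the features, labels, and data below are invented purely for illustration.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [lines changed, files touched, author's commits to this module]
X_train = [[400, 9, 2], [15, 1, 40], [250, 6, 5], [30, 2, 60], [500, 12, 1], [10, 1, 80]]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = change later caused a defect, 0 = it did not

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

incoming_change = [[320, 7, 3]]
risk = model.predict_proba(incoming_change)[0][1]
print(f"Defect risk: {risk:.0%}")  # focus manual and exploratory testing where this is high
```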
Conversational Test Automation Engineer
This engineer tests chatbots and voice assistants using AI-driven tools for dynamic interaction validation.
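A bare-bones example of dynamic interaction validation: several phrasings of the same intent are sent to the assistant and the replies are checked against the expected behavior. The chatbot_reply function here is a stand-in for whatever bot is under test, and the keyword check is intentionally simplistic.

```python
def chatbot_reply(utterance: str) -> str:
    """Stand-in for the assistant under test; replace with a real API call."""
    if "order" in utterance.lower():
        return "Your order #1042 is out for delivery."
    return "Sorry, I didn't catch that."

def test_order_status_intent():
    # Several phrasings of the same intent should all yield a delivery-status answer.
    for utterance in ["Where is my order?", "order status please", "Has my ORDER shipped?"]:
        reply = chatbot_reply(utterance)
        assert "delivery" in reply.lower() or "shipped" in reply.lower(), f"Unexpected reply: {reply}"

test_order_status_intent()
print("Conversational checks passed.")
```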
Continuous AI Monitoring Specialist
This role monitors AI systems in production, detecting anomalies and performance issues in real time.
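As one possible shape for such monitoring, the sketch below flags response times that jump well above a rolling baseline; the window size and threshold are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

def monitor(latencies_ms, window=20, z_threshold=3.0):
    """Flag observations more than z_threshold standard deviations above the
    rolling baseline of the previous `window` measurements."""
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(latencies_ms):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (value - mu) / sigma > z_threshold:
                alerts.append((i, value))
        history.append(value)
    return alerts

normal = [100 + (i % 5) for i in range(40)]   # steady traffic
print(monitor(normal + [450] + normal))        # the 450 ms spike is reported
```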
AI Lifecycle Manager
This manager oversees the integration and lifecycle of AI systems in the SDLC, ensuring AI solutions are continuously optimized.
AI Overseer
The AI Overseer monitors the entire agentic stack of agents and arbiters, ensuring the decision-making elements of AI operate smoothly.
Preparing for these roles can be challenging, but fostering collaboration between QA engineers, developers, and AI specialists is the key to integrating agentic AI into workflows effectively. Invest in training programs in AI technologies and machine learning, tailoring training to fit your team's specific needs. Encourage adaptability, continuous learning, and a growth mindset to facilitate AI-driven changes and innovation. When hiring, prioritize candidates with strong communication skills, problem-solving abilities, and experience liaising between teams.