AI Risks in Telecom Sector: Navigating the Challenges
Dharmendra Misra, Consulting Partner, Wipro Limited
Buland Khan, Chief Technologist and DMTS — Principal Member, Wipro Limited
Introduction
The integration of AI in telecom operations has brought significant value, enhancing service delivery and operational efficiency. AI’s potential to reduce operating costs, minimize human dependency, and improve service quality has fueled substantial investments in this area. However, the reliance on AI and operational data introduces new vulnerabilities that must be addressed to ensure reliable and beneficial technology use.
Handling Data Types in the Telecom Sector
In the telecom sector, two primary types of data are directly related to service users:
- User-Provided Data: This dataset includes information provided by users during service enrollment and through interactions at the front office or during Trouble to Resolve (T2R) and Request to Response (R2R) journeys.
- Network/Service Platform Layover Data: This dataset is generated automatically when users interact with the network or service platform, whether during active service consumption (usage data) or during idle periods (for example, location updates). It can be further classified into Control Layer Data and Usage Layer Data, each presenting a different level of risk when AI is adopted in specific telecom functions.
The most vulnerable applications are those that store and process user-provided data. These applications often adopt AI quickly, generating excitement but also increasing the risk that vulnerabilities are overlooked. Fortunately, mature tools and methodologies exist to address these risks, the product of years of research and of technology that is not specific to any single industry.
Another area of concern is usage data created during service consumption. This data flows from the user's device into the network and on to core processing systems such as charging, performance management, and quality assurance. The wide spread of this data and the rapid growth of AI in this area offer cost advantages and faster processes, but any vulnerability here can affect a very large user base. For example, a biased logic design could disrupt services for many users at once, and a compromised AI tool could expose confidential information to unauthorized parties.
Telecom data flows through extensive infrastructure and is processed in multiple locations for various purposes. The sheer volume of data makes it challenging to adopt economically viable processing mechanisms. Therefore, the focus must shift to intelligence in data rather than the data itself.
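One way to act on "intelligence in data rather than the data itself" is to reduce raw records to compact derived metrics at the point of collection, so only summaries travel onward through the infrastructure. A minimal sketch in Python; the record fields (`user_id`, `bytes_used`) are illustrative assumptions, not taken from any telecom standard:

```python
from collections import defaultdict

def summarize_usage(records):
    """Collapse raw usage records into compact per-user summaries.

    Each record is a dict like {"user_id": ..., "bytes_used": ...}
    (illustrative field names). Only the aggregates travel onward;
    the raw records can be discarded or retained locally.
    """
    summary = defaultdict(lambda: {"sessions": 0, "total_bytes": 0})
    for rec in records:
        entry = summary[rec["user_id"]]
        entry["sessions"] += 1
        entry["total_bytes"] += rec["bytes_used"]
    return dict(summary)

raw = [
    {"user_id": "u1", "bytes_used": 500},
    {"user_id": "u2", "bytes_used": 1200},
    {"user_id": "u1", "bytes_used": 300},
]
summaries = summarize_usage(raw)  # u1: 2 sessions, 800 bytes; u2: 1 session, 1200 bytes
```

The downstream systems then consume the small `summaries` structure instead of the full record stream, which is what makes the processing economically viable at telecom data volumes.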
Real-World Examples
- T-Mobile’s 2022 Incident: In 2022, T-Mobile experienced a significant issue with their AI-driven customer service system. The AI system, designed to handle customer inquiries and support, began providing incorrect and irrelevant responses due to a flaw in its algorithm. This led to widespread customer dissatisfaction and a surge in calls to human support agents, overwhelming the call centers. The incident underscored the importance of rigorous testing and monitoring of AI systems to prevent such failures.
- AI-Driven Network Management Failures: A major telecom provider implemented AI for network management to optimize performance and reduce downtime. However, the AI system misinterpreted network data, leading to incorrect adjustments that caused widespread service outages. The company had to revert to manual network management temporarily while addressing the AI system’s flaws. This case highlights the risks of over-reliance on AI for critical operations without adequate safeguards.
Safeguarding AI in Telecom
- Data and Business Function Classification: Telecom data is vast and growing rapidly. Technology use must be focused, and different data and business functions should be handled differently. Classifying data and business functions where AI is applied is crucial from both business vulnerability and customer/stakeholder sensitivity perspectives.
- API Endpoint and Contextual Risk Analysis: The level of autonomy that can safely be granted in the telecom ecosystem depends on the risks associated with each API endpoint. APIs exchange sensitive information across internal and external systems to enable service delivery. Dynamic decision-making and on-demand provisioning and deprovisioning of access rights are essential best practices.
- Stringent Self-Governance: Robust self-governance is integral to any good autonomous solution implementation. Systems must have the freedom to make functional decisions based on context, with supervision layers or authoritative overrides in place.
- Data in Transit and Data at Rest: Telecom networks and systems carry highly sensitive data that must be processed and exchanged rapidly. Innovative analytical applications of intelligence are needed to develop frameworks that support data sharing without exposing it to adversaries in an autonomous AI environment.
- Bias in Modelling: Bias is a common risk across industries, and most mitigation techniques are generic. Telecom datasets, however, are unique because they reflect consumer behavior, which is driven by human factors rather than predictable, engineered patterns. Micro-segmentation and targeted model training can help reduce unintended biases.
- Micro Process-Oriented AI Function Design: This cautious approach takes small, verifiable steps on the longer journey into unpredictable territory. Telecom reinvents itself quickly, so a balance between caution and innovation is necessary.
- Matrix-Based Privileges: Self-governance and privilege-based access are essential but insufficient in context-centric telecom operations. Privileges must be dynamic to protect data and processes in continuously changing environments.
- Data to Information Shift: The focus should shift from data processing, storing, and sharing to creating information that can be used from multiple perspectives without storing data. This approach is challenging but necessary as data growth continues.
- Responsible AI: Implementing AI responsibly is crucial to ensure ethical and fair use of technology. This includes adhering to principles such as transparency, accountability, and fairness. Telecom companies should establish clear guidelines and frameworks for responsible AI use, regularly audit AI systems for biases and ethical concerns, and ensure that AI applications align with societal values and legal requirements.
- Risk Due to Rise of Agentic AI: The rise of agentic AI, systems that plan and act autonomously in pursuit of goals, poses significant risks. Such systems can make decisions that are not aligned with human values or intentions, leading to unintended consequences such as service disruptions, privacy breaches, or security threats. Robust monitoring and control mechanisms are essential to keep agentic AI systems within defined ethical and operational boundaries.
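The data and business function classification described above can be made concrete as a simple policy lookup that fails closed on anything unclassified. The tier names and rules below are assumptions for the sketch, not an industry taxonomy:

```python
# Illustrative sensitivity tiers and the AI-handling policy attached to each.
# Class names, tiers, and rules are assumptions, not a standard taxonomy.
POLICY = {
    "user_provided":  {"tier": "high",   "ai_allowed": True,  "requires_anonymization": True},
    "usage_layer":    {"tier": "medium", "ai_allowed": True,  "requires_anonymization": True},
    "control_layer":  {"tier": "high",   "ai_allowed": False, "requires_anonymization": True},
    "aggregated_kpi": {"tier": "low",    "ai_allowed": True,  "requires_anonymization": False},
}

def check_ai_use(data_class: str) -> dict:
    """Return the handling policy for a data class, failing closed on unknowns."""
    return POLICY.get(
        data_class,
        {"tier": "high", "ai_allowed": False, "requires_anonymization": True},
    )
```

The fail-closed default matters: a dataset that has not been classified is treated as high sensitivity until someone decides otherwise.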
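Matrix-based, dynamic privileges can be sketched as a lookup keyed by role and current context rather than a static access list: when the context changes, the effective privilege changes with it, and unknown combinations grant nothing. The roles, contexts, and matrix entries here are hypothetical:

```python
from datetime import datetime, timezone
from typing import Optional

# Privilege matrix: (role, context) -> set of allowed operations.
# Roles, contexts, and operation names are illustrative assumptions.
PRIVILEGE_MATRIX = {
    ("noc_engineer", "business_hours"): {"read_metrics", "restart_cell"},
    ("noc_engineer", "after_hours"):    {"read_metrics"},
    ("ai_agent",     "business_hours"): {"read_metrics"},
    ("ai_agent",     "after_hours"):    set(),
}

def current_context(now: Optional[datetime] = None) -> str:
    """Derive the access context from the clock (a stand-in for richer signals)."""
    now = now or datetime.now(timezone.utc)
    return "business_hours" if 9 <= now.hour < 17 else "after_hours"

def is_allowed(role: str, operation: str, context: str) -> bool:
    """Deny by default: unknown (role, context) pairs grant no operations."""
    return operation in PRIVILEGE_MATRIX.get((role, context), set())
```

In practice the context would be derived from many signals (incident state, location, threat level), but the shape is the same: privileges are evaluated at request time, not granted once and forgotten.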
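The supervision layer with authoritative override mentioned above can be sketched as a guardrail wrapper: the autonomous system proposes an action, and the supervisor checks it against hard operating bounds before it executes, escalating anything out of bounds to a human. The action type and threshold are assumptions for the sketch:

```python
# Guardrail wrapper around an autonomous decision. The action name and
# the bound below are illustrative assumptions, not real network limits.
MAX_POWER_CHANGE_DB = 3.0  # hypothetical hard limit per autonomous adjustment

def supervise(proposed_action: dict) -> dict:
    """Approve in-bounds actions; escalate everything else for human review."""
    if proposed_action.get("type") == "adjust_tx_power":
        if abs(proposed_action.get("delta_db", 0.0)) <= MAX_POWER_CHANGE_DB:
            return {"status": "approved", "action": proposed_action}
    # Fail closed: unknown or out-of-bounds actions never auto-execute.
    return {
        "status": "escalated",
        "action": proposed_action,
        "reason": "outside autonomous operating bounds",
    }
```

The same pattern bounds agentic AI: the agent keeps its freedom to decide within the envelope, while anything novel or extreme is routed to an authoritative human override.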
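Micro-segmentation as a bias check can be sketched by computing the same model metric per segment and flagging segments that diverge from the overall value beyond a tolerance. Segment labels, the 0/1 outcome encoding, and the tolerance are illustrative assumptions:

```python
def flag_biased_segments(outcomes: dict, tolerance: float = 0.1) -> list:
    """Flag segments whose positive-outcome rate diverges from the overall rate.

    `outcomes` maps a segment label to a list of 0/1 model decisions
    (illustrative encoding). A flagged segment warrants retraining or
    a closer look at its training data, not an automatic conclusion.
    """
    all_decisions = [d for seg in outcomes.values() for d in seg]
    overall = sum(all_decisions) / len(all_decisions)
    flagged = []
    for segment, decisions in outcomes.items():
        rate = sum(decisions) / len(decisions)
        if abs(rate - overall) > tolerance:
            flagged.append(segment)
    return flagged
```

Rate-difference checks like this are deliberately simple; real audits would add statistical significance testing and multiple fairness metrics, but the per-segment framing is the core of the micro-segmentation idea.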
References
- ETSI GR SAI 004 V1.1.1, Securing Artificial Intelligence (SAI); Problem Statement
- OWASP Top 10 for LLM Applications 2025, OWASP Top 10 for LLM & Generative AI Security