SK Telecom 6G White Paper: View on Future AI Telco Infrastructure

SK Telecom (SKT) has released another 6G white paper, titled "SK Telecom 6G White Paper: View on Future AI Telco Infrastructure," laying out the evolution direction of next-generation telco network infrastructure through the convergence of artificial intelligence (AI) and telecommunications.

In its first 6G white paper published last year (post here and discussion here), SKT provided an analysis of the key requirements for 6G standardization, technology trends, and candidate frequencies, among other topics.

In this latest 6G white paper [PDF] released this year, SKT defines the key elements for 6G infrastructure evolution as ‘Cloud-Native, Green-Native, and AI-Native,’ and presents the direction of the 6G infrastructure evolution based on the ubiquitous intelligence emphasized in the International Telecommunication Union’s (ITU) 6G Framework Recommendations (IMT-2030).

Quoting from the paper:

First, Cloud-Native is a design principle that should be considered when designing a next-generation network infrastructure in order to respond flexibly to rapidly changing requirements and to achieve high availability, scalability, and operational cost reduction by providing communication services based on cloud technology, moving away from the existing physical communication network environment.

Second, Green-Native is a design principle that should be reflected in the future network infrastructure to ensure the sustainability of the mobile communication business, as well as to contribute to reducing carbon emissions in response to climate change, a pressing global issue. The technical factors to be considered in the radio access network and core network were described in the 6G white paper published in 2023.

Third, AI-Native is a design principle that should be considered in designing the next-generation network infrastructure to secure the performance, efficiency, and stability of the network by integrating AI across all areas of the network, such as radio access networks, core networks, and transport networks. Rather than simply adopting AI in parts of the network, this involves leveraging AI to make the network more intelligent, thereby improving customer experience, maximizing operational stability and efficiency based on AI across all areas of the network, and creating new revenue opportunities by utilizing existing telco infrastructure assets to provide AI services.

SKT anticipates adopting a flexible network architecture based on 'Generation Mix,' which appropriately combines previous generations of mobile communications while considering data traffic demands and specialized services. The white paper also highlights the concept of 'Telco Edge AI' infrastructure, which combines telecom network infrastructure and AI to provide real-time data processing and AI services simultaneously.

Quoting from the paper again:

Telco Edge AI infrastructure adds new value to the traditional telco infrastructure, which has focused only on connectivity, by offering not only communication services but also the AI computing power necessary for AI services. This is achieved by leveraging operators' existing infrastructure, facilities, and equipment.

Mobile network operators deploy and operate infrastructure and facilities to provide telecommunication services nationwide, making it difficult to replace them with Telco Edge AI infrastructure all at once. Therefore, SK Telecom classifies its Telco Edge AI infrastructure into the following three areas based on business and service characteristics.

The first area is “co-location”, which involves transforming unused space in telco infrastructure owned by mobile network operators into infrastructure that provides AI computing and AI services cost-effectively. xPU-based servers, which provide AI computing power, consume more energy and generate more heat than traditional telecommunications equipment, accelerating the aging of chips. Consequently, even if there is available space in the telco infrastructure, introducing xPU-based servers without proper preparation is challenging. Therefore, as an alternative to hyperscale AI DCs, which require excessive time and cost to build, operators can proactively expand high-performance, energy-efficient cooling and power supply systems in existing telco facilities, allowing for the flexible and rapid deployment of AI services.

The second area is “AI-Server”, which provides AI computing power at the edge and improves communication performance by placing AI servers further forward, building on the cooling and power facilities prepared in the co-location area. An AI-Server does not directly provide connectivity. Instead, it can run AI workloads that improve the performance of communication services, such as network automation, optimization, and energy saving, at facilities where Distributed Units (DUs) or higher network functions are concentrated. AI workloads for services could be developed by third-party companies. The AI-Server can provide AI services at the edge, offering customers relatively low latency and high security compared to large-scale AI DCs.

The third area is “AI-RAN”, which refers to the area where the RAN functions that provide communication services and the AI computations for AI services are performed on the same xPU-based server platform. AI-RAN is one of the key technologies for realizing AI-Native, which is being discussed for 6G. The difference from AI-Server is that AI-RAN provides AI and communication services simultaneously on a single piece of equipment.

In AI-RAN, AI computing power can be used in two main ways: 1) improving RAN performance by using AI models to simplify physical-layer signal processing and radio resource allocation functions that require complex computation, which are the main workloads of a traditional RAN, and 2) providing AI services at the same time by utilizing the computing resources left over after providing RAN services for connectivity.
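The second use, serving AI workloads from leftover RAN capacity, can be illustrated with a minimal priority-allocation sketch. This is a hypothetical example for intuition only; the class, function names, and capacity figures are invented and do not come from the white paper:

```python
# Hypothetical sketch of AI-RAN compute sharing: RAN workloads get
# strict priority on the shared xPU, and only leftover capacity is
# granted to AI service requests. All names/numbers are illustrative.

class AiRanScheduler:
    def __init__(self, total_tflops: float):
        self.total = total_tflops  # total xPU capacity of the server

    def allocate(self, ran_demand: float, ai_requests: list[float]) -> dict:
        """Grant RAN its full demand first (connectivity comes first),
        then admit AI requests greedily into the remaining capacity."""
        ran_grant = min(ran_demand, self.total)
        spare = self.total - ran_grant
        ai_grants = []
        for req in ai_requests:
            grant = min(req, spare)  # partial grants when capacity runs out
            ai_grants.append(grant)
            spare -= grant
        return {"ran": ran_grant, "ai": ai_grants, "idle": spare}

sched = AiRanScheduler(total_tflops=100.0)
result = sched.allocate(ran_demand=60.0, ai_requests=[25.0, 30.0])
print(result)  # {'ran': 60.0, 'ai': [25.0, 15.0], 'idle': 0.0}
```

In practice, an AI-RAN platform would schedule at much finer granularity (per-slot, per-kernel) and would have to protect the RAN's real-time deadlines, but the priority ordering sketched here reflects the paper's point: connectivity workloads come first, and AI services monetize whatever headroom remains.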

You can download the paper from here.
