As I looked at yet another new paper exploring sub-terahertz and even higher frequencies, I started to wonder whether our hunger for more bandwidth will ever be satiated. During the 5G days, the research community focussed so much on mmWave that many publications still don't realise that the deployed 5G networks are underpinned by sub-6 GHz spectrum.
Some months back, Prof. Emil Björnson from KTH Royal Institute of Technology made a similar argument in his LinkedIn post:
A lot of 6G research is motivated by the presumption that we will need more bandwidth in future networks. However, the bandwidth is not the actual end-user performance metric, but merely one of many factors that influence performance. I think it is important to ask the question "Do we need more bandwidth?" or will alternative ways of improving performance be preferable in the future?
In the blog post, linked from the LinkedIn post, he explains:
As the wireless data traffic continues to increase, the main contributing factor will not be that our devices require much higher instantaneous rates when they are active, but that more devices are active more often. Hence, I believe the most important performance metric is the maximum traffic capacity measured in bit/s/km2, which describes the accumulated traffic that the active devices can generate in a given area.
The traffic capacity is determined by three main factors:
- The number of spatially multiplexed devices;
- The bandwidth efficiency per device; and
- The bandwidth.
We can certainly improve this metric by using more bandwidth, but it is not the only way and it mainly helps users that have good channel conditions. The question that researchers need to ask is: What is the preferred way to increase the traffic capacity from a technical, economical, and practical perspective?
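The three factors above multiply together, which is why bandwidth is only one of the available levers. The following back-of-the-envelope sketch makes that concrete; all the numbers in it are illustrative assumptions, not measurements from any real network.

```python
# Sketch of the traffic-capacity relation described above:
# capacity (bit/s/km^2) ~ spatially multiplexed devices per km^2
#                         x spectral efficiency per device (bit/s/Hz)
#                         x bandwidth (Hz).
# All input values are illustrative assumptions.

def traffic_capacity(devices_per_km2, efficiency_bps_per_hz, bandwidth_hz):
    """Accumulated traffic the active devices can generate per km^2."""
    return devices_per_km2 * efficiency_bps_per_hz * bandwidth_hz

baseline = traffic_capacity(devices_per_km2=10,
                            efficiency_bps_per_hz=4,
                            bandwidth_hz=100e6)

# Doubling the bandwidth and doubling the spatial multiplexing
# improve the metric by exactly the same factor.
more_bandwidth = traffic_capacity(10, 4, 200e6)
more_multiplexing = traffic_capacity(20, 4, 100e6)

print(f"baseline:        {baseline / 1e9:.0f} Gbit/s/km^2")
print(f"2x bandwidth:    {more_bandwidth / 1e9:.0f} Gbit/s/km^2")
print(f"2x multiplexing: {more_multiplexing / 1e9:.0f} Gbit/s/km^2")
```

The symmetry is the point of Emil's question: since each factor scales the metric equally, the choice between them comes down to technical, economical and practical trade-offs rather than raw capacity.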
I recommend reading the complete blog post here. In another of his videos, embedded below, Emil discusses which factors determine the data speed and how those factors might be improved in the future, including which new innovative solutions are on the drawing board. He goes on to describe how lessons from the past can be combined with visions for the future to determine what we reasonably know about 6G today.
Going back to the LinkedIn post, a couple of comments by Oscar Bexell add some useful context to this discussion:
"Higher throughput" and the marketing value of higher peak rates is what made the WiFi industry taking the irreversable and unfortunate decision to go with 160/320MHz channels in the 6GHz band. This means we end up with only 2-3 non-overlapping channels and the band will be destroyed faster than the 5GHz spectrum. Horrible decision.
That means adding more cells. So standardization should aim at making it easier to build (neutral host) small cell networks, especially indoors.
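The arithmetic behind Bexell's "2-3 non-overlapping channels" comment is simple to check. A minimal sketch, assuming the full 1200 MHz U.S. 6 GHz allocation (5925-7125 MHz); regulators elsewhere have opened less of the band, so these counts are upper bounds:

```python
# How many non-overlapping channels fit in the 6 GHz band at each
# channel width. BAND_MHZ assumes the full 1200 MHz U.S. allocation;
# many regions have opened only a fraction of this.
BAND_MHZ = 1200

for width_mhz in (20, 40, 80, 160, 320):
    channels = BAND_MHZ // width_mhz
    print(f"{width_mhz:>3} MHz channels: {channels} non-overlapping")
```

At 320 MHz the whole band yields only three non-overlapping channels even in the best case, which is what makes dense reuse so difficult.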
There is a good possibility that in the 6G era we will see many more shared and neutral host networks, not only because of spectrum efficiency but for other reasons as well. That will be a topic for a future post.
Related Posts:
- Free 6G Training: Spectrum for 5G, Beyond 5G and 6G research
- Free 6G Training: Intelligent Reflecting Surfaces (IRS) for Wireless Communications
- Free 6G Training: Can Ultra Massive MIMO deliver Terabit/s Broadband Connectivity in 6G?
- Free 6G Training: Achieving the Terabit/s Goal in 6G Broadband Connectivity
- Free 6G Training: Communications Using Intelligent Reflecting Surfaces in B5G & 6G
- 3G4G: Massive MIMO for 5G: How Big Can it Get?
- Connectivity Technology Blog: CSI-RS vs SRS Beamforming
- The 3G4G Blog: Cell-free Massive MIMO and Radio Stripes
- The 3G4G Blog: Distributed Massive MIMO using Ericsson Radio Stripes