
Mobile Network Operators – retool, rethink & reinvent

Some time back I had an opportunity to speak with a technology pioneer who helped introduce the best multimedia device – the iPhone – on AT&T. We went into the technical details of the experience, the paradigm shift that never happened, and the impending "data tsunami" that is unfolding as we speak. I have been blogging about this very data explosion for a long time now. I have been a traffic planner for the last 5-6 years of my career as a telecom engineer, and I have seen wireless networks evolve from voice-centric GSM to data-centric LTE – a shift in the thought processes of the big-iron telco companies that have shaped the way we communicate and interact with the world.

MNOs (Mobile Network Operators) are in the crosshairs of technology evolution: the data pipes are filling up faster than they can be built. I monitor capacity on the radio access side for an operator on a day-to-day basis, and take my word for it – we ding your data experience in favor of voice. I can say the same happens at many operators – it is what it is. Some operators offered unlimited data and then pulled the wool over your eyes by 'throttling' the user. Why does this happen, and what prevents MNOs from offering data at the promised speeds? The problems are umpteen, and taking the bull by the horns is hard. Will 4G and LTE solve the problem? Initially it will, but as soon as more devices are offered, capacity will again be the crunch point, and operators will need techniques to optimize and improvise.

 

What will happen if Operators don’t innovate? This one graphic below says it all.

 

Here are some techniques for mitigating these capacity problems:

Expand site density: Bring the site as close to the customer as possible – give them the best coverage with the least interference. How can that be done? With a layered coverage solution – macro cells outdoors, micro and small cells in home and work areas, along with very robust cell-site backhaul, ideally fiber to the cell site. How can we expand site density without building new macro sites? Through the approach that operators like AT&T are taking – femto/pico solutions, DAS systems, blanket Wi-Fi coverage, etc. The most important way this differs from the coverage solutions of yesteryear is that capacity now matters more than coverage – meaning minimal interference (a higher C/I). So site design needs to be thought out: medium-height sites with 'clean' RSRP (Reference Signal Received Power) and RSRQ (Reference Signal Received Quality), and minimal overlapping servers.
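To make the 'clean RSRP/RSRQ' goal concrete, here is a minimal Python sketch (with made-up example levels) of the standard LTE relationship RSRQ = N × RSRP / RSSI: at the same RSRP, every extra overlapping server raises the total RSSI and drags RSRQ down.

```python
import math

def rsrq_db(rsrp_dbm: float, rssi_dbm: float, n_prb: int = 50) -> float:
    """RSRQ = N * RSRP / RSSI (standard LTE definition), returned in dB.

    rsrp_dbm : Reference Signal Received Power of the serving cell (dBm)
    rssi_dbm : total carrier RSSI, including interference and noise (dBm)
    n_prb    : resource blocks in the measurement bandwidth (50 for a 10 MHz carrier)
    """
    rsrp_mw = 10 ** (rsrp_dbm / 10.0)
    rssi_mw = 10 ** (rssi_dbm / 10.0)
    return 10 * math.log10(n_prb * rsrp_mw / rssi_mw)

# Hypothetical numbers: same serving-cell RSRP, but more overlapping servers
# raise the total RSSI and pull RSRQ down.
print(rsrq_db(rsrp_dbm=-95, rssi_dbm=-65))   # cleaner cell  -> about -13 dB
print(rsrq_db(rsrp_dbm=-95, rssi_dbm=-58))   # more overlap  -> about -20 dB
```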

  

Another concept that can be implemented is Sectorization – Spatially separating the transmissions from each cellsite into three or even six sectors, each independently capable of the full capacity of a single cell. This can be further enhanced through Multiple Input, Multiple Output (MIMO) techniques which introduce spatial diversity, often requiring additional physical antennas at each cellsite.
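As a rough, idealized illustration of why sectorization and MIMO add capacity, the Python sketch below uses a Shannon-style estimate and assumes perfectly linear gains per sector and per spatial stream; real deployments fall well short of this, so treat the numbers as upper bounds rather than predictions.

```python
import math

def site_capacity_mbps(bandwidth_mhz: float, sinr_db: float,
                       sectors: int = 3, mimo_streams: int = 1) -> float:
    """Idealized Shannon-style estimate of aggregate site capacity (Mbps).

    Assumes each sector reuses the full carrier and that spatial multiplexing
    scales capacity linearly with the number of MIMO streams.
    """
    sinr_linear = 10 ** (sinr_db / 10.0)
    per_sector_mbps = bandwidth_mhz * math.log2(1 + sinr_linear)
    return sectors * mimo_streams * per_sector_mbps

# Example: 10 MHz carrier at an average SINR of 10 dB
print(site_capacity_mbps(10, 10, sectors=3, mimo_streams=1))  # 3-sector, single stream
print(site_capacity_mbps(10, 10, sectors=6, mimo_streams=2))  # 6-sector, 2x2 MIMO
```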

Rollout LTE/LTE-Advanced: One of the easiest solutions is to throw new technology at the network – an upgrade path that makes it scalable and spectrally efficient. LTE promises greater spectral efficiency and higher data rates than its predecessors. In the long term, this should lead to lower costs because it requires fewer cellsites to deliver the same payload. While there have been some high-profile launch plans for the technology, and many service providers are committed to deployment, it isn't a quick fix.

In many countries, the spectrum to be used is not yet available and will have to be cleared. Spectrum may be expensive to buy (although probably not as highly priced as it was for 3G). A wide range of supported devices is unlikely to become available until 2013 at the earliest. We have seen strong disparities in spectrum pricing between lower frequencies, such as the 800 MHz bands released by switching off analog terrestrial television, and the 2.6 GHz and higher bands. Service providers value the lower frequencies much more highly because of the increased range and in-building penetration they offer. Higher frequencies are most valuable in very high-traffic areas with large numbers of small cellsites.

The additional spectrum combined with the superior performance of LTE will add further capacity to networks, but if it is installed only at existing cellsites this is unlikely to do more than double the capacity. Other factors affecting this issue are regulatory and planning restrictions for individual cellsites. There are peak RF transmission power limits for individual cellsites, which could potentially be exceeded at combined 2G/3G/4G sites where all are used simultaneously. The use of new frequency bands may require additional antennas to be installed, with multiple antennas per sector required for MIMO operation. There may also be physical limits at cellsites preventing additional equipment or antennas from being installed. All of these factors mean that analysis is required at a granular level before a full LTE rollout can be planned and costs estimated.
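On the peak-power point: the reason combined 2G/3G/4G sites can approach regulatory limits is simply that per-carrier powers add in linear units, not in dB. A small sketch with hypothetical per-carrier figures:

```python
import math

def combined_power_dbm(powers_dbm):
    """Total transmitted power when multiple carriers share a site (summed in mW)."""
    total_mw = sum(10 ** (p / 10.0) for p in powers_dbm)
    return 10 * math.log10(total_mw)

# Hypothetical per-technology powers at one site: two 43 dBm carriers (2G, 3G)
# plus a 46 dBm LTE carrier combine to roughly 49 dBm.
print(combined_power_dbm([43, 43, 46]))
```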

Spectrum, carrier aggregation, Multi-flow HSPA+, etc

The allocated radio spectrum extends from 9 kHz to 400 GHz, but only a small part of this is suitable for cellular mobile systems. Below 500 MHz, aerial sizes become too large for compact hand-held devices, and above 3 GHz the achievable range in a non-line-of-sight environment diminishes rapidly. Hence, economic provision of wide-area cellular services requires access to spectrum within this restricted range.
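To see why range falls off at higher frequencies, here is a rough Python sketch using the standard free-space path-loss formula with a hypothetical 120 dB link budget; real non-line-of-sight losses grow even faster with frequency, so the practical gap is wider than shown.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def max_range_km(max_path_loss_db: float, freq_mhz: float) -> float:
    """Distance at which free-space loss uses up the whole link budget."""
    return 10 ** ((max_path_loss_db - 32.44 - 20 * math.log10(freq_mhz)) / 20.0)

# Same hypothetical 120 dB link budget at different carrier frequencies:
for f in (700, 1800, 2600, 3500):
    print(f"{f} MHz -> {max_range_km(120, f):.1f} km")
```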

The most glaring secondary-market failure is the mobile broadband "spectrum crunch." Clearwire owns, or holds long-term leases on, 145 MHz of spectrum, with almost nationwide coverage. Its holdings are nearly as large as those of Verizon and AT&T put together, yet the company actively uses only a small fraction of its capacity – by one plausible assessment, about 10%. At the same time, Comcast, Time Warner, and Cox have substantial holdings in both the AWS bands (bands that mobile carriers also hold or use for mobile data) and, to a lesser extent, 700 MHz blocks – entirely unused. Given the known crunch that AT&T faced after the introduction of the iPhone, the continued claims of a major capacity crunch driving an extensive search for more spectrum to auction, and the clear knowledge of precisely who the buyers and sellers in this market could be, the spectrum-markets theory would have predicted transactions in these frequencies to improve the capacity of the major mobile broadband carriers. These predictions have not in fact materialized. Part of the barrier, as described below, is certainly regulatory. But much of the cause, including the regulatory difficulties, results from problems inherent to the kinds of large-scale markets in infrastructure that licensed spectrum facilitates.

First, Clearwire’s holdings are at a higher frequency band than those used by AT&T or Verizon. Binding the two systems together would be difficult. Second, Clearwire’s holdings are in a contiguous band, while the major carriers built their systems to utilize paired, separated bands. Third, the cable companies appear to value the present rents from their bands less than the option to access licensed wireless capacity at an instant, and appear to be unwilling to sell at any price that the major carriers are willing to pay.

Offload to Wi-Fi, Whitespaces & Femtocells

Deploying Wi-Fi as a core element of mobile data networks, both 3G and now 4G, is hardly unique to AT&T. Android devices, both phones and tablets, relied on Wi-Fi to a lesser extent, but their users also used data less intensively than did iPhone and iPad users. It seems clear that AT&T was driven to Wi-Fi early because the iPhone's design invited major changes in data usage; it is possible that as the Android interface and app ecology continue to mature, and as Verizon adopts Wi-Fi offloading, we will see the broader smartphone and tablet market follow the iPhone ratio. The shift to Wi-Fi offloading is itself a function of inadequate availability of licensed spectrum for mobile data. Once the auctions are concluded, the argument goes, the companies will be able to fully provision their customers' needs. But that argument entirely misses what we learn from Wi-Fi offloading about the flexibility and innovation feasible in an open wireless environment. Like the introduction of the iPhone, we have to expect more devices and applications down the road that will dramatically increase the demand for capacity.

 

Femtocells Ecosystem

The need for in-building solutions is also being driven by the emergence of a "small cells" strategy among operators and by small cells in general.

Small cells are:

  • public wireless access nodes, using licensed spectrum, based on picocells or femtocells, with substantially smaller coverage footprints offering high capacity and serving fewer users than typically deployed macro cells
  • often owned and deployed by mobile operators to provide a transparent indoor or outdoor capacity layer in complement to a macro-cell “umbrella” coverage layer

Infrastructure small cells can provide contiguous or noncontiguous coverage. Small cells may be provider-owned and may be complemented by residential femtos or enterprise femtos.

There are several factors behind the trend toward smaller cells, including perceived health risks and visual appearance. Larger cells that transmit more radio power sometimes spark concerns about radiation, and at the same time are held to be less aesthetically pleasing, especially in dense locations. Smaller cells also consume less power, reducing energy demands and offering potential environmental benefits. Other reasons for the shift to small cells are to increase capacity with in-building engineered coverage, to offload high-consumption users from the macro layer, and to complement macro coverage.

Network as a Platform

Operators and infrastructure providers need to consider the full path of information flow – from the Internet to the core network to the devices – and make both the information and the functionality (in the form of APIs) accessible for internal and external use. So, besides carrying network traffic on demand, the IP backhaul, the mobile packet data core, the radio access network and the cloud computing infrastructure should work in sync.

The new network should be thought of as an object-oriented framework consisting of various network elements that are tightly coupled yet modular enough to act independently in dealing with the application and service world. This allows the network to scale on demand at a much more granular level than traditional networks. The migration to data services is creating tremors in the ecosystem. The business of access and service is fundamentally going through a transition, slower in some places than in others.

To manage data traffic effectively, the cost per bit across all the network components should be minimized. The network also needs to respond to spikes in demand; as such, it needs to use micro and macro networks (heterogeneous networks), network offload, and self-optimizing network capabilities to maximize efficiency and minimize the cost of delivery.
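As a toy illustration of the cost-per-bit argument, the sketch below compares a macro layer with a small-cell/Wi-Fi offload layer; all cost and traffic figures are hypothetical and chosen only to show the arithmetic.

```python
def cost_per_gb(annual_cost_usd: float, monthly_traffic_tb: float) -> float:
    """Blended cost per gigabyte carried by a network layer."""
    annual_gb = monthly_traffic_tb * 1000 * 12
    return annual_cost_usd / annual_gb

# Illustrative-only inputs: an expensive macro layer versus a cheaper offload layer
macro = cost_per_gb(annual_cost_usd=250_000, monthly_traffic_tb=40)
offload = cost_per_gb(annual_cost_usd=30_000, monthly_traffic_tb=15)
print(f"macro:   ${macro:.2f}/GB")    # ~ $0.52/GB
print(f"offload: ${offload:.2f}/GB")  # ~ $0.17/GB
```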

Network awareness and application awareness go hand in hand. An application can benefit tremendously from real-time network information that helps it adjust and fine-tune how it communicates with the client on the mobile device. For example, if the site is expecting congestion for the next 60 seconds, the application can suspend resource-intensive activities that would impede the experience. Operators like AT&T have started to offer tools such as ARO (Application Resource Optimizer) that can help eliminate utilization inefficiencies in an application. Pandora and Zynga benefitted greatly from such tools in optimizing their apps.
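To show what 'suspend resource-intensive activities' might look like in code, here is a hypothetical Python sketch; the CongestionHint structure and get_network_hint callback are invented for illustration and do not correspond to ARO or to any real operator API.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class CongestionHint:
    """Hypothetical congestion signal from the network (illustrative only)."""
    congested: bool
    expected_duration_s: int

def sync_loop(get_network_hint: Callable[[], CongestionHint],
              prefetch: Callable[[], None],
              poll_interval_s: int = 5,
              iterations: int = 10) -> None:
    """Run bandwidth-heavy prefetching only while the cell is not congested."""
    for _ in range(iterations):
        hint = get_network_hint()
        if hint.congested:
            # Back off for the advertised congestion window instead of hammering the radio.
            time.sleep(hint.expected_duration_s)
            continue
        prefetch()                     # heavy work, e.g. caching the next audio tracks
        time.sleep(poll_interval_s)

# Toy usage: a never-congested network and a stand-in prefetch function
sync_loop(lambda: CongestionHint(False, 60), lambda: print("prefetching..."),
          poll_interval_s=0, iterations=3)
```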

Similarly, the network can benefit from being application-aware by managing its resources in real time, whether it is signaling load that needs to be rebalanced or throughput that needs to be adjusted. The network can also help the device use offload options such as Wi-Fi by alerting it in advance, so it can prepare to switch when that is more cost-effective for the user.

Network intelligence around user profile, context, personalization and identity management can launch several new services and empower existing applications. The more the network layers and their associated information are abstracted for application use, the more valuable the asset becomes.

Therefore, the ecosystem should collaborate to understand how the IP network should be architected, which APIs should be enabled, what back-end infrastructure should be in place to support the vision, and how these assets can be monetized without impacting the user experience or the network.

Adding intelligence to the network – NGSON (Next-Generation Service Overlay Networks)

What is NGSON? The proliferation of the Internet removed geographical and temporal barriers for users to access value-added applications on a global scale, and MNOs faced complicated development and costly deployment of software to support the diverse requirements and environments of users. Service-oriented architecture (SOA) was introduced to address this problem by defining how to integrate the services (or functions) distributed over the Internet for the production of applications.

Based on the SOA principle, the service delivery platform (SDP) was introduced, which takes on creation, execution, delivery and control of services. Within this technology, a composite service can be orchestrated and composed of one or more component services that are distributed across different network domains. This capability allows service providers to develop their composite services rapidly and flexibly by utilizing existing component services.

While the SDP has been of great interest to network operators because of the new revenue opportunities from offering their service enablers, such as billing and messaging, to third-party service providers, traditional network infrastructures lack support for the delivery of such services: they manage service delivery and its quality of service (QoS) support only within their own domains (the so-called silo approach). To tackle this problem, the service overlay network (SON) was introduced, which is employed as an intermediate layer to support creation and deployment of value-added Internet services over heterogeneous networks.


In the SON approach, an overlay network is built as a common infrastructure for delivery of multiple value-added services. The overlay network is constituted by strategically deployed nodes (called SON nodes), which are dedicated to providing service-specific data forwarding and control functionalities. The SON nodes are interconnected by logical connections provided by different underlying networks. Within this infrastructure, a composite service can be created by providing its business logic (i.e., a composite service specification), which defines the interactions among component services. On a request for the composite service, the SON invokes each component service in the order defined in the given composite service specification and delivers the service data for the subsequent interactions with certain QoS guarantees, regardless of the operating network domain of the component service. However, the SON approach has faced an intrinsic limitation in handling the diverse and dynamic environments of users, services, and networks, which are inherent characteristics of recent trends in computing. I will discuss NGSON and the marriage between IT and telecom networks in the next few blog posts.
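For intuition, here is a minimal Python sketch of the composition idea just described: a composite service specification lists component services, and the overlay invokes them in order, piping each result into the next. The service names and registry are made up, and a real SON would wrap each hop with distributed forwarding and QoS control.

```python
from typing import Callable, Dict, List

ComponentService = Callable[[dict], dict]

def execute_composite(spec: List[str],
                      registry: Dict[str, ComponentService],
                      request: dict) -> dict:
    """Invoke component services in the order given by the composite spec."""
    data = request
    for name in spec:
        service = registry[name]   # component may live in another network domain
        data = service(data)       # overlay handles forwarding and QoS in reality
    return data

# Toy component services standing in for service enablers such as billing or messaging
registry = {
    "authenticate": lambda d: {**d, "user": "alice"},
    "locate":       lambda d: {**d, "cell": "macro-1042"},
    "bill":         lambda d: {**d, "charged": True},
}

print(execute_composite(["authenticate", "locate", "bill"], registry, {"req": "video"}))
```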

 
