Over the last few months we saw two big announcements of telecom consolidation in the US from the 3rd and 4th largest carriers. Is this a surprise? By all means it is not – I have been busy lately studying demand and supply curves for my microeconomics class (part of my MBA program) and voilà – this all makes perfect sense! It all comes down to economies of scale – the larger the firm, the better it is able to supply the service needed at a better rate (meaning fewer dollars!). The one metric that keeps hurting carrier budgets is the ARPU distribution and its long-tail economics.
So what will happen over the next few years is a shift for MNOs from selling voice and data to becoming service enablers. This shift rests on two main factors – investment dollars and partnerships with several niche market players – which together will allow MNOs to move into the VAS (value-added services) space.
Several years ago Microsoft faced similar competition from the likes of Google, which challenged the whole notion of selling licenses for an OS. Google brought in the freemium play – give Android away for free and sell the apps! A similar shift is happening today in the wireless ecosystem – old models need to be thrown out and value creation needs to shape the networks!
Operators need to take these steps for survival – or be bought out:
Who is the competitor? The primary competitors for telecom service providers used to be other service providers. However, over the last few years, players have been migrating and surfing across segments – from Android and Apple to PayPal, from P&G to AT&T, from Facebook to Time Warner, from Google to Best Buy – every company wants to capture mindshare and a piece of the consumer’s pocketbook. The fine line between partners and competitors can be obliterated in a quarter. One product launch or one acquisition can change the game in an instant.
Deploy an end-to-end framework starting with the mobile packet core & IMS: Innovation is not just for the edge network or the devices; rather, the entire mobile network – from RNC, SGSN, GGSN and billing systems to the devices – should be available as a platform for innovation. Obviously, one can’t open up all the critical pieces of information at once; it should be done in a methodical and thoughtful manner. To provide a good experience for the customer and a robust API framework for developers, the various network components need to work in sync, helping to understand user behavior at a granular level and turning observations into insights that can be exposed to developers, who can leverage that input to build new experiences.
Charge for OTT services: The consequences of not playing an active role in OTT services can be severely detrimental to operator profitability. Given the significant pressure on the margins of the voice, access, and messaging businesses, operators have to find new sources of sustainable revenue in the next 5 years or accept a 30-50% decline in margins. Netflix, Pandora and several other OTT products have made enormous inroads in terms of capital generation while the MNOs have passively watched their ARPU slip! Toll-free apps from AT&T are one step toward correcting that course. A detailed explanation of the concept is provided here in my blogpost.
Exploit the long tail: Voice, Access, and Messaging will continue to be the three dominant revenue-generating applications for mobile operators for some time. However, the next bucket of revenue isn’t in any one or two applications but rather in the long tail, which can include hundreds of applications. While individually they might not generate a significant amount of revenue, collectively they can rival the top three in generating billions of dollars in the coming years. By focusing on vertical areas such as health, retail, education, and energy, and horizontal areas such as security, cloud computing, payments, and others, mobile operators can create a sustainable revenue source for the future.
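The long-tail arithmetic above can be sketched with a toy calculation (all revenue figures below are hypothetical, purely for illustration):

```python
# Illustrative long-tail arithmetic (all figures hypothetical):
# three "head" applications vs. hundreds of small vertical/horizontal services.
head_revenue = [40.0, 25.0, 15.0]   # voice, access, messaging ($B/yr)

# 300 long-tail services, each individually small.
tail_revenue = [0.25] * 300         # $B/yr each

head_total = sum(head_revenue)      # 80.0
tail_total = sum(tail_revenue)      # 75.0 -- collectively rivals the head
tail_share = tail_total / (head_total + tail_total)
```

No single tail service matters on its own, but the aggregate is comparable to the head – which is the whole argument for investing in hundreds of niche plays rather than hunting for one more blockbuster.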
Foster a developer ecosystem and provide tools: Mobile operator VAS revenues are tightly linked to the developer ecosystem they are able to foster. Just as Android and iOS have huge developer followings that help them dominate the mobile OS platform landscape, mobile operators who open up APIs for network, billing, profile, authorization, location, performance, security, quality of service, etc. will build a robust ecosystem that churns out new services and applications, driving enormous value to the end customer. Directly or indirectly, this will add to the bottom line, and the operator will be seen as a service innovator in the marketplace. Application developers primarily focus on how their application or service works; they rarely spend time examining how their API requests will impact the network. Operators need to set up testing labs and provide simulation tools to developers so that they are more informed about the traffic and signaling load generated by their applications and are better prepared to address data consumption issues. Here are some examples of the developer outreach that has happened over the years – AT&T, Verizon, Sprint, T-Mobile & Clearwire.
Create value rather than nickel-and-dime for bytes: MNOs must manage their data margins, but the pricing of data shouldn’t focus exclusively on the amount of data transmitted. A byte carried during a financial transaction or for a medical application is far more valuable than a byte transferred for social networking updates. A health care application that provides peace of mind to family members might not send gigabytes of data, but consumers care more about reliability and immediacy than about app tonnage. Remote monitoring apps can lower overall costs for healthcare and insurance providers, and they will measure the cost of the app not by the amount of data transmitted but by the value it provides.
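The idea of pricing bytes by the value of the service they carry, rather than by volume alone, can be sketched as follows (the application classes and per-MB rates are entirely hypothetical):

```python
# Sketch of value-based data pricing (class names and rates are hypothetical):
# the same byte is priced by the value of the service it carries, not volume.
RATE_PER_MB = {
    "social":     0.01,   # bulk, best-effort traffic
    "financial":  0.10,   # low volume, high assurance
    "healthcare": 0.25,   # reliability and immediacy priced in
}

def data_charge(app_class: str, megabytes: float) -> float:
    """Charge for a session: value class multiplier times volume."""
    return RATE_PER_MB[app_class] * megabytes

# A tiny 2 MB remote-monitoring session out-earns a 40 MB social session.
monitoring = data_charge("healthcare", 2)   # 0.50
social     = data_charge("social", 40)      # 0.40
```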
How can MNOs play this space and get ahead of the game with the changes that will happen over the next few years? Below is a very good video describing the change model for the next few years.
Cloud RAN, Integrated antennas & MSR/SDR
Distributed Node-B architecture, or Cloud Radio Access Network (C-RAN), is a new paradigm in base station architecture that aims to reduce the number of cell sites while increasing base station deployment density, bypassing some of the zoning and construction hurdles to bringing new sites on-air. Metro cities like NY, LA and SF already have a high density of cell towers. Cloud RAN comes with a new architecture that breaks the base station into a Base Unit (BU) – a digital unit that implements the MAC, PHY and AAS (Antenna Array System) functionality – and the Remote Radio Head (RRH), which receives the digital (optical) signals, converts them to analog, amplifies the power, and performs the actual transmission. By making the RRH an active unit capable of converting between analog and digital, operators can place numerous BUs at a single geographical point while distributing the RRHs according to the RF plans. The RRH becomes an intelligent antenna array that not only transmits RF signals but also handles the conversion between the digital and analog domains.
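The fronthaul link between the BU and the RRH carries digitized radio samples, which is why it is typically optical. A back-of-the-envelope calculation, assuming a CPRI-style transport for a single 20 MHz LTE antenna-carrier (15-bit I/Q samples, 8b/10b line coding), shows the scale:

```python
# Back-of-the-envelope fronthaul bit rate for one BU-to-RRH link carrying a
# single 20 MHz LTE antenna-carrier over CPRI-style transport.
sample_rate = 30.72e6     # samples/s for a 20 MHz LTE carrier
sample_bits = 15 * 2      # 15-bit I + 15-bit Q per sample
cw_overhead = 16 / 15     # one control word per 15 data words
line_coding = 10 / 8      # 8b/10b line coding

rate_bps = sample_rate * sample_bits * cw_overhead * line_coding
# ~1.2288 Gbit/s per antenna-carrier: digitized radio is far bulkier than
# the user data it carries, hence the need for optical fronthaul.
```

Multiply by sectors and MIMO antennas and the fronthaul requirement of a pooled BU site quickly reaches tens of gigabits per second.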
Wireless telecommunications leader Ericsson recently announced a new cellular radio solution called AIR (antenna integrated radio), which combines the radio unit with the antenna unit for simpler installation and integration into wireless networks, with the added bonus of decreased power consumption. Generally speaking, the smaller a company can make its cell sites, the better. With smaller parts, more can be added, and obtaining permits and zoning clearance becomes a little easier. This is an issue operators have been confronting head-on with their cell site equipment for the past few years.
The 3GPP definition for Multi-Standard Radio is: a Base Station characterized by the ability of its receiver and transmitter to process two or more carriers in common active RF components simultaneously in a declared RF bandwidth, where at least one carrier is of a different RAT than the other carrier(s). In simple terms, a single base station will be able to transmit different radio access technologies simultaneously from a single unit – for example GSM, WCDMA 2100 and LTE 2600 at the same time. The number of technologies supported by a BTS will be an implementation choice. With the technology maturing, it won’t be surprising to see up to 4-5 different technologies in an MSR-BS within the next five years. The advantage for the mobile operator will be not only monetary but also space savings. But, as the old proverb says, they will be putting all their eggs in one basket: if the unit stops working, coverage in the area goes down, and there may be no other technology to fall back on.
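The trade-off can be illustrated with a small conceptual model (the class and its fields are illustrative toys, not 3GPP-defined data structures): multiple RAT carriers share one declared RF bandwidth and one set of active components, so a single unit failure takes every RAT down at once.

```python
# Conceptual model of a multi-standard radio base station (illustrative only):
# several RATs share one set of active RF components within a declared
# RF bandwidth, so one unit failure takes down every RAT -- no fallback.
class MSRBaseStation:
    def __init__(self, declared_bandwidth_mhz: float):
        self.declared_bandwidth_mhz = declared_bandwidth_mhz
        self.carriers = []          # list of (rat, bandwidth_mhz) pairs
        self.operational = True

    def add_carrier(self, rat: str, bandwidth_mhz: float) -> None:
        used = sum(bw for _, bw in self.carriers)
        if used + bandwidth_mhz > self.declared_bandwidth_mhz:
            raise ValueError("carrier does not fit in declared RF bandwidth")
        self.carriers.append((rat, bandwidth_mhz))

    def active_rats(self) -> list:
        # A failed unit means every RAT in the basket goes dark together.
        return [rat for rat, _ in self.carriers] if self.operational else []

bs = MSRBaseStation(declared_bandwidth_mhz=40)
bs.add_carrier("GSM", 5)
bs.add_carrier("WCDMA", 5)
bs.add_carrier("LTE", 20)

bs.operational = False    # all eggs in one basket: one failure, zero coverage
```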
A software-defined radio system (SDR) is a radio communication system where components that have been typically implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on an embedded system. While the concept of SDR is not new, the rapidly evolving capabilities render practical many processes which used to be only theoretically possible. Software radios have significant utility for the military and cell phone services, both of which must serve a wide variety of changing radio protocols in real time.
Software-defined radios are expected by proponents like the Wireless Innovation Forum (formerly the SDR Forum) to become the dominant technology in radio communications. SDRs, along with software-defined antennas, are the enablers of cognitive radio. Software-defined radio will make it possible to use the electromagnetic spectrum in fundamentally new ways. Most radio standards today are designed to use a fixed, narrow frequency band. In contrast, software-defined radio devices can tune into many different frequencies simultaneously, making possible communications schemes that wouldn’t be feasible with conventional radio gear.
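At its core, SDR replaces analog signal-processing stages with arithmetic on samples. A minimal sketch of one such stage – a software mixer that retunes by frequency-translating a sampled signal – might look like this (the sample rate and tone frequency are chosen arbitrarily for illustration):

```python
import cmath
import math

# A minimal software "mixer": frequency translation done in code rather than
# in an analog hardware mixer. Multiplying a sampled signal by a complex
# exponential shifts it in frequency -- here a 12 kHz tone is moved to DC.
fs = 48_000      # sample rate (Hz)
f_tone = 12_000  # input tone frequency (Hz)
n = 480          # number of samples

signal = [cmath.exp(2j * math.pi * f_tone * k / fs) for k in range(n)]
lo     = [cmath.exp(-2j * math.pi * f_tone * k / fs) for k in range(n)]

baseband = [s * o for s, o in zip(signal, lo)]  # tone mixed down to 0 Hz

# After mixing, every sample is (very nearly) the constant 1+0j: the tone now
# sits at DC. Retuning means changing f_tone in software -- no new hardware.
dc_level = sum(baseband) / n
```

Swapping the local-oscillator frequency, filter, or demodulator is a code change, which is exactly what lets one radio serve "a wide variety of changing radio protocols in real time."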
Packet Microwave Backhaul
Despite the availability of broadband access technologies, one of the constraints limiting massive deployment of wireless broadband services is confusion about the best way to interact with legacy networks. Communications service providers are being challenged, in terms of OPEX, to:
- Reduce backhaul transport costs while meeting escalating demands for bandwidth-intensive mobile multimedia data services.
- Evolve to packet-optimized networks that are capable of supporting Fourth Generation (4G) wireless.
- Ensure service availability to retain and grow their customer bases.
Communications service providers are seeking mobile-backhaul networks that can meet capacity demands with architectures that support data services as well as traditional voice at a sustainable cost. One solution to these challenges is to increase the use of microwave. Worldwide, microwave is used for half of the total connections in the access network, representing a valid and complementary alternative to wireline technologies such as copper and fiber. Ethernet-based microwave connections are replacing legacy Plesiochronous Digital Hierarchy (PDH) E-carrier/T-carrier (E1/T1) microwave connections at an increasing rate. Legacy, TDM-based backhaul networks are able to guarantee customer service levels. However, TDM offers granularity only at the E1/DS1/STM level – that is, in multiples of relatively large fixed speeds for all services. Next-generation packet technology, based on Ethernet, makes service-level guarantees over these networks harder to deliver, but provides much better service differentiation and a finer level of granularity than TDM. Packet technology grants backhaul operators a new level of service provisioning capability.
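The granularity difference can be made concrete with a small sketch, assuming E1 circuits (2.048 Mbit/s) as the TDM provisioning unit:

```python
import math

E1_MBPS = 2.048   # TDM granularity: capacity comes in whole E1 circuits

def tdm_provisioned(demand_mbps: float) -> float:
    """Smallest multiple of E1 that covers the demand."""
    return math.ceil(demand_mbps / E1_MBPS) * E1_MBPS

def ethernet_provisioned(demand_mbps: float) -> float:
    """Packet backhaul can be provisioned at (essentially) any rate."""
    return demand_mbps

demand = 5.0                        # Mbit/s needed for a service
tdm = tdm_provisioned(demand)       # 6.144 Mbit/s (3 x E1), capacity stranded
eth = ethernet_provisioned(demand)  # 5.0 Mbit/s, no stranded capacity
```

A 5 Mbit/s service forces three E1s and strands roughly a fifth of the purchased capacity, while packet provisioning matches demand exactly – the "finer granularity" advantage in one line of arithmetic.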
Like all networks, microwave links based on TDM enjoy and suffer the service-level advantages and disadvantages of legacy TDM technology. The world of microwave backhaul is moving rapidly from TDM to Ethernet. As these links progress into the Ethernet world, they can take advantage of the myriad additional service-level capabilities that Ethernet affords, even as they overcome its inherent difficulties. Advanced microwave networks offer an additional capability related to reliability: Adaptive Coding and Modulation (ACM). With ACM, modulation is decreased automatically in difficult transmission conditions and restored automatically as conditions improve. These variations in modulation cause variations in capacity, complicating service-level guarantees.
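A simplified sketch of ACM profile selection (the SNR thresholds and symbol rate below are illustrative, not vendor figures): as the link degrades, the radio steps down to a more robust modulation, trading capacity for availability.

```python
# Sketch of Adaptive Coding and Modulation (thresholds are illustrative):
# as link conditions (SNR) degrade, the radio steps down to a more robust
# modulation, so the link stays up but capacity shrinks.
ACM_PROFILES = [
    # (min SNR dB, modulation, bits per symbol)
    (28.0, "256QAM", 8),
    (21.0, "64QAM", 6),
    (14.0, "16QAM", 4),
    (7.0,  "QPSK",  2),
]

def select_profile(snr_db: float):
    for min_snr, modulation, bits in ACM_PROFILES:
        if snr_db >= min_snr:
            return modulation, bits
    return "BPSK", 1          # last-resort robust mode

def capacity_mbps(snr_db: float, symbol_rate_msps: float = 28.0) -> float:
    _, bits = select_profile(snr_db)
    return symbol_rate_msps * bits

clear_sky = capacity_mbps(30.0)   # 256QAM: full throughput
heavy_rain = capacity_mbps(10.0)  # QPSK: a quarter of it, but the link lives
```

This is exactly why ACM complicates service-level guarantees: the same link legitimately offers different capacities at different moments, so an SLA has to be written against a guaranteed (low-modulation) floor rather than the clear-sky peak.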
Microwave technology is increasingly used to provide Carrier Ethernet. The main advantages of Microwave based Ethernet solutions are:
- Rapid deployment time
- Cost effective when compared to other approaches
- Offers throughput that rivals fiber for many applications
- Mature carrier-grade solution
Because microwave uses radio spectrum instead of physical connections, it eliminates the “right of way” issues that complicate installation of fiber or copper media. In many environments, microwave can provide the lowest cost per bit for transporting Ethernet services. It is a competitive choice for capacities up to 1 Gigabit per second, and with ongoing developments in microwave technology we expect to see capacity expand to several Gigabits in the near future.
However, many misconceptions exist regarding Microwave technology as an access media for Ethernet services. These misconceptions include: insufficient capacity, weather influenced performance degradation, and concerns about spectrum availability. Recent developments in Microwave technology and Ethernet services as defined by the MEF are making these concerns moot and highlighting other benefits of Microwave technology in access networks.
Packet Core in the cloud and SDN
A niche argument has come forth – if radio baseband processing can be moved to the cloud for processing and transmission to the core, why can’t the core (EPC) be moved to a cloud model? What hinders us from running our PGW and AAA/HSS/Policy servers on the Amazon cloud? Is technology the barrier to entry, or standards? Here is a good blogpost on that attempt.
Standards have evolved so that Tier 2 operators have started to share the same EPC/packet core with a pay-per-use model. A hosted mobile core is a path forward for regional operators to help overcome the high cost of deploying, maintaining and operating LTE networks. A variety of wholesale backhaul and site-sharing schemes are being utilized to defray costs in the RAN and backhaul. However, this still leaves the daunting challenge of building, maintaining and operating a next-generation mobile core – or evolved packet core (EPC). Here is a scenario for a hosted packet core done by Alcatel-Lucent.
SDN: Software-defined networking is an emerging architecture for computer networking. SDN separates the control plane from the data plane in network switches and routers. Under SDN, the control plane is implemented in software on servers separate from the network equipment, and the data plane is implemented in commodity network equipment. OpenFlow is the leading SDN protocol.
SDN allows for:
- Quick experimentation with, and optimization of, switching/routing policies.
- External access to the innards of switches and routers that formerly was closed and proprietary.
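A heavily simplified, OpenFlow-inspired sketch of the control/data-plane split – not the actual OpenFlow wire protocol, just the shape of the idea – might look like:

```python
# Minimal sketch of SDN control/data-plane separation (OpenFlow-inspired,
# heavily simplified; not the actual OpenFlow protocol).
class Controller:
    """Control plane, in software on a separate server; policy lives here."""
    def __init__(self, routes):
        self.routes = routes            # dst address -> egress port

    def decide_route(self, dst: str) -> int:
        return self.routes.get(dst, 0)  # port 0 = drop/default policy

class DataPlaneSwitch:
    """Commodity switch: forwards only by flow-table lookup."""
    def __init__(self, controller):
        self.flow_table = {}            # match (dst) -> action (egress port)
        self.controller = controller

    def forward(self, dst: str) -> int:
        if dst not in self.flow_table:  # table miss: punt to the controller
            self.flow_table[dst] = self.controller.decide_route(dst)
        return self.flow_table[dst]     # subsequent packets hit the cache

ctrl = Controller({"10.0.0.5": 3, "10.0.0.7": 1})
sw = DataPlaneSwitch(ctrl)

port = sw.forward("10.0.0.5")        # first packet consults the controller
port_again = sw.forward("10.0.0.5")  # later packets use the installed rule
```

Changing the routing policy means editing software on the controller, with no touch to the switches – which is precisely what makes the quick experimentation above possible.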
For more reading, please see my last blogpost on it.
SON, OSS & Automatic Node control
SON (Self-Optimizing Networks), as implemented with LTE, will make the implementation and planning of HetNets easier, though the earliest use cases have concentrated on neighbor relations and other automation aspects of LTE networks. 3GPP Rel. 9/10 SON functionality addresses more mature networks and includes these additional use cases:
| Planning & Deployment | Maintenance | Optimization |
| --- | --- | --- |
| Planning of eNodeB | Hardware/capacity extension | Support of centralized optimization entity |
| Planning of eNodeB radio parameters | Automated NEM upgrade | Neighbor list optimization |
| Planning of eNodeB transport parameters | Cell/service outage detection and compensation | Interference control |
| Planning of eNodeB data alignment | Real-time performance management | Handover parameter optimization |
| Hardware installation | Information correlation for fault management | QoS parameter optimization |
| eNodeB/network authentication | Subscriber and equipment trace | Load balancing |
| O&M secure tunnel setup | Outage compensation for higher-level network elements | Home eNodeB optimization |
| Automatic inventory | Fast recovery of unstable NEM system | RACH load optimization |
| Automatic software download to eNB | Mitigation of outage of units | |
| Radio parameter setup | | |
Self-configuration is a broad concept involving several distinct functions covered by specific SON features, such as Automatic Software Management, Self-Test and Automatic Neighbor Relation configuration. The self-configuration algorithm should take care of all soft-configuration aspects of the eNodeB once it is commissioned and powered up for the first time: it should detect the transport link and establish a connection with the core network elements, download and upgrade the corresponding software version, set up the initial configuration parameters including neighbor relations, perform a self-test, and finally set itself to operational mode.
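The power-up sequence described above can be sketched as an ordered checklist (the step names are illustrative, not 3GPP-defined procedures):

```python
# The self-configuration sequence, as an ordered checklist a newly powered
# eNodeB might run (step names are illustrative, not 3GPP procedures).
SELF_CONFIG_STEPS = [
    "detect_transport_link",    # find backhaul, reach the core network
    "connect_to_core",          # secure O&M tunnel, authentication
    "download_software",        # fetch/upgrade the correct software version
    "apply_initial_config",     # parameters incl. automatic neighbor relations
    "run_self_test",            # verify the node before going on-air
]

def self_configure(run_step) -> str:
    """Run each step in order; any failure aborts before operational mode."""
    for step in SELF_CONFIG_STEPS:
        if not run_step(step):
            return f"failed:{step}"
    return "operational"

state = self_configure(lambda step: True)   # happy path: node comes on-air
stuck = self_configure(lambda step: step != "run_self_test")
```

The value of SON is that every one of these steps happens without a drive-out: a failed step leaves the node safely non-operational and flagged to the NEM, rather than half-configured on-air.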
Socrates – http://www.fp7-socrates.org/?q=node/25
NGMN – http://www.ngmn.org/
All the thoughts discussed above are my own views.