Today's wireless networks have become a complex mix of services. Large macro networks can no longer be the sole provider of services; instead they will act as a conduit for heterogeneous networks and as a 'backhaul' for many services in the years to come. With spectrum limited, the future looks bleak for wireless networks unless a smart strategy emerges to inter-operate wireless systems across protocols and software. HetNets are evolving as we speak, and implementation is a complex mix of various 3GPP and IEEE networks. At least within 3GPP we now have backward compatibility and convergence on LTE, though the bands of operation are fragmented, making it cumbersome for one phone to work worldwide.
M2M is probably the first application of wireless beyond providing voice or data access to a live user. It all started with wireless monitoring of fleets and SCADA devices over 2G networks; now everything from a Coke vending machine to a buoy at sea is reporting back statistics. There have been a number of working definitions of M2M; some have included OnStar and e-readers such as the Kindle as examples of machines. However, we can take the CTIA definition of M2M as “applications or mobile units that use wireless networks to communicate with other machines. These applications may include telemetry and telematic devices, remote monitoring systems (e.g. transportation, etc.) and other devices that provide status reports to businesses’ centers (e.g. operations, traffic management, data management, etc.).” The M2M market does not need high data rates, but rather good coverage and long service life, including long battery life. Moreover, M2M is growing more slowly than smartphones, so the fraction of overall traffic attributable to M2M has actually declined for the past several years and is forecast to keep declining through 2016 and beyond.
A growing number of businesses across many sectors are investigating M2M applications to transform the way they do business. These applications have broad potential; for example, they can be used for video surveillance and home security, automated meter reading, remote equipment monitoring, fleet management and public safety. As a result, the worldwide M2M cellular market is expected to reach $2.14 billion by 2017 (source: IDG). Given their diversity, M2M applications require a wide range of products, connectivity and support.
Wireless Health Monitoring
- Cellular protocols, e.g. GSM, CDMA, WiMAX, LTE
- Short-range wireless (proprietary protocols)
- Short-range wireless (standard protocols):
  - Bluetooth Smart
  - ZigBee Health Care Public Application Profile
  - Low-power Wi-Fi
  - IEEE 802.15.4
  - IEEE 802.15.6
It is not just these air interface protocols that make wireless health monitoring possible; the underlying APIs developed by various vendors complete the package. The APIs give each device a common way to monitor and talk to the others.
Interface 1 is the interface between the platform and external service providers. The API for this interface is provided for the application developers and service providers.
Interface 2 is the interface between the platform and the customer. The interface may be supported on leased lines or on the Internet. The default application protocol is XML Web Services.
Interface 3 is a set of interfaces supporting additional functionality, such as installation support and initiation of automatic testing (important for high-volume end-system deployments, e.g. for AMR), and access to remote databases for data storage and logging.
Interface 4 is the standard interface towards the backbone IP network. It includes functionality belonging to the three lowest layers of the ISO/OSI protocol stack, i.e. the Internet protocol stack.
Interface 5 is an interface between the service platform CO (connected objects) and the device COs, i.e. the protocol stack as depicted on the backbone side of the gateway.
Interface 6 is proprietary and/or application specific, and may include functionality and protocols from all or only a few layers of the ISO/OSI stack.
Interface 7 may be identical to Interface 4. However, it may be optimized for M2M applications according to specifications from the IETF (e.g. from the 6lowpan and ROLL working groups).
The APIs provide application portability and harmonized access to common lower layer functionality. At the same time, they separate the application from the mechanisms and technology used for communication, facilitating independent innovation of applications, protocols and infrastructure. The API furthermore describes capabilities and services between objects. Capabilities and services may freely be allocated to end systems (COs) or to servers. This enables functional allocations at the discretion of the developer. The same API may be used for network-centric services and for P2P (peer-to-peer) services. This enables services and service components to move, i.e. the architecture is agnostic to functional allocation and location. The actual CO protocol (e.g. monitoring or remote control) is carried as payload by the user plane service elements of the API. The architecture allows new applications and protocols to be defined without changing the basic API or its primitives (i.e. methods).
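To make this separation concrete, the sketch below models a minimal service API that registers COs and carries application payloads opaquely between them, without interpreting the application protocol. All names here (ConnectedObject, ServiceAPI, send, receive) are invented for illustration; no standard M2M API is being quoted.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: class and method names are illustrative,
# not taken from any published M2M API specification.

@dataclass
class ConnectedObject:
    co_id: str
    capabilities: set = field(default_factory=set)

class ServiceAPI:
    """Carries application payloads between COs, independent of transport."""
    def __init__(self):
        self.registry = {}   # co_id -> ConnectedObject
        self.inbox = {}      # co_id -> list of (sender, payload)

    def register(self, co):
        self.registry[co.co_id] = co
        self.inbox.setdefault(co.co_id, [])

    def send(self, sender_id, dest_id, payload):
        # The CO protocol (e.g. monitoring) rides as an opaque payload;
        # the API itself never interprets it.
        if dest_id not in self.registry:
            raise KeyError(f"unknown CO: {dest_id}")
        self.inbox[dest_id].append((sender_id, payload))

    def receive(self, co_id):
        msgs, self.inbox[co_id] = self.inbox[co_id], []
        return msgs

api = ServiceAPI()
api.register(ConnectedObject("meter-1", {"monitoring"}))
api.register(ConnectedObject("server-1", {"logging"}))
api.send("meter-1", "server-1", {"reading_kwh": 12.4})
print(api.receive("server-1"))  # [('meter-1', {'reading_kwh': 12.4})]
```

Because the payload is opaque, a new application protocol can be carried without changing `send`/`receive`, mirroring the claim that new applications need no change to the basic API primitives.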
The service logically provided between COs via the service API shall be flexible, also offering subsets of the functionality. The idea, particularly applicable to device networks, is for the implementation to apply the simplest and most efficient protocol stack that meets the service requirements, with no or minimal processing and transmission overhead. Any of the layers shown may be functionally transparent, depending on the protocol stack in use for a given object. Flexibility is enabled by resolving the CO's characteristics, including its protocol stack profile, from the CO identifier. In local implementations it shall also be possible to support the network service API without the underlying IP bearer, e.g. by providing the service directly above the link layer. Such a sub-IP approach will, however, require a gateway arrangement for communication services to extend the local area beyond the reach of the applied link layer protocol.

Recent studies have indicated that remote electronic patient monitoring systems have great potential to improve care for patients in rural and remote regions. These systems, such as blood glucose monitors, blood pressure devices, pulse oximetry devices, and heart monitors, enable medical providers to observe a patient remotely over telecommunications networks, and are a cost-effective and patient-friendly way to monitor elderly patients and those with chronic medical conditions or recovering from a medical procedure.
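The kind of remote patient monitoring described above ultimately reduces to checking device readings against normal ranges and raising alerts. A minimal sketch of that idea follows; the thresholds and field names are invented for illustration and are not clinical guidance.

```python
# Assumed "normal" ranges, invented for the sketch only.
NORMAL = {"pulse_bpm": (50, 110), "spo2_pct": (92, 100), "glucose_mgdl": (70, 180)}

def check_vitals(reading):
    """Flag any measurement that falls outside its assumed normal range."""
    alerts = []
    for key, value in reading.items():
        lo, hi = NORMAL.get(key, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            alerts.append((key, value))
    return alerts

print(check_vitals({"pulse_bpm": 48, "spo2_pct": 97, "glucose_mgdl": 210}))
# [('pulse_bpm', 48), ('glucose_mgdl', 210)]
```

In a deployed system this check would run at the provider's monitoring center, with readings arriving over any of the air interfaces listed earlier.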
When we talk of power utilities, we tend to think of an old, archaic system built in the 1960s that has kept going strong. Today's power grid is composed of two networks. The first is an actively managed transmission network, which carries electricity over long distances at high voltage; the other, the distribution network, operates at lower voltage and takes electricity the last mile to individual homes and businesses. Combined, transmission and distribution networks represent a significant technical legacy, mirrored in their investment requirements: current estimates are US$ 13 trillion worldwide through to 2030. Unlike other industries, telecommunications for example, power utility infrastructure comprises many analogue/electromechanical legacy systems that are prone to failure and blackouts. It is dominated by centralized generation disseminated via a relatively passive (limited-control), one-way or limited two-way communication network between utilities and end users. Residential energy consumption is often projected rather than measured. Grid maintenance is time-based, reinforced when system components fail or reach their expected lifetime. Outage management relies on consumers notifying the utility that a power outage has occurred. A significant volume of the electricity entering the network is lost to technical inefficiencies or theft, ranging from 4-10% in Europe to more than 50% in some developing-city environments.
A smart grid uses sensing, embedded processing and digital communications to enable the electricity grid to be:
- observable (able to be measured and visualized)
- controllable (able to be manipulated and optimized)
- automated (able to adapt and self-heal)
- fully integrated (fully interoperable with existing systems and with the capacity to incorporate a diverse set of energy sources).
A smart grid will create the platform for a wide range of advanced and low-carbon technologies. The smart grid encapsulates embedded intelligence and communications integrated at every stage from power generation to end-point consumption. To date, most of the industry debate has centered on smart meters and advanced metering infrastructure: devices designed to accurately measure and communicate consumption data in the home or office environment. Confusion can arise if the term “smart meter” is used synonymously with “smart grid”; one objective of this paper is to provide some clarity on this misunderstanding. The reality is that, in the holistic smart grid, the smart meter becomes just one more node on the network, measuring and relaying flow and quality data.
A smart grid will exhibit several key characteristics:
Self-healing and resilient: A smart grid will perform real time self-assessments to detect, analyze and respond to subnormal grid conditions. Through integrated automation, it will self-heal, restoring grid components or entire sections of the network if they become damaged. It will remain resilient, minimizing the consequences and speeding up the time to service restoration. The modernized grid will increase the reliability, efficiency and security of the power grid and avoid the inconvenience and expense of interruptions – a growing problem in the context of ageing infrastructure. In the US alone, interruptions in the electricity supply cost consumers an estimated US$150 billion a year.
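The detect-analyze-respond loop of self-healing can be sketched very simply: monitor each grid section, isolate anything out of range, and reroute around it. Section names and the per-unit voltage band below are assumptions made for the example, not utility practice.

```python
# Illustrative only: section names and the 0.95-1.05 p.u. band are invented.

def self_heal(sections, voltage_ok=lambda v: 0.95 <= v <= 1.05):
    """Detect out-of-range sections, isolate them, and reroute supply."""
    actions = []
    for name, voltage in sections.items():
        if not voltage_ok(voltage):            # detect subnormal condition
            actions.append(("isolate", name))  # respond: take section offline
            actions.append(("reroute", name))  # restore service around it
    return actions

grid = {"feeder-A": 1.00, "feeder-B": 0.80, "feeder-C": 1.02}
print(self_heal(grid))
# [('isolate', 'feeder-B'), ('reroute', 'feeder-B')]
```

A real grid controller would of course analyze fault currents and switch states rather than a single voltage figure, but the control structure is the same closed loop.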
Integration of advanced and low-carbon technologies: A smart grid will exhibit “plug and play” scalable and interoperable capabilities. A smart grid will permit a higher transmission and distribution system penetration of renewable generation (e.g. wind and photovoltaic solar energy resources), distributed generation and energy storage (e.g. micro-generation).
Enable demand response: By extending the smart grid into the home (via a home area network), consumer appliances and devices can be controlled remotely, allowing for demand response. In the event of a peak in demand, a central system operator would potentially be able to control both the amount of power generation feeding into the system and the amount of demand drawing from it. Rather than building an expensive and inefficient “peaking plant” to feed spikes in demand, the system operator could issue demand response orders that trigger a temporary interruption or cycling of noncritical consumption (air conditioners, pool pumps, refrigerators, etc.).
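One way a system operator might decide which noncritical loads to interrupt is a simple greedy rule: shed the largest noncritical draws first until demand fits capacity. The device names, priorities and the greedy rule below are assumptions for the sketch, not a standard demand-response algorithm.

```python
# Hedged sketch: load list and shedding rule are invented for illustration.

def shed_loads(loads, demand_kw, capacity_kw):
    """Temporarily interrupt noncritical loads until demand fits capacity.

    loads: list of (name, draw_kw, is_critical) tuples.
    Returns (names shed, remaining demand in kW).
    """
    shed = []
    # Consider noncritical loads first, largest draw first (greedy choice).
    for name, kw, critical in sorted(loads, key=lambda l: (l[2], -l[1])):
        if demand_kw <= capacity_kw:
            break
        if not critical:
            shed.append(name)
            demand_kw -= kw
    return shed, demand_kw

loads = [("air-conditioner", 3.0, False),
         ("pool-pump", 1.5, False),
         ("refrigerator", 0.5, False),
         ("medical-device", 0.4, True)]
print(shed_loads(loads, demand_kw=6.0, capacity_kw=2.0))
# (['air-conditioner', 'pool-pump'], 1.5)
```

Critical loads are never shed; in practice operators would also rotate ("cycle") interruptions across households rather than holding one set of devices off.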
Asset optimization and operational efficiency: A smart grid will enable better asset utilization from generation all the way to the consumer end points. It will enable condition- and performance-based maintenance. A smart grid will operate closer to its operational limits, freeing up additional capacity from the existing infrastructure; this remains an attractive proposition given that a US study found transmission congestion costs Eastern US consumers US$ 16.5 billion per year in higher electricity prices alone.
Customer inclusion: A smart grid will involve consumers, engaging them as active participants in the electricity market. It will help empower utilities to match evolving consumer expectations and deliver greater visibility and choice in energy purchasing. It will generate demand for cost-saving and energy-saving products.
Market empowerment: A smart grid will provide greater transparency and availability of energy market information. It will enable more efficient, automated management of market parameters, such as changes of capacity, and enable a plethora of new products and services. New sources of supply and enhanced control of demand will expand markets, bring together buyers and sellers, and remove inefficiencies.
Internet of Things – connected everything!
We are standing on the brink of a new ubiquitous computing and communication era, one that will radically transform our corporate, community, and personal spheres. Early forms of ubiquitous information and communication networks are evident in the widespread use of mobile phones: the number of mobile phones worldwide surpassed 6 billion by the end of 2011. These gadgets have become an integral and intimate part of everyday life for many millions of people, even more so than the internet.
Today, developments are rapidly under way to take this phenomenon an important step further, by embedding short-range mobile transceivers into a wide array of additional gadgets and everyday items, enabling new forms of communication between people and things, and between things themselves. This new dimension has added to the world of information and communication technologies (ICTs): from anytime, any place connectivity for anyone, we will now have connectivity for anything. Connections have multiplied and created an entirely new dynamic network of networks – an Internet of things. The Internet of Things is a reality today, and is based on solid technological advances and visions of network ubiquity that are zealously being realized.
The Internet of Things is a technological revolution that represents the future of computing and communications, and its development depends on dynamic technical innovation in a number of important fields, from wireless sensors to nanotechnology. First, in order to connect everyday objects and devices to large databases and networks – and indeed to the network of networks (the internet) – a simple, unobtrusive and cost-effective system of item identification is crucial. Only then can data about things be collected and processed. Radio-frequency identification (RFID) offers this functionality. Second, data collection will benefit from the ability to detect changes in the physical status of things, using sensor technologies. Embedded intelligence in the things themselves can further enhance the power of the network by devolving information processing capabilities to the edges of the network. Finally, advances in miniaturization and nanotechnology mean that smaller and smaller things will have the ability to interact and connect. A combination of all of these developments will create an Internet of Things that connects the world’s objects in both a sensory and an intelligent manner.
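The item-identification step described above amounts to resolving a tag ID to an item record and, where sensors are present, attaching physical-status data to the event. The sketch below illustrates that flow with invented tag IDs and records; real RFID middleware is far richer than this lookup.

```python
# Invented item database for illustration; real deployments use standardized
# identifier schemes and networked back-end databases, not a local dict.
ITEM_DB = {
    "E200-3412": {"item": "pallet-7", "owner": "warehouse-3"},
    "E200-9981": {"item": "crate-2",  "owner": "warehouse-1"},
}

def on_tag_read(tag_id, sensor_temp_c=None):
    """Resolve a scanned RFID tag to its item record and attach sensor context."""
    record = ITEM_DB.get(tag_id)
    if record is None:
        return {"tag": tag_id, "status": "unknown"}
    event = {"tag": tag_id, "status": "identified", **record}
    if sensor_temp_c is not None:   # sensing adds physical-status data
        event["temp_c"] = sensor_temp_c
    return event

print(on_tag_read("E200-3412", sensor_temp_c=4.5))
```

Identification (the RFID lookup) and sensing (the temperature field) are deliberately separate here, mirroring the two enabling technologies named in the text.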
With the benefit of integrated information processing, industrial products and everyday objects will take on smart characteristics and capabilities. They may also take on electronic identities that can be queried remotely, or be equipped with sensors for detecting physical changes around them. Eventually, even particles as small as dust might be tagged and networked. Such developments will turn the merely static objects of today into newly dynamic things, embedding intelligence in our environment, and stimulating the creation of innovative products and entirely new services. The internet as we know it is transforming radically. From an academic network for the chosen few, it became a mass-market, consumer-oriented network. Now, it is set to become fully pervasive, interactive and intelligent. The advent of the Internet of Things will create a plethora of innovative applications and services, which will enhance quality of life and reduce inequalities whilst providing new revenue opportunities for a host of enterprising businesses.
Nevertheless, the human being should remain at the core of the overall vision, as his or her needs will be pivotal to future innovation in this area. Indeed, technology and markets cannot exist independently from the over-arching principles of a social and ethical system. The Internet of Things will have a broad impact on many of the processes that characterize our daily lives, influencing our behavior and even our values.
For the telecommunication industry, the Internet of Things is an opportunity to capitalize on existing success stories, such as the iPhone, but also to explore new frontiers. In a world increasingly mediated by technology, we must ensure that the human core of our activities remains untouched. On the road to the Internet of Things, this can only be achieved through people-oriented strategies, and tighter linkages between those who create technology and those who use it. In this way, we will be better equipped to face the challenges that modern life throws our way.
Social Media, IMS and SO-LO-MO networks
The biggest growth area for mobile over the last couple of years has been SO-LO-MO: Social, Local and Mobile. Is that surprising? Not at all. A few years ago I would plan an overnight trip in advance: places to stay, eat and play. Now all I do is take my smartphone and follow the directions it gives me to eat, stay and enjoy.
Social networking was “born” around the early-to-mid 2000s, a time when the first context management models and frameworks were being designed for the market. While the world was developing the first context management systems, social networking was growing fast; so fast, in fact, that no one can afford to ignore it any more. The introduction of smartphones into the global telecommunications market enabled social networking to actually change the way people lived, the way they communicated, and their habits. Information and Communication Technologies research has started taking this new reality very seriously, and numerous initiatives have invested in relevant projects in order to explore all social networking aspects and incorporate its logic into the lowest architectural layers. This article presents a short historical review of the evolution of context management in pervasive computing systems, revisits the earlier models in order to put community context where it belongs, presents a context management architecture suitable for pervasive services combined with social networking, and explores its added value for users all over the world.
Advances in smart devices, social computing applications, and wireless communications are enabling new service integration and management opportunities. New devices and mobile platforms already permit accurate tracing of world-related information and users' physical activities; this physical-world perspective (connected to the Internet of Things) is relatively new compared with typical Internet computing information: pure data, pages, and sites. The Internet has also evolved and promoted several revolutions in usage, starting with Web 2.0 and beyond, in the sense of more direct participation and social involvement; see the new Google “Search, plus your world,” with its intrinsic integration of the user's social environment. At the same time, the recent convergence of the worlds of Internet, broadcasting, and mobile communications represents the glue that brings together the computing and the physical world, including people, social interactions, and (mobile) services. What is still missing is a general support platform able to truly enrich the whole mobile service management cycle, fully exploiting the collective (though imprecise) deposit of socio-technical monitoring information to enhance service delivery from both social (recommendation of services of interest) and technical (resource and data delivery optimization) points of view, so as to improve the Quality of Experience (QoE) for end users.
With the main goals of openness and interoperability via an application-layer approach, IP Multimedia Subsystem (IMS), based on the Session Initiation Protocol (SIP), has recently gained success as the session control protocol for application delivery over all-IP next generation networks. IMS specifies a simple and powerful service delivery platform and service composition model, and defines the Service Capabilities Interaction Management (SCIM) component to manage integration and composition of IMS service blocks. At its current stage, however, IMS still exhibits limitations in its support for integrated monitoring and service composition/recommendation management. On the one hand, the possibility to automatically maintain lists of the most useful and widely used mobile services, built around user needs, social behaviors, and current activities, is still largely unexplored in IMS. On the other hand, SCIM is still a raw composer component without much intelligence: in other words, SCIM is unable to dynamically exploit socio-technical monitoring information about people, service, and network usage to dynamically (re-)adapt the delivery of (composed) mobile multimedia services.
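To give a flavor of the missing recommendation intelligence, the sketch below ranks services by how often they are invoked, weighting invocations by members of the user's social circle more heavily. The scoring rule and all names are assumptions for illustration; nothing here comes from the IMS or SCIM specifications.

```python
# Illustrative sketch: the "friend invocations count double" rule is an
# invented stand-in for socio-technical monitoring, not a standard algorithm.

def recommend_services(usage_log, friends, user, top_n=2):
    """Rank services by weighted invocation counts across the user's circle.

    usage_log: list of (who, service) invocation records.
    friends: dict mapping a user to the set of users in their circle.
    """
    scores = {}
    for who, service in usage_log:
        weight = 2 if who in friends.get(user, set()) else 1
        scores[service] = scores.get(service, 0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

log = [("alice", "video-share"), ("bob", "presence"),
       ("alice", "video-share"), ("carol", "push-to-talk")]
friends = {"dave": {"alice"}}
print(recommend_services(log, friends, "dave", top_n=1))  # ['video-share']
```

In an IMS setting, a smarter SCIM could feed a ranking like this into its composition decisions, which is precisely the gap the paragraph above identifies.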