Power at the edge: Processing and storage move from the central core to the network edge
MEC and other edge computing initiatives address the need to place processing and storage where appropriate, whether a central location or the network’s edge, depending on factors such as applications, traffic type, network conditions, subscriber profile, and operator’s preference. In this InterDigital-sponsored report, RCRWireless explores the evolution of the edge’s role in fixed and mobile networks and how it may impact network optimization, value-chain roles and relationships, business models, usage models and, ultimately, the subscriber experience.
REPORT Power at the edge. ©2017 Senza Fili Consulting. www.senzafiliconsulting.com
In collaboration with
Table of contents
I. Report: Power at the edge. MEC, edge computing, and the prominence of location
What: Basics
Why: Drivers
Which: Initiatives
Where: Topologies
Who: Business models
When: Timeline
II. Vendor profiles and interviews
ADLINK Technology
Vasona Networks
III. Service provider interviews
Verizon Wireless
Further resources
Watch the video of the interviews
I. Report: Power at the edge. MEC, edge computing, and the prominence of location
Location, location, location. Multi-access Edge Computing (MEC) and edge computing in general are gaining acceptance in both fixed and mobile networks as we increasingly realize the power of location in wireless networks, and especially in virtualized networks. This does not mean the centralized cloud or the big data centers hosting the network core will go away anytime soon. But a rebalancing act is definitely due.
In recent years, there has been a strong push to move everything to a
centralized cloud, enabled by virtualization and driven by the need to cut
costs, reduce the time to market for new services, and increase flexibility. In
the process, we lost sight of how important the location of functionality is to
performance, efficient use of network resources and subscriber experience.
Physical distance inevitably increases latency.
Central processing and storage limit the ability to optimize RAN utilization. A
fully centralized network may be easier and cheaper to run, but it does not
always keep subscribers happy.
MEC and other edge computing initiatives address the need to place processing and storage where appropriate, whether a central location or the network’s edge, depending on factors such as applications, traffic type, network conditions, subscriber profile, and operator’s preference.
Virtualization, initially used as the basis for moving to the centralized cloud, is even more foundational in enabling hybrid models, because it gives service providers the flexibility to choose location, hardware and software independently to optimize end-to-end network performance and QoE. Both operators and vendors agree that we need to keep a healthy balance between what remains centralized and what gets distributed to the edge. The same applies to the RAN: in some places a centralized (C-RAN or vRAN) approach makes sense; in others the traditional distributed model works just fine.
In this perspective, virtualization, MEC and 5G, in different but complementary
ways, free both fixed and mobile networks from the constraints of a
centralized architecture and topology. The new networks can adapt to and
accommodate new applications and functions, and can optimize their
performance. They replace legacy networks in which applications and
functions had to painstakingly overlay an existing, rigid architecture.
In this report, we explore the evolution of the edge’s role in fixed and mobile networks and how it may impact network optimization, value-chain roles and relationships, business models, usage models and, ultimately, the subscriber experience.
Terminology: MEC or edge computing?
In this report, we use the term “edge computing” to refer to processing, storage and network optimization at the edge of both fixed and mobile networks that is independent (and agnostic) of the access technology. Specific implementations of edge computing can, however, play a role in optimizing the utilization of access resources. MEC is an example of that.
Mobile edge computing refers specifically to mobile networks. Because
networks increasingly include fixed and mobile components and they integrate
them more tightly than in the past, the distinction between edge computing and
mobile edge computing is narrowing. It may disappear altogether with the
convergence of fixed and mobile networks.
MEC is a specific approach to edge computing that is primarily intended for
mobile operators, or, more generally, service providers that have a core network
on which MEC can be overlaid.
“Edge” is the term that resists a simple definition. In the context of edge computing, the edge could be in the RAN or on the customer’s premises, or it could be an aggregation point in a more centralized location. As networks evolve and become virtualized, the opposition of central core and edge is likely to disappear, to be replaced by multiple locations that may be appropriate to host a given application or function.
Do we still need edge computing in virtualized 5G networks?
How does edge computing, and specifically MEC, relate to the other main technological innovations in wireless, namely virtualization and 5G? Strictly speaking, MEC does not depend on them, and MEC deployments can start ahead of 5G or full virtualization. However, they share a common direction toward networks that are more flexible and less homogeneous, driven by the need to increase network efficiency and capacity while containing cost.
As they all move toward the same goals in parallel, they enable and reinforce each other, because they work within different domains in the end-to-end mobile network. 5G, C-RAN and vRAN improve performance mostly through an evolution of the RAN (e.g., spectrum utilization, wireless interface, architecture). NFV and SDN work at the function level to optimize processing within the core. MEC is more narrowly focused, enabling operators to manage applications and end-to-end traffic at the application level.
MEC, 5G and virtualization are not alternative solutions among which operators
will choose. We will still need edge computing when 5G arrives. Edge computing
lowers latency ahead of 5G, but when 5G arrives, it will need edge computing to
lower latency further and meet the 5G requirements.
Similarly, RAN virtualization facilitates the rollout of edge computing, because
the centralized BBU location where the baseband processing is concentrated
provides a good integration point where some edge computing functionality can
be located. In a small-cell C-RAN deployment in a retail center, for instance, the
BBU location can also host the MEC server that manages location-based
applications, which may be available to visitors both over the cellular network(s)
and Wi-Fi networks. By combining C-RAN or vRAN deployments with MEC
deployments, operators can also improve the business case, because the
incremental cost of adding MEC to a new deployment is substantially lower than
that of rolling out MEC over the existing infrastructure, as Mansoor Hanif at BT
notes in the interview in this report.
What is MEC? No longer mobile edge computing!
Among the multiple initiatives to enable or facilitate edge computing and, more
specifically, mobile edge computing, ETSI MEC occupies a central role, because it
provides a framework to shift processing, storage and control to the edge that is
integrated within existing fixed and mobile networks. MEC was created to
address mobile operators’ need to move the processing and storage of some
services and applications to the edge, and to optimize mobile network
performance and resource utilization in real time.
MEC standardization work started in 2014 at ETSI, with a seminal white paper
authored by Huawei, IBM, Intel, Nokia, NTT DOCOMO, and Vodafone. The list of
active participants has since grown to include more vendors and operators. The focus initially was on mobile networks, but it is now access-technology agnostic and has been extended to fixed networks, to reflect the tighter integration of mobile and fixed networks, which often host the same services and applications.
The MEC acronym no longer refers to “Mobile Edge Computing” and instead stands for “Multi-access Edge Computing.” Although the change was prompted by ETSI’s internal process requirements, it was a welcome change, as it expands the reach and potential of MEC in today’s rapidly evolving wireless networks, which include a more varied set of access technologies and spectrum bands.
The extension to non-cellular technologies means that Wi-Fi is now included
within MEC’s scope. This is a welcome addition that reflects the fact that Wi-Fi in most markets accounts for the majority of traffic to mobile devices.
The report will delve into these, but three points are worth emphasizing:

• The main driver of MEC adoption is QoE. With MEC, operators move hardware that traditionally had been located in centralized data centers toward the edge and, more importantly, closer to the users. The two primary advantages of this are a reduction in latency and more efficient utilization of the available capacity, both of which improve QoE.

• Moving content to the edge can have multiple advantages (e.g., lower latency, location-aware services, flexible service creation, security, and reduction in backhaul traffic) but only if the right content is stored locally and the edge location is appropriately chosen. If these conditions are not met, MEC may increase costs and complexity, at a price that is too high to justify the enhancement in QoE.

• MEC emerged within a growing ecosystem that is converging toward a new approach to network design and operations, which we refer to as “pervasive networks,” as opposed to the legacy “atomic networks.” In this new environment, networks are virtualized, use open source software, and rely on APIs for application development. Technological advances are driven by multiple open initiatives, operators and vendors, reducing the impact of proprietary solutions and even of established standards bodies such as 3GPP. On the hardware side, MEC and edge computing introduce the need for modular solutions that support multiple form factors to enable deployments in a more varied set of environments.
Edge computing?s appeal comes from a growing realization that centralized
topologies are not sufficient to serve current and forecasted traffic loads with
the QoE that both operators and subscribers expect.
The fundamental driver for edge computing is the continued growth in subscriber traffic and, especially, in real-time traffic such as video and interactive applications like games. In the future, augmented/virtual reality traffic will add pressure on operators to continue to increase capacity and, just as importantly, reduce latency. High latency, even in a high-capacity network, will cripple QoE. For subscribers, limited capacity and high latency may have the same effect and look indistinguishable. For an operator, adding capacity without lowering latency can be an expensive mistake.
Even with 5G?s promised reduction in latency, edge computing will be useful in
reducing the latency introduced by the backhaul. While backhaul technologies
add different levels of latency, they all inevitably contribute to it as a function
of physical distance.
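As a rough illustration of why distance dominates, the propagation component of backhaul latency can be approximated from the path length alone. The sketch below uses generic physics constants, not figures from this report:

```python
# Rough sketch: one-way propagation delay over fiber scales with distance,
# independent of link capacity. Numbers are generic approximations.

SPEED_OF_LIGHT_KM_PER_MS = 300.0   # vacuum, km per millisecond
FIBER_FACTOR = 0.67                # light travels at roughly 2/3 c in fiber

def one_way_fiber_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over fiber, in milliseconds."""
    return distance_km / (SPEED_OF_LIGHT_KM_PER_MS * FIBER_FACTOR)

for km in (10, 100, 1000):
    rtt = 2 * one_way_fiber_delay_ms(km)
    print(f"{km:>5} km backhaul -> ~{rtt:.2f} ms round trip")
```

Even before adding queuing and processing delays, a backhaul path of a few hundred kilometers consumes milliseconds of the latency budget, which is exactly the component edge computing removes.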
The concurrent growth in traffic load (increasing capacity requirements) and in real-time traffic (increasing latency requirements) drives the need for processing, storage, and control for selected applications to be moved to the edge.
At the same time, operators face a challenging situation because they have to
meet high QoE expectations in a cost-effective way, in an environment where
subscriber revenues are flat in most markets. This means that they need to
improve performance within the current spending levels. To do this, they have
to maximize the utilization of existing resources before they launch into a RAN
expansion. MEC and edge computing can play a major role in doing this, because local control and processing enable a finer-grained, real-time optimization of RAN transmission.
Growth in video traffic increases importance of latency
Video traffic keeps on growing, and at a faster pace than other types of traffic.
According to Cisco’s VNI, mobile video accounts for 60% of traffic today, and that number will likely be 78% by 2021. Ericsson’s estimates are in the same range: 50% in 2016, growing to 75% in 2022. The 2016–2022 CAGR for video is 50%, compared to 23% for web browsing and 39% for social networking.
The growth in video traffic is hardly surprising, given the availability of more video content and its higher quality. Also, video is no longer confined to video apps: it has become an integral component of social networking and communication apps. In the process, real-time video calls have finally gained the social acceptance that, for a long time, was missing and slowed down the adoption of video communication as an alternative (or complement) to voice or text.
The increased use of video in its multiple forms (e.g., downloaded, streamed, uploaded, interactive) reinforces the role of latency in determining overall subscriber experience (and the attendant churn rates). For this reason, video has, since the beginning, been one of the main drivers for edge computing. Not only is video traffic growing, it is also concentrated in specific locations and at specific times, which provides perfect use cases to justify moving processing and storage to the edge.
Edge computing cannot lower the latency in the RAN, but it can reduce the end-
to-end latency: video can be cached at the edge, or the processing (e.g., with
ABR) that optimizes video traffic can be done at the edge. Both approaches can
be combined to reduce the traffic between the centralized core and the edge.
Video is an instance, however, in which it is crucial to pick the edge’s location within the network carefully to avoid unnecessary investment and complexity.
For instance, caching at the edge works because subscribers at a given location
and time tend to watch a remarkably consistent and narrow set of content. But
if the MEC server is co-located with a small cell, there are too few subscribers
within the cell footprint to justify caching, even if the cost of storage were not an
issue. At the same time, the more remote the integration point is, the lower the
impact on latency and hence on QoE.
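The caching behavior described above can be sketched as a simple LRU cache at the edge host. The class below is an illustrative toy, not an ETSI-defined component, and `origin_fetch` stands in for a hypothetical request back to the centralized core:

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU content cache sketch for an edge node (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store: "OrderedDict[str, bytes]" = OrderedDict()
        self.hits = 0
        self.misses = 0

    def fetch(self, content_id: str, origin_fetch) -> bytes:
        if content_id in self._store:
            self._store.move_to_end(content_id)   # mark as recently used
            self.hits += 1
            return self._store[content_id]
        self.misses += 1
        data = origin_fetch(content_id)           # fall back to the central core
        self._store[content_id] = data
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)       # evict least recently used
        return data
```

In practice the hit ratio, and therefore the backhaul savings, depends on how narrow and consistent the locally watched content set is, which is why a cache serving a single small cell rarely pays off while one at an aggregation point can.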
We want it all, we want it now, we want it here: immediacy and time/location awareness
The use cases for edge computing go beyond the need for low latency in video and real-time traffic. Immediacy is another performance yardstick that subscribers care about and expect in all applications, not only real-time ones.
Immediacy can be quantified as the time it takes for an application or service to
launch, or for content to appear on the device or register as being sent.
Edge computing can bolster immediacy, because content no longer needs to be
downloaded from a remote data center, or to be sent to the data center for
processing and then back to the subscriber. Managing applications and traffic
types in real time at the edge enables operators to control the effects of
congestion on time-to-content and to provide the desired level of immediacy.
In enhancing QoE, the relevance of an application or content is complementary to immediacy, and relevance is often tied to time of day and the subscriber’s location. Video content popularity peaks and declines very quickly. An offer for a
discount on a restaurant meal works best if you are near the restaurant and
hungry. Mobile operators already provide time- and location-aware content, but
edge computing can make that content more closely tied to the location and
hence more relevant to subscribers. In addition, time- and location-aware
content appeals to content providers and venue owners that may be the
source of the content and applications hosted on the edge servers.
Stadiums are a good example of a location with high traffic loads that are
mostly location/time dependent and have a high perceived value to the
spectators. Videos from the game, content associated with teams, or ads
for services available at the stadium generate a massive amount of content
? and to a large extent, it is also consumed locally. Storing and processing
that locally, at the network edge, will improve QoE for those attending the
game. And by offloading that work from the rest of the network, it will
improve QoE for other subscribers, too.
MEC servers in stadiums may be used to provide location-based services and content not only through the network of a specific operator, but also across operators and the local, possibly stadium-operated, Wi-Fi network. Initially these services may include location-based retail and advertising, services for the public, event-specific applications, surveillance, and stadium operations and services. Farther in the future, they may expand to AR/VR services and IoT applications.

2017 Super Bowl: AT&T stats
• $40 million investment in network expansion
• 9.8 terabytes of data during the game, equivalent to 28
• 88% increase in traffic over last year
• 148% up from average pro football game traffic
• 59.9 terabytes in the Houston area over the weekend
From MEC proofs of concept to use cases
If you have heard of only one use case for MEC, it almost certainly involves video. However, the scope of edge computing is much wider. It is driven by the diverse needs of multiple stakeholders:

• Keep latency low for real-time traffic: video, but also voice for enterprise users
• Offer applications and content that target visitors in a venue and may be finely tuned to their location (e.g., proximity to a store) or timing (e.g., off-peak hours)
• Keep local content within the premises; avoid sending content that may be generated and accessed locally, or that is location-specific (e.g., enterprise data), to the centralized core and back to the RAN to be delivered to the subscriber
• Share local content among available wireless networks, including Wi-Fi and multiple mobile service providers
• Provide an extra layer of security by keeping local content within the premises
• Optimize content delivery based on real-time RAN conditions, as well as other factors, such as subscriber devices, content type, and application requirements

ETSI MEC PoCs show areas of early interest from participating operators and vendors. As MEC gets deployed, the list of use cases will grow. The table on the next page lists some that have attracted attention to date.
MEC stakeholders: who gains what?
• Mobile operators and other service providers: provide better QoE, better resource utilization, offload from the centralized core
• Venue owners: offer venue-based services to visitors, enrich their experience and encourage them to share it
• Enterprise, IoT: develop and support enterprise-specific applications and services, have fast and secure access to enterprise data and applications over multiple networks, and support IoT applications within the enterprise
• Content providers, OTTs, application developers: optimize content delivery and application access in real time, adapting content to RAN conditions and subscriber requirements
• Subscribers: better access to applications and content, increased immediacy and relevance of the wireless connection, leading to a more satisfying experience
Edge computing requires more effort and investment than adding a server at
the selected edge location. Many moving parts have to come together to
enable the stakeholders ? service providers, venue owners,
applications/content providers, enterprises, subscribers ? to fully harness the
benefits of shifting functionality to the edge.
Correspondingly, there are multiple, complementary initiatives converging to
create an ecosystem that will support distributed network models end to end.
Each stakeholder will likely find only a few of the initiatives relevant, and may
feel overwhelmed by the apparent competition among them. For vendors, the
best bet is often to be active in multiple initiatives to prepare themselves to
participate successfully in the nascent ecosystem.
Among the factors that converge in the creation of the edge-computing ecosystem are these:

• Edge hardware. It may have to be installed outdoors or in locations with space, security or environmental constraints that differ from those of a data center or central office.
• Mobile devices. New device types will populate the network to support IoT applications, and many of them are likely to benefit from edge computing.
• Services and applications. These may be targeted directly at subscribers, enterprise workers, or visitors to a venue, or may be used for IoT; they may be managed by different entities: mobile operators, other service providers, venue owners, enterprises, OTTs or content owners.
• Integration in the end-to-end network. Edge functionality has to be tightly integrated with the RAN and the centralized core. Managing potentially dynamic edge locations (i.e., the edge location changes in real time depending on network conditions) requires orchestration capabilities in the core network.
• Integration across networks. The operators of multiple networks, possibly owned and managed by different parties, may want to share edge processing and storage resources in the same location.
• Application developers. They need to optimize their apps or develop new ones to work in an edge environment.
We have talked so far about moving processing, storage and control to the edge, as if it were clear what we mean by the edge. But it is far from obvious where the edge is, or, more accurately, where potential edge locations are, and which one or ones a service provider should select.

This is a crucial question, perhaps the most important one, to ensure that edge functionality brings both performance and financial benefits to the service provider and the other edge stakeholders. If the edge location is too far out (too close to the subscriber), edge computing may become overly expensive and complex. If the edge location is too close to the centralized core, the benefits of edge computing dissipate, with a more complex network topology but no significant improvement in performance.
And is there a single edge? Not only may different service providers pick
different edge locations for their networks or specific locations in their
networks; it may also make sense to have multiple edges in a given location,
depending on the applications.
Location-based content and applications are most likely to be hosted in an
aggregation point that reaches all the infrastructure that covers the venue. An
enterprise deployment may be housed in a location that covers all the
enterprise?s buildings or just a subset of them.
For applications that require video caching, service providers have more
flexibility in choosing the edge location. They may want to see what their
subscriber usage patterns are, and pick an edge location where they can
maximize the caching contribution.
And is the edge a fixed location? It does not have to be, although initially it is likely to be. For many applications (e.g., location-based and enterprise applications) an edge location that does not change over time may be desirable. But for locations with highly fluctuating network loads or for applications with uneven temporal and spatial distribution, a moving edge that shifts depending on real-time network conditions is possible in a virtualized environment and can maximize the cost/performance benefits of edge computing.
To address the issue of where the edge is or could be, it is useful to review the
MEC architecture proposed by ETSI. Other edge computing initiatives rely on the
MEC architecture or use their own edge server. Traffic to and from UEs that
involves applications, services or content hosted in the edge server is directed to
the server; the rest of the traffic is routed to the centralized core as usual.
The MEC server or edge host uses a virtualized platform to host applications. It can interface with cellular (3GPP) networks, as well as other available networks, including Wi-Fi networks. Application developers have access to APIs to get their applications hosted in the edge server. The MEC server provides the processing and storage capability to support the hosted apps. Storage may be used for caching frequently accessed content, or for local breakout (to keep local content within the MEC footprint) and avoid using backhaul resources to transmit the content back and forth. Local processing enables applications to optimize their performance.
Another important and innovative element in the MEC server is the addition of user- and network-information services. These provide the foundation for optimizing end-to-end network performance.
MEC services for network optimization

Radio network information service (RNIS)
• Up-to-date radio network conditions
• Measurements and statistical information related to the user plane
• Information about the UEs served by the radio node(s) associated with the host (e.g., UE context and radio access bearers)
• Changes in UE information

Location information service
• Location information: cell ID, geolocation, etc.
• Location of specific or all UEs served by the radio nodes associated with the ME host
• Location of a category of UEs (optional)
• Location of all radio nodes associated with the ME host

Bandwidth manager service
• Allocation of bandwidth to ME applications
• Prioritization of certain traffic routed to ME applications
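To illustrate how an edge application might consume such services, the sketch below picks an ABR video bitrate from radio information, in the spirit of the RNIS. The field names, bitrate ladder and safety margin are illustrative assumptions, not the actual ETSI MEC API:

```python
# Hypothetical sketch: an edge-hosted video optimizer uses radio network
# information (cell load, estimated per-UE throughput) to choose an ABR
# rendition. Thresholds and field names are made up for illustration.

LADDER_KBPS = [400, 1200, 2500, 5000]   # available ABR renditions (assumed)

def pick_bitrate(cell_load: float, ue_throughput_kbps: float) -> int:
    """Choose the highest rendition that fits current radio conditions.

    cell_load: fraction of cell capacity in use, 0.0-1.0
    ue_throughput_kbps: estimated achievable throughput for this UE
    """
    headroom = ue_throughput_kbps * (1.0 - cell_load) * 0.8   # 20% safety margin
    candidates = [r for r in LADDER_KBPS if r <= headroom]
    return candidates[-1] if candidates else LADDER_KBPS[0]

print(pick_bitrate(cell_load=0.3, ue_throughput_kbps=8000))    # lightly loaded cell
print(pick_bitrate(cell_load=0.95, ue_throughput_kbps=8000))   # congested cell
```

The point of the sketch is the feedback loop: because the RNIS exposes radio conditions at the edge, the adaptation decision can react in real time instead of waiting for end-to-end congestion signals to reach a distant data center.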
The MEC server is the bridge between what happens in the RAN and the UE and
what happens with the applications and content. It enables network operators,
application and content providers, and others that may play a role in serving
subscribers to manage traffic in real time, on the basis of factors such as application/content requirements, network conditions, and policy, in order to optimize utilization of the available network resources. Network operators can use this
information to manage core and RAN resource allocation, but they can also
share this information with content owners, venue owners and application
providers to coordinate traffic management with them.
Where is the edge? Location tradeoffs
If cost, complexity and space were not constraining factors, moving processing
and storage to the far edge would generally be the best way to improve QoE.
But they do matter, and so the choice of edge location does not rest solely on
performance, but on the evaluation of tradeoffs among multiple factors. The
criteria for such evaluation will vary among operators, but there are some high-
level considerations that apply across networks.
As the chosen edge location gets closer to the UE:

• The latency gets lower, and this improves performance for applications that are sensitive to it, such as video calls, voice or gaming, leading to a higher QoE.
• Hardware has less storage capacity, limiting the amount of content that can be stored at the edge.
• Processing power becomes more limited, and hence it may not be efficient to run some applications from the edge.
• End-to-end network complexity increases, because network operators have to deploy, integrate and manage more hardware, at a higher number of locations.
• Resources may be needlessly duplicated if applications could be efficiently run from a centralized location.

The opposite holds when the chosen edge location moves toward the centralized cloud (i.e., higher latency, more storage capacity, more processing power, less complexity, more efficient use of resources).
Another key consideration is the footprint (it gets smaller as the edge host moves away from the centralized cloud), because this is the area over which RAN optimization and mobility management are scoped. When the edge host covers a small area, edge traffic optimization is confined to this area. So, two MEC servers can both optimize their respective footprints, but as two separate zones. If a single MEC server covers both footprints, optimization can be coordinated across the entire area. Coordination across a wider area may make optimization more effective, but at the same time the greater distance to the RAN may limit the granularity of the optimization capabilities.
The footprint of the edge host is also determined by the type of support for mobile access in edge-based applications and services. Within the footprint, subscribers’
experience is preserved as they move from one cell to the next within the same
network. If the edge host covers multiple networks, application access can be
preserved across networks. As the subscriber moves away from the footprint to
an area that has no edge host or a different one, access to the application has to
be managed to ensure a smooth transition. When the two footprints do not
have the same edge capabilities, the subscriber may experience a discontinuity
in the quality of the access to the edge application. Obviously, if the application
is supported only at the edge (e.g., an enterprise application or a location-based
application in a mall), subscribers lose connectivity to the application as they
move out of the footprint. In many cases this is a desired outcome (e.g., the
enterprise may want its services available only within its campus), but it is
something that has to be kept in mind when selecting the edge host location and
how to manage mobility across footprints.
Factors that play a role in the selection of the location of the edge server include:
- RAN resources. The edge server capabilities must be sufficient to serve the
covered footprint. If they exceed the RAN capabilities, the investment in
edge computing is wasteful.
- Backhaul resources. Edge computing may address capacity and/or latency
limitations in the backhaul and prevent the backhaul from becoming a
bottleneck.
- Applications. Latency, processing and storage requirements that affect edge
location vary across applications, so the ideal edge server location varies by
application.
- Subscriber/client expectations, and venue-owner preferences and
requirements. The expectation for service performance and QoE may be
different for enterprise employees, mall visitors, or IoT sensors.
- Operator policy and preferences. Operators may want to position
themselves in the market in a specific way (e.g., provide a higher-quality
service for a specific enterprise client or venue owner).
- Content provider preference. If the content provider pays for the edge
infrastructure, it will want to choose the location of edge hosts, because this
will allow the provider to maximize its return on investment.
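As a toy illustration (not from the report; candidate locations, factor names and weights are all hypothetical), the tradeoffs among these factors can be framed as a weighted scoring exercise over candidate edge locations:

```python
# Hypothetical weighted-score model for choosing an edge host location.
# The candidates, per-factor scores (0..1, higher is better) and weights
# are illustrative only, not from any standard or from the report.

CANDIDATES = {
    "vCPE / home gateway": {"latency": 0.9, "ran_fit": 0.4, "backhaul_relief": 0.9, "cost": 0.3},
    "C-RAN BBU pool":      {"latency": 0.7, "ran_fit": 0.9, "backhaul_relief": 0.7, "cost": 0.6},
    "Aggregation point":   {"latency": 0.5, "ran_fit": 0.7, "backhaul_relief": 0.5, "cost": 0.8},
}

def rank_locations(weights):
    """Return candidate locations sorted by weighted score, best first."""
    def score(factors):
        return sum(weights[f] * v for f, v in factors.items())
    return sorted(CANDIDATES.items(), key=lambda kv: score(kv[1]), reverse=True)

# An operator prioritizing latency (e.g., for an AR application)
# weights the factors differently than one prioritizing cost:
latency_first = {"latency": 0.6, "ran_fit": 0.15, "backhaul_relief": 0.15, "cost": 0.1}
best, _ = rank_locations(latency_first)[0]
print(best)  # the location closest to the user wins under these weights
```

The same ranking function with cost-heavy weights would favor a deeper aggregation point, which mirrors the report's point that there is no single right answer, only tradeoffs.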
In addition, real-time optimization enables operators to shift the location of the
edge dynamically, depending on factors such as demand for an application, RAN
conditions, concentration of subscribers or power considerations. A virtualized
platform will make it possible to instantiate applications at different edge
locations as operators expand their edge computing and virtualization
deployments.
Where is the edge? Location options:
- vCPE, home/office GW, HeNB
- Wi-Fi access points
- C-RAN BBU pool
- Other aggregation points
Optimizing end-to-end network resource utilization from the edge
Improving performance is an obvious goal for mobile operators. But when it
comes to ensuring cost effectiveness and profitability, network resource
utilization is the goal that should take center stage. It is a measure of how much
value operators can squeeze out of the network infrastructure they have,
which in turn is a measure of subscribers' happiness with the service.
In today's networks, the prevailing approach is to maximize performance, given
the financial resources available. This typically means increasing RAN capacity
using a brute-force approach, i.e., deploying the latest technological tools that
increase throughput. But far less effort is put into optimizing the use of the
capacity available. It is like buying a fast car without having roads that allow you
to drive fast.
There are many ways to optimize network resource utilization and, under
intense pressure to improve performance without increasing costs, mobile
operators have started to work toward this optimization goal. MEC and edge
computing in general are geared to achieving exactly that, by changing where
processing, storage and control reside in the network.
The impact on QoE from moving processing and storage to the edge is easy to
grasp, even though it is not trivial to quantify over a network because it depends
on multiple environmental factors that are variable. Other things being equal,
though, moving processing and storage to the edge improves latency,
immediacy and QoE.
The new control features at the edge introduce a new type of optimization, one
that works in real time, leveraging information about network conditions to
optimize end-to-end network performance instead of optimizing the
performance of individual network elements. Edge computing is not required for
this type of optimization (it can be implemented in current 4G networks), but
MEC servers are well suited to gathering information from the RAN, processing it
and forwarding the results to the centralized core or to content or application
providers. ETSI specifications define services (see table above) that collect
information that can be used in multiple ways to optimize the utilization of
network resources and QoE.
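As a hedged sketch of this pattern (the field names and thresholds below are invented for illustration and are not the ETSI MEC service APIs), an edge analytics function might aggregate per-cell radio reports into a coarse load indicator that is forwarded to the core or to content providers:

```python
# Illustrative edge analytics step: turn recent per-cell radio reports into
# a coarse congestion label. The "prb_utilization" field and the thresholds
# are assumptions for this sketch, not part of any ETSI specification.

from statistics import mean

def cell_load_indicator(reports, high=0.85, low=0.5):
    """Map recent PRB-utilization samples (0..1) to a coarse load label."""
    util = mean(r["prb_utilization"] for r in reports)
    if util >= high:
        return "congested"
    if util <= low:
        return "underutilized"
    return "normal"

# A busy cell, as seen from three recent samples:
samples = [{"prb_utilization": u} for u in (0.9, 0.88, 0.95)]
print(cell_load_indicator(samples))  # "congested"
```

A real deployment would consume the standardized radio network information service instead of raw samples; the point here is only that the aggregation happens at the edge, close to where the measurements originate.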
Some optimization is best done, either remotely or at the edge, by the entity
that controls the applications and content (the mobile operator, an OTT or a
content owner), because that entity has direct control and better knowledge of
content and applications, as well as better access to them. And it often wants to
retain some degree of control to ensure it can create the subscriber experience
it aims for.
An example is that, increasingly, the content served to subscribers is encrypted;
operators do not know the content type, much less have the flexibility to
optimize it, but the content owners do.
Throughput guidance is an optimization tool that is being developed to address
the mobile operators' need to adapt content to real-time RAN conditions and to
be able to do so in collaboration with third parties. It uses data about network
conditions, especially RAN conditions and RAN load, to generate advice for
content and application providers on how to manage traffic exchanged with the
subscriber. When the network has sufficient capacity, the providers can share
content at the highest quality available. When the network is capacity
constrained or congested, the content and TCP transmission can be adapted to
provide subscribers the best experience possible given the real-time availability
of network resources.
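A minimal sketch of how a content server might act on such advice, assuming the guidance arrives as a plain throughput estimate in kilobits per second (the actual throughput-guidance encoding and delivery mechanism are not specified here):

```python
# Illustrative adaptive-bitrate selection driven by a throughput hint.
# The hint format, the bitrate ladder and the 0.8 safety margin are
# assumptions for this sketch, not part of any throughput-guidance spec.

LADDER_KBPS = [350, 750, 1500, 3000, 6000]  # available video encodings

def pick_bitrate(guidance_kbps, safety=0.8):
    """Choose the highest encoding that fits within the advised throughput."""
    budget = guidance_kbps * safety  # leave headroom below the hint
    fitting = [rung for rung in LADDER_KBPS if rung <= budget]
    return fitting[-1] if fitting else LADDER_KBPS[0]

print(pick_bitrate(8000))  # uncongested cell: top of the ladder, 6000 kbps
print(pick_bitrate(1200))  # congested cell: falls back to 750 kbps
```

This captures the trade described in the text: when the network has spare capacity the provider serves the highest quality, and when the RAN is congested it degrades gracefully instead of stalling.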
Tools like throughput guidance allow operators to tell third parties what they
can do to optimize content delivery, without the operators having to access it
directly. For it to work, however, operators and third parties have to tightly
coordinate network optimization. In some cases, this will require getting past a
history of tension and competitiveness between mobile operators and
content/application providers. The situation is rapidly changing, however, as
both camps realize they need to work together to provide outstanding QoE. And
both sides might benefit further if throughput guidance and other optimization
tools are the catalysts that facilitate tighter relationships among them.
Throughput guidance: results from Google and Nokia trial

Network metric                            Improvement
TCP retransmissions                       30-45%
TCP round-trip time                       55-70%
Mean client throughput                    20-35%
TCP packet loss                           35-50%
Click-to-play time reduction              5-20%
Average video resolution improvement      5-20%
Video format change frequency reduction   10-25%

Source: Nokia, Google
Mobile operators will continue to have a leading role in planning and funding
MEC deployments, because they are integral parts of their network
infrastructure. Other edge computing hosts, partially or fully independent of
mobile operators, are likely to be deployed, paid for, and operated by the
enterprise or venue owner, even if some of the applications supported can be
hosted by the mobile network.
Even in the MEC space, however, new business models may arise that have a
more direct and active role for venue owners and the enterprise, on the
periphery side, and for content/application providers, on the cloud side. They
both stand to benefit directly from the MEC infrastructure, and, in some
cases, more than mobile operators do.
For example, a MEC server that supports industrial IoT applications in a
warehouse may be more valuable to the enterprise than to the mobile
operator. The enterprise may see in it a compelling business case, while a
mobile operator might struggle to see a positive ROI or might not be able to
assess the revenue potential because it is dependent on enterprise-specific
applications that it is not familiar with.
Similarly, a content or application provider may be willing to locate some of
the infrastructure it needs at the edge of the network when that is more
effective, and potentially more cost effective, than a remote cloud location.
And it is not only companies like Google or Facebook that may be interested in
having a presence at the edge.
Smaller companies may also be willing to locate processing and storage
functionality at the edge in a virtualized environment, where they do not need
to own a host server but might pay only for the services they need. In this
model, the mobile operator may deploy and pay for the initial edge hardware,
but then it can monetize the investment by renting access to it to a third party.
An arrangement of this type can be mutually beneficial, especially when
accompanied by joint efforts to optimize network performance ? for instance,
with tools like throughput guidance.
Edge computing stakeholder: why pay for edge computing?

Mobile operators
- Better QoE. Churn reduction, lower customer support costs.
- Increase in network resource utilization. More value extracted from existing infrastructure, better end-to-end
network TCO, reduced need for capacity expansion.
- Location-based services. Monetization of services to subscribers (possibly), to the enterprise, to public venues, and
for IoT services.
- Offload of the centralized core. Cost-effective improvement of both QoE and resource utilization. When the centralized
core requires additional processing/storage capacity, operators may decide to deploy it surgically at the edge.

Other service providers, including IoT service providers
- Location-based services. Service providers, such as DAS neutral hosts, cable operators, wholesale service providers
or MVNOs, may be willing to install their own edge infrastructure to provide services specifically targeted to a
location. They might use a local access network they own or manage, or lease capacity from the local network
operators. They may monetize such services to venue owners or other parties with a presence in a venue (e.g.,
retail) or with IoT applications.

Public venue owners
- Location-based services. Offered as an amenity to guests (e.g., stadium, hospital) or as a service (e.g., city, college);
to advertise to visitors; to support needs of tenants (e.g., stores in a mall); to support their own operations.
Services can be made available through Wi-Fi or MulteFire venue-owned networks, DAS networks or small cells.

Enterprise
- Local breakout. Keep enterprise data and applications local, provide enterprise voice services.
- Security. Keep enterprise data and traffic local to the enterprise.
- Enterprise services. Develop and possibly manage enterprise-based services and applications. Services can be made
available through Wi-Fi or MulteFire venue-owned networks, DAS networks or small cells.

Content and application providers
- Shift of processing and storage to the edge. Improve QoE, better control the delivery of services, coordinate real-time
traffic optimization with operators. Content and application providers may own their edge infrastructure, but
leasing resources from the operator's virtualized edge servers is an approach more likely to be accepted by both
operators and third-party providers.

Residential and small businesses
- Home/small business gateway. Residential and small-business customers may invest in an edge host that supports
services and hosts content used by the people within the premises and shared over the Wi-Fi and cellular
networks. Service providers may subsidize the edge host as a subscriber-retention feature.
In many ways, edge computing is nothing new. There have been edge
computing solutions all along to serve niche markets or to address specific
performance and optimization challenges in mobile networks. What is
different today is that network virtualization offers a framework to expand
edge computing capabilities, adding scalability, reliability, flexibility and cost
effectiveness. This will take edge computing to the mainstream and enable
operators to reap benefits in terms of improved QoE and resource utilization.
The ETSI MEC standardization work creates the foundation for edge computing
deployments in mobile and, increasingly, fixed networks. During the first term
(2015-2017), ETSI ISG completed the groundwork, released the basic
specifications, and encouraged the creation of the ecosystem. More work is
needed during the second term (2017-2018), not only to expand beyond
mobile networks, but also to strengthen the links with other edge computing
initiatives while avoiding the risk of fragmentation of efforts.
Beyond standardization and industry collaborative initiatives, there is a need
to explore different business and deployment models, and revisit the role that
stakeholders (e.g., venue owners, enterprises, content and application
providers) will have in deploying, managing, and funding edge computing.
The business case also needs to be assessed to understand where and when
edge computing provides a better return than the centralized cloud. To assess
the business case for edge computing we need to go beyond the standard ROI
model. Improvements in QoE or resource utilization are highly valuable, but
notoriously difficult to quantify, because they involve end-to-end network
improvements. A traditional financial model that looks at a solution delivering
a well-contained benefit is inadequate for edge computing, as it does
not adequately capture the costs edge computing requires or the value it brings.
The time to commercialization can be short, as edge computing can be
introduced without waiting for full network virtualization or 5G. In practice,
however, it will take a couple of years before commercial launches, as vendors
and operators complete their trials to learn the most efficient way to
balance centralized versus edge processing and storage.
ETSI MEC second term objectives
- Support non-3GPP access technologies (Wi-Fi and fixed)
- Extend the virtualization support types, to render the
environment as attractive as possible for third-party players
- Study possible charging models which may be applicable to MEC
- Fill gaps relating to lawful interception
- Develop testing specifications and test methodologies
- Coordinate plug fests
- Coordinate experimentation and showcasing of MEC solutions
- Expedite the development of innovative applications
- Ensure a low entry barrier for application developers
- Disseminate the results of the work
- Strengthen the collaboration with other organizations
- Study new use cases
- Enable MEC deployments in NFV environments
- Mobile edge computing takes us beyond the centralized cloud, to hybrid
virtualized models that combine centralized and distributed processing,
storage and control.
- Operators can leverage network flexibility to find the best edge location
to maximize QoE and optimize network resource utilization, the main
drivers for edge computing.
- MEC is not only for mobile networks. Fixed networks, including Wi-Fi,
can use the MEC framework and share it with mobile networks.
- There are multiple network edge locations where it makes sense to
deploy MEC servers or other edge hosts. Evaluating the tradeoffs these
locations offer is crucial for successful edge computing deployments.
- New business models will accelerate the move to the edge, with an
increased role for venue owners, enterprises, and application and
content providers.
- Application and content optimization at the edge encourages tighter
cooperation between mobile operators and application and content
providers.
II. Vendor profiles and interviews
Since 1995, ADLINK has provided hardware and
software solutions for a variety of indoor and
outdoor environments in multiple form factors.
The core of ADLINK's business is serving the
market for industrial and telecommunications
applications, which is becoming increasingly
interconnected with the internet of things.
ADLINK provides embedded solutions ? such as
network appliances, hardened outdoor products,
vCPEs, and industrial IoT equipment ? ranging
from modules to network appliances and servers.
The products address multiple markets, including
industrial, military, health, transportation, utilities
and telecommunications verticals. ADLINK also
provides measurement and automation products
for industrial systems, machine vision systems, and
automated test and measurement equipment. A
third market area includes smart displays, and
fixed and mobile computing platforms for the
same verticals, with a focus on operation in harsh
environments.
Edge computing is a good fit within the scope of
ADLINK's business. ADLINK provides hardware
solutions that enable the deployment of functions
at the edge that support multiple-access edge
computing (MEC) and fog computing, comply with
industry standards, and encourage interoperability.
In collaboration with ecosystem partners, ADLINK
offers pre-validated software and hardware to
facilitate and accelerate edge computing
deployments.
In addition to its data center and central office
solutions, ADLINK offers the SETO-1000, an Intel
Xeon-based, ruggedized edge server for outdoor
use that targets the MEC and fog market. The
SETO-1000 uses a 19" chassis, has up to 96 GB of
RAM, and can support applications such as
augmented reality, video analytics and caching,
and distributed content and DNS caching.
ADLINK has recently launched the SETO-1000 as
part of one of the MEC architecture's three main
components:
- The MEC hosting infrastructure management
system: the SETO-1000, the virtualization layer
and the virtualization manager
- The MEC application platform management
system: traffic control, RAN information
services and communications services
- The application management system: the
MEC virtualized machine
For next-gen virtualized networks, ADLINK is
working on solutions that use ADLINK?s Modular
Industrial Cloud Architecture, in which hardware
and software are decomposed and are based on
open specifications. This architecture operates in
real time, because the confluence of virtualization
and IoT imposes a resources-on-demand model in
which hardware resources must be flexible
enough to meet competing needs from multiple
applications in real time. Achieving this, ADLINK
argues, will take more than modifications to the
current hardware; what is needed is a more
fundamental redesign of the underlying
architecture, in which computing, storage and I/O
are allocated to separate functional modules during
the design and then combined, as needed, in the
hardware unit, based on the specific requirements.
at the edge
A conversation with Jeff Sharpe,
Senior Product Manager, Network &
Communications, ADLINK Technology
Monica Paolini: Network virtualization adds
flexibility and scalability to networks, and enables
operators to use network resources more
efficiently. The ability to move network functions
to the edge is a crucial element in optimizing
network performance. The location of network
functions becomes more relevant in a virtualized
network. In this conversation, Jeff Sharpe, Senior
Product Manager, Network & Communications at
ADLINK Technology, shares his views on what is
required to move network functionality to the
edge and what the benefits are.
Jeff, let's get started with an introduction to your
role at ADLINK and how ADLINK entered the edge
computing market in telecoms.
Jeff Sharpe: I'm the Senior Product Manager of
one of our business units, called Network &
Communications. My primary focus is long-term
strategies for products. ADLINK has been around
for over 20 years. We're leaders in edge compute
technologies in multiple technology segments:
industrial IoT, network and communications,
telecom industries, even military.
We are a hardware provider, but we also supply
software services. We have a long history and a
growing ecosystem of partners, and we
can come to the table with our telecom friends.
Monica: MEC is part of ADLINK's identity. You have
been doing edge computing for a while.
Jeff: Absolutely. With our leadership in industrial
IoT and manufacturing, we have been making a lot
of the key edge components for years, and we're
now starting to see them in the telecom market.
Some of our existing partners, such as Intel,
Nokia, Saguna, Tieto, Wind River or PeerApp,
were early players. They participated in some of
the first deployments while driving the mobile
edge computing standards that started to develop
about two to three years ago. We're way ahead of
the curve, and very excited about this technology.
Monica: What has ADLINK done so far with its
partners?
Jeff: Saguna is a great partner of ours. They've
been around since day one of ETSI mobile
edge computing. We realized that for us to be
successful and for our customers to be successful,
we need a system that's not just a piece of
hardware you would put at the edge, but that also
has carrier-grade capabilities, that fits ETSI
standards and is based on open architectures for
the RAN, customer premises, outdoor cabinets or
the central office. Saguna is a great fit with that.
They have great software that we embed within
our systems.
Tieto, another one of our partners, does a lot of
work on virtual BTS, virtual radio access networks.
We also supply some of their software to service
providers.
Another one of our critical partners is Wind River
Technologies, who provide Intel-based
NFV/SDN/vCPE software called Wind River
Titanium Core and Titanium Edge that ADLINK
embeds within our systems.
This list continues to expand. MECSware is another
partner, based in Europe. It's a branch off of Nokia,
and we're collaborating on some virtual RAN
projects.
And finally, we collaborate with security
companies such as Trend Micro, Fortinet, and
Checkpoint, and vCPE providers such as netElastic,
Wind River, and CertusNet.
With our partners, we work on proofs of concept
with the service providers.
Monica: How is the involvement of mobile
operators worldwide progressing?
Jeff: What we're seeing is that a lot of the service
providers are in different stages of mobile edge
compute in different countries. We see Korea,
especially SKT, as being one of the leaders, especially
with 5G coming out. They want to use more and
more MEC technologies to speed up video delivery.
Europe seems to be at the forefront of MEC as
they're including it with their LTE rollout. Proofs of
concept started with a focus on network
optimization. A lot of the carriers were looking at
MEC and asking, "How can I optimize my network?
How can I save money? What's the total cost of
ownership savings that I can have in my network?"
Some of the earlier proofs of concept were about
video caching, augmented reality, and content
delivery to improve QoE while reducing impact on
the backhaul networks. "How can I do more at the
edge versus in my cloud?" It's starting to shift. We
started in Europe. Now in the US, we're starting to
see more proofs of concept centering on the
monetization of the network. How can the service
providers generate more money while improving
services, especially better quality of experience for
their end customers? The proofs of concept are
now more into analytics for network grooming for
5G, and providing valuable services to help
monetize their networks.
Operators have many questions. "How do I get
prepared for 5G, and enable augmented reality, virtual
CPE, or virtual BRAS, which is more of a broadband
access service? How can I generate more money?
How do I improve quality of experience for my
customers?"
We're seeing a change in deployment of
applications based on regions. In China and Asia-
Pacific countries like Japan, we're starting to see
more operator uptake on services as well. In the
rest of Asia and in Oceania, we're seeing more
security, deep packet inspection on how they can
reroute traffic, load balancing, and, again, network
optimization.
It's an exciting area, because there are so many
different applications you can run at the edge.
By the way, Monica, I wanted to define the edge.
We see the edge as anything that is outside the
data center and closer to the customer.
From an ADLINK perspective, we see the edge
being the customer premises, or the RAN tower
itself, or the central office. We see that as being
very, very close to the customer, and optimizing
the utilization of the operator's key assets: the
tower and the central offices.
Monica: You're working on multiple fronts. How
are you supporting your partners and customers?
Jeff: We're seeing things starting to gear up,
mainly from the central office towards the
customer premises. We have products that fit in
each of those areas. We have an outdoor server
that fits directly into the radio tower itself. It sits
outside. It's waterproof and it's fanless.
A primary question is, "How do I replace a lot of
that equipment at my RAN tower and virtualize all
that equipment on a Xeon-class server?" That was
part of our early proofs of concept with Verizon.
We were also embedded in some of the Nokia
products and their Liquid server program.
Now we're starting to see all these other new
applications coming in: video monitoring, video
surveillance, and traffic management. All these
cool applications are widespread at the edge. We
see great potential there.
Monica: You mentioned multiple edge locations.
Does each operator have a different preference
for where it wants the edge to be, or does it
depend on the application?
Jeff: It's a little bit of all that. That's a great
question. If you look at virtual CPE as an up-and-
coming application, a lot of carriers are saying it
looks like it's going to be a $1.5 billion market by
2019 or 2020.
It could be these little, tiny boxes that you actually
put on the customer premises, like a Starbucks, a
McDonald's or a small franchise. Or it could be a
larger data-center environment, where the telco
would place their equipment in that data center or
in a closet, so it needs a bigger space.
When you look at things like virtual radio access
networks, the operator needs to virtualize a lot of
key equipment at the tower. As the network
grows, things like load balancing, traffic shaping,
content injection, lawful intercept, all of these key
programs that use DPI, would use larger servers in
the central office.
Seeing the whole gamut of infrastructure at the
edge, we initially thought that we'd put everything
in the cloud and everything would be perfect.
But latency increases and the backhaul
network introduces delays.
Also, the cloud may not have the horsepower to
host the number of appliances that are going to be
attached to the network. That's where the critical
part of computing at the edge comes into play.
Monica: What is the security advantage of moving
functionality to the edge?
Jeff: The key aspect is securing the service
provider's customers. One of the key benefits of
things like vCPE is offering next-gen firewall
protection, DDoS protection, or encryption.
There's value in the ability to do IPsec from the
customer all the way to the customer's
application, which could be in their private cloud,
and offering as a service the encryption of all the
messages from the vCPE end user and equipment
all the way to their cloud.
Security is, of course, one of the most important
aspects. The service provider can offer these
security services without the end users having to
figure out what type of security they want to use.
Again, this comes back to the monetization of the
operator's network and using that entire
infrastructure to benefit the operator's end user.
Monica: You said that to make sure the ecosystem
is in place, you have to work with multiple partners
and customers. At the same time, there are also
many initiatives to support edge computing.
MEC is maybe the most talked-about one, but
there are many other initiatives, too. How do you
see them working together?
Jeff: We've been part of the ETSI MEC, which is the
ETSI version of mobile edge compute. With this
involvement and through our partners, we've
enabled the MEC infrastructure for deployment.
We've also been leaders of another edge
technology for IoT called the OpenFog Consortium.
Moving into 2017, more committees and
sponsored consortiums are coming into play.
These include Central Office Re-architected as a
Datacenter (CORD), Telecom Infrastructure Project
(TIP), Open Compute Project for Carrier Grade
systems (OCP-CG), and the Open Edge Compute.
New ones are starting almost every week.
I think it's great. We're setting up these different
consortiums, committees and new research
groups to help the service providers look at the
openness of the software and the hardware.
It allows industry leaders like ADLINK to come in,
provide context, and drive some of the standards
in these different areas, whether it's IoT at the
edge, mobile edge compute, or computing edge,
which enables our product development.
But how do we have all these initiatives talk to one
another? Although the original intent of having
these consortiums is great, it's also confusing the
industry with overlap and messaging. Do I use ETSI
MEC? What is the difference between TIP and
OCP? Do I use CORD? Do I use Open Edge
software? What types of standards do I want to
use, and how do they all interoperate? These are
the same early issues seen in NFV deployments
using open source, which overcomplicates
service chaining and deployment.
Then you throw in 5G concepts for faster and agile
mobile networks. How do all these work together
and come together long term? How do I optimize
and monetize my network? Time will tell which of
these standards work out and which ones service
providers want to use.
I think we're also hurting ourselves by adding more
and more of these industry consortiums to the
mix. It's just adding to the confusion. I would love
to see us all come together, work together. Luckily,
ADLINK is involved in a lot of these consortiums, so
we can bridge some of the gaps and give our ideas
to each one with a single focus. Time will tell.
Since we're at such early stages of edge
computing, especially in the mobile area, I think
the ETSI MEC will be the forerunner, and then we'll
get into CORD-M, OpenFog Consortium and others
down the road and see how they can interact,
especially with Open Edge.
The question for the member companies will be
how to make software available to suppliers as
open source, so the operators can use it within
their networks.
Monica: Do you think there is a risk of
fragmentation or duplication of efforts with all
these initiatives?
Jeff: Absolutely; I'm seeing it myself. I'm doing the
work for CORD. I'm doing it for ETSI MEC and the
OpenFog Consortium. Our ecosystem partners are
focused on each of these different areas too.
Individually, they may not be aware of what ETSI
MEC is doing, what CORD is doing, what Open
Edge is doing, and what the OpenFog Consortium
is doing. That leads to too many ingredients for a
simple deployment at the edge. ADLINK is trying to
simplify and deliver a standards-based, open
architecture for our customers.
A great thing happened last year during the MEC
World Congress in Munich. ETSI MEC invited the
OpenFog Consortium for a day-long session,
enabling collaboration between the OpenFog
Consortium and ETSI MEC to go through the intent
of how to work together and what needs to occur
to partner with ETSI MEC.
Luckily, we've been part of both committees, and
it was great seeing that first partnership between
two consortiums and two committees starting to
work together. Hopefully, that will expand into
CORD, TIP, OCP, 3GPP and ETSI NFV.
Monica: What does ADLINK do to help its
customers select the most relevant initiatives?
Jeff: Based on my experience over the past few
years in the MEC industry, as we're starting to see
more deployments and more proofs of concept,
the operators and the service providers do need
help.
Mobile edge is fairly new in the US and Europe.
Operators are trying to figure out what they want
to do with edge computing. How do they
implement it? What is the easiest way to take your
technology and your partner's technology, and
implement it into their network? They need help
with that.
ADLINK comes into the customer's situation
wearing a solution hat. We have a series of
products that fit all the different hardware needs
and routing software that are part of mobile edge
computing.
We also bring in our reference partners: partners
that have been validated on our hardware, or
whose software we embed within our hardware
for a full PaaS (platform as a service). This way we
can de-risk a lot of the proofs of concept and some
of the concerns that carriers may have.
As I said earlier, operators are really looking for
partners and help. Moving away from the typical
procurement model, operators are looking for risk
sharing and partners to assist them with their
decisions and deployments. For example, an
operator may want the ability to do 5G analytics at
the edge, investigating if users' equipment is
functioning as expected on the network, while
improving QoS and QoE as networks transition to 5G.
We would pull in one of our key partners that does
analytics for the RAN and for smartphones and can
look at all the attributes of roaming from tower to
tower, the communications between those two
towers, and all the different aspects of that phone's
connection. Drops, unnecessary packet losses,
application performance — all of these are things
we can track at the edge rather than in the cloud. Having
real-time analytics at the edge can enable real-
time load balancing, content delivery, application
performance and other key items 5G is promising
to the end user.
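As a concrete illustration, the kind of per-tower KPI aggregation such an edge analytics partner might run can be sketched in a few lines of Python. The data model, field names, and metrics below are hypothetical, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class ConnectionSample:
    tower_id: str
    dropped: bool          # did the connection drop?
    packets_sent: int
    packets_lost: int

def tower_kpis(samples):
    """Aggregate drop rate and packet-loss rate per tower.

    Running this close to the tower, rather than in a central
    cloud, is what makes the statistics available in near real time.
    """
    totals = {}
    for s in samples:
        t = totals.setdefault(s.tower_id,
                              {"conns": 0, "drops": 0, "sent": 0, "lost": 0})
        t["conns"] += 1
        t["drops"] += int(s.dropped)
        t["sent"] += s.packets_sent
        t["lost"] += s.packets_lost
    return {
        tid: {"drop_rate": t["drops"] / t["conns"],
              "loss_rate": t["lost"] / t["sent"] if t["sent"] else 0.0}
        for tid, t in totals.items()
    }
```

The point of the sketch is the locality: the raw samples never need to leave the edge site; only the small KPI summary would be shipped upstream.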
We would bring in that partner, introduce it to the
operator, and say, "Our partner has already been
validated. Let's do the proof of concept. By the
way, this is built on Wind River, Tieto, or Saguna's
software, which is also pre-validated and pre-
integrated within the system."
We're not just coming in with a piece of hardware.
We're coming in with a piece of hardware plus our
ecosystem of partners whose products are either embedded,
referenced, or validated on the system. That de-
risks the operator's position.
Monica: The operators also need help with
monetization, because that's a powerful driver for
edge computing. How can you help them with that?
Jeff: There are a lot of buzzwords that operators
hear: vBRAS, vCPE, augmented
reality, and now artificial intelligence, plus IoT. We
try to listen to our customers. We want to
understand what problem they're trying to solve.
If the problem is "I want to make more money,"
then we can bring in joint solutions with partners,
or provide from our own experience examples of
how operators can monetize their network. We try
to work together as partners to understand what
the hot buttons of their customers are. What are
the key driving forces? What are the TCO and ROI
they need? How can we help them by integrating
all the software within our hardware and
becoming the one throat to choke — the single
support entity or integrator? How can we help
them as they go to different customers to test it all out?
What's also cool is that every time we go meet
with a service provider, we'll come up with
different use cases. When we walk away, we'll
come out with 50 more use cases that we had
never thought of. It's really great.
Then we go back to our offices and we look at
them. For instance, "We never thought about this.
How do we integrate IoT in the transportation
realm for a municipality?" Or a carrier might want
to know, "How do I operate my network within
smart-city trials with ADLINK products and
ADLINK's ecosystem software products?"
Because we're at the forefront of mobile edge
computing, we hear all these use cases of what we
could do with it. Again, we bring that
experience to other carriers in Europe and Asia —
and all of that exponentially expands the range of use cases we see.
Monica: Your experience in different verticals
helps, because a lot of MEC applications are going
to be for verticals or IoT — it's more than just music
and video caching.
When we talk about MEC, we often talk about 5G
too. What's the relationship between the two? Is
5G going to enable some part of MEC that we
cannot deploy now, or is MEC going to be
necessary to support 5G?
Jeff: I think it's the latter. I think we'll start seeing
more and more MEC deployments, because MEC
is a key enabler for 5G. There are three or four
different factors around MEC use cases for 5G.
One is that we know the key deliverable of 5G is
speed. How do I increase the amount of data
throughput to my end user? Whether that end
user is an appliance, an intelligent car, or a human,
it's all about speed. Instead of megabits, it's gigabits.
Second, you have the enormous number of
devices that will be attaching to the mobile
network. 5G has to be able to manage this.
Where MEC comes into play is early on with the
analytics, knowing how the network evolves as it is
growing. Operators need to know how to use
analytics to help them conform their network in
certain areas, whether metropolitan, urban or rural.
How can I use analytics to do that at the edge, at
the tower itself? Using that data, I can say I need
more compute power in this area to do this, this,
and this for speed or because of the number of
new appliances that are being attached.
And later, the question becomes how to use things
like video caching and content delivery — areas that
take up a lot of bandwidth, that require a lot of
processing power from Xeon processors, that
could be done in the cloud but need to be done at
the edge to ensure that the user's experience does not suffer.
MEC is an introduction to 5G. As 5G rolls out, I
foresee more and more edge and cloud
computing technologies deployed to enable that speed.
Once the 5G standards are written, the radio
equipment is pretty much set.
Monica: What is ADLINK doing to prepare for 5G?
Jeff: We're not selling radio equipment. We're not
selling some of the 5G components that are
focused on the bandwidth and the frequencies.
However, what we are selling are the high
compute technologies at the edge. Again, that will
be in the central office, at the customer premises
or at the radio tower. A lot of those products are
related to the virtualized state of the radio towers.
As you start getting into 5G, you're going to see
both cloud RAN and virtual RAN. Our set-top box,
which is our brand, is focused primarily on vRAN
technology — a cloud environment at the tower.
ADLINK has also developed the Modular Industrial
Cloud Architecture that enables a modular type of
compute system that can be put into the central
office, or a data center, or the customer premises.
This architecture can support things like vCPE,
deep packet inspection, security, load balancing,
load routing — all of these different attributes that
need a lot of different compute power or I/O capacity.
We're also heavily into open architecture. Our
edge compute products follow a modular cloud design.
We're submitting specs to the Open Compute
Project, or OCP, specifically for telecom. It's called
Carrier Grade Open Rack. We truly believe in an
open architecture that enables our partners or
even our competitors to build the same type of
modular components that can fit into it.
Those are the key aspects of 5G, because once 5G
starts happening, you're going to see more
openness, more suppliers like ADLINK and
ecosystem partners needing to work together
more, not only on the hardware, but the
components for end-to-end connectivity, and the
service training that goes with it.
About ADLINK Technology
ADLINK Technology is enabling IoT with innovative embedded computing solutions for edge devices, intelligent
gateways and cloud services. ADLINK's products are application-ready for industrial automation,
communications, medical, defense, transportation, and infotainment industries. Our product range includes
motherboards, blades, chassis, modules, and systems based on industry standard form factors, as well as an
extensive line of test and measurement products, smart touch computers, displays and handhelds that support
the global transition to always connected systems. Many products are Extreme Rugged, supporting extended
operating temperature ranges, and MIL-STD levels of shock and vibration. ADLINK is a Premier Member of the
Intel® Internet of Things Solutions Alliance and is active in several standards organizations, including OCP, ETSI
MEC and NFV, CORD, OpenFog Consortium, TIP, the PCI Industrial Computer Manufacturers Group (PICMG),
the PXI Systems Alliance (PXISA), and the Standardization Group for Embedded Technologies (SGET). For more
information, please visit www.adlinktech.com.
About Jeff Sharpe
Located in Portland, Oregon, Jeff Sharpe is the Senior Product Manager for ADLINK's Network and
Communications Portfolio, focusing on product investment strategies and business and market development for the
NFV/SDN/MEC/IoT markets. Jeff has over 34 years of telecom and data communications experience, with a
key focus on network evolution strategies and product delivery. Prior to ADLINK, Jeff was Senior Manager of
Solutions at Radisys and Managing Director of Next Generation Platform Products at Nortel Networks.
Since 1983, Advantech has developed hardware
for a wide range of environments, verticals,
requirements, IoT applications, and device types.
Verticals include process control, digital health,
power and energy, as well as smart city
applications; the IoT applications are for industrial
automation, intelligent systems and connectivity,
among other uses.
Because of the diverse markets it serves,
Advantech offers multiple equipment types ?
including systems, platforms and network
appliances, all the way to embedded devices,
which connect sensors, actuators and cameras,
and rugged mobile devices (e.g., for transportation).
MEC and edge computing fall within the scope of
the Networks and Communications Group.
Advantech provides hardware solutions at the
outer edge that enable MEC and OpenFog
deployments and the push to move processing
and storage to the far edge.
Advantech has recently introduced a virtualized
platform, Packetarium XLc, for MEC and other
edge-computing deployments. The platform
provides a carrier-grade interim solution for 4G
networks that is ready for 5G, and it is based on an
open architecture and industry standards.
Advantech positions it as a micro-datacenter-in-a-
box, or a microserver with a modular design
meant to provide scalability. According to the
company, it can accommodate 9 slots, up to 288
Intel Xeon processor cores, and is typically installed
away from big, centralized data centers.
The carrier-grade Packetarium XLc supports five
nines availability, complies with NEBS Level 3 and
uses a 6U compact form factor with reduced depth
(400 mm). The Packetarium XLc PAC-6009 flagship-
model server is also designed to help operators
transition from legacy solutions such as ATCA in a gradual, cost-effective way.
The Packetarium platform is part of Advantech's
NFV Elasticity initiative that leverages scalable
Intel-based platforms to enable service providers
to deploy VNFs anywhere in the network ? and
specifically in edge locations where proximity to
the user improves performance, QoE, and
resource utilization. NFV Elasticity helps operators
integrate the infrastructure in the core and at the
edge with RAN elements such as access points,
macro cells and small cells.
Advantech participates in an ETSI MEC PoC, "Multi-
Service MEC Platform for Advanced Service
Delivery," along with Brocade, GigaSpaces, Saguna
and Vasona Networks. The PoC demonstrates how
a unified NFV infrastructure with a cloud
orchestration system can concurrently support
multiple MEC platforms and applications. The MEC
platform provides operators a way to leverage
analytics at the edge, helping them to optimize
RAN performance and, especially, to minimize latency.
Flexibility at the edge
A conversation with Paul Stevens,
Marketing Director, Networks and
Communications Group, Advantech
Monica Paolini: To support processing and storage
at the edge, operators need hardware that is
scalable, cost effective and flexible. The
requirements are variable, driven by a wide range
of environments and performance targets. In this
conversation, we talk about how hardware
designed for the edge can meet these challenges
with Paul Stevens, Marketing Director, Networks
and Communications Group at Advantech.
Paul, Advantech has a broad portfolio of solutions.
What is Advantech's focus on edge computing and MEC?
Paul Stevens: Advantech is a manufacturer that's
been in the embedded computing business now
for over 30 years. We've grown over that time to
be a billion-dollar company, focusing on a diverse
range of industrial areas. Over the years, our
products have been used to connect thousands of
industrial devices, sensors, actuators, and so on at
the edge of the network, where we've been active
in IoT-like applications such as SCADA and remote
monitoring for a number of years.
Various divisions across Advantech work on
embedded computing in all of its forms. The group
that I'm working for, the Networks and
Communications Group, has focused over the last 15 or
20 years on telecom infrastructure and enterprise networking.
We're finding now that a lot of the businesses are
beginning to converge as the new IP infrastructure
comes into play. Equipment that we'd previously
been designing for the core is now finding itself
out at the edge as well. And there are different
constraints there, as mobile edge computing, fog
computing and so on start to come into play.
Monica: What are the requirements that change
as you move from the core to the edge? How do
these changes affect you?
Paul: I think there's a need for more scalability and
flexibility in platforms that we design for the edge.
A couple of years ago, when NFV came along, we
were somewhat concerned that everything would
disappear into data centers and into the cloud, and
we'd all be out of business.
On the contrary, though, we have found that,
building on our embedded heritage, there are a
lot of opportunities and a growing market demand
at the edge. Over Advantech's history, we've
acquired precious experience working with very
stringent environmental specifications in central
offices, for carrier-grade and NEBS compliance.
Now we?re putting that experience to work in the
harsher and more rugged environments of the
systems we provide at the edge.
On the scalability side, scaling out offers more
processing headroom — for example, to add extra
baseband and MEC application processing as
needs evolve. Scaling up and down involves
greater design flexibility and providing more cost-
optimized networking gear to match precise
workloads at specific physical locations.
Monica: There are clearly different physical
constraints. What about equipment size, or
power? Does it depend on where you decide the
edge actually is?
Paul: Yes. It's all about defining where the edge
actually is, because it is a bit of a moving target.
And it will continue to be a moving target over the
next few years as technology advances.
In mobile edge computing, for example, a lot of
the discussions were initially focused on putting
more intelligence at the radio head or the eNodeB.
We're beginning to see that bringing the compute
or virtual edge up into a higher-level aggregation
point is probably a better way to plan the network
architecture and topology.
The devices that we've been building are more
ruggedized. If they're for outside use, then there
are temperature and size constraints, as well as
power constraints to consider. If we're moving into
areas that are closer to central office
requirements, then we've got to adapt to their
various environmental needs and space needs. If
we're close to transmission equipment, we need
short-depth boxes and so on.
The products that we're designing scale from the
edge of the network with a few cores, to these
aggregation points, where higher levels of
computing density are required. At the same time,
we have to meet tight packaging and power constraints.
Monica: As you said, the edge can be a moving
target. And you offer solutions for different edge
locations. But how do operators decide where the
edge is for what they're trying to accomplish?
Paul: It will depend on the services they're trying
to offer. We can draw an analogy with customer
premises equipment in the enterprise, where vCPE
and technologies like SD-WAN are beginning to
take over. There, the edge is moving between the
customer premises equipment and the managed
services in the cloud. Now you can move those
services where you want, from the cloud down to
the CPE, and have them typically running on the CPE itself.
We're looking more at deploying devices that are
closer to the user, that are more programmable,
that are more flexible. We can now run virtual
functions either on the devices that are at the
edge of the network or up in the cloud, wherever
it makes the most sense from a security
perspective, from a performance perspective, and
also from a latency perspective.
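That placement decision can be sketched as a toy rule of thumb. The thresholds below are invented for illustration; real placement logic would weigh many more factors:

```python
def place_function(latency_budget_ms: float,
                   data_must_stay_local: bool) -> str:
    """Pick where to run a virtual function: edge device, aggregation
    point, or central cloud. Thresholds are illustrative only."""
    # Security/privacy: data that must not leave the site pins the
    # function to the edge regardless of the latency budget.
    if data_must_stay_local:
        return "edge device"
    # Latency: tighter budgets push the function closer to the user.
    if latency_budget_ms < 10:
        return "edge device"
    if latency_budget_ms < 50:
        return "aggregation point"
    return "central cloud"
```

The design choice the sketch captures is that security constraints act as a hard filter, while latency selects among the remaining locations.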
Monica: The closer you go to the user, the lower
the latency. But what's the security advantage in
moving functionality from the core to an edge
aggregation point closer to the RAN, or to the device itself?
Paul: The security gateway function in the RAN is
evolving, especially with densification. We're
finding that it is better placed at aggregation
points at the edge, which secure data as it hits the
network, instead of placing gateways in the core.
This can also take place in the same system as
vRAN and MEC processing.
Monica: What is the relationship between MEC
and virtualization? Is a virtualized network a
prerequisite for deploying MEC? Or do they
complement each other?
Paul: I think they go together, because the whole
ecosystem evolves around NFV. They really go
hand in hand. From a MEC perspective, all
the layers are built on multi-core processors, where
virtualization becomes absolutely key. Most
companies are pursuing Network Functions
Virtualization and service function chaining as a foundation.
Monica: Within a virtualized network, you may
have multiple edges, depending on the
application. This gives the operator more flexibility
in managing applications.
Paul: Yes, the elasticity comes with being able to
move those applications either from the data
center to aggregation points, or even closer to the user.
And just where is the edge when user entities and
devices such as connected cars start to talk to each
other, or make use of other neighboring devices
for communication purposes? It becomes more
difficult to decide just where the edge is. I think
we'll see it moving around, and elasticity is key to
being able to move network functions around with it.
Monica: What exactly is edge computing? What
does it entail? MEC is one take on that. Then there
are other initiatives such as OpenFog, or CORD.
How do you see them playing together as we push
the functionality toward the edge?
Paul: There are a lot of open initiatives — we're
trying to democratize, to a larger extent, what can
be done at the edge. Mobile edge computing
offers a new ecosystem and a new value chain.
The idea is for operators to open up their radio
access networks to allow rapid deployment of new applications and services.
We're still in a period of transition. In 2017 we're
working on proofs of concept. We'll start to see
more trials out there. I think it's key that we test
out the various technologies to see how each of
the new initiatives comes together for a better result.
At Advantech, we work very closely with the
vendors that are, let's say, closer to the solution —
and that are working on the standards.
To position Advantech: we're not the solution
provider — we're the building-block provider. Our
expertise really is in the compute engines and
in the embedded products that we are
capable of putting into these diverse locations.
Monica: You mentioned scalability before. What's
your approach to providing scalability to your customers?
Paul: We can scale equipment designs to meet
performance, throughput, and connectivity needs
wherever necessary. Typically, we optimize
platform designs to meet those requirements and
overall financial constraints.
As an example, if a specific open and universal
appliance priced at $300 is being deployed in
50,000 units, then, obviously, there are dramatic
capex savings when compared with the
deployment of an over-specified standard server
equivalent that could multiply capex by 5 or 10 times.
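The arithmetic behind that comparison is easy to make explicit. The sketch below uses the figures from the example above, with a hypothetical 7× server cost multiplier as a midpoint of the "5 or 10" range:

```python
APPLIANCE_COST = 300        # USD per unit, from the example above
UNITS = 50_000              # deployment size from the example
SERVER_MULTIPLIER = 7       # illustrative midpoint of the 5-10x range

appliance_capex = APPLIANCE_COST * UNITS
server_capex = APPLIANCE_COST * SERVER_MULTIPLIER * UNITS

print(f"appliance capex: ${appliance_capex:,}")         # $15,000,000
print(f"standard-server capex: ${server_capex:,}")      # $105,000,000
print(f"savings: ${server_capex - appliance_capex:,}")  # $90,000,000
```

At this scale, even a small per-unit premium compounds into tens of millions of dollars, which is why tuning platforms to the workload matters.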
It's very important that we work with our
customers — the OEMs, the system integrators, or
the operators — to try to tune those performance
requirements for various locations in the network.
Elasticity will allow us to move some of the
compute capacity around. But in many cases we're
looking at what optimum platform performance is
needed at specific places in the network.
Monica: Virtualization and edge computing also
change the ecosystem composition and
relationships. It's not just a hardware change. It's
also the way you run an operator network, and the
value chain dynamics.
How is edge computing changing Advantech's
relationship both with vendors and OEMs and with operators?
Paul: With NFV, much has happened within the
ecosystem. NFV has brought a lot of innovation. It
does mean that there are many more building
blocks to actually put together — hence the number
of initiatives to standardize around NFV in various areas.
For us, there is no blueprint right now. I think the
ideal situation is to be, as we are, working very
closely with select members of that NFV
ecosystem, including virtualized network function vendors.
A lot depends on the operators. Some tier ones
have their own R&D capabilities, others are doing
the profiling and testing themselves. Then it will
depend on the business model that best suits
them for larger-scale rollout.
In some cases, we're working with the
operators' preferred system integrators or service
vendors. In other cases, the operators are doing a
certain amount of work themselves and we deal
directly with them. The environment is quite
flexible now. Some of those models are beginning
to shake out as various NFV applications are
deployed, particularly on the SD-WAN side.
The important thing is to bring the ecosystem
together and make sure we set up the trials, the
testing of all the different elements, so we have
solutions that are proven and ready to go.
Monica: Do you think that with virtualization,
operators will start taking more of a hands-on
approach in selecting vendors? Does that mean
you may be working more with service providers
directly, rather than through partnerships?
Paul: There's a mix. Over the last year or so,
several of our OEM customers have become our
partners. They're relying on us for hardware
development as well as certification.
In the past, they would take on certifications like
NEBS for North America. Now, where there's more
of a partnership hand-off, we're taking care of
some of those tasks that they did originally.
That's from the OEM side. The operators are trying
to take a new approach with NFV. We're seeing
more of a white-box approach, where we're
looking at a number of almost universal platforms
at different performance levels.
At the end of the day, it's really how much
performance you can pack into the space
available, how much connectivity and offload you
need in the system. We?re being influenced
directly by the tier-one operators in that respect.
Monica: You mentioned trying to get the
ecosystem going. Proofs of concept are a great
way to explore what is achievable. What is your
experience to date on that?
Paul: Pretty good so far. We kicked off an ETSI MEC
PoC back in September. We were part of the
Brocade initiative, along with Saguna and Vasona,
as well as GigaSpaces with their Cloudify product.
We demonstrated an advanced service delivery
proof of concept at the MEC Congress in Munich.
In the PoC, we put together a platform that makes
it easier for developers from the application
developer space to start developing applications
that can easily and quickly be tried out in a live
PoC or a live trial.
The PoC is coming along. The results haven't been
published yet. We're still in the evaluation and
development stage with some of the APIs. I think
over the next three or four months, we?ll start to
see the results of that.
Monica: What were the learning points in the
PoC? What applications have you rolled out?
Paul: The learning points were in bringing the
various ecosystem parts together and working as a
team to create a fully functional multi-vendor,
multi-application MEC platform that allows both
virtualized network functions and cloud services to
be instantiated and delivered.
We're working on MEC user-plane functionality
following the 3GPP documentation, as well as
developing the various MEC APIs between
platforms. Some of the partners have more
experience in the orchestration portion of the NFV
infrastructure. We're bringing user plane
functionality and APIs together so we have a
platform that can accelerate the development of applications.
From a MEC perspective, the new services and
apps we're looking at are connected cars, IoT,
virtual reality and augmented reality applications.
For that to start to happen we need to put
platforms out there that are ready to go and can
be connected up to a cellular network. This makes
it possible to start smaller-scale trials sooner.
Monica: It's interesting that you mentioned IoT. At
the very beginning, attention on MEC was mostly
focused around video traffic optimization.
Now there is a growing traction from enterprise
and vertical-specific applications, as well as IoT.
What that means is that the ecosystem also has to
include partners from enterprise and, within that,
specific verticals. The involvement of the
enterprise is going to be crucial.
Paul: Advantech has several divisions, already, that
are working on multiple types of IoT. We are
beginning to bring our groups together for a common approach.
There's certainly a level of convergence that
wasn't there a year ago but that is now picking up
pace. Several of these platforms, and in particular
the Packetarium XLc, can be seen as a micro data
center at the edge of the network, or as an extension of the cloud.
That's where we're going to find a new generation
of data centers: at the edge of the network,
handling in one particular aggregation point
perhaps several thousand IoT gateways,
connected cars, video surveillance cameras, and so on.
Where there's a need for fast response and low
latency, that's where we'll see that level of
connectivity and local processing going on.
One important factor, though: while the failure of
a single system in a data center hosting aisles of
NFV infrastructure may not be critical, the
situation changes as equipment gets deployed at
the edge. In small sites with half a rack or less of
NFV infrastructure, failure of a system can have a
big impact on service availability and user
experience. Edge sites are unmanned, remote
locations, which means higher MTTR and service costs
compared to central offices and telecom data
centers, so the availability of each NFVI node at the edge of
the network is critical.
Packetarium XLc supports high availability through
redundancy at all levels: 2+2 redundant power
supplies, the ability to withstand a single fan
failure, redundant system management, and redundant system
fabric switches and control modules. Hot-swap
support for field-replaceable units keeps repair times short.
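A back-of-the-envelope calculation shows why redundancy at every level matters for five-nines targets at unmanned sites. The per-unit availability figures below are hypothetical, chosen only to illustrate the effect:

```python
def redundant_pair(unit_availability: float) -> float:
    """Availability of a 1-of-2 redundant pair: the subsystem is down
    only if both independent units fail at the same time."""
    return 1.0 - (1.0 - unit_availability) ** 2

single = 0.999                  # hypothetical 99.9% per unit
pair = redundant_pair(single)   # ~0.999999: six nines for the pair

# Independent subsystems in series multiply, so redundancy at every
# level keeps the product above the five-nines (0.99999) target.
system = (redundant_pair(0.999)      # e.g., power supplies
          * redundant_pair(0.9995)   # e.g., fabric switches
          * redundant_pair(0.999))   # e.g., control modules
```

A single non-redundant 99.9% unit anywhere in the chain would by itself cap the system at three nines, which is why redundancy "at all levels" is the stated design goal.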
Monica: What is unique about Advantech's
contribution to the MEC ecosystem?
Paul: I think it's providing the platforms that are
adapted for MEC and needed now in the field. A
lot of work in the various PoCs has taken place on
standard IT servers — on data-center servers. We're
focusing on the edge, and there's a point where
you need to go out there and start real testing.
We're listening to all of the different performance
requirements of the players in this space. We're
working hand-in-hand with a growing ecosystem
to put together a platform that developers can get
out and start working on.
For example, at Mobile World Congress 2017 we'll
be hooking up some remote radio heads to the
platform in a vRAN environment.
Packetarium XLc, Advantech's edge computing
platform, brings all of that together. It can connect
to the radio heads, perform baseband and MEC
application processing, and also provide backhaul
into the core network. We're trying to put
together platform configurations that are very
scalable and very powerful so they can connect to
many remote radio heads in dense areas.
The platform can obviously also connect to small
cells in a MEC scenario. At Mobile World
Congress 2017, we will be demonstrating some of
the software-defined radio applications that can
be employed. Visitors will be able to discover how
to connect our micro data centers in a box to a real network.
Monica: Today we are still at a proof of concept
stage. When should we expect a commercial
deployment of MEC applications?
Paul: I think in 2017 the baseline infrastructure will
come together for real testing and trials.
Obviously, everybody's got their sights set on 5G,
which is still a few years out. In 2017 we?re also
going to see more advances in new technology
which can be deployed, and upon which more live
applications can start to be tested and validated.
Monica: What will you be doing at Advantech over
the next five years?
Paul: Our focus is going to be on adding services
where required — probably some higher levels of
integration, depending on how fast the ecosystem
changes move forward. We will continue
working closely with select ecosystem players.
In this way, we will continue to bring together the
unique expertise that we have in developing
hardware that fits out there at the edge, and the
diverse expertise of the other members of the ecosystem.
At Advantech today, we position ourselves as an
application-ready platform provider. As things
continue to evolve, you'll see us moving further up
the integration chain to facilitate that process.
Advantech Co. Ltd. (TAIDEX: 2395) is a leader in providing trusted, innovative products, services, and solutions.
Its Networks & Communications Group provides the industry's broadest range of communications
infrastructure platforms, scaling from one to hundreds of Intel® cores, consolidating workloads onto a single
platform architecture and code base. The group's technology leadership stems from its x86 design expertise
combined with high-performance switching, hardware acceleration and innovative offload techniques. For the
new IP infrastructure, Advantech's NFV Elasticity framework extends NFV to the mobile edge by supporting
scalable carrier-grade platforms that run VNFs anywhere in the network. Operators, integrators and software
vendors can then rapidly validate the latest NFVI for vE-CPE, SD-WAN or MEC and benchmark VNFs using
Advantech's Remote Evaluation Service. For more information see www.advantech.com/nc
About Paul Stevens
Paul is Marketing Director for Advantech's Networks & Communications Group. Paul has focused on technology
marketing roles since he joined Advantech in 2002. Prior to that, he was European Marketing Manager at
Motorola Computer Group, where he managed partner initiatives and helped evangelize new technology
introductions. He is actively focused on helping build out Advantech's NFV ecosystem network. He studied
Electrical and Electronic Engineering in the UK and now lives with his family in France, where he says the food is excellent.
Artesyn has worked on developing power
conversion and embedded computing
technologies since 1971, initially as the Embedded
Computing and Power unit of Emerson Network
Power, and later through the acquisition of the
Motorola Computer Group, Force Computers and
Astec. Artesyn solutions span multiple verticals,
including communications, broadcast and
networking; industrial and transportation; and
military and government.
In telecommunications, it provides hardware and
platforms for the end-to-end network, from the
edge to the core, supporting broadcast and
media services and providing security and
integration across technologies. Over the last few
years, it has focused on virtualized solutions, both
in the RAN and in the core. Currently it is
developing the next generation of hardware to
support the transition to 5G.
The MaxCore platform is designed to meet service
providers' needs for edge computing and network
virtualization. With the MaxCore platform, Artesyn
provides a fully integrated suite of cloud-based
products that is scalable, flexible and power
efficient. MaxCore virtualizes L1–L3 baseband processing in
the RAN and supports MEC deployments. It is
geared to optimizing performance and minimizing
latency in environments with a high density of
traffic and subscribers, thus targeting network
operators' densification efforts and their transition
to advanced LTE functionality and 5G.
Within its commitment to fully virtualized
solutions, Artesyn sees MEC as a solution it
supports to expand the reach of networks,
maximize performance, reduce costs, and
generate monetization opportunities. Artesyn's
MEC server is designed to cover as many as 96 2x2
MIMO LTE-TDD 20 MHz sectors. It was developed
in partnership with Intel and Wind River.
Artesyn sees virtualized RAN and MEC as part of a
flexible RAN approach that enables operators to
reduce costs and to support a wider range of
services for enterprise customers and dense
indoor coverage. With the decentralized cloud
data center approach that is central to MEC,
operators can more efficiently provide bandwidth-
intensive and real-time applications within the
existing RAN infrastructure.
Artesyn has worked on many use cases that
require edge computing, including:
• IoT applications for smart cities
• Security gateways for residential multi-tenant buildings
• OTT video delivery and transcoding
• Augmented reality
• Location-aware services for retail,
government, health and education verticals
• Edge-based analytics to optimize the RAN
Edge applications and
services to get closer
to the subscriber
A conversation with Linsey Miller, VP
of Marketing, Artesyn
Monica Paolini: Moving processing and storage to
the edge is crucial to delivering the quality of
experience that real-time applications require. It
also enables the rollout of new revenue-
generating applications and services. In this
conversation, Linsey Miller, VP of marketing at
Artesyn, shares her perspective on MEC, and tells
us what she learned during the initial work on MEC.
Linsey, with edge computing, mobile operators
and other service providers are looking at ways to
improve performance and quality of experience
for their subscribers by moving functionality to the
edge. At Artesyn, you've been working on this area
for quite a long time. Can you tell us what you
have done to date in this area?
Linsey Miller: Artesyn has been part of the
wireless infrastructure for quite some time. We
provide embedded computing and power
conversion products to telecom equipment
manufacturers and service providers, and our
platform started to trend toward technologies that
needed to be located at the edge and to support
real-time functions. And so we got involved in a
very dense compute platform we call MaxCore.
MaxCore lends itself well to mobile edge
computing, because it deploys multiple different
virtual network functions and puts a lot of
processing power into a small space. We've always
been doing that in one way or another at Artesyn,
from platforms down to blades, drawing on our
Emerson Network Power and Motorola lineage.
We've always been involved in serving the wireless
infrastructure, so edge computing is an area we're
really excited about and honored to contribute to.
Monica: You started working on edge computing
before MEC even existed. You have your bases
covered when it comes to mobile edge computing.
Have things in edge computing changed since you
first started working on it?
Linsey: Yes, definitely. Before, a lot of functions
could exist, but in a slower, more disconnected
fashion, so you had to rely upon the core
infrastructure for data plane processing, instead of
provisioning that at the edge. You inherently had a
network that was not really provisioned as
efficiently as it could have been. This is one of the
biggest merits of NFV, Network Functions
Virtualization. Without NFV, you just couldn't set
up and tear down distinct services the way you now
can by having that capability at the edge, that is,
at the location where the user wants to use it.
Monica: This allows operators to allocate their
compute resources as they need them.
Linsey: Right. This excites us because it means we
could potentially save them a lot of money. If
you're provisioning for peak capacity, you're
putting a lot of equipment in a location that may
not need it all the time. An example would be a
shopping mall, a sporting event or a concert
venue, which may be completely full of users one
minute and may be completely empty the next.
With mobile edge computing, we can now
provision a network which moves to where users
are going and which maybe even changes the
functions within it based on the services they want
to use while they're there.
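Linsey's point about not provisioning for peak capacity can be sketched in a few lines. This is a hypothetical illustration, not Artesyn code: the function, the 500-users-per-instance figure and the user counts are all invented for the example.

```python
# Hypothetical sketch: scale edge VNF instances to follow demand instead
# of provisioning every site for its one-time peak.
def instances_needed(active_users: int, users_per_instance: int = 500) -> int:
    """How many VNF instances a venue needs for its current load."""
    if active_users <= 0:
        return 0
    return -(-active_users // users_per_instance)  # ceiling division

# A stadium peaking at 40,000 users needs 80 instances during the game,
# but only a handful an hour later, freeing the hardware for other work.
during_game = instances_needed(40_000)  # 80
after_game = instances_needed(1_200)    # 3
```

The point of the sketch is the asymmetry: capacity tracks the crowd, so the same hardware budget can serve many venues whose peaks do not coincide.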
Monica: In a dense environment, such as a
stadium, subscribers want to do a lot of video
streaming and video uploads, and those are
challenging from an infrastructure point of view.
How can operators deal with that?
Linsey: We're focused on a couple of things:
density and latency. We are the hardware
provider, and with mobile edge computing, the
promise is that you can do so much more at the
edge with all these unique and innovative
applications, but on general purpose hardware.
We're taking that hardware and making it serve
more users, with as many services as possible in
one place, depending on where they go.
Within our platform, we are focused on reducing
latency: the communications and the compute
functions that are within that platform enabling all
those applications are happening as quickly as
possible. That could be the difference between
seeing a live video streamed from a drone, versus
sitting there and waiting for your Instagram picture
or video to upload or download from the network.
The other focus at Artesyn is on density. We're
trying to get as many users and as many cells as
possible into a small space, working off one edge platform.
Monica: Stadiums are a good example of a venue
where you have both density and latency
requirements at the same time. Can you tell us
about trials you have done in this area?
Linsey: This started for us officially in 2015, and
then at Mobile World Congress in 2016. We were
excited to partner with China Mobile, which was
one of the first operators to be as outspoken as
possible about mobile edge computing and virtual
RAN, along with Wind River and Intel. That year we
also did a demonstration with Telefonica and SK Telecom.
Since then we've been involved in various demos.
One was with China Unicom and Baicells, which
did a drone demo. We've also been involved with
Deutsche Telekom at the NGMN conference.
NGMN has been the source of a lot of great info
on mobile edge computing and 5G.
We're excited to be a part of a lot of different
initiatives with operators, because many of them
are being innovative about this and really getting
hands-on about how they can take advantage of
this, because it means they can offer new services.
Another example is with Verizon. Verizon recently
did an Innovation Lab challenge, where they
invited multiple technology providers to come in
and show some new services. One of the
categories for that competition was low latency,
and we won a low latency award with our
MaxCore platform because we were, essentially,
showing a mobile edge computing application. We
were showing 360-degree virtual reality in the
context of a sporting event. Imagine you're at a
stadium, you're watching a game live, something
just happened that you want to see a replay of;
you could actually hold your phone up and see a
replay of that, with a 360-degree view. This is a great
example of new services operators can deploy that
users will just come to love, and eventually not
know how to live without.
Monica: This is a very good example of a new
service. What other new services are operators
looking into that are more than just streaming
video up or down?
Linsey: When you look at some of the virtual
functions that mobile edge computing can put in
place, it's exciting from a consumer point of view
as well as an enterprise point of view. We worked
with one of our application partners, Clavister, to
show virtual security gateways. We can show over
3,700 virtual security gateways being enabled by a
single platform. At a mobile edge computing conference, we
saw Carnegie Mellon show one of its examples: a
virtual network function based on facial
recognition. That's really exciting when you think
about how a location-aware element can identify
users and know them and cater things to their preferences.
Augmented reality, ad services and ad
monetization are also great ways operators can
use mobile edge computing to enhance the user experience.
The drone demo is from another trial we did. You
can imagine how video footage from a drone at
live events is a great application that can be
powered by the mobile edge computing network.
The focus today is on which of these services can
help operators make money now. Those are the
ones where customers and operators have the
keenest interest in how we can provision the
service. But there are a lot of new, exciting services
for the future and for 5G.
Monica: Local services can support monetization
efforts: for instance, advertisements or
applications that deliver content tied to a specific location.
Linsey: Yes, even concessions: it could be
something that simple. But if you think about
Pokémon Go as an example of augmented reality
and how you can enhance a user experience while
they?re shopping, maybe seeing a game or
something like that, you could make it really fun
and you could make it something users really want
to buy into and really want to use while they're there.
Monica: Are MEC and NFV changing Artesyn's
position within the value chain?
Linsey: They are changing our position in the
ecosystem a little bit. They also give us another
opportunity to show how our platform can really
help drive MEC and NFV forward.
When you think about NFV, you think about
dynamic provisioning and how you could do that
better as a general application. But you can also
think about the exciting possibility of enabling
multiple different virtual network functions on one
platform. Before, you had multiple different
bespoke appliances, with each doing a different
thing; now you can bring those into blades and a
platform, and they can be doing different things,
and you can turn them on and you can turn them off.
Network slicing is the amalgamation of all those
different virtual network functions. We think it
makes it easier for operators to deploy new services,
because now they can go and say, "I want to pick
my very best application solution providers. And
then I want to have this general-purpose hardware
that I know is going to be high performance, I
know it's going to have low latency, I know it's
going to have high density. I can support lots of
different service instances, and I can do them
quickly. I know if there's a real-time application I
want to incorporate, this one is going to work for
me." We're trying to build that rock-solid platform
that can go in all of those different directions, and
then work with the various different application
solution providers to benchmark and show how well they work together.
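The picture Linsey describes, multiple VNFs composed on one general-purpose platform and switched on and off per service, can be sketched as follows. This is an illustrative model only, not an ETSI or Artesyn interface; the catalog entries, slice fields and function names are invented.

```python
# Illustrative sketch: a network slice modeled as a named bundle of VNFs
# picked from a platform catalog, activated for an event and torn down after.
from dataclasses import dataclass, field
from typing import List

CATALOG = {"vEPC", "firewall", "video_cache", "analytics"}  # invented VNF names

@dataclass
class Slice:
    name: str
    vnfs: List[str] = field(default_factory=list)
    active: bool = False

def compose(name: str, wanted: List[str]) -> Slice:
    """Build a slice, rejecting VNFs the platform does not offer."""
    missing = set(wanted) - CATALOG
    if missing:
        raise ValueError(f"unknown VNFs: {sorted(missing)}")
    return Slice(name, list(wanted))

stadium = compose("stadium-video", ["vEPC", "video_cache"])
stadium.active = True   # turn the slice on for the game...
stadium.active = False  # ...and off when the venue empties
```

The design point mirrors the interview: the appliance-per-function model becomes a catalog plus composition, so adding a service is a data change rather than a hardware rollout.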
As a company, traditionally we have sold to the
telecom equipment manufacturers, the appliance
vendors, but because of NFV, our role is
changing. We are moving into a position where sometimes
we sell directly to the service provider, but we are
also working together with that telecom
equipment manufacturer to make sure its
application software is running on our platform in
the best possible way. We?re open to both models,
because ultimately those telecom equipment
manufacturers could be the super integrators of all
those different things, and in some cases they may
just empower the software that runs on our platform.
However, what makes Artesyn unique in the
market is that we are an integrator for complex
workloads. Whether it's a real-time function, or a
mix of different types of resources, or different
types of compute resources (they could be Intel,
could be NVIDIA, could be FPGAs, could be camera
inputs), we bring all of those things together in
one platform that can go in all those different directions.
Monica: You may end up working more closely
with operators, because they have started to take
a more active role in building and operating their networks.
Linsey: We're excited to work with some of the
ones that are really innovative, because they know
which applications are going to help them
monetize those services. They have very specific
problems to solve, and our team acts as a good
extension of their resources when it comes to that.
We've been really thrilled to collaborate with so
many of them already on mobile edge computing.
Monica: Is NFV a foundation and a requirement for MEC?
Linsey: I think they go hand in hand. Some of these
problem areas are great candidates for mobile
edge computing, the way they?re deployed today.
For instance, in an enterprise or stadium, there
may be a DAS and that is not their problem right
now. It's owned by the building owner or the
stadium provider, and the operator can't offer
great services on its network because the DAS just
doesn't really scale.
Mobile edge computing really does need NFV. You
need to have equipment that is dynamic and can
expand and contract based on what the network needs.
I think there's going to be a big sea change in
ownership of the network as a result of mobile
edge computing. If the stadium owner can take
that overhead off its plate, and not have to buy
and maintain that equipment, but can instead buy
it as a service from the operator, and the operator
can ensure that the network will serve up those
ads and the content the stadium owner wants to
stream to its consumers, then it?s the best of both
worlds for both sides of that equation. The
operator can now monetize services that it
couldn't before. And the stadium owner no longer
has to maintain something which isn't its core
competency anyway. Today it's frustrating to the
people that come to the venue, quite frankly, if
their wireless experience is below their expectations.
Monica: Stadium owners may also provide the
same infrastructure for different operators, so
there may be some consolidation at the edge.
Linsey: Yeah, that's really exciting, having an
operator-agnostic framework there.
Monica: That also might be very important for the
business case, because with MEC you're adding
infrastructure to the network. So how can you
justify the business case? Is the improvement in
performance sufficient to justify the cost that MEC
adds in terms of equipment? Will venue owners
have to contribute to the initial cost?
Linsey: I would think so, because there are areas
where they can benefit from MEC being in place.
Imagine if you're at a sporting event and the venue
owner can serve up not just concession ads, but
also jerseys or things you would buy in a store that
you otherwise might not physically see as you're
walking out, but it comes up on your app, and you
can have it shipped to your house. All sorts of
interesting follow-on purchases, ad monetization,
or loyalty could happen as a result of that.
Monica: Do you need to have MEC for 5G? Or do
you have to wait for 5G before you realize the full benefits of MEC?
Linsey: I would say, "Certainly not!" You don't
have to wait for 5G to take advantage of MEC; it's
here now. A couple of examples: We
demonstrated 96 cells on our current platform,
and that was LTE with Intel. We also partnered
with a company called Amarisoft, and we're
getting up to 120 cells, and that's 2x2 FDD LTE.
With LTE, operators are now in this lull because
they expanded their network, they made it better
(and equipment providers had a nice boon from
that build-out) and then things leveled off. LTE
networks have so much capacity still, and can
support many of these services that may involve
video. There are many new services that can be
deployed on an LTE network with no rip and
replace. I think there?s a keen interest from
operators to take advantage of the existing
infrastructure and build on it. MEC is a wonderful
bridge between the LTE network that is there
today and the 5G network that will come in the
future. But we don't have to wait for 5G to deploy MEC.
Monica: How will 5G make MEC better and more powerful?
Linsey: It's going to get really exciting in the context
of what the users can do with 5G. With driverless
cars, and machines that can communicate with
each other, 5G is going to bring a completely
different level of performance just based on how
much more bandwidth it's going to allocate for
applications like that. There will be more real-time
applications, and more mission-critical ones.
With LTE we're not going to have as many services
that are life or death. It might be really exciting,
cool and fun from a consumer point of view, but
handing over a lot of that control to functions
that you never before dreamed of being done
without human interaction, that's where 5G is
really going to be a different experience for all of us.
Monica: What are you working on at Artesyn now?
Linsey: Our first stage for mobile edge computing
was creating a platform and a technology that
would enable you to do all of these different
virtual network functions and virtual RAN, and the
network would incorporate just the basic tenets of
mobile edge computing.
Now we?re working on taking that to the next
level, to the mission-critical functions. We're
putting more features into our platform to enable
very high reliability. We are looking at things that
the telecom network has done historically at its
core, and implementing those at the edge so you
have high availability, high reliability.
We?re also working on the incorporation of more
and more processing elements that exist as types
of resources within those platforms. So, things like
GPU functionality, very dense computing, higher
connectivity, and proving out all of those different
applications with our different ecosystem partners.
It's a lot of work, but it's exciting when you look at
the operators that are moving fast in this space:
they've got the use cases that will hit the market
first, and that's our priority.
But in the background, we are an infrastructure
provider and we integrate complex workloads. We
want to make sure that all of these things are
working together, whether they're doing graphics
processing, compute, or connectivity. That is our
core focus, and making it better and better, and
more and more reliable at the edge, is our design
focus for the next couple of years.
Artesyn Embedded Technologies is a trusted leader in the design and manufacture of reliable embedded
computing solutions. For more than 40 years, Artesyn has enabled customers in a wide range of industries
including communications, military, aerospace and industrial automation to accelerate time-to-market, reduce
risk and focus on the deployment of new features that build market share. Building on the acquired heritage of
industry leaders such as Motorola Computer Group and Force Computers, Artesyn is a recognized leading
provider of advanced network computing solutions ranging from application-ready platforms, single board
computers, enclosures, blades and modules to enabling software and professional services.
About Linsey Miller
Linsey Miller is Vice President of Marketing for embedded computing solutions at Artesyn Embedded
Technologies, leading a team that includes global product marketing, technical marketing and marketing
communications. Linsey drives solutions and partnerships that enable Artesyn's customers to succeed as their
business models change by improving the performance, efficiency and capability of their critical computing
infrastructure. She has previously held senior sales and marketing positions with Emerson Network Power,
Interphase and Verizon.
Intel solutions are geared to addressing three
business and networking challenges that service
providers face: their need to
• Increase network capacity to meet growth in traffic
• Build agile networks to accelerate innovation
• Protect data traversing their networks
This requires an end-to-end transformation
toward virtualized, software-defined and cloud-
ready networks, from the devices all the way to
the core and the cloud. Driving this transformation
are Intel's silicon and software solutions, but Intel
is also working to
• Advance open source initiatives and standards
• Build an open ecosystem with initiatives like
the Intel Network Builders program
• Collaborate with service providers, cloud
players and enterprises
In wireless, Intel's focus is on the virtual network
infrastructure of 5G in three areas:
• Radio access technology, with anchor booster,
beamforming, NR technology and massive MIMO
• Access network, with FlexRAN, including
solutions for C-RAN, vRAN and small cells
• Core network, with router, vEPC, backbone
and network slicing solutions
MEC plays a central role in Intel's strategy toward
virtualization within a unified management plane
for service management and orchestration. Intel
sees MEC as a technology that will decrease RAN
opex, and that will help service providers create
new services and generate new revenues. The
company foresees a wide range of business
opportunities in verticals such as healthcare,
energy, manufacturing, retail, government,
transportation, financial services, and education.
MEC gets ready for
the enterprise and IoT
A conversation with Caroline Chan,
Vice President, Data Center Group, and
General Manager, 5G Infrastructure Division, Intel
Monica Paolini: Virtualization gives network
operators, both fixed and mobile, flexibility
on where to locate processing and storage within
their networks. And they have started to move
beyond the centralized cloud and place some
functions at the edge, or near the edge. Multiple-
access edge computing, or MEC, is one initiative
that provides a standards-based framework to
operators and vendors to enable edge computing.
I talked to Caroline Chan, Vice President, Data
Center Group, and General Manager, 5G
Infrastructure Division, at Intel, about the impact
of the move towards the edge of the network.
Caroline Chan: This is one of my favorite topics, so
I am excited to have this conversation. And I
noticed that you used the right word: you said
"multiple access." We used to call it "mobile edge
computing" when we started it, but now we've
expanded the horizon.
Monica: It is still somewhat difficult to get used to
the new name, but it's more than just a change in
what the MEC acronym stands for. But before we
get into the relevance of the move to "multiple
access," can you give us an overview of what Intel
is doing in MEC?
Caroline: At Intel, I am part of a group that
provides the silicon platforms for software to a
whole host of companies in the communications
industry. I am focused on wireless, specifically on 5G.
We started to work on MEC three or four years
ago, when it was just a twinkle in our eye. We
started this work with Huawei, Nokia, Vodafone,
DOCOMO, and IBM. We kicked off this initiative in
MEC in an attempt to cloudify the network, as a
way to bring the best-known methods of a cloud
service provider within the innovation cycle. This
gives operators the ability to generate additional
revenue and start capturing some significant
savings. Now this is clear as the industry is moving
towards more and more network function
virtualization. But, initially, with MEC we had to ask
ourselves, "What are the benefits we're really
looking to have? What benefits can we reap from MEC?"
Today, I am very excited. We started MEC and it's
evolving: we're now talking about trials and early
deployments. We've really come a long way, and
we've been in this as a technology partner to our
customers and end users from the very beginning.
Monica: As you mentioned in the beginning, MEC
stands for "multiple access," no longer "mobile
access." It started off as part of the virtualization of
mobile networks, and now the scope is expanding.
Caroline: When we started, because our
background was in wireless and everybody I
mentioned is involved in wireless, it was natural to
concentrate on wireless networks. But then within
the ETSI MEC, the operators asked, "Why limit this
to cellular? Why not Wi-Fi?" So we said, "Okay,
let's add Wi-Fi then." But then people asked, "Why
just wireless? What about wireline?" And this
brought about the second release that's coming
up. We redefined MEC as "multiple-access edge
computing." Because once you realize what the
benefits of MEC are, you see they are applicable to
all types of access networks.
Monica: It's an advantage to have the same
approach available across different networks.
Caroline: Yeah, exactly.
Monica: Let's look at the future also. So today, we
have 4G networks, and we started deploying MEC
in 4G. But what about 5G? Are there parts of MEC
that need 5G?
Caroline: This is the conversation we've had from
the beginning: people would say "why?" and "can
we?" So we started with 4G, and now we are
continuing with 4.5G. Most of our trials today sit
on 4G and 4.5G.
So what does 5G really bring you? A lot of
applications we talk about require extremely low
latency. And that's where MEC really comes in. All
the speakers at all the conferences I've gone to,
and all the operators I've talked to, emphasize NFV
as the foundation of 5G. With NFV, the investment
that's required drastically decreases, because the
foundation is a virtualized platform that uses
general purpose hardware. And the multiple-
access benefits, the MEC benefits, fit into this.
In 4G, MEC is something that is nice to have: it
opens up your revenues, it opens up your ability to
deliver services. But for 5G, MEC almost becomes
a de facto requirement. We're very excited about
it, and the road to MEC already started three or four
years ago. We've really started seeing
deployments this year, in 2017.
Monica: This process is going to
require time, because we have to
learn many things to deploy MEC, and
this requires a different mind-set.
You mentioned the benefits of MEC.
Can you go over what the value of
MEC is, especially to the enterprise?
Caroline: When we started this, we
started with caching. Moving the
content closer to the user makes
sense to a lot of people. And, as we
dig in more and more, that is still a strong use case.
Now, one of the things we're realizing
is that, once we go talk to the operator, a lot of
times the enterprise use cases become very
prominent. Internally, I call MEC the perfect
marriage between IT and telecommunications.
When you go to the enterprise, when you talk
about launching applications, many times you are
talking to the IT side of the enterprise. And IT has a
different methodology, different principles and
different practices than the
telecommunications side. The IT team is much
more comfortable when you hide that
communications part from it, making it seamless.
With MEC, we are saying, "Here is an API, as
defined by ETSI, and you don't have to worry
about all the plumbing underneath. With APIs, you
have a rapid innovation cycle, so you can start
putting in your applications. And by the way, these
are applications that IT can control, the
applications that IT wants deployed, all using a
virtual machine with security."
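The "API without the plumbing" idea can be illustrated with a short sketch of an IT application discovering edge services over REST. The field names (serName, state) loosely follow the ETSI MEC service management API (GS MEC 011) as an illustration, but treat every name here as an assumption to verify against the spec; the sample payload is invented.

```python
# Hedged sketch: an enterprise IT app asks the edge platform which services
# are available, without touching the telecom layer underneath.
import json

# Invented sample of what a service-registry response might look like.
sample_response = json.dumps([
    {"serName": "location", "state": "ACTIVE", "version": "1.0"},
    {"serName": "rni", "state": "INACTIVE", "version": "2.1"},
])

def active_services(body: str) -> list:
    """Names of the edge services an application can use right now."""
    return [s["serName"] for s in json.loads(body) if s["state"] == "ACTIVE"]

names = active_services(sample_response)  # ['location']
```

The appeal Caroline describes is visible even in this toy: the IT side works with JSON over an API, while billing, lawful intercept and radio details stay below the abstraction.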
Regarding security, as part of our learning, we
have gone through a lot of security issues related
to billing and lawful intercept. A lot of the things we
worked through have become a play for the
enterprise, really opening up and transforming the
business from a consumer-driven subscription
model to an application-services enterprise model.
Monica: This changes the relationship between
the operators and the enterprise, and gives the
operators a way to address the needs of the
enterprise in new ways. Traditionally this has been
an issue for mobile operators.
Caroline: And, in fact, we see that some of our
partners go to market through the channel model.
If they package the MEC right, they can sell it from
their channel through MVNOs. When we started
our work on MEC, this was part of our vision; it
was on our wish list. And we started seeing it
coming true as we worked through all of the
learnings. It does change the model for a lot of
operators. It makes the operator more
comfortable with the enterprise side of MEC, and
capable of solving some of the B2B issues that ETSI
is working on.
Monica: Wi-Fi plays a crucial role today in the
enterprise wireless network. How is MEC going to
fit in? You can argue that Wi-Fi is in competition
with MEC. Alternatively, you can argue that MEC
gives the enterprise a way to share the same
services across different networks, cellular and Wi-Fi.
Caroline: We see Wi-Fi and cellular as
complementary. We never view Wi-Fi as a
competitor. Cellular and Wi-Fi can coexist. In the
past, an enterprise would choose between Wi-Fi
and cellular. But we?re seeing enterprises go into
MEC deployments using both.
MEC has become access agnostic, and we are
advocating a multiple access platform. You can use
it either in cellular or Wi-Fi networks, as you see fit.
Or even in wireline networks. MEC becomes an
access-agnostic way of deploying applications. It's
right on the network's edge, as close to users as possible.
Based on what we have done, and our partners
have done, MEC is predominantly deployed in LTE
networks. But going forward, we absolutely see
Wi-Fi being part of it. That's why ETSI is moving
rapidly forward to include Wi-Fi and wireline in
MEC. Starting in late 2015, we have seen some
sporadic deployments (early trials), and
they have been predominantly LTE based.
Monica: This is a question that you probably get all
the time: What is the business model for MEC?
Because from both an enterprise and an operator
point of view, MEC requires an additional
investment in infrastructure. How can we justify
the additional cost?
Caroline: That's something we're going to embark on
this year. Now that we have deployments out
there, we want to get a TCO model. From the early
indications we have, the payback time is
promising. You roll out MEC and then you start to
leverage your new ability to deploy multiple
applications: not one or two, but tens of different
applications running on the same platform. This is
a step forward compared to the past, when you
might have had to roll out one platform for each
application. The payoff looks quite promising, and
we are working more on it this year. We would like
to publish a MEC business case for our end users.
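The payback argument Caroline makes can be made concrete with a back-of-the-envelope sketch. The numbers below are entirely made up for illustration; they are not from Intel's TCO work.

```python
# Invented-numbers sketch of the TCO argument: one MEC platform hosting
# many applications pays for itself faster than one box per application.
def payback_months(capex: float, monthly_net_benefit: float) -> float:
    """Months until cumulative benefit covers the up-front cost."""
    if monthly_net_benefit <= 0:
        return float("inf")
    return capex / monthly_net_benefit

# Hypothetical site: a $60k edge platform running ten applications, each
# clearing $1,000/month, versus one application on its own $20k appliance.
shared = payback_months(60_000, 10 * 1_000)  # 6.0 months
dedicated = payback_months(20_000, 1_000)    # 20.0 months
```

The ratio, not the absolute figures, is the point: amortizing one platform across tens of applications is what shortens the payback time the interview describes.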
Since we started working on MEC, we've been
doing a lot of edge caching. That seems to be
pretty straightforward in terms of being able to
save on backhaul. But we didn?t stop there; we
started looking at different verticals related to IoT.
We asked ourselves about retail, transportation,
health care and industrial. Each of them does have
its own matrix of things to do, so we started
attacking them one at a time.
One of the well-known trials was with China Mobile and Nokia, on Mobile Formula One, within the sports vertical. This is a rather straightforward use case, because you are simply giving your subscribers a way to see a sports event much more close up and personal. At the same time, the operator gets a share of the revenue for video broadcasting. The TCO looks very promising.
Different use cases have different metrics and different returns. We are seeing very promising results in some of the verticals, and we are working with our partners and customers to start publishing this kind of information.
Monica: As you mention, there is considerable variability across verticals and environments. Could there also be different sources of funding? For instance, since the enterprise or the venue owner stands to benefit so much from MEC, could they become willing to pay for the MEC infrastructure?
Caroline: I think that's very feasible, especially if you look at all the new spectrum that is becoming available. Here in the US, there's discussion around CBRS and lightly licensed spectrum. It does create a different environment. Opportunities created by new spectrum mean that vendors can sell new services and applications through vertical channels.
Or what about an MVNO coming in and funding part of this? It doesn't all have to be funded by the operators themselves. The enterprise may purchase some of the equipment under a sharing model. I think MEC becomes an enabler, and this makes the different ownership models interesting.
Spectrum owners need to see a return on their capex. If MEC is proven to fit well within their business needs, it gives them an incentive to take the next step and become a participant in the ecosystem.
Monica: We've been mostly talking so far about MEC, but there are other initiatives, such as OpenFog and CORD. How do you see MEC within the context of these other initiatives?
Caroline: Every time people say there's a new initiative coming, it means there's heightened interest within the ecosystem to make edge computing happen.
I think OpenFog, Open Compute, CORD, and MEC will eventually all merge, because they serve different segments. I think that's what ETSI is trying to do. At the last ETSI meeting on MEC, we hosted an OpenFog day, where the two camps sat together and we learned from each other.
Will they all become one initiative? I don't know yet. We have seen multiple initiatives coexist, addressing different parts of the market. But I do see some merging, especially as IoT becomes 50% of the market; that needs to be addressed from an enterprise perspective. For now, all of the innovation coming from all camps helps the industry move forward. And to move faster.
Monica: Could there be a risk of fragmentation, or the development of proprietary solutions, in this space?
Caroline: I hope not. I guess there's always a risk, but the camps are absolutely in dialogue. At that day at ETSI, I learned so much from the OpenFog people, and there was a real exchange of ideas. So I don't see a high risk of fragmentation. I see the camps learning from each other, maybe initially addressing different parts of the market, and then a probable convergence at some point.
Monica: Recently there has been a lot of interest in network slicing. With network slicing, you manage traffic based on applications and services. This is something that MEC does as well, because depending on the function, you do the processing in a different location in the network.
This is different from today's networks, where all the traffic goes through the same channel. So what is the relationship between network slicing and MEC, or edge computing?
Caroline: MEC and network slicing complement and enable each other. In fact, the Intel Mobile World Congress 2017 demo will include both network slicing and MEC. I don't see them as competing with each other at all. Network slicing is an enabler for 5G, and it is built on top of a solid and flexible infrastructure.
There will be different slices targeting the massive and varied IoT ecosystem and enhanced mobile broadband. MEC can sit in each one of these slices, and we're going to demonstrate that the two enhance each other very well.
Monica: Intel has been quite busy with edge computing trials. What were the major lessons you learned?
Caroline: Number one is that when we started, there were a lot of questions about the killer app. For a while, I couldn't get my head around "What is a killer app?" One thing we learned is that securing the network is a fundamental priority, both in the hardware and in the software. I remember we saw this for the first time when lawful intercept was introduced.
Also, once you start doing the trials, once you start talking directly to different segments and different enterprises, you see a set of killer apps. There's not one app that fits all. Every vertical has its own killer apps.
The most fundamental thing is to provide a platform that is secure, that is virtualized, and that has a fast deployment cycle. This is very, very important; it's one of the hallmarks of a virtualized platform. And it must not impede the operations side. In other words, all your billing and all your OAM (operations, administration and maintenance) need to be packaged well.
And then the other learning point is that you do need to take care of the channel. You need to make your packaging such that you can easily sell through the channels. You don't need to worry about the killer apps, because once you go out there and engage, you will get the applications that run efficiently, cost-effectively and beneficially on the platform.
So what's my biggest lesson learned? For a while, we were brainstorming and doing surveys. And then we realized that before we found the killer app, we had to build a platform that people are willing to deploy. If you pass the security audit, then the application discussion can begin.
Monica: How do you see Intel's role in promoting the growth of the ecosystem?
Caroline: Intel has always played the role of technology-enabler partner. For this, we have the Network Builders program. It's part of our NFV initiative, and it's a way for us to get our ecosystem together. It includes many types of companies: software companies, component providers, system integrators and middleware companies. They rally with us around MEC's vision.
With the Network Builders program, we're matchmaking. There is even some enabling funding. We also have a very large enterprise sales force, and we organize and work together with our partners. Ultimately, we don't sell the system, we sell the technology. So we will innovate, and invest in growing the ecosystem.
We'll also do things like study the provider business case and business model, and we'll participate in the ETSI standard. We're part of the OpenFog initiative as well, and we help make sure that the standard is open and adheres to NFV principles, so it really becomes an enabler of the ecosystem.
Monica: What are you working on right now, and what should we expect from Intel over the next few years?
Caroline: We're making it well known that we are heavily invested in 5G. We tied MEC into our 5G initiative, and that's why it's part of my team's mandate. We'll work closely with partners and customers on it.
We're developing many solutions, including network slicing and a reference platform where we are advancing the ecosystem using network slicing.
We are also making the MEC SDK (software development kit) available to the ecosystem. Anybody who wants it can come to intel.com; if you qualify, we will send you one.
And we're going to be supporting and participating in more trials and proofs of concept, and really seeing this take off in the industry.
Intel (NASDAQ: INTC) expands the boundaries of technology to make the most amazing experiences possible. As the leader in the PC industry, Intel is powering the majority of the world's data centers, connecting hundreds of millions of mobile and Internet of Things (IoT) devices, and helping to secure and protect enterprise and government IT systems. Our manufacturing advantage, fueled by our pursuit of Moore's Law, lets us continuously push the limits of performance and functionality and expand what experiences can be made possible. Intel has a growing portfolio of products and technologies that deliver solutions to help communication service providers transform their networks, bringing advanced performance and intelligence from the core of the data center to the network edge. Intel's commitment to network transformation is long and deep, with years invested in delivering reference architectures, growing a strong ecosystem, and partnering with end users. We are also deeply committed to 5G, which represents the true convergence of computing and communications. 5G is a fundamental shift for the industry, where networks will transform to become faster, smarter, and more efficient to realize the potential of IoT and mobility, enabling richer experiences throughout daily life: augmented reality, smart cities, telemedicine, and more. Information about Intel and the work of its more than 100,000 employees can be found at newsroom.intel.com and intel.com.
About Caroline Chan
Caroline Chan is Vice President, Data Center Group, and General Manager of the 5G Infrastructure Division within Intel's Network Platform Group (NPG). She is responsible for leading a cross-functional organization driving global network infrastructure strategy for 5G, bringing Intel processors into the wireless infrastructure through projects such as virtualized RAN, mini-cloud RAN, 5G networks, heterogeneous networks consisting of small cells and Wi-Fi, and mobile edge computing for IoT. In her role, she works closely with telecommunications vendors, operators, and application developers. Caroline also represents Intel at industry forums. Her research interests include 5G and HetNet performance. Prior to joining Intel, Caroline was Director of Product Management at Nortel Networks, where she managed a portfolio of 3G and 4G wireless infrastructure products. Caroline was born in Nanjing, China, and received her BS EE from the University of Texas at Austin and her MS EE from the University of Massachusetts at Amherst. Outside of her family and work, Caroline is passionate about the Texas Longhorns football team.
For over four decades, InterDigital has developed new mobile technologies and contributed to global wireless standards. The company's activities are organized around the concept of the emerging Living Network, in which intelligent networks self-optimize to deliver services that are tailored to the content, context and connectivity of the user, device or need.
In the edge computing space, InterDigital strives to bring MEC to the far edge: small cells, HeNBs and STBs, as well as user devices such as personal devices or devices embedded in vehicles. Processing and storage at the far edge enable ultra-low latency, location-specific and real-time traffic optimization, agility and adaptability, as well as context awareness. These performance enhancements are designed to support distributed MEC use cases such as multimedia content, gaming, and personalized cognitive assistance, and generally to aid the development of applications that benefit from edge architectures.
InterDigital actively participates in ETSI MEC, in the OpenFog Consortium and other open-edge initiatives, and in groups with a wider mandate, such as 3GPP, the European Commission's Horizon 2020 and NSF's PAWR. It also participates in an ETSI MEC PoC in collaboration with Intracom, CVTC Limited and the University of Essex. The PoC uses FLIPS, a multicast video technology designed to lower the latency in the transmission of real-time content.
InterDigital's MEC vision is rooted in two technologies that the company believes complement and enable MEC deployments:
• Flexible routing, which combines SDN and ICN and provides the foundation for FLIPS. ICN shifts networking away from host-to-host communications to content- and name-based addressing. InterDigital believes that this paradigm shift is necessary to meet 5G's latency requirements, by facilitating pushing content to the edge.
• Dynamic surrogates, which support ICN-based orchestration that allows operators to allocate network processing resources where and when they are needed, using softwarized servers located at the edge. The goal of surrogated services is to predict and dynamically meet traffic demand, taking advantage of the flexibility afforded by NFV.
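The name-based addressing the first bullet describes can be sketched with a toy two-node hierarchy: a request names content rather than a host, and whichever node holds a copy answers, with on-path caching pulling popular content toward the edge. This is only an illustration of the ICN idea; node and content names are invented for the example.

```python
# Toy sketch of ICN-style name-based resolution with on-path caching.
# Not FLIPS or any real ICN implementation; names are invented.

class IcnNode:
    def __init__(self, name, store=None, upstream=None):
        self.name = name
        self.store = dict(store or {})   # content name -> data
        self.upstream = upstream         # next node toward the origin

    def request(self, content_name):
        """Resolve by content name: serve locally if cached, otherwise
        forward upstream and cache the reply on the way back."""
        if content_name in self.store:
            return self.store[content_name], self.name
        data, found_at = self.upstream.request(content_name)
        self.store[content_name] = data  # future requests stay at the edge
        return data, found_at

origin = IcnNode("origin", {"/video/clip1": b"..."})
edge = IcnNode("edge", upstream=origin)

data, served_by = edge.request("/video/clip1")  # first request reaches the origin
data, served_by = edge.request("/video/clip1")  # second is answered at the edge
print(served_by)  # prints "edge"
```

Because the request names the content, the edge node can answer it once a copy exists there, which is exactly the "pushing content to the edge" effect the bullet refers to.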
InterDigital continues to pursue its R&D activity to
integrate services with the network infrastructure,
as SDN, NFV and MEC converge in the evolution of
4G and the development of 5G.
with edge computing
A conversation with Debashish Purkayastha, Member of Technical Staff, InterDigital
Monica Paolini: MEC and, more generally, edge computing are crucial to ensuring that low-latency and high-reliability applications can be successfully deployed in 4G and 5G networks.
Debashish Purkayastha, Member of Technical Staff at InterDigital, shares with us his thoughts on how this transition to mobile edge computing, through initiatives such as MEC and OpenFog, will change the way we operate and benefit from wireless networks, and how we will be able to meet the new latency and reliability requirements.
Debashish, can you tell us what you and InterDigital are doing in mobile edge computing?
Debashish Purkayastha: The development of the mobile edge network has the potential to lead us to the next killer app, for two main reasons. The first is that edge computing can dramatically increase the responsiveness of the network. The second is that it gives application programmers access to mobile network information, which many developers don't have at this time.
We started to work on edge computing a few years back, when we identified the need to enable the edge. This means adding capabilities at the edge and defining a range of services that can be offered from the edge, taking advantage of network information as well as user information.
We started by working on edge caching, storing content at the edge of the network, and then enabling video-distribution services from the edge with more detailed knowledge of what the user likes and what the user's preferences are, because those can be easily and accurately measured from the edge of the network.
While working on that, we realized that we did not need to constrain ourselves to content, and that we could move computation, as well as storage, to the edge of the network.
The first challenge that we faced was defining where, and what exactly, the edge is. Depending on the use case, the edge may be at different levels of the network.
Another problem is the availability of so many devices and platforms, which creates a heterogeneous environment: devices of different shapes, different sizes, different capabilities. These devices may be owned by different parties and different network operators, and use different access mechanisms. It's difficult to develop applications on such a wide range of platforms.
The third challenge is for the developers. With so many different devices, different access mechanisms and different owners, how can a developer write an application that runs uniformly across all such platforms?
Monica: With edge computing, we gain the opportunity to manage traffic in a more intelligent and sophisticated way. But at the same time, we have to deal with a higher level of complexity. From an operator point of view, the increase in complexity can be challenging. What are the advantages?
Debashish: The main advantage is the reduction in latency. With lower latency, we will see instant improvements in the responsiveness of applications, which in turn will improve the satisfaction of the customers that network operators serve.
Another important aspect of edge computing is the reduction in traffic in the backhaul network. As small cells are deployed, connecting each small cell to the backhaul network can be challenging. It is difficult to run fiber to each small cell, and this creates a bottleneck in the operator's network.
Edge computing avoids that potential bottleneck by reducing traffic, because we have the capability to process users' requests and data at the edge. It becomes possible to respond to user requests from the edge and avoid data traveling over the backhaul network.
The third important aspect is privacy and security. As personal data traverses the internet, it is vulnerable to being stolen. With edge computing, we can process data at the edge, so we can keep it in a local context. We don't let it go outside the local context, into the internet. Many users and applications demand that kind of privacy protection.
Monica: You mentioned you may want to keep content local to enhance security, or to choose different levels of performance for applications. But who is going to decide?
Obviously, the first candidate is the mobile operator, which decides how to manage traffic based on its network resources. But it could also be the enterprise that decides what content stays local. Or it could be the user or the content provider. How do you see that playing out?
Debashish: By deploying MEC, we have more options for traffic management and traffic control. Where does my data go? How far does it go? These are decisions that are available to the user, the network operator, the application developers, and the enterprise itself. Right now, we think it is a decision that can be application specific or user-requirement specific. There will not be a single rule, and the decision may vary depending on the application, the context, and so on.
For example, in certain applications, such as a video distribution service, the user may be OK with lower-quality video, and this allows the network operator to lower the video quality to improve the efficiency of the network. In such cases, the operator may decide what data rate a particular user should be allowed, based on the utilization of network resources. For example, in the video distribution service, it may not be possible to give HD video to every user who says, "I always want HD-quality video." In that kind of application, the control will likely stay with the operator.
It is different, however, if we're talking about user data, such as medical records being processed at the edge in a medical application. In that case, it may be the user who says, "My data should be processed at the edge," and attaches that information to the request sent to the network. Users can even specify, for example, the edge server where they want their data processed, because they may know that only that specific server can provide the security they expect.
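The idea of a user attaching a locality requirement to a request can be sketched as follows. The field names (`processing_scope`, `preferred_edge_server`) and the routing policy are invented for illustration; no MEC specification defines them.

```python
# Hypothetical sketch: a client attaches a data-locality hint to its
# request, and the edge platform honors it when deciding where to
# process the data. All names here are illustrative, not standardized.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    payload: bytes
    processing_scope: str = "any"               # "edge_only" keeps data local
    preferred_edge_server: Optional[str] = None  # user-specified trusted server

def route(request, local_servers):
    """Decide where a request is processed, honoring locality hints."""
    if request.preferred_edge_server in local_servers:
        return request.preferred_edge_server
    if request.processing_scope == "edge_only":
        # Must stay local: pick a local edge server, never the cloud.
        return local_servers[0] if local_servers else None
    return "central_cloud"

servers = ["edge-hospital-1", "edge-hospital-2"]
# A medical record pinned to a specific trusted edge server:
print(route(Request(b"ecg", "edge_only", "edge-hospital-2"), servers))
# A photo with no locality requirement can go to the cloud:
print(route(Request(b"photo"), servers))
```

The point of the sketch is that the locality decision rides with the request, so it can differ per application and per user, as described above.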
Monica: Within ETSI, there is an industry-wide effort to standardize edge computing with MEC. A MEC standard is an enabler for edge computing in multi-vendor networks: you need multiple vendors for edge computing, and interoperability is required. What is the progress to date?
Debashish: Standardization is important for MEC technology to be adopted widely. There are areas where standardization is required, and some that we can leave open.
For example, MEC is a combination of two technologies. One is the networking technology, and the other is IT, or information technology.
From the networking technology perspective, a MEC platform has to be connected with the core network and with the access network. We may need to standardize that interface; otherwise we may have issues with interoperability. Also, we may need to standardize how a MEC platform will collect and provide network information to applications. The information that the platform provides may be a candidate for standardization.
If we look at the IT side of MEC technology, we may need to standardize how an application can be run on a platform, what APIs it can use, and what services the platform provides.
By standardizing those APIs, we enable developers to write applications for platforms from different vendors. That is also a very important part of the standardization effort.
But we can leave open certain decisions about implementation, such as how the application provider reads, processes and uses the data. Those are definitely aspects that we do not plan to standardize.
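The portability benefit of standardized APIs can be illustrated with a toy service registry: as long as every vendor's platform exposes services under the same names, one application runs unchanged on all of them. The interface below is invented for illustration and is not the ETSI MEC API surface.

```python
# Toy sketch: two "vendors" implement the same standardized service name
# differently; one application works on both. Names are hypothetical.

class MecPlatform:
    """Minimal service registry an edge platform might expose."""
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        self._services[name] = handler

    def discover(self, name):
        return self._services.get(name)

vendor_a, vendor_b = MecPlatform(), MecPlatform()
vendor_a.register("radio_network_info", lambda: {"cell_load": 0.4})
vendor_b.register("radio_network_info", lambda: {"cell_load": 0.9})

def app(platform):
    """Portable application: it depends only on the standardized name."""
    rni = platform.discover("radio_network_info")
    return "reduce bitrate" if rni()["cell_load"] > 0.8 else "full quality"

print(app(vendor_a))  # full quality
print(app(vendor_b))  # reduce bitrate
```

What stays open, per the interview, is how each vendor implements the handler behind the name; only the discovery contract and the data it returns would need standardizing.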
Monica: The standardization process is going to
create a foundation for an ecosystem for MEC and
other edge-computing initiatives. What is
InterDigital doing to strengthen this ecosystem?
Debashish: We believe the creation of the ecosystem is important. This can be done in multiple ways: first, by participating in the standardization efforts; second, by coming up with test beds and proofs of concept.
At InterDigital, we have developed a FLIPS-based proof of concept, available in the ETSI PoC Zone. We also actively participate in many of these consortiums, where we contribute to building test beds to evaluate MEC technology. ETSI MEC and other consortiums are developing APIs that allow application developers to come in and build applications for the platform.
In ETSI MEC's phase one work, APIs have already been defined. They will enable application developers to build and run their applications. Many vendors are coming up with platforms that run applications at the edge.
We believe that in the near future, with the standardization of APIs and the development of test beds and proofs of concept, we will see more development and growth in the MEC ecosystem.
Monica: What is the relationship between 5G and MEC? Do we need to wait for 5G before we deploy MEC?
Debashish: We see a lot of references about the link from MEC to 5G, and from 5G to MEC. Today, MEC is being deployed in 4G networks to offer innovative services and enhance the user experience. However, MEC will reach its potential with 5G, and 5G will reach its potential with MEC. They complement each other. MEC is the foundational element that will allow 5G to live up to its potential.
In the IMT-2020 5G use-case triangle, MEC is a foundational element in many of the cases at the ultra-reliable, low-latency vertex. For example, the requirements of use cases such as connected gaming, cognitive assistance, industry automation and mission-critical applications cannot be met without MEC. Furthermore, MEC provides network-related information that will be valuable in 5G networks.
Monica: Virtualization and MEC are closely related. It's difficult to think of mobile edge computing without some level of virtualization, because virtualization allows you to move functions to the edge, or at least makes it easier.
Debashish: MEC is a combination of networking technology and information technology, otherwise known as cloud technology. If we talk about cloud virtualization, the use of virtual machines is what comes to mind automatically. The ETSI MEC reference architecture describes the virtualization of the MEC platform. ETSI MEC actively collaborates with another ETSI group, ETSI NFV, which works on Network Functions Virtualization.
To enable cloud resources at the edge, we definitely need virtualization techniques. It's not only VM, or virtual machine, technology that will enable MEC or make MEC successful. We also need to look into other technologies that enable virtualization.
One example is containers, or microservices. They enable virtualization, but they're different from the virtual machine concept. They will also come into the picture as MEC gets deployed across multiple use cases.
Monica: There are different definitions of edge computing. Where is the edge? You can argue that the edge is at the eNodeB, or it could be at an aggregation point in the C-RAN. Or it could be in the device. Where is the edge? And is there one edge?
Debashish: We think the edge can be at multiple levels of the network. Again, it really depends on the use case, or the verticals that we are talking about.
If we talk about, say, ultra-reliable, low-latency use cases, it makes sense for the edge to be defined as what we call the leaf of the network, the extreme edge. It's not even the eNodeB. It may not even be a small cell. It can be the user device itself.
In certain cases, where ultra-low latency is not required but, say, we need a large amount of computing, it's OK to put the edge one or two levels higher in the network: maybe at the core network level or, in certain cases, in the distant cloud itself. It's based on the requirements of the verticals we are considering. Where we want to apply MEC technology will determine where the edge is.
From an application-developer perspective, before writing the code or developing the application, a developer needs to first clearly identify where the application will benefit, and decide where it should be deployed to get the most benefit out of edge computing.
Based on those requirements, the application will
be deployed at a certain location in the network.
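The tier-selection reasoning Debashish walks through can be sketched as a small function that picks the shallowest network tier meeting an application's needs. The tier names, latency figures and compute capacities below are illustrative assumptions, not values from the interview or any standard.

```python
# Illustrative sketch: pick the closest-to-user network tier that meets an
# application's latency budget and compute demand. Tier latencies and
# capacities are hypothetical round numbers, not measured values.

TIERS = [
    # (name, typical round-trip latency in ms, available compute units)
    ("device",          1,    1),
    ("small_cell",      5,    4),
    ("eNodeB",         10,   16),
    ("core",           30,  256),
    ("central_cloud",  80, 4096),
]

def place_app(latency_budget_ms, compute_needed):
    """Return the first (closest-to-user) tier that satisfies both
    the latency budget and the compute requirement, or None."""
    for name, latency, capacity in TIERS:
        if latency <= latency_budget_ms and capacity >= compute_needed:
            return name
    return None

# A URLLC app (e.g., cognitive assistance) lands at the far edge:
print(place_app(latency_budget_ms=5, compute_needed=2))     # small_cell
# OCR offloading tolerates more latency in exchange for more compute:
print(place_app(latency_budget_ms=50, compute_needed=100))  # core
```

This mirrors the two cases in the answer: ultra-low-latency work stays at the leaf, while compute-heavy, latency-tolerant work moves one or two levels up.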
Monica: What are the best use cases for mobile
edge computing, and where should they be
located within the edge?
Debashish: For ultra-reliable, low-latency communications (URLLC) use cases, we see applications like connected gaming, cognitive assistance, autonomous navigation for self-driving cars, and even drones benefiting the most from MEC.
It would be beneficial for those use cases, where we need very low latency, to move computing to the edge of the network, what we also call the far edge: maybe in the small cells, or in the access points. Certain parts of the edge resources can even be put in the device itself.
In other cases, for example photo editing or OCR applications, computation may not need to be in the device itself. We can offload computation to the network, say, one level higher in the aggregation, in the eNodeB, for example, or maybe at the core network level.
Monica: Let's look at the business model. Are operators going to pay for MEC, or are the content providers, enterprises and venue owners going to pay for some or all of MEC's infrastructure?
Debashish: The business model is still evolving. The industry doesn't have a clear answer to questions like "What's the business model?" or "Who pays for MEC?"
We think the initial adoption of MEC will be driven by enterprise customers to support some of their applications, such as industrial IoT, surveillance, covering sporting events, and distributing the content generated at those events.
Moving beyond enterprise customers, there are use cases where both end users and network operators benefit, in which case the question of "Who pays for that?" is still not clearly answered.
Network operators may deploy MEC with the view that they will be able to generate additional revenues. Revenues may come from users or from application developers. By providing services such as computation, network information, location information and so on, network operators hope to generate additional revenues.
On the other side, users may not be willing to pay, because, from their point of view, operators deploy edge computing to solve a network-related problem.
That is an area which is still not clearly defined, but
we think it will be driven by how successfully MEC
platforms solve problems related to latency,
backhaul traffic and computational offloading. If
MEC enables innovative services and improves the
performance of applications, users may be willing
to pay for those value-added services.
Monica: Can you tell us what you're working on right now in preparation for the next five years?
Debashish: At InterDigital, we are still debating exactly where the edge is. Is it at the core network, at the eNodeBs, or in the devices?
From our perspective, we think that we will benefit by pushing applications to the far edge, i.e., very far out in the network. We are interested in enabling applications that will require this far-edge MEC to support low-latency use cases.
We are now working to build a MEC platform that can run in far-edge devices. When I say "far-edge devices," I mean a small cell or a Wi-Fi access point, which will not need, say, any additional computational infrastructure. We will try to use whatever computational capability those devices already have.
We are planning to build a platform that can be used across all these different devices, and that enables application developers to write a single application for multiple devices.
InterDigital develops technologies that are at the core of mobile devices, networks, and services worldwide. We solve many of the industry's most critical and complex technical challenges, inventing solutions for more efficient broadband networks and a richer multimedia experience years ahead of market deployment. InterDigital has licenses and strategic relationships with many of the world's leading wireless companies. Founded in 1972, InterDigital is listed on NASDAQ and is included in the S&P MidCap 400® index.
About Debashish Purkayastha
Debashish Purkayastha is a Member of the Technical Staff in the Technology Evaluation and Prototyping Group at InterDigital. He is part of a team working on Multiple-Access Edge Computing (MEC), currently focusing on activities related to MEC in 5G and multi-RAT environments, and on distributed virtualization to enable computing at the extreme edge of the network. He has been working in the wireless communications industry for more than twenty years, focusing on the design and development of 3G, 4G and 5G cellular and Wi-Fi systems. He has been granted 25 patents, with numerous applications pending. He holds a master's degree in computer engineering from Villanova University in Villanova, Pennsylvania, USA.
Qwilt was founded in 2010 to help broadband
fixed and wireless service providers optimize the
delivery of video traffic, both to meet the capacity
and latency requirements of high video traffic
loads, and to improve quality of experience.
From the beginning, Qwilt has developed solutions
based on open caching to manage video content
from multiple sources and direct it to a variety of
subscriber devices that work across different types
of networks. Depending on the application
requirements and service provider preferences,
Qwilt solutions can be deployed in the core, at the
Gi-LAN, or in the edge cloud, at the eNodeB.
Qwilt's Open Edge Cloud platform provides open caching and content delivery solutions for service providers that want to optimize the delivery of video and other real-time applications, such as augmented reality and virtual reality, at the edge of the network.
The Edge Cloud platform leverages compute and storage capabilities as close as possible to the edge, and hence to the users, to minimize latency. It relies on cloud management, connectivity, and open APIs using Edge Cloud Nodes. The Edge Cloud platform is not designed to replace or compete with CDNs, but rather to complement them, carrying traffic where CDN nodes are not deployed or are not cost effective.
Qwilt's Edge Cloud solution aims to extend the footprint of content providers and CDNs, to lower transport costs and to improve delivery quality. It also allows them to manage traffic spikes and adjust to the uneven distribution of traffic through time and, hence, to utilize network infrastructure more efficiently and reduce the need for capacity upgrades.
The solution allows service providers to meet
anticipated traffic spikes (e.g., a game or an
update) or unexpected ones (e.g., an accident or a
viral video). It can lower capex (less infrastructure
investment is needed) and lower opex (peering
and transit costs are lower).
Open Caching software combines open caching
with media delivery and analytics, and runs on an
NFV platform, the Video Fabric Controller, that
runs on COTS. With open caching, data that is used
frequently is stored at the edge and delivered
when requested, without requiring any action
from content providers, CDNs, or subscribers.
Qwilt estimates that 10% of titles account for over
80% of video traffic. Without caching, this
frequently accessed content has to repeatedly
traverse the network, increasing costs and latency,
and lowering QoE. Caching at the edge enables
operators to reduce traffic and cost within the
core, and minimize latency. According to Qwilt,
with open caching operators see an average
streamed bit-rate increase ranging from 55% in
North America, to 133% in Asia.
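The arithmetic behind that statistic can be sketched in a few lines. This is purely our illustration, not Qwilt's implementation: we assume a Zipf-like popularity distribution (a common model for video catalogs) and measure what share of requests an edge cache holding only the most popular 10% of titles could serve locally.

```python
import random
from collections import Counter

# Illustrative only: Zipf-like request popularity over a catalog of titles.
random.seed(42)
NUM_TITLES = 1_000
NUM_REQUESTS = 100_000

# Title k is requested with probability proportional to 1/k.
weights = [1.0 / k for k in range(1, NUM_TITLES + 1)]
requests = random.choices(range(NUM_TITLES), weights=weights, k=NUM_REQUESTS)

# An edge cache that holds only the top 10% most-requested titles.
counts = Counter(requests)
cached = {title for title, _ in counts.most_common(NUM_TITLES // 10)}
hits = sum(1 for r in requests if r in cached)

print(f"cache with 10% of titles serves {hits / NUM_REQUESTS:.0%} of requests")
```

With these assumed parameters, the top tenth of the catalog absorbs well over half of all requests, which is the intuition behind serving frequently accessed content from the edge instead of repeatedly pulling it across the core.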
Video at the edge optimizes fixed and mobile networks
A conversation with Dan Sahar, Co-Founder and VP of Product Marketing, Qwilt
Monica Paolini: Video is the traffic type that
benefits the most from edge processing and
storage, both in fixed and mobile networks. And
increasingly they both share platforms and traffic
management tools. Edge computing will
accelerate this trend. We explore these topics with
Dan Sahar, Co-founder and VP of Product
Marketing at Qwilt.
Dan, can you tell us what it is that you do at Qwilt to
help service providers improve the subscriber
experience with streaming video?
Dan Sahar: Qwilt was founded in 2010, with the
objective of helping service providers address the
growth in online video. Streaming video was just
starting back then, with the likes of Netflix and
YouTube, and that had a major impact on service providers.
We realized the best way to address that challenge
was by changing the way broadband networks and
mobile networks are built, and by bringing the
content delivery function of those videos,
primarily, closer to the subscriber. And by doing
that, we gain efficiencies on multiple fronts, both
on the economic front as well as on the quality of
experience front. The solution we created is one
that enables these service providers to do exactly
that. You could think of it as the last tier of content
delivery that sits inside the service provider
network, and is able to acquire content from a
range of sources, and deliver it in close proximity
to where the users are.
Monica: That's a major challenge, because video is the most difficult type of content to transfer over fixed or mobile networks. Can you tell us what you do to the content itself? How has it changed over the years?
Dan: One key principle in our solution from day
one was that we maintain the same fidelity for the
video as it was originally streamed and thus we do
not make any changes to the videos themselves.
Video has changed in several ways through the
years, from progressive download initially, into
adaptive bit rate. Adaptive bit rate is probably the
method most streaming video providers use
today; progressive download has pretty much
faded out. And the move we're seeing right now is to use adaptive bit rate for both live and VOD content, and to look at ways to optimize and secure that delivery. We're also seeing growth in TLS and HTTPS delivery. Our solution evolved to address ABR, as well as changes on the transport and content sides.
Monica: When we talk about edge computing,
how do you define the edge? Where is the
location of the edge that optimizes delivery of
video and other types of content?
Dan: We see the edge primarily in two locations.
Inside the network, it would be the first IP location in the network. On the fixed-line side, that might be a CMTS location in a cable network, or the B-RAS. On
the mobile side, it used to be at the SGi or Gi level.
We're seeing it move deeper, to the eNodeB and
S1 interface. That would be the first point of edge
inside the network.
The second place where the edge can play a role is
on the device or at the home. On the device, this
can be the handset and the software application
running on the device. At the home, it can be a
residential gateway, or even an Apple TV or a
Chromecast or an Amazon Echo that has some
built-in content delivery capability inside of it.
And there are different characteristics for each one of these locations: some have more processing power and more storage; others have less processing power and less storage, but they're a lot
closer to the consumer, so they bring more value
to the entire value chain.
Monica: How do you decide where the edge is for
a specific type of content, application, content
provider or service provider? Is there an easy way
to figure it out?
Dan: I think there's no one right answer for that. You could equate this to the way packages are delivered in the real world. You can have the US Postal Service, you can have FedEx, and pretty soon you'll
have drones delivering them to your home. They
all get the package there on time, but some of
them cost more, and each one has different
capabilities. That's a good analogy to how content
delivery is done. Some things you can deliver from
the centralized cloud – for example, from an AWS data center – and for other things, there's a lot more value to doing them at the deep edge of the network.
Ultimately it has to have an economic balance.
There has to be a benefit to the service provider to
deploy edge computing and storage resources so
that content delivery can leverage these
capabilities. Then you have to decide which
content can be delivered from the centralized
cloud, and which content has to come from the edge.
Monica: At the beginning, video was over fixed
networks. And when video over mobile came
along, it was a completely different thing ? just
short videos, and different from video over fixed
networks. Is that still the case, that we can make a
clear distinction between fixed and mobile?
Dan: I think they're becoming very, very much the same. And operators are changing as video consumption changes.
On the consumer side, people watch videos on
their mobile devices. It can be on Wi-Fi, but when
they get out of their own homes, they continue to
watch the same videos. The content is becoming a
lot more similar across delivery mediums. Maybe
you have a bigger screen on one than on the
other, but the content is not different.
There's some adaptation the content provider has
to do for the screen size, but, other than that, the
transport medium and the ABR formats are exactly
the same on both access types. And I think it's a good thing: the industry is becoming one big video medium, so you can watch video wherever you are.
Monica: And the expectations are pretty much the
same from the user perspective, so they're not willing to say, "Well, since it's mobile, it might not be as good quality." They expect the same good
quality they expect from fixed networks. What
does this convergence mean for Qwilt in terms of
the solutions you provide?
Dan: Our solution has two main components. One
is the edge-cloud nodes, the cache software that
sits inside the service provider network. The
second is the cloud component – you can think of it as the control plane – which decides how to delegate the traffic to those caches.
Now, these edge cloud nodes can sit on the mobile
side, and they can sit on the fixed-line side. What
goes into them has to be location specific. If I have
a software node that sits on the fixed-line side, it
will cache the content that is relevant for that part
of the network. If an operator has both mobile and
fixed, maybe it'll have nodes that have the mobile formats for those videos on the mobile side, while the equivalent nodes on the fixed-line side will have encoding that is more suitable for residential devices such as Apple TV or Chromecast. But their function will be very much the same.
Monica: That's good also for those service
providers that have both fixed and mobile. They
can manage video and other content using the
same platform on both networks.
Dan: The operator will have a single platform it can
manage across both mobile and fixed, and this
platform will also be able to address the operator's
own content. Many operators have a video side to
their business, as well as third-party OTT content
that comes over the internet.
The vision is that there will be one layer that
addresses both of these content sources, and
there would be a single resource that can adapt
and handle both. If an SP has a big launch of a new
series, you can allocate the resource to that. If
there's a big live event going out from OTT
sources, the resources will shift to that.
Monica: You raise an important issue: either the service provider or a content provider may own the content. As we move more functionality to the
edge, how is the relationship between content
provider and service provider changing?
Dan: With our solution, we help the service
provider become part of the content delivery
chain. We do this by creating an API that enables
various content providers to make use of these
resources. But the owner of the actual
components ? the storage and compute ? is the
service provider, and the service providers have to
decide how to make use of these resources. We
give them the tools to do exactly that.
I think the understanding across the board is that
consumers expect to have content from a range of
sources – not just from the service provider's own offering – and the service providers have to build
the network that can handle them all.
Monica: The service providers are in control of
how the content is transmitted, but at the same
time, they need to work with anybody that
provides that content to make sure the content
delivery is working as expected.
Dan: Exactly. I wouldn't say they take control; I would
say they participate in content delivery. Because
the current delivery medium is still valuable and is
still going to be used. A lot of content is going to be
streamed from the centralized cloud platform.
For some things, such as popular content, or
content that is very latency sensitive, you can put
into play the edge resources that the service
provider has deployed. These are network
resources that nobody else has, and content
providers can decide which content to stream
from the centralized cloud, which to delegate to
the service provider for delivery at the edge. I think
operators could play a much bigger role than they
do today in content delivery with edge cloud resources.
Monica: Within edge computing, how much are
efforts such as MEC and Fog Computing
accelerating this trend?
Dan: There are technical challenges on several
fronts. On the mobile side, one is how to get a
server function to be inside the mobile network,
and MEC is doing a lot of work on that front. If you're deploying content delivery functions on the S1 interface, how are you going to take care of billing? How are you going to get into the GTP tunnels? There are a number of questions. You could think of them as interworking-function
questions covering how to place a server inside a
mobile network. And I think the MEC group is
doing a great job on that front.
There?s another set of interfaces that have to be
defined: how does a content provider that has a
VOD or live offering on the internet make use of
these service provider resources? How does an SP
publish these resources to the outside world as a
function that content providers can leverage?
Qwilt has been doing a lot of work on that front
under the scope of an industry group, the
Streaming Video Alliance, which is creating the set
of APIs to enable a content provider or a
commercial CDN to use what's known as open
caching functions that sit inside the service
provider edge cloud.
Monica: You've been working on this since before MEC standardization work started. What have you
learned so far?
Dan: The ecosystem needs a lot of balancing. You have a situation where some service providers compete to some extent with internet content providers – they both have video services – but there is the common understanding that the consumer is shared between the two. And there's
a greater maturity in the industry now: people are
figuring out that the content providers, CDNs and
service providers have to work together to create
an infrastructure that will benefit everybody
economically, but also quality-wise. That?s
something that took many years to build, and
when we started out there was little collaboration between the two sides – the service providers and the content providers. It's becoming a lot better now.
Monica: And I think that's crucial, because, as you
said in the beginning, monetization is a big issue. Is
somebody getting a free ride?
And do you think there is now a balance within the
ecosystem as to how much different players
benefit and have to commit to financially?
Dan: You see several initiatives that are driving
more collaboration, like the Streaming Video
Alliance that I mentioned earlier. The TIP project
that Facebook is driving is another such initiative.
And ultimately you see a far greater collaboration
between the two sides that didn?t exist before.
Service providers have a lot to offer to the
ecosystem, and they bring important assets. They
own the network, and the network has capabilities
that are unique, that you cannot just get
anywhere. If I'm an internet content provider, there's no way I can get my hands on multicast in
the access network without a service provider to
help me out.
And then on the other side you see a service
provider understanding that content providers
drive a lot more data into the network, which is
good, and ultimately, that's what the consumers consume – a lot more over-the-top content – so
their network has to support that.
Monica: What does Qwilt do that is crucially
needed and different from everybody else?
Dan: The first fundamental principle at Qwilt was
that we are at the edge. We've been deploying software at the edge of the network where compute and storage resources did not previously exist, and we were one of the first companies to do so.
And the edge has a lot of intricacies that are not
trivial to solve. How do you manage massively
distributed software nodes? How do you squeeze
the most performance out of the very limited compute and storage you have at the edge, given the real estate limitations?
The other aspect is the ecosystem that we?re
building, both technically and commercially, with
content providers as well as CDNs. We're ahead of the curve in terms of the APIs that are required to drive this collaboration – how content providers and CDNs can make use of the edge cloud and the open cache function that sits inside it.
Monica: The business case is becoming a hot topic
in edge computing. With edge computing, you
obviously need to add more infrastructure at the
edge, and that comes with a cost. You get better
performance, but is this an appealing tradeoff?
Dan: You have to look at who's gaining what from this edge computing model. Every bit that is streamed from the edge inside of a service provider's network is a bit that doesn't have to cross the entire network, and it doesn't have to
cross the core of the internet. That means a
service provider gains economic efficiency by
streaming that bit from the edge instead of coming
from a transit or peering location. And there is
value in that.
The other value is consumer experience. Because
content is streamed from the edge, there's a
greater ability to overcome any bottlenecks in the
network, and much lower latency as the content is
streamed to the consumer. That?s a benefit to the
service provider as well as to the content provider
in terms of the experience their subscribers are
getting. If the content provider is striving toward
providing a full HD experience all the time without
buffering, this is the way to do it.
And this value extends to commercial CDNs that
are the platform many content providers use to
distribute their content. This gives them better
reach in places they cannot reach today, and with
far lower latency.
There is definitely value to the entire ecosystem
and a question of how each side is going to
compensate the other for this value. I think the
market will dictate how exactly that is done. But
we have found that the equation balances itself out, and when we're streaming content from the edge, there are economic benefits across the board for everyone.
Monica: Do you see, in terms of ownership and
initial capex investment, an increasing role for the
venue owner, the content providers, the CDNs, to
participate, because it's something they benefit
from along with the service provider? Are they
willing to step up and put some investment into it, too?
Dan: That's something that will fall primarily on the shoulders of venue owners and service providers as they're building their venue networks. It's expected of them, because they are the ones who will reap the benefit. And for edge computing, it's like the
move from traditional networking into networking
that uses storage and compute resources, so it's basically shifting resources from one place to another.
Both network operators and venue owners are the
ones that are going to be responsible for building
out that infrastructure, and there are ways for
both of them to benefit. Over the long term, that's a far more economically sensible way to build networks than simply to throw routers at the problem.
Monica: What are you focusing your attention on
these days to get ready for the challenges,
including the challenges of 5G, over the next five years?
Dan: Our focus is on two fronts. One is the
technological front: to create standardization
around the APIs that are required to enable this
open cache function inside the edge cloud, and to
manifest that into our product as well, and to have
a range of APIs that the content ecosystem can use – and to increase our capabilities when it comes to service providers' own content.
The second front is on the ecosystem side. We're trying to enhance our content provider and CDN
relationships so they can make use of this edge
cloud layer that sits deep inside the service provider network.
Qwilt's unique Edge Cloud platform and Open Caching software solutions help internet service providers address the dramatic growth of streaming media on their networks and the need for a low-latency, high-scale infrastructure to support future applications. Qwilt's cloud-managed open platform, running on commodity compute and storage infrastructure and deployed close to consumers, creates a massively distributed Edge Cloud that supports applications such as Open Caching, 4K live streaming, AR, VR, self-driving cars and IoT. This low-latency Edge Cloud architecture enables a high-quality streaming experience for consumers on a massive scale. A growing number of the world's leading cable, telco and mobile service providers rely on Qwilt for Edge Cloud applications. Qwilt is a Founding Member of the Streaming Video Alliance and a leader of the Open Caching industry movement. Founded in 2010 by industry veterans from Cisco and Juniper, Qwilt is backed by Accel Partners, Bessemer Venture Partners, Cisco Ventures, Disruptive, Innovation Endeavors, Marker and Redpoint Ventures. Learn more at www.qwilt.com.
About Dan Sahar
Dan Sahar is the Co-Founder and VP of Product Marketing at Qwilt. Dan drives product marketing and go-to-
market activities for Qwilt, bringing more than fifteen years of marketing and product management experience
at high technology companies. Prior to co-founding Qwilt, Dan was Director of Product Marketing at Crescendo
Networks (F5 Networks), a leading provider of data center application delivery products. At Crescendo Dan was
responsible for leading the company's overall marketing and product direction. Earlier in his career Dan held
product management roles at Juniper Networks and Kagoor Networks (acquired by Juniper) as well as
engineering management positions at Kagoor Networks and Seabridge (Nokia Siemens Networks). Dan holds a
Bachelor's degree, magna cum laude, in Computer Science and Business from Tel Aviv University and an MBA (Marketing) from the Leon Recanati School at Tel Aviv University.
Vasona Networks' focus has always been
optimizing RAN performance and QoE by
managing traffic at the edge between the RAN and
the mobile core, and it started well before MEC
and other recent edge computing initiatives were
established. The goal of this approach is to help
mobile operators improve end-user quality of
experience and utilize network resources more
efficiently and, in turn, save money and
differentiate their services.
To date, Vasona has raised $48 million from
investors that include Bessemer Venture Partners,
New Venture Partners, NexStar Partners and
Vodafone. Vasona has deployments in Europe,
North America, and Latin America.
Vasona?s initial solutions addressed the challenges
that mobile operators face with video traffic and
its tight capacity and latency requirements, which
have to be accommodated in a RAN that is often
congested. Bad video experiences not only cause
end-user dissatisfaction, they also waste RAN capacity.
In the last few years, Vasona's approach has
widened to include the optimization of all types of
traffic, by allowing operators to manage each type
of traffic based on specific traffic requirements,
RAN conditions, and operators' strategy and preferences.
Vasona has developed standards-based software
platforms for MEC that sit at an aggregation point
between the RAN and the mobile core. By placing
the MEC functionality in an aggregation point that,
typically, covers a thousand or more cells, Vasona
helps operators contain the cost of edge
processing and coordinate traffic management
over a larger part of operators' footprint, rather
than for a single cell.
Vasona has two products today. SmartAIR™ is an
edge application controller designed to overcome
resource contention at the individual cell level,
taking into account all the active application
sessions. When the RAN is congested, SmartAIR
operates in real time to manage individual traffic
flows to reduce latency and improve network performance.
SmartVISION™ is a software suite that provides
operators with real-time and historical data to help
them analyze RAN performance. For each cell
sector, SmartVISION collects information on user
activity, app and content usage, and capacity –
information that operators can use to optimize
network performance and plan for network upgrades.
The network performance edge
A conversation with John Reister, Vice President of Marketing and Product Management, Vasona Networks
Monica Paolini: MEC and other edge computing
initiatives call our attention to the relevance of
location of network functions in determining
performance and, even more crucially, QoE.
Vasona Networks had started to move traffic
optimization closer to the edge before MEC work
at ETSI started.
In this conversation with John Reister,
Vice-President of Marketing and Product
Management at Vasona, we talked about the
evolution of moving functions toward the edge –
and about where the edge is.
John, can you tell us what Vasona does in this
space and what's your role in the company?
John Reister: I run product and marketing. Vasona
is an edge computing company. We were founded
on the principle of finding value for mobile
operators by providing computing software at the
edge of the network.
Monica: Over the last few years, we have been
talking about the cloud, about centralizing
everything and moving processing away from the
edge. In the last few years, there has been an
increasing focus on what's happening at the edge – and on edge computing. Why do mobile
operators care about edge computing?
John: One of the things edge computing does is it
enables operators to achieve a higher degree of
flexibility in their networks. That gives them a lot
more agility in how quickly they can move and
how quickly they can introduce services.
It has the potential to reduce cost. It enables
operators to optimize networks and introduce
new service capabilities for their customers.
Ultimately, it's about enabling them to create more value in a way that's unique.
When operators implement technologies in the
core or in the data center, they tend to be things
that can be easily copied by anybody on the
internet. Whereas when you do things at the
network edge, it's a more sustainable
differentiation, a unique capability that adds value
for your mobile customers.
Monica: Are there any challenges that come with edge computing?
John: Yes. One of the challenges is: when you say edge computing, where's the edge?
One of the challenges we hear from the operators is "I've got 10,000, 20,000, or 50,000 cell sites. If you're going to tell me I have to deploy a server at each one of those, or even one server for every 10 of those, that's thousands and thousands of servers that have to be deployed and managed. It's not only expensive, it's an operational mess." That's why we don't do it that way.
Instead, we deploy at an aggregation point,
where there tends to be anywhere from 100 to
500, or even 1,000, cell sites that come in through
that edge compute implementation. You're two orders of magnitude, almost three, more scalable, because there are fewer
places you have to deploy.
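The scaling argument is easy to check with back-of-the-envelope numbers (ours, for illustration only): compare the deployment points needed per cell site against those needed at aggregation points that each cover several hundred cells.

```python
# Illustrative numbers only: how many locations need a server?
total_cells = 50_000

per_site = total_cells                 # one server per cell site
per_10_sites = total_cells // 10       # one server per 10 sites
per_aggregation = total_cells // 500   # aggregation points covering ~500 cells each

print(per_site, per_10_sites, per_aggregation)  # 50000 5000 100
# 50,000 locations collapse to 100: a 500x reduction, i.e. between
# two and three orders of magnitude fewer places to deploy and manage.
```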
The other big challenge we encounter is that edge
processing is an inline element. To do the kinds of
things you want to do with these services, you
need to be inline. As soon as you're inline and
affecting tens of thousands, if not hundreds of
thousands of mobile customers from this
installation, reliability is critical.
Achieving high availability, high reliability, is
essential. Proving that you can achieve that with
this kind of infrastructure is key.
Monica: How is Vasona addressing the challenges
you just mentioned?
John: Our implementation is software based. We
can be deployed as close to the edge as the
operator wants. But for the applications we do and
for the kind of capabilities operators want to
achieve with the platform, we think that being at
the aggregation point works really well.
We've implemented capabilities to sit at that point, inline, and do the traffic classification. From
that point, we can map the users and figure out
what cells they?re in. We can then classify and
guide the traffic through the applications the
operator wants to apply for those users.
Monica: You have been working at the network
edge since before work on MEC started. How long
have you been doing this?
John: We introduced our product, SmartAIR™, a little over four years ago. We called it an edge application controller, a name pretty similar to the multi-access edge terminology later adopted by ETSI. We're
glad edge computing now has a home in the
standards community, with the MEC initiative.
We introduced an edge solution to provide unique
value from this point at the edge: low latency
capabilities that allow operators to manage the
challenges they face ? not just the ones on the
road to 5G, but the ones they're currently facing.
We're broadly deployed. We have over 100,000
cells deployed in Europe and the Americas.
Monica: You're beyond the proof-of-concept stage that most of the MEC trials are at. What have you learned so far from your operators as you try to scale?
John: I mentioned already that the operational
challenges are very important, and so is reliability.
To interoperate transparently, you have to be able
to insert edge functions into the network
transparently, and to install them easily. These
things have been really important.
We identified the applications that affect
operators and that are needed today. They're struggling to deal with the onslaught of video
traffic. A lot of the tools that have been tried have
been made obsolete. There's a great need to deal
with all that encrypted traffic, while keeping the
mobile customers happy and committed to the
operator as their service provider.
Monica: Video is a big issue ? if nothing else,
because so much of the traffic is video. And there
is a role for processing and optimizing video traffic
towards the edge.
But where is the edge? If you put processing too far out toward the edge, it may not be as effective. If it is
too centralized, it may also not be effective. How
do you find the right balance there? Where is the
right place for the edge, in terms of computing?
John: We're not big believers in caching. We think that as you get close to the edge, caching is not as valuable. And it's not helping to address capacity issues with the air interface anyway.
But we do think traffic management benefits by
locating it at the edge. One of the best MEC use
cases is throughput guidance. We do think there
are applications that help improve the quality you
can get over existing infrastructures. You avoid having to run just to stay in place – not providing better service while constantly having to throw tons and tons of resources into new cell sites and carriers, and cell splits, and so on.
What drives how close to the edge you have to be
is the fact that the traffic is largely encrypted. To
figure out that you're dealing with a video session
and what its needs are, you have to see the
beginning of the session being set up.
If you are too close to the edge, when a user does
a handover to another cell, the MEC application on
that other cell would not know what that session
was. So the application would lose context, and you would have this very complex need to hand over from MEC instance to MEC instance. It gets complicated.
By being located at an aggregation point, like we
are, where we're dealing with several hundred cell
sites, the vast majority of handovers we see are
between cells and between cell sectors that are
covered under the same MEC instance.
When users hand over from one cell to another, we already know what the conditions are on the cell they're going to before the very first data packet goes through that new cell. We see the handover from the control messages. So even before the data starts moving over to that new cell, we know they're going to be in that new cell. We already know the conditions of that new cell, and we can adjust and take that into account in our traffic management, essentially before it happens.
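The mechanism John describes can be sketched as follows. This is our simplified illustration, not Vasona's code: a single manager at the aggregation point tracks the load of every cell it covers, so when a control-plane handover message arrives, the session keeps its context and the target cell's conditions are already known before user data moves.

```python
class AggregationPointManager:
    """Toy model of traffic management at an aggregation point."""

    def __init__(self, cell_load):
        self.cell_load = cell_load   # cell id -> current load (0.0 to 1.0)
        self.session_cell = {}       # session id -> cell currently serving it

    def start_session(self, session_id, cell_id):
        self.session_cell[session_id] = cell_id

    def on_handover(self, session_id, target_cell):
        # Triggered by the control message, before the first data packet
        # flows through the new cell. Session context survives because the
        # same manager covers both the source and the target cell.
        self.session_cell[session_id] = target_cell
        return self.cell_load[target_cell]

mgr = AggregationPointManager({"cell_A": 0.3, "cell_B": 0.9})
mgr.start_session("video_1", "cell_A")
target_load = mgr.on_handover("video_1", "cell_B")
if target_load > 0.8:
    print("target cell congested: pace the video flow down before data moves")
```

If each cell ran its own MEC instance instead, the session state would have to be transferred between instances on every handover, which is the complexity the aggregation-point placement avoids.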
Monica: This is an entirely new way for operators
to manage traffic. For video, it's very important.
But are other types of traffic affected as well?
John: One of the benefits of managing traffic as we
do at the application-type level is that you can
manage down the latency for those time-sensitive services where the user is directly interacting with the application.
By managing traffic at the application-type level,
you can keep the queues much shallower. Because
of that, the latency you see during the busy hour
can be cut by 25% or 30%. That affects browsing. It
affects your social media applications. The benefits
are not just for video. They're definitely for all
types of services and, obviously, for the coming IoT
services.
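The queue-depth arithmetic behind that claim is simple: queuing delay is the time a new arrival spends draining through the bytes queued ahead of it. The numbers below are assumed for illustration only; the interview gives just the 25-30% figure.

```python
# Back-of-the-envelope queuing delay on a shared radio link.
def queuing_delay_ms(queued_bytes, link_rate_mbps):
    """Milliseconds to drain the backlog ahead of a new arrival."""
    return queued_bytes * 8 / (link_rate_mbps * 1e6) * 1000

# One deep shared buffer during the busy hour: 750 KB of backlog on a
# 50 Mbps cell means every packet waits behind it (~120 ms).
deep = queuing_delay_ms(750_000, 50)

# Per-application-type queues isolate bulk video, so interactive traffic
# sees a shallower backlog (here assumed 550 KB, ~88 ms), in line with
# the 25-30% busy-hour reduction cited above.
shallow = queuing_delay_ms(550_000, 50)

print(f"{(1 - shallow / deep):.0%} lower latency")  # 27% lower latency
```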
Monica: It also gives operators a better view of
what goes on in their networks.
John: A part of edge computing that is less often
mentioned is that you're now linking together not
just information about the users and the type of
applications they're using, but also what cells
they're in, how busy those cells are, how
congested those cells are. You're able to put all
those pieces of data together.
One of the prime benefits is you can now manage
your capital investment with a focus on the quality
of experience that you're delivering for those
applications that are important to your customer
base. Instead of just looking at low-layer metrics of
the network, you can now actually manage your
capex that way. That allows you substantial
savings, and it's a big contributor to the business
case.
Monica: Business case, you said it. Let's talk a little
about that. As you said before, you want to avoid
the risk of having servers at every cell site. But you
still need to add some hardware. This is something
that you add on top of what operators already
have, so that's an additional cost. Is there a
business case that justifies the additional spending
that edge computing requires?
John: I think this is one of the keys. You have a
MEC platform, and then you're going to layer your
applications on top of it. There's certainly the
potential for new revenue streams. Some are
relatively far-flung, and some potentially closer in.
But as you said, you can talk virtualization as long
as you want, but the historical cost savings of
virtualization come from a massive server farm in
your data center, replacing all these purpose-directed
servers that were underutilized.
That's not the case with MEC, because it's a new
investment. The business case still has to prove
itself.
There are benefits on the revenue side, from new
services. We see the benefits of improving quality
of experience: getting more customer advocates
who believe in your network because of the
reduced stalls and lower latency on the browsing
and the social media and so on.
But, clearly, the biggie in the near term is
investment savings. You don't have to constantly
run out and chase after the peak demand in your
network.
The problem is, human nature is such that people
remember the bad experiences much more than
the good experiences. It only takes 5 or 10 bad
experiences a week for users to think you're a
lousy service provider. That can easily happen in
those peak hours.
You've been in this mode of having to build to
those peaks, and it's really expensive. By putting in
a mobile edge compute system, you're actively
managing the traffic congestion and managing the
services. Because of that, you can make the quality
so much better during those peak times. That
allows you to not have to use your capex to chase
after those instantaneous spikes in usage that are
causing those bad experiences.
That capital benefit is certainly a key part of the
business case: being able to focus your capital on
the portion of the network causing the poor QoE.
Monica: The business case becomes subtler here.
It's not just adding numbers, but seeing what the
resources you have can do and how happy your
subscribers are. It requires a different conceptual
framework behind the financial model.
John: It does. I think there is a transition in thinking
that is starting to happen, because everybody
looks at the growth in demand and they look at
the revenue curve, and those lines are not parallel
on any chart.
I think there's an awareness that you can't
continue to spend to meet that level of demand
without something radically changing. This does
require new thinking and an ability to focus your
capital investment to maximize the quality of
experience. That's one of the things MEC can
enable operators to do.
Monica: With edge computing, content providers
may take a more active role. They might get
involved in rolling out or participating in some way
with the MEC infrastructure, because they stand to
benefit from it.
John: This may be the best part, to be
honest. For a long time, I think the
industry has had a model that's been,
I won't call it hostile, but difficult. There
has been a situation where the
applications create all these demands
on the network.
With edge computing, you have an
opportunity for a truly collaborative
approach. You can take IoT. You can
take throughput guidance. There are
clear cases where you're now
essentially having levers to pull or
information in the network that is
being shared in real time, and that
gives the ability to evolve that
relationship between operators and
content providers.
With throughput guidance, for
example, the operator is telling the
video streaming content provider in real time,
"This is the best this user can get at this time, and
let's jump right to that rate. Don't try to go higher,
because it's just going to get stuck and congest the
network, and you don't need to go lower and
harm the quality."
You share that information, and it's best for the
mobile customer, who is really the customer of
both that content provider and the mobile
operator.
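The throughput-guidance exchange described here can be sketched as a simple rate-selection step on the content provider's side. The bitrate ladder, function name, and message format are illustrative assumptions, not an operator API or the actual guidance wire protocol.

```python
# Illustrative sketch of throughput guidance: the network shares the best
# rate a user can currently get, and the video server jumps straight to
# the matching rung of its bitrate ladder instead of probing up and down.

BITRATE_LADDER_KBPS = [400, 800, 1500, 3000, 6000]

def select_bitrate(guidance_kbps):
    """Pick the highest ladder rung at or below the guided throughput."""
    eligible = [r for r in BITRATE_LADDER_KBPS if r <= guidance_kbps]
    return max(eligible) if eligible else BITRATE_LADDER_KBPS[0]

# Guidance says 2.2 Mbps is the best this user can get right now:
# don't go higher (it would congest) or lower (it would harm quality).
print(select_bitrate(2200))  # 1500
```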
IoT is another example. You have the ability to
provide much better security and lower latency, by
bringing that IoT traffic around the core through a
VPN into a private cloud. You can create a secure
capability with low latency for those types of
applications.
You're providing levers, information and tools that
allow this tremendous, positive working
relationship between the service provider and the
content provider.
Monica: I guess that might address some of the
issues we have with encryption. If there is
collaboration, then encryption no longer creates a
barrier to traffic optimization.
John: Encryption is a tricky subject. Of course,
there are two layers of encryption. Operators put
on their own encryption when the backhaul is over
a shared environment or non-owned
environment. They run IPsec tunnels from the
NodeB site back to the network. That certainly
affects where and how mobile edge computing
can be deployed.
Then you have the encryption that the content
provider puts on its content. I don't expect that to
change. I think end users want to ensure privacy
and so on.
At Vasona, we have the ability to classify and
understand encrypted traffic. We don't decrypt it
in any way. But we're able to figure out what type
it is and then make sure it gets the treatment it
needs.
If there are situations where you are going to host
it on the MEC platform, which would be the case
for IoT gateways, for example, then you're
absolutely right. The point of the certificate
exchange in that case can move right onto the
MEC platform.
Monica: Now, let me ask you the obligatory
question, which is about 5G. There are different
schools of thought here. MEC is helping the move
towards 5G, but is it required? Do we need to wait
for 5G in order to use MEC, or is it the other way
around? What is enabling what?
John: Yeah, that's a great question. 5G is about a
lot more than just the radio. There are huge
implications for the backhaul when you get new
5G radios that have much shorter range. You're
going to need more of them. The backhaul gets a
lot bigger and more complex.
MEC is really a great step in that direction. It's
bringing down the latency. It's putting the kind of
capabilities into the network that you're aiming for
with 5G. It's a stepping stone. The nice thing about
MEC is that it has a business case and addresses
today's problems, while taking you in the direction
you want to go.
Monica: Let's have a peek at what's going on in
the next five years. What are you guys working on?
John: Obviously, MEC standards are not finished
yet. Right now, we're an edge compute platform.
We're embracing the MEC standards as they move
along. There are MEC platform capabilities, and
there are MEC applications.
We?re working on new applications. IoT is one
area. We continue to believe that security and
quality of experience remain a big focus.
Then on the platform side, it's integrating down
into the network infrastructure, so you can run on
a new server cluster. But there are, obviously,
other initiatives going on with the packet core and
with cloud RAN, that are also NFV driven. You can
certainly start to see intersections as you get
farther out.
Monica: It looks like you have a pretty busy
schedule over the next few years.
John: I think it's an exciting time. I imagine it's a
little bit of a scary time but also a great
opportunity for the operators. The demands being
placed on them are enormous. The expectations
are very high. But then, they're in the driver's seat.
And that's a good place to be.
About Vasona Networks
Mobile operators work with Vasona Networks to improve user experiences and better leverage network
investments. The company's pioneering SmartAIR™ edge application controller and SmartVISION™ analysis suite
help operators overcome congestion on 3G and 4G networks, build a bridge to 5G, and understand network
activities for improved management and planning. Vasona's standards-based software at the mobile edge
delivers intelligent solutions to increase the quality of experience, enable low-latency services and focus
investments to create a more flexible, intelligent and responsive mobile network from the individual cell level.
Founded in 2010, Vasona has deployments in major networks around the world. For more information, visit
www.vasonanetworks.com or contact firstname.lastname@example.org.
About John Reister
John Reister is VP of Marketing and Product Management for Vasona Networks, supporting the company's work
with global mobile network operators to deliver better subscriber experiences. John was VP product strategy for
Arris (Nasdaq: ARRS), joining through its acquisition of BigBand Networks, where he was VP advanced technology
and chief architect. He was instrumental in the company's expansion to telecom markets with platforms for
advanced video services. Previously, John was CTO of DSL pioneer Copper Mountain Networks, a
consultant with Bain & Co. and an engineer with McDonnell Douglas (NYSE: BA).
III. Service provider interviews
MEC in the converged network
A conversation with Mansoor Hanif,
Director of Converged Networks and Innovation at BT
Monica Paolini: MEC is more than a tool to lower
latency. It is an enabler for services that are
specific to a location but are served over multiple
access networks, both fixed and mobile. In this
context, MEC is a powerful driver that accelerates
the convergence of fixed and mobile networks. In
this conversation, Mansoor Hanif, Director of
Converged Networks and Innovation at BT, tells us
how this is shaping the network evolution at BT in the UK.
Mansoor, what is your new role at BT, and what
are you working on with regard to MEC?
Mansoor Hanif: Personally, I came from the EE
side. I was in charge of the radio networks. Since
the beginning of November 2016, I've moved to
the BT research team. I'm now looking after the
converged networks research lab.
On edge computing as a whole, at BT we are not in
the commercial phase at the moment. We are
doing a number of proofs of concept looking at the
customer experience, and especially looking at the
converged customer experience that we can
manage through the edge, and the business cases
behind it.
We did a number of proof-of-concept trials last
year, mainly in Wembley Stadium. This year, we
have a couple of new locations we're looking at for
proofs of concept.
Monica: Can you tell us more about these proofs
of concept? What kind of applications, and what
were the lessons learned?
Mansoor: From the couple of proofs of concept
we did last year, we showed the effectiveness of
mobile edge computing solutions for reducing
latency for video. For enhanced video
orchestration in a stadium such as Wembley, we
showed that this could be effectively used to
replace closed-circuit TV for security uses. There
are similar opportunities for many other vertical markets.
We also did a throughput-guidance trial with
Akamai, and we showed that there is benefit in
transmitting the real-time radio conditions
through MEC to the CDN; it helps content delivery
partners improve the end-to-end experience for
mobile video users.
Those were successful proofs of concept.
This year, we are focusing on a couple of areas. We
are having a look at our retail estate to see how we
can improve the engagement of our customers
when they come to visit our shops. Can we
showcase the offers we have and the converged
capabilities we have? Can we improve the user
experience by localizing some functionality from
our IT systems in the central back office to the
local shop? We're going to have a look at that from
the retail shop attendant and customer perspectives.
We are also looking into public spaces such as
museums, where we can improve the user
experience of visitors through edge computing.
For the entertainment field, we're looking into
where edge computing can improve the user
experience with things like augmented reality.
There, we look at local positioning and at low-
latency applications, and how we can hook
location-based augmented reality applications into
our fixed offerings to offer enhanced solutions
people can take away and keep with them after
they've visited a location.
We're also looking at the scenario in which you
have a lot of people visiting a location who are
from outside the country or from other operators.
Can we have a single solution that manages all the
users at a location, but also has intelligent control
and enhanced added-value services over the MEC
platform?
Monica: When I started working on MEC, video
seemed to be the major use case. What I'm
hearing from different sources now is that there is
a shift. Video is still one of the main use cases, but
there is more to MEC than video. Would you agree
with that?
Mansoor: Certainly. Anything that requires
compute capability to give an effective user
experience is a good use case for MEC. Anything
that requires a lot of processing power to be able
to deliver a good-quality experience. Augmented
reality is a good example.
Video, certainly, is a good use case, especially high-
definition 360 panoramic video that you want to
make available to the user to browse through and
choose whatever camera angle he wants.
Rather than sending all of that data over to the
user, you can do the computing on the edge
platform, delivering a very good quality of service
to the user without overloading your network.
Monica: Mobile edge computing clearly relies on
network virtualization. Initially, virtualization
efforts were focused on moving everything to a
centralized location (the data center, the cloud)
to get cost savings and have everything in one
place, which is attractive.
With mobile edge computing, the trend is in the
opposite direction. There are some things that do
not work sufficiently well in a centralized location.
There are advantages to moving processing and
storage to the edge.
Mansoor: Today, as a converged operator, we
would rather speak about the converged edge for
fixed and mobile. One of our key objectives is to
make sure we effectively implement convergence
across fixed and mobile.
If you're talking about the fixed network and the
mobile network separately, the edge can mean
different things. There are three elements to this,
but there's probably some granularity between them.
First, there's the capability you would like to
centralize into the core. That's very much about
cloud computing and cloud connectivity, where
you can effectively centralize. I think that will
continue, to some extent, but not to the extent
that we have it today.
Second, there is the edge seen from a fixed
network, which is pretty much on the distributed
exchanges or aggregation points. There are some
applications we feel should be decentralized there.
Finally, there's an opportunity now to offer
converged mobile and fixed aggregated capability
on those aggregation points.
There's also still, we believe, a big opportunity at
the edge of the radio network, which is the closest
to the customer.
From our perspective, all of that can fit into an
interesting business model: you can leverage a
single platform that allows you to manage all those
applications anywhere between the core, the
aggregation point, and the radio edge. That makes
it very easy to manage and shift applications when
you need to, where you need to.
It's not manageable, not possible, to get to a good
business model with completely separate
platforms for the radio edge, the aggregation
points for the fixed edge, and the core.
Two factors are driving things out to the edge. One
is that the edge speeds are moving so fast that the
kind of capability we need on the edge is getting
increasingly difficult to aggregate in the center.
Speed at the edge is scaling up so fast that I don't
think it's possible to scale a centralized computing
platform to aggregate all of this at the center.
The second factor is that the processing power
required to support those edge speeds is such that
you will almost certainly have to enhance the
processing power at the edge of the network to
some extent, so the utopia of virtualizing all of the
radio baseband hardware is unlikely to happen.
If you need to add or upgrade hardware to support
the new radio capabilities and speeds, you can also
put in some mobile edge computing power and
add some intelligence there.
In the UK, we'll probably be doing a lot of indoor
installations over the next three to five years. If
you are going to do that, then the actual cost of
putting in an extra board or two to enable edge
computing is affordable. The overall cost of adding
the MEC hardware as part of a wider indoor
coverage installation is much lower than if you are
doing a dedicated MEC rollout.
Monica: You raised a lot of very interesting points
that I would like to follow up on. The first one is
the fixed/mobile convergence. In fact, it's telling
that MEC no longer stands for Mobile Edge
Computing, but for Multiple-access Edge Computing.
What does that mean to you? A mobile network
was very different from a fixed network, in terms
of what subscribers do. The convergence is not just
in terms of technology, but also in terms of what
we do over a fixed/mobile network, or over a
converged network.
Mansoor: You said two things there that are
slightly different. There's wireless and wireline; in
that case, Wi-Fi would be considered wireless. And
there's mobile and fixed, where Wi-Fi would be
considered part of a fixed network. That's
generally how operators have seen it in the past.
Now we're trying to converge all that into a single,
coherent control point. We'd like to have the
ability to improve the customer experience as you
move between those. We need to make sure we
get the best out of our Wi-Fi investment and our
4G investment, and future 5G, we hope, and fixed
broadband.
Today, the mechanisms available for controlling
those are not sufficiently aligned between the
fixed standards and the mobile standards. For
example, we need to get a single identity, a single
authentication. We also need effective anchoring
and control mechanisms to allow us to do
intelligent traffic steering between the fixed and
mobile networks.
To this end, we are working with the standards
bodies to make sure that, as we move into 5G, we
have a lot more ability to intelligently steer the
user into the best customer experience over
whatever access type is most suitable.
Edge computing plays a role there. We believe that
on the network-wide level, some functionality
needs to stay in the core network. But some core
functions for specific applications could also be
localized into a MEC solution where, in a specific
environment like a museum or a shopping center,
you can put in an anchor MEC-based solution
that's hooking into the indoor installation for Wi-Fi
and 4G small cells.
Solutions like MulteFire, or LTE unlicensed, or
simply Wi-Fi can be integrated into a single MEC
anchor, where we can provide layered services. It's
a way of integrating that locally to work together
with what's in the core, but also to work
independently and provide extra services where
needed in those specific locations.
Monica: As an operator, you have to decide which
functionality should be centralized and which
functionality should be pushed to the edge. And
then, for the access network, you have to decide
which applications should use the fixed network
and which should use the mobile network ? and
which fixed or mobile network to use, when
multiple ones are available.
Mansoor: You put many things on the core side:
the unified sign-on, the unified authentication, but
especially the traffic steering, the quality of
service, and the quality of experience
management at the network level.
But you can also have a separate policy locally for a
specific location, based on what you agreed with
landlords and what they want to offer your
customers, and other people's customers. It's
important to have the flexibility to tailor this to
each location.
In the terminology of network slicing, for
example, this is the capability to offer an enhanced
network slice in a localized environment.
Monica: Do network slicing and edge computing
go hand in hand, complementing each other, in
the scenarios you describe?
Mansoor: Yes. Mobile edge computing adds an
extra granularity to the type of slice you can offer.
Already, we're experimenting a lot with how far
we can push network slicing on our 4G network.
It's going to be a lot easier with a 5G network,
because 5G is built around network slicing.
MEC increases the granularity of the type of slicing
you can offer because you can then actively offer
completely different types of slicing locally for any
location.
Monica: We talked about the edge, and you
mentioned aggregation points and the RAN.
Where is the edge?
Mansoor: The edge of a fixed network has been
considered to be, let's say, the local exchanges, or
the equivalent. The edge of a mobile network is
the radio antenna, which is the closest to the
customer.
Whether that's a macro site or an indoor site,
that's where the edge would be. That's where I
think the different definitions of the edge have
come in. Obviously, depending on the type of
application and the type of reach you'd like to
have, you can choose where to put the edge.
What's important is that it's also very much
dependent on the traffic load and the application type.
Ideally, you could have a dynamic capability for
orchestration, where you can move an application
from the small indoor cell to the macro cell. Or
from the macro cell to the local aggregation point
or exchange, back and forth, depending on time of
day, the load, and the optimal customer
experience you want to give. That's the ideal
situation we'd like to get to.
Monica: Is this why you need a single platform
that allows you to manage dynamically all those
applications?
Mansoor: Yeah, ideally. Otherwise I do not think
it's possible to manage the quality of experience
we want to offer customers in order to make it a
useful business proposition. It would be very, very
difficult to do that.
Monica: In terms of the business case, with edge
computing, you inevitably have to add some more
processing and storage capability at the edge.
That?s going to come with some cost.
At the same time, it's clear that the more you push
to the edge, the better the performance you have
in terms of latency. There's a tradeoff there. How
much is it worth pushing to the edge? When you
look at the business case, what are the tradeoffs
that you think will make sense? How aggressively
do you want to push things to the edge, when that
comes at a cost?
Mansoor: It's more about being intelligent about
the cost. More than a tradeoff, we need to work
on all fronts to lower the cost and increase the
added value. First of all, we need to get the
hardware platforms that support edge computing
to be as low cost as possible.
That's why we're working through initiatives like
the Open Compute Project and similar ones to get
to an almost white box situation for the hardware
that supports MEC. We need to lower the cost of
the hardware if we want MEC to become
deployable on a massive scale.
On top of that, the actual cost of implementing a
hardware-based solution needs to be reduced.
That's where a standalone business case of rolling
out MEC capability into offices and shopping
centers doesn't make any sense from our
perspective.
However, you can piggyback on indoor
installations in an intelligent way so that the
increased installation and implementation cost of
a MEC solution is only a very small part of your
overall cost of the indoor installation.
Timing is going to be critical. We need to catch the
wave of large-scale indoor installations at the right
moment so we can slot in the MEC hardware, at
least in the majority of cases. That would change
the business model.
Those are the two things, cost of the hardware
platform and timing of installation, that are going
to lower the cost of MEC on the mobile network.
We need to make sure that when, for example,
you've got a new customer in a big location, the
initial investment is covered by simple use cases of
connectivity and some basic services. The MEC API
interface needs to be flexible enough that, later,
we can very easily add on new functionality as and
when we need to, on the same platform.
That way, we can continue to generate new
revenues on top of the baseline, which is doing the
basic financing for the installation. That?s why I
think MEC is much more than simply improving
the efficiency of the network or using low latency.
We need to identify and focus on new services
that the combination of proximity and compute
power enables, and make sure that we have the
flexibility to rapidly implement those solutions on
top of the MEC platform as and when customers
ask us to.
Monica: It's more than getting lower latency. It's
thinking during network deployment about where
the functionality goes, in a much broader perspective.
Mansoor: Absolutely, and being able to
dynamically shift content and dynamically deploy
new applications locally, leveraging the value that
proximity adds by improving the user experience
for applications such as augmented reality.
With augmented reality you can be very, very
close to the user and therefore really improve the
subscriber experience. If you take Pokémon Go,
which was a massive hit, it doesn't need a network
at all, or very little, but it's not very granular.
If you want to take that to the next level and
provide services so compelling that businesses are
asking us to put them into their locations, you
need to make the customer experience a couple of
levels better than that.
That's the kind of thing we're working on so we
can offer businesses a compelling way to draw in
more customers and have customers pay for more
services. We want to enable all of that with a great
customer experience that we can monetize, to a
certain extent.
Monica: When you talked about monetization,
you mentioned subscribers. Could you also get
revenues from the content owners or the
enterprises, which also stand to benefit from edge
computing?
Mansoor: I don't think we would get paid directly
by those third parties. But if we come up with use
cases that generate extra revenue for the
landlord or the third parties, then we could
effectively have a place in that value chain and get
paid for it.
Monica: What about the capex? Could the mall
owner, the stadium owner, the airport
management be willing to pay for all or some of
the infrastructure if it's for services they provide?
For instance, if it's a mall and it's trying to get its
customers engaged, is this something you think it
would pay for?
Mansoor: There are already a lot of landlords
ready to pay for a good indoor installation, as long
as it's covering all operators and it's offering a
good quality of service. People are moving to that
situation. We have a number of companies
proposing pre-installation in the UK. It's a good
development.
If we have a landlord that's looking for basic
connectivity, we could offer it a shared-cost MEC
platform on top of that indoor installation, which
would allow the enablement of many new services
within that environment.
The actual capex of that MEC layer on top of a DAS
installation would be only a fragment of the DAS
installation itself. It could be partially funded by
the landlord in some cases.
In my view, if the DAS or the small-cell installation
is already being funded by the landlord, it would
be reasonable for us to offer to manage a MEC-type
service for all operators by putting in the
extra capex ourselves and then providing our
services to the local customers.
Monica: We can start with MEC in 4G networks,
but with 5G, MEC will be more pervasive and more
efficient. How do you see the transition of MEC as
we go from 4G to 5G?
Mansoor: It?s a very interesting question, because
if you are focusing on the latency added-value of
MEC alone, then with 5G, MEC's latency
advantage will be taken away, because 5G should
be inherently capable of very low latency.
On the one hand, you can see how 5G could
replace MEC in certain areas. At the same time, to
have end-to-end 5G capability, when you're talking
about user speeds of 10 Gbps or above for one user,
it's going to be increasingly difficult to centralize all
the computing power needed to aggregate that
traffic.
Inherently, if we want to adopt 5G massively, we
are going to have to use more distributed
computing power, simply because the speeds
being offered are so high that it's going to be very
difficult to keep up with the aggregated capacity
requirements if you centralize them.
With 5G, inherently you're going to be looking to
distribute the core to some extent.
Also, I don't think 5G will be deployed, necessarily,
in a very homogeneous fashion in many networks.
It will take a few years. In the meantime, you can
offer a reasonably homogeneous quality of service
across many, many locations by implementing
MEC as an enabler in the first place. It will allow us
to homogenize customer experience as we roll out
services over a mix of 5G and 4.5G and 4G.
About BT
BT's purpose is to use the power of communications to make a better world. It is one of the world's leading
providers of communications services and solutions, serving customers in 180 countries. Its principal activities
include the provision of networked IT services globally; local, national and international telecommunications
services to its customers for use at home, at work and on the move; broadband, TV and internet products and
services; and converged fixed-mobile products and services. BT consists of six customer-facing lines of business:
Consumer, EE, Business and Public Sector, Global Services, Wholesale and Ventures, and Openreach. For the
year ended 31 March 2016, BT Group's reported revenue was £19,042m with reported profit before taxation of
£3,029m. British Telecommunications plc (BT) is a wholly-owned subsidiary of BT Group plc and encompasses
virtually all businesses and assets of the BT Group. BT Group plc is listed on stock exchanges in London and New
York.
About Mansoor Hanif
Mansoor joined EE in November 2011, led the technical launch of the first 4G network in the UK, and was also
accountable for the integration of the legacy 2G and 3G Orange and T-Mobile networks. Until 2016 he led the
team who plan, design, roll out, optimise and operate all EE radio access networks, including mobile backhaul
and small cells, and was accountable for the coverage aspects of EE's Emergency Services over LTE programme.
He was also a board member of MBNL (the joint venture of EE with H3G) until 2016. During the acquisition of EE
by BT, Mansoor led the EE network Integration team and is currently Director for Converged Networks and
Innovation in BT R&D. He is a member of the BT Technology Steering Board and is a board member of the
Scottish Innovation Programme.
Edge computing in the enterprise
A conversation with
Matt Montgomery, Director, Wireless
Business Group, Verizon Wireless
Monica Paolini: Edge computing improves
performance and optimizes resource utilization in
many use cases. The enterprise is an environment
where edge computing is going to play a large role
for a diverse set of use cases, which include not
only data and voice connectivity, but also IoT
applications. I talked with Matt Montgomery,
Director of the Wireless Business Group at Verizon
Wireless, about how edge computing addresses
the connectivity requirements of the enterprise,
while providing the same high level of security as a
centralized network.
Matt, can you give us an introduction to your role
at Verizon and to what Verizon is doing to
bring edge computing to the enterprise?
Matt Montgomery: I have business operations,
marketing, and partner enablement
responsibilities for our Wireless Business Group,
which is dedicated to our large and enterprise customers.
From a mobile edge computing perspective, my
primary responsibility is ensuring that our
customers can successfully use multiple partnering
solutions. It's not just all Verizon, all Cisco, all
Microsoft, or all Apple. It's a combination of
multiple technologies, OEMs, and partnerships to
create one solution that solves business problems.
Regarding edge computing, my responsibility is to
make sure Verizon provides world-class solutions
to customers and that the solutions they choose
work seamlessly on the Verizon network. It's an
integrated approach with technologies, equipment
manufacturers and partners that create the best
solution possible to help them move business forward.
Monica: Over the last few years, we have seen a
push to move everything to a centralized cloud in
large data centers. Now, the tide is turning. Service
providers, enterprises, venue owners, and even
content providers are showing an interest in
moving some functionality to the edge. Why do
you think that's so?
Matt: Industry innovation causes a pendulum
swing. As new technologies become available, as
new threats arise, as the computing experience
becomes more intense and form factors become
more open, businesses are looking at what they
can do locally versus centrally.
I don't believe it's a binary equation, but I do think
that what we can do now with the mobile edge is
much different from what we could do just a few
years ago. The options for customers are opening up.
Customers are moving quickly to leverage both
network and application assets much more
aggressively. For instance, now they can optimize
Wi-Fi to create fast lanes. And they can optimize
the applications themselves so they can build a
more localized computing experience.
And now they can do this, in some cases, for less
money and with more control and more security.
They can use analytics more precisely to create
higher-performing localized environments for the
business, versus a larger, centralized environment
where it's very hard to make massive changes
without a lot of disruption.
It makes companies more nimble. It creates agility
in the delivery of applications. And it can provide,
in some cases, a more secure environment.
Monica: In terms of security, do enterprises feel
comfortable about moving more of their
functionality to their premises?
Matt: I see security as above the distinction
between a distributed network with edge
computing, and a centralized network. I look at
security as its own silo, its own platform.
The answer is, yes, some organizations feel that, if
they keep things more local, they can control the
security component more easily.
That premise, in my mind, isn't inaccurate, but it is
problematic. The security framework needs to be
rich and robust whether your network is
centralized or everything's at the edge. At Verizon,
we believe that our service needs to be highly
secure in all cases.
Some customers believe that, along with the
performance benefits, mobile edge computing
offers that high level of security.
Monica: What kind of applications does the
enterprise usually need to put closer to the edge?
Matt: The internet of things is one driver. Data
itself is also driving the move to edge computing.
There's so much data generated from network
analytics these days that we can't physically get all
of it across the network. By doing some of the
computing at the edge, we lower the latency.
Mobile edge computing offers the lower latency
that we need to provide the best experience.
The more mission-critical an application and the
more crucial uptime reliability is to the customer,
the more benefit edge computing offers.
Edge computing lets us create fast lanes in a
highly optimized Wi-Fi environment and further
optimize applications within those fast lanes. We
can ensure that the users get a higher quality of
service in that environment than they would if we
did everything centralized over a giant MPLS network.
Monica: Does that include only data, or voice as well?
Matt: It's voice as well. Voice is now a
consideration for many businesses that are looking
to move everything onto IP and deliver it on
whatever form factor fits the business need.
It could be a smartphone with an integrated dialer
that manages your desk phone and your mobile
phone. It could be a wireline phone that is
integrated with your mobile number. It could be a
VaaS, or video as a service, for services such as
video conferences. Edge computing could help
deliver these services with less latency.
Monica: New business models may emerge as
services become location aware or location based.
Do you think the enterprise is willing to pay for at
least some of the capex or opex required to deploy edge computing?
Matt: The model's different, you're absolutely
right. Edge computing creates a capex model.
Companies have been moving to more of an opex,
computing-as-a-service model. Some customers
would prefer to move out of an opex model to a
capex model because there are tax implications.
And edge computing is a more capital-intensive model.
But it's also important to note that companies like
Cisco, as an example, have already built into their
operating system, from a networking and routing
perspective, the ability to do mobile edge
computing and create Wi-Fi fast lanes, and the
ability to tag and ensure certain applications have
a higher quality of service.
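As a rough sketch of the application-tagging idea mentioned above: marking traffic with a DiffServ code point (DSCP) is one standard way to request a higher quality-of-service class from QoS-aware network equipment. This is a generic illustration, not the report's or Cisco's implementation; the socket option and code-point value follow ordinary OS and IETF conventions.

```python
import socket

# Illustrative sketch: mark a socket's traffic with the DSCP
# "Expedited Forwarding" (EF) code point so that QoS-aware switches
# and Wi-Fi access points can place it in a higher-priority queue.
# The IP_TOS byte carries the 6-bit DSCP value shifted left by 2.
DSCP_EF = 46  # Expedited Forwarding, per RFC 3246

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# All datagrams sent on this socket now carry the EF marking.
```

Whether the marking translates into an actual fast lane depends entirely on how the switches, routers and access points along the path are configured to honor it.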
Organizations that have invested in this sort of
product can pivot and turn on a stronger edge
computing experience without a giant checkbook.
An example would be on tablets in an edge
environment. Apple tablets now come with the
ability to do application optimization. They come
with the ability, using Wi-Fi, to provide a higher
quality of service. As the industry starts to look at
the benefits of edge computing versus centralized
computing as a service, we're finding that some of
the capabilities needed to do it are already in place.
Monica: The ability to integrate Wi-Fi with cellular
also should be a priority for the enterprise.
Matt: It's very important that the experience
inside the four walls is replicated outside the four
walls, at least for the critical applications and the
optimization needed to make that happen. And
we can do that by keeping applications local. But
at the same time, Verizon believes that when you
leave those four walls, we need to provide the
same level of service for that end user that they
get within that environment.
We spend a good deal of time working with Cisco
and all of our networking partners, to ensure that
if you migrate onto a 4G LTE connection using 4G
LTE Advanced, we can provide a similar look and
feel of the optimized model you enjoy now across
wireless interfaces, and do it locally.
We try to make that experience seamless for the
end user, especially when we integrate our VoIP
and video services. If they're part of the mobile
edge computing experience, we want to replicate
that as they move outside of the four walls into
the more centralized approach in the wide-area
network. These are the capabilities that we
provide customers today.
Monica: Again, it?s crucial to have the seamless
connectivity because, as an end user, you
shouldn?t need to know what access you are using.
Edge computing allows you to be access-
technology neutral, in the sense that you focus on
the functionality, and then it doesn?t matter what
RAT the subscriber uses.
Matt: We have a strong opinion on why this
environment is better with Verizon. The last thing
we want is for our end customers to have an
experience that's different. They may understand
they're on cellular, and not on Wi-Fi, but we do not
want them to notice the difference. And we feel
Verizon is the best at doing this. Our customers
really don't know that they've moved into an
environment in which their services are being
provided to them by the Google or Amazon Cloud
versus being serviced locally.
We work hard with our OEM partners to create a
networking environment in which you can move in
and out and still get access with the same quality
and level of service that you get from a mobile network.
Monica: Can you say something about IoT?
There's a huge amount of interest, maybe some
hype as well. What do you hear from the enterprise?
Matt: IoT is driving companies to edge computing,
because of the analytics generated out of an IoT
environment, especially in the industrial internet
space. Some of our customers are moving more
diligently, more pervasively, into sensors and
monitoring, and using that data to make better decisions.
It's problematic to transmit the data across
traditional networks. Mobile edge computing
becomes almost a requirement because we have
to keep the data local. We can mine the data and
only move the high-level analytics data across a traditional network.
We find that IoT is driving a mobile edge
computing experience for data collecting, for the
processing of data, especially in certain industries.
Think about what GE's doing, adding sensors to
machines. They're making what I call dumb
machines into smart machines so they can gather
the data and make better decisions.
This is driving mobile edge computing. This level of
need has been one of the bigger catalysts for edge computing.
Monica: Are there any applications or any specific
verticals that are ahead in the move to IoT?
Matt: We expect the manufacturing vertical to
pick up quickly. We're seeing some in the utilities
vertical, which would include energy.
Next would be transportation ? not just
transportation and shipping, but also receiving. It's
moving in and out of those four walls, tracking and
monitoring all inventory, and reporting in near real time.
In healthcare too, where we can help monitor
medicine shipments that need to remain below a
certain temperature, and where highly restricted
pharmaceuticals like OxyContin need to be
constantly managed to help avoid theft or diversion.
These applications are driving mobile edge
computing because of the analytics generated and
their use in near real-time decision-making.
Monica: You've talked a lot about analytics. That's
an interesting part because, oftentimes, we think
about edge computing as more secure and
residing more on the content side, but also it
allows you to manage your network resources
better, to optimize them better.
Do you think a lot of the analytics is also moving to
the edge because, as you say, it's much more
efficient? You're trying to optimize so you have all
the data there. No point sending it all the way back
to the core.
Matt: Our customers have not completely run to
an edge computing mode. It?s a hybrid approach.
They still have access to cloud assets that are not
necessarily in a centralized architecture. These
aren?t going away.
What's going away from the centralized cloud is
the massive amount of data being generated
locally. We are consuming data locally and
evaluating it locally for decision-making. We're
transmitting much smaller data sets up into the cloud.
The cloud environment hasn't dried up or
disappeared. It's still there, but we're leveraging
edge computing so we don't have to move all of
that data up into the cloud to make it happen.
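The pattern Matt describes, consuming and evaluating data locally and sending only compact summaries upstream, can be sketched in a few lines. This is a hypothetical illustration, not Verizon's implementation; the function and field names are invented for the example.

```python
from statistics import mean

def summarize(readings):
    """Collapse a window of raw sensor samples into the small record
    that an edge node would forward to the central cloud."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# A 1,000-sample local window collapses to a four-field summary,
# so only a tiny fraction of the raw data crosses the backhaul.
window = [20.0 + (i % 7) * 0.1 for i in range(1000)]
summary = summarize(window)
print(summary)
```

The raw window stays at the edge for local decision-making; only the summary record is transmitted, which is the bandwidth saving the interview refers to.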
Monica: Has it made it cost effective for Verizon,
as well, because you're going to have less cost for
transport and backhaul?
Matt: Right. The telecommunications capabilities
here are remarkable. But the ability to create data sets
and analyze the data has outpaced even that.
What I'm suggesting is that IoT, especially, but
mobile computing in general has created such a
large set of data that it has outpaced our ability to
move it all into a cloud.
But it hasn't outpaced our ability to secure it over
the network during transport. Analyzing it locally
and then moving what's needed to the cloud is
really the process that we see starting to happen.
Monica: One final question: As you look at the
next five years, what do you think will change in
mobile edge computing? What new challenges are
you going to try to address?
Matt: My crystal ball tells me that you're starting
to see network technologies that may swing the
pendulum back to a more centralized approach.
What's not going to stop is IoT and the ability to
mine data for insights into what it can offer
organizations. This is not stopping, it's
accelerating. The volume of data is growing and all
of this will only continue exponentially.
Also, standards are finally coming into place, both
for mobile edge computing and for 5G.
As our backhaul is much stronger with fiber, we're
connecting smart cities and smart businesses with
much higher-performance networks. And we'll be
using 5G, which has very low latency with
incredible speeds. As a result, we open up the
ability to move more of that data back to a centralized environment.
I see the idea of a hybrid IT or an organizational
model that meets exactly what each business
needs. If you're a movie studio, you're going to do
more edge computing because you're going to
deal with video that sits within the studio itself.
You're not going to move it, because you need to
take action on it there.
If you?re a manufacturer, you may need a large
capital investment to deploy all the edge
computing you need to leverage some of the
generalized IT environments.
Because of these points, with 5G I have the pipes
that are secure enough and have the low-latency
capability to move data to whatever environment I
need it in and to service those applications.
These are disruptive technologies. 5G will disrupt
mobile edge computing. Mobile edge won't go
away, but its trajectory might be disrupted
because of the new things we can do as we move forward.
About Verizon Wireless
Verizon helps organizations achieve better business outcomes and drive better customer experiences, simply,
securely and reliably. With our investments in superior technology like LTE Advanced, America's largest and fastest
4G LTE network, we deliver innovative solutions like mobility, IoT, cloud, security and telematics that can help you
connect people, places and things around the world.
About Matt Montgomery
Matt Montgomery is the director of marketing for Verizon's Wireless Business Group.
3GPP Third Generation Partnership Project
ABR Adaptive bitrate [streaming]
API Application programming interface
ATCA Advanced Telecommunications Computing Architecture
AWS Amazon Web Services
BBU Baseband unit
B-RAS Broadband remote access server
BTS Base transceiver station
CBRS Citizens Broadband Radio Service
CDN Content delivery network
CMTS Cable modem termination system
CORD Central Office Re-Architected as a Datacenter
COTS Commercial off-the-shelf [hardware]
CPE Customer premises equipment
CPRI Common public radio interface
CPU Central processing unit
C-RAN Cloud RAN
DAS Distributed antenna system
DDoS Distributed denial of service
DNS Domain name system
DPI Deep packet inspection
eNodeB Evolved NodeB
EPC Evolved Packet Core
ETSI European Telecommunications Standards Institute
FDD Frequency division duplex
FLIPS Flexible IP-based Services
FPGA Field-programmable gate array
GGSN Gateway GPRS support node
GPRS General Packet Radio Service
GPU Graphics processing unit
GTP GPRS Tunneling Protocol
HD High definition
HeNB Home eNB
HSS Home subscriber server
HTTPS Hypertext Transfer Protocol Secure
ICN Information-centric networking
IMT International Mobile Telecommunications
IoT Internet of things
IP Internet Protocol
IPsec Internet Protocol security
IT Information technology
L1 [OSI] layer 1
L2 [OSI] layer 2
L3 [OSI] layer 3
LTE Long Term Evolution
ME Mobile edge
MEC Multiple-access Edge Computing
MIMO Multiple input, multiple output
MME Mobility management entity
MVNO Mobile virtual network operator
NAP Network access point
NEBS Network Equipment-Building System
NFV Network Functions Virtualization
NFVI NFV infrastructure
NGMN Next Generation Mobile Networks
NR New radio
NSF National Science Foundation
OAM Operations, administration and maintenance
OCP Open Compute Project
OCR Optical character recognition
OEM Original equipment manufacturer
OTT Over the top
OVP Open Virtualization Profile
PAWR Platforms for Advanced Wireless Research
PGW Packet gateway
PoC Proof of concept
PoE Power over Ethernet
PTN Public telephone network
QoE Quality of experience
RAM Random access memory
RAN Radio access network
RAT Radio access technology
RAU Radio aggregation unit
RNC Radio network controller
RNIS Radio network information service
ROI Return on investment
RRH Remote radio head
SDK Software development kit
SDN Software-defined networking
SD-WAN Software-defined wide area network
SGW Serving gateway
SP Service Provider
STB Set-top box
TCO Total cost of ownership
TCP Transmission Control Protocol
TDD Time-division duplex
TG Throughput guidance
TGE TG entity
TIP Telecom Infra Project
TLS Transport Layer Security
UE User equipment
UHD Ultra-high-definition television
URLLC Ultra-reliable low-latency communication
vBRAS Virtual BRAS
vCPE Virtual CPE
vEPC Virtual Evolved Packet Core
VM Virtual machine
VNF Virtualized network function
VoD Video on demand
VoLTE Voice over LTE
VPN Virtual Private Network
vRAN Virtual RAN
• 5G Americas, Understanding information centric networking and Mobile Edge Computing, 2016.
• 5G Public-Private Partnership, The 5G Infrastructure Public Private Partnership: The next generation of communication networks and services.
• 5G PPP, 5G Vision – The 5G Infrastructure Public Private Partnership: The next generation of communication networks and services, Release 5.1.
• Beck, Michael Till, Martin Werner, Sebastian Feld, and Thomas Schimper, Mobile Edge Computing: A taxonomy, AFIN 2014.
• Bhardwaj, Ketan, Ming-Wei Shih, Pragya Agarwal, Ada Gavrilovska, Taesoo Kim, and Karsten Schwan, Fast, scalable and secure onloading of edge functions using AirBox, Georgia Institute of Technology, 2016.
• Chao Hu, Yun, Milan Patel, Dario Sabella, Nurit Sprecher, and Valerie Young, Mobile Edge Computing: A key technology towards 5G, White paper 11, ETSI, 2015.
• Cisco, Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016–2021, 2017.
• Ericsson, Ericsson Mobility Report, 2016.
• ETSI, Mobile Edge Computing (MEC) Terminology, ETSI GS MEC 001.
• ETSI, Mobile Edge Computing (MEC): Technical requirements, ETSI GS MEC 002 V1.1.1, 2016.
• ETSI, Mobile-Edge Computing (MEC): Service scenarios, ETSI GS MEC-IEG 004 V1.1.1, 2015.
• ETSI, Mobile-Edge Computing (MEC): Proof of concept framework, ETSI GS MEC-IEG 005 V1.1.1, 2015.
• ETSI, Mobile Edge Computing: Market acceleration: MEC metrics best practice and guidelines, ETSI GS MEC-IEG 006 V1.1.1, 2017.
• ETSI, Mobile-Edge Computing, 2014.
• Friedman, Thomas L., Thank You for Being Late: An Optimist's Guide to Thriving in the Age of Accelerations, 2016.
• I, Chih-Lin, Corbett Rowell, Shuangfeng Han, Zhikun Xu, Gang Li, and Zhengang Pan, Toward green and soft: A 5G perspective, IEEE Communications Magazine, 2014.
• I, Chih-Lin, Shuangfeng Han, Zhikun Xu, Qi Sun, and Zhengang Pan, 5G: Rethink mobile communications for 2020+, Philosophical Transactions of the Royal Society A, 2016.
• Intel, Real-world impact of Mobile Edge Computing (MEC), 2016.
• InterDigital, What the MEC? An architecture for 5G.
• International Telecommunication Union, IMT vision: Framework and overall objectives of the future development of IMT for 2020 and beyond, Recommendation ITU-R M.2083-0, 2015.
• Marek, Peter, Delivering elasticity to the mobile edge, Embedded.
• Miller, Linsey, How can operators make money with MEC? Artesyn.com.
• Paolini, Monica, Charting the path to RAN virtualization: C-RAN, fronthaul and HetNets, Senza Fili, 2015.
• Paolini, Monica, Massively densified networks: Why we need them and how we can build them, Senza Fili, 2016.
• Paolini, Monica, The smart RAN: Trends in the optimization of spectrum and network resource utilization, Senza Fili, 2015.
• Qwilt, Open Edge Cloud business case, 2016.
• Reister, John, It's MEC to the rescue for struggling mobile video performance, Telecoms.com, 2017.
• Vasona Networks, Are mobile networks ready for the next gaming craze? Vasona Networks, 2016.
Latest reports in this series:
Improving latency and capacity in transport for C-RAN and 5G. Trends in backhaul, fronthaul, xhaul and mmW
Massively densified networks. Why we need them and how we can build them
Voice comes to the fore, again. VoLTE and Wi-Fi Calling redefine voice
Getting the best QoE: Trends in traffic management and mobile core optimization
The smart RAN. Trends in the optimization of spectrum and network resource utilization
Charting the path to RAN virtualization: C-RAN, fronthaul and HetNets
LTE unlicensed and Wi-Fi: moving beyond coexistence
Watch the video of the interviews
About RCR Wireless News
Since 1982, RCR Wireless News has been providing wireless and mobile industry news, insights, and analysis to
industry and enterprise professionals, decision makers, policy makers, analysts and investors. Our mission is to
connect, globally and locally, mobile technology professionals and companies online, in person, in print and now on
video. Our dedication to editorial excellence, coupled with one of the industry's most comprehensive industry
databases and digital networks, leads readers and advertisers to consistently choose RCR Wireless News over other outlets.
About Senza Fili
Senza Fili provides advisory support on wireless data technologies and services. At Senza Fili we have in-depth
expertise in financial modelling, market forecasts and research, white paper preparation, business plan support, RFP
preparation and management, due diligence, and training. Our client base is international and spans the entire value
chain: clients include wireline, fixed wireless and mobile operators, enterprises and other vertical players, vendors,
system integrators, investors, regulators, and industry associations. We provide a bridge between technologies and
services, helping our clients assess established and emerging technologies, leverage these technologies to support
new or existing services, and build solid, profitable business models. Independent advice, a strong quantitative
orientation, and an international perspective are the hallmarks of our work. For additional information, visit
www.senzafiliconsulting.com or contact us at email@example.com or +1 425 657 4991.
About the author
Monica Paolini, PhD, is the founder and president of Senza Fili. She is an expert in wireless technologies and has
helped clients worldwide to understand new technologies and customer requirements, create and assess financial
TCO and ROI models, evaluate business plan opportunities, market their services and products, and estimate the
market size and revenue opportunity of new and established wireless technologies. She frequently gives
presentations at conferences, and writes reports, blog entries and articles on wireless technologies and services,
covering end-to-end mobile networks, the operator, enterprise and IoT markets. She has a PhD in cognitive science
from the University of California, San Diego (US), an MBA from the University of Oxford (UK), and a BA/MA in
philosophy from the University of Bologna (Italy). You can reach her at firstname.lastname@example.org.
© 2017 Senza Fili Consulting, LLC. All rights reserved. The views and statements expressed in this document are those of Senza Fili Consulting LLC, and they should not be inferred to reflect the position of the
report sponsors, or other parties participating in the interviews. No selection of this material can be copied, photocopied, duplicated in any form or by any means, or redistributed without express written
permission from Senza Fili Consulting. While the report is based upon information that we consider accurate and reliable, Senza Fili Consulting makes no warranty, express or implied, as to the accuracy of the
information in this document. Senza Fili Consulting assumes no liability for any damage or loss arising from reliance on this information. Names of companies and products here mentioned may be the trademarks
of their respective owners. Cover photo by Senza Fili, Löyly, Helsinki.