An interview with InterDigital’s Diana Pani

The 3GPP wireless standards are vital to our work and shape much of the core of our business at InterDigital. In light of the novel coronavirus and its impact on communities in every corner of the globe, we want to explore the potential impact the pandemic will have on the 3GPP releases scheduled for the coming years, namely Release 16 this year and Release 17 next year. As a future-looking company, we believe it's important to consider the foreseeable impacts the pandemic might have on the long-awaited 5G rollout and its evolution.
 
This week, InterDigital’s Communications team had a virtual chat with Senior Director of 5G Standards and Research Diana Pani, an active contributor to the standardization of radio access protocols within 3GPP, former RAN2 Vice Chair, and current chair for 3GPP 5G sessions, to better understand the state of 3GPP standards and outlook for 5G. Read on below.
 
This interview has been edited for length and clarity. 
 
IDCC: Diana, like many others at InterDigital, you have been directly involved with 3GPP standards work for years. We understand that Release 16 and Release 17 have now been pushed back three months due to the global pandemic?
 
Diana: Yes, Rel-17 and some aspects of Rel-16 have been officially pushed back by 3 months.
 
The most important aspect to consider is that the Rel-16 ASN.1 freeze date in June remains unchanged, but the functional freeze date has been postponed from March until June. ASN.1 is used to define the message syntax of certain protocols between the network and devices. Typically, we have three months between the functional freeze and the ASN.1 freeze, which allows us to do a thorough review and make corrections to both the functional aspects and the ASN.1. Given the importance of completing the ASN.1 freeze and the release on time, the 3GPP working groups are doing the ASN.1 review and completing the remaining functional aspects in parallel. The main target is to complete Rel-16 on schedule. This of course increases the chance of finding issues that cannot be solved in a backward-compatible manner, but that risk is always present, and we have means to deal with it.
 
From my perspective, 3GPP has not shifted Release 16 completion. There is a huge effort by the 3GPP community to keep the freeze dates by using virtual meetings and progressing discussions by email and conference calls. The June plenary session, where the freeze was and still is scheduled, was shifted by two weeks to allow more time to finalize the corrections and make Rel-16 more stable.
 
IDCC: Will the plenary session take place in the virtual space somehow to follow social distancing practices?
 
Diana: Yes. The March plenary took place about a week ago, and was all done by email.  The June plenary is also expected to be done in the virtual space. 
 
3GPP has actually been holding these virtual meetings since February, and every working group in 3GPP has tried different methods to move things forward. So, for example, RAN1 meetings, which address physical layer aspects, were done purely by email over two weeks, instead of the typical one-week in-person meetings. The RAN2 meetings, which I'm involved in, also took place over two weeks. In addition to email discussions we also had conference calls, which actually helped our progress significantly. Other groups are considering introducing conference calls in follow-up virtual meetings.
 
The virtual meetings in February allowed the groups to make a surprising amount of progress and complete an important part of the work.  This is why I think we will maintain the June timeline for Release 16. I know people were very skeptical about how much progress could be made over email and conference calls, but in the end, I think we were pretty productive. Of course, we were much less efficient than before, but we were still very productive.
 
IDCC: Why do you think these efforts were so productive?
 
Diana: Two aspects contributed to the progress we made. First, quite simply, everybody knew we were in an unusual situation because of the pandemic. Second, we are at the end of Release 16, so whatever remains open is likely to be very specific and detailed, while most of the more complicated and controversial issues that required face-to-face discussion were already completed by the end of last year. For Release 16, we were left with several small but detailed issues, and given the global circumstances, delegates were more willing to compromise and finish the release for the good of the whole industry, rather than fighting for specific individual objectives. There was an atmosphere of people wanting to compromise and move things forward, which was very nice to see. Of course, the virtual meetings were a lot more work for delegates and leadership, but 3GPP leadership did a great job organizing and facilitating the discussions in a way that encouraged progress and consensus.
 
IDCC: Like the rest of the world, we don't know how long this pandemic will endure, or how long we're going to have to practice social distancing. What kind of impact do you think this will have on the overall standards development outlook for the next couple of years?
 
Diana: That's a very good question and it's been a topic of discussion with 3GPP leadership. Leadership has suggested and hopes that we'll be back to normal functioning by August. I and a few others proposed that we should be more conservative and prepare as if we'll have no more face-to-face meetings until the end of the year and will need to continue meeting in a virtual space. What we're trying to do in 3GPP is find ways to make meetings more efficient. Every group is exchanging ideas on how we can make progress, assuming that we may have to do this virtually for a year.
 
IDCC: We've discussed Release 16, but what about Release 17? That release was shifted to December 2021, correct?
 
Diana: Yes. Rel-17 has been shifted by three months. When the first meetings were cancelled in February – coincidentally when Release 17 meetings were supposed to start – 3GPP leadership decided not to conduct any Release 17 work until groups could meet again face-to-face. The rationale behind that decision is that the beginning of a release always produces a lot of diverging views, and it's difficult to reach consensus unless you're having a coffee, a chat, or explaining the technical details face-to-face.
 
However, given the current projections for the pandemic, 3GPP leadership has decided that we will start Release 17 over virtual meetings.
 
IDCC: Do you think the pandemic will have a significant impact on the timelines or the efficiency of the standardization process? We don't know the timeline for the pandemic, and probably won't for some time -- how significant do you think that impact will be? 
 
Diana: It depends on how long it lasts, of course. I think it's inevitable that it will have an impact and delay things. Like I said, virtual meetings are not as efficient as meeting face-to-face. Whatever we could achieve in one week of face-to-face meetings now requires two weeks of emails and conference calls. Even then, I don't think we can achieve 50 percent of what we were achieving face-to-face in one week.
 
So of course, it's going to delay things, but at the same time, it might also force a prioritization of our features. Maybe 3GPP would consider prioritizing some of the most important features and re-scope Rel-17 work to complete them on time. The alternative is to slow down the entire release schedule and prolong the implementation of feature improvements in future releases. The other option being considered is hosting additional ‘ad-hoc’ meetings in January next year.
 
IDCC: Early industry analysis suggests that consumer demand for 5G wireless services may fall somewhat because consumers impacted economically by the pandemic may not have as much money to spend on services. Do you think the other aspects of 5G will follow suit? Given the 5G consumer focus on enhanced mobile broadband (eMBB) use cases, and increasing enterprise focus on ultra-reliable low latency (URLLC) and the massive machine type communication (mMTC) 5G use cases, will the pandemic’s effects be felt equally across all three corners of the spectrum?
 
Diana: I don't think it will necessarily impact one use case more than others. I think what could be impacted is the priority in which things are developed.  
 
The pandemic has inevitably impacted life as we know it, and certain things like remote diagnostics, surgery, etc. that require URLLC may become more important and necessary than ever with 5G.  Proximity detection, gaming, AR/VR, virtualization, etc. may also become very important and go up the priority list. At the same time, there are certain things within eMBB that still need to be improved to support some of the high data rate requirements of emerging use cases.
 
IDCC: Isn’t the eMBB use case increasingly important right now because so many people are in home isolation watching Netflix and streaming video, causing some video services to throttle down their streaming speeds because the demand is so high?
 
Diana: Right, but don't forget that some of the capacity issues are actually on the network side and not really on the wireless side.
 
IDCC: That's true.
 
Diana: I personally feel it's very difficult to know which use cases will be impacted. The way 3GPP works is, if they have time, they will address several use cases simultaneously, because they always prepare for the future. We're preparing for use cases that most customers don't even have in mind yet – everything we do today will not be deployed for another four years, at least.
 
So, I think the short-term impact won't be felt immediately. If we must prioritize, that's where we may feel the impact. The operators and industry players will let us know what's essential to them so that we can focus our attention accordingly. 
 
This scenario could actually re-scope Release 17 a little bit, but as of now, 3GPP is not planning on revisiting the scope of Rel-17. The plan is that 3GPP will get things done on time with this shift. For example, they are already considering adding new meetings during the year or next year (virtual, of course), in an effort to adhere to this new three-month timeline shift. 
 
IDCC: Thank you for sharing these considerations. To switch topics a bit, will the pandemic have any impacts on spectrum we should address?
 
Diana: I know that some of the International Telecommunication Union (ITU) meetings where future use of certain spectrum is discussed have been delayed, and some of the spectrum auctions are being postponed as well. That certainly could delay some of the operators from getting the spectrum they need for deployment.
 
IDCC: Do you mean that's because they don't have all of the spectrum they need right now for those deployments?
 
Diana: Well, I think some operators have already acquired some spectrum, but they also rely on future spectrum to expand and be able to provide all the services that they've promised or want to provide. So far, operators have purchased some spectrum in both the mmWave bands and the below-6 GHz bands. However, additional spectrum will depend on further auctions and availability. Until then, operators cannot plan for further deployments.
 
That might cause some delays, but most operators already have one part of the spectrum to kick off their initial 5G deployments and further 5G enhancements.
 
IDCC: Finally, what does this pandemic tell us about the importance of the wireless industry – and 5G – to the world?
 
Diana: First of all, I think it shows the importance of being able to stay connected, especially during these critical times and while the majority of the world's population is in full isolation. It's one of the first times I have started to truly appreciate the criticality and importance of having the technology we have today – allowing us to function remotely in so many aspects of our lives. We can stay in touch with family and friends, work from home, learn online, be diagnosed remotely without going to the hospital, and do almost anything from our phones.
 
If you look at what is going on right now, we see that 5G is being used for health monitoring, remote diagnostics for doctors, and 5G robots used in hospitals in Wuhan to protect staff from the virus. We'll even see an importance placed on supporting video gaming. Virtualization, a key feature of 5G, is also proving extremely important nowadays because everything has been moving towards the cloud and it is what allows us to function remotely. I think everybody now understands the importance of being able to be virtual and have remote capabilities. And 5G offers all those opportunities.
 
*****
 
Forward-Looking Statements

This blog contains forward-looking statements within the meaning of Section 21E of the Securities Exchange Act of 1934, as amended. Such statements include information regarding the company’s current expectations with respect to the impact the coronavirus pandemic will have on 3GPP wireless standards, the timeline for their development, and demand for 5G services. Words such as "expects," "projects," "forecast," “anticipates,” and variations of such words or similar expressions are intended to identify such forward-looking statements.

Forward-looking statements are subject to risks and uncertainties. Actual outcomes could differ materially from those expressed in or anticipated by such forward-looking statements due to a variety of factors, including, but not limited to, the duration and long-term scope of the ongoing coronavirus pandemic and its potential impacts on standards-setting organizations and the company’s business. We undertake no duty to update publicly any forward-looking statement, whether as a result of new information, future events or otherwise except as may be required by applicable law, regulation or other competent legal authority.

InterDigital is a registered trademark of InterDigital, Inc.
For more information, visit: www.interdigital.com.

December 16, 2019 / IEEE / Posted By: Roya Stephens

This week, InterDigital became the latest donor to the IEEE Information Theory Goldsmith Lecture Program, an award created to highlight the achievements of exceptional female researchers early in their careers, while providing opportunities to acknowledge and publicize their work. With our donation to the program, established this year in honor of Dr. Andrea Goldsmith, InterDigital stands alongside Microsoft, Intel, Nokia, Google, and others across the tech ecosystem in elevating female researchers and supporting diversity and inclusion both within and outside of our business.

InterDigital knows that a diversity of backgrounds and perspectives enables us to approach big problems, conduct complex research, and develop effective solutions that work across ecosystems and experiences, despite our modest size. We feel a strong responsibility to acknowledge the benefits of diversity in engineering and innovation, while also encouraging gender diversity of researchers in engineering and the field of information theory.

As part of the Goldsmith Lecture Program, each yearly award recipient will deliver a lecture of her choice to students and postdoctoral researchers at one of the IEEE Information Theory Society’s (ITSoc’s) Schools of Information Theory. In addition to driving inclusion and diversity of thought within the IEEE and the ITSoc, the Goldsmith Lecture Program helps provide more visibility and acknowledgment of the contributions of female researchers to technology. The impact of this program extends beyond IEEE and InterDigital, giving female researchers a platform to share their work while inspiring and encouraging new, more diverse students to explore and innovate with technology.

InterDigital is proud to support the Goldsmith Lecture Program and we congratulate the 2020 award recipient, Ayfer Özgür. Ayfer is currently an assistant professor in Stanford University's Electrical Engineering Department, and conducted her postdoctoral research with the Algorithmic Research on Networked Information Group.

To learn more about the IEEE Information Theory Society and the Goldsmith Lecture Program, please visit: https://www.itsoc.org/honors/goldsmith-lecture

December 4, 2019 / Posted By: Patrick Van de Wille

This week, InterDigital partner Avanci, a licensing platform focused on IoT and the connected car market, announced a new patent licensing agreement with Volvo Cars. After similar deals with a variety of carmakers including Volkswagen, Porsche, BMW, and others, the new announcement shows the great momentum that this transparent, simple licensing solution is generating.

As a founding member of Avanci since 2016, InterDigital joins industry leaders in making its portfolio of 3G and 4G standards-essential patents available to innovators and manufacturers across the IoT ecosystem through a simple, one-stop solution.

Avanci’s agreement with Volvo Cars brings new opportunities to the connected car ecosystem and raises the number of auto brands licensed through the marketplace to 14. In return, Volvo Cars will receive licenses to the 2G, 3G, and 4G essential patents licensed by InterDigital and Avanci’s 35 other marketplace participants, as well as future contributors.

Congrats, Avanci!

https://www.businesswire.com/news/home/20191202005918/en/Avanci-Announces-New-Patent-License-Agreement-Volvo

October 22, 2019 / Posted By: Roya Stephens

As a company dedicated to advanced research and development in wireless and video, we know that our innovations are only enhanced by partnerships with industry leaders and respected academic institutions. That’s why InterDigital is so proud that Dr. Mohammed El-Hajjar has been awarded the Royal Academy of Engineering (RAEng) Industrial Fellowship to work alongside InterDigital to support the evolution of 6G technology.

Dr. El-Hajjar, a professor at the School of Electronics and Computer Science at the University of Southampton, was recently awarded the prestigious RAEng Industrial Fellowship to spearhead a joint project with InterDigital to advance the research and design of transmission receivers for 6G wireless systems. Just 19 professors and researchers were awarded Industrial Fellowships this year, based on the caliber of their research proposals, direct support from an industry partner, and a clear and significant impact on industry. Other engineering fellowships were awarded for innovative research in plastic waste recycling, carbon dioxide capture, 3D-reconstructed human skin, and more.

The Royal Academy of Engineering Industrial Fellowship

Being awarded the coveted RAEng Industrial Fellowship is an excellent acknowledgment of the proposed benefit our research will bring to industry and validation of InterDigital’s industry leadership in developing the foundations for 5G and 6G networks.

“InterDigital is proud to work alongside Dr. Mohammed El-Hajjar on this project and join the more than 50 industrial partners that have supported RAEng Industrial Fellowship recipients over the past five years,” said Dr. Alain Mourad, Director Engineering R&D at InterDigital. “This is a nice validation of InterDigital’s industry leadership in developing the foundations for 5G and 6G networks alongside our global partners and top-of-class university professors.”

During his fellowship with InterDigital, Dr. El-Hajjar’s research will build upon several years of collaboration with InterDigital to advance the research and design of wireless transceivers for 6G systems. Specifically, Dr. El-Hajjar will design and develop new signal processing techniques based on the concept of Holographic Multiple-Input Multiple-Output (MIMO), which enables unprecedented data rates up to Terabits per second whilst mitigating the challenges of complexity, energy consumption and cost of large antenna arrays in Massive MIMO. The value of this collaborative research will be foundational for the long-term evolution of 5G into 6G.

“With mobile subscribers continuing to demonstrate an insatiable demand for data and billions of smart wireless devices predicted in future services for smart homes, cities, transport, healthcare and environments, the explosive demand for wireless access will soon surpass the data transfer capacity of existing mobile systems,” said Dr. El-Hajjar. “Achieving the vision of fiber-like wireless data rates relies on efficiently harnessing the benefits of massive MIMO and millimeter wave frequencies. A major challenge for achieving this vision is the design trade-off of the underlying cost, complexity and performance requirements of massive MIMO in future wireless communications.”

As a result, Dr. El-Hajjar’s research on Holographic MIMO will improve upon the current, state-of-the-art Massive MIMO framework. Today’s 5G New Radio (NR) networks have largely adopted Massive MIMO, a concept in which base stations are equipped with an array of antennas to simultaneously serve many terminals with the same time-frequency resource. Massive MIMO utilizes hybrid digital-analogue beamforming, in which the number of users, or streams, depends on the number of available radio frequency chains. While Massive MIMO has enabled high energy and spectral efficiency, scalability to the number of base station antennas, and the ability to employ simple data processing at the transmitter and receiver edge, this method faces several hardware impairments. Namely, hybrid beamforming requires a significant number of radio frequency chains and faces inaccuracies in the angular resolution of phase shifters in analogue beamforming.
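To make the radio frequency chain constraint concrete, here is a minimal numerical sketch of a hybrid precoder. It is purely illustrative (the array size, number of RF chains, and random weights are our own assumptions, not Dr. El-Hajjar's or InterDigital's design), but it shows why the number of simultaneously served streams is bounded by the number of RF chains rather than by the number of antennas.

```python
import numpy as np

# Illustrative hybrid digital-analogue beamforming dimensions (assumptions only).
num_antennas = 64    # antenna elements at the base station
num_rf_chains = 8    # available radio frequency chains
num_streams = 8      # streams served at once; cannot exceed num_rf_chains

rng = np.random.default_rng(0)

# Analogue precoder: phase-only (constant-modulus) weights, one column per RF chain.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(num_antennas, num_rf_chains))
F_rf = np.exp(1j * phases) / np.sqrt(num_antennas)

# Digital (baseband) precoder: maps the data streams onto the RF chains.
F_bb = (rng.standard_normal((num_rf_chains, num_streams))
        + 1j * rng.standard_normal((num_rf_chains, num_streams)))

# Effective precoder seen by the channel: antennas x streams.
F = F_rf @ F_bb
print(F.shape)                   # (64, 8)
print(np.linalg.matrix_rank(F))  # at most num_rf_chains, however many antennas there are
```

The rank check at the end is the point: adding antennas improves beamforming gain, but the stream count stays capped by the RF chain count, which is exactly the cost and complexity trade-off described above.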

Holographic MIMO

Dr. El-Hajjar and InterDigital’s joint project will center around the concept of Holographic MIMO, a new and dynamic beamforming technique that uses a software-defined antenna to help lower the costs, size, weight, and power requirements of wireless communications. In other words, the Holographic MIMO method implements a phased antenna array in a conformable and affordable way so that each antenna array has a single radio frequency input and a distribution network to vary the directivity of the beamforming. Utilizing machine learning tools within the Holographic MIMO design ensures a high level of adaptability and reduction of signal overhead at the transmitter and receiver levels, while enabling support for Massive MIMO that is 10 times greater than what is available in 5G NR today.

The Holographic MIMO technique will be foundational to the long-term evolution of 5G into 6G networks. Though this technique has a timeframe of five to ten years before it matures and can be implemented in future iterations of 5G NR, our collaborative research will enable unprecedented data rates while mitigating the challenges of cost, complexity, and energy consumption presented by large antenna arrays in Massive MIMO situations. This fellowship project also aligns with InterDigital’s ongoing research on meta-materials based large intelligent surfaces with the 6G Flagship program at the University of Oulu, as large intelligent surfaces include large antenna arrays that would require techniques like Holographic MIMO to support efficient and operational beamforming.

The year-long Industrial Fellowship will run until September 2020, and InterDigital’s collaboration with the University of Southampton on Beyond 5G intelligent holographic MIMO extends through 2022 as part of InterDigital’s sponsorship of a three-year PhD studentship program at the university.

September 26, 2019 / Posted By: Roya Stephens

InterDigital marked its debut at the IBC trade show in Amsterdam by showcasing five cutting-edge video demonstrations and taking home the Best of Show award for its Digital Double technology

A first impression is a lasting one. At last week’s International Broadcasting Convention (IBC) trade show in Amsterdam, InterDigital not only made its debut as a company with expertise and advanced research in wireless and video technologies, but also left a lasting impression with our award-winning technologies and contributions to immersive video.

Engineers from InterDigital's Home Experience, Imaging Science, and Immersive Labs at IBC 2019

 

Throughout the week, engineers from InterDigital R&I’s Home Experience, Immersive, and Imaging Science Labs in Rennes, France displayed their contributions to next-generation video coding standards, volumetric video frameworks, compression schemes, and streaming applications, as well as a cutting-edge tool to automate the creation of digital avatars in VFX, gaming, VR, and other video applications. At the end of the five-day convention, InterDigital received a prestigious prize and recognition of our significant work to enable immersive video and streaming capabilities of the future.

 

InterDigital Wins Best of Show for the Digital Double  

InterDigital received the IBC Best of Show award, presented by TVB Europe for innovations and outstanding products in media and entertainment, for our cutting-edge “Digital Double” technology. Developed in InterDigital’s Immersive Lab, the Digital Double tool improves upon the traditionally time- and labor-intensive 3D avatar creation process to automatically create a person’s digital avatar in less than 30 minutes! Although the Digital Double technology completely automates the avatar creation process, it also gives users the option to make stops and manually fine-tune the avatar at each step. Using a rig of 14 cameras, the technology computes a full 3D mesh of a person’s face and upper body from the cameras’ images to create more human-like avatars and a precise set of facial expressions for animation.

Bernard Denis and Fabien Danieau hold the IBC Best of Show award for the Digital Double
As we enter the 5G era of ultra-low latency and high bandwidth, video viewers will desire, and be able to enjoy, more immersive video experiences, and our Digital Double tool will become increasingly important to content producers.    

The Best of Show award recognized the Digital Double’s potential to enhance immersive video opportunities of the future, where individuals could see themselves in real time as a character in a film or on television, or even virtually participate in a game show alongside a presenter, contestants, and audience on screen. The Digital Double technology started at the highest end of the market, in Hollywood film production, and is likely to eventually make its way into the consumer mainstream.

The Digital Double’s foundational facial animation control for expression transfer (FACET) technology has already been used by production companies like Disney and Paramount in blockbuster films such as The Jungle Book remake and The Shape of Water. We are excited to explore this award-winning tech’s applications in virtual reality, gaming, and other immersive experiences where an individual’s digital avatar can be adapted to each context.

 

 

InterDigital’s Contributions to Tech Innovation in the Digital Domain

In addition to the Digital Double technology, InterDigital’s Research and Innovation teams displayed their advanced research to support next-generation and future video streaming capabilities. Laurent Depersin, Director of the InterDigital R&I Home Experience Lab, provided an overview of InterDigital’s contributions to video innovations during a panel discussion on “Technological Innovation in the Digital Domain.” Laurent spoke alongside peers from VoiceInteraction and Haivision to explore the innovations needed to support high-resolution and data-intensive applications for the video content of the future. You may view Laurent’s panel discussion here.

During his presentation, Laurent outlined new video applications that drive the need for technological innovation, as well as InterDigital’s Home Experience Lab’s commitment to develop technologies that both connect and improve user experience in the home. Laurent identified mass increases in video consumption, the popularity of interactive and immersive content like VR and gaming, and the trend towards ultra-high bandwidth and ultra-low latency content in the form of immersive communication and 8K video, as the key drivers of InterDigital’s innovative work in video technology.

Laurent Depersin outlines technological innovation in the digital domain
 

Versatile Video Coding: Improving on the High-Efficiency Video Coding (HEVC) Standard

Lionel Oisel demonstrates the enhanced capabilities of the VVC standard

InterDigital’s demonstration on Versatile Video Coding (VVC), presented by Michel Kerdranvat and Imaging Science Lab Director Lionel Oisel, reflects our work to develop cutting-edge tools that analyze, process, present, compress, and render content to improve the production and delivery of high-quality images.

The InterDigital R&I lab’s contribution to the VVC standard enhances the video compression efficiency of the existing High-Efficiency Video Coding (HEVC) standard published in 2013. Specifically, its demonstration compared the HEVC and VVC video standards and showed how VVC can compress and improve video delivery by lowering the bandwidth and bitrate required for Standard Dynamic Range (SDR), High Dynamic Range (HDR) and immersive, 360-degree video content.    
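As a rough illustration of what such a bitrate reduction means in practice, the sketch below compares an assumed HEVC bitrate with the bitrate needed for comparable quality under an assumed relative saving from VVC. Both numbers are placeholders chosen only to show the shape of the comparison; they are not figures from the IBC demonstration.

```python
# Back-of-the-envelope HEVC vs. VVC bitrate comparison (all figures are assumptions).
hevc_bitrate_mbps = 16.0    # assumed HEVC bitrate for a 4K SDR stream
assumed_vvc_saving = 0.40   # assumed relative bitrate reduction at comparable quality

vvc_bitrate_mbps = hevc_bitrate_mbps * (1.0 - assumed_vvc_saving)
print(f"HEVC: {hevc_bitrate_mbps:.1f} Mbps -> VVC: {vvc_bitrate_mbps:.1f} Mbps "
      f"({assumed_vvc_saving:.0%} lower bitrate for similar quality)")
```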

 

The Need for Point Cloud Compression for Immersive Video  

The InterDigital Imaging Science Lab’s demo on Point Cloud Compression, presented by Céline Guede and Ralf Schaefer, built upon the HEVC video coding standard to showcase the vital need for video compression mechanisms that make increasingly immersive and interactive video experiences in VR, AR, and 3D imagery possible.

Point Clouds are sets of tiny “points” grouped together to make a 3D image. Point Cloud has become a popular method for AR and VR video composition, 3D cultural heritage and modeling, and geographic maps for autonomous cars. While this method has many benefits, it is important to remember that each Point Cloud video frame typically has 800,000 points, which translates to roughly 1,500 MBps uncompressed – a massive amount of video bandwidth. To address this challenge, our Imaging Science Lab has participated in the development of a Point Cloud Compression method being standardized in MPEG to support widespread industry adoption of the Point Cloud format for immersive video. InterDigital showcased its video-based Point Cloud Compression capabilities in a Point Cloud-created AR video demo streamed to a commercially available smartphone in real time. This technique will support the crisp, low-latency deployment of immersive video experiences through existing network infrastructure and devices.
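To see where a figure of that magnitude comes from, here is a back-of-the-envelope sketch. The per-point payload sizes and the 30 fps frame rate are our own assumptions (real captures vary widely with coordinate precision and attributes), but they show why uncompressed point cloud video quickly outgrows what networks can carry.

```python
# Rough uncompressed point cloud data-rate estimate (payload sizes and frame rate
# below are illustrative assumptions, not figures from the InterDigital demo).
points_per_frame = 800_000
frames_per_second = 30

payloads = [
    ("lean payload (10-bit geometry + RGB, ~7 bytes/point)", 7),
    ("rich payload (higher-precision geometry, normals, etc., ~60 bytes/point)", 60),
]

for label, bytes_per_point in payloads:
    megabytes_per_second = points_per_frame * bytes_per_point * frames_per_second / 1e6
    print(f"{label}: ~{megabytes_per_second:,.0f} MB/s uncompressed")
```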

Ralf Schaefer displays Point Cloud compression on a commercially available smartphone
 

The Challenges and Potential for Volumetric Video  

In concert with our efforts to compress and deliver high bandwidth video, InterDigital R&I’s Immersive Lab also demonstrated its innovative work to enhance immersive experiences that meet our interactive media demands. To give context to the importance of its technological contributions, Immersive Lab Technical Area Leader Valérie Allié delivered a presentation on the challenges and potential of volumetric video and the various applications in which it might be deployed.  

   

Valérie Allié delivers a presentation on the opportunities of volumetric video content

 

Volumetric video is hailed as the next generation of video content where users can feel the sensations of depth and parallax for more natural and immersive video experiences. As AR, VR, and 3D video become a more mainstream consumer demand, providers will require tools to deliver the metadata necessary to produce a fluid, immersive or mixed reality video experience from the perspective of each viewer. As a result, content providers may face challenges in maintaining high video quality while supporting user viewpoint adaptation and low latency.  

MPEG Metadata for Immersive Video: A Roadmap for Volumetric Video Distribution  

Valérie Allié and Julian Fleureau’s demo on MPEG Metadata for Immersive Video outlined both the steps to create volumetric video and the requisite format for its distribution. Unlike flat 2D video experiences, volumetric video is much larger and cannot be streamed over traditional networks. In addition, volumetric video requires the capture of real video through camera rigs, the development of computer-generated content, the creation of a composite film sequence using VFX tools, and the interpolation of a video’s view to create a smooth, unbroken rendering of immersive content from the user’s point of view.                    

 

Addressing the Challenges of Six Degrees of Freedom (6DoF) Streaming

Visitor experiences InterDigital's 6DoF streaming video capabilities on a VR Headset

The significance of the MPEG codec for immersive and volumetric video was put on display in the InterDigital R&I Home Experience Lab’s Six Degrees of Freedom (6DoF) streaming demo, presented by Charline Taibi and Rémi Houdaille. 6DoF refers to the six movements of a viewer in a 3D context: heave for up and down movement, sway for left and right movement, surge for back and forward movement, yaw for rotation about the normal axis, pitch for rotation about the transverse axis, and roll for rotation about the longitudinal axis.
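As a concrete illustration, a 6DoF viewer pose is simply three translations plus three rotations. The sketch below uses hypothetical field names (this is not the MPEG or InterDigital format) to show the kind of pose update a volumetric renderer consumes each time the viewer moves.

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Hypothetical 6DoF viewer pose: three translations and three rotations.

    Translations (metres): sway (left/right), heave (up/down), surge (back/forward).
    Rotations (radians): yaw (normal axis), pitch (transverse axis), roll (longitudinal axis).
    """
    sway: float = 0.0
    heave: float = 0.0
    surge: float = 0.0
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

# Example: the viewer steps 0.3 m forward and turns slightly to the left;
# the renderer re-projects the volumetric content for this new viewpoint.
pose = Pose6DoF(surge=0.3, yaw=0.15)
print(pose)
```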

Using a computer-generated video streamed through a VR headset, the demonstration showed how the standards and codecs developed by InterDigital’s labs can be utilized to stream fully immersive volumetric video with six degrees of freedom over current network infrastructure.

The demonstration achieved a seamless and immersive experience by streaming only content from the viewers’ point of view.  

InterDigital left a lasting impression on all who visited our IBC booth and networking hub and experienced the Research and Innovation Labs’ innovative demos. We are excited to play a role in the pioneering compression solutions and streaming capabilities that will drive and enable the immersive video experiences of the future.

IDCC at MWC19 Panel Series Feeds  |  InterDigital Booth Demos at MWC19

Updated on February 28, 2019 at 4:56 PM CET

The Mañaners band on Facebook

We met them pretty much by accident: we were looking for a demo to do with a video company we were collaborating with, and the folks at that company suggested streaming video from a local band playing around Barcelona. We got them to come play our booth on the last day, getting guest passes for what appeared to be an incongruous bunch of folks for a mobile industry trade show. Their name is The Mañaners, a made-up word that loosely translates as “The Tomorrow Guys,” which is kind of what we try to be too.

One wearing shorts and gray-brown dreadlocks to his waist, one porkpie hat-wearing guy with an accordion. A six-and-a-half-foot tall Brazilian carrying a miniature Latin percussion set and wearing a smile that beams like a thousand suns. Two horn players, who would walk around as they played, forming an impromptu conga line. A Chilean percussionist who would slap a box for rhythm and punctuate each performance with a flamenco dance delivered with a mix of abandon and austere seriousness.

They drew an enormous crowd and we were transfixed. They’ve now been our Mobile World Congress band for seven years. Seven years! And, in a trade show world where what you did last year generally gets thrown out the next and entire pieces of furniture are built, installed, and chucked in a dumpster after five days, every year we grow closer together. We like to innovate at InterDigital and the band were an innovation; we were certainly the first company at Mobile World Congress to bring a band in and close each day with a little beer-sprinkled show, and I know this because I recall the negotiations with show staff about sound levels, their representative standing next to our booth with a decibel meter.

Why a band? It started out as, “they draw a crowd, so great.” But over the years it’s become more than that. We have people who come every year to see them, planning some of their MWC agenda around our end-of-day event. It caps off a hard day with some energy, for everyone, our booth staff and our guests. They’ve recorded a song for our employees.

But it is still more than that. One of our regulars, a Dutch lady who works for a global telecom company, said it best after Tuesday’s show: “This trade show is all about electronics, beep boop beep, robots trying to teach us how to smile, but these guys are so human. They’re the most human thing about Mobile World Congress.”

Fundamentally, this is what our industry should be about. We at InterDigital call it “Creating the Living Network,” but what does that mean? It means that the network becomes more alive each year, growing in capabilities, becoming more autonomous, more capable, more enmeshed with our lives. But it also means that the entire purpose of the network and of our devices is to enable our living, enable us to become more human, not less. We’re part of the living network.

This year, there was competition. We were nervous. The neighboring booth, one of the largest telecom companies in the world, had picked up on the band thing and had a very slick production, a lounge singer in a magnificent cocktail dress and a bowtied DJ who was presented as “an international artist.” The lounge singer sang perfect notes. The DJ programmed some electronic beats.

The contrast was incredible.  Fede took to our stage with his beaten-down Spanish guitar and dreadlocks swaying, his singing almost primal. The horn players lifted their instruments, which convert human breath, not digital signals, into music. The rhythm section laid down a Brazilian/reggae beat, smiling at each other. The only binary option was the black and white of accordion keys… and the crowd began to abandon the lounge/DJ act and gather around our booth. After a couple of songs we had about 60 people swaying, heads bobbing, beers in hand, while the lounge act was down to a half-dozen attendees, likely booth employees. Even at a trade show about technology, the human still wins hands down.

- The InterDigital Communications Team

Updated on February 28, 2019 at 12:34 PM CET

Recap post: Open Source - Collaboration is the Key

In much the same way as 5G has caused players throughout the industry to rethink business and technology models, it has created a shift in thinking in the areas of technology licensing and open source. "Gone are the days when we think 'open source' means free. That's not telecom, that's 20 years ago with a completely different movement," said Arpit Joshipura, general manager for networking & orchestration of the Linux Foundation. Joshipura was speaking on a panel discussion this week at MWC19, hosted by InterDigital, on the subject of what role open source will play in the evolving 5G ecosystem.

What open source means today has been redefined in the 5G context, according to Joshipura. The goal, he says, is to "establish governance, policies and practices that foster collaboration based upon the need of the project. As we move into different areas of the network, and as we move into different areas of technology, if the governance is set up properly, you get the best of both worlds."

Fostering collaboration is a worthy goal for any industry, yet it's not always easy when licensing agreements are enmeshed with global standards and a broad range of companies. New models will necessarily evolve and allow this collaboration to flow more smoothly while respecting the needs of standards bodies, patent holders and licensees.

In particular, open source software (OSS) licensing will play a role in fostering collaboration, while complementing the initiatives of standards organizations. The moderator of this panel at the InterDigital booth was Axel Ferrazzini, managing director of 4iP Council. This organization recently produced a white paper on this topic, which can be found HERE. The paper states:

"Given that most standards development organizations (SDOs) have pre-existing intellectual property rights (IPR) policies based on fair, reasonable, and non-discriminatory (FRAND) access to essential patents, a key challenge for SDOs has therefore been to determine how OSS licensing could coinhabit with SDOs’ existing IPR policies. This has led to considerable discussion and debate and some confusion."

It's a new but essential challenge facing the industry, particularly for the mobile network operators, as the radio access network is moving toward being more open in the future.

"In order to get that dream realized, our whole industry needs to work together to find a sustainable future together," said Dr. Chih-Lin I, Chief Scientist of Wireless Technologies for China Mobile Research Institute. "We are facing a great brand-new opportunity. Finally, we are at a point where we are seeing true deep convergence of information, communications, and electronics technology."

Linux Foundation’s Joshipura added historical context. "This is a 142-year-old industry," said Joshipura. "It was never open since the phone was invented, but in just the last three years we have completely transformed from a walled garden to an open architecture."

"Nobody can beat shared IP and collaboration," Joshipura added. "No single vendor is smart enough or has the hundreds of millions of dollars of R&D to create this ecosystem. Collaboration is king."

- The InterDigital Communications Team

Updated on February 28, 2019 at 11:42 AM CET

Moving From Visionary Views to Using This to Fix That

As we did last year, we’ve been hosting media partners at our booth – major media names in the wireless space who use our booth’s soundstage to host interviews with some of the leaders of our industry. Our media friends this year were Light Reading, Telecoms.com, TelecomTV, and The Mobile Network, and we’re grateful for the value they bring to our booth.

Today, Ray Le Maistre of Light Reading interviewed Luis Jorge Romero, Director General of the European Telecommunications Standards Institute (ETSI), and an interesting topic came up. Mr. Romero mentioned that, among the newest areas that ETSI is being asked to look at is standardization of the communications for vessels and port authorities around the area of shipping.

You would think there would be one way for a diversity of shipping companies, public authorities, ports and other players in a massively important industry to communicate. In fact, there is not: "Right now there are a variety of point solutions but no standardized method of communications, and the result is less efficiency than there could be," said Mr. Romero.

It was surprising to hear IoT, which is generally expressed in such vague, "visionary" terms, explained so directly and clearly in relation to a specific use case. Imagine, though, an industry that is so crucial, so pervasive, so global, and that cuts so horizontally across all industries from agriculture to energy to manufactured goods: if even a 5% efficiency gain could be found, or 10%, imagine the global impact. We're used to hearing people talk about how technology can transform the world. Here was an important person in our industry saying, specifically, it would be nice if we could use this to fix that.

That the topic arose also highlights the importance of standards bodies. Many vendors meet with many companies, and if we aggregate those meetings over time you end up with an industry conversation – and often a conversation that is characterized by vendors telling companies what problems they have (whether or not those companies share that concern) and what solutions they have for them. But in this case, it is a specific example of an industry with an issue reaching out to a body that represents another industry, which in turn can mobilize everyone to solving a problem that is real.

“The more we put the technology on the table and the more people engage, the more ideas we’ll have,” said Mr. Romero in his interview. Simply, and beautifully, put.

- Patrick Van de Wille

Live Video Feed: IDCC at MWC19 from Hall 7, Stand 7C61

Updated on February 27, 2019 at 3:15 PM CET

It is the nature of hot topics to be high on hype, and low on detail. The words “edge computing” have been incredibly hot at MWC19, and indeed InterDigital’s own booth contains not one, not two, but three demos looking at various aspects of edge computing. But to start fleshing out the topic, InterDigital also brought together some of the leading thinkers on edge developments, on a panel moderated by Bob Gazda, Senior Director, InterDigital Labs.

Experience the Recorded Panel Session >>

There is a great deal of excitement about edge computing and virtualization throughout the industry, and this fever is not likely to cool off anytime soon. This is largely because, at the most basic level, edge computing is really about a new way of thinking – a shift from a fully centralized client/server network computing model toward a model where certain network and computing resources are pushed further away from the core of the network, out to customer equipment such as commodity servers and on-premises access points, and even into consumer, commercial and industrial devices themselves.

When this paradigm becomes a widespread reality, gone are the days where every request from a device has to travel all the way to the network core in order to be processed. Devices will communicate with each other, and will in effect communicate with the network itself. This is a major shift taking place in the industry, to put it mildly. It also represents a fusing of mobile networks and basic IT, where specialized telecom equipment gives way to largely software- or cloud-based services operating on standard hardware like IT server rooms.

As with any major shift, there are challenges ahead that must be addressed. Some of these challenges are technical, some are regulatory, and some are business challenges.

What is driving this shift in the industry? What applications will require this kind of approach? First, there's an overall trend toward major improvements in latency, which is probably the single biggest overriding factor driving edge computing. When we improve latency and reduce delay, more things are possible – it has an unlocking effect.

"You can take existing cloud applications that run in the cloud happily, and bring them to the edge to improve their performance, latency and customer experience," said Dr. Rolf Schuster, Director of the Open Edge Computing Initiative. Alternatively, certain applications are what Schuster calls "edge-native" meaning that they need the edge or they won't work otherwise. "For example, a head-mounted display that actually needs the low latency in order to function... it wouldn't work otherwise," he said. "We're also seeing applications around drones and automotive that need the edge."

There's some confusion around just what is meant by "the edge," that is, where the border between the network and the device lies. Arturo Azcorra, Director of IMDEA and Co-Founder of 5TONIC, suggested that there will probably need to be multiple borders – there will be a first category of edge that is very close to the user (which he calls "extreme edge"); a second category of edge slightly further away from the user, probably in the base station; and there will also be the cloud. This three-layered model is important, he says, "because it will add huge flexibility to address many different types of applications."

In addition to flexibility, this layered model also leads to discussions of service-based architecture (SBA). As the design of the network shifts away from a core-network dependent model, SBA provides a path forward for the industry. "If you want to host services close to the end user dynamically, you can't stop at the mobile core network," said Dirk Trossen, Senior Principal Engineer, InterDigital Labs. "You have to include the radio access network, and the actual devices themselves as well."

It's important that the industry not forget about the "service" aspect of SBA. "Typically the telco industry has placed too much emphasis on functionality, which drives things toward this point-to-point type of architecture," said Todd Spraggins, Strategy Director of Oracle's Communications Global Business Unit. "What's been refreshing with SBA is the notion of having a service that's API-defined, that will let people use innovation to create or consume those services."

Experience the Recorded Panel Session >>

Finally, one of the most interesting and potentially disruptive aspects of edge computing and the fusion of mobile and IT is the potential industry transformation it may trigger. As Laurent Depersin, Director Research and Innovation for Technicolor's HOME Lab, points out, some edge services may not be provided by traditional telcos, and third-party providers will likely see the edge transformation as a trigger to enter the industry. "I see huge opportunities for verticals: transportation, energy, facility management," said Depersin. "Maybe we'll see new actors joining the market and trying to sell this new resource."

- The InterDigital Communications Team

Updated on February 27, 2019 at 8:23 AM CET

Our very own Alan Carlton – Vice President of InterDigital Labs – was honored to be included in a panel discussion in the mainstage auditorium complex here at Mobile World Congress today. The panel's topic was one on the minds of a lot of network operators these days: the economics of high-frequency bands. 

It's long been recognized that as radio access networks are deployed in these bands, network capacity increases greatly, but there's a downside too: high frequencies don't travel as far. Higher frequency networks will have to be much more densely deployed to ensure adequate coverage.

This session featured speakers from Huawei, T-Mobile, and Ericsson. Chaobin Yang from Huawei talked about technical considerations for decreasing cost per bit of data transmission and reception, and the balance that must be achieved between smaller equipment and larger antenna arrays, with massive MIMO being a very attractive option.  

Karri Kuoppamaki of T-Mobile provided an operator’s viewpoint, highlighting the need for a multi-band approach, with millimeter wave bands covering dense urban areas, mid-frequency bands for the broader metro area, and lower bands outside the metro areas. "Millimeter wave is not the only frequency band for 5G," he said.  

That approach was echoed by Thomas Noren, head of 5G Commercialization for Ericsson. Noren also emphasized a multiband approach, noting that such an approach allows lower frequency bands to carry more of the traffic that doesn't need high-band connections. Noren also reminded the audience of the fact that even an ultimate 5G deployment will probably include some 3G (and certainly some 4G/LTE) technology for the next several years.  

Joining Carlton on the panel were Dr. Li Fung Chang, the 5G program architect for the Industrial Technology Research Institute in Taiwan; and Tiago Rodrigues, general manager of the Wireless Broadband Alliance.  

In consensus with the other speakers and panelists, Chang stated that massive MIMO, carrier aggregation and spectrum sharing were going to be good techniques for the development of 5G, but that there are many practical issues to be considered as well.

Carlton discussed how InterDigital's 10-plus years of work in millimeter wave spectrum has made it a strong proponent of a progressive roadmap in the millimeter wave space. "We very much believe in the economics of high-frequency spectrum," he said. "There's lots of reasons for that, mainly the applications: the fronthaul, backhaul and fixed wireless access are commercial products. You can go buy them and deploy them, experiment with them and become experts in millimeter wave technology through them."

He went on to describe how when 5G new radio was first envisioned, it was never a one-part story that was going to happen below 6 gigahertz. The promise of 5G -- particularly the 100 Mbps bandwidth -- was always going to necessarily involve high-frequency spectrum working cleverly in concert with lower frequency bands. There simply was never going to be a way to achieve the demands of 5G without such an approach. 

Another important point to remember, Carlton said, is that the cost of millimeter wave spectrum at auction has fallen dramatically. "Observationally, that spectrum is costing on the order of 40 times cheaper than sub-6 GHz spectrum," he emphasized. "If you marry that fact to the vision of 5G -- that we will one day get to the vision of a more open ecosystem in the RAN -- I think it paints a very positive story for the economics of millimeter wave and high-frequency spectrum's application."

Where millimeter wave technology gets very interesting, in Carlton's view, is with regard to small cell technology. Carlton says that he sees two ways to approach this from an economic perspective. The first, he says, is to piggyback onto legacy small cell technology. With the majority of early 5G NR deployments happening in urban metro areas, the industry can effectively deploy millimeter wave small cells on the backs of LTE small cells. This will allow the carrier to manage the costs on a case-by-case basis instead of doing a massive infrastructure buildout all at once.  

The panel and presentations were technically focused, but taking a step back from the presentations one thing was clear. Unlike previous generations of cellular standards, which delivered a single, definable set of technology capabilities and spectrum requirements, 5G involves a broad variety of solutions, for a broad variety of use cases and mobilizing a diverse array of spectrum and network assets. For operators, and for equipment companies, that brings both risk and opportunity. Fascinating times ahead as 5G deploys over the next years.

- The InterDigital Communications Team

Updated on February 26, 2019 at 5:11 PM CET

At one time the world had reliable digital voice signals on mobile devices and 3G was under development and people wondered, “what’s the use? What need does it meet that’s unmet?” Then Apple revolutionized the handset form factor and made it a vehicle for web-based data, and the reason for 3G became clear. But then the world had solid web-based data to the phone and people weren’t 100% clear on what the use was for 4G. But then streamable video services and video enablement on social platforms became a thing, and the need for 4G was clear.

Right up to the launch of 5G, we’ve been hearing much of the same thing: “I can already get great HD video to my phone with 4G – what could we possibly need 5G for?” InterDigital hosted a panel on that topic this afternoon at Mobile World Congress, and we haven’t looked at a transcript or created a word cloud, but one word kept coming back, over and over again, that might offer a clue about eventual use cases: latency.

If you look at what we have today, we have reliable delivery of video, yes, but much of that video is pre-produced, encoded on servers, and when you click on it you get a pause, a little spinning circle maybe, and THEN you get the content. And you don’t care: a second or two to wait for the buffer to fill and the content you wanted to flow is irrelevant since you’re not time-bound. The process isn’t interactive: it’s request made and then request filled.

But it’s becoming clear that that won’t be good enough in a 5G world. The use cases that people are talking about – future workplaces, pure interactivity, real-time engagement and tailoring of remote content – needs the extremely low latency that 5G brings. Our own edge-and-fog 5G-Coral demo at MWC involves a phone interacting with a 5G edge server and controlling the view from a 360-degree camera, and the immediacy of the experience is startling. As a non-engineer, I’ll describe it as follows: we used to have demos that went along the lines of “see, you move the handset and now you can see the view moving to match it.” Our demo this year is more about watching the view change as you move the device. It has some 5G latency in it, maybe a touch more than what we’ll eventually see in 5G, but it feels immediate.

It’s not clear what the use case on that will be, but you can sense its possibilities: sports watching where you can dynamically change your view based on what you’d like to see during a live event. A virtual workplace that feels real. Remote doctors interacting with onsite medical personnel in real time. Group gaming that is simply a level beyond what we have today. But the sense is growing on me that the step-change from previous generations won’t be around speed, it will be about latency.

- Patrick Van de Wille

Updated on February 25, 2019 at 5:23 PM CET

Below you can read the report of the first panel we hosted at MWC19, on the topic of Immersive Video. It led me to some thoughts on how research streams eventually come together into an overarching solution that drives a new use case, a new business, a new experience.

The panel was discussing Volumetric Video and immersive experiences, with our guest from Fraunhofer mentioning 1.6 terabytes of data per minute. Do the appropriate napkin math, verified for me here by one of our very serious senior engineers, and that yields a bandwidth need of about 26 Gb/s to deliver an immersive view based on 4 screens at a time. You also run into latency issues that need to be addressed, as well as computing resource restrictions – both of which are being addressed by edge computing and edge network technology.
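For the curious, here is one plausible reconstruction of that napkin math. It assumes the quoted 1.6 terabytes per minute covers the full 32-camera capture described below and that only 4 of those views are delivered at any moment; those assumptions, and the variable names, are ours, not the panel's.

    # Illustrative napkin math: bandwidth to stream 4 of 32 volumetric camera views.
    # Only the 1.6 TB/min figure comes from the panel; the rest is assumption.
    TOTAL_BYTES_PER_MIN = 1.6e12   # 1.6 terabytes captured per minute (quoted on the panel)
    TOTAL_VIEWS = 32               # cameras in the capture rig (assumed to match the studio described below)
    VIEWS_DELIVERED = 4            # views actually streamed at a time (assumption)

    total_gbps = TOTAL_BYTES_PER_MIN * 8 / 60 / 1e9            # ~213 Gb/s for the full capture
    delivered_gbps = total_gbps * VIEWS_DELIVERED / TOTAL_VIEWS
    print(f"Full capture:    {total_gbps:.0f} Gb/s")
    print(f"4 views at once: {delivered_gbps:.1f} Gb/s")       # ~26.7 Gb/s, i.e. "about 26 Gb/s"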

Listening to the presentation, I was struck by how discrete research streams can come together at a point in the future to yield a solution. At the time folks are working on research, the use case can seem impossibly far out – and, let's face it, it's business innovators who generally come up with the business ideas, not necessarily research scientists. And research is necessarily specialized and highly involving. It can be tough to look up from it and see the linkages. For example, at some point different researchers were working on location services, e-commerce, and communications technology, possibly only dimly aware of each other. At some point, those capabilities combined to form ride-sharing.

So here was a video researcher talking about 32 cameras yielding an enormous number of screens and an enormous amount of data, and requiring ultra-low latency. Thirty feet away, one of our teams was demonstrating a technology that provides a scalable method for selecting relevant immersive streams so that not all the screens need to be delivered to the device, saving bandwidth and computing resources. Fifteen feet away from that, our edge computing and connectivity people were showing a technology that enables large video streams to be processed at the edge, reducing latency to a minimum.

Eventually, it’s easy to see the possibility of all three technologies being implemented at once, in the same solution. That solution might be called immersive sports. It might be called the collaborative workplace of the future. And that’s how it all comes together.

- Patrick Van de Wille

Volumetric Video: Science Fiction or Reality? 

Updated on February 25, 2019 at 3:50 PM CET | WATCH THE FULL PANEL NOW >

Virtual reality and augmented reality have been scaling the hype curve for many years now. The technology is exciting and impressive, and filled with possibility. It is also still in a stage of development where it faces a range of very real technological challenges. But recent developments in immersive image and video capture and editing, coupled with advancements in display and streaming/distribution technology, appear to be drawing us all closer to a time and place where those challenges will be more completely addressed.

At MWC19 in Barcelona today, a fascinating panel discussion took place around the topic of immersive video. Moderated by Gael Seydoux, Director of Research and Innovation for Technicolor, the panel comprised three executives at the forefront of technology development in this space.

WATCH THE FULL PANEL NOW >

One of the demos at InterDigital's MWC19 booth is a volumetric photo booth that uses a 16-camera array to capture depth and volume via parallax, providing an immersive experience without the need for headsets. "We need volumetric video today for VR experiences," said Valerie Allié, Technical Area Leader of Light Field and Photonics for Technicolor. "When you experience VR, you feel that something is missing, and what's missing is the volumetric effect, especially for real video. What we're demonstrating here at MWC19 is that volumetric video can be experienced on a smartphone or another 2D display, and you do have a different experience than standard video."

Allié's comments were contrasted by those of Mimesys CEO Rémi Rousseau, who discussed work his company was doing to develop real-world front-end applications for this technology – specifically, future workplace and collaboration capabilities. One such application is what he describes as a sort of "holographic conferencing" application – akin to the holodeck we may remember from Star Trek or the holographic communications seen throughout the many Star Wars films. "We're fortunate to have 40 years of science fiction that shows us the path for volumetric communication," Rousseau said with a laugh. "We realized that the 'killer' use case for VR & AR is about communication, about presence, about the feeling of being there with someone."

While companies like Mimesys are doing their development work largely using off-the-shelf capture sensors like Microsoft Kinect, Dr. Ralf Schaefer, Director of Research and Innovation at the Fraunhofer Institute for Telecommunications, is taking a more academic and research-scale approach to solving these complex problems. Using a framework of immersive videoconferencing and a studio studded with no fewer than 32 high-resolution cameras, Schaefer and his team are working in part to define what volumetric video really means and how it can be applied.

"The problem with videoconferencing today is that you have a camera, which looks at you; and through the display you always look down because the camera looks above you," says Schaefer. "So we started to look at the problem of how we establish eye contact and correct the viewing angles to create a more realistic conferencing experience."

According to Schaefer, what "volumetric" really means in the video context is that you have a computer-built 3D model that can be manipulated and viewed from all sides, but using real people as the source imagery. This is all very complex, and these videos create mountains of data – 1.6 terabytes, in fact, for each minute of volumetric videoconference, according to Schaefer's research.

Not only is there a significant data management and bandwidth challenge, there's a processing challenge as well. "We're able to render any view on a subject with our video capture methods," says Allié. "If we reduce the number of cameras we use for capture, we can reduce the amount of time required to process the images." It's certainly a balance between image quality and processing speed/latency, and the data processing challenges are still fairly monumental.

"Real-time processing is probably not feasible at the moment," says Rousseau. "It's too much data right now to achieve the very low latency we need for a truly real-time experience."

But the industry is working hard to overcome these challenges. The image quality is improving, and some hybrid experiments look promising as intermediate solutions. Part of the panel discussion involved a theoretical way that a high-resolution volumetric still image of a subject could be the basis for some computer-assisted animation. The highest quality images that can be captured today are usually captured in a highly controlled studio environment. This suggests that consumer applications for volumetric video may be further away than applications in the enterprise, industrial, and entertainment sectors.

"We are confident that this technology could get to real-time capability in the future," says Schaefer "But it likely won't be in the home right away."

WATCH THE FULL PANEL NOW >

All in all, the volumetric video space is certainly going to be an interesting one to watch over the next several years, as improvements in image capture, bandwidth and latency help carry what was once a science fiction fantasy into reality.

- The InterDigital Communications Team

MWC and Avoiding “Show of Everything and Nothing” Syndrome 

Updated on February 25, 2019 at 10:43 AM CET

Having been to Mobile World Congress yearly now for a period of well over a decade, I’ve seen this conference evolve. It has evolved alongside this industry, which is the most transformative industry the world has seen since… maybe since agriculture. And MWC has evolved beautifully: it is certainly the most interesting and impressive industry conference in the world.

And yet every year, that marveling at the incredible evolution of the show is accompanied by a sense of fear that this conference, which has grown enormous, will reach and exceed that point where a show simply becomes too big. What has saved this show from that fate has been the combination we see in the wireless industry of incredible diversity of solution but tremendous unity of purpose: to connect things, new things, better, faster, and more seamlessly.

My fear is rooted in seeing the evolution that has taken place at that other tech industry mega-show, CES in Las Vegas. There was a time when the consumer electronics industry was small enough to be unified around a handful of themes: major home electronics, gaming and toys, home entertainment, perhaps some automotive. That made CES a great show. But then what we saw was the integration of electronics capability into everything, and so CES became a show about everything. Everything and nothing. One year, to get from a meeting to a wireless company booth visit, I had to cut across three halls showing exercise equipment, vacuum cleaners, plush toys that spoke, massage chairs, the car stereo section…

This year in Barcelona, I’ve seen the first possible stirrings of that. As we entered the convention center, one of the outdoor relaxation areas, with chairs and tables, was fully taken over by Husqvarna, the Swedish makers of everything from motorcycles to chain saws. The theme of the area was the company’s wireless autonomous lawnmowers.

Now don't get me wrong, I have no issue with what I assume is a fine product. But if MWC becomes a show that includes companies that simply sell a product that includes a wireless connection, we'll be in trouble. Because quite soon, the world will contain many, many products that include wireless connections. Drones. Windows. Scooters. Medical equipment. As our CEO Bill Merritt is fond of saying, in the future if something can produce some sort of data, it will be connected.

I’m not worried yet. Walking the halls and looking at the major demos, there’s still a unity of purpose that makes this show great, and the GSMA’s content people have uncompromisingly made the conference portion the highlight of the wireless year, with tremendous topics – some general, some quite technical, all relevant. InterDigital is lucky to have two people participating in that, the third year in a row we’ve been asked to speak. But every year I wonder how long we’ll be able to balance the continued growth of the industry and expansion of wireless into new areas with the focus that has made MWC great.

- Patrick Van de Wille

Live Updates from IDCC at #MWC19

Posted on February 25, 2019 at 9:00 AM CET

Greetings from Barcelona! Our team is here, and all are putting the final touches on everything in preparation for Mobile World Congress 2019. As with every year, there are many exciting events planned for the upcoming week, and we'll be writing about them here. If you haven’t already, you may want to bookmark this page to read updates from the show floor, and to watch our live video feed. As they are published, these posts will also be linked via our Twitter and LinkedIn pages, using the hashtags #MWC19 and #IDCCatMWC19.  

We're hosting a live panel series each day at our booth, featuring a global collection of industry thought leaders. More detail on these panel discussions and a list of speakers can be found at https://www.interdigital.com/post/interdigital-live-from-mobile-world-congress-2019#. We'll post recaps, summaries and insights after each of those panel sessions, and other content as well.  

There will also be a live video feed from the booth available on this page during show hours all week.   

If you're at the show, be sure to stop by for a visit -- Hall 7, Stand 7C61 -- to see technology demonstrations of our radio and core network test beds, some new VR streaming technology, an emulator for researching autonomous vehicle safety via edge computing, and discussion of 5G standards work.  

We look forward to seeing you and hope you enjoy MWC19!  

- The InterDigital Communications Team

 

January 24, 2019 / MWC19 / Posted By: The InterDigital Communications Team

IDCC at MWC19 Panel Series Feeds  |  IDCC at MWC19 Live Blog Feeds

Stop by Hall 7, Booth 7C61 any time to meet with our world-class engineers, engage with our demos, and see what's activating the next generation of wireless and video experiences. We'll be showcasing ground-breaking demonstrations, including Edge Computing (5G-CORAL and AdvantEDGE), 5G Testbeds (Network and Radio), 5G Standards and Beyond, and Immersive Video and Video Standards.

5G Edge – Autonomous Drones on AdvantEDGE  

This demonstration highlights the benefits of Edge Computing combined with 5G Ultra Reliable Low Latency Communications (URLLC) – to realize a Mission Critical Automation application vertical.  See how drones operate and interact autonomously in a dense urban environment, navigating obstructed views due to buildings by reporting telemetry and sensor information to an Edge-enabled Collaborative Collision Detection and Avoidance (DAA) service, deployed at the “extreme edge” of the network (e.g. point of access sites).   Learn how InterDigital realized this application and others utilizing our AdvantEDGE platform, an agile mobile edge emulation environment. 
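To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of pairwise conflict check an edge-hosted detect-and-avoid service could run on the telemetry drones report over the 5G link. It is not code from AdvantEDGE or from the demo; the class name, thresholds, and units are all assumptions.

    import math

    # Hypothetical sketch: drones report position/velocity telemetry to an
    # edge-hosted service, which flags pairs on a near-term collision course.
    class EdgeDetectAndAvoid:
        SAFE_DISTANCE_M = 15.0   # assumed minimum separation
        HORIZON_S = 10.0         # assumed look-ahead window

        def __init__(self):
            self.telemetry = {}  # drone_id -> ((x, y, z), (vx, vy, vz))

        def report(self, drone_id, pos, vel):
            """Called by each drone over the 5G (URLLC) uplink."""
            self.telemetry[drone_id] = (pos, vel)

        def conflicts(self):
            """Return drone pairs predicted to breach separation within the horizon."""
            ids = list(self.telemetry)
            alerts = []
            for i in range(len(ids)):
                for j in range(i + 1, len(ids)):
                    (p1, v1), (p2, v2) = self.telemetry[ids[i]], self.telemetry[ids[j]]
                    dp = [a - b for a, b in zip(p1, p2)]   # relative position
                    dv = [a - b for a, b in zip(v1, v2)]   # relative velocity
                    dv2 = sum(c * c for c in dv)
                    # Time of closest approach, clamped to the look-ahead window
                    t = 0.0 if dv2 == 0 else max(0.0, min(self.HORIZON_S,
                        -sum(a * b for a, b in zip(dp, dv)) / dv2))
                    dist = math.dist([p + v * t for p, v in zip(p1, v1)],
                                     [p + v * t for p, v in zip(p2, v2)])
                    if dist < self.SAFE_DISTANCE_M:
                        alerts.append((ids[i], ids[j], round(t, 1)))
            return alerts

    # Example: two drones converging on the same point above a street
    daa = EdgeDetectAndAvoid()
    daa.report("drone-A", (0.0, 0.0, 30.0), (5.0, 0.0, 0.0))
    daa.report("drone-B", (50.0, 5.0, 30.0), (-5.0, 0.0, 0.0))
    print(daa.conflicts())   # [('drone-A', 'drone-B', 5.0)]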

5G-CORAL -- Virtual Reality Video Streaming

This demonstration highlights the benefits of micro-services based distributed computing at the extreme edge of the 5G network. Visitors will experience E2E 360 video streaming deployed cost-efficiently across three tiers of computing nodes (low, medium and high end), all under unified orchestration and control. The demo features two 360-degree cameras, each capturing a separate event happening in a different area of the booth, and users can view video streaming from either camera location, using a smartphone or Oculus Rift goggles, while consuming only a fraction of the bandwidth.

5G Core Network Testbeds

See how InterDigital's Service-based Architecture (SBA) platform is currently being deployed and evaluated in Bristol and Barcelona in multi-partner initiatives such as the Horizon2020 FLAME and the UK-funded Smart Tourism projects. The demo will show performance benefits across the network, through Layer 2 multicast, as well as at the terminal. These benefits include increasing the flexibility of network-wide service deployments as mobile applications install and facilitate dynamic mobile function offloading, which improves usability and user experience beyond current smartphone-centric applications.

5G Radio Access Testbeds

Learn how mmWave is a continuing part of the 5G evolution, from testbeds to new standards initiatives. Take a remote look at live demonstrations from our labs in New York, which showcase our 28GHz NR development platform with beam management between UE and gNB.  Get a firsthand view of our available research platforms including 60GHz Edgelink™, a NR Modem Processing Unit, and a 28GHz phased array Mast Head Unit.

360° Video Experience with Adaptive Streaming

This joint demo by InterDigital and Technicolor Research & Innovation will showcase 360° video streaming from 13 pre-recorded camera views, each with 2Kx2K color and depth, stored in HEVC video format on an MPEG-DASH server. Based on viewer position and orientation, seven camera views are adaptively streamed to the DASH client. Video streaming segments are decoded, and views are synthesized and rendered to the display in real time at 30 fps. See how a head tracker tracks a human face, observe motion parallax, and use a joystick to track 360° video views.
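As a rough sketch of the selection step only (not the demo's actual implementation; the scoring heuristic, function names, and the rule of picking seven views are assumptions based on the description above), a DASH client might rank camera views against the viewer's pose like this:

    import math

    # Hypothetical sketch: pick the N camera views best matched to the viewer.
    # Each camera is described by its position and the direction it faces.
    def select_views(cameras, viewer_pos, viewer_dir, n=7):
        """cameras: list of (camera_id, position (x, y, z), facing unit vector)."""
        def score(cam):
            cam_id, cam_pos, cam_dir = cam
            # Prefer cameras close to the viewer...
            distance = math.dist(cam_pos, viewer_pos)
            # ...and facing roughly the way the viewer is looking.
            alignment = sum(a * b for a, b in zip(cam_dir, viewer_dir))
            return distance - 10.0 * alignment   # crude weighting; a real client would tune this
        ranked = sorted(cameras, key=score)
        return [cam_id for cam_id, _, _ in ranked[:n]]

    # The client would then request only the DASH representations for these
    # seven view IDs, instead of all 13 stored on the server.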

Beyond 5G Standards Evolution

Learn about what’s happening in 5G Standards, and what's coming next for Beyond 5G technology.

Volumetric Photobooth

Volumetric content is the next generation of video, generated by a set of multiple cameras known as a Light Field Acquisition system. Technicolor Research and Innovation will demonstrate image acquisition with a multi-camera set-up, calculating a volumetric portrait that will be rendered on any smartphone for an immersive viewing experience. See how volumetric video can lead to an immersive and interactive experience on a simple 2D screen, sensing depth and parallax, like watching a scene through a real window.

oneTRANSPORT® Data Marketplace

Data Exchanges and marketplaces are critical in collaboration and the creation of new Smart City services. This demo will showcase Chordant’s oneTRANSPORT Data Marketplace commercial service. We will demonstrate features, such as signing up for the service, uploading and downloading real-time and historical data, setting up licensing and monetary terms, open and selective sharing of data, visualization of data, and the creation of consumption and billing reports. One of the newest elements we will show is the consumption and sale of premium data from a large company that specializes in connected car services and transportation.

Smart City, a System of Systems

Smart Cities are complex ecosystems encompassing many players, subsystems, verticals, devices, types of connectivity, and services. Interoperability is key to tying everything together into a perfect System of Systems. The Chordant platform, based on the oneM2M standard, is one such enabler of interoperability. This demo will showcase the use of Chordant's platform in a smart city supply chain scenario. The demo will feature tracking of items at farms, in transit, at warehouses, refrigeration units, grocery stores, and restaurants in a manner that enables improved efficiency and quality assurance. Different types of devices, services and forms of connectivity will be featured, including Wi-Fi, Zigbee, and LPWA networks such as LoRa and NB-IoT.

IDCC at MWC19 Live Blog Feeds  |  InterDigital Booth Demos at MWC19


Don’t miss our daily “CREATORS at MWC” panel series designed to move us to the extreme edges of the network and the brink of our collective imaginations. We’ve assembled some of the best and the brightest industry thought leaders -- representing China Mobile, European Broadcasting Union, Ericsson, FLAME, Mimesys, Network2030, Nokia, Open Edge Computing, Samsung, US PAWR, Technicolor, Telenor, Vodafone, 5G VINNI, 5G Smart Tourism, 5TONIC and others – to spark fresh ideas in-person and via webcast at InterDigital.com.   Our studio stage will also feature interviews and discussions with media partners, including Light Reading, The Mobile Network, TelecomTV and Digital TV Europe.  

CREATORS at MWC19 Panel Series
Live from Hall 7, Booth 7C61
  

Immersive Video: Augmenting the View 
Recorded on: Feb. 25, 2019 at 11:30 AM CET 

 
The nature of video experiences is changing, and technology is opening new dimensions. This panel will focus on the potential of Augmented Video for immersive experiences as used in Light Field and videogrammetry capture setups and displayed on Virtual and Augmented Reality devices. Discussion will revolve around capture and editing specifications and use cases, as well as the display constraints and distribution formats required to produce compelling immersive user experiences for video games and interactive/linear AR/VR/XR content.

Panelists:
  • Valerie Allié, Technical Area Leader, Light Field and Photonics, Technicolor
  • Rémi Rousseau, CEO, Mimesys
  • Dr. Ralf Schaefer, Director, Video Division, Fraunhofer HHI 
Moderated by  Gael Seydoux, Director, Research and Innovation, Technicolor
 
 

How Far Can Edge Computing Take Us to a New Network Architecture?
Recorded on: Feb. 25, 2019 at 3:30 PM CET 

 
The edge computing and virtualization fever shows no signs of abating. There seems to be real promise in taking it deeper into the network architecture by leveraging the computing capability emerging in new customer premise equipment (CPE) and user equipment (UE) such as vehicles, robots, and drones. Yet the challenges, both technical and economic, in taking this step deeper into UEs and CPEs are numerous, from technical considerations of mobility, volatility and heterogeneity to business and regulatory considerations of multiple providers (including the user), security and privacy. This panel will discuss how edge computing is becoming more pervasively distributed into the network, including UEs and CPEs, what technologies are involved, what is wrong with today's approach to edge computing, what value there is in driving this down to devices, and what the barriers are to making this happen.

Panelists:
  • Dr. Rolf Schuster, Director, Open Edge Computing Initiative
  • Laurent Depersin, Director Research and Innovation HOME Lab, Technicolor
  • Arturo Azcorra, Director of IMDEA and VP and Co-Founder, 5TONIC
  • Todd Spraggins, Strategy Director, Communications Global Business Unit, Oracle 
  • Dirk Trossen, Sr. Principal Engineer, InterDigital and FLAME Technical Manager
Moderated by Robert Gazda, Sr. Director, Engineering, InterDigital 
 
 

How Will Open Source Play a Role in the Evolving 5G Ecosystem?
Recorded on: Feb. 26, 2019 at 10:30 AM CET 

 
Open source is reshaping technology adoption and is a critical driver for the transformation taking place in many industries. The emergence of SDN and NFV has fueled the growth of cloud-based services, and now, with the convergence of information technology (IT) and communication technology (CT), the telecommunication industry is changing as well. Widespread adoption of open source projects such as the OpenAirInterface Software Alliance (OSA) and the O-RAN Alliance is now transforming the RAN. We'll explore the role of open source in U.S. and European testbeds and its potential transformational impact on SDOs, telecom business models and 5G innovation.

Panelists:
  • Raymond Knopp, President, OpenAirInterface Software Alliance
  • Chih-Lin I, Chief Scientist, Wireless Technologies, CMRI, China Mobile
  • Abhimanyu Gosain, Technical Program Director, Northeastern University College of Engineering and Strategic Member, OSA
  • Arpit Joshipura, General Manager, Networking & Orchestration + Edge/IOT, Linux Foundation 
  • Fred Schreider, Sr. Director, Engineering, InterDigital

Moderated by Axel Ferrazzini, Managing Director, 4iP Council

 

5G Network Trials: What Are All the Verticals Doing? 
Recorded on: Feb. 26, 2019 at 3:30 PM CET 

 
Many 5G trials are currently being undertaken in preparation for the rollout and start of 5G throughout 2019, with a focus on showcasing the higher speeds that 5G originally promised. But the wider industry is also looking beyond those promises towards verticals such as media broadcast, transportation and rural rollouts, and preparing for future beyond-5G efforts with experimentally driven research platforms. In this panel, we will bring together deployment insights from broadcasters and neutral host providers such as smart cities, as well as European efforts on large-scale 5G trials and international efforts on large-scale wireless research platforms. We will discuss the approaches taken for trial deployments, e.g., based on white box approaches, as well as the use cases considered in those trials, to shed light on what is there beyond an enhanced Mobile Broadband 5G trial.

Panelists:
  • Dan Warren, Head of 5G Research at Samsung R&D Institute  (5GVINNI)
  • Darko Ratkaj, Senior Project Manager, European Broadcasting Union
  • Gael Seydoux, Director, Research and Innovation, Technicolor
  • Monique Calisti, CEO, Martel Innovate and Director, Next Generation Internet Outreach Office (FLAME)
  • Jim Burgess, Bristol City Council 5G Programme Lead 
Moderated by Dirk Trossen, Sr. Principal Engineer, InterDigital Europe
 
 

Beyond 5G: What is Coming Next? 
Recorded on: Feb. 27, 2019 at 3:30 PM CET 

 
The first set of complete technical specifications and solutions towards the well-publicized 5G vision, aka the IMT-2020 proposal, is due to be presented to the ITU at WRC in October 2019. The recent shift of focus from research and development to deployment and commercialization of 5G systems concurrently unveils the "what comes next?" question, embodied in an overarching beyond-5G vision (e.g. 6G). This panel will address fundamental questions pertinent to the beyond-5G era, such as the shortcomings of 5G and whether a unified industry consensus on a beyond-5G vision has started to take shape. Bringing together experts from core elements of technology generations, the panel will further explore the emerging pillars of beyond-5G, including networks, terminal architectures, and radio, along with the evolution and disruption anticipated in each of these pillars. The panelists will shed light on beyond-5G inter-networking candidates, e.g. the ultra-flat network vision that radically alters the conventional (including 5G) OSI stack; potential link-level connectivity disruptors in terms of new frequency bands, PHY, and beyond-Shannon ideas; new device/terminal definitions that go well beyond conventional, monolithic hardware/software device architectures; and the role of AI/ML as a cross-cutting tool in beyond-5G.

Panelists:
  • Patrick Waldemar, Vice President Research, Telenor
  • Colin Willcock, Head of Radio Network Standardization, Nokia and Chairman, 5G-PPP 5G-IA
  • Mostafa Essa, RAN AI and Data Analytics Distinguished Engineer, Vodafone
  • Abhimanyu Gosain, Technical Program Director Northeastern University and Technical Program Director of US PAWR
Moderated by Alan Carlton, VP, InterDigital Europe
 
 

Hear from InterDigital senior leaders, Alan Carlton and Jim Nolan, on the MWC main stage.

MWC Main Stage Featuring InterDigital Executives
  • Tuesday, February 26th, 1:00 - 2:00 p.m. | 5G Deployment in High-Frequency Bands are Uneconomic
    Alan Carlton, Vice President, InterDigital Labs
  • Tuesday, February 26th, 3:30 - 4:30 p.m. | Cashing in on Industrial Data Conference
    Jim Nolan, Executive Vice President, InterDigital

Learn what’s on the leading edge, leading up to mass 5G proliferation and activation in 2021 and beyond.

Stop by any time to meet with our world-class engineers, engage with our demos, and see what's activating the next generation of wireless. We'll be showcasing ground-breaking demonstrations, including Edge Computing (5G-CORAL and AdvantEDGE), 5G Testbeds (Network and Radio), 5G Standards and Beyond, and Immersive Video and Video Standards, and there will be plenty in store to excite your senses, including daily live music featuring The Mañaners.

InterDigital Booth Demonstrations

  • 5G Edge – Autonomous Drones on AdvantEDGE
  • 5G-CORAL -- Virtual Reality Video Streaming
  • 5G Core Network Testbeds
  • 5G Radio Access Testbeds
  • 360° Video Experience with Adaptive Streaming
  • Beyond 5G Standards Evolution
  • Volumetric Photobooth
  • oneTRANSPORT® Data Marketplace
  • Smart City, a System of Systems

Learn More

September 19, 2018 / Posted By: Patrick Van de Wille

The 39th annual IEEE Sarnoff Symposium is taking place September 24-25, and InterDigital Senior Director Robert Gazda will be presenting an invited talk. The IEEE Sarnoff Symposium has brought together telecom and communications experts from industry, universities, and governments since 1978. It is a premier forum for researchers, engineers, and business executives, with keynotes, invited talks, expert panels, tutorials, demos, exhibits and poster presentations.

Robert's talk is titled "Edge Computing in Emerging 5G Networks". Attendees will discover options for deploying and integrating Edge Computing into the 5G system. Edge Computing is a foundational technology that enhances network flexibility, efficiency, and bandwidth utilization while meeting demanding 5G KPIs for ultra-reliable/low-latency applications, including automated factories and autonomous vehicles.

Robert will discuss key challenges for deploying and managing the 5G Edge Cloud such as orchestration, network slicing and resource demand coordination across stakeholders. He will also talk about how Edge Computing is being supported within the 3GPP 5G specifications, and go over a few 5G Edge Cloud deployment scenarios as compared to Edge in 4G.

“Edge Computing in Emerging 5G Networks” is slated for Monday, September 24 from 1:50 p.m. (13:50) to 2:10 p.m. (14:10) in Ballroom A. The symposium is being hosted at the NJIT Campus Center in Newark, N.J. For more details on the 39th annual IEEE Sarnoff Symposium, visit the event website.

March 27, 2018 / IoT, oneM2M / Posted By: Kelly Capizzi

Recently, the oneM2M™ Technical Plenary Chairman presented Catalina Mladin, Member of Technical Staff, Convida Wireless™, with an award for her outstanding contributions to the development of oneM2M standards.

Catalina was nominated by a prominent North American operator, AT&T, for her contributions towards the oneM2M Rel-3 specifications in the area of interworking oneM2M to underlying 3GPP networks via the SCEF T8 interface. On March 12, she received the award at the oneM2M Technical Plenary Meeting held in Dallas (pictured below).  In addition to Catalina, Bei Xu of Huawei and Wolfgang Granzow of Qualcomm were also recognized for the highest standards of excellence, innovation and quality.

Formed in 2013 as a joint venture between Sony Corporation of America and InterDigital, Convida Wireless is focused on research into the future of connectivity and the Internet of Things (IoT). Comprised of 8 of the world’s leading ICT standards bodies, 6 global fora and SDOs, and over 200 companies, oneM2M is the global standards initiative for Machine to Machine Communications and IoT.

This is a great achievement for Catalina, the Convida IoT Team and InterDigital. Please join us in congratulating Catalina on her accomplishments!

 

 

February 16, 2018 / Posted By: Patrick Van de Wille

Of course, not many companies in the tech space have the history that InterDigital does: we were founded one year after Intel introduced the first microprocessor, and the same year that Wang introduced the first word processor and Pong was launched. We thought it would be great if we dug into the vaults, pulled together some of our material, and created a series of short videos on some of the most important moments in our history as a company. The result is available at https://www.interdigital.com/history/

We take pride in the fact that many of our shareholders, partners, alumni and others have a long history of association with the company, many dating back to our International Mobile Machines (IMM) days. For those who’ve been part of the extended InterDigital family for a long time, I hope you’ll enjoy the comments and memories of Gary Lomp, Fatih Ozloturk, Brian Kiernan, and others whose time with the company has marked our history, as well as footage of the first Ultraphone deployments, the Reagan Ranch, the Avianca crash, and other oddball moments that have marked our first 45 years. 

January 22, 2018 / IoT, Smart Cities / Posted By: Kelly Capizzi

Global Smart Cities IoT technology revenues are expected to exceed US$60 billion by 2026, according to a recent report by ABI Research, a market-foresight advisory firm providing strategic guidance on the most compelling transformative technologies. The report, titled "Smart City Market Data," provides an in-depth understanding of connections, technologies, and revenues across all major Smart City segments, as well as insight into the ecosystem in terms of suppliers and initiatives.

ABI Research highlights InterDigital’s Smart Cities-focused business, Chordant™, as a key Smart City IoT solutions and platforms provider alongside industry-leading peers that include Cisco, PTC, Microsoft, Huawei, Nokia, NVIDIA, Verizon, Siemens, IBM, SAP, and Amazon. The firm states that only IoT technology suppliers that are addressing specific challenges cities are facing will win, and critical success factors include ecosystem support, standards-based interoperability, guaranteed technology lifecycle management, and more.  

The report also cited the fastest-growing verticals, which include EV charging stations and micro-grids, smart waste management and environmental sensors, smart parking, and smart street lighting. Another recent report on Smart Cities and Cost Savings, commissioned by InterDigital, found that smart street lights are expected to cut repair and maintenance costs by 30 percent, contributing to potential savings of as much as $4.95 billion annually for governments.

Reports such as these from ABI Research reinforce how important the role of Smart Cities will be in both economic and social terms for our future. However, it is imperative that governments, enterprises, and citizens work together to deliver the true potential of Smart Cities.  

To learn more about the report, please click here, or for more IoT research, click here.

December 1, 2017 / Posted By: Patrick Van de Wille

We’d like to congratulate our friends at Avanci, the patent licensing platform that provides licenses to cellular standards-essential patents for the Internet of Things, which we joined in 2016. Through the summer they’ve been growing, adding Vodafone, Panasonic and Sharp to the platform, increasing the value for licensees. And today, they announced their first licensee: German car maker BMW. Their news announcement is available at the link below. Congratulations! 

http://avanci.com/release/bmw-group-becomes-new-licensee-avanci-platform-securing-license-standard-essential-patents-cellular-standards/

November 20, 2017 / SEA, 5G, FLIPS, MEC, NFV, SDN / Posted By: Kelly Capizzi

Device virtualization is on its way and may become a reality far faster than you can imagine. A recent BizTech article explores the road to device virtualization and how 5G can make service endpoint agents (SEA) a reality in potentially less than a decade.  

So, let’s back up – what is a SEA? A SEA device will work differently than your current smartphone. In the device virtualization paradigm, the same principles that allow virtualization across data centers or the abstractions of EPC elements in the cloud are applied to enable the dynamic decomposition of functions in a device into executable tasks. These functions may then be assigned for execution wherever this may be optimal, e.g., in another end user device or in a nearby Edge node. According to Alan Carlton, Vice President, InterDigital Europe, “the essential notion of the SEA is that any user interface can become yours when you need it,” as quoted in the BizTech article.  
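As a purely hypothetical sketch of that decomposition idea (this is not how SEAs are specified anywhere; every name and number below is an assumption for illustration), a device runtime might decide per task whether local execution or a nearby edge node would finish sooner:

    # Hypothetical sketch: decide where a decomposed device function should run,
    # by comparing estimated completion time locally vs. on a nearby edge node.
    def place_task(task_cycles, input_bytes,
                   local_cps, edge_cps, uplink_bps, rtt_s):
        """Return 'local' or 'edge' for one executable task.
        task_cycles: CPU cycles the task needs
        input_bytes: state that must be shipped to the edge
        local_cps / edge_cps: device and edge compute speeds (cycles per second)
        uplink_bps: 5G uplink throughput, rtt_s: round-trip latency
        """
        local_time = task_cycles / local_cps
        edge_time = rtt_s + (input_bytes * 8) / uplink_bps + task_cycles / edge_cps
        return "local" if local_time <= edge_time else "edge"

    # Example (all numbers assumed): a 2-gigacycle task with 1 MB of state,
    # over a 100 Mb/s uplink with 10 ms round-trip latency
    print(place_task(2e9, 1e6, local_cps=1e9, edge_cps=20e9,
                     uplink_bps=100e6, rtt_s=0.01))   # -> "edge"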

While there have been steps toward device virtualization, a mature 5G network will be required for SEAs to be fully functioning devices that offer benefits to end-users. SEAs require the increase in internet speeds and lower latency that 5G is expected to bring by 2020. However, Ted Rappaport, professor of electrical engineering at New York University’s Tandon School of Engineering, explains in the article that, “it will take a few years for the 5G technology to mature and reduce in price before we see SEAs, but the world will begin to use them in the early to mid-2020s.”  

InterDigital has been an active contributor to wireless standards for over four decades and currently is focused on helping establish the next generation of wireless. In August, the company announced the world’s first successful Mobile Edge Computing 5G network architecture trial that unveiled new IP networking technology expected to form part of the 5G network architecture.  

Click here to read the full article or for more on device virtualization and SEAs, click here.

October 20, 2017 / Posted By: rachel.rorke

Earlier this month, the Philadelphia Business Journal announced that InterDigital's Desa Burton has been named as a 2017 "Veteran of Influence."  This award recognizes those who not only excelled in their military careers, but have also turned their experiences into successful business careers.

Desa currently serves on the Board of Directors for the United Service Organizations of Pennsylvania and Southern New Jersey (Liberty USO), an organization whose mission is to enhance the quality of life of U.S. Armed Forces personnel and their families. As a member of the Liberty USO Career Transitions Committee, Desa will lead a new initiative focused on STEM education for transitioning military service members and veterans. InterDigital, as the founding sponsor of this initiative, is joined by other Liberty USO partners using the software developer training platform offered through Zipcode Wilmington. Through Desa's efforts, this initiative will enable service members and veterans to transition from military service into highly sought-after software development careers.

In addition, Desa has a long history of service in support of veterans' programs and initiatives.  Desa was recently selected to serve as the Philadelphia area coordinator for the United States Naval Academy Alumni Association Women's Shared Interest Group.  In that capacity, Desa will coordinate the efforts and interests of women alumni of the United States Naval Academy in the local area.  She is also a former and founding member of various veterans affinity bar associations, affinity groups, and programs.

Desa is a graduate of the United States Naval Academy.  Prior to her legal career, she served as a Surface Warfare Officer in the U.S. Navy and U.S. Naval Reserve, achieving the rank of Lieutenant Commander.  Desa completed the following assignments while proudly serving her country:  Engineering Officer onboard U.S.S. Briscoe (DD-977, Destroyer) responsible for a division of twenty personnel and the ship’s emergency response teams; Landing Craft Officer for Assault Craft Unit 2 responsible for five Landing Craft Utility and their crews totaling sixty personnel deploying with the U.S.S. Saipan (LHA-2); Counternarcotics Operations Staff Officer (Colombia) for U.S. Southern Command coordinating assistance provided by the United States to the Colombian counternarcotics brigade; and Staff Officer for U.S. Central Command assisting with counterinsurgency operations following the 9/11 terrorist attacks.

The Philadelphia Business Journal will be hosting a special awards ceremony on November 9 at the Ballroom at the Ben.  Desa will additionally be featured in a special section of the Philadelphia Business Journal, which will be released on November 10 online and in print.

Please join us in congratulating Desa on this tremendous honor!

 

(Pictured from left to right: Ranae McElvaine, Jannie Lau, Desa Burton, Josh Schmidt, Amy Miraglia, Christos Ioannidi)

October 6, 2017 / HEVC, standards / Posted By: rachel.rorke

On Wednesday, September 27, the Academy of Television Arts & Sciences announced that the 2017 Primetime Emmy® Engineering Award winner will be HEVC, or High Efficiency Video Coding, which is a technology standard that helps deliver ultra-high definition video to everything from smart phones to stadium displays.

The Emmy for HEVC will be awarded to the Joint Collaborative Team on Video Coding, better known as the JCT-VC. The JCT-VC is a group of engineers from the Video Coding Experts Group (VCEG) of the International Telecommunication Union (ITU) and the Moving Picture Experts Group (MPEG) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). This committee is comprised of representatives of about 200 industry and academic institutions around the world, including InterDigital.

To meet consumer demand for the highest quality video, the HEVC standard delivers more video at higher resolution within the available bandwidth. The HEVC standard improves upon the previous standard through many new coding technologies, one of which increases the size of block regions, allowing more flexibility in block partitions and better predictability in the algorithms applied to each block of each frame. Together these technologies contribute greatly to the enhanced efficiency of HEVC for video compression.
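As a toy illustration of the larger-block-region point (this is not text or code from the standard, and real encoder decisions are far more involved), HEVC lets an encoder split a 64x64 coding tree unit recursively into quarters down to 8x8, whereas the prior generation worked on fixed 16x16 macroblocks:

    # Toy illustration: coding-unit sizes reachable inside one 64x64 HEVC
    # coding tree unit by recursive quadtree splitting (down to 8x8),
    # versus the fixed 16x16 macroblock grid of the previous generation (AVC).
    def quadtree_sizes(size=64, min_size=8):
        sizes = []
        while size >= min_size:
            sizes.append(size)
            size //= 2   # each split divides a block into four half-size blocks
        return sizes

    print("HEVC CU sizes within a CTU:", quadtree_sizes())   # [64, 32, 16, 8]
    print("AVC macroblock size:", [16])
    # An encoder can mix these per region: flat areas keep large blocks,
    # detailed areas are split further, improving prediction and compression.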

This new compression coding has been adopted, or selected for adoption, by all UHD television distribution channels, including terrestrial, satellite, cable, fiber and wireless, as well as all UHD viewing devices, including traditional televisions, tablets and mobile phones.

Join us in congratulating our InterDigital engineers who participated in the development of HEVC in the JCT-VC, as well as our other industry colleagues for this great achievement!

To see the full winners profile, click here for more.

September 11, 2017 / FLIPS, 5G, MEC / Posted By: Kelly Capizzi

InterDigital is delighted to announce that it has been shortlisted for the Global Telecoms Awards (also known as the Glotels), the industry's most prestigious set of awards run by Telecoms.com. The company has been shortlisted alongside Accedian, Cloudify, NetNumber and SK Telecom in the 'Ground-breaking Virtualization' category for its Flexible-IP services (FLIPS) solution, which was showcased in the successful world's first MEC-over-5G architecture trial recently in Bristol.

The Glotels serve as a celebration of outstanding companies who are paving the way for innovation within the telecoms industry. The awards recognize advances in areas such as 5G innovation, IoT in practice, sustainability and customer experience.   

InterDigital has been recognized for its achievements in the 5G innovation space, namely in relation to its FLIPS solution. In July 2017, InterDigital successfully showcased the FLIPS solution during the world’s first successful MEC trial using 5G-ready network architecture. The three-week trial, hosted on the testbed deployed by Bristol is Open (BIO) and in partnership with CTVC, demonstrated the use of information-centric networking (ICN) and software-defined networks (SDN) to deliver IP-like content and computing services at much higher throughput and reduced latency. The trial was the culmination of a two-year network topology development project and is one of six solutions selected as an ETSI MEC proof-of-concept. 

This is the second year in a row that InterDigital has been shortlisted for a Glotel: last year, InterDigital's IoT Solutions were shortlisted for the ground-breaking oneTRANSPORT smart city initiative in the UK. Making the shortlist for this prestigious award underscores InterDigital's commitment to delivering world-class standards within the realm of future wireless.

A special thank you to our partners CTVC and BIO – partnering is such an important part of our research, and in European projects alone we collaborate with more than 75 companies, research organizations, regional authorities and universities. Well done to all those involved in the FLIPS solution and Bristol is Open projects and keep fingers crossed for the winner announcement at the forthcoming award ceremony. 

August 31, 2017 / 5g, mobile edge computing / Posted By: rachel.rorke

A few weeks ago, InterDigital shared its partnership on a live 5G trial with CVTC Ltd. and Bristol is Open.  Since the conclusion of the event, InterDigital has seen major achievements in the 5G space as a result of the trial.

The real-world joint trial, hosted in Bristol, UK, used a scavenger hunt game developed by InterDigital Europe and CVTC Ltd. as part of a European-funded project called POINT; this month, the team completed the world’s first trial of mobile edge computing (MEC) based on new IP networking technology expected to form part of the 5G network architecture, a major advancement in 5G research.

Specifically, the trial showcases InterDigital's Flexible-IP services (FLIPS) technology, which improves experienced latency for such MEC services while reducing overall network utilization when accessing those services. Within the trial, participants experienced video latency reductions of several tens of milliseconds and video distribution that was six times more efficient than with standard IP technology.

This successful trial quickly garnered excellent media coverage.  Check out some of the industry features in the links below!

July 20, 2017 / 5g / Posted By: rachel.rorke

This weekend marks the final days of a next-generation 5G internet trial in Bristol, England. The trial, hosted on the Bristol is Open (BIO) testbed, is designed as an internet-enabled scavenger hunt in the central part of Bristol. It is open to anyone with an Android device who is interested in testing new technology.

Users with Android devices are invited to participate by downloading the app and taking part in a walk-around search for clues. The game was developed by InterDigital Europe in conjunction with CVTC as part of a European funded project called POINT. You can find more about POINT by clicking here.

The project is proving the use of new IP networking technology, which will contribute to 5G technology developments. More specifically, the technology will improve on experienced latency of services over the network, such as video streaming, while reducing the overall network utilization when accessing such services.

The BIO game involves teams of 3-6 players (each with an Android device) and takes approximately 45 minutes to complete. If you'd like to help test a new 5G wireless technology, you have until July 23 to participate. So download the app from the Google Play store HERE and make your way to Bristol City Centre this weekend!

June 26, 2017 / Posted By: Kelly Capizzi

InterDigital is delighted to announce that our Legal Department was recently named a Legal Department of the Year by The Legal Intelligencer, the daily law journal of record for the Commonwealth of Pennsylvania and the oldest legal daily in the United States.

At an awards dinner held at the Crystal Tea Room in Philadelphia on June 22nd, The Legal recognized corporate legal departments that stand out in specific categories of achievement, with InterDigital Legal being honored for its “General Excellence.”

 InterDigital Legal Team

Pictured from left to right: InterDigital's Deb DiBattista, Amy Hudak, Christi Renshaw, Damian Hamme, Christos Ioannidi, Jannie Lau, Andy Isztwan, Ralph Neff, Desa Burton, Matt Shaw, Josh Schmidt receiving their honor.

We are proud to share the news and congratulate the entire Legal team!

June 21, 2017 / 5G / Posted By: Kelly Capizzi

The Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) recently signed an agreement with InterDigital to collaborate on the development of New Radio (NR) concepts targeted for 5G Phase 2. NR, the global standard being driven by 3GPP for a new 5G air interface, is a continuing part of the mobile broadband evolution to deliver on the requirements of 5G. According to 3GPP’s standardization timeline, Phase 2 is set to begin in 2018 and will result in full support of 3GPP NR requirements.  

The CTTC is a non-profit research institution based in Barcelona, resulting from a public initiative of the Regional Government of Catalonia. The institution's research activities are focused on technologies related to the physical, data-link and network layers of communication systems, and to geomatics. The research group involved in the collaboration agreement with InterDigital is the Mobile Networks Department of the Communication Networks Division of CTTC.

InterDigital and CTTC have an established history and highly-respected research relationship that stems from consortium collaborations such as in European H2020 5G-Public Private Partnership (PPP) projects.  

The agreement between CTTC and InterDigital will focus on New Radio, including technology development, system proof-of-concepts, and simulation work, among other areas. InterDigital is pleased to extend its research efforts with CTTC and looks forward to the work to come.

For more information on InterDigital’s wireless R&D efforts, visit http://www.interdigital.com/wireless/

For more information on CTTC’s Mobile Networks department, visit http://networks.cttc.es/mobile-networks/  

June 7, 2017 / Posted By: Patrick Van de Wille

As many of you know, two years ago we changed to a fully virtual Annual Shareholder Meeting format, in keeping with our role as a technology leader. Some of you will recall that the Q&A portion of last year’s meeting was cut short by a storm-driven power failure, but we think the odds of that happening twice in a row, coinciding with our annual meeting date and time, are pretty low. So looking forward to seeing you next week!

The virtual format comes with some new meeting registration procedures, so we’ve posted a document that clearly spells out what you need to do to attend online. The press release went out this morning, and the document can be viewed here.  

Please make sure you read it, and factor in some time on the day of the event to register correctly.  Think of it the same way you’d attend in person, where you’d leave in plenty of time to make it in case of traffic and would factor in time to register at the desk. Online registration opens at 10:30 a.m. Eastern Time on June 14.  The instructions are simple, and there are resources (including a toll-free number) available if you’re having any issues.  

Thanks, and looking forward to your participation.  

-P

May 31, 2017 / IoT, Smart Cities / Posted By: Kelly Capizzi

While the IoT market, in general, was slow to capture the public's imagination prior to 2017, one area where IoT is blooming is in the smart city and smart building industry. Mass rollouts of IoT in an industrial setting, including urban environments and business hubs, are beginning to garner success. It is more important than ever to understand what smart cities are, how they get smart, what technologies have the greatest impact, potential challenges, and much more.  

A number of InterDigital IoT experts alongside their industry peers plan to share emerging technologies, complex challenges, best practices and reasonable expectations for the future of smart cities throughout this month. Just some of the upcoming opportunities to hear from InterDigital include:

InterDigital has been an active contributor to the development of standards-based IoT to enable multiple ecosystem partners to provide IoT solutions that range from industry specific applications to advanced data marketplaces.  In addition, the company is actively involved in organizations that are dedicated to the acceleration of IoT including ATIS, IIC, GSMA NB-IoT, and oneM2MTM, among others.

To learn more about InterDigital’s work in IoT, please click here.

May 11, 2017 / IoT, 5G, Smart Cities, Data Exchange / Posted By: Kelly Capizzi

The 5G network, AR/VR, and machine learning, among other innovations, are providing abundant possibilities to rapidly advance the power and opportunity of Smart Cities technology. The Alliance for Telecommunications Industry Solutions (ATIS), a forum where ICT companies convene to find solutions to shared pressing challenges, recently developed the Smart Cities Technology Roadmap to provide an overview of the network-enabled technologies that have the greatest impact on Smart Cities from the vantage point of key ecosystem players, including InterDigital.

The Smart Cities Technology Roadmap was developed through extensive interviews with Smart Cities planners, CIOs, CTOs, and other key decision makers as well as leading ICT industry companies to foster a more intelligent approach to Smart City planning through better budgeting, purchasing and staging decisions. As an ATIS member company, InterDigital participated in the development of the roadmap alongside industry peers that include AT&T, Cisco, Ericsson, Nokia, Qualcomm, and Verizon, among others. The roadmap covers a number of topics such as technology framework, technology enablers, technology-enabled applications, and more.  

Within the area of technology enablers, InterDigital contributed its expertise on data exchanges and data marketplaces. The roadmap defines both environments, identifies the value creation opportunity for Smart Cities and covers the importance of standardization for the true realization of Smart City applications. From the beginning, InterDigital has been an active contributor to the development of standards-based IoT to enable multiple ecosystem partners to provide IoT solution enablement ranging from industry specific applications to advanced data marketplaces.   

View the full Smart Cities Technology Roadmap here or to learn more about data exchanges, please click here.

May 9, 2017 / Posted By: Kelly Capizzi

InterDigital is delighted to announce that our Executive Vice President, General Counsel and Secretary, Jannie Lau, was recently appointed to the innovative and highly regarded Comcast NBCUniversal Joint Diversity Advisory Council (JDC). 

The Comcast NBCUniversal JDC, comprised of national leaders in business, politics and civil rights, provides advice to senior executive teams at Comcast and NBCUniversal regarding their development and implementation of diversity and inclusion initiatives in the following five focus areas: Governance, Our People, Supplier Diversity, Programming and Community Investment. The JDC includes four nine-member Diversity Advisory Councils representing the interests of African Americans, Asian Americans, Hispanics and women. It also has representatives from other diverse groups, including Native Americans, veterans, people with disabilities and members of the lesbian, gay, bisexual and transgender (LGBT) community. 

Jannie was appointed to a two-year term on the Asian American Advisory Council, serving alongside other leaders from Morgan Stanley, UPS, Coca-Cola, the NAACP, National Urban League, Asian Pacific American Institute for Congressional Studies, Asian Americans Advancing Justice and U.S. Hispanic Chamber of Commerce, among others. 

We are proud to share the news and congratulate Jannie!

April 13, 2017 / 5G / Posted By: Kelly Capizzi

Cloud Radio Access Network (C-RAN) is the virtualization of base station functionalities by means of cloud computing, and it is considered to be one of the key forward-looking 5G technologies. It also was the focus of a research paper, co-authored by an InterDigital engineer, that was recently awarded the 2017 Best Paper by the Journal of Communication Networks (JCN)!

JCN, a bimonthly journal published by the Korean Institute of Communications and Information Sciences (KICS) with the technical co-sponsorship of the IEEE Communications Society, is committed to publishing high-quality papers that advance the state-of-the-art and practical applications of communications and information networks. The subjects covered by JCN include all topics in communication theory and techniques, communication systems, and information networks.  Annually, JCN selects the top paper among all JCN publications throughout the previous year to be awarded the prestigious Best Paper Award.

InterDigital's Dr. Onur Sahin collaborated with Prof. Osvaldo Simeone, New Jersey Institute of Technology; Dr. Andreas Maeder, Nokia Networks; Prof. Mugen Peng, Beijing University of Posts and Telecommunications; and Prof. Wei Yu, University of Toronto, to produce the 2017 award-winning paper titled "Cloud Radio Access Network: Virtualizing wireless access for dense heterogeneous systems." The paper provides a concise overview of the research on C-RAN, with emphasis on fronthaul compression, baseband processing, medium access control, resource allocation, system-level considerations and standardization efforts.

The paper was originally published in the April 2016 JCN special issue on Cloud Radio Access Networks, which aimed to address fundamental research issues regarding the analysis and implementation of C-RANs with emphasis on the interplay between wireless interface and the fronthaul network. The authors will be presented with the 2017 JCN Best Paper award at the IEEE International Conference on Communications on May 22, in Paris, France.  

Click here to read the full award winning paper.

March 21, 2017 / Posted By: rachel.rorke

On the heels of our previous announcement, in which Liangping Ma and Ed Ehrlich earned new leadership positions in the mobile industry, InterDigital is delighted to continue the streak of recognized engineers with Behrouz Aghili.

A little over two weeks ago, Behrouz, a principal engineer in our Melville office, was elected vice-chair of the CT plenary, which oversees the work of all the CT working groups in the 3GPP standards-setting organization. CT stands for the “Core Network and Terminals” Technical Specification Group, and is responsible for defining the interfaces and protocols within the core network and between the core network and terminal devices – what’s called the NAS layer. The other elected officers of the CT plenary are representatives from Huawei, Deutsche Telekom and NTT Docomo, so as always we’re in tremendous company.

This is an excellent accomplishment, and we'd like to congratulate all the engineers who have recently been selected as leaders in the industry, as well as everyone in our standards research team whose hard work made these results possible. Thank you!

March 14, 2017 / IoT / Posted By: Kelly Capizzi

Wiley, a global leader in scholarly journals, recently published the Internet of Things and Data Analytics Handbook, featuring a practical case study written by InterDigital engineers along with our partners at ARUP.  

The IoT and Data Analytics Handbook describes essential technical knowledge, building blocks, processes, design principles, implementation, and marketing for IoT projects. The handbook opens with an overview and anatomy of the IoT, its ecosystem, communication protocols, networking, and available hardware; present and future applications and transformations; and business models. It also addresses big data analytics, machine learning, cloud computing, and the sustainability considerations essential to being both socially responsible and successful. Design and implementation processes are illustrated with best practices and case studies in action.  

The book, edited by Hwaiyu Geng, comprises contributions from 74 international subject matter experts from nine countries working in the consumer and enterprise fields of IoT. InterDigital’s Alan Carlton, Rafael Cepeda and Vanja Subotic, along with ARUP’s Tim Gammons, are among the contributors for their work on the chapter, “Defragmenting Intelligent Transportation: A Practical Case Study.”  

The chapter focuses on the transport industry and a solution to today’s transportation fragmentation: the oneTRANSPORT® data marketplace initiative. The oneTRANSPORT initiative is an Innovate UK-supported project in the area of “Integrated Transport: In-Field Solutions.” The project addresses both immediate and anticipated future challenges facing the transport industry, such as congestion, shrinking Local Authority budgets, and the end of subsidies. The initiative involves eleven cross-sector partners, including five Transport Authorities (data owners and use case providers), a technology platform provider (InterDigital), a transport industry specialist, and four transport sensor/device manufacturers and transport analytics providers. To learn more about the oneTRANSPORT initiative, please click here.  

For more information on the book, please click here.

March 8, 2017 / 5g, 3gpp / Posted By: rachel.rorke

InterDigital is delighted to announce that two of our employees, Liangping Ma and Ed Ehrlich, have recently been selected to represent our company in leadership roles in the mobile industry.  We are very excited for them to showcase their expertise in their respective new roles, and can’t wait to share their successes along their journeys.

Liangping Ma, IEEE ComSoc Distinguished Lecturer for 2017-2018

The IEEE Communications Society (ComSoc) has selected InterDigital’s Liangping Ma as an IEEE ComSoc Distinguished Lecturer for 2017-2018.  ComSoc’s Distinguished Lecture Tour is designed to create a network of subject-matter experts and renowned authorities on communications and networking-related topics.

Liangping is a Member of the Technical Staff in the San Diego, CA office of InterDigital.  His current research interests include 5G radio access networks, ultra-reliable and low-latency video communication, and machine learning.  He and his team invented a number of fundamental technologies on video communication and cognitive radios. Liangping has served as the Chair of the San Diego Chapter of the IEEE Communication Society since 2014. He received his PhD from the University of Delaware and his B.S. degree from Wuhan University, China.

Ed Ehrlich, Vice Chair of ATIS WTSC-RAN

In early February, ATIS (Alliance for Telecommunications Industry Solutions) held elections for some of their committee leadership roles and announced that InterDigital’s Ed Ehrlich was elected vice chair of ATIS WTSC-RAN.  ATIS is a forum where information and communications technology companies convene to find solutions to pressing industry challenges, and is the North American Organizational Partner for 3GPP.  As vice chair of WTSC-RAN (which stands for Wireless Technologies and Systems Committee – Radio Access Networks), Ed will be part of the team that provides North American inputs into a wide range of radio access issues that 3GPP tackles, the main topic now being 5G.

Ed is a Member of the Technical Staff in the Conshohocken, PA office of InterDigital.  This March marks Ed’s fifth year with InterDigital, though he has been involved in the mobile industry for over 25 years.  He specializes in leading edge wireless technologies with emphasis on the development of industry alliances, standards, and regulatory requirements, and earned a B.S. degree from the University of Illinois at Urbana-Champaign.

These achievements are excellent markers of InterDigital’s talent and hard work; please join us in congratulating Liangping and Ed for their great accomplishments!

February 27, 2017 / MWC, 5G, IoT, V2V, IDmwc17 / Posted By: Kelly Capizzi

UPDATED February 28, 2017 - 9:00 AM CET -

Watch the below live feed from the InterDigital booth at MWC 2017:



UPDATED February 28, 2017 - 7:00 AM CET

Day One of Mobile World Congress 2017 ended in success! InterDigital's new booth space was flooded with foot traffic for our exciting demos, and our meeting rooms were consistently packed throughout the entire day.

Demos, led by our enthusiastic industry expert staff, include Crosshaul with EdgeLink™, 5G Access, Next Generation IP Networking, Personalized Virtual Reality, and IoT Solutions for Smart Cities, Smart Building, Intelligent Transport and Energy and Utilities! In addition, this year features a number of highly interactive demos that seem to be a hit, including:

    • Remote Surgery: 4G vs. 5G where two participants at a time compete in an entertaining game of "remote surgery" – one player via a low-latency 5G connection and the second player via a slower (higher-latency) 4G connection. So far, 5G seems to be the winner! But don’t worry, both participants are winners as they receive a fun sticker and get entered in a chance to win an Amazon Echo or Apple TV!

    • Contextual Driving Platform where participants must safely drive an RC car via a virtual reality headset, physical steering wheel and pedals while avoiding obstacles and other autonomously driven RC cars. The CDP assesses and aggregates driving data to provide the participant with a risk level score and ranks them on our leaderboard! All participants get entered in a chance to win an Amazon Echo or Apple TV as well!
    • Check out this coverage from Telecoms.com here to learn more:  Tech isn’t the issue for self-driving cars, we are – InterDigital

Another new feature this year at the InterDigital booth is our studio space!  Day One featured an afternoon full of interviews conducted by our media friend, Light Reading, with a number of industry leaders covering industry hot topics.

The studio space will continue to be full throughout the show with a number of videos and panel sessions such as:

Crosshaul – The Fusion of Fronthaul and Backhaul in 5G
February 28, 2017
11:00AM – 12:00PM CET

Connected Car: Enabling a new suite of automotive experiences
March 1, 2017
10:00AM – 11:00AM CET

Internet of Things: The Use Case Experience
March 1, 2017
1:00PM – 2:00PM CET

Fragmentation of IoT
March 1, 2017
3:00PM – 4:00PM CET

Finally, the day drew to a close with a stellar performance from a returning favorite and local band, The Mañaners! Crowds formed around and throughout the booth to enjoy the wonderful music and lively performance.


UPDATED February 27, 2017 - 5:55PM CET / Posted February 27, 2017 - 8:00AM CET (2:00 AM Eastern)

Welcome to our live blog post directly from our new booth space at Mobile World Congress 2017, Hall 7 Stand 7C61! This post is the place to stay up to date with everything going on at the event. We'll continue to add to this post throughout the week, so check back often for updates. Your All-Access Pass to Everything InterDigital at MWC 2017: http://www.interdigital.com/post/your-allaccess-pass-to-everything-interdigital-at-mobile-world-congress-2017

 
February 23, 2017 / mwc, iot, 5g, IDmwc17 / Posted By: Rachel Rorke

Check out our live blog here: http://www.interdigital.com/post/interdigital-live-from-mobile-world-congress-2017

Mobile World Congress 2017 is just around the corner, and InterDigital is once again thrilled to showcase our contributions to the mobile industry in Barcelona, Spain.  InterDigital will be represented by a number of our experts in IoT and 5G, who will be operating demos, participating in conferences, and hosting panels throughout the week of February 27 – March 3.

This year, we've upgraded to a brand new space – Hall 7, booth 7C61.  Our beautiful booth is fully updated with contemporary meeting spaces, demo displays, and a modern grass wall emblazoned with an illuminated InterDigital logo.

See below for a comprehensive list of all the exciting events we have planned!


DEMOS

Remote Surgery

Two players compete in an entertaining game of "remote surgery" – one player with a low-latency 5G connection and a second player with a slower (higher-latency) 4G connection.  Players interact with the four challenges (modeled after surgical tasks) through a pair of 3D video goggles and a 3D video camera, providing the "remote surgery" effect.  This 5G use case will highlight the capabilities of 5G-Crosshaul technology with InterDigital’s EdgeLink™ platform to support the demanding requirements of 5G traffic.

Crosshaul with EdgeLink

The Transport Team will install InterDigital's EdgeLink™ platform that includes three mmW nodes each with redundant antennas to create a small mesh network, matching the configuration that was recently used in the 5G-Berlin Field Trial as part of our participation in the EU-funded 5G-Crosshaul H2020 project.  In addition to remote surgery traffic, the Crosshaul system over EdgeLink™ will service Fronthaul and Backhaul traffic flows, matching expected 5G network levels.

5G Access Demo: Latency Reduction in Multi-hop Communication Systems

This demo will show the benefits of InterDigital's Fast Forward technology versus conventional packet routing, implemented in a three-node mmW wireless network. The platform was developed using Xilinx FPGAs and Infineon BGT radios, as well as InterDigital's physical layer design capable of high throughput and ultra-low latency.
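To give a feel for why per-hop handling dominates end-to-end delay in a setup like this, here is a deliberately simplified, back-of-the-envelope sketch. The two forwarding models and all the numbers below are assumptions for illustration only; they do not describe how Fast Forward actually works.

```python
# Illustrative only: a toy comparison of end-to-end latency over a multi-hop
# wireless path when each node fully receives and processes a packet before
# forwarding ("store-and-forward") versus a scheme that starts relaying as
# soon as the header is decoded ("cut-through" style). All figures are
# invented for illustration.

HOPS = 3                      # e.g. a three-link mmW mesh path (assumed)
TX_TIME_US = 10.0             # time to transmit one packet on one link (assumed)
PER_HOP_PROCESSING_US = 50.0  # protocol/queuing delay at each relay (assumed)
HEADER_TIME_US = 1.0          # time to receive just the header (assumed)

# Each hop pays full transmission plus full processing.
store_and_forward = HOPS * (TX_TIME_US + PER_HOP_PROCESSING_US)

# The packet is transmitted once end to end, with only a small header delay per relay.
cut_through = TX_TIME_US + (HOPS - 1) * HEADER_TIME_US

print(f"store-and-forward: {store_and_forward:.1f} us end to end")
print(f"cut-through style: {cut_through:.1f} us end to end")
```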

Next Generation IP Networking & Personalized Virtual Reality

To showcase the performance gains operators can achieve through our Next Generation Networking platform, FLIPS, we will have 30 to 50 users accessing a particular video at roughly the same time, experiencing the potential for latency reduction. While standard IP routing would send an individual stream to each client (resulting in a linear cost increase for operators), our FLIPS solution delivers a single video stream to all clients, yielding a performance gain equal to the number of viewers and therefore significantly reducing operators' transmission costs.
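The arithmetic behind that claim is simple, and the sketch below just makes it explicit. The viewer count and bitrate are assumed values, not measurements from the FLIPS platform.

```python
# Back-of-the-envelope comparison of unicast vs. single-stream delivery.
# The point is the one stated above: unicast load scales linearly with the
# audience, while one shared stream does not.

def delivery_load_mbps(viewers: int, stream_rate_mbps: float, shared_stream: bool) -> float:
    """Aggregate network load for serving one live video to `viewers` clients."""
    streams = 1 if shared_stream else viewers
    return streams * stream_rate_mbps

viewers = 50   # demo scenario: 30-50 users watching the same video (upper end)
rate = 5.0     # assumed bitrate of the video stream in Mbps

unicast_load = delivery_load_mbps(viewers, rate, shared_stream=False)
shared_load = delivery_load_mbps(viewers, rate, shared_stream=True)

print(f"per-client unicast: {unicast_load:.0f} Mbps aggregate")
print(f"single shared stream: {shared_load:.0f} Mbps aggregate")
print(f"gain factor: {unicast_load / shared_load:.0f}x (equal to the number of viewers)")
```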

Participants in the second part of this demo will experience VR in a more individualized way than ever before. Our personalized VR platform lets people experience virtual reality while staying connected to the real world. The platform makes use of next generation video and network technologies to overcome technical challenges and significantly improve the personalized VR experience.

IoT Solutions

InterDigital will present four demos powered by its suite of IoT solutions, driven by its oneMPOWER™ standards-based IoT platform and wot.io™ integration framework. They will have a heavy focus on smart city areas and key strategic partnerships, including system integrators like Harman, our oneTRANSPORT™ initiative partners, and security solutions providers.

Contextual Driving Platform

As cars become more and more autonomous, the software in the car needs to replace human senses and perceptions with automated means to continuously assess, perceive, and respond to risks on the road. InterDigital's Innovation Partners, in collaboration with IHMC, will present their Contextual Driving Platform, which aims to demonstrate cooperative sensor data fusion techniques based on data from on-board sensors and V2X communications for situational awareness. Visitors can experience a virtual reality driving game at InterDigital's booth that replicates the real-world experience with the platform.


SESSIONS

5G Impact
(Featuring Fred Schreider & Bob Gazda)
March 2, 2017
11:30AM – 1:00PM CET
Hall 4, Auditorium 5

NFV: A Re-Examination
(Featuring Alan Carlton)
February 28, 2017
3:30 PM – 4:40PM CET
Hall 4, Auditorium 3


LIVE PANELS @ HALL 7, BOOTH 7C61

Crosshaul – The fusion of Fronthaul and Backhaul in 5G
February 28, 2017
11:00AM – 12:00PM CET
Click here to access the event page.

Connected Car: Enabling a new suite of automotive experiences
March 1, 2017
10:00AM – 11:00AM CET
Click here to access the event page.

Internet of Things: The Use Case Experience
March 1, 2017
1:00PM – 2:00PM CET
Click here to access the event page.

Fragmentation of IoT
March 1, 2017
3:00PM – 4:00PM CET
Click here to access the event page.

 

 

February 14, 2017 / XCellAir, wi-fi / Posted By: Kelly Capizzi

Operators have the potential to miss a $6.7 billion opportunity from consumer interest in paid-for managed Wi-Fi. This warning comes from InterDigital’s XCellAir, an expert in Wi-Fi Quality of Experience (QoE), based on its findings from a new survey of 1,000 consumers each in the US and UK. The $6.7 billion opportunity comprises revenue from consumers willing to pay for managed Wi-Fi services (an estimated $3.3 billion) and OPEX savings from a reduction in helpline calls and engineer visits (up to an estimated $3.4 billion).

XCellAir released these results and several other significant findings from the survey on Tuesday, and there has been a tremendous amount of media attention surrounding the findings throughout this week. Check out just some of the media coverage below:

XCellAir also recently announced the launch of its Wi-Fi Advisor, a free tool which helps ISPs and operators to determine the Wi-Fi service customers need in order to provide reliable Wi-Fi performance throughout the home.

Click here for more on the consumer survey or for more information on XCellAir, click this link!

January 20, 2017 / stem, iot / Posted By: Rachel Rorke

Science, technology, engineering, and mathematics (STEM) education is vital not only for the success of the wireless technology industry, but also for future generations to be encouraged to grow in new directions.  InterDigital is committed to the advancement of STEM education and is always excited to pursue new opportunities that support our local communities.  Thus, we are thrilled to share that our Internet of Things (IoT) business unit recently sponsored a set of IoT kits for a high school program with which one of our consultants, Nadine Manjaro, is involved.

In October 2016, Nadine presented to a group of STEM Themed Institute students at New Brunswick High School in New Brunswick, New Jersey.  New Brunswick High School provides students with the chance to learn in smaller learning communities, which are split into four overall “Themed Institutes,” including STEM; Fine, Visual & Performing Arts; Law, Human & Public Service; and Humanities.

Nadine, an alumna of New Brunswick High School, was inspired to pursue her career in engineering after professionals came to the school while she was attending to share their love of STEM with her and her classmates. Her story helped her connect with the students at the school, whose population is over 90% minority students from lower-income backgrounds. As an immigrant, Nadine overcame a number of challenges on her path to engineering success, but her commitment and passion for STEM enabled her to earn undergraduate degrees in both Industrial Engineering and Economics, as well as a master’s degree in Engineering Management.

Nadine’s presentation and the positive feedback it received were the catalyst that inspired her to develop a program that introduces students to the IoT and creates a dialogue highlighting the merits of a STEM-related career. She is fiercely dedicated to encouraging the next generation to pursue STEM careers, and hopes to replicate her experience with other lower-income high school students as an extension of her work in IoT. Nadine shared this enthusiasm with InterDigital’s IoT business unit leaders, and they readily jumped on board to sponsor IoT starter kits in support of the cause.

The goal of Nadine’s project is to give students a hands-on demonstration of IoT in hopes of generating interest and motivation to pursue careers within the growing STEM field.  InterDigital celebrates Nadine’s efforts to help the local community and is proud of her support of furthering STEM education.  We are excited to see what great stories and conversations come out of this initiative!

January 5, 2017 / CES, Patents / Posted By: Kelly Capizzi

2017 holds promise for many exciting advances and milestones in the technology industry, including the 50th Anniversary of The International Consumer Electronics Show (CES) in Las Vegas, NV, being held this year from January 5-8. CES 2017, the world’s gathering place for all who thrive on the business of consumer technology, showcases the hottest tech trends through exhibition, keynote addresses, marketplaces, panel discussions and more! The 2017 conference program is bursting with exciting sessions, including a policy-focused panel featuring InterDigital’s Rob Stien.  

The conference program is divided into four major categories comprising over 40 different tracks that home in on the essential industry trends. “Trolls and Tech: How to Fix Patents” takes place as part of the Innovation Policy track on January 6th from 1-2 PM PST. The panel will discuss hot-button topics surrounding patent rights and legislation that impact and drive innovation. Michael Patrick Hayes, Sr. Manager, Government Affairs, Consumer Technology Association, will moderate the dynamic panel discussion among the following legislators and innovators:

  • Michelle K. Lee, Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office (USPTO);
  • Tyler Grimm, Legislative Director, Office of Representative Darrell Issa;
  • Colin Anawaty, Director of Product, athenahealth;
  • Julie Samuels, Executive Director, Tech:NYC; and
  • Rob Stien, Vice President of Government Relations & Regulatory Affairs, InterDigital, Inc.

You won’t want to miss this informative and exciting discussion!  Attending CES? The panel is located in LVCC, North Hall, N254.  Can’t make the show? Live stream the panel by clicking here.

 

December 16, 2016 / Machine Learning / Posted By: Kelly Capizzi

In October 2016, the International Academy, Research and Industry Association (IARIA) hosted its eighth International Conference on Emerging Networks and Intelligence, and last month awarded five “Best Papers” from the event’s call for papers including a submission from an InterDigital engineer and her collaborators.  

IARIA’s International Conference on Emerging Networks and Intelligence, EMERGING 2016, serves as a platform to present and evaluate the advances in emerging solutions for next-generation architectures, devices, and communications protocols. The conference solicits technical papers from academic, research, and industrial contributors that focus on topic areas such as computing trends, mobility and ubiquity, intelligent services, applications and services, semantics and adaptation, among others.   

InterDigital’s Shoshana Loeb, along with Applied Communication Sciences’ Ben Falchuk, Chris Mesterharm, and Euthimious Panagos, submitted a research paper titled “Machine Learning Techniques for Mobile Application Event Analysis,” which was recognized as one of the five top papers. The “Best Papers” award was based on review of the original submission, the camera-ready version, and the presentation during the conference.  

The paper explains how JumpStart, a real-time event analytics service, utilizes machine learning techniques to empower developers and businesses to both identify users exhibiting similar behavior and discover user interaction patterns that are strongly correlated with specific activities. The discovered patterns can then be used to enable contextual real-time feedback through JumpStart’s complex event pattern matching.  
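As a rough illustration of the first idea – grouping users whose event streams look alike – here is a minimal sketch using per-user event counts and k-means clustering. It is a generic example with invented event names and data, not the JumpStart service or the specific techniques from the paper.

```python
# Minimal sketch: represent each user's app session as a vector of event
# counts, then cluster users with similar behavior. Event names and data
# are invented for illustration.

from collections import Counter
from sklearn.cluster import KMeans

EVENTS = ["open", "search", "add_to_cart", "checkout", "crash"]

user_events = {
    "u1": ["open", "search", "search", "add_to_cart", "checkout"],
    "u2": ["open", "search", "add_to_cart", "checkout", "checkout"],
    "u3": ["open", "crash", "open", "crash"],
    "u4": ["open", "open", "crash"],
}

def to_vector(events):
    """Turn a list of raw events into a fixed-length count vector."""
    counts = Counter(events)
    return [counts[e] for e in EVENTS]

users = list(user_events)
X = [to_vector(user_events[u]) for u in users]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for user, label in zip(users, labels):
    print(user, "-> cluster", label)  # e.g. "buyers" vs. "crash-prone" users
```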

Click here to read the full award-winning paper.

 

December 13, 2016 / IoT, wot.io, oneMPOWER, oneTRANSPORT, oneM2M, ETSI / Posted By: Kelly Capizzi

The mobile industry is characterized by global standards, and various standards-based solutions are being proposed by organizations such as the 3GPP, ETSI and oneM2M™ for the Internet of Things. As an industry leader, InterDigital has been an active contributor to the development of standards-based IoT from the beginning and continues to help drive the efforts, which can be seen in some of the recent efforts and involvement of our engineers.  

In early November, InterDigital’s Dale Seed, Principal Engineer, IoT Research and Development, led a webinar that provided an overview of new features and functionality supported by a newly-published Release 2 version of the oneM2M™ standard. oneM2M™ is a global organization creating a scalable and interoperable standard for communications of devices and services used in M2M applications and the IoT. Missed the webinar? Check out the recorded version here.  

A few weeks later, Dale participated in a panel session alongside industry experts from Nokia, Sierra Wireless, Qualcomm and ETSI at the Grand Slam ’16 Internet of Things Virtual Conference. The panel, titled “oneM2M™ Release 2: Setting the Standard for IoT Interoperability,” highlighted the role oneM2M™ plays in providing interoperability across a number of IoT connectivity protocols.  

InterDigital’s IoT team also participated in ETSI’s IoT/M2M Workshop 2016 which took place from November 15-17, 2016 in Sophia Antipolis, France.  Owen Griffin, Senior Manager, InterDigital Europe, presented on the oneTRANSPORT initiative and the role of InterDigital’s oneMPOWER™ powered by wot.io™ within the smart transport initiative. Also at the workshop, the team demonstrated how the work within the oneTRANSPORT and Smart Routing initiatives allows data and platform integration through four United Kingdom counties as well as the city of Birmingham and brings the end user closer to data producers.  

Most recently, the company took part in TTA and ETSI’s 3rd oneM2M™ Interop event in Kobe, Japan, conducting joint testing to verify multi-vendor interoperability of oneMPOWER™ with other participants. The company has been a participant alongside industry leaders such as Cisco, HERIT, Huawei, KETI, Ricoh and Qualcomm since the event’s inception in September 2015.  

For more information on InterDigital IoT solutions, please click here.

December 7, 2016 / 5G, Standards / Posted By: Kelly Capizzi

While the 5G story is still being written, the requirements to deliver that story are well known and seem to point towards collaboration among multiple standards bodies. The challenging requirements also point to a need to look beyond the “traditional” mobile standards leaders, the Third Generation Partnership Project (3GPP) and the GSM Association (GSMA). In a recent NetworkWorld article, InterDigital’s Alan Carlton explains why the Internet Engineering Task Force (IETF) will be key for standardizing 5G.  

Alan explains that while 3GPP will continue to play an important role, the IETF will specify a significant number of key 5G protocols. The collaboration between 3GPP and the IETF is not a new relationship, but a continuation of a trend that started back in the development of 3G. 3GPP liaised with the IETF, which was then developing the protocols for the emerging Internet, throughout the development of 3G to avoid duplication of functionality, according to Alan. However, with the coming of 5G, it seems the IETF will play a much more integrated and critical role than in previous generations of mobile networks.   

Read the full article here.

November 4, 2016 / IoT, connected vehicle / Posted By: Kelly Capizzi

While the development of autonomous vehicle technology has progressed rather rapidly, regulatory requirement development is just starting to progress. GovTech, a top government technology news website, recently featured an article by InterDigital’s Serhad Doken that examines a number of key items policymakers need to consider and gives a peek into a future with autonomous vehicles as a widespread reality.  

In the article, Serhad discusses the U.S. Department of Transportation’s AV policy, which was unveiled in September 2016 as the world’s first autonomous vehicle policy. He explains while the policy is definitely a step in the right direction, there are a number of things to consider as the technology evolves. Serhad breaks down some of the additional items for consideration into five categories: testing in different climates and conditions; wireless upgrades; regulations; data; and technology standards. Check out the full GovTech article here.

Another area of consideration for autonomous vehicles, not mentioned above, is safety. Serhad’s colleague Samian Kaur recently covered an approach to potentially improving autonomous vehicle safety in an Industrial IoT 5G article. In the piece, she discusses a multi-sense approach, coined collaborative sensor fusion, that aims to address one of the main limitations of current self-driving technologies – reliance on a single mode of perception. Click here to learn more about sensor data fusion in the full article.  

To learn more about autonomous vehicles, please click here.

November 3, 2016 / IoT, oneTRANSPORT / Posted By: Kelly Capizzi

This past weekend, oneTRANSPORT hosted at Arup London the first in a series of three hackathons due to take place over the duration of the project!  

oneTRANSPORT is a revolutionary smart city initiative focused on addressing the challenges in transportation systems with Internet of Things (IoT) technology. Comprised of academic, industrial and public partners, and with sponsorship from Innovate UK, oneTRANSPORT is laying the basis for smarter multi-modal, multi-region and multi-system transport networks in the UK.  

The purpose of the event was to learn more about the oneTRANSPORT platform while beginning to engage the developer community, understanding the capabilities of the data available through the platform and identifying any gaps or issues with it. Participants took part in teams or as individuals, and answered a series of challenges established by the oneTRANSPORT project partners prior to the event. The participants were given the weekend to work on their solutions, with presentations due Sunday afternoon. The presentations were judged by external industry experts with a history of involvement in hackathon events, including Rafael Cepeda, InterDigital; Andy Emmonds, Transport for London; and Kieron Arnold, Satellite Catapult. Prizes offered to winners included Arduino IoT Starter Kits and ‘Lewis Hamilton’ Mercedes F1 Remote Control Cars.  

The solutions presented included prototype applications for Silverstone to use for the Grand Prix, a data quality assessment of the historical data that was made available, and a review of historical Bluetooth journey times to determine optimal journey times. The winning solution came from Matthew Hall, Tracsis, who looked at car park activity to determine when there are peaks in traffic entering and exiting the car parks, and the journey times that result from those peaks.  

Overall, the first oneTRANSPORT hackathon was a success with participants enthused about the opportunities that come through the oneTRANSPORT platform.  

Stay tuned for more details on the next hackathon, sponsored by InterDigital, in Spring 2017!


 

October 5, 2016 / Posted By: Kelly Capizzi

By giving its employees paid time off to vote on Election Day this year, InterDigital is joining over 250 other technology companies, including Spotify, Autodesk and SurveyMonkey, in the TakeOffElectionDay campaign.  

“We are proud to join this campaign as a way to support civic engagement and political participation by our employees,” said Jannie K. Lau, Executive Vice President, General Counsel and Secretary of InterDigital.  

On Tuesday, November 8th, all of InterDigital’s U.S. offices will open at 1:30 p.m. in order to allow employees ample time to vote. Moreover, all employees who bring in an “I Voted” sticker or other reasonable proof of voting that afternoon will be entered into a raffle for prizes.  

“I applaud this step. This is a big help for me since I have a long drive and it has always been a race in the evening to make it to the poll booth in time,” commented Sudhir Pattar, Senior Staff Engineer.  

Check out a complete list of participating companies here!

September 29, 2016 / IoT, 5G, oneTRANSPORT, oneMPOWER, wot.io / Posted By: Kelly Capizzi

InterDigital is delighted to announce that it has been shortlisted for two upcoming awards - the Global Telecoms Awards (also known as the Glotels) and the World Communication Awards (WCA). In the Glotels, InterDigital has been shortlisted alongside Nokia, Openmarket, Truphone and Türk Telekom in the ‘Harnessing the IoT opportunity’ category for its contributions to the oneTRANSPORT initiative. InterDigital also made the shortlist for the WCA’s ‘Smart Cities Award’, also for its work related to oneTRANSPORT, alongside Cisco and Telekom Malaysia.

The Glotels serve as a celebration of outstanding companies who are paving the way for innovation within the telecoms industry, while the WCAs are hailed to be one of the industry’s most prestigious awards, recognizing innovation, merit and outstanding performance in telecoms. Both award schemes seek to recognize advances in areas such as 5G innovation, IoT in practice, sustainability and customer experience.  

InterDigital has been recognized for its achievements in the IoT space, namely in relation to its work on the oneTRANSPORT smart transport initiative. Comprised of a consortium of eleven partners, oneTRANSPORT is built on InterDigital’s IoT platform, oneMPOWER™ powered by wot.io™, which is compliant with the oneM2M™ standard.

Currently being trialed across some of the UK’s biggest counties (Buckinghamshire) and cities (Birmingham), oneTRANSPORT is a real-world example of how IoT technologies are transforming the way businesses, authorities and consumers connect with one another.  

Also shortlisted alongside oneTRANSPORT for the WCA’s ‘Smart Cities Award’ is the Bristol is Open (BIO) project. BIO is an open “programmable city” project that provides citizens with the ability to participate and contribute to the way their city works. In December 2015, InterDigital joined the project as an industrial partner and now actively participates in helping to build the world’s first open programmable city.   

Making the shortlist for these two awards underscores InterDigital’s commitment to delivering world-class standards within the realms of IoT and future wireless, while offering innovative technological solutions that are making a real-world impact.  

Well done to all those involved in the oneTRANSPORT and Bristol is Open projects and keep fingers crossed for the winner announcements at the forthcoming award ceremonies.

 

September 20, 2016 / IoT / Posted By: Kelly Capizzi

Hofstra University will host the first presidential debate on September 26, 2016. The University, which hosted presidential debates in 2008 and 2012, will be the first university to host debates in three consecutive presidential election years.  As a lead-up to the first 2016 presidential debate, the University has organized an interdisciplinary debate program titled, Debate 2016.

Debate 2016 is a series of 25 events – including an Internet of Things panel featuring two experts from InterDigital – that addresses issues such as race relations, international trade agreements, foreign policy, public education, infrastructure, and federal budget policy.

“Internet of Things –  Technology, Standards, Policy, and Opportunities,” will be a panel discussion comprised of industry experts including Jim Nolan, EVP, IoT Solutions, InterDigital; Bob Wild, Senior VP of Intelligent Production Solutions; Christopher Cave, Director R&D, InterDigital; and Joshua Mecca, President and co-founder of M&S Biotics, LLC; on Thursday, September 22. The panel will discuss the progress and challenges in merging multiple technologies together to create the IoT. The panelists will also cover how industry standards, government policies and regulations help facilitate emerging IoT use cases.

Click here to learn more about the debate or to learn more about the IoT, please visit the vault.

September 9, 2016 / Posted By: Kelly Capizzi

Beyond grateful. This is the phrase local inventor, Charles Paris, uttered when asked about his recent experience working with a few members of InterDigital’s legal team. Charles, along with partner Karen Parenti, needed assistance with a patent application for audio loudspeakers, so they turned to the Delaware Patent Pro Bono Program, which introduced them to a small team comprised of individuals from InterDigital’s legal department.  

The Delaware Patent Pro Bono Program launched in November 2015 as part of a United States Patent and Trademark Office (USPTO) program that aims to provide free legal assistance to under-resourced inventors interested in securing patent protection for their inventions. The program is a collaborative effort of the Delaware Law School and registered patent attorneys of the Delaware bar, along with assistance from the USPTO.

Pictured: Karen Parenti, Charles Paris and John Gillick with the audio speaker casing at InterDigital's Delaware office.

InterDigital’s John Gillick, Senior Patent Counsel, saw the opportunity to participate in the Delaware Patent Pro Bono Program and registered to be listed individually as one of the available patent attorneys.  John expressed his interest to get involved with a number of his colleagues, who proceeded to volunteer in supporting roles. In February, John came across Charles’ submission and after a brief interview agreed to help with writing the patent application.  

John made a visit to the inventors' home and spent hours thoroughly vetting the invention through challenging questions and prototyping. “We couldn’t have done a fraction of the thorough investigation on our own,” stated Charles. “John’s determination and focus helped us explore the idea to the fullest expression.” After thoroughly vetting the invention, John began the process of writing a patent application for the audio loudspeaker casing. At this point, he pulled in a few of his colleagues to help with the execution of numerous drafts. Patrick Igoe, Director, Partner Development, reviewed the application to provide additional input, and Kathy Higgins, Patents Administrator Prosecution, assisted with the paperwork required to file the application as well as the extensive filing process. Overall, the work was truly a team effort between InterDigital and the inventors!  

This is just one of two projects that the InterDigital legal team has taken on from the Delaware Patent Pro Bono Program this year. The team looks forward to continuing in the program and providing support to under-resourced local inventors in their journeys to secure patent protection.  

Click here to learn more about the Patent Pro Bono Program.

August 30, 2016 / IoT, Web, 5G / Posted By: Kelly Capizzi

You are probably all too familiar with typing in keywords or phrases into a Web search engine to find sites related to the content you want to discover. But in the connected world of the future, how is it going to work when your fridge needs to do a search for something?  

In a recent NetworkWorld article, InterDigital Europe’s Alan Carlton poses this question and discusses the need for a new method to search the Web that will allow IoT devices to discover other “things.”  

Alan starts off with a brief background on how current search engine technology works. He proceeds to cover why the current technology will not work for most IoT cases, therefore leaving us with a search engine problem in IoT. Finally, Alan discusses emerging solutions to that IoT search problem. Specifically, he examines a new type of search engine called a Resource Directory that is being defined in the Internet Engineering Task Force as well as the work being done in the Hypercat consortium.
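To make the Resource Directory idea concrete, here is a toy, in-memory sketch of attribute-based registration and lookup. It is purely conceptual: the class, method names and example endpoints are invented, and it does not follow the IETF specification or its actual interfaces.

```python
# Conceptual illustration of a resource directory: devices register what they
# offer, and other devices discover resources by attribute rather than by
# typing keywords into a Web search engine. Not the IETF CoRE protocol.

class ResourceDirectory:
    def __init__(self):
        self._entries = []  # list of (endpoint, resource_path, attributes)

    def register(self, endpoint: str, path: str, **attributes):
        """A device announces a resource it hosts, with descriptive attributes."""
        self._entries.append((endpoint, path, attributes))

    def lookup(self, **wanted):
        """Return resources whose attributes match all requested key/values."""
        return [
            (ep, path) for ep, path, attrs in self._entries
            if all(attrs.get(k) == v for k, v in wanted.items())
        ]

rd = ResourceDirectory()
rd.register("fridge.example", "/temperature", rt="temperature", unit="C", room="kitchen")
rd.register("thermostat.example", "/target", rt="setpoint", room="kitchen")

# The fridge "searches" for a kitchen setpoint resource by attribute:
print(rd.lookup(rt="setpoint", room="kitchen"))
```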

Click here to read the full article.

August 24, 2016 / LoRA, standards, IoT / Posted By: Kelly Capizzi

The LoRaWAN-connected dev kit was launched by LoRa Alliance members Semtech, Libelium and Loriot on August 11th. Following the announcement, Internet of Business published an article on the news which featured insight from InterDigital’s Jim Nolan, EVP, IoT Solutions.  

The LoRaWAN-connected dev kit is optimized for smart city, smart security, smart environment and smart agriculture applications as stated in the article. Internet of Business discusses how the creation of the development kit is a great example of how companies within the LoRa Alliance are working together to create complete solutions that are accessible to individuals, schools and companies of all sizes.  The article addresses the topic of lock-in avoidance and includes a quote from Red Hat’s Russell Doty cautioning against proprietary IoT interfaces.  

A recent Machina Research study, commissioned by InterDigital, took a look at the impact of a fragmented proprietary approach versus a standards-based approach to IoT. The study uncovered that open standards in IoT deployments could accelerate growth by 27 percent and reduce deployment costs by 30 percent by 2025.  

However, there’s a plethora of connectivity options today for engineers and application developers working on products and systems for the IoT, including LoRaWAN. So which standard will become dominant? The potential dominance of any standard for IoT will depend significantly on particular use cases and applications, as explained by Jim Nolan to Internet of Business.  

Click here to read the full article, or visit The Vault to learn more about InterDigital’s role in IoT.

August 15, 2016 / mmW, densification, 5G / Posted By: Kelly Capizzi

In millimeter-wave deployments, densification is a necessity, not an option, for a number of reasons. Recently, Senza Fili Consulting analyst Monica Paolini sat down with InterDigital’s Alpaslan Demir to discuss the role of densification in millimeter-wave bands.  

Alpaslan is a part of the team at InterDigital working on the next-generation millimeter-wave network that is addressing challenging problems in densification. He stated in the interview that, “the beauty of these frequencies is that the new bandwidth they make available is tremendously large” which “gives you the ability to increase densification.” Alpaslan went on to say that, “the more bandwidth you have, the more data you can transmit,” and explained that this ability is what makes millimeter-wave bands a priority for operators and why there is an overall industry push for that expansion.

Watch the full interview below or click here to download the full transcript:

July 21, 2016 / 5G, Next Gen Networks, ICN, SDN, IP / Posted By: Kelly Capizzi

Last month, 5G World 2016 brought together a number of telecom industry leading companies, including InterDigital, to demonstrate their latest work in making 5G a reality and enabling services to run over it. On the floor of the show, 5G World TV caught up with InterDigital Europe’s Dirk Trossen on the latest innovations and 5G insight from the company.  

In the video, InterDigital Europe’s Dirk Trossen demonstrates and explains our Next Generation Networks platform, which provides a flexible routing solution (FLIPS) based on an Information Centric Networking (ICN) approach. The platform is a hybrid of pure ICN and IP, to leverage the best of both: re-introducing multicast, particularly for HTTP unicast scenarios, as a way to drive down bandwidth costs.  

The technology has been developed with partners in the EU-funded collaborative research and development projects POINT ("iP Over IcN —the betTer IP") and RIFE (aRchitecture for an Internet For Everybody). In addition, FLIPS recently became the ETSI MEC Industry Specification Group's fourth proof-of-concept.  

Check out the video below or click here to learn more on Next Generation Networks!

July 19, 2016 / IoT / Posted By: Kelly Capizzi

It’s about the expansion of capability, not the expansion of connectivity, states InterDigital’s Chonggang Wang in a recent Wireless Week article focused on IoT. He explains that the world continues to see IoT primarily in terms of connections. However, it’s the transformational value of IoT on business that really matters, and that in turn will be the driver of connectivity.
 
In the article, Chonggang explains not only why there is a need for true interoperability, but also provides examples of how we can achieve the interoperability needed in different scenarios. He stresses that it won’t be enough to simply add billions of devices and connect them, we will need to organize the connections in new ways.
 
Click here to read the full article or to learn more about InterDigital’s work in IoT, visit www.interdigital.com/iot

July 6, 2016 / IoT, standards / Posted By: Kelly Capizzi

City authorities and their technology partners could squander $341 billion by 2025 if they adopt a fragmented versus standardized approach to IoT solution deployment. This statistic was uncovered in a report from Machina Research, commissioned by InterDigital, that focuses on the importance of standardization in IoT deployment. Recently, InterDigital’s Alan Carlton participated in a Q&A session with IT Pro Portal to discuss the report further.  

In the Q&A, Alan explains why fragmentation has occurred and what must be done to ensure the continuous evolution of the IoT. “Standards are already in place at national and sector specific levels but the volume of different standards makes it difficult for verticals and industries to implement interoperable standards,” states Alan when asked why this fragmentation occurred as IoT evolved. He describes that a clear framework, preferably a bottom-up method, needs to be put in place to encompass all standards and enable the integration of data across silos.  

Click here to read the full Q&A or to download the Machina Research report, click here.

June 30, 2016 / IoT, MWCS16, Standards / Posted By: Kelly Capizzi

If you’ve been on the floor of a Mobile World Congress event, you’ve likely read (or at least have seen) a copy of Mobile World Daily, the official publication of Mobile World Congress events. Mobile World Congress Shanghai 2016 kicked-off yesterday and along with it came the day one issue of Mobile World Daily featuring the latest coverage and analysis of event news. Among the top stories? Standards' role in smart cities with statistics from a recent Machina Research report commissioned by InterDigital.  

The story, titled “Standards can sharply lower smart city costs,” features insight from Chris Heckscher, VP of service provider sales at Cisco, on how standardization can substantially lower the costs of IoT deployment. Heckscher explains that deploying IoT solutions without interoperability standards would add $341 billion in costs worldwide by 2025. This statistic comes directly from the recent Machina Research report, which analyzes potential IoT deployments in smart cities. The report, which launched in early May, shows that using non-standardized versus standards-based solutions for IoT will increase the cost of deployment, hinder mass scale and adoption, and stifle technology innovation for smart city initiatives worldwide.  

Click here to read this edition of Mobile World Daily, or view the full Machina Research report here.    

June 24, 2016 / IoT / Posted By: Kelly Capizzi

Many people have been making fantastic claims about things like IoT and driverless vehicles that may in the end turn out to be true, but where is the first clear ROI case going to be made that stimulates adoption? InterDigital’s Jim Nolan, Executive Vice President of IoT Solutions, poses this question in his latest article, featured on EDN.com yesterday.  

In the article, Jim tackles the answer to his question in the case of driverless vehicles and makes an argument for driverless trucking as a clear ROI trigger for IoT adoption. He details the inefficiencies and sources of lost revenue in trucking, the technology being proposed to address those inefficiencies, and the compelling ROI that could be delivered. To illustrate the current progress in driverless trucking, Jim provides numerous examples, including the formation of a new company, Otto, which recently formed with the goal of turning legacy commercial trucks into self-driving trucks by retrofitting hardware kits to existing truck models. Click here to read the full “Driverless Trucks: ROI Triggers in IoT Adoption.”  

This article is the second featured within Talking Things, Jim’s new blog on EDN.com. Talking Things will explore the role of global standards, impact on diverse verticals, and the key technologies and techniques associated with the realization of the IoT. Check out other entries from Jim here.

June 15, 2016 / Posted By: Patrick Van de Wille

Just a quick note to our valued shareholders that additional questions and answers from the Annual Shareholder Meeting have been posted, and can be viewed here. Our apologies for the mishap with the power failure – everything we could control was going great, but we can’t control the weather!
 
When the meeting ended there were three questions by e-mail that had not been answered and a further five questions submitted via the online platform remaining in the queue, with some overlap between the questions as is invariably the case. We’ve also responded to a lengthy comment on why we hold a virtual shareholder meeting.
 
Thank you all for attending this year – with over 90 online attendees it was a success, but even we can’t do much about confirmed tornadoes!

June 13, 2016 / IoT, Wi-Fi, SmartCities / Posted By: Kelly Capizzi

Last month, InterDigital released an insightful report by Machina Research that revealed open standards in IoT deployments could accelerate growth in smart cities by 27% and reduce deployment costs by 30%. Analyst and Managing Director of WiFi360, Adlane Fellah, recently cited this statistic and more in an excellent article in which he explores the growth of the IoT.  

The growth of the IoT as it relates to smart cities can take one of two directions – the current fragmented path or a more efficient and beneficial standardized path, states Adlane in the article. He details why standardization is so important for smart cities and lists a number of drivers for adoption of standards in IoT for smart cities. For example, standards consolidate and shape the market for third-party developers by ensuring that there is a sustainable market for their activities. In close, Adlane explains why the IoT in smart cities cannot be accomplished without carrier-grade Wi-Fi deployed throughout the city.

Read the full WiFi360 article here or to download the full Machina Research report, click here.

June 1, 2016 / Posted By: Patrick Van de Wille

As many of you know, last year we changed to a fully virtual Annual Shareholder Meeting format, in keeping with our role as a technology leader.  The new format came with some new meeting registration procedures, so we’ve just posted a document that clearly spells out what you need to do to attend online. The press release went out yesterday, and the document can be viewed here.
 
Please make sure you read it, and factor in some time on the day of the event to register correctly.  Think of it the same way you’d attend in person, where you’d leave in plenty of time to make it in case of traffic and would factor in time to register at the desk. Online registration opens at 10:30 a.m. Eastern Time on June 8.  The instructions are simple, and there are resources (including a toll-free number) available if you’re having any issues.
 
Looking forward to your participation and questions!
 
-P

June 1, 2016 / video, HEVC, SCC, standards / Posted By: Kelly Capizzi

The amount and variety of content in video is constantly changing. The continuous evolution creates a need for continued compression and standards evolution. Recently, engineers have developed a new extension to HEVC called Screen Content Coding. InterDigital’s Dr. Yan Ye, Director of Engineering, discusses this new extension in a recent article published on StreamingMedia.com.

The Screen Content Coding (SCC) extension to HEVC is designed for the new variety of screen-captured content beyond conventional camera-captured content. The new variety refers to captures from video games and tutorials, which feature animation, text and graphics along with camera-generated video. In the article, Yan describes how SCC further improves coding efficiency for this type of screen-captured content in two main areas – compression efficiency and flexibility.  

She also explains the compelling results of the standards project, which wrapped up in February of this year after nearly two years of development. Ultimately, SCC will enable a better video experience for any video products that need to efficiently deliver a large amount of screen-captured video content.  

Click here to read the full article, or to learn more about video, visit The Vault.  

May 16, 2016 / IoT, oneM2M, security, standards / Posted By: Kelly Capizzi

The Internet of Things (IoT) market continues to grow, but there is a major industry concern that could cause growth to hit the brakes a little – security. IoT security is a multi-layered problem that includes added complexity from supplier diversity and legacy systems. In a recent Internet of Things Today article, InterDigital’s Yogendra Shah and Gemalto’s Francois Ennesser explore security solutions and services for the IoT.   

The article opens with a characterization of security in the IoT context and clearly illustrates the current IoT security problem. Yogendra and Francois explain how the global oneM2M standard architecture enables IoT applications. oneM2M’s platform architecture consolidates the essential components of any IoT application into a three-layer model to ensure a consistent and modular framework for IoT application developers and users as stated in the article. Finally, the pair details the standards’ hop-by-hop security strategy as well as a long-term road map.

Here’s a look at the oneM2M security standardization road-map featured in the article:

 

Want to know more? Read the full article here.

May 6, 2016 / IoT, oneM2M, standards, oneTRANSPORT / Posted By: Kelly Capizzi

City authorities and their technology partners could squander $341 billion by 2025 if they adopt a fragmented versus standardized approach to IoT solution deployment. This warning comes from Machina Research in a new white paper, commissioned by us, that analyzes potential IoT deployments in smart cities. The report shows that using non-standardized versus standards-based solutions for IoT will increase the cost of deployment, hinder mass scale and adoption, and stifle technology innovation for smart city initiatives worldwide.
 
The report launched on Thursday, and there has been a tremendous amount of media attention surrounding the findings. Don’t just take our word for it, check out just some of the media coverage below:

Interested in reading the full report? Download it here. Or for more information on InterDigital’s work in IoT, click here.

 

April 29, 2016 / IoT, 5G, ICN, SDN/NFV / Posted By: Kelly Capizzi

“Picture this: you are in an open-plan office, and you need a stapler...” This is how InterDigital’s Alan Carlton, Vice President of InterDigital Europe, begins his real-world analogy that explains the publish-subscribe model used by Information Centric Networks (ICN) in a recent NetworkWorld article.

In the article, Alan explains what ICN is, why it is important and the role it will have in next-generation 5G networks. He describes how ICN will play a role in 5G for some very good reasons: it will provide much-needed efficiencies and performance improvements, and it will align the networking architecture with the Internet of Things (IoT). IoT follows the exact same publish-subscribe model as ICN, explains Alan in the article.
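For readers unfamiliar with the model, here is a bare-bones publish-subscribe sketch: consumers subscribe to a named piece of information rather than to a server address, and a single publication reaches every subscriber. It is a generic illustration of the model Alan describes, not an ICN protocol implementation; the names used are invented.

```python
# Minimal publish-subscribe illustration: interest is expressed in named
# content, and one publication fans out to all interested subscribers.

from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # content name -> list of callbacks

    def subscribe(self, name: str, callback: Callable[[str], None]):
        """Express interest in content by name, not by server address."""
        self._subscribers[name].append(callback)

    def publish(self, name: str, data: str):
        """Publish once; every interested subscriber receives the data."""
        for callback in self._subscribers[name]:
            callback(data)

broker = Broker()
broker.subscribe("office/stapler/location", lambda d: print("Desk 12 hears:", d))
broker.subscribe("office/stapler/location", lambda d: print("Desk 37 hears:", d))
broker.publish("office/stapler/location", "stapler is on the supplies shelf")
```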

Want to know more? Click here to read the full article!

April 21, 2016 / IoT, Wi-Fi / Posted By: Kelly Capizzi

The Internet of Things is here and continues to evolve at an exponential rate. However, as it evolves more and more challenges are realized. So what are some of the ways we can help support the growing IoT? Spectrum sharing like Wi-Fi… stated InterDigital’s Juan Carlos Zuñiga, Principal Engineer, at State of the Net Wireless 2016 on Monday.

The conference was the second installment in the State of the Net, America’s premier Internet policy conference series. State of the Net Wireless (STONW) brings together top government officials and private sector experts to address the trends and policies shaping wireless Internet technologies such as mobile advertising, the Internet of Things, mobile data consumption, and wireless broadband.

Juan Carlos’ statement on spectrum comes from his participation in an STONW panel titled “How Can We Make the Internet of Things the Innovation of Things?” The panelists included experts from the Internet Society and the National Telecommunications and Information Administration along with Juan Carlos. The experts explored IoT challenges ranging from privacy and security to spectrum and battery life, and answered questions such as: are policymakers and industry prepared for the future, and will there be a smooth transition to the Innovation of Things?

Watch the full panel below or click here for more information:

April 7, 2016 / STEM, 5G / Posted By: Kelly Capizzi

Think back to a Friday afternoon in high school… you probably couldn’t wait to get out of the building and go out with your friends or catch up on your favorite television show.  

But, last Friday, roughly 50 high school students from George Washington Carver High School for Engineering and Science located in Philadelphia, PA elected to stay at school to learn about preparing for a future in a STEM-related field.

InterDigital’s John Kaewell, Engineering Fellow, organized a group of engineers that included Mike Jeronis, Vice President, Research & Development; Catalina Mladin, Member of Technical Staff; and Bob Flynn, Senior Staff Engineer, to participate in a STEM mentoring session led by Dr. Ted Domers, Principal of George Washington Carver High School of Engineering and Science.  

The mentoring session consisted of InterDigital staffers rotating through four round table discussions with the highly motivated students. Students posed questions such as how the engineers chose their universities, what 5G is, what programming languages and tools they use in their work, what a typical work day looks like, and the most valuable advice they had ever received. It was an extremely successful session for both the students and the engineers! So much so that the group plans to tour the students around the InterDigital offices and attend another mentoring session in the fall. Kudos to John for leading this great opportunity!  

The mentoring event is just another example of how committed InterDigital is to investing in the ideas and people of the future. We’ve supported several local STEM-related efforts, including sponsorships of the Delaware Children's Museum Junior Engineers Program and a wireless communications laboratory at Delaware State University. Click here to see more on InterDigital and STEM education.

April 6, 2016 / 5G, EdgeHaul, MWC16, Crosshaul / Posted By: Kelly Capizzi

High speed and low latency are expected to be cornerstone 5G requirements, particularly for the delivery of virtual reality and augmented reality. 5G-Crosshaul, a Horizon 2020 PPP project co-funded by the European Commission, recently highlighted how InterDigital utilized its EdgeHaul millimeter-wave mesh backhaul technology to deliver a live, functioning virtual reality telepresence use case at Mobile World Congress 2016.

5G-Crosshaul shared a video of Doug Castor, Senior Director, InterDigital, demonstrating EdgeHaul at MWC 2016. In the video, Doug demonstrates how the platform is a solution for gigabit connectivity for today’s applications such as small cell backhaul, residential broadband, and carrier Wi-Fi cable extension. He goes on to explain how the solution is also a development platform for the emerging 5G challenges that will demand low latency and multi-gigabit dense networking. In the near future, InterDigital’s EdgeHaul will extend to support multiplexed fronthaul traffic, and be integrated with the 5G-Crosshaul.

The 5G-Crosshaul project, comprised of 21 partners, aims to develop a 5G integrated backhaul and fronthaul transport network that will enable a flexible and software-defined reconfiguration of all networking elements in a multi-tenant and service-oriented unified management environment. To learn more about the project, click here.

Check out the EdgeHaul demo below:

March 28, 2016 / 5G, video, RCR, NGN / Posted By: Kelly Capizzi

“The coming of “5G” will resolve a variety of network issues, but in video it will only make the situation worse,” stated InterDigital’s Dirk Trossen in the first article of his two-part series on video delivery featured in RCR Wireless News’ Reader Forum today.

In the article, Dirk discusses how video is a source of pain for cable companies and mobile operators due to the use of a technology, and delivery via a network architecture, that was designed with general data in mind. He warns that while 5G will resolve some network issues, it is not the solution for video. Dirk explains that the demands of the future will require more than just a turbocharge of the current approach – it will require a whole new approach. Keep reading here. And stay tuned for a follow-up from Dirk that covers finding a better way.

At InterDigital, Dirk is responsible for driving the development of systems and solutions that improve the overall end user experience, with a focus on mobile network architecture. And he was recently recognized among the top 50 industrial IoT and 5G industrialists and innovators by RCR Wireless for his leadership in multiple European 5G projects.

For more information on the European 5G projects, click here. Or to read Dirk’s full article, please click here.  Plus, check back for a link to his follow-up article that explores a better way to connect video demand and network resources.

March 28, 2016 / Posted By: Kelly Capizzi

Brian G. Kiernan, former vice president and chief scientist of InterDigital, has been recognized for engineering excellence by Newark College of Engineering at New Jersey Institute of Technology (NJIT).

Brian’s alma mater recognized him with the Outstanding Alumnus Award at the 2016 NCE Salute to Engineering Excellence on March 9, 2016. Kiernan worked at InterDigital, or its predecessor companies, for close to thirty years in various roles.  As former vice president and chief scientist at InterDigital, Kiernan was responsible for worldwide industry standards activities and aided in the development of new market, product and technology initiatives.
 
Brian’s career was filled with contributions to major wireless standards organizations around the world. In 2013, he was honored with the IEEE Computer Society Hans Karlsson Award, which recognizes “outstanding skills and diplomacy, team facilitation and joint achievement, in the promotion of computer standards where individual aspirations, corporate competition, and organizational rivalry could otherwise be counter to society's benefit.”

Presently, Brian remains an active NJIT alumnus as chair of the University’s Undergraduate Research and Innovation External Advisory Board, a member of the ECE Department’s Industry Advisory Board, and a member of the Highlander Angel network.
 
Take a look at the announcement by NJIT here – Congrats, Brian!

 

March 17, 2016 / IoT, IEEE, Privacy / Posted By: Kelly Capizzi

The Internet of Things (IoT) raises unique challenges when it comes to privacy. For example, IoT privacy must focus on the individual. This is according to InterDigital’s Juan Carlos Zuñiga, principal engineer, in a recent post featured on IoT Global Network.  

Juan Carlos explains that with IoT privacy we need to defend not only the individual who owns the device, but all individuals, as they can be surrounded by devices that don’t necessarily belong to them. How do we start to do that? Keep reading here.  

To learn more about the IoT, visit the InterDigital vault or type “IoT” into the search box above. 

March 14, 2016 / 5G, IoT / Posted By: Kelly Capizzi

Will 5G require a new air interface? Where does OFDM fit in? Or will 5G utilize an umbrella framework pulling in different options? With the 5G air interface yet to be determined, the industry is actively working towards answers to these questions. And as a result, industry leader opinions are popping up all over wireless media.

ReThink Technology Research’s Wireless Watch, a weekly set of research notes from the leading wireless analysts, recently published a comment relating to the announcement of Cohere joining the race to determine the 5G air interface. The comment cited Alan Carlton, Vice President, InterDigital Europe, weighing in on the requirement of a new air interface for sub-6 GHz Internet of Things applications, and for high frequency bands. Check out Alan’s insight here.

Today, NetworkWorld published an article penned by Alan titled, “Will 5G say farewell to OFDM?” in which he addresses waveform candidates for 5G and where OFDM fits in. In the article, he explains that while a definitive answer has not yet been agreed, there are several waveforms that could be potential candidates, spanning a wide range from very simple single-carrier to very sophisticated multi-carrier waveforms. Want to know the main candidates? Keep reading here.

For more information on 5G, visit the InterDigital Vault or search “5G” in the search box.

March 10, 2016 / 5G, IoT, MWC16 / Posted By: Kelly Capizzi

What is 5G and how is it going to affect your life? CNBC’s Arjun Khrapal posed these questions and more to InterDigital’s CEO Bill Merritt and Wireless Broadband Alliance’s CEO Shrikant Shenwai in an interview at Mobile World Congress 2016.  

“5G is a revolutionary change in networks because it will enable a whole host of services that to date have not been available,” stated Bill. “Much like the Internet in the 1980s changed people’s lives, 5G will do the same thing.” It is anticipated that 5G will enable entirely new data services for consumers, built on data aggregated across the networks. For example, some of the big areas will be smart cities, home automation, enhanced health services, smart transport, etc.  "The consumer will have the ability to bring everything together and manage it much more effectively," explains Shrikant.  

Also part of the conversation were the Internet of Things and its tie to 5G, the challenges of standardization, industry collaboration, and the roadmap of 5G to the consumer.  

Click here to watch the full interview. And to learn more on 5G, visit the InterDigital Vault!

March 8, 2016 / Posted By: Patrick Van de Wille

The oneTRANSPORT initiative in the UK has been an enormous success, highlighting the tremendous role that breaking down data silos and exposing new sources of data can play in improving services for consumers… and driving revenue for public authorities.

It’s also been getting a lot of attention in the technology media. After an excellent article last week in ComputerWeekly, this time it’s Computer Business Review having a look at the project, with a very interesting angle: how cities and public authorities can use IoT technology (in this case, InterDigital’s oneMPOWER™ platform) to monetize their data at a time of ongoing public spending cutbacks. “Projects such as oneTRANSPORT show that councils have important assets already available that might help them ease the pain of spending cuts – and technology might be the way to unlock them,” says CBR’s Alex Sword. 

More information on oneTRANSPORT is available in the InterDigital Vault, at this link or by searching “oneTRANSPORT” in the search box.

March 2, 2016 / IoT, oneTRANSPORT / Posted By: Kelly Capizzi

The intelligent transportation system category is one of the most pertinent subcategories of IoT in the smart cities market. One of the earliest projects in this area is the oneTRANSPORT project, which has emerged from Innovate UK’s integrated transport initiative. Recently, Computer Weekly published an article that takes a closer look at the project and its partnerships.  

The article focuses specifically on two of the project’s eleven partners – Buckinghamshire Council and InterDigital Europe. David Aimson, project manager, Buckinghamshire Council, describes the project as a smart city solution for a suburban area. He explains that the council joined the project looking for a solution to merge and commercialize their datasets.  The oneTRANSPORT project utilizes InterDigital's oneMPOWER™ standards-based platform to integrate fragmented datasets into a holistic application that can result in more efficient transport systems.  

Rafael Cepeda, Senior Manager, InterDigital, dives into the importance of an open standards-based platform. IoT is built primarily on the concept of interoperability and the only way to create interoperable applications is through open standards. oneTRANSPORT seeks to enable interoperability beyond just big cities, but to suburban and rural communities as well. This is a critical advantage for partners such as Buckinghamshire Council who seek to commercialize data effectively.  It allows for potential transport apps and services to extend beyond the Buckinghamshire borders and into neighboring territories resulting in better end user experience.   

Want to know more? Keep reading here.

February 24, 2016 / MWC16, 5G / Posted By: Kelly Capizzi

Mobile World Congress (MWC) 2016 is well underway! As the industry’s largest event, there is a ton to see across the floors of the Fira Gran Via and Fira Montjuic in Barcelona. The Fast Mode, a leading news outlet for global telecom, is compiling snapshots from all around the event on their blog– and included InterDigital!  

The blog post titled, “MWC Barcelona 2016 – The Sights and Scenes from Fira Gran Via” displays four snapshots that highlight InterDigital. The snapshots show our booth layout, our EdgeHaul™ millimeter-wave mesh backhaul technology, our 5G access technology platform and Bill Merritt, Chief Executive Officer, interviewing with CNBC on 5G.   

Take a look to see what’s happening all around the MWC here.

February 22, 2016 / MWC16, 5G, IoT / Posted By: Kelly Capizzi

Update: Thursday, February 25, 2016  - 1:15 PM CET (7:15 AM Eastern)

Today is day 4 – the final day of MWC 2016!  Day 4 may be the shortest day as the event closes at 4 PM CET, but the booth will still be full of visitors, demonstrations and meetings. 

Throughout the conference, we’ve recorded a number of interviews and demonstrations! All that great video content along with slideware from the booth will be available on the InterDigital Vault following the event. In the meantime, check out some snapshots from days 1 – 3:      

         

  

Don’t miss what’s happening in the booth on Day 4! We will again use our millimeter wave 5G technology to stream the below live feed straight from the InterDigital booth: 

 

Enjoy and be sure to check back next week for links to all the content we’ve captured at the event!  

 

Update: Wednesday, February 24, 2016 -1:49 PM CET (7:50 AM Eastern)

After a great first two days, we’re ready for day three at MWC ’16. Our demo stations and meeting rooms have been full, and visitors are really enjoying the showcase of 5G and IoT technologies at the booth this year! And to add to excellent booth traffic, The Mañaners’ energetic performance packed the house yesterday evening. Here are just a few photos from the action yesterday: 

Dr. Rafael Cepeda demonstrating oneTRANSPORT, a smart city initiative driven by InterDigital and ten partners. 

Nick Podias explaining how our oneMPOWER™ platform enables home and health use cases.

Bob Gazda helping a visitor try out the telepresence use case presented over our EdgeHaul™ mmW solution.

Want to see what’s happening in the booth today? Once again, we will use our millimeter wave 5G technology to stream the below live feed from the InterDigital booth!

Also, the band will be back for a final performance tonight! Tune in below or stop by Hall 7 stand 7A71 at 5:30PM CET (11:30 AM EST).   

Update: Tuesday, February 23, 2016 - 3:00 PM CET (9AM Eastern)

In just 30 minutes, your guided tour of the booth along with interviews of some of our experts will be available below. Get ready to tune in for some great insight into what we are showcasing at MWC ’16!

Plus, don’t miss a lively performance from The Mañaners, a popular local band (pictured), at 5:30PM CET (11:30AM EST) via the live stream above!

Update: Tuesday, February 23, 2016 - 6:45 AM CET (12:40AM Eastern)

Day two of MWC holds a lot of excitement! We will stream a tour guided by Patrick Van de Wille, Chief Communications Officer of InterDigital, in which he will talk with some of our subject matter experts on the main topics and demonstrations within the InterDigital booth. The audio and video will be streamed live at 3:30PM CET (9:30AM Eastern). Check back around 3:00 PM CET (9AM Eastern) for more details.

We are also involved outside of the booth! Alan Carlton, Vice President of InterDigital Europe, will provide closing remarks at an executive luncheon presented by FierceWireless and sponsored by Ascom, InterDigital and Qualcomm. The luncheon titled, “The Path to 5G: What Operators Need to do to Prepare for the Network of the Future,” will feature a panel discussion that explores the 5G vision and disruptive new technologies that are emerging in new spectrum bands such as millimeter wave.

Remember to check out the live stream below to see what’s happening in the booth!

Posted February 22, 2016 - 9:30AM CET (3:30 AM Eastern) -

Welcome to our live blog post directly from our booth at Mobile World Congress 2016, Hall 7 stand 7A71!  This post is the spot to be kept up to date with everything going on at the event.  We’ll continue to add to this post throughout the week, so check back often for updates.

Watch the below live feed from the InterDigital booth at MWC 2016, using millimeter wave 5G technology to stream virtual reality with ~1 millisecond latency.

 

February 18, 2016 / MWC16, 5G, Virtual Reality / Posted By: Kelly Capizzi

The mobile industry will descend onto Barcelona next week to see the latest technological developments, next generation services and growth strategies at Mobile World Congress (MWC), the industry's premier conference. In order to sift through the noise, Strategy Analytics, a leading research and analytics firm, issued a report that lists the top 10 priorities for the show – and InterDigital made the cut.  

The report, titled “MWC 2016: Your Pocket Guide,” recommends the top 10 handsets and topics as well as the key industry leaders within the topics at MWC.  For handsets, the Strategy Analytics list includes the Samsung Galaxy S7, LG G5, Huawei P9 and HTC One M10 among others.  As far as industry topics, the pocket guide identifies 5G and Virtual Reality as among the leaders.  

According to the report, “…5G represents the next 10 to 20 years of mobile phones and IoT. We expect companies like Qualcomm, Ericsson and InterDigital to be right at the forefront of 5G demos or presentations at MWC 2016.” As far as Virtual Reality, the firm predicts that Samsung Gear VR, HTC Vive and Google Cardboard will be among the best devices for trial at the show.  

This mention underscores InterDigital’s long history as a leader in mobile technology development and highlights the strength of the company's research and contributions to the industry. The company recently announced that it will showcase a number of working 5G access and network solutions years ahead of broad market rollout at MWC 2016.  

Check out the full report here.

February 17, 2016 / 5G, Wi-Fi, IEEE, RAT / Posted By: Kelly Capizzi

Wi-Fi increasingly has become an essential part of the 4G picture and it is predicted to take on an even more integrated role in 5G networks. Similar to prior generations, the 3rd Generation Partnership Project (3GPP), the mobile broadband standards body, has taken the lead in driving the 5G radio standards process. With 3GPP at the forefront, there are questions of where Wi-Fi’s standards organization, IEEE 802.11, fits into the process.  This is the topic of a recent Wi-Fi360 blog post authored by Caroline Gabriel, Research Director & Co-Founder of Rethink Technology Research, a distinguished firm in wireless research.

In her post, Caroline states that the IEEE 802.11 group has the potential to influence the 5G process, but needs to determine the best approach. She dives into whether or not IEEE 802.11 should form a closer relationship with the 3GPP to ensure alignment between the Wi-Fi and cellular in the standardization process. Caroline references a presentation from InterDigital’s Joseph Levy, principal engineer, in which he discussed the development of the relationship between the two sets of standards. 

In his presentation, Joe outlined the potential for 802.11 to be a 5G RAT within the 3GPP architecture and argued that the Wi-Fi community would enrich the 3GPP process. The 802.11 agenda dovetails with the multi-RAT architecture that is believed to be essential to support 5G use cases ranging from multi-gigabit video to ultra-low-power M2M to ultra-dense venue networks.  

However, 5G will be more than just RATs. It will also deal with core network and other areas that include IP standards, higher network layers and applications. It may also be the last generation in which the 3GPP takes the lead in defining standards at the radio level. Therefore, Caroline states that it is an important year for the IEEE 802.11 to push itself forward as a key participant.  

Read the full article here.

February 8, 2016 / 5G, SDN, NFV, Virtualization / Posted By: Kelly Capizzi

InterDigital’s Alan Carlton, Vice President, InterDigital Europe, was recently accepted to participate in the International Data Group (IDG) Contributor Network. IDG’s NetworkWorld, the premier provider of information and intelligence for Network and IT executives, will feature regular contributions from Alan under the blog titled “5G and Future Mobile.” Last Friday marked the first published contribution: “5G is coming and it is the future of mobile.”  

In the article, Alan discusses what he refers to as the true vision for fifth generation mobile networks.  He explains that in 5G, wireless will grow into a truly horizontal industry that provides support for literally everything. To face this challenge, 5G will need to be built on a foundation of established IT thinking that takes the technologies to new levels and depths of integration. Finally, Alan briefly explores one of the hottest topics in telecom today, Software Defined Networking (SDN)/Network Function Virtualization (NFV), and how these concepts will be utilized in 5G.   

With over 25 years in the wireless technology industry, Alan’s well suited to provide valuable insight into the future of mobile. He currently leads 5G and Internet of Things research efforts at InterDigital Europe. His team actively participates in three EU projects funded by Horizon 2020 (the €80 billion European research and innovation program) that focus on the development of 5G, as well as a European Commission study of the socioeconomic impact of 5G.  

Click here to read the full article and stay tuned for the latest from Alan!  

Thus far, the vast majority of our blog posts have focused on the machine-to-machine opportunities the Internet of Things affords. Today I thought I would show a simple but powerful example of how easy it is to extend that connectedness to one of the tools we use every day--the web browser. And in so doing, I hope to illustrate some of the power of the wot.io data service exchange™.

As you probably already know, virtually all of the modern web browsers offer the ability to create plugins that extend the functionality of the browser. Today we will create a simple extension for Google Chrome that will send some metadata about the current web page off to the wot.io data service exchange™.

Here's a quick video demonstration:

A Peek at the Code

The heart of the extension is just a simple JavaScript file. In fact, the rest of the extension is really just the HTML and CSS for the popup and options pages.

Because one of the protocol adapters that wot.io supports is HTTP, we can implement our "send to wot.io" functionality with a simple AJAX request:

var send = function() {
    // Look up the active tab in the current window
    chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
        if (tabs.length) {
            var tab = tabs[0],
                xhr = new XMLHttpRequest(),
                msg = JSON.stringify(["link", tab.url, tab.title, tab.favIconUrl])
            xhr.onreadystatechange = function() {
                if (xhr.readyState === 4) {
                    // Show a confirmation in the popup, then close it
                    document.getElementById('status').textContent = 'Sent successfully'
                    setTimeout(function() {
                        window.close()
                    }, 750)
                }
            }
            // PUT the page metadata to the wot.io HTTP protocol adapter
            xhr.open('PUT', options.url)
            xhr.setRequestHeader('Authorization', 'Bearer ' + options.token)
            xhr.send(msg)
        }
    })
}

var buildUrl = function(options) {
    // Compose the wot.io resource URL from the extension's saved options
    return 'http://' + options.cluster + '.wot.io/' + options.cluster +
        '/' + options.exchange + '/' + options.meta
}

Connecting People to All the Things...

Cool, right? And apparently useful, too, seeing as there are whole companies whose products do essentially what our simple extension does—create a web-accessible RSS list of bookmarks.

But how does that relate to IoT?

So glad you asked.

Remember back in the last section when I said how convenient it was that wot.io offers adapters for protocols like HTTP? What I didn't point out at the time was that any data resource on the wot.io data service exchange can be referenced via any of those protocols (subject to the authorization permissions of the user's access token, of course).

This means that if my data topology contained a resource whose messages are sent through a device management platform like ARM mbed Device Server, ThingWorx, or InterDigital's oneMPOWER Platform, sending data from the web browser to one of their connected devices would be as simple as changing a single value in the settings dialog. Same thing with devices or applications connected to a connectivity platform like PubNub.

And of course, any of the other 70+ data services on the wot.io data service exchange™ also get modeled as simple named data resources, making it as easy to send data interactively from a web browser to NGData's Lily Enterprise Hadoop platform as it is to send it to business logic in scriptr or to a device connected to one of the aforementioned device management platforms.

Connecting All the Things to People...

But that's not even all! Because wot.io adapters are bi-directional, we could have just as easily selected the Web Socket protocol instead of HTTP for our Chrome extension's implementation. In that case, we could have still configured it to send data to the wot.io exchange just as before, but we could have also configured it to receive data from the exchange.

Whether that was data directly from devices or data that had been transformed by one or more data services, the possibilities are limited only by the logical data topology and your imagination.

Powerful Abstractions

The point of this post is hardly to claim that a toy browser extension rivals a polished product like Pocket. And of course, it could have just as easily been a web application as a browser extension. Nor was this post even intended to claim that sending IoT data to and from a web browser is novel.

The point of this post is to show how little effort was required to connect a hypothetical, real-world application to and from literally any of the connected data streams that we model on our wot.io data service exchange™ because they are all accessible through unique URLs via any of the numerous supported protocols.

Let that sink in, because it's really powerful.

December 17, 2015 / Posted By: wotio team

In the past two parts, we constructed a Docker container suitable for deploying within the context of the Data Service Exchange, and then published the MNIST training set over the wot.io Data Bus. In the third part, we will retrofit a model to accept the training data from the Data Bus. The basic architecture we would like for our data flow looks like this:

We will load the MNIST training data via the training.py we created in the last part, and send it to a mnist bus resource. Then using bindings we will copy that data to both model1 and model2 resources, from which our two models will fetch their training data. The programs model1.py and model2.py will consume the training data, and print out estimates of their accuracy based on the aggregated training set.

This architecture allows us to add additional models or swap out our training data set, without having to fiddle with resource management. As the models can be deployed in their own containers, we can evaluate the effectiveness of each model in parallel, and discard any that we are unhappy with. When dealing with large amounts of training data, this distribution methodology can be especially handy when we are uncertain of the degree to which the choice of training data and test suite influences the apparent fitness of the model.

The code in training.py creates the mnist resource. The branch connecting mnist to model1 is created programmatically through the model1 code:
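
Sketched in the event-list style used throughout these posts, that wiring looks roughly like the following. Note that connect, create_resource, and create_binding are illustrative names (with a placeholder endpoint), not the actual wot-python API; consume_resource and train are the calls referred to below.

# Illustrative sketch only: connect/create_resource/create_binding are assumed
# names and the URL is a placeholder; they are not taken from the wot-python SDK.
start([
    (connect, 'amqp://user:token@wot.io/'),      # placeholder endpoint and credentials
    (create_resource, 'model1'),                 # make sure our model's resource exists
    (create_binding, 'mnist', 'model1'),         # copy mnist messages into model1
    (consume_resource, 'model1', train),         # invoke the train callback per message
])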

The act of creating the binding, and then consuming from the bound resource sets up the remainder of the top branch in the diagram. A similar bit of code occurs in our second model:

As the wot.io Data Bus uses software defined routing, the code will ensure that this topology exists when the programs start up. By asserting the existence of the resources and the bindings, the under-the-hood configuration can abstract away the scaling of the underlying system.

In the consume_resource event, we invoke a train callback which runs the model against the training data. For each of the models the training code is largely the same:

vs.

The behavior of each is as follows:

  • receive an image and its label from the bus
  • convert the image into a flattened array of 28×28 32-bit floating-point values
  • scale the 8-bit pixel values to the range [0, 1]
  • convert the label to a one-hot vector
  • save the image data and label vector for future accuracy testing
  • run a single training step on the image
  • every 100 iterations, test the accuracy of the model and print it to stdout (a rough sketch of this loop follows the list)
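
As a rough illustration, a train callback in the spirit of the MNIST softmax tutorial could look like the sketch below, using the TensorFlow 1.x-era API. The message framing (raw image bytes plus a 'label' meta-data attribute) is an assumption based on the description above, not the exact code used in the demo.

# Rough sketch of a train callback; the meta['label'] field is assumed.
import numpy as np
import tensorflow as tf

x  = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W  = tf.Variable(tf.zeros([784, 10]))
b  = tf.Variable(tf.zeros([10]))
y  = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y + 1e-10))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
correct  = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

sess = tf.Session()
sess.run(tf.global_variables_initializer())
seen_x, seen_y, iterations = [], [], 0

def train(message, meta):
    global iterations
    # flatten to 784 32-bit floats and scale the 8-bit pixels to [0, 1]
    image = np.frombuffer(message, dtype=np.uint8).astype(np.float32).reshape(1, 784) / 255.0
    label = np.zeros((1, 10), dtype=np.float32)
    label[0, int(meta['label'])] = 1.0               # one-hot encode the label
    seen_x.append(image[0]); seen_y.append(label[0])  # keep for accuracy testing
    sess.run(train_step, feed_dict={x: image, y_: label})
    iterations += 1
    if iterations % 100 == 0:
        acc = sess.run(accuracy, feed_dict={x: np.array(seen_x), y_: np.array(seen_y)})
        print('accuracy after %d samples: %.3f' % (iterations, acc))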

From outside the Docker container, we can inspect the output of the model by invoking the docker logs command on the model's container. As long as the CMD instruction in the Dockerfile was of the form

CMD python model1.py

all of the output will be directed to stdout. As the model programs work as wot.io Data Bus consumers and never exit, these commands are properly daemonized from a Docker perspective and do not need to be run in a background process.

We could further modify these models to publish their results to another bus resource or two, by adding a write_resource method into the training callback, making the accuracy data available for further storage and analysis. The code for doing so would mirror the code found in training.py for publishing the mnist data to the bus in the first place. This accuracy data could then be stored in a search engine, database, or other analytics platform for future study and review. This capability makes it easy to run many different models against each other and build up a catalog of results to help guide the further development of our machine learning algorithms.

All of the source code for these models is available on GitHub and corresponds to the TensorFlow tutorials.

December 15, 2015 / Wi-Fi, LTE, RF / Posted By: Kelly Capizzi

Tunable RF Components and Circuits: Applications in Mobile Handsets, a recent CRC Press published book, features industry perspective on key tunable technologies and applications including a chapter written by two InterDigital principal engineers – Alpaslan Demir and Tanbir Haque.  

The book intends to give readers a technical introduction to the state of the art in tunable radio frequency components, circuits and applications. It provides a foundational overview that covers tunable RF components ranging from tunable antennas to power amplifier envelope tracking concepts. The book opens with a market overview written by the editor, Jeffrey L. Hilbert, president and founder of WiSpry, Inc., a world leader in tunable RF technology, and features thirteen chapters from multiple contributors that are among the leading practitioners in the field.  

In the chapter titled “Case Study of Tunable Radio Architectures,” Alpaslan and Tanbir provide a system architecture perspective to show how individual tunable RF components are tied together.  The protocol stack interaction has been captured based on a Wi-Fi and LTE coexistence use case.  Combined, the two experts provide over 40 years of industry experience focused in RF systems and hardware.   

Click here to learn more and access a preview of the book.

December 14, 2015 / chess, gaming, pentaho, statistics, analytics, reporting / Posted By: wotio team

This post is part 3 of our series on connecting gaming devices with the wot.io data service exchange™.

Storing Gameplay Event Data with wot.io

In the last post we saw that after retrofitting an open source chess application to connect via PubNub, we could route gameplay messages from connected gaming devices to the MongoDB No-SQL datastore. Even when the messages were generated or transformed by other data services like scriptr, the logic required to store messages in MongoDB remained loosely coupled to the implementation of any single data service.

Now that we have taken our IoT event data and stored it in a suitable datastore, let's look into how we might start analyzing and visualizing it with another wot.io data service.

Custom Reporting with Pentaho

Pentaho is one of a number of wot.io data services specializing in extracting value from data at rest like the event data now captured in our MongoDB datastore.

Pentaho is an excellent choice for modeling, visualizing, and exploring the types of data typically found in IoT use cases. And its ability to blend operational data with data from IT systems of record to deliver intelligent analytics really shines in the context of the wot.io data service exchange™, where multiple datastores and adapters to different enterprise software systems are not uncommon.

Just as you might imagine in a connected gaming use case, we wanted to create reports showing gameplay statistics for individual users, as well as aggregate reports across the entire gaming environment. Rather than write about it, have a look at this video:

December 11, 2015 / MQTT, protocol adapters, Elasticsearch, thingworx, bip.io, scriptr, ARM / Posted By: wotio team

One long-overdue lesson that the Internet of Things is teaching younger engineers is that there are a whole host of useful protocols that aren't named HTTP. (And don't panic, but in related news, we have also received reports that there are even OSI layers beneath 7!)

Since we have posted about wot.io's extensive protocol support before (for example, here and here), today I thought I'd share a quick video demonstrating that protocol interoperability using MQTT. Enjoy.

Photo Credit: "Networking" by Norlando Pobre is licensed under CC BY 2.0

December 11, 2015 / 5G, IoT, LTE / Posted By: Kelly Capizzi

With the new year right around the corner, experts across the mobile ecosystem are discussing potential key trends for 2016 that include the impact of new architectures as well as emerging technologies on networks and the business. InterDigital’s Chris Cave, Director, Research and Development, recently participated in a webinar conducted by RCR Wireless along with industry experts from Recon Analytics, 4G Americas and Sonus to examine what 2016 will bring.  

The forward-looking panel discussion focused on topics such as assessments of the status and importance of VoLTE, VoWi-Fi, 5G, virtualization and LTE-U, as well as hiring trends and new business structures.  A big thing to look forward to in 2016? The overall kick-off of studies towards a new 5G radio interface. As Chris explains in the webinar, it will be a long process…but next year will be particularly important, as it is when the basic building blocks for future systems will be identified. To listen to more of the panelists’ predictions, and view the full on-demand webinar, please click here.  

In addition to the webinar, RCR Wireless News published a report titled “2016 Predictions” that takes an in-depth look at the 2016 expectations for workforce needs, network topologies, 5G standardization, emergence of new technologies and more. Check out the special report here.

December 8, 2015 / Smart City, ARM, ARMmbed, scriptr, bip.io, thingworx, Elasticsearch / Posted By: wotio team

Data Service Providers

In part 1 of this series, we went over the various ARM devices that were combined with open data from the London Datastore to represent the connected device side of our demo. In part 2, we described how to employ one or more device management platforms like Stream Technologies IoT-X Platform or ARM mbed Device Server to manage devices and send their sensor readings onto the wot.io data service exchange™.

Now, let's see how we can route all that valuable device data to some data service providers like scriptr, ThingWorx, and Elasticsearch to extract or add business value to the raw IoT data streams.

Dataflow Review

Recall that back in part 1 we started with the MultiTech model car, modified to include a MultiConnect® mDot LoRaWAN module with an accelerometer sensor. The sensor sent the accelerometer data to a MultiTech MultiConnect® Conduit™ gateway using the Semtech LoRa™ low-power, long-range wireless RF interface. The Conduit was registered with Stream's IoT-X platform.

Since wot.io is fully integrated with IoT-X, making the device data available on the wot.io exchange where it could be sent to any of the data services was as easy as setting up the data routes.

scriptr for Business Logic

Part of the reason for measuring the accelerometer readings in the smart vehicle was to detect if it has been involved in an accident. Some of the numerous and obvious opportunities for such intelligence include insurance, emergency response dispatch, traffic routing, long term traffic safety patterns, etc.

However, in order to translate the raw sensor readings into that business intelligence, we have to determine whether there was a sufficiently rapid deceleration to indicate an accident.

Of the many wot.io data services that could be employed, scriptr is an excellent choice for embodying this type of business logic.

scriptr is a cloud-based, hosted JavaScript engine with a web-based Integrated Development Environment (IDE). Since we can route wot.io data streams to specific scriptr scripts, we can use it to write our simple deceleration filter:

Notice that the script receives our messages containing the raw X,Y,Z-plane acceleration readings. After parsing these parameters, we do a simple check to determine whether any of them exceed a given threshold. If so, we return a cheeky message back onto the wot.io data service exchange.

Notice that the message we returned is a simple JSON object (although it could have been anything--XML, plain text, or even binary data). Furthermore, it does not contain any information about the destination. It simply returns a value.

That is, our script does not need to know where its response will be routed. Indeed, it may be routed to multiple data services! Loosely coupling data services together in this fashion makes for a much more flexible and resilient architecture.

bip.io for Web API Automation

Next, we chose to route any warning messages returned from our scriptr script to a bip.io workflow (known as a "bip") that we named tweet so that we could notify the appropriate parties of the "accident". Although we called it tweet, bip.io bips can easily perform complex workflows involving any of its 60+ data service integrations (known as "pods").

For the demo, we kept our bip simple, consisting of only twitter and email pods. But you can readily imagine how, given the conditional logic and branching capabilities of bip.io, much more complex and interesting workflows could be created with ease. For example, device events could be stored and visualized in keen.io, sensor data could be appended to a Google spreadsheet involving complex functions and charts, or SMS messages could be composed into a template and texted via SMS through Twilio.

Since we authenticated the Twitter pod through our wot.io developer account @wotiodevs, whenever the data from the accelerometer is determined by scriptr to have exceeded the safety thresholds, we can see our tweets!

ThingWorx as an Application Enablement Platform

ThingWorx is a full-featured IoT platform that enables users to collect data from a variety of sources and services and build out applications to visualize and operate on that data.

In our case, we took the real-time location data originating from the mobile devices being managed by Stream's IoT-X and ARM's mbed Device Server platforms and routed them through the wot.io data service exchange to our visualization, or mashup application, in ThingWorx.

We also routed traffic camera and traffic sign data from the London Datastore through wot.io and into the same ThingWorx mashup.

To make the data useful, in our mashup we included a Google Map widget and then, in real time, we plot each mobile device, camera, and sign with a different icon based on their current locations.

Users can interact with any of these data points: clicking on a camera icon, for example, will display an image of what is actually being captured by that traffic camera at the given intersection. Below, I selected a camera that is located on the River Thames and has the Tower of London with Big Ben in its view!

While it's fun to sight see around London, in a Smart City, we can also imagine ways to use these cameras and digital signs to help us efficiently move assets (usually vehicles!) through a congested downtown area. For example, if we zoom into a traffic heavy portion of London, we can view the camera feeds and digital roadsigns in an area. Here, we can see that this sign's text currently displays a message that this route is closed for resurfacing.

And the camera in the area even shows how traffic cones are being set up to move traffic away from the roadwork!

And since we already know that with wot.io, messages can be routed to multiple data services as easily as to a single service, displaying the messages on a map is hardly the end of the story. Just as one trivial example, imagine correlating the timing and text and locations of digital signs with the resulting traffic disruptions to optimize how best to announce construction work.

Elasticsearch & Kibana

Finally, we also routed the telemetry messages from all those cellular, satellite, and LPWA-based mobile devices embedded in vehicles traveling around the city of London through wot.io and into an instance of Elasticsearch and Kibana to create a real-time heatmap visualization of the number of managed devices by geographic region.

Elasticsearch is a powerful, distributed, real-time search and analytics engine. Traditionally applied to numeric or textual data (as we have discussed previously), Elasticsearch also shines in geospatial indexing scenarios. In this case, our histogram is colored based on the number of devices currently reporting in each geographic subregion.

Conclusion

As London and other major cities begin to connect, open, and share all the data their IoT devices collect, wot.io allows for the creation and extraction of real, actionable business value from that data.

Whether through its many options for device management platforms from the likes of ARM and Stream Technologies, or its support for any of the hardware devices through numerous protocol adapters, or its support for web-based data feeds like the London Datastore, or its ability to flexibly route data to multiple data services like scriptr, bip.io, ThingWorx, or Elasticsearch, wot.io is clearly the data service exchange™ for connected device platforms.

London Knightsbridge Street Photo Credit: By Nikos Koutoulas [CC BY 2.0], via Flickr

December 8, 2015 / Posted By: wotio team

Training Data

In the last part, we created an environment in which we could deploy a Tensorflow application within the wot.io Data Service Exchange (DSE). Building on that work, this post will cover distributing training data for our Tensorflow models using the wot.io Data Bus.

The training data set we're going to use is the MNIST database maintained by Yann LeCun. This will allow us to build on Tensorflow's tutorials using the MNIST data:

And as this is not going to be a Tensorflow tutorial, I highly recommend you read all three at some point. Let's look at how we're going to use this data.

Architecture of a Solution

The system that we're going to build consists of a number of components:

  • a Training Data Generator
  • two Production Data Sources
  • four Machine Learning Models
  • three Consumer Applications

Between the components, we will use the wot.io Data Bus to distribute data from both the training data set and the production data sources to the different models, and then selectively route the model output to the consumers in real time. Due to the nature of the wot.io DSE, we can either build these applications inside of the DSE security context, or host the applications externally going through one of the authenticated protocol adapters. For purposes of this article, we will treat this design decision as an exercise left to the reader.

For my sample code, I'm going to use the AMQP protocol adapter for all of the components with the wot-python SDK. This will make it easy to integrate with the Tensorflow framework, and will make it possible to reuse code explained elsewhere.

Training Data Generator

The first component we need to build is a Train Data Generator. This application will read a set of data files and then send individual messages to the wot.io Data Bus for each piece of training data. The wot.io Data Bus will then distribute it to each of our machine learning models.

As our ML models will be built in Docker containers in the wot.io DSE, we can treat each instance of a model as a disposable resource. We will be able to dynamically spin them up and down with wild abandon, and just throw away our failed experiments. The wot.io DSE will manage our resources for us, and clean up after our mess. The Training Data Generator will allow us to share the same training data with as many models as we want to deploy, and we don't have to worry about making sure each model gets the same or similar data.

We can do our development of the application inside of a container instance of the wotio/tensorflow container we made in the last tutorial.

docker run -i -t wotio/tensorflow

This will drop us in a bash prompt, which we can then use to develop our training data generator. Next we'll set up an isolated Python environment using virtualenv so that we don't pollute the system Python while we're developing our solution. It will also make it easier to capture all of the dependencies we added when creating a new Dockerfile.

virtualenv training

We can select this environment by sourcing the training/bin/activate file:

. training/bin/activate

We'll build the rest of our application within the training directory, which will keep our code contained as well. You can check out the code from GitHub using:

git clone https://github.com/wotio/wot-tensorflow-example.git

The MNIST data is contained in a couple of gzipped archives:

  • train-images.idx3-ubyte.gz
  • train-labels.idx1-ubyte.gz

You can think of these files as a pair of parallel arrays, one containing image data, and the other an identifier (label) for each image. The images contain pictures of the numbers 0 through 9, and the labels take on those same values. Each training file has a header of some sort:

Image data file

Label data file

The goal will be to load both files, and then generate a sequence of messages from the images selected at random, each sent with its label as a meta-data attribute of the image data. The models will interpret messages carrying a meta-data label as training data, and will invoke their training routine on each such message. If a message doesn't have a meta-data label, it will instead be run through the model, and the result will be forwarded to the consumer with the most likely label attached in the meta-data field. In this way, we can simulate a system in which production data is augmented by machine learning, and then passed on to another layer of applications for further processing.

To read the image file header we'll use a function like:
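
In rough terms (and skipping error handling), the image header reader looks something like the sketch below, based on the standard IDX layout of a big-endian 32-bit magic number, image count, rows, and columns:

import struct

def read_image_header(stream):
    # IDX3 header: big-endian magic (2051), image count, rows, columns
    magic, count, rows, cols = struct.unpack('>IIII', stream.read(16))
    assert magic == 2051, 'not an MNIST image file'
    return count, rows, cols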

And to read the label file header we'll use:
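
And, under the same assumptions, the label header reader:

def read_label_header(stream):
    # IDX1 header: big-endian magic (2049) followed by the label count
    magic, count = struct.unpack('>II', stream.read(8))
    assert magic == 2049, 'not an MNIST label file'
    return (count,)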

Both of these functions take a stream, and return a tuple with the values contained in the header (minus the magic). We can then use the associated streams to read the data into numpy arrays:
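
A sketch of that loading step, using gzip and numpy on the two archives:

import gzip
import numpy as np

def load_training_set(image_path, label_path):
    with gzip.open(image_path, 'rb') as imgs, gzip.open(label_path, 'rb') as lbls:
        count, rows, cols = read_image_header(imgs)
        (label_count,) = read_label_header(lbls)
        assert count == label_count
        # pixel data is stored as unsigned bytes, one image after another
        images = np.frombuffer(imgs.read(count * rows * cols), dtype=np.uint8)
        images = images.reshape(count, rows * cols)
        labels = np.frombuffer(lbls.read(count), dtype=np.uint8)
    return images, labels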

By passing in the respective streams (as returned from the prior functions), we can read the data into two parallel arrays. We'll randomize our output data by taking the number of elements in both arrays and shuffling the indexes like a pack of cards:
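
One simple way to do that shuffle, sketched with the standard library:

import random

def shuffled_indexes(images, labels):
    # shuffle the index space rather than the (much larger) arrays themselves
    indexes = list(range(min(len(images), len(labels))))
    random.shuffle(indexes)
    return iter(indexes)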

With this iterator, we are guaranteed not to repeat any image, and will exhaust the entire training set. We'll then use it to drive our generator in a helper function:
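
One plausible shape for that helper is a plain Python generator that pairs each image with its label in shuffled order (the exact helper used in the demo may differ):

def training_messages(images, labels, indexes):
    # yield one (payload, label) pair per shuffled index; the payload is the raw
    # image bytes and the label rides along as a meta-data attribute
    for i in indexes:
        yield images[i].tobytes(), int(labels[i])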

Now we come to the tricky bit. The implementation of the wot-python SDK is built on top of Pika, which has a main program loop. Under the hood, we have a large number of asynchronous calls that are driven by the underlying messaging. Rather than modeling this in a continuation passing style (CPS), the wot-python SDK adopts a simple indirect threading model for its state machine:
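
Roughly, the interpreter boils down to something like the sketch below, reconstructed from the description that follows; it illustrates the pattern rather than reproducing the SDK's actual code:

from collections import deque

fsm = deque()   # the hidden program: a deque of (function, *args) tuples

def eval(program):
    # prepend the passed instructions, then hand control to _next
    fsm.extendleft(reversed(program))
    _next()

def _next():
    # pop the head of the program and apply it; an empty deque ends processing
    if fsm:
        head = fsm.popleft()
        if head:
            fn, args = head[0], head[1:]
            fn(*args)

def start(program):
    # inject the initial state of the finite state machine
    eval(program)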

Using this interpreter we'll store our program as a sequence of function calls modeled as tuples stored in an array. Start will inject the initial state of our finite state machine into a hidden variable by calling eval. Eval prepends the passed array to the beginning of the hidden fsm deque, which we can exploit to mimic subroutine calls. The eval function passes control to the _next function, which removes the head from the fsm deque and calls apply on the contents of the tuple, if any.

The user supplied function is then invoked, and one of 3 scenarios can happen:

  • the function calls eval to run a subroutine
  • the function calls _next to move on to the next instruction
  • the function registers an asynchronous callback which will in turn call eval or _next

Should the hidden fsm deque become empty, processing will terminate, as no further states exist in our finite state model.

This technique for programming via a series of events is particularly powerful when we have lots of nested callbacks. For example, take the definition of the function step in the training program:
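
Sketched roughly in that style (indexes, images, labels, and write_resource are assumed module-level names here, not the verbatim training program):

import sys

def step():
    try:
        i = next(indexes)
    except StopIteration:
        eval([(sys.exit, 0)])     # out of training data: schedule a clean exit
        return
    # schedule a write to the mnist resource, followed by a recursive call to step
    eval([
        (write_resource, 'mnist', images[i].tobytes(), int(labels[i])),
        (step,),
    ])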

It grabs the next index from our randomized list of indexes, and if there is one it schedules a write to a wot.io Data Bus resource followed by a call to recurse. Should we run out of indexes, it schedules an exit from the program with status 0.

The write_resource method is itself defined as a series of high level events:
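
Sketched roughly, with create_resource and send_message standing in for the SDK's lower-level events (illustrative names only):

def write_resource(resource, payload, label=None):
    # create_resource and send_message are stand-ins for the SDK's lower-level
    # asynchronous events, not its real method names
    eval([
        (create_resource, resource),
        (send_message, resource, payload, {'label': label}),
    ])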

wherein it first ensures the existence of the desired resource, and then sends the data to that resource. The definitions of the other methods are similarly high-level events evaluated by the state machine, with the lowest levels being asynchronous calls whose callbacks invoke _next to resume evaluation of our hidden fsm.

As such, our top level application is just an array of events passed to the start method:
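
For example, something along these lines, where connect and load_mnist are placeholder setup events rather than actual SDK calls:

start([
    (connect, 'amqp://user:token@wot.io/'),   # placeholder endpoint and credentials
    (load_mnist, 'train-images.idx3-ubyte.gz', 'train-labels.idx1-ubyte.gz'),
    (step,),                                  # kick off the publish loop
])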

By linearizing the states in this fashion, we don't need to pass lots of different callbacks, and our intended flow is described in data as program. It doesn't hurt that the resulting Python looks a lot like LISP, a favorite of ML researchers of ages past, either.

A Simple Consumer

To test the code, we need a simple consumer that will simply echo out what we got from the wot.io Data Bus:
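
A minimal sketch of such a consumer, in the same event-list style (connect is again a placeholder; stream_resource is the method described below):

def echo(message, meta):
    # print the label (if any) and the size of the image payload we received
    print(meta.get('label'), len(message))

start([
    (connect, 'amqp://user:token@wot.io/'),   # placeholder endpoint and credentials
    (stream_resource, 'mnist', echo),         # invoke echo on every mnist message
])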

You can see the same pattern as with the generator above, wherein we pass a finite state machine model to the start method. In this case, the stream_resource method takes a resource name and a function as arguments, and invokes the function on each message it receives from the given resource. The callback simply echoes the message and its label to stdout.

With this consumer and generator we can shovel image and label data over the wot.io Data Bus, and see it come out the other end. In the next part of this series, we will modify the consumer application to process the training data and build four different machine learning models with Tensorflow.

December 7, 2015 / Posted By: wotio team

One of the early inspirations for the wot.io Data Service Exchange was the need to deploy and evaluate multiple machine learning models against real time data sets. As machine learning techniques transition from an academic realm to enterprise deployments, the realities of operational costs tend to inform design decisions more than anything else, with the key forcing function becoming percentage accuracy per dollar. With this constraint in place, the choice of model often becomes a search for one that is "good enough", or which model provides adequate accuracy for minimal operational cost.

To make this concept more concrete, we can build a simple distributed OCR system using the wot.io data bus to transmit both training and production data. The wot.io Data Service Exchange currently provides access to services like facial and logo detection services through Datascription, and visual object recognition and search through Nervve. But for this demonstration, we will connect a demo application written using Google's TensorFlow machine learning library. This will allow us to demonstrate how to build and deploy a machine learning application into the wot.io Data Service Exchange. As TensorFlow is released under the Apache 2 license, we will also be able to share the code for the different models we will be testing.

Getting Started With Python

The wot.io Data Service Exchange supports a wide range of languages and protocol bindings. Currently, we have library support for JavaScript, Erlang, Python, Java, C/C++, and Perl. Since TensorFlow is written in python, our demo application will use the wot-python bindings. These bindings interface with the AMQP protocol adapter for the wot.io data bus, and model the data bus's resource model on top of the AMQP interface. To install the bindings, we'll first create a virtualenv environment in which we'll install our dependencies:

Linux

Mac OS X

This will create a virtualenv environment which will contain tensorflow and the wot-python bindings for local development. While this can be useful for testing, in production we will use a Docker container for deployment. The wot.io Data Service Exchange can deploy Docker containers and manage their configuration across data centers and cloud environments. As the wot.io Data Service Exchange has been deployed in Rackspace, IBM SoftLayer, and Microsoft Azure, it is useful to be able to produce a production software artifact that works across platforms.

Creating a Dockerfile

We will use the Linux version as the basis for creating a Docker container for our production application. To start with, we'll base our Dockerfile upon the sample code we make available for system integrators: https://github.com/wotio/docker-example. To build this locally, it is often useful to use VirtualBox, Docker and Docker Machine to create a Docker development environment. If you are using Boot2Docker on Mac OS X, you will need to tell docker-machine to grow the memory allocated to the VM itself:

docker-machine create -d virtualbox --virtualbox-memory "12288" wotio

As the default 1GB isn't large enough to compile some of tensorflow with LLVM, I had success with 12GB, YMMV. Once you have each of these installed for your platform, you can download our sample build environment:

This will build a sequence of Docker container images, and it is on top of the wotio/python image that we will install TensorFlow. At this point you'll have a working container suitable for deploying into the wot.io Data Service Exchange. In the next blog post we'll build a sample model based on the MNIST data set and train multiple instances using the wot.io Data Bus.

December 7, 2015 / Smart City, ARM, ARMmbed, Stream, IoT-Xtend, Iot-X / Posted By: wotio team

Recall that in a previous post, we discussed a collection of ARM-based devices and open data sources that comprised the basis for wot.io's demonstration at Mobile World Congress 2015 last Spring. Today we will continue our deeper look into the technology behind that demonstration by examining the device management platforms that were employed.

Managing Devices with Stream Technologies IoT-Xtend™

In a large organization or municipality, the issue of just simply managing all of the connected devices is usually the first challenge one encounters before all of the resulting data can be collected and analyzed.

This is where Stream Technologies comes in. Their IoT-Xtend™ Platform is an award winning connected device management platform designed to monitor, manage and monetize device endpoints, manage subscriptions, and provide robust billing and Advanced Data Routing. Xtend provides Multi Network Technology capability in one comprehensive platform. Xtend serves and supports complex multi-tenant and multi-tiered sales channels. Its web-based user interface can be used to view which devices are actively transferring data and allow for the management, routing, and metering of data.

Previously we described how we created embedded applications to run on each of the demonstration devices and could then connect to our device management platform.

In fact, what we were doing was leveraging the extensive device integration capabilities of Stream and their IoT-Xtend™ Platform. Specifically, Stream has integrated numerous cellular, satellite, LPWA, and Wi-Fi devices into their platform. In cases like our demonstration, where an integration did not already exist, creating a new integration was simply a matter of sharing the schema to which our messages would conform, and the communication protocol it would be using.

So the notification messages being sent from the devices to IoT-Xtend looked something like this for devices containing GPS sensors (like the u-blox C027):

{
    "sim_no": SIM_ID,
    "quality_idx": QUALITY,
    "gps_time": TIME,
    "dec_latitude": LAT,
    "dec_longitude": LON,
    "number_sat": SATELLITES,
    "hdop": HDOP,
    "height_above_sea": HEIGHT
}

When the device is powered on, it connects to Stream using its LISA-C200 CDMA cellular modem and begins to send its location data from the GPS receiver. Because its SIM card has been provisioned and managed in IoT-X, the device telemetry data is received by and made visible in the IoT-X web-based user interface.

Connecting Stream IoT-Xtend™ to the wot.io Data Service Exchange

wot.io and Stream have fully integrated the Stream IoT-Xtend™ device management platform with the wot.io data service exchange™. This means that notification and telemetry data from the managed devices can be routed to and from any wot.io data services for analysis, visualization, metering and monitoring, business rules, etc.

In part 3 of this series, we will explore a few of the many data services demonstrated at Mobile World Congress.

ARM mbed Device Server

Of course, wot.io is all about choice and selecting the best-of-breed services fit for specific needs. As such, one might be interested in exploring one of the other device management platforms on the data service exchange.

One such option is ARM mbed Device Server. We have already written extensively about our close integration of ARM mbed Device Server to the wot.io data service exchange.

Whether you need ARM mbed Device Server to bridge the CoAP protocol gap, combine it with other connectivity or device management platforms, manage device identities and authorizations, send complex command-and-control messages to devices, simply subscribe to device notifications, or host production-scale deployments and handle your data service routing needs, wot.io has you covered.

Connecting Stream IoT-Xtend™ with ARM mbed Device Server

In addition to a direct integration between IoT-Xtend™ and wot.io, the devices managed in Stream Technologies IoT-Xtend™ platform can also be integrated with other device management platforms. In particular, at Mobile World Congress we demonstrated a configuration in which devices were registered in both the Stream IoT-Xtend™ and ARM mbed Device Server platforms.

Real-world IoT solutions will often involve multiple device management platforms. In such situations, it is often desirable to consolidate the number of interfaces used by operations staff, or to leverage the unique strengths of different platforms. For example, an organization or municipality deploying a smart city initiative may elect to use IoT-Xtend™ for its SIM-management and billing-related capabilities while standardizing on ARM mbed Device Server for device provisioning and security. Or as another example, perhaps they would like to standardize on ARM mbed Device Server, but a vendor or partner uses another device management platform like IoT-Xtend™.

wot.io provides enterprise customers with the powerful ability to interoperate between and across device management platforms, sharing both device data and command and control messages with data services in either direction.

Next Time...

In our final post in this series, we will discuss some of the data services that were used to analyze, visualize, and manipulate the Smart City data coming from the devices managed by IoT-Xtend™ and ARM mbed Device Server, and the London Datastore project.

December 4, 2015 / thingworx / Posted By: wotio team

wot.io's Container as a Service (CaaS) not only enables the deployment of applications such as data services in public or private cloud infrastructure, but also enables deploying a data service directly onto a laptop. This is achieved using the power of Docker and the wot.io Thingworx container.

Docker has built Docker Toolbox to simplify the install on your laptop. Using Docker Toolbox, we can pull and run the wot.io Thingworx container, which we created for provisioning Thingworx instances for the LiveWorx hackathon in 2015. This enables you to deploy a full featured Thingworx instance on your laptop.

The video here demonstrates installing Docker Toolbox, logging into Docker Hub, downloading and running the wot.io Thingworx container. The video has been sped up, but the overall process was approximately 5 minutes.

You saw in the video a great example of deploying a containerized version of Thingworx on a laptop. As Docker containers are portable, the same container can just as easily be deployed in the cloud as it was on my laptop. To do so, simply spin up a cloud VM running Docker, log in to Docker Hub, and pull and run the wot.io Thingworx container.
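A minimal sketch of that cloud deployment, assuming the VM already has Docker installed; the image name and port mapping below are illustrative rather than the published values:

docker login                                  # authenticate against Docker Hub
docker pull wotio/thingworx                   # illustrative image name
docker run -d -p 8080:8080 wotio/thingworx    # port mapping is an assumption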

December 3, 2015 / ARM, ARMmbed, MultiTech, Stream, Smart City, u-blox, NXP / Posted By: wotio team

Last Spring, wot.io teamed up with a number of partners including Stream Technologies and ARM at Mobile World Congress 2015 to demonstrate an IoT Smart City solution combining data from live vehicles moving about the London area with data from the London Datastore.

We already posted the following video, which provides a good overview of what we presented at the event, but we wanted to take this opportunity to do a deeper dive to describe some of the technology behind the demo.

So this will be part 1 of a series of posts where we explore the assembly of an interoperable Smart City solution powered by wot.io and its data service exchange™.

Toward Smarter Cities in the UK

The population of the city of London, UK, is exploding and is expected to reach 9 million people before New York City's does. In light of that prediction, the governments of London and the United Kingdom have begun to lay out plans to utilize digital technologies to become a Smart City, in an effort to help stem and even solve many of the challenges that arise from such a massive and rapid population increase.

In support of that vision, the Greater London Authority established the London Datastore initiative. According to its website the London Datastore was created

as a first step towards freeing London’s data. We want everyone to be able to access the data that the GLA and other public sector organisations hold, and to use that data however they see fit—for free.

London's passenger and road transport system is among the most advanced in the world, and was one of the first smart services that London opened to developers as part of the London Datastore initiative. The result was an unprecedented volume of open data with which to develop smart city solutions.

As part of our smart city application, we were able to find a whole section in the datastore devoted to traffic and transportation. We built adapters to read from this feed into wot.io and route near-real-time data from traffic cameras and road signs to multiple data services.

Smarter Traffic Through Instrumentation

Just one of the many facets to a smarter city initiative is to learn, understand, and make decisions based around traffic patterns. Naturally, analysis and decision-logic require data. Following are a few examples of how wot.io partners are filling these needs with a wide array of ARM mbed-based hardware and software products.

Connecting Devices with Multitech

In order to demonstrate how detailed information about traffic accidents could be used to assist emergency services or even to otherwise manipulate traffic patterns in response, MultiTech placed its MultiConnect® mDots (inexpensive radios using the new Semtech LoRa™ low-power, wide-area RF modulation) inside a remote-controlled model car and drove it around the ARM booth. The car sent sensor info (including x-y-z plane accelerometer readings) to a MultiConnect® Conduit™ gateway using the Semtech LoRa™ low-power, long-range wireless access (LPWA) RF technology in the European 868MHz ISM band spectrum. The Conduit packaged and then sent the sensor data to Stream’s award-winning IoT-X platform.

Connecting Devices with u-blox

Another common requirement for smarter traffic in a connected city is detailed knowledge about the geo-location of devices embedded in vehicles.

The u-blox C027 is a complete IoT starter kit that includes a MAX-M8Q GPS/GNSS receiver and a LISA-C200 CDMA cellular module with SIM card.

As you can see from the photograph, we added an extended GPS antenna to help with satellite reception given that we were going to be using it from inside our urban office building location.

It was easy enough to use the web-based IDE on the ARM mbed Developer Site to build a lightweight embedded C application. The application simply reads the GPS data from the GPS/GNSS receiver on the device, and sends it to a TCP endpoint exposed by the Stream Technologies IoT-Xtend™ API using the cellular modem to connect to a local cellular network. Using the cell network for connectivity makes the system completely mobile, which is perfect for vehicles driving around a city.

Ultimately, the embedded application sends JSON messages to the Stream API looking something like the following:

{
    "sim_no": SIM_ID,
    "quality_idx": QUALITY,
    "gps_time": TIME,
    "dec_latitude": LAT,
    "dec_longitude": LON,
    "number_sat": SATELLITES,
    "hdop": HDOP,
    "height_above_sea": HEIGHT
}

Connecting Devices with NXP

Another ARM mbed device, the NXP LPC1768, was used to demonstrate two-way communications with the wot.io data service exchange™. Ambient temperature was monitored through its on-board temperature sensor and analyzed by business logic, and the results were sent back to the device in the form of specific commands to manipulate the device speaker and LED intensity.

Live Mobile Devices

Last, but certainly not least, the demonstration also included a number of cellular-, satellite-, and LPWA-based mobile devices embedded in vehicles traveling around the city of London in real time. The devices were managed by the Stream IoT-X platform, and telemetry and geo-location messages were communicated to the wot.io operating environment through our WebSocket protocol-based streaming data adapter.

Next Time

Today we took a brief look at the diverse device and open data feed-based sources of smart city data that comprised the wot.io demonstration that we presented earlier this year in Barcelona.

Tune in next time for a closer look at how we managed those devices with device management platforms from ARM and Stream Technologies, and how their data was integrated onto the wot.io data service exchange™.

December 3, 2015 / ICN, 5G, NFV, SDN / Posted By: Kelly Capizzi

With increases in demand for video and next generation services, content needs to live closer to the end user. Therefore, a possible key piece to realizing the full potential of 5G could be the information centric network (ICN). Recently, InterDigital Europe’s Dirk Trossen, principal engineer, sat down with RCR Wireless News’ Jeff Mucci, CEO and Editorial Director to further discuss the role of an ICN.

“An information centric network is a network where the information itself is identified, instead of sending a packet from an IP address...you’re streaming information and the actual information is at the center,” Dirk explains to Jeff. He goes on to talk about the problem that carriers currently face and the approaches being used to solve it: CDNs and overprovisioning. He clarifies that these approaches will not be sustainable when you consider the demands that 5G will create.

Dirk provides Jeff with an example of a potential 5G network model and describes how ICN will fit into the 5G picture. Finally, he provides insight into what he feels will prompt carriers, over-the-top players and other service providers to embrace this new network.

Watch the full discussion below:

In a previous post we demonstrated how the wot.io operating environment can be used to translate between a number of different message protocols.

Today, we will build on that with a concrete example demonstrating how the protocol bridging capabilities and device management integrations of wot.io can actually be used to extend the capabilities of device management platforms to encompass those other protocols!

In particular, we'll show how to take devices that only speak MQTT, manage them with ARM mbed Device Server, which does not speak MQTT, all while maintaining the ability to route notification data to and from the usual full complement of data services on the wot.io data service exchange™. Just for grins, we'll even subscribe to the notification data over a WebSocket connection to show that the protocol conversions can be performed on the "inlet" or the "outlet" side of the data flows.

IoT/M2M Interoperability Middleware for the Enterprise

While the problem statement above may not sound all that interesting at first blush, consider that:

  • the devices are speaking MQTT
  • potentially, to a separate message broker
  • MQTT is a stateful, connection-based protocol that runs over TCP/IP
  • ARM mbed Device Server expects devices to speak CoAP*
  • CoAP is a stateful, connectionless protocol that runs over UDP by default
  • WebSocket is a stateful, connection-based protocol that runs over TCP/IP
  • that gets upgraded from the stateless HTTP protocol that runs over TCP/IP

and it should start to become apparent pretty quickly how impressive a feat this actually is. (* the newest version of ARM mbed Device Server appears to also support HTTP bindings in addition to CoAP, but it's still a protocol impedance mismatch either way)

Why does that matter? Because the real world is made up of multiple device types from multiple vendors. Making sense of that in an interoperable way is the problem facing enterprises and the Industrial IoT. It is also what separates real IoT/M2M solutions from mere toys.

In fact, it would be extremely uncommon for enterprises not to have devices tied to multiple device management platforms. Managing them all separately in their individual silos and performing separate integrations from each of those device management platforms to data services representing analytics or business logic would be an absolute nightmare.

Let's see how wot.io's composable operating environment makes it possible to solve complex, enterprise-grade interoperability problems.

Building Automation and Instrumentation with B+B SmartWorx Devices

Recently, one of our partners was showing off how they had outfitted their office with a bunch of IoT devices. And of course (as always happens any time you get more than one engineer together), it wasn't long before we had their devices connected up to the wot.io operating environment so they could analyze their IoT data and make it actionable.

The actual devices they were using were a collection of Wzzard Wireless Sensor nodes connected to Spectre Industrial Routers acting as gateway devices. These devices from wot.io partners B+B SmartWorx not only comprise a rock-solid, industrial-grade IoT platform, but they also speak MQTT—a common IoT/M2M device level communication protocol. As such, they were perfect candidates for our protocol interoperability demonstration.

Overview

The following diagram represents a high-level overview of our interoperability demonstration:

  • device [1] is located outside our partner's engineering area, and has a thermocouple, voltmeter, and x,y,z-axis motion sensor
  • device [2] is located in our partner's server room, and has a thermocouple, voltmeter, and two analog inputs corresponding to ambient temperature and relative humidity
  • device [3] is located in our partner's front conference room and has a thermocouple, voltmeter, and two digital inputs corresponding to motion sensors

For demonstration purposes, we have only modeled a few of the many devices and sensors that are installed on our partners' premises. All three of these physical devices communicate with a local MQTT broker [4] to which our MQTT adapter [5] is subscribed. An example message looks something like

TOPIC: BB/0013430F251A/data,  
PAYLOAD: {"s":6,"t":"2015-11-30T15:18:33Z","q":192,"c":7,"do2":false,"tempint":48.2}  

In addition, we have simulated a fourth device, [6], just to demonstrate how the wot.io operating environment can also act as an MQTT broker [7] in order to support scenarios where a separate broker does not already exist.
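As a rough sketch of what that simulated device does, a reading in the same format as the example above could be published with the stock mosquitto_pub client; the broker hostname and device ID here are illustrative:

# publish one simulated sensor reading to the wot.io-hosted MQTT broker
mosquitto_pub -h mqtt.demos.wot.io -t "BB/SIMULATED01/data" \
  -m '{"s":6,"t":"2015-11-30T15:20:00Z","q":192,"c":7,"do2":false,"tempint":48.2}'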

Irrespective of the originating device, sensor data from these devices is routed through a series of adapters that ultimately:

  • model the data as logical resources on the wot.io message bus,
  • register "virtual devices" in ARM mbed Device Server [14] to represent the original devices
  • close the loop by subscribing to notifications for the new "virtual devices"
  • route data to wot.io data services like bip.io

Modeling Device Data

One of the more powerful capabilities of the wot.io operating environment is its ability to model device data streams as one or more loosely coupled logical layers of connected data resources. These data resources form a directed graph of nodes and edges representing the sources and destinations of connected device data streams and the processes (or data services) that operate upon them. One data resource is transformed by a data service into another resource, which can, in turn, serve as the input for one or more other data services.

A result of this architecture is that one can target any data resource as the source of any data service.

For our present demonstration, this means, for example, that we can represent the physical devices [1-4] as "virtual devices" in a device management platform of our choosing—ARM mbed Device Server in this case—whether or not it even supports devices of that make and manufacturer.

In fact, at a high level we will perform the following mappings between different representations of the device topology:

  • physical device data represented as MQTT topics of the form BB/<deviceId>/data are mapped to
  • logical wot.io data resources of the form /bb/<deviceId> which are further mapped to
  • logical wot.io data resources of the form /bb-data/<deviceId>.<sensorType>.<virtualDeviceId> which are registered and managed as
  • ARM mbed Device Server endpoints and resources of the form <deviceId>, <sensorType> which are then subscribed to and made available again to wot.io data services as
  • wot.io data resources of the form /bb-stream/<deviceId>.<sensorType>

Now that's some serious interoperability. Let's zoom in even a little closer, still.

Managing Virtual Devices with ARM mbed Device Server

Recall that earlier we described significant impedance mismatch between the protocols involved in this interoperability exercise. Let's examine the other adapters involved and see how they can be cleverly combined to resolve our impedance issues and allow us to manage virtual devices in ARM mbed Device Server.

Picking up where we left off earlier in reference to our architecture diagram,

  • adapters [9], [10], and [11] compose and send messages requesting the creation of new logical wot.io data resources. The adapter [9] maintains or references a mapping of virtual CoAP endpoints (more on these later) provisioned for each device. Specifically, an example message sent to the HTTP request adapter [10] might look like this
[
  "request", 
  "POST", 
  "http://demos.wot.io/demos/bb-data/tempint.virtual7/tempint.virtual7.#",
  "",
  { "Authorization": "Bearer <token>" }
]
  • the same messages emitted from [8] that are used to create new resource bindings are also routed to a simple controller [9] that composes a message for the CoAP adapter [13] to send to ARM mbed Device Server [14]. This CoAP message registers a virtual device and supplies a custom UDP context endpoint [15] as the location of said virtual device. (Notice that our virtual CoAP device is actually spread across several different adapters in the wot.io operating environment!) An example CoAP pseudo-message (since CoAP is a binary protocol, I'll save you the raw tcpdump output) is basically
POST: /rd?ep=0013430F251A&et=BBSmartWorx%20Sensor&con=coap://172.17.42.1:40010&d=domain  
BODY: </tempint>;obs  

In order to maintain the registration, [13] will also send CoAP registration update messages as required by ARM mbed Device Server once the initial registration has occurred.

With just these few adapters, we have successfully used CoAP to register virtual devices representing our real MQTT-based devices in ARM mbed Device Server. You can see they now appear in the endpoint directory of mbed Device Server's administration interface:

Subscribing to Virtual Device Notifications

Now that our devices have been virtualized in our device management platform of choice, we can treat it as any other fully integrated wot.io data service. That is, we could choose to subscribe to notifications for one or more of the device resources and route them to one or more data services.

  • first, we would need to subscribe to mbed Device Server notifications by sending a message to the mbed Device Server adapter [17]. For our example, we just used a curl call [16] to the HTTP adapter [11] for simplicity.
  • the mbed Device Server adapter [17] will subscribe to the indicated endpoint and resource
  • in response, ARM mbed Device Server [14] will send a CoAP GET message to its registered endpoint (which you will recall is one of the CoAP adapters [15] that were provisioned by [9] and registered by [12]). These CoAP messages between mbed Device Server [14] and the CoAP adapter [15] look something like this (again resorting to pseudo-code to convey the binary message details):
GET /tempint  
OBSERVE=0, TOKEN: <token>  

NB: observe=0 means observable is true! Also, notice that the device identifier is missing and only the resource name is specified. This is because back in [9], we mapped the stateful, UDP-based endpoint for a specific physical device to a specific virtual CoAP adapter [15]--the one that is receiving this GET request.

The response sent back to ARM mbed Device Server [14] from the CoAP adapter [15] would look something like this:

CODE: 2.05  
TOKEN: <token>, OBSERVE=<observationSequence>  
BODY: 42.1  
  • next, ARM mbed Device Server sends these notifications to its registered callback: namely, the HTTP adapter [11]
  • after we route the messages through one more simple transformation [18] to return the deviceId and sensorId metadata to the logical wot.io resource path,
  • we can consume the device notifications through the WebSocket adapter [19] and/or route them on to other data services like bip.io [22] for further transformation or analysis.

Routing to wot.io Data Services

Now that our notification data is once again represented as a wot.io data resource, we can route it to any of the services in the wot.io data service exchange™.

For example, if we create a "bip" in the web API workflow automation tool bip.io, we can pick off specific attributes and send them to any of a number of other third-party service integrations. See the steps below for appending rows to a Google Sheet spreadsheet, where still more analysis and data interaction can occur; column D in our spreadsheet contains a custom base64 decoding function written in JavaScript.

Conclusion

Today, we have demonstrated an extremely powerful capability afforded by the wot.io operating environment whereby we can combine several protocol and data service adapters to extend device management services from IoT platforms like ARM mbed Device Server to devices that speak unsupported protocols like MQTT.

In a future post, we will build on this concept and show how we can turn any wot.io data resource into a "virtual device" managed by a device management platform—even ones that don't originate from devices at all!

It is through powerful capabilities like these that wot.io can truly claim to offer real-world IoT/M2M interoperability middleware for the enterprise!

December 1, 2015 / M2M, oneM2M, IoT / Posted By: Kelly Capizzi

InterDigital’s Jim Nolan, EVP, InterDigital Solutions, recently provided an exclusive interview to m2mnow, a leading global IoT news source. In the interview, Jim discusses the oneM2M standard along with InterDigital’s role in the Internet of Things (IoT).

 “You can look at oneM2M as a ‘standard of standards’,” states Jim in the interview. He explains that the oneM2M standard is so important because it takes a truly horizontal approach and applies across different industry verticals. In fact, Jim cites this as one of the main reasons for InterDigital’s involvement with oneM2M. The company has concentrated its IoT efforts on standards-based solutions and the suite of horizontal services necessary to enable multiple IoT applications across multiple verticals.  

To close the interview, Jim extends advice to companies that are deploying IoT services and solutions. He suggests that they focus on standards-based solutions that have an eco-system of multiple solution providers in order to have multi-vendor interoperability.  

Want to hear more from Jim? Read the full interview here.  

November 24, 2015 / gaming, chess, android, mongodb, adapters, scriptr / Posted By: wotio team

This post is part 2 of our series on connecting gaming devices with the wot.io data service exchange™.

Routing Gameplay Event Data with wot.io

In the last post we saw that after retrofitting an open source chess application to connect via PubNub, connecting data services was easy since wot.io already has a PubNub adapter. No further changes were required to the application source code to connect to wot.io and send game play events to its data services or connected devices.

In fact, the only thing left to do was decide which wot.io data services to use in conjunction with our gaming system.

Storing Game Statistics with MongoDB

The data format we created for representing game play events uses a JSON variant of Portable Game Notation, or PGN. An example move message might look something like this:

{ 
  "gameId": "123", 
  "game": "continue",
  "user": "wotiosteve", 
  "move": "e2-e4", 
  "moveSeq": 1 
}

and a game ended message might look something like this:

{
  "gameId": "123",
  "black": "wotiosteve",
  "white": "wotiojim",
  "moves": [ [ "f3", "e6" ], [ "g4", "Qh4#" ] ],
  "winner": "wotiosteve",
  "gameResult": "0-1",
  "game": "end",
  "date": "2015.09.21"
}

Since we wanted to store every move made by every player in every game played across our system, we looked at the datastore options available in the wot.io data service exchange. For simplicity, we opted for the No-SQL flexibility of MongoDB and its native support for JSON documents like our game messages.

We chose the PGN format for our moves, but there are other formats that represent chess moves as well. Since MongoDB doesn't require you to define a fixed schema, we could easily add new formats in the future without requiring changes. Whether storing chess moves or IoT data from many different devices or device management platforms, this flexibility makes MongoDB a nice choice for storing data whose format can change over time.

Quick Review on Adapters

As a review, wot.io adapters can be configured to listen for messages on one bus resource and send any messages they generate to some other bus resource. Typically, these resource paths are set declaratively and stored in the wot.io configuration service, where they are represented as environment variables that get passed into instances of the Docker containers which comprise the running data services or adapters.

What that means is that while we can, of course, connect directly to our MongoDB datastore using any of the available drivers or clients, we can also interact with it simply by sending messages on the wot.io bus to and from a MongoDB adapter.
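As a minimal sketch of how an adapter picks up those bindings at deployment time, an adapter container might be launched with its bus resources passed in as environment variables; the variable names, resource paths, and image name below are illustrative, not the actual configuration-service keys:

# illustrative only: bind a MongoDB adapter instance to a source and destination bus resource
docker run -d \
  -e WOT_SOURCE_RESOURCE=/chess/games \
  -e WOT_DEST_RESOURCE=/chess/stats \
  wotio/mongodb-adapter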

Inserting data into MongoDB

The wot.io adapter for MongoDB makes it possible to insert and query a MongoDB datastore by sending messages on the wot.io bus.

Since each MongoDB adapter can be configured to connect to a specific database and collection in a particular instance of the MongoDB data service, to insert a document one need only route a JSON message that looks like this

[ "insert", {some-JSON-object} ]

to the adapter, et voila, the document will be asynchronously inserted into the configured MongoDB collection.
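For instance, routing the game-ended message from earlier into the datastore would mean putting a message along these lines on the bus (abbreviated here for readability):

[ "insert", { "gameId": "123", "game": "end", "winner": "wotiosteve", "gameResult": "0-1" } ]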

Reading data from MongoDB

Similarly, we can fetch data by routing a JSON message that looks like this

[ "find", {some-BSON-query} ]

to the same adapter, and the query result will be sent to the destination resource to which the adapter has been bound. (There are additional options for controlling the number of documents in the response, etc., but we'll keep it simple for this example.)

We can also send messages requesting aggregate queries. For example, the message our chess example sends to retrieve statistics for a given user looks like this:

[
  "aggregate",
  [
    {
      "$match": { 
        "gameResult": { "$exists": true },
        "$or": [
          { "white": "wotiosteve" },
          { "black": "wotiosteve" }
        ]
      }
    },
    {
      "$project": {
        "win": {
          "$cond": [ 
            { "$eq": [ "$winner", "wotiosteve" ] }, 
            1, 
            0 
          ]
        },
        "lose": {
          "$cond": [
            { 
              "$and": [
                { "$ne": [ "$winner", null ] },
                { "$ne": [ "$winner", "wotiosteve" ] }
              ]
            },
            1,
            0
          ]
        },
        "draw": {
          "$cond": [
            { "$eq": [ "$winner", null ] },
            1,
            0
          ]
        },
        "user": "$user"
      }
    },
    {
      "$group": {
        "_id": null,
        "total": { "$sum": 1 },
        "win": { "$sum": "$win" },
        "lose": { "$sum": "$lose" },
        "draw": { "$sum": "$draw" }
      }
    },
    {
      "$project": {
        "user": { "$concat": [ "wotiosteve" ] },
        "total": 1,
        "win": 1,
        "lose": 1,
        "draw": 1,
        "action": { "$concat": [ "statistics" ] }
      }
    }
  ]
]

Clearly, MongoDB query documents can get rather complex—but they are also very expressive. And wot.io makes it easy for other data services to interact with our MongoDB datastore.

Composable Data Services

What do we mean by making it easy for other data services to interact with our MongoDB datastore? Well, for example, we might employ a data service like scriptr, which allows us to route messages to custom JavaScript logic endpoints.

Let's say that we have coded an algorithm to calculate the minimum number of moves that could have been used to beat an opponent. We can route our game-end messages (like the one shown above) to this data service.

Note that scriptr does not have to possess any MongoDB integration. Nor does the script even have to know that its response might be routed to MongoDB. The "insert" message could just as easily be processed by a different data service, say, Riak. Or both, for that matter!

This is what we mean when we say that wot.io's data routing architecture allows for loosely coupled, composable solutions.

Next Time...

Next time we'll take a look at how we can connect another wot.io data service, Pentaho, with our new MongoDB datastore to produce some custom reports.

November 23, 2015 / AT&T, M2X, Elasticsearch, Kibana, WebSocket, MTA, GTFS, Digital Signage, Metro, riak / Posted By: wotio team

As with so many industries, the Internet of Things is changing the face of digital signage. With so much real-time, contextual data available, and with the ability to apply custom business logic and analysis in the cloud and send signals back to connected devices for a closed feedback loop, is it any surprise that the retail and advertising markets are taking notice? According to the International Data Corp.:

Digital signage use in retail outlets will grow from $6.0 billion in 2013 to $27.5 billion in 2018

Companies are already anticipating this trend. For example, our partners at B+B Smartworx have designed an entire vertical digital signage solution centered around their Wzzard Intelligent Edge Node and Spectre Cellular/Internet Gateway devices.

In this post, we use the wot.io data service exchange™ to combine the device management capabilities of the AT&T M2X platform with publicly available NYC subway data and an instance of Elasticsearch to quickly build out an example end-to-end digital signage solution.

About the MTA

According to its website, the MTA, or

Metropolitan Transportation Authority is North America's largest transportation network, serving a population of 15.2 million people in the 5,000-square-mile area fanning out from New York City through Long Island, southeastern New York State, and Connecticut.

And since

MTA subways, buses, and railroads provide 2.73 billion trips each year to New Yorkers – the equivalent of about one in every three users of mass transit in the United States and two-thirds of the nation's rail riders. MTA bridges and tunnels carry more than 285 million vehicles a year – more than any bridge and tunnel authority in the nation.

all those eyeballs seemed like a logical opportunity for our imaginary IoT digital signage company.

MTA integration

In order for our digital signage company to maximize the revenue opportunity that these subway travelers represent, we realized that it would need to know when a given sign is approaching a given train station. That way, the ads displayed (and the advertising rates we charged!) could be contextualized for attractions at or near each stop in real time. In other words, we wanted to create an advertising-driven business model where we could sell ad-spots on our digital signs just as if they were commercials on television, displaying commercials for companies near specific train stops as the trains approached those stops.

Rather than spec out a costly greenfield hardware solution with additional sensors like GPS to enable tracking of the signs on the trains (especially considering that satellite reception in the subway would be far from ideal much of the time), we decided to support a hypothetical existing installed signage base and infer the train position from the data available through the MTA Real-Time Data Feeds service, a data service which was, conveniently enough, already available as a feed on the wot.io data service exchange.

Interested readers may want to peruse the GTFS-realtime Reference for the New York City Subway for all the gory details, but we're sure that since you're reading this blog you have already realized that composing a solution out of existing data service integrations is much better than writing each of them yourself from scratch!

(Of course, taking web-based data feeds like the MTA's and combining them with IoT device data is nothing new to wot.io. For another example in the Smart City space, see how we combined traffic disruption, traffic sign, and traffic signal data from the London Datastore with ThingWorx, ARM mbed Device Server, Elasticsearch, and several ARM-based devices from u-blox, NXP, and Multitech at Mobile World Congress 2015.)

About AT&T M2X

Now that we decided that we would be modeling our digital signs as a virtual combination of web-based data and actual device connectivity, we needed to select a device management platform to provision and manage the digital signs themselves.

AT&T's M2X platform provides time-series data storage, device management, message brokering, event triggering, alarming, and data visualization for industrial Internet of Things (IoT) products and services. As such, it seemed like a good fit to function as either a device management platform (DMP) or data service provider (DSP), to use wot.io terminology. Throughout the balance of this post, we will show how we used it as both.

About the wot.io AT&T M2X adapter

wot.io's ability to integrate to multiple device management platforms is certainly no surprise to readers of this blog. For example, we have recently seen posts using ARM mbed Device Server, oneMPOWER, ThingWorx, DeviceHive, PubNub, Firebase, and even bip.io, just to name a few!

In a similar vein, we created an adapter to integrate M2X with wot.io. In order to streamline the process of getting connected to M2X, we also made it available through the M2X developer website.

This means that we can connect and manage devices using AT&T's M2X platform and can allow them to communicate with other data services using the wot.io data service exchange--exactly the combination that we were after to allow us to model our digital signs as hybrid, or virtual devices that combine physical device data with other data feeds like the one from the MTA.

Technical Overview

With the major building blocks for our digital signage solution thus identified, we were able to sketch out an overview of the system architecture:

We decided to configure the GTFS adapter to fetch MTA data every 30 seconds using a scheduler. We would also provision a logical resource for each sign device to represent the current advertising lineup for that sign.

We were pleased to note that by modeling the data as virtual resources in the wot.io operating environment, we would be able to easily route data for specific devices or groups of devices to specific data services. For example, calculating advertising rates for different signs could be as simple as routing them to different "versions" of a data service implementing the pricing logic. In other words, the complex problem of dynamically changing the advertising lineups based on MTA updates occurring at different frequencies had been reduced to a series of simple routes and transformations of data streams within the wot.io operating environment.

While we were thinking along these lines, it occurred to us that we'd probably also want to track and report on the number of times each ad spot was displayed, on which sign, on which train, at which location, and so on. It was easy to simply route all the same device notifications to be indexed in an instance of Elasticsearch. We also needed a datastore to house the advertising lineups for each sign as they were sold, so we opted for the [No-SQL](https://en.wikipedia.org/wiki/NoSQL) key-value store Riak from Basho. And we did all this without breaking or even affecting the rest of the solution.

Such is the power of the wot.io data modeling and routing facilities: it enables solution providers to decouple the logical model of the data, and the services operating on various subsets of that data, from the implementation of the data services themselves. Nice.

Digital sign provisioning application

For the purposes of our simple example, a pair of web applications were created to illustrate how custom applications can be built around device provisioning and signage data visualization. In a real scenario, we'd probably leverage another wot.io data service like ThingWorx or JBoss to build out our user interfaces. Or, for that matter, we might use something like the iOS or Android libraries to build a mobile app.

The example train provisioning application uses a WebSocket connection to the wot.io operating environment to listen for and display the trains currently in service based on the MTA data feed.
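As a rough sketch of what that connection looks like from the outside, any generic WebSocket client can subscribe to the same resource and watch the train updates stream by; the endpoint URL here is purely illustrative:

# wscat is a generic command-line WebSocket client (npm install -g wscat)
wscat -c "wss://demos.wot.io/mta/trains"    # illustrative endpoint
# each message received is a JSON description of a train currently in service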

Using an interface like this one, an operator from our digital signage company could provision which of our signs (if any) reside on a given train

resulting in the following display:

indicating in this case that a sign with a device ID of 1464a49ea6f1862bf6558fdad3ca73ce is located on train $4 1337+ UTI/WDL. In order to accomplish this feat, the train provisioning application sent a message back to the wot.io operating environment over a WebSocket connection to request that a train sign device be provisioned for the given train. A quick review of our architecture diagram above will show that, in turn, that message was used to register the sign as a device in M2X.

Additionally, now that the device has been provisioned in M2X, any advertising lineups for this train and its next stop can be looked up in Riak. For example, in the following screenshot you can see that there are lineups provisioned for two signs on the 01 1450 SFY/242 train, and for one sign on the 01 1454 242/SFY train.

M2X as a device management platform

Recall that we said that our train provisioning application sent a message requesting that we register a sign as a device in M2X. You can see from the following screenshot that a sign device was, indeed, registered successfully (the device ID we used in M2X is a composition of the train id and sign id)

Now, as new messages with updated advertising lineups are determined by the Request Signage Data adapter and sent from the wot.io operating environment, we can see that they are, indeed, being received in the M2X interface for this sign:

Displaying the dynamic ads on our digital signs

In lieu of actual digital signs, we created a simple simulator application to demonstrate how the advertising lineup for a sign on a given train changes as it approaches each station. Once again, the application leverages wot.io's WebSocket protocol adapter for near real-time notifications to keep the application's lineup synchronized just like a digital sign would.

For example, we can see that the lineup of ads that have been sold for the first sign on train #01 1454 242/SFY is one list while stopped at station #106S

but changes once the train leaves the station and is approaching station #107S

M2X as a data service provider

Besides its obvious utility for managing and provisioning devices, as described above, the M2X platform can also be used as a powerful data service.

For example, we could create dashboards for a simple overview of the advertising statistics for a given digital sign device

or we could display the number of ad impressions delivered, or number of distinct advertisers represented in a time-series graph

Of course, we have only scratched the surface in our simple example. There are many more features of the M2X platform. And they become even more powerful when combined with the wot.io data service exchange™.

Riak and Elasticsearch

We have discussed Riak and Elasticsearch in previous posts, so you can read more there to see details and other applications of those data services.

Flexibility and Future-proofing

This prototype of a dynamic signage system demonstrates how a complex solution for a real, Industrial IoT vertical can be composed from a relatively small number of existing data services and declarative routing logic in the wot.io operating environment. This approach shows the flexibility and choice of working with an exchange containing multiple data services. You add components as you need them based on their fit and functionality.

And new components can be added in the future with only the work needed to add the new service. Get a new source for ad inventory? Add the API or data service as a provider with an adapter. Want to manage how much inventory comes from different providers? Add a data service that allows you to control the flow of ads based on the rules you set.

The wot.io data service exchange gives you choice during your initial design and provides flexibility going forward, allowing you to continue to get the most from your initial investment while still adding new products down the road.

November 23, 2015 / binary, video, audio, images, Nervve, Datascription / Posted By: wotio team

The majority of sample IoT applications available online, including many of our own, use JSON, XML, or some other simple, human-readable, text-based format. It makes sense because it's easy to show the data in various parts of the application, it's easy to create new demo data, and it's easy to work with JSON data in server-side data services because it's a well-supported format for web-based services.

But in the real world of connected devices, not all IoT data will look like JSON or XML. System designers are already concerned about the bandwidth use of the oncoming wave of devices and are advocating leaner, more compact formats. In his keynote at ARM TechCon 2015 earlier this month, Google's Colt McAnlis encouraged developers to look at lighter-weight binary formats like FlatBuffers or Protocol Buffers.

And of course, an increasing number of IoT solutions will incorporate audio and video streams—either as primary or as additional sources of IoT data. For example, a security monitoring system, or an advanced sensor attached to machinery watching product output as part of an industrial quality control system, both involve the collection, transmission, and analysis of such streams.

The following video demonstrates how binary data can be transmitted across and manipulated within the wot.io operating environment.

Properly working with binary data isn't trivial, and it's an important aspect of an enterprise-caliber data routing system. wot.io adapters and the wot.io operating environment work independently of the payloads their messages carry, and can therefore accept and route messages with payloads of arbitrary format. Allowing adapters to flexibly treat message payloads either as opaque blobs or as transparent, structured data proves to be immensely valuable in real-world industrial IoT scenarios.

In fact, the wot.io data service exchange™ has a number of data services like Nervve and Datascription that provide search, transcription, and metadata extraction from binary audio and video streams. If you'd like to learn more about these and other data services, contact wot.io today!

November 23, 2015 / sempercon, flowthings, bipio, circonus, webinar / Posted By: wotio team

In this webinar, wot.io hosts two of our partners, Sempercon, an IoT systems integrator, and Flowthings, a data service provider in the wot.io data service exchange™. Sempercon describes how they transformed Go2Power’s battery backup unit into a connected IoT system. Featured with wot.io's data service exchange™ and operating environment are Sempercon’s CirrusCon device management and real-time stream processing platform and a pair of wot.io data services: bip.io for flexible web API automation, and Circonus for powerful monitoring and analytics.

November 20, 2015 / Posted By: wotio team

wot.io just returned from ARM TechCon 2015, where we presented our data service exchange for connected device platforms. One of the major differentiators for wot.io is our IoT platform interoperability. There are hundreds of IoT platforms, and new ones are announced every week; Oracle announced yet another IoT platform at TechCon. In one talk there was a prediction that a home owner will have over 20 different IoT connected platforms in the home and car, which equates to 20 different apps for the home owner to deal with. wot.io offers the ability to aggregate and unify IoT platforms, whether they be industrial, enterprise, or home.

As an interoperability example, we showcased ARM mbed Device Server and PubNub together with several data services to augment the aggregated connected device data. We recorded a video of how attendees can connect an ARM-based device to wot.io data services using PubNub. On the bottom of PubNub's IoT page and on their developer page you can see the breadth of libraries available. If you can get your ARM-based device connected using one of those, you can access the bundle of data services we have made available.

When you go through the registration process, wot.io will provision accounts and some sample resources in bip.io and scriptr.io. After logging in, you can add your account details to activate PubNub and Circonus. Here's just one example of what we can do!

November 19, 2015 / riak, basho, data service exchange, docker / Posted By: wotio team

Today, we wanted to give you a peek under the hood and walk through an example of how wot.io and our partners make wot.io data services available in the wot.io data service exchange. In particular, we will show how to create a Riak data store cluster. Naturally, we will be doing this using Docker since Docker containers make up the building blocks of the wot.io operating environment™.

Riak is a key-value, No-SQL data store from Basho. And it's one of the data services available in wot.io's data service exchange™. Let's see how it is done.

Initializing a Riak cluster with Docker

Often, the first step in getting a service to run as a wot.io data service is to create a Docker container for it. In this case, much of our work has already been done for us, as a suitable recipe is already available on the public Docker Hub.

For the purposes of this example, we will be instantiating two Docker containers, each with its own Riak instance. Once we confirm that they are running successfully as independents, we will join them together into a cluster.

To get started, we pull the latest Riak Docker image from the devdb/riak repository on the public Docker registry.

$ docker pull devdb/riak:latest

It helps to pre-download the Riak docker image into the local Docker cache.

Starting up riak1 instance

Once the image has been stored in the Docker cache, we are now ready to kick off the first Riak instance.

$ mkdir ~/riak_storage; cd ~/riak_storage
$ docker -H unix:///var/run/docker.sock run --dns 8.8.8.8 --name riak1 -i -d -p 18098:8098 -p 18087:8087  -v `pwd`/data:/var/lib/riak -t devdb/riak:latest

The Riak initialization will take a few minutes to complete. Once it's finished, we will be able to check that the instance is empty using the handy HTTP REST interface that Riak exposes:

# querying data store on riak1 
$ curl -s localhost:18098/buckets?buckets=true | jq .
{
  "buckets": []      # 0 items
}

This shows that there are no buckets in the data store currently. That's ok. We'll populate the data store in a minute.

Starting up riak2 instance

Let's go ahead and instantiate the second Riak container as well

$ cd ~/riak_storage
$ docker -H unix:///var/run/docker.sock run --dns 8.8.8.8 --name riak2 -i -d -p 28098:8098 -p 28087:8087  -v `pwd`/data2:/var/lib/riak -t devdb/riak:latest

and confirm that it, too, is empty

# querying data store on riak2
$ curl -s localhost:28098/buckets?buckets=true|jq .
{
  "buckets": []      # 0 items
}

Injecting data into riak1 data store

Now that both Riak instances are up and running, we are ready to populate one of the instances with some test data. Once again, we can use the curl tool to place data on riak1 using the HTTP REST interface.

# populating with for loop
for i in $(seq 1 5); do
  curl -XPOST -d"content for testkey-${i}" \
    localhost:18098/buckets/testbucket/keys/testkey-${i} 
done

Checking contents on riak1

Now that it has some data, querying riak1 should confirm for us that our POSTs had been successful

# querying data store on riak1
$ curl -s localhost:18098/buckets?buckets=true | jq .
{
  "buckets": [
    "testbucket"      # 1 item
  ]
}

We found the Riak bucket named 'testbucket' that we created earlier. Showing what's inside 'testbucket' we can see:

$ curl -s localhost:18098/buckets/testbucket/keys?keys=true | jq .
{
  "keys": [
    "testkey-1",
    "testkey-5",
    "testkey-4",
    "testkey-2",
    "testkey-3"
  ]
}      # 5 keys

Querying one particular key, we also have:

$ curl -s localhost:18098/buckets/testbucket/keys/testkey-5
content for testkey-5

Meanwhile, riak2 remains empty...

We can check that the data store on riak2 hasn't been touched.

# querying data store on riak2 again
$ curl -s localhost:28098/buckets?buckets=true|jq .
{
  "buckets": []
}

So far, the riak2 instance remains empty; in other words, we have two independent Riak data stores. But we wanted a Riak cluster...

Joining the two Riak instances into a cluster

We are now ready to join the two Riak instances, but before we do, we'll have to collect some information about them. We need to find the IP addresses of each of the containers.

To confirm the status of the Riak instances, we can check the member-status of the independent instances. This command happens to also tell us the container IP addresses. We can run member-status using the docker exec command for riak1:

# checking member-status on riak1
$ docker exec riak1 riak-admin member-status
============================ Membership =============================
Status     Ring    Pending    Node
---------------------------------------------------------------------
valid     100.0%      --      'riak@172.17.5.247'   # 1 result
---------------------------------------------------------------------
Valid:1 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

and again for riak2:

# checking member-status on riak2
$ docker exec riak2 riak-admin member-status
============================ Membership =============================
Status     Ring    Pending    Node
---------------------------------------------------------------------
valid     100.0%      --      'riak@172.17.5.248'   # 1 result
---------------------------------------------------------------------
Valid:1 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

Noting the IP addresses (for riak1: 172.17.5.247 and for riak2: 172.17.5.248), we can proceed to join the riak2 instance onto the riak1 instance. To do so, we will run three riak-admin cluster commands: join, plan, and commit.

The cluster join command stages a join request between the two nodes.

$ docker exec riak2 riak-admin cluster join riak@172.17.5.247
Success: staged join request for 'riak@172.17.5.248' to 'riak@172.17.5.247'

The cluster plan command reports the staged cluster changes.

$ docker exec riak2 riak-admin cluster plan
========================== Staged Changes ===========================
Action         Details(s)
---------------------------------------------------------------------
join           'riak@172.17.5.248'
---------------------------------------------------------------------

NOTE: Applying these changes will result in 1 cluster transition

###################################################################
                   After cluster transition 1/1
###################################################################

============================ Membership =============================
Status     Ring    Pending    Node
---------------------------------------------------------------------
valid     100.0%     50.0%    'riak@172.17.5.247'
valid       0.0%     50.0%    'riak@172.17.5.248'
---------------------------------------------------------------------
Valid:2 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

WARNING: Not all replicas will be on distinct nodes

Transfers resulting from cluster changes: 32
  32 transfers from 'riak@172.17.5.247' to 'riak@172.17.5.248'

And finally, the cluster commit command applies the staged changes.

$ docker exec riak2 riak-admin cluster commit
Cluster changes committed

Once you see this message, the two data stores will begin the cluster building process. The information on the two data stores will start to be synced.

Confirming the data stores are clustered correctly

Now we can check the cluster status. If we run member-status immediately after the commit, we will see the membership ring in this transitional state:

$ docker exec riak2 riak-admin member-status
=========================== Membership ============================
Status     Ring    Pending    Node
---------------------------------------------------------------------
valid     100.0%     50.0%    'riak@172.17.5.247'
valid       0.0%     50.0%    'riak@172.17.5.248'
---------------------------------------------------------------------

After distribution time

Since riak1 was populated with only our test entries, the distribution won't take long. Once the distribution is finished, the clustering will be complete. You will see:

$ docker exec riak2 riak-admin member-status
============================ Membership =============================
Status     Ring    Pending    Node
---------------------------------------------------------------------
valid      50.0%      --      'riak@172.17.5.247'
valid      50.0%      --      'riak@172.17.5.248'
---------------------------------------------------------------------
Valid:2 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

Checking contents on riak2

Now that the distribution has completed, listing the buckets from riak2 will show the cloned dataset from the data store riak1.

# querying the buckets (now) on riak2
$ curl -s localhost:28098/buckets?buckets=true|jq .
{
  "buckets": [
    "testbucket"
  ]
}

And querying the testbucket shows our keys, as expected:

# querying the buckets (now) on riak2
$ curl -s localhost:28098/buckets/testbucket/keys?keys=true|jq .
{
  "keys": [
    "testkey-3",
    "testkey-4",
    "testkey-2",
    "testkey-5",
    "testkey-1"
  ]
}

And of course, querying one of these keys, we get:

# querying the key (now) on riak2
$ curl -s localhost:28098/buckets/testbucket/keys/testkey-5
content for testkey-5

Note that the results from riak2 are the same as those from riak1. This is a basic example of how Riak clustering works and how a cluster can be used to distribute and replicate the data store.

Encapsulating the Riak cluster as a wot.io data service

Now that we have the ability to instantiate two separate Riak instances as Docker containers and join them together into a single logical cluster, we have all the ingredients for a wot.io data service.

We would simply need to modify the Dockerfile recipe so that the riak-admin cluster join, plan, and commit commands are run when each container starts up (a sketch of such a startup script follows the list below). While this naïve mechanism works, it suffers from a couple of drawbacks:

  • Each cluster node would require its own Docker image because the startup commands are different (i.e., one node's commands have riak1 as "source" and riak2 as "target", while the other node's commands are reversed).
  • The IP addresses of the Riak nodes are hard coded, dramatically reducing the portability and deployability of our data service.
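
To give a feel for what that startup automation might look like, here is a minimal sketch of an entrypoint script for the joining node. It is hypothetical: it assumes the seed node's name is passed in via an environment variable called RIAK_SEED and that the local Riak service has already been started inside the container.

#!/bin/bash
# Hypothetical entrypoint sketch for the joining node.
# Assumes RIAK_SEED holds the seed node name, e.g. riak@172.17.5.247,
# and that the local riak service is already running.

# wait until the local node responds to pings
until riak ping; do sleep 1; done

# stage the join against the seed node, review the plan, then commit it
riak-admin cluster join "$RIAK_SEED"
riak-admin cluster plan
riak-admin cluster commit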

There are other details in making a data service production ready. For example, a production data service would probably want to expose decisions like cluster size as configuration parameters. wot.io addresses these concerns with our declarative Configuration Service for orchestrating containers as data services across our operating environment. To complete the system, we would also add adapters to allow any other wot.io service to communicate with the Riak service, sending or querying data. But these are other topics for another day.

Conclusion

Today we've taken a brief peek under the hood of creating a wot.io data service. Thankfully, most customers would never encounter any of the complexities described in this post, because wot.io or one of its partners has already done all the heavy lifting.

If you are interested in making your data service available on the wot.io data service exchange, check out our partner program and we'll help get you connected to the Internet of Things through an interoperable collection of device management platforms and data services.

November 18, 2015 / gaming, chess, android, pubnub, wot.io, IoT / Posted By: wotio team

This is the first in what will be a series of posts describing how to connect gaming devices with related data services in the wot.io data service exchange™.

A marriage made in heaven

It should come as no surprise that gaming devices, whether of the console, mobile, or even maker variety, more than qualify as things in the Internet of Things. Nor should it be surprising that engineers love to play games. (I mean, how else would the world have come up with something like this?)

So in a way, it was only a matter of time before we got around to a project that combined the two.

It all started with a game

In his essay "The Morals of Chess" (1750), Benjamin Franklin wrote:

The Game of Chess is not merely an idle amusement; several very valuable qualities of the mind, useful in the course of human life, are to be acquired and strengthened by it, so as to become habits ready on all occasions; for life is a kind of Chess, in which we have often points to gain, and competitors or adversaries to contend with, and in which there is a vast variety of good and ill events, that are, in some degree, the effect of prudence, or the want of it.

As much as we'd like to claim that the genesis of our gaming application series was rooted in such lofty ideals, the truth of the matter is that we chose chess primarily because projects like Deep Blue have popularized the understanding that chess presents a worthy challenge to demonstrate the true power of computing. (And you know chess is cool if Dr. Who had to win a game to save the universe).

Having thus selected the game, we set about to take up that challenge by combining chess with Android devices and the power of the wot.io data service exchange™.

A quick search on GitHub uncovered an open-source implementation of a chess game designed for Android devices. "Yes, this should do quite nicely," we thought to ourselves.

Adding real-time connectivity with PubNub

Unfortunately, while the original version of the chess game does support networked game play between two different devices, all of the communications are sent to a central FICS server.

Since our plans include integration with various wot.io data services and potentially even other IoT devices like local LED lights or clock timers, we realized that we would need to modify the game to somehow send game play events separately to each of these devices and services. And then if we wanted to add new devices or services later, we would need to update all of the game applications on all of the Android devices. Ugh.

"That doesn't seem to fit in very well with our thinking about these Android devices as IoT things" we thought. What our connected gaming devices really needed was a pub/sub approach to connectivity.

In other words, it sounded like a perfect fit for PubNub, one of the wot.io's data services.

We already knew that PubNub had a great story for developing multiplayer online games. So we wondered how difficult it would be to make it work with our Android application. Of course, we should have known they had that covered with their list of over 70 different SDKs.

It was a relatively straightforward exercise to replace the networking logic with the PubNub SDK. In fact, PubNub Presence even enabled us to add player presence information—a feature that the original game did not have. You can find the PubNub enabled version of the source code for the chess game on GitHub.

So in very short order we were able to take an existing game and connect it to PubNub's real-time data platform with presence information—and in the process, effectively replace an entire purpose-built FICS service in the cloud. Not bad.

Routing gameplay event data with wot.io

Of course, after connecting the devices via PubNub, the rest was easy since wot.io already has a PubNub adapter.

Wait...you mean there are no more changes required to the application source code to connect to wot.io, its data services, or connected devices? I thought the only thing we did was to build a connected device application that knows how to connect to PubNub?

Exactly. All we needed was the name of the PubNub channel we had the game using, and the pub/sub API keys for our account, et voila! Game play events were available on the wot.io data service exchange, ready to be easily routed to or from any of the myriad data services that wot.io provides.

Yes, that's right. Because the wot.io adapter for PubNub is bi-directional, not only can data services subscribe to the game play events being transmitted by the devices over PubNub, but they can also publish messages that can be sent back to the devices via PubNub as well.

Any ideas for interesting data services that could take advantage of such power suggesting themselves to you yet? Well they did to us, too.

Next Time...

But before we get ahead of ourselves, as a first step, next time we'll see how easy it was to store all of the game play events in a data store using the wot.io data service exchange™.

Chess Photo Credit: By David Lapetina (Own work) [GFDL or CC BY-SA 3.0], via Wikimedia Commons

November 17, 2015 / Thingworx, wot.io, wot.io thingworx extension / Posted By: wotio team

wot.io™ is now part of the ThingWorx IoT Marketplace with an extension that integrates the ThingWorx® Platform with the wot.io data service exchange™. ThingWorx is a widely adopted IoT platform, allowing rapid development of IoT solutions, and wot.io provides new data services with interoperability across IoT/M2M platforms. Together, they are a perfect complement.

Here are some things that make wot.io's ThingWorx extension pretty cool:
  1. wot.io can deploy, on demand, any number of Thingworx IoT application platforms, fully networked, as a containerized data service to a public cloud, a private cloud, or even a personal computer.

  2. When ThingWorx is deployed as a data service in the wot.io data service exchange, wot.io’s ThingWorx extension works as a "thing", allowing click-and-drag interoperability between ThingWorx and any other IoT platform or application in the wot.io data service exchange.

wot.io + ThingWorx = some pretty awesome mashups & functionality

wot.io with ThingWorx enables app creators, hardware developers, system integrators and data service providers to deploy IoT projects seamlessly in wot.io’s cloud-based operating environment. The new extension made it easy for our developers to create a mashup as one of our demos for ARM's TechCon 2015 conference. The screenshot below shows the mashup with various ThingWorx widgets, each populated with a data feed connected graphically with the wot.io extension. You can see a live version in the video on our ARM TechCon blog post.

You can find the wot.io ThingWorx extension listed in the ThingWorx Marketplace. More information about the extension, including a video demo, is available in the wot.io labs blog post on creating and using the extension. And if you are a ThingWorx user, contact us to get set up with access to a wot.io operating environment.

November 17, 2015 / airport, faa, firebase, bipio, scriptr, wot.io / Posted By: wotio team

One of the key benefits wot.io provides is interoperability between different connected device platforms. One really interesting advantage of the wot.io architecture is that it allows developers to take both IoT data from connected devices as well as data from web-based feeds and combine them in interesting ways using dynamic data services like Firebase.

For this application, we're going to use Firebase's ability to store and sync data in realtime and subscribe to changes in their FAA (Federal Aviation Administration) dataset which shows the latest airport delay and status updates in realtime.

Separately, we'll create our own custom Firebase dataset, based off the FAA data, where we store points of interests around delayed airports and expand the scope of those interests based on the severity of the delay.

Making Lemonade from the Lemons of Others

It's no surprise that hotels want to predict demand to help them fill rooms to maximize their profits. When flights are delayed or canceled, an immediate opportunity is created for hotels near the affected airports to reach a collection of new hotel customers. When flights are delayed, travelers may miss their connections at hubs, or worse, have their flight canceled until the following day. The inconvenienced flyers will need to decide if they will wait it out at the airport or book a room in a nearby hotel.

If only the hotels knew about the delays and the dilemma facing the travelers, maybe they could sway them toward opting for a night's stay with some type of discount...

We'll show how we can combine data services in the wot.io operating environment to help predict when a person may be delayed so that hotels nearby with available rooms can connect.

But of course, it doesn't end with hotels. After all, wot.io is a data service exchange. Other parties like taxi companies or Uber may be interested in the same information to predict a potential surge in ridership, and route vehicles to the area as necessary. Or discount travel services like Expedia or Kayak may wish to use the data to rate and sort hotels - enticing the same delayed flyers to use their services instead. Naturally, in addition to the delay information, the sorting algorithm for presenting these options to an end customer would typically employ custom business logic based on different revenue sharing rates, etc.—business logic that can, once again, be readily represented as composable data services like scriptr, for example.

Overview

Above is a block diagram giving an overview of the solution we'll be describing in the following sections.

Using Firebase as a Data Source

You can view the FAA data by pointing your browser to the FAA Firebase feed. You can see that at the time of this post, MSP (Minneapolis International) was experiencing a weather-based delay.

Delay at MSP

We can subscribe to notifications whenever the delay information changes and route the data through the wot.io data service exchange into a service like scriptr where we can run custom business logic.
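
Firebase also exposes the same data over a simple REST interface, which is handy for spot checks. As a rough sketch (the feed hostname below is a placeholder, not the real FAA feed URL), appending .json to a path returns that node's current value, and asking for a server-sent-event stream delivers changes as they happen:

# placeholder hostname; substitute the actual FAA Firebase feed
$ curl -s "https://faa-feed.firebaseio.com/MSP/.json" | jq .

# the same endpoint as a stream of change events
$ curl -s -H "Accept: text/event-stream" "https://faa-feed.firebaseio.com/MSP/.json"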

Augment the Data with Scriptr

In this case, we wrote a quick script to take the incoming airport notifications and fetch the geolocation data for that airport using the Google Maps API.

In addition to being useful on its own, we can further augment the now geo-located airport information with a list of nearby attractions using Google's Places API. In fact, in this case we have modeled the type of business logic that a travel service like Expedia might use by targeting different types of places and offers based on the delay severity. So a short delay might target a discounted drink offer at a nearby bar, while a longer delay might result in an offer for an overnight stay at a nearby hotel.
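
As a rough sketch of the two lookups the script performs (shown here as bare HTTP calls; the coordinates are illustrative and YOUR_KEY stands in for a real API key):

# geocode the delayed airport to get a latitude/longitude
$ curl -s "https://maps.googleapis.com/maps/api/geocode/json?address=Minneapolis+Saint+Paul+International+Airport&key=YOUR_KEY"

# look up lodging near that location with the Places API
$ curl -s "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=44.88,-93.22&radius=5000&type=lodging&key=YOUR_KEY"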

A key point that should not be missed is that the location lookup script didn't have to know about the hotel lookup script. In other words, wot.io's data routing architecture allows us to decouple the flow of data from the execution of the data services. If tomorrow we decided that we also wanted to send the geotagged airport delays to another data service like Hortonworks Enterprise Hadoop for archival and analytics on how to optimize our travel company offerings for seasonal trends in travel delays, we wouldn't have to modify the logic in the airport script at all.

Using Firebase as a Data Store

Like many of the data services in the wot.io data service exchange, Firebase supports both read and write interactions. With our newly augmented data set that mashes up the FAA data with Google's location and point-of-interest data, we can use the wot.io Firebase adapter to route the data back into a brand new collection of our own in Firebase.
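
Under the hood, writing the augmented records back out amounts to ordinary Firebase REST writes. A minimal sketch, again with a placeholder database URL and an invented record shape:

# PUT stores the augmented record at a known path in our own Firebase
$ curl -s -X PUT \
    -d '{"airport":"MSP","reason":"WEATHER","places":["Hotel A","Hotel B"]}' \
    "https://our-travel-app.firebaseio.com/delays/MSP.json"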

List of area Hotels

Custom Applications and wot.io

Many times, end-to-end IoT solutions will include custom user-facing applications in addition to the sorts of behind-the-scenes data services we have discussed thus far. Not a problem—to wot.io, they're just another data service!

As an example, we created a fictitious consumer-facing web application for the hotel and discount travel brands, transportation services, and restaurants and bars. This application can send and receive information to and from wot.io using Web Sockets for near realtime connections to our new dataset in Firebase as well as any other data services on the exchange. In this case, it's just a simple web application written using node.js. When delays are encountered and the business logic dictates that hotels should be offered, each hotel gets its own dynamically generated page with buttons for the customer to act on.

Example Hotel Brand Web application

And of course, wot.io connectivity is hardly limited to web applications. Here is a native Android mobile application for taxi fleet managers to use for monitoring and deploying their vehicles based on airport delay information. Not only does this application illustrate how mobile applications can connect to the wot.io data service exchange as easily as web applications, but it also illustrates how multiple parties may use similar data for different purposes.

Example Taxi Application

Happy Travels!

Although we've "flown through this post" rather quickly (sorry, I couldn't resist), hopefully we've demonstrated how wot.io's composable data service architecture provides significant power and flexibility by connecting to multiple systems and providing data services that can manipulate IoT and web-based data in near realtime fashion. This simple demo also shows the value of pulling data from multiple sources, transforming it, and creating a new stream of data that becomes useful in brand new ways of its own.

So good luck on your next trip, and we hope you don't experience any delays!

November 17, 2015 / ARM TechCon / Posted By: wotio team

wot.io spent last week in Santa Clara, CA, attending ARM's TechCon 2015 conference and we had a bunch of new things to show. As an ARM mbed partner, we had a kiosk location in ARM's mbed zone, which this year was a huge booth located right in the middle of the expo floor. As a reflection of the growth of the mbed ecosystem, the booth had 4 kiosk areas with 4 partners each for a total of 16 mbed partners represented in the booth!

ARM mbed Device Server was a key part of our demo and it was exciting to see strong interest in our delivery of the ARM mbed Device Service in the wot.io data service exchange.

Our kiosk had 3 ARM partners who were also wot.io partners, so that worked out well. Two hardware partners, Atmel and u-blox were represented with hardware in our demo. One of our key data service provider demos was an integration with ForgeRock and they were in the booth right next to us as well.

The demo had a sample of our new integration with Informatica as well as examples of bip.io, scriptr.io, ThingWorx, and Circonus. The premise of the demonstration was to show interoperability between IoT platforms and creating actionable results from the resulting connected device data for the enterprise.

ARM has posted a short video of our demo from the show:

You can also see videos of other ARM partners from the mbed Zone this year.

If you're interested in more detail on the data services, here is another video that describes them more fully:

In addition to our presence on the expo floor, we gave a talk as part of the Software Developer's Workshop. Our talk was titled Empower Your IoT Solution with Data Services and as the title suggests, we demonstrated some of the wot.io data services that are particularly useful for engineers and developers as they design IoT solutions.

It was a great show and we'll have more details on the technology behind the demos soon. As always, if you have questions about wot.io or any of our data services, don't hesitate to contact us!

November 14, 2015 / Freescale, bipio, ARM, mbed, wot.io, data service exchange / Posted By: wotio team

I’m writing this from the plane while traveling home from the ARM TechCon 2015 conference. ARM has a vibrant community of IoT innovators, and ARM TechCon was a great event where wot.io were finalists for the Best IoT Product award and hoped for a repeat win to back up our 2014 Best of Show award. Congratulations to our partners at Atmel, though, for beating us out in the end. Of course, we like to think we helped nudge them ahead by featuring Atmel hardware as a part of our demonstration. We’ll get them next year, though ;-)

On the final day of the Expo, I happened to be chatting with some folks from Freescale—another wot.io partner and fellow ARM mbed Zone exhibitor, not to mention Best in Show and Reader’s Choice award winner! The folks from Freescale had seen how we were already routing sensor data from their devices to a complex array of data services, and wanted to know how difficult it would be for Freescale developers to harness a tiny sliver of that power—say to connect their devices to ARM mbed Device Server and another data service like bip.io—and most importantly, to get it working quickly and easily.

Unfortunately, the Expo hall was closing and we were packing up our respective booths; but I told him that what he asked should be easy since the work was really already done. I promised to pull together some sample code and instructions on the plane the next morning.

So that's just what I'll try to do in this post.

ARM TechCon Demo

The following is a block diagram showing the array of data services that we demonstrated at the event.

The full demo really deserves a post of its own. For now, I just want to outline the simplest possible way to hook up a Freescale device to ARM mbed Device Server and bip.io.

Connecting the FRDM-k64f to ARM mbed Device Server

In a previous post, we've already shown how to get a Freescale FRDM-k64f board running ARM mbed OS connected to an instance of ARM mbed Device Server. So just to keep things fresh, this time we'll start from an existing ARM example project.

The mbed OS Application

As expected, it was quite straightforward to update the existing example code to work with wot.io. (In fact, the code changes were quicker than the updates to the README!) You can find the source code on Github.

In source/main.cpp we only needed to change the location of our mbed Device Server

const String &MBED_SERVER_ADDRESS = "coap://techcon.mds.demos.wot.io:5683";  

and then, since our ARM TechCon demonstration server was not using TLS, we need to remove the certificate-related attributes and instead set the M2MSecurity::SecurityMode to M2MSecurity::NoSecurity when we create the register server object:

    M2MSecurity* create_register_object() {
        // Creates register server object with mbed device server address and other parameters
        // required for client to connect to mbed device server.
        M2MSecurity *security = M2MInterfaceFactory::create_security(M2MSecurity::M2MServer);
        if(security) {
            security->set_resource_value(M2MSecurity::M2MServerUri, MBED_SERVER_ADDRESS);

/*
            security->set_resource_value(M2MSecurity::SecurityMode, M2MSecurity::Certificate);
            security->set_resource_value(M2MSecurity::ServerPublicKey,SERVER_CERT,sizeof(SERVER_CERT));
            security->set_resource_value(M2MSecurity::PublicKey,CERT,sizeof(CERT));
            security->set_resource_value(M2MSecurity::Secretkey,KEY,sizeof(KEY));
*/
            security->set_resource_value(M2MSecurity::SecurityMode, M2MSecurity::NoSecurity);

        }
        return security;
    }

We should now be able to build our mbed OS application using yotta (see the README for instructions). In my case, I think I'd better wait until I get off the plane before I start programming blinking devices with dangling wires to test this out, though.

Connecting to wot.io data service exchange

To view or otherwise transform or integrate our device data using http://bipio.cloud.wot.io, a popular web API automation tool available on the wot.io data service exchange, follow these simple steps:

  1. Sign up for a free account on bip.io if you do not already have one.
  2. Create a new workflow (called a "bip") and name it freescale.
  3. Select "Incoming Webhook" as the trigger for this bip, as we will be instructing mbed Device Server to send it notifications via HTTP.
  4. For now, add a simple "View Raw Data" node and set its value to the "Incoming Webhook Object". This will allow us to see the messages being received from the device. Later on, of course, you can do much more interesting things with your workflow.

Setting a notification callback in mbed Device Server

Since we want to have mbed Device Server send device notifications to bip.io, we need to register a notification callback via the REST API. The general form of the API call is

curl -X PUT 'https://mds.example.com/notification/endpoint' -d '{"url":"http://callback.example.com/path"}'  

to have notifications sent to "http://callback.example.com/path". But in our case, we will also need to supply some security credentials for the API call and some custom headers for the callback in order to make everything work for bip.io. In addition, once we have registered our callback, we need to subscribe to notifications for a particular resource. Recall that our dynamic button-press resource was identified as /Test/0/D in main.cpp. The final API calls have been captured in the script bipio_subscribe for convenience:

#!/bin/bash
# Simple script to simulate a device sending
# sensor readings to a bip.io workflow automation

MDS_USER=freescale  
MDS_PASS=techcon2015  
MDS_HOST=techcon.mds.demos.wot.io  
MDS_PORT=8080

BIPIO_USER=techcon  
BIPIO_TOKEN="dGVjaGNvbjp3b3RpbnRoZXdvcmxk"  
BIPIO_BIPNAME=test  
BIPIO_ENDPOINT="https://$BIPIO_USER.bipio.demos.wot.io/bip/http/$BIPIO_BIPNAME"  
BIPIO_HEADER_HOST="$BIPIO_USER.bipio.demos.wot.io"  
BIPIO_HEADER_CONTENT="application/json"  
BIPIO_HEADER_AUTH="Basic $BIPIO_TOKEN"  
BIPIO_HEADERS="{\"Host\":\"$BIPIO_HEADER_HOST\", \"Content-Type\":\"$BIPIO_HEADER_CONTENT\", \"Authorization\":\"$BIPIO_HEADER_AUTH\"}"

echo "Sending subscription request to ARM embed Device Server..."  
curl -X PUT \  
    -H "Content-Type: application/json" \
    -d "{\"url\": \"$BIPIO_ENDPOINT\", \"headers\": $BIPIO_HEADERS }" \
    "http://$MDS_USER:$MDS_PASS@$MDS_HOST:$MDS_PORT/notification/callback"
curl -X PUT \  
    -H "Content-Type: application/json" \
    "http://$MDS_USER:$MDS_PASS@$MDS_HOST:$MDS_PORT/subscriptions/wotio-freescale-endpoint/Test/0/D"
echo -e "\nDone."  

Now, with the FRDM-k64f board connected, we can run the bipio_subscribe script to have notifications sent to our new bip. We can also view the "Logs" tab for our test bip to verify that notifications are being received, or the "View" tab of the "View Raw Data" node to see the messages themselves.

That's it! That's all there is to it! Of course, we can modify our bip to do something interesting like manipulate the messages, send them as part of SMS or email messages, save them to Google documents or send them to any of the other 60+ integrations that bip.io offers.

Next Time

Next time, we'll tweak the demo a bit further to hook up to the newly announced ARM mbed Device Connector cloud service—a convenient tool to use during prototyping—and use our button-push events to interact with other services on the wot.io data service exchange.

I love composable data services.

One of the great tools available to developers who use the wot.io Data Service Exchange is the protocol adapter framework. The wot.io protocol adapters make it possible to take existing applications which speak a given protocol, and use them to generate new data products. The wot.io Data Bus allows for exchanging these data products across multiple protocols in real time, making it easy to share data across services and customers.

Currently, the wot.io Data Service Exchange protocol adapters have production support for HTTP, WebSockets, MQTT, CoAP, AMQP, TCP, and UDP. Labs also has experimental support for JMS, ZeroMQ, STOMP, XMPP, DDSI-RTPS, and JDBC currently in development. In addition to the open standard protocols, the DSE also has application specific adapters for various databases, search engines, logging systems, and the like.

A sample application

To see how these protocol adapters work in conjunction with applications, we'll create a simple node-red flow that will publish a stream of 5 messages per second across MQTT to the wot.io Data Bus. This feed will then be replicated and split into two separate output streams to clients using CoAP and WebSockets at the same time.

The node-red flow for our example application consists of five nodes:

  • an inject node to start the flow of messages
  • a function node which generates the message payload
  • a delay node in a feedback configuration to generate 5 messages per second
  • a debug node to display the output in the node-red console
  • an MQTT node to send the messages to the Data Bus

The data can be read off the wot.io Data Bus using off the shelf applications such as coap-cli and wscat. While I happened to use command line tools for this application, it is easy enough to use standard libraries like the CoAP libraries provided by ARM in mbed OS, or even your favorite web browser.

To see this application in action, please watch the following video:

In the above video, I show the creation of three wot.io Data Bus resources:

  • wot:/wot/narf
  • wot:/wot/coap
  • wot:/wot/ws

The names of these resources allow you to identify them across protocols. There is nothing intrinsically magical about any of these names, and any protocol can read or write to any of the resources. It is the URLs of each of the application connections:

  • mqtt://token@host:port/account/resource
  • coap://token@host:port/account/resource
  • ws://token@host:port/account/resource

that determine how the data is translated between encapsulation layers. The creation of resource bindings, as demonstrated in the video, also provides a means to duplicate and route data between resources. The wot.io Data Bus routing supports filtering payloads based on regular expression matches as well as hierarchical pattern matches. These advanced features are accessible through the protocol adapters as well as through the wot.io command line tools.
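
To make that concrete, here is how the same resources might be consumed from the command line over each protocol. The host, ports, and TOKEN are placeholders, and the mapping of the account token onto MQTT credentials and topic names is my assumption rather than documented behavior:

# subscribe to wot:/wot/narf over MQTT (assumes the token is used as the username)
$ mosquitto_sub -h host -p 1883 -u TOKEN -t "wot/narf"

# observe wot:/wot/coap as a CoAP resource
$ coap get "coap://TOKEN@host:5683/wot/coap"

# stream wot:/wot/ws over a WebSocket
$ wscat -c "ws://TOKEN@host:80/wot/ws"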

Since each Data Bus resource is addressable through all of the protocol adapters and the application adapters, it is possible to mix and match data, making it available both to consumers of a data product and for internal storage, processing, and analysis through applications made available as part of the Data Service Exchange. Each Data Bus resource has individualized access controls, with each type of resource operation (reading, writing, creating, deleting, binding, etc.) being controllable through policy. These access controls allow developers using the wot.io Data Service Exchange to make some or all of their data available to interested parties. With the protocol adapter framework in place, the wot.io Data Bus makes it easy to provide your customers with their data through whatever protocol they choose for their application.

November 9, 2015 / wot.io, Critical Mention, MongoDB, Pentaho / Posted By: wotio team

As we mentioned in a previous post about NGDATA and scriptr.io, we have a partnership with Critical Mention giving us access to their enriched real-time media stream containing transcribed content of broadcasts across radio and television in the US, Canada, Mexico, UK, and other countries. As we showed previously, this rich feed of data can be processed in many ways and the data stream itself transformed with data services into new useful feeds.

As another example of wot.io data services that can operate on the Critical Mention media feed, we routed the feed into a MongoDB instance. Working with one of our systems integration partners DataArt, we then set up transformations on the data in an instance of Pentaho. In addition to the transcribed text of the broadcasts, the messages in the feed have additional data including the country where the broadcast originated, the network, etc. We created Pentaho transformations based on this data and were able to quickly create graphs showing the frequency of countries in the feeds.

This is a great example of how wot.io can route high-volume data to multiple deployed data services for processing. It also provides a glimpse at the types of things that are possible with the Critical Mention feed. We captured some of the details in a video. Enjoy!

November 9, 2015 / IoT, 5G / Posted By: Kelly Capizzi

InterDigital’s Rafael Cepeda, Senior Manager, examines if 5G can become an enabler that provides flexible connectivity and core tools for IoT building blocks in a recent article featured on TelecomEngine. Rafael states that with 5G developments well underway, the conversation has turned to how 5G will impact society and intersect with the Internet of Things (IoT).  

In his article, Rafael utilizes examples from the transport sector and smart cities to reveal how the mobile networks of today are not designed to handle the large expansion of diverse data that will come with IoT. However, 5G networks are anticipated to provide the flexibility required for the data generated by the IoT. Therefore, IoT will drive the dynamic configuration of the 5G network. Ultimately, Rafael explains that the two will work together to deliver an efficient configuration that serves all end users' needs, whenever and wherever.  

Click here to read the full article, or visit the vault to learn more about 5G and IoT.

November 4, 2015 / iot, wot.io, NGDATA, Critical Mention / Posted By: wotio team

IoT implementations produce all different types and frequencies of data. Some generate low volume, high value data, and others generate very high volume data streams that become valuable when they are processed and analyzed. For these types of applications, the data processing solutions need to be able to store and analyze this large volume of data quickly and efficiently. One solution for this use case in the wot.io data service exchange is NGDATA and their Lily big data toolset.

At wot.io, we deliberately define IoT in the broadest sense when we think about data processing solutions. For us, any device in the field generating data is suited to an IoT data solution. These devices can be traditional IoT like sensors on a tractor or a machine in a factory or a shipping container, but they can also be a set-top box delivering media and processing user preferences to create a better user experience. One such stream of data is that compiled by our data exchange partner at Critical Mention where they process media streams in real-time and provide a rich data feed of activity across radio and television broadcasts. Although some may not consider this a typical sensor IoT scenario, this is exactly the type of high-volume data feed wot.io partner solutions are built to handle.

In one implementation, we worked with our data service partner NGDATA to offer a Hadoop- and Solr-based big data service and then routed a sample Critical Mention data stream to it. We were then able to query the live data for a variety of information that users might find interesting, like trending topics, brand mentions, and the times and frequencies at which select issues are discussed. Other partner services, like those provided by Apstrata (now named scriptr.io), could also be applied to search and process the data from Lily. This video gives an overview of how we did it.

NGDATA's Lily toolset also has a set of user interfaces provided as part of the solution. You can get a feel for those tools below.

The examples in the video are designed and configured for banking, media, and telecom verticals, but you can imagine trending and alerting applied to the Critical Mention data product, or even industrial use cases where trending is monitored for tracked devices, machines, or vehicles out in the field.

This application of existing data services like NGDATA to IoT data streams, with the broadest definition of IoT, is what excites us at wot.io. The broad set of data services in our exchange bring both industry-standard and innovative solutions to IoT projects of all types.

wot.io is an authorized partner with Critical Mention to add value to the Critical Mention broadcast data stream. If you're interested in access to the Critical Mention data stream please contact us at: info@wot.io

November 4, 2015 / IoT, wot.io, texas instruments, beagleboard, oneMPOWER / Posted By: wotio team

In this post I connect Texas Instruments Sensortags to oneMPOWER™, an M2M/IoT device management platform and implementation of the oneM2M specification, developed by the oneM2M standards organization.

wot.io IoT middleware for the connected Enterprise

Here I build on previous work (part 1, part 2) done with TI Sensortags and the Beaglebone Black from beagleboard.org, to demonstrate how easy it is to combine data services in an IoT solution using wot.io.

As you will recall from those previous posts, I used the wot.io data service exchange™ to employ DeviceHive as a device management platform data service from which I routed device notifications through some transformation logic in scriptr; and on to a Nest thermostat integration in bip.io and monitoring & metering stripchart in Circonus.

While DeviceHive is an excellent, open-source option for device management, wot.io data service exchange is about choice and IoT platform interoperability.

Today we're going to demonstrate an alternative device management platform: the oneMPOWER™ device management platform running as a wot.io data service within the wot.io data service exchange middleware. The loose coupling of wot.io's routing architecture and data service adapters keeps everything else working seamlessly, resulting in a powerful, composable IoT/M2M solution. While not demonstrated in this post, both DeviceHive and oneMPOWER could be deployed to work together in the wot.io data service exchange.

oneM2M & oneMPOWER

oneM2M represents an extensive set of entities and protocol bindings, designed to tackle complex device management and connectivity challenges in the M2M and IoT sector. Naturally, a full treatment of how the oneM2M system works is beyond the scope of this article, and I refer you to the oneM2M specifications if you want to learn more. For this demo, you'll want to refer to these in particular:

Additionally, you will soon find public code samples in github: [currently private for review]

One of the tools that InterDigital makes available to oneM2M developers is a client application designed to view the resource hierarchy in a given oneMPOWER system. We'll use it here to visualize state changes as we interact with the oneM2M HTTP bindings. At the top is a reference diagram of oneM2M entities, helpful to have at your fingertips. You can see events as they happen in the console window on top, and at the bottom is the resource viewer. Keep an eye there for resources to appear as they are created.

InterDigital's Resource Tree Viewer

Note, the tool header lists MN-CSE, for a Middle Node, but we're working with an IN-CSE, an Infrastructure Node. These oneM2M designations are actually very similar—differentiated by their configuration to correspond to their roles in a large-scale oneM2M deployment. Don't worry about it for now, just watch the resource tree!

Application Entity Setup

For this demonstration, we will first create an Application Entity (AE) by hand, in the Common Services Entity (CSE) instantiated by the oneMPOWER server. In a full system, the devices or gateways would not typically be responsible for defining the full resource tree, so here we use curl commands against the oneMPOWER HTTP REST interface endpoints. The message is sent as XML in the body of an HTTP POST, but per the specs you can use other encodings like JSON, too.

Note that all parts of the calls are important, with critical data represented in the headers, path, and body!

curl -i -X POST -H "X-M2M-RI: xyz1" -H "X-M2M-Origin: http://abc:0000/def" -H "Content-Type: application/vnd.onem2m-res+xml; ty=2" -H "X-M2M-NM: curlCommandApp_00" -d @payloadAe.xml "http://$IPADDRESS:$PORT/$CSE"

HTTP/1.1 201 Created  
Content-Type: application/vnd.onem2m-res+xml  
X-M2M-RI: xyz1  
X-M2M-RSC: 2001  
Content-Location: /CSEBase_01/def  
Content-Length: 367

<?xml version="1.0"?>  
<m2m:ae xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-ae-v1_0_0.xsd" rn="curlCommandApp_00"><ty>2</ty><ri>def</ri><pi>CSEBase_01</pi><ct>20151030T221330</ct><lt>20151030T221330</lt><et>20151103T093330</et><aei>def</aei></m2m:ae>  

The body of the POST contains this XML data, including the application ID for the AE:

<?xml version="1.0"?>  
<m2m:ae xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-ae-v1_0_0.xsd" rn="ae">  
    <api>myAppId</api>
    <rr>false</rr>
</m2m:ae>  
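
oneM2M also defines a JSON serialization, so the same AE create could in principle be sent with a JSON body and the application/vnd.onem2m-res+json content type. We used XML throughout this demo, so treat the following as an illustration of the alternative encoding rather than something verified against this particular oneMPOWER build:

curl -i -X POST -H "X-M2M-RI: xyz1" -H "X-M2M-Origin: http://abc:0000/def" -H "Content-Type: application/vnd.onem2m-res+json; ty=2" -H "X-M2M-NM: curlCommandApp_00" -d '{"m2m:ae": {"api": "myAppId", "rr": false}}' "http://$IPADDRESS:$PORT/$CSE"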

Verifying the AE

Next we'll perform a simple check to make sure that the Application Entity was properly configured in the CSE. We expect to get a reply showing what we configured for the AE, and no errors.

curl -i -X GET -H "X-M2M-RI: xyz1" -H "X-M2M-Origin: http://abc:0000/def" "http://$IPADDRESS:$PORT/$CSE/curlCommandApp_00"

HTTP/1.1 200 Content  
Content-Type: application/vnd.onem2m-res+xml  
X-M2M-RI: xyz1  
X-M2M-RSC: 2000  
Content-Length: 399

<?xml version="1.0"?>  
<m2m:ae xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-ae-v1_0_0.xsd" rn="curlCommandApp_00"><ty>2</ty><ri>def</ri><pi>CSEBase_01</pi><ct>20151030T221330</ct><lt>20151030T221330</lt><et>20151103T093330</et><api>myAppId</api><aei>def</aei><rr>false</rr></m2m:ae>  

And you can see above, the ID myAppId is there! It worked! We can also see it appear in the resource tree viewer, here shown as the green box labeled "def" (a "foo" name drawn from the create call above):

Create a Container

In order to store content in a CSE, you must first create a Container entity. This is just a named bucket into which your content instances will go. Here's the call to set up a container named curlCommandContainer_00. The XML payload is more or less empty as the name implies, as we are not setting any extended attributes here.

curl -i -X POST -H "X-M2M-RI: xyz2" -H "X-M2M-Origin: http://abc:0000/$CSE/def" -H "Content-Type: application/vnd.onem2m-res+xml; ty=3" -H "X-M2M-NM: curlCommandContainer_00" -d @payloadContainerEmpty.xml "http://$IPADDRESS:$PORT/$CSE/curlCommandApp_00"

HTTP/1.1 201 Created  
Content-Type: application/vnd.onem2m-res+xml  
X-M2M-RI: xyz2  
X-M2M-RSC: 2001  
Content-Location: /CSEBase_01/def/cnt_20151030T221435_0  
Content-Length: 407

<?xml version="1.0"?>  
<m2m:cnt xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cnt-v1_0_0.xsd" rn="curlCommandContainer_00"><ty>3</ty><ri>cnt_20151030T221435_0</ri><pi>def</pi><ct>20151030T221435</ct><lt>20151030T221435</lt><et>20151103T093435</et><st>0</st><cni>0</cni><cbs>0</cbs></m2m:cnt>  

And again, the viewer shows our container created successfully, in red. It's labeled by the resource identifier (also returned in the XML response we see above), and not by the resource name that we provided. (If you hover over the block you can verify the extra info is correct.)

Create a Content Instance

Now we're ready to get to the fun stuff, sending actual data from our devices! Before we go over to the device script, we'll run one more test to make sure we can create a Content Instance by hand.

Of note here is that each Content Instance needs a unique identifier. Here you can see its name specified by the request header X-M2M-NM: curlCommandContentInstance_00. If you run the same command with the same name, it will fail, as the content instance already exists. This makes sure you can't accidentally erase important data.

curl -i -X POST -H "X-M2M-RI: xyz4" -H "X-M2M-Origin: http://abc:0000/$CSE/def/cnt_20151030T221435_0" -H "Content-Type: application/vnd.onem2m-res+xml; ty=4" -H "X-M2M-NM: curlCommandContentInstance_00" -d @payloadContentInstance.xml "http://$IPADDRESS:$PORT/$CSE/curlCommandApp_00/curlCommandContainer_00"

HTTP/1.1 201 Created  
Content-Type: application/vnd.onem2m-res+xml  
X-M2M-RI: xyz4  
X-M2M-RSC: 2001  
Content-Location: /CSEBase_01/def/cnt_20151030T221435_0/cin_20151030T221557_1  
Content-Length: 417

<?xml version="1.0"?>  
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="curlCommandContentInstance_00"><ty>4</ty><ri>cin_20151030T221557_1</ri><pi>cnt_20151030T221435_0</pi><ct>20151030T221557</ct><lt>20151030T221557</lt><et>20151103T093557</et><st>1</st><cs>2</cs></m2m:cin>  

This is the content we sent in the body of the request, again as XML. You can see the data field in the con element, which is the integer 22.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>  
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="cin">  
    <cnf>text/plain:0</cnf>
    <con>22</con>
</m2m:cin>  

And our content instance appears in the viewer as well, in the orange block:

Content Instance

And you can see the details in a pop-up. Notice the parentID, and that it matches the container's ID from above. You can also see the data we sent at the bottom, the value 22:

Content Instance Details

Send Device Data

Running on the BeagleBone device, we have a small Python script that communicates with the oneM2M HTTP REST interface to send periodic telemetry data to the oneMPOWER instance, and ultimately on to the wot.io bus via the wot.io oneMPOWER adapter. First, the header, where we import some libs and set our configuration: the CSE name, app name, and container name must match what's been configured in the oneMPOWER instance.

#!/usr/bin/env python

import time  
import httplib  
import os

command_temp = "python ./sensortag.py 78:A5:04:8C:15:71 -Z -n 1"  
hostname = "23.253.204.195"  
port = 7000  
csename = "CSE01"  
appname = "curlCommandApp_00"  
container = "curlCommandContainer_00"  

Next, we set up some simple helper functions to

  • read the sensor data from the TI SensorTags connecting to our device via Bluetooth (see previous post for details),
  • compose a Content Instance XML message, and
  • send it to the HTTP endpoint.

Finally, we loop and sleep to generate time-series data. Simple!

def readsensor(cmd):  
    return float(os.popen(cmd).read())

def onem2m_cin_body(value):  
    message = """<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="cin">  
<cnf>text/plain:0</cnf>  
<con>%s</con>  
</m2m:cin>""" % value  
    return message

def send(value):  
    body = onem2m_cin_body(value)
    headers = {}
    headers['Content-Length'] = "%d" % len(body)
    headers['Content-Type'] = "application/vnd.onem2m-res+xml; ty=4"
    headers['X-M2M-NM'] = "my_ci_id_%s" % time.time()
    headers['X-M2M-RI'] = "xyz1"
    headers['X-M2M-Origin'] = "http://abc:0000/def"
    path = "/%s/%s/%s" % (csename, appname, container)
    con = httplib.HTTPConnection(hostname, port)
    con.request("POST", path, body, headers)
    res = con.getresponse()
    print res.status, res.reason, res.read()
    con.close()  # call close() so the connection is actually released

while True:  
    print "Reading sensor\n"
    value = readsensor(command_temp)
    print "Got %f - sending\n" % value
    send(value)
    print "Sleeping...\n"
    time.sleep(30)

And now a quick example of the output as we run the above script. We see it read the SensorTag data as we have done in the past, assemble a content instance message, and send it via HTTP POST. Created content instances appear in the specified container, just as we saw above, and from there the telemetry flows back to the wot.io bus and on to other data services.

root@beaglebone:~# ./main.py  
Reading sensor

Got 44.643921 - sending

201 Created <?xml version="1.0"?>  
<m2m:cin xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.onem2m.org/xml/protocols CDT-cin-v1_0_0.xsd" rn="my_ci_id_1446511119.68"><ty>4</ty><ri>cin_20151103T003839_9</ri><pi>cnt_20151030T221435_0</pi><ct>20151103T003839</ct><lt>20151103T003839</lt><et>20151106T115839</et><st>9</st><cs>13</cs></m2m:cin>

Sleeping...  

oneMPOWER Protocol Analyzer

It's worth noting that there's also a protocol analyzer available in the resource tree viewer, which is handy for debugging communication sequences. You'll see some of our requests represented below:

OneM2M Protocol Analyzer

Ship your IoT Solution with wot.io Data Services

As you will recall from my previous post, we have now done everything necessary to

Whew! That's a mouthful! What a relief that wot.io's loosely-coupled architecture supports the DRY principle so that we only had to modify the third bullet. The rest of that complex data flow just continued to work just like before!

From data in motion to data at rest, and with an ever-growing selection of data service partners, wot.io has you covered, including enterprise-ready solutions like oneMPOWER. Ready for more? Head over to wot.io and dig in!

November 4, 2015 / Posted By: wotio team

We're pleased to announce bip.io's new premium plans will boost productivity with new pods and actions, finer control over scheduling, generous bandwidth and priority support for your integrations.

bip.io is a free hosted platform for the open source bip.io server. If you're running your own bip.io server, you can continue to mount your servers into the hosted platform on the Free plan as you've always done.

As a special thanks to bip.io's supporters for contributing to our success, customers already using premium features have been automatically upgraded to the bip.io Basic plan, free of charge. The next time you log into bip.io, all features for this plan will be automatically unlocked.

In addition to the original Free and Enterprise licensing plans, 3 upgrade levels have been added for everyone from those who want to go the extra step, to power users, to bip.io pros!

plans and pricing.

Premium Pods

You'll notice that some pods in bip.io have been marked as premium, requiring an upgrade.

Upgrading to any premium plan will automatically unlock all premium Pods, which will be instantly available. Premium users will automatically acquire new Pods as they become available.

Here's the full list as it stands today

Community Pods

Community Pods are the staple bip.io integrations you might already know and love.

Scheduling

Event triggers on the free plan have always run every 15 minutes, except when they are manually initiated. On a premium plan, you can now schedule triggers to run at any time or timezone, as frequently as every 5 minutes.

Bandwidth

Wow, your bips are busy!! We've taken the average real bandwidth that bips use in each plan monthly and doubled it. If you start regularly exceeding the monthly bandwidth for your plan - Great Job! We'll reach out to you with assistance upgrading.

Thank you!

Like any migration from a historically free platform to one that starts to charge for usage, there's bound to be a lot of concerns and questions. Reach us at hello@bip.io with whatever is on your mind.

November 4, 2015 / IoT, oneMPOWER, oneM2M, ETSI, oneTRANSPORT / Posted By: Kelly Capizzi

Internet of Things (IoT) technology will be significant in many ways. One benefit of IoT technology is to help relieve some of the daily pressures and make life a little easier. Last week at the IoT Korea Exhibition 2015, InterDigital demonstrated how IoT could assist athletes as well as attendees on the day of a sporting event.  

A lot of information is available on the day of a sporting event that can already be collected by IoT devices and uploaded to cloud-based storage: information such as an athlete’s health, transportation to the event, stadium information, and more. If only all this data could be made easily available and accessible to app developers, an app could be developed to alleviate stress and improve the overall experience.  

Storing data on a standards-based platform such as InterDigital’s oneMPOWER™ IoT platform enables application developers to spend more time dreaming up new apps and less time worrying about where to find the data, how it is formatted, and how to access it.

Take a closer look at each use case and application for the oneMPOWER IoT platform presented at IoT Korea Exhibition 2015:

What’s up next? InterDigital will demonstrate its oneMPOWER IoT platform at the ETSI M2M Workshop 2015 featuring oneM2M on December 9-11, 2015 in Sophia Antipolis, France. The workshop will be focused on ETSI’s standardization for M2M as well as Smart Cities and Smart Living.  

For more on InterDigital’s work in IoT, please visit the vault.

October 30, 2015 / wot.io, HTTP/HTTPS relay / Posted By: wotio team

Within the wot.io data service exchange (DSE), we often need to interface different web APIs to each other. It is not uncommon for 3rd party software to have limited support for secure connections. While we provide SSL termination at our secure proxy layer, there are many legacy applications which can only communicate via HTTP. You can envision the problem space as follows:

relay matrix

In order to address this connectivity space, we developed an HTTP/HTTPS relay that allows us to receive both types of requests and translate their security context. A request can be received via HTTP or HTTPS and be sent out via HTTP or HTTPS independent of the original request. By adding a simple rewrite rule to our proxy layer, we can specify the desired translation. This makes it easy for our developers to integrate software that lacks support for a given security scheme with software that requires one.

base architecture

Another interesting side effect of the relay is that we can use it to transparently proxy both request and response data to the wot.io data service exchange. Both the request data and the response data are made available on the exchange as resources. We can then integrate those data streams into any of the integrated data services.

wot architecture

With the wot relay, we could log all of the requests to a 3rd party web API made through the relay to a search engine like Elasticsearch, so that we can later study changes in usage by geography. At the same time, we could store the raw requests into a database like MongoDB for audit purposes, and use a data explorer like Metabase to create custom dashboards. We could also log the same data to a monitoring and analytics provider like Circonus and monitor the reliability of the 3rd party servers.

From the viewpoint of the HTTP client, the backend service, be it HTTP or HTTPS, is available through the corresponding wot relay address. The developer of that application need not worry about changes in requirements for data retention, metering, monitoring, or analytics. The only change is that instead of calling the desired service directly, they would call it through a relay.wot.io or secure.relay.wot.io address. The only difference between the two addresses is that secure.relay.wot.io only works with HTTPS servers, and will not relay to an HTTP server.
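
As a purely hypothetical illustration (the path convention below is invented for the example and is not the actual relay syntax), the application-side change amounts to nothing more than a different base URL:

# direct call to a legacy, HTTP-only API
$ curl "http://api.example.com/v1/widgets"

# hypothetical: the same call through the relay, which can validate upstream
# certificates, forward over HTTP or HTTPS as configured, and copy the
# request/response pair onto the wot.io data bus
$ curl "https://relay.wot.io/api.example.com/v1/widgets"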

From a security standpoint, the client may connect to both the secure and insecure relay addresses through HTTP or HTTPS. In the case of TLS connections, we guarantee that we will validate the upstream certificates, and will refuse to relay the request should we not be able to verify the x509 certs of the upstream server. In this fashion, your application need only be able to verify the signature of the wot relay service itself, and not worry about validating the signatures of all of the potential upstream services you might wish to interrogate. In this way we preserve the chain of trust, and do not engage in any man-in-the-middle TLS tricks.

Having this layer of indirection, most importantly, opens up the full data set to inspection. Since each request and each response is forwarded to the wot data bus, any number of data services can be used to process both the requests and responses. This allows the application to be extended in new ways without having to rewrite the integration. The relay makes it easy to generate new valuable data streams and insights from your application behavior at the cost of a few milliseconds in additional request latency. As the wot data bus allows for filtering data both based on meta-data and content, it also allows you to expose the data in highly selective fashion to your customers, internal clients, and legal authorities.

wot extended architecture

Using the wot protocol adapters, we can also expose this data in real time to a wide variety of protocols: HTTP, WebSockets, TCP, UDP, MQTT, CoAP, XMPP, and AMQP. These protocol adapters provide a protocol-specific interpretation of the wot data bus resources, complete with authentication and complex authorization rules. For example, you could subscribe to a wot data bus resource as an MQTT topic, or observe it as a CoAP resource. These protocol adapters also support bi-directional communication, allowing a developer to inject their own data back into the wot data bus as a sequence of datagrams or as a stream of data. The flexibility of this system allows for easy cross communication between many different types of applications in different contexts.

October 29, 2015 / LoRaWAN, wot.io, ARMmbed, MultiTech / Posted By: wotio team

One of the more interesting developments in the massive IoT ecosystem is the quickly growing installed base of LoRaWAN devices. As wot.io partners Stream Technologies and MultiTech bring these new technologies to market, we are happy to be able to provide the data services infrastructure to support these new IoT connectivity options. Let's look at some of the options now available.

Networking Options

When people speak of the burgeoning proliferation of connected devices in the IoT ecosystem, one thing that is sometimes overlooked is considering the implications of which network type the actual device(s) run on. For many touted use cases, the more common networks are ill-suited for the task:

  • WiFi takes up too much power and the range is too limited
  • Cellular (3G/LTE) is too expensive per connected unit
  • Cellular (CAT-0) is still a few years out.
  • Bluetooth LE range is too limited.

LoRaWAN, however, along with other low-power wide-area networks such as SigFox, is making strides to fill that void by being all three: 1) low power, 2) low cost, and 3) long range, which makes it a great fit for a wide range of IoT applications.

Demo Setup

Let's get started setting up our LoRa network. For this we're using:

  • MultiTech Conduit Gateway, with a
  • MultiTech MTAC LoRa Gateway Accessory Card installed
  • MultiTech mDot modules, mounted on a
  • MultiTech Developer Kit

The LoRa mDot uses an ARM processor and is ARM mbed enabled, so developing LoRa solutions using these ARM-powered devices is much faster and more pleasant.

If you're using a USB-Serial cord as I am to connect to the DB9 port, you can use the following to see that the line is connected:

> ls /dev/tty.*
...
/dev/tty.PL2303-00001014

The tty.PL2303-* listing above confirms that our Serial line is connected to our USB port.
You can also confirm that you are properly connected when the D7 LED is lit up on the MultiTech Developer Kit (UDK).

I'm using CoolTerm to send in the AT commands to the MultiTech LoRa mDot module which we have mounted on a MultiTech UDK.

AT  
AT&F  
AT+RXO=1  
AT&W  
AT+FSB=7  
AT+NI=1,wotioloranetwork  
AT+NK=1,<ENTER_PASSPHRASE>  
AT&W  
ATZ  
AT+JOIN  
AT+ACK=1

After that's confirmed, we simply drag and drop our compiled binary from the mbed online editor onto the device, and it flashes, connects, and starts sending data automatically!

We can now hop over to our MultiTech Conduit and use the Node-RED interface to see that data is flowing from our LoRa mDot into the Conduit. So let's take that data and pipe it into the wot.io Operating Environment.

From there, that LoRaWAN data can be combined with other data sources and is easily fed into a wide range of data services to truly unlock its value.

You can check out the full module source code over at the ARM mbed website. And check out other posts in the wot.io labs blog and our main website for information on data services for your IoT application.

“The Internet of Things” (IoT), and the amount of data from connected devices, are in the early stages of tremendous growth over the next few years. A recent report from McKinsey estimates its potential economic impact could be up to $11.1 trillion by 2025. The impact of this projected growth is already making its way into the operations of many enterprises. While this number is staggering in its implication, enterprises have a lot of work ahead to create value from the IoT systems and the resulting wave of IoT system data. How many different connected devices or IoT systems are in your home now? Think about a mature Enterprise. The McKinsey report states “interoperability between IoT systems is critical. Of the total potential economic value the IoT enables, interoperability is required for 40 percent on average and for nearly 60 percent in some settings.” While it’s stated that interoperability will leverage the maximum value from IoT applications, are enterprises really ready for IoT data from one or more IoT systems?

Some evidence would suggest not. In one use case brought to light by McKinsey, up to 99 percent of connected device data is not currently being used beyond simple operational awareness such as anomaly detection. Part of this problem can be attributed to closed IoT systems that don’t allow for interoperability between the local and cloud based IoT systems and the data service providers that can create actionable results for the Enterprise. Another part of the problem is caused by not having a solid solution for Big Data aggregation combined with a good Enterprise Application Integration strategy.

Here are a couple of questions enterprises need to take into consideration in order to succeed when deploying IoT platforms:

  1. How flexible is the enterprise in terms of working with multiple IoT systems providers and data services in an interoperable environment?
  2. Does the enterprise have access to Enterprise Application Integration (EAI) and Integration Platform-as-a-Service (iPaaS) solutions?

It’s fairly straightforward to connect device data from one IoT system to one data service provider for analysis and reporting, but the challenge comes in aggregating data from multiple IoT systems to be processed by multiple best-in-class data service providers to get the most out of your data. This is where the need for interoperability becomes very important. It’s difficult to scale your solution to its maximum potential when limited by closed systems or locked data.

There is technical prowess required to make IoT solutions work together. Enterprises that once tried to consolidate their systems with one all-encompassing vendor are now embracing the interoperability of many specialty vendors to provide the best operational efficiency and accelerated deliverables. Before IoT systems, many successful enterprises were already utilizing a mix of on-premise EAI platforms and cloud-based iPaaS solutions. Major vendors offering EAI and cloud-based iPaaS solutions have started to think about the integration of connected device data from multiple IoT and Machine-to-Machine (M2M) systems, but have yet to complete the solution. If your enterprise wants to become a part of the IoT landscape, you need to have good answers for how you’re going to integrate multiple IoT platforms and create actionable results from IoT data.

To learn more, visit wot.io.

October 28, 2015 / wot.io, ARMmbed / Posted By: wotio team

This past week, I managed to get some free time to dig into ARM mbed OS, and get a simple application talking to wot.io. The most obvious way to do this was to use ARM's own mbed-client interface to communicate with an instance of the mbed Device Server. As mbed Device Server (MDS) is already integrated into wot.io's Data Service Exchange, as soon as I had the mbed-client code integrated into an application, I should be able to get my data into an existing workflow.

Looking at the supported board options, and what I had laying on my desk already, I decided to use the Freescale FRDM-k64f, since its Ethernet interface has a supported lwip driver. The board comes prepopulated with an FXOS8700CQ accelerometer and magnetometer, which has a very similar interface to the Freescale MMA8452Q that I've used in a few of my projects. While there is no obvious direct mbed OS support for the accelerometer in the FRDM-k64f hardware abstraction layer, we should be able to use the HAL-supported I2C interface to talk to it. ARM already provides an FXOS8700Q sensor driver under the Apache 2 license that can be easily ported to mbed OS.

Reading over the mbed OS and yotta documentation, I managed to set up a local development environment on my Mac, as well as a home lab environment using a Docker container. The Docker container makes it easy to move my development environment from machine to machine, such as from home lab to office lab and back again.

Creating a new project using yotta is straightforward using the text-based wizard:

Here I've created a project called accel, and yotta has generated a skeleton module.json file which describes to yotta how to manage the dependencies for our application. To install our own dependencies, we can use yotta install (in this case, yotta install mbed-client) to inject our dependency.

At the time of this writing this will modify the module.json file as follows:
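A minimal sketch of what the resulting dependencies section looks like, assuming mbed-client is the dependency being installed (the file yotta actually generates contains additional fields):

{
  "name": "accel",
  "version": "0.0.0",
  "dependencies": {
    "mbed-client": "^1.1.15"
  }
}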

If you were to do this at a later date, the version string saying ^1.1.15 will probably be different. ARM mbed OS is undergoing rapid development, and the file I generated just last week was ^1.1.11, almost a patch a day! This rapid development can be seen in all aspects of the system. On any given day, the yotta build command which actually compiles the binary application, will return different deprecation warnings, and occasionally entire libraries are in an unusable state. Generally the optimized builds will compile cleanly, but I have had problems with yotta build --debug-build failing to compile due to register coloring issues. That said, as ARM mbed OS leaves beta, I expect these issues will be resolved.

To set up the development environment for the Freescale FRDM-k64f, it is also necessary to select the appropriate target, which we do with yotta target frdm-k64f-gcc:

This configures our build environment to build for the FRDM-k64f using arm-none-eabi-gcc. A complete list of available target architectures can be obtained using the yotta search target command:

Switching which C++ compiler you use requires switching the full target. Switching targets will fetch the associated target dependencies as well, and as such it is important to build after you've selected your target. With the target set to frdm-k64f-gcc, we can build the source code for this project.

The behavior is as follows:

  • initialize i2c
  • initialize FXOS8700QAccelerometer
  • initialize Ethernet adapter
  • acquire a DHCP lease
  • initialize an ipv4 tcp/ip stack
  • pick a random port and
  • create a M2MInterface object to connect to MDS
  • create a M2MSecurity context for our MDS connection
  • create a M2MDevice object to create the device's OMNA LWM2M resources
  • create a M2MObject that will actually represent our device sensor tree
  • create two M2MResources for our x and y axis (not following the OMNA LWM2M conventions)
  • add our M2MObject and M2MDevice to a list to be used to update registrations
  • setup a timer to update the registration every 20 seconds
  • enable the accelerometer
  • setup a timer to sample the accelerometer every 5 seconds
  • setup a callback to setup the device and resource registrations
  • start the main scheduler loop

To build the application, we can simply use the yotta build command to generate an accel.bin file in build/frdm-k64f-gcc/source/. This is the build artifact that is our application binary. The boot loader on the FRDM-k64f board knows how to load this file on reset. We can install this application by copying the accel.bin file to the USB storage device (on my Mac, /Volumes/MBED).

Once the application binary is installed, the device registers itself with mbed Device Server (MDS) running on our demo cluster. The data is then available to all of the services which request device notifications. The routing layer inside of the wot.io data service exchange ensures that only those users who have the rights to see the data from a given device can see it.

As the wot.io data service exchange supports multiple protocols, we can use an off-the-shelf command line client to read the data from the data service exchange. Just to quickly test getting this into a script, we can use wscat, a node module that speaks the RFC6455 WebSocket protocol, to connect and watch the device notifications arrive.
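If you would rather do the same from a script than the wscat command line, a minimal sketch with the ws module might look like the following; the endpoint URL and any authentication are placeholders for whatever your exchange binding provides:

// Sketch only: connect to a data service exchange WebSocket feed and print messages.
var WebSocket = require('ws');
var ws = new WebSocket('wss://dse.example.wot.io/feeds/accel');   // placeholder endpoint

ws.on('open', function () { console.log('connected, waiting for device notifications...'); });
ws.on('message', function (data) { console.log(data.toString()); });
ws.on('error', console.error);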

October 28, 2015 / 5G / Posted By: Kelly Capizzi

5G will require many fundamental changes and innovations to the overall network architecture, as well as exploit new system degrees of freedom. This includes the consideration of other potential waveform candidates. Wireless Week published an article written by InterDigital’s Dr. Afshin Haghighat, senior staff engineer, that discusses potential 5G waveform candidates that can overcome limitations of the orthogonal frequency-division multiplexing (OFDM) waveform.  

The article, titled Which Wave Will 5G Ride, examines five potential waveform candidates that Afshin and his colleagues recently reviewed in this EURASIP editorial.  The five waveforms? Faster-than-Nyquist, Filter Bank Multi-Carrier, Universal Filtered Multi-Carrier, Zero-Tail DFT-s-OFDM and Generalized Frequency Division Multiplexing. Afshin provides a brief explanation of the five waveforms along with their key benefits.    

The waveform that 5G will rely on is still to be determined. However, Afshin expresses that there is one clear thing:  when it comes to 5G and waveform, one size will not fit all.  

Click here to read the full article or learn more on 5G at the vault.  

October 26, 2015 / NFV, SDN, 5G / Posted By: Kelly Capizzi

InterDigital’s Dirk Trossen, Principal Engineer, recently joined RCR Wireless’ Dan Meyer in an episode of NFV/SDN Reality Check. During the episode, Dirk discusses the role of NFV and SDN in 5G along with the recent IP-over-ICN-over-SDN implementation demonstrated by a team of academic and industrial researchers from InterDigital Europe and University of Essex.  

Dirk explains that virtualization is present everywhere in 5G – radio access, core network space, etc.  5G will not just be about a bigger pipe. It is expected to be an extremely intelligent generation of mobile technology. Dan and Dirk discuss some of the industry challenges on the path to 5G. For example, Dirk points out that rethinking infrastructure ownership and getting mobile operators to release some control is a major challenge that the industry will face in this evolution.  

Want to learn more? Click below for the full episode:

 

The Internet of Things (IoT) is a hot industry, poised for substantial growth. In fact, International Data Corporation (IDC) projects the industry will grow from $655.8 billion in 2014 to $1.7 trillion in 2020. That’s over $1 trillion in growth in just six years! Because IoT is such a hot topic in the tech industry, Tech in Motion in NYC hosted a meetup event on Tuesday, October 13th 2015, sponsored by Grind + Verizon to discuss all things IoT. wot.io Founder and CEO Tom Gilley was invited to speak along with key speakers from other IoT companies in New York City.

The speakers talked about the IoT problems their respective businesses are solving, as well as their perspective of the direction IoT is moving in. From sensors and wireless connectivity becoming common in household products, to triggering automatic ordering of household products when they get low, to the numerous types of wearable devices companies are working to create, it’s easy to see why IoT is getting so much attention. Dash Founder & CEO Jamyn Edis says “IoT is clearly a macro trend that is engaging professionals in the tech space. The question is, when does a broad trend become a real, day-to-day, revenue-driving opportunity for companies and people working at them? We are not yet there.”

When asked about the most pressing issue in IoT right now, Ted Ullrich, Founder of Tomorrow Lab, said “On the commercial product side, there is an open pasture for creating new devices, brands, and services. Wireless protocols like WiFi, Bluetooth, and Cellular are sufficient for these products for now, but planning an infrastructure for the future of 30+ billion things on the internet is a different story.” Quite right he is.

Since there are no dominant IoT standards, many companies have a mix of internal, closed IoT platforms with plans to adopt new platforms like Thingworx. Thingworx is a well known brand, and is good at what it does. The organizational result is a mix of IoT platforms that have interoperability issues. McKinsey Institute recently stated that “Interoperability is critical to maximizing the value of the Internet of Things. On average, 40 percent of the total value that can be unlocked requires different IoT systems to work together.” This is the big picture reason why wot.io exists: to create an interoperability environment where device management platforms can seamlessly and securely connect with enterprise applications for data analysis, reporting, automation and much more.

It’s no surprise to us this event was a success, with a packed room and over 500 people RSVP’d to attend. The audience was engaged, enthusiastic and asked plenty of questions, both during the event and after the talks were over. Where do you see the IoT industry heading? Leave your comments below.

*McKinsey Institute quote from the June 2015, “Unlocking the full potential of the Internet of Things”

Hardware Recap

In Part One of this demo, we took two Texas Instruments Sensortags, connected them using Bluetooth LE to a Beaglebone Black, ran a Node.js gateway to connect to DeviceHive1, and saw it all work. This is the diagram of our hardware setup, as completed at that point:

Fantastic! Our hardware works. Now we are going to hook some data services up using the wot.io data service exchange™, and do some fun stuff with it.

Data Services

Now let's expand it to include everything we'll do with the data services. We are going to use scriptr, Circonus, bip.io, and a Nest thermostat. Here's the plan:

  1. Send the data from DeviceHive to scriptr for processing
  2. Using scriptr, massage our data, and make some logs
  3. Send the data from scriptr to Circonus for graphing
  4. Send the data from scriptr to bip.io for alerting and control of the Nest thermostat

Message Flow Graph

Below is a diagram of the message flow. All the green lines are implemented using the wot.io data service exchange™ (which I also call the bus), connecting data service sources to data service sinks.

Data Services Message Flow Graph

You'll notice that some of the scripts, bips, and graphs are named temperature, and others are named color. I have a confession - to save time, I just stuck with the default setup that comes out of the box with wot.io's Ship IoT initiative which converts temperature units and maps them onto the color spectrum for use with some Philips Hue bulbs like we saw in an earlier post. I just figured that since wot.io has so many data services, and I have so little time, why not just re-use what was already done? So, let's just agree to ignore the fact that scripts named color might no longer have anything to do with color. Maybe we're just coloring our data. Ok? Onward!

Scriptr

scriptr.io Screenshot

Our data's first stop after leaving DeviceHive is scriptr, so we'll start there. The scriptr.io data service offers a very fast way to create custom back-end APIs that process your data in the cloud using JavaScript. This enables fast, powerful development for your Internet of Things (IoT) and other projects, even more so when tied to other data services via wot.io. All the messages come into a script called transform, as defined by the wot.io bus configuration.

scriptr: transform

The first task we perform on our message stream is a data normalization step. You'd expect to see something like this in most real-world applications—a layer of abstraction that transforms incoming messages to a unified format for subsequent services to consume. This script will massage the incoming messages into this simple JSON structure, and remove bits that may no longer be relevant now that we are outside of the local network that the originating devices were using:

[ device_id, { key:value, ... } ]

for keys:  
key: "name" | "value" | "units"

for values:  
name: "temperature" | "humidity"  
value: a floating-point number  
units: "C" | "F" | "%RH"  

For example, from this input message,

{"action":"notification/insert","deviceGuid":"ca11ab1e-c0de-b007-ab1e-de71ce10ad01","notification":{"id":1558072464,"notification":"temperature","deviceGuid":"ca11ab1e-c0de-b007-ab1e-de71ce10ad01","timestamp":"2015-10-15T20:28:33.266","parameters":{"name":"temperature","value":23.7104174805,"units":"C"}},"subscriptionId":"00000000-6410-4e1a-b729-000000000000"}

...we get this output message:

["ca11ab1e-c0de-b007-ab1e-de71ce10ad00",{"name":"temperature","value":23.7104174805,"units":"C"}]

Now we are ready to sink these normalized messages back onto the bus for further processing by other data services.

As the message flow graph above illustrates, messages from transform will use the bus to fan out and sink into convert and color in scriptr, and also into bip.io and Circonus.

Here's our full transform code:

// Convert DeviceHive Notification to well known format of [<devicehive deviceId>, <devicehive parameters>]
var log = require("log"),  
    data = JSON.parse(request.rawBody).data,
    payload = data && data[0];
log.setLevel("DEBUG");  
log.debug("testraw: " + JSON.stringify(data[0]) );  
if (payload && payload["deviceGuid"] && payload["notification"] && payload["notification"]["parameters"]) {  
  var response = JSON.stringify([payload["deviceGuid"], payload["notification"]["parameters"]]);
  log.debug("response: " + response);
  return response;
}
log.debug("Invalid Request: " + JSON.stringify(payload))  

scriptr: convert

This is a utility set up to demonstrate data transformation and message decoration. We take messages from the incoming data source, parse out the type and units, and create a new data structure with additional information based on the incoming message. This data source will be sent in a message to whatever sink is configured.

A more complex implementation could take incoming data, perform lookups against a database, add semantic analysis, analyze for part-of-speech tagging, or do any number of other things. Complex message graphs composed of small, well-defined services let us build up behaviours from simple parts—much like the Unix philosophy when it comes to small command-line tools.

In this case, we convert Celsius to Fahrenheit, or Fahrenheit to Celsius, depending on what the incoming format is, and put both values into the resulting message. For humidity we simply pass along the value and label it as rh for relative humidity.

  switch (units) {
    case "c":
      // The incoming reading is in celsius. Convert to Fahrenheit
      response.tf = temp && (temp * 9 / 5 + 32).toFixed(1) || "N/A";
      response.tc = temp && temp.toFixed(1) || "N/A";
      break;
    case "f":
      // The incoming reading is in Fahrenheit. Convert to celsius
      response.tf = temp && temp.toFixed(1) || "N/A";
      response.tc = temp && ((temp - 32) * 5 / 9).toFixed(1) || "N/A";
      break;
    default:
      response.error = "unknown units";
  }

These demonstration messages currently sink into Scriptr's logs, and can be used in future systems. Here's the result of a temperature message, and we can see the incoming ºC data was converted to ºF and logged:

scriptr convert logs

scriptr: color

Once again, this script was originally meant to control a Philips Hue lamp, but we've co-opted it to send data along to bip.io and control our furnace. (I've left in the color calculations if you're curious). It would be trivial to expand the message graph in bip.io to do the lamp control, I just didn't have the time to set it up. Aren't I quite the model of efficiency today?

// Unpack the parameters passed in
var log = require("log"),  
    timestamp  = request.parameters["apsws.time"],
    data = JSON.parse(request.rawBody).data,
    reading = data && data[0],
    deviceId = reading && reading[0];

if (reading && (reading[1] instanceof Object) && reading[1].name == "humidity") {  
  // we just drop humidity messages here, as this is intended to control
  // the thermostat settings later on and nothing else at this time.
  return null;
}

var celsius = reading && (reading[1] instanceof Object) && reading[1].value;

// Convert temperature in range of 0C to 30C to visible light in nm
// 440-485 blue, 485-500 cyan, 500-565 green, 565-590 yellow, 590-625 orange, 625-740 red
// 300nm range, 30C range
var temperature = celsius < 0 ? 0 : (celsius > 30 ? 30 : celsius),  
    color = 440 + (300 * (temperature / 30));

// Populate response values or default to non-value
var response = {  
  time: timestamp,
  temperature: celsius.toFixed(2),
  color: parseFloat(color.toFixed(2)),
  device: deviceId || "N/A"
};
log.setLevel("DEBUG");  
log.debug("response: " + JSON.stringify(response));  
return JSON.stringify(response);  

Circonus Graphs

Circonus is designed to collect your data into graphs, dashboards, analytics, and alerts. While it is often used for DevOps or IT Operations style monitoring, we're showcasing how well it serves as a key component of an IoT solution. Today, we'll simply use it to graph our timeseries messages and send ourselves an alert if the data stops flowing. This could indicate a problem with the battery in the Sensortag, or that we are out of range. Use your imagination, the sky is the limit, and Circonus has a powerful feature set.

Checks

You can see the four device IDs here, and the checks that were set up as part of this demonstration message flow.

Graphs

As the metrics are collected, Circonus tracks them and can create graphs and dashboards for you. There's only a bit of data shown in the graph here because I've only had it running for a few minutes.

There are some powerful analytics tools and alerts at your fingertips here. It's hard to show with the small amount of data, but you can use anomaly detection, trend prediction, and many other functions on your data. This is a simple sliding window moving average, which we could use to smooth out spurious temperature readings and prevent the furnace from turning on needlessly.
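Circonus computes these analytics for us, but to illustrate the idea, a sliding window moving average is nothing more than the following (the window size of 3 is arbitrary):

// Illustrative only: Circonus applies this kind of smoothing server-side.
function movingAverage(readings, windowSize) {
  return readings.map(function (value, i) {
    var window = readings.slice(Math.max(0, i - windowSize + 1), i + 1);
    var sum = window.reduce(function (a, b) { return a + b; }, 0);
    return sum / window.length;
  });
}

// Smooth out a spurious spike in temperature readings
console.log(movingAverage([21.9, 22.0, 30.5, 22.1, 22.0], 3));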

Alerts

Circonus makes it simple to notify you with an alert if the data stops flowing. This is essential for mission-critical systems.

bip.io Workflow

We've covered the details of creating bip.io workflows elsewhere, and many of the details like endpoints, auth tokens, etc. are already taken care of for us automatically by the wot.io integrations and tooling.

Here in the dashboard we can see the four bips that are referenced in the above message flow graph. Each has the device ID embedded into the name, and the endpoint.

bips

We'll have a look at two of them, both for the sensor ID ending in 00 (which is from the device MAC ending in :70, way back up the chain!). First, the alert.

alert bip

Here we see the overall message flow inside the alert bip. Incoming messages from the wot.io bus are processed by a math expression, a truthyness check, and if it all passes the criteria, an email alert is sent.

temperature email alert bip

Here's the expression. It's basic; we are simply checking if the temperature is too low, which could indicate some problem with the heating system:

temp check expression

The truthy check looks at the result of the previous Calculate expression, and will trigger the following node in the graph if it's true:

truthy check

And finally, we send an email alert, going to an address we specify, with the data embedded in it via template replacements:

email alert config

Simple!

Nest Temp Control bip

Now we have a simple bip set up to take the incoming temperature message, calculate an error factor, and generate an offset temperature setting for the Nest thermostat. Unfortunately, Nest doesn't have an API call that lets us send the sensor temperature in directly. Granted, that's an odd use case, but they probably haven't heard of this cool idea yet ;)

Temp Control Bip

With the lack of the sensor API, we need to get creative. We'll take the value from the sensortag, and calculate an error offset:

(desired_temp - sensed_temp)

Then we'll combine the error offset with the desired temperature:

(desired_temp - sensed_temp) + desired_temp
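In plain JavaScript the calculation amounts to this little function (a sketch; the bip's Math function does the same thing with a hard-coded set point):

// Compute the temperature to ask the Nest for, given what the remote sensor reads.
function nestSetPoint(sensedTemp, desiredTemp) {
  return (desiredTemp - sensedTemp) + desiredTemp;
}

// A cold spot reading 18C with a 20C target asks the Nest for 22C
console.log(nestSetPoint(18, 20)); // 22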

Here it is in the Math function in the bip, with a set point of 20ºC:

Temp offset calculation

This will give us a new set point for the Nest, and we send it along in the bip as pictured above. This is a basic setup, and you would want to refine it for long-term use. I'd suggest adding hysteresis to prevent the furnace from turning on and off too rapidly when close to the set point, and calibrating a PID control loop to smooth things out.

Wrap-Up

This concludes our writeup of what turned out to be a rather complex message flow graph. We started with a local network of devices, built a hardware and software gateway to get those devices out to a device management platform, connected that to the wot.io bus, and wired up some powerful tools whose depths we have only started to plumb.

Yet even with all the complexity and details that we covered, you can see how simple it is to compose behaviors using the wot.io data service exchange™. And that is the whole point: to get us quickly to a working system. And since it's based on a fully scalable architecture, your solution is ready to grow with you from prototype into production.

In other words, you can focus on Shipping your IoT!

See you next time!

Link to Part One of this series

  1. DeviceHive, like wot.io, is a member of the AllSeen Alliance.

October 16, 2015 / wot.io, bip.io, email notifications / Posted By: wotio team

At wot.io we're always working on interesting new things and as you can see here, we like to blog about them. With everything going on, people were asking for an easy way to be notified when new blog posts go up. The answer was to use one of our data services, bip.io, to watch the blog RSS feed and send email when a new post goes up. In this post, we'll explain how we did it and how you can use bip.io to set up a similar notification system for any blog that has an RSS feed.

What will you need?
1. Link to RSS feed of blog (usually blogurl.com/rss)
2. Free bip.io account
3. Free or premium Mailgun Account. (You can also use Mandrill)

Step 1: Sign up on bip.io (It's free!) or sign in if you already have an account.

Step 2: In this step, we'll create a new bip and add Syndication (or RSS) feed as an Event.

Click on Create a bip

Proceed to Select Event Source

Find Syndication in the list of available pods

Step 3: In this step we will configure Syndication Pod to 'listen' to the RSS feed.

Click on Subscribe To A Feed

In this example, we'll subscribe to a labs.wot.io feed, but the process is the same for most syndication feeds. Enter a Feed name and Feed URL

Click OK

Step 4: Add an Email Notification.

Click Add an Action

Select "Mailgun" from the available pods

You will be asked to authenticate with the API Key from Mailgun if you are using it for the first time on bip.io. The API Key can be found in your Mailgun account by going to Mailgun Dashboard --> Click on Domain Name (Tutorial)

Choose the "Send an Email" action

Connect the incoming RSS feed with Mailgun Pod by dragging your mouse pointer from Syndication pod to Email pod. It will look like this

Step 5: Configuring Email

Double-click on Mailgun pod to open the configuration window

Enter details like From, Mailgun Domain Name and recipient address.

Next, configure the subject of email. bip.io offers various attributes to include in the text like Post Title, Summary, Author, Date etc.

Post Author is selected by default on bip.io.
Here's what my subject and email text look like:

The email body can hold HTML formatting and attributes.

Here, I have added attributes Title, Article Summary and Link. They can be separated by <br /> tags to add line breaks in the email.

Click OK.

We're all set! Go ahead and click Save to save your bip.

Now that it's running, here's how email notifications show up in my Gmail inbox

More Pods and Actions

This is a simple bip, but it handles the complexity of parsing the incoming feed, making it easy for you to format the outgoing email message. Plus it handles all of the details of communicating with the Mailgun API. And there are many more things you can do with bip.io like adding some functions to watch for certain things in incoming messages, modifying the content before you send your email, or sending email to different people depending on the content. You can also add many more notification types including sending text messages (Twilio), posting to Slack, or even creating your own curated RSS feed.

October 15, 2015 / Video, DASH, SCTE / Posted By: Kelly Capizzi

The Society of Cable Telecommunications Engineers’ (SCTE) Digital Video Subcommittee recently established a new working group, “Next Generation Systems,” and an InterDigital senior manager was named co-chair.  SCTE serves as the technical and applied science leader for the cable telecommunications industry and is the premier membership organization for technical cable telecommunications professionals.

The SCTE Digital Video Subcommittee identifies requirements and develops standards for the design and operation of systems for delivery of video, audio, and associated data for the cable industry.  The subcommittee features multiple working groups focused on areas that include video and audio services, stereoscopic video, DASH, and network architecture and management, among others.

The purpose for the creation of the Next Generation Systems working group is to look for new technologies and new areas that may provide value to the North American cable industry. The group will be led by co-chairs Alex Giladi, Video Software Architect at InterDigital, and Yasser Syed, Distinguished Engineer at Comcast. Alex leads adaptive streaming interoperability and standardization projects, as well as MPEG Systems work within MPEG, and is involved in the DASH Industry Forum. His role in SCTE underscores InterDigital’s involvement at the forefront of the evolution of broadband video technologies.  

To learn more about InterDigital's role in this technology, visit the vault.

October 14, 2015 / 5G, TIA / Posted By: Kelly Capizzi

The fifth generation wireless standard is expected to underpin new technology deployments as well as future technologies that at this time can only be imagined. A recent survey completed by TIA and sponsored by InterDigital reveals mobile operators’ views on 5G development and deployment.

“TIA 5G Operator Survey 2015,” discloses that most mobile operators believe 5G will require a significantly different system architecture from today’s cellular network and that 5G will have multiple radio interfaces. It also shows that a dominant majority of mobile operators expect Asia to lead the development and deployment of 5G, ahead of North America and Europe.

On September 28, TIA NOW, TIA’s online video network, hosted a live video panel with industry leaders and experts to discuss the findings from the TIA 5G Operator Survey. Abe Nejad, Anchor and Head of Editorial Content, TIA NOW, moderated a dynamic discussion that included panelists Gabriel Brown, Senior Analyst at Heavy Reading and author of the 5G Operator Survey and White Paper; Shawn Covell, VP of International 5G Advocacy at Intel; Dr. Michael Peeters, Chief Technology Officer of the Wireless Business Line at Alcatel-Lucent; Dr. Jeff Reed, Professor at Virginia Tech; and B.K. Yi, Chief Technology Officer at InterDigital.

“5G should be a transformational technology, impact other industries and ultimately society…within the survey response, it is interesting that operators aspire to all those things, but are being quite pragmatic about achieving those things,” stated Gabriel when asked about the operators’ responses to the TIA survey.

To learn the other industry leaders’ takes on the 5G Operator Survey findings, click below to watch the full video panel:

Winter is Coming

As I write this, it is mid-October. For those of us who live in the northern lands, our thoughts are turning towards turning up the thermostats to fight off the chill.

Snow in Buffalo, NY (modified from image © Jason Safoutin, CC BY 2.5: https://en.wikipedia.org/wiki/Lake_Storm_%22Aphid%22#/media/File:Buffalo_snow_storm1.jpg)

Trouble is, some of these old houses aren't very well insulated, and they are filled with warm spots and cold spots. Thermostats regulate temperature based on the sensor inside them, and they are pretty good at it. But I was thinking, what sense does it make to have the thermostat regulate its own temperature, making it nice and comfortable where the thermostat sensor is? I want it nice and comfortable where I am, not where the thermostat is! And I move around. And I move around into cold spots in these old houses. I may be in the kitchen, or basement, or spare room, or sleeping. This needs to be fixed.

Tech to the Rescue!

Texas Instruments offers Sensortags, development kits for their CC2650 ultra low-power Bluetooth MCU chips. Combined with a BeagleBone Black (powered by an ARM Cortex-A8 CPU at 1GHz, the Texas Instruments Sitara AM3358BZCZ100), they will be the basis for our solution here.

We're going to connect these using DeviceHive - an open source device management platform developed by DataArt Solutions, and available as a data service on the wot.io data service exchange™. DeviceHive helps you manage devices after they are deployed—handling registration, commands sent to and data received from the device, and a number of other useful tools. From DeviceHive, we'll use the wot.io data service exchange to integrate with additional data services from our partner exchange and finish our system.

Prerequisites

If you want to follow along with this demo, here's what you'll need (all of it shows up in the build below):

  • Two TI Sensortags (the current CC2650, or the older CC2541 version) with CR2032 batteries
  • A Beaglebone Black and a Bluetooth 4.0 LE USB dongle
  • A 3.3V USB-to-serial adapter for the console (optional, but handy)
  • The bluepy Python library and Node.js installed on the Beaglebone Black
  • Access to DeviceHive on the wot.io data service exchange

Hardware Setup

There are a number of ways you can connect to your Beaglebone Black for development. I hooked up a 3.3V USB to Serial adapter (remember, not 5V!) to the console port, an Ethernet connection for outbound network access and SSHing in, and USB for power. I've got a little USB power board with a switch on it from Digispark, which is very handy when doing USB-powered hardware stuff, and will save your USB ports some wear and tear. The Bluetooth dongle is connected to the Beaglebone Black's USB port.

Note: You'll need to have the Bluetooth dongle plugged in before you boot or it may not work. There are known issues with hot-plugging Bluetooth dongles on this board.

Here's what my setup looked like:

System Architecture

Here is the block diagram for what I'm planning, and the message flow. To keep things simple for now, we'll be talking about the data services in Part Two of this blog entry.

On the left, we have the two TI Sensortag boards. On each board we have the SHT21 sensor and the TI CC2541 (or CC2650) controller. The SHT21 communicates via I2C. The CC2541 MCU speaks Bluetooth, and communicates with the bluepy utilities on the Beaglebone Black. The Node.js gateway application uses those bluepy utils to poll the Sensortags.

In turn, the gateway uses Web Sockets to talk to the DeviceHive service on the wot.io data service exchange. It first handles device registration, which will tell the DeviceHive service there is a new device if it has not seen it before. Once registered, it creates a persistent Web Socket connection, periodically sends sensor readings, and listens for commands from DeviceHive. (We aren't using any of those commands for this demo, but it's easy to do using the wot.io DeviceHive adapter. It's especially powerful when those commands are generated dynamically by custom logic in a wot.io data service, say, in response to sensor readings from some other device!)

The wot.io DeviceHive adapter can then subscribe to receive the device notification data that the gateway sends to DeviceHive. We've configured wot.io to route those device messages to a number of interesting data services. But details on that will have to wait for Part Two!

Bluetoothing

First we need to talk to the Bluetooth Sensortags. I used the bluepy Python library by Ian Harvey for this, as it includes a sample module that's already set up to speak Sensortag. Great time-saver!

If you are using the current CC2650 sensortags, it should work out of the box. If instead you have the older CC2541 Sensortags, in bluepy/sensortag.py, you should comment out these lines:

262     #if arg.keypress or arg.all:  
263     #    tag.keypress.enable()  
264     #    tag.setDelegate(KeypressDelegate())  

Those lines support features that aren't present on the older Sensortag, and it will error on initialization.

Some tools you should know about for working with bluetooth include hciconfig and hcitool. Also, gatttool is useful.

You can bring up and down your Bluetooth interface just like you would with Ethernet, perform scans, and other things as well:

root@beaglebone:~# hciconfig  
hci0:   Type: BR/EDR  Bus: USB  
        BD Address: 5C:F3:70:68:C0:B8  ACL MTU: 1021:8  SCO MTU: 64:1
        DOWN
        RX bytes:1351 acl:0 sco:0 events:60 errors:0
        TX bytes:1333 acl:0 sco:0 commands:60 errors:0

root@beaglebone:~# hciconfig hci0 up  
root@beaglebone:~# hciconfig  
hci0:   Type: BR/EDR  Bus: USB  
        BD Address: 5C:F3:70:68:C0:B8  ACL MTU: 1021:8  SCO MTU: 64:1
        UP RUNNING PSCAN
        RX bytes:2201 acl:0 sco:0 events:97 errors:0
        TX bytes:2022 acl:0 sco:0 commands:97 errors:0

We need to get the Bluetooth MAC address of the Sensortags we are using. From the console on the Beaglebone Black, we will use the hcitool utility to get it. Here's the procedure, do it one at a time for each tag if you have multiple:

  1. Insert a fresh CR2032 battery into the Sensortag.
  2. Press the side button to initiate pairing; the LED will begin blinking rapidly (and dimly!)
  3. On the Beaglebone console, initiate a Bluetooth LE scan:

    root@beaglebone:~# hcitool lescan
    LE Scan ...
    78:A5:04:8C:15:70 (unknown)
    78:A5:04:8C:15:70 SensorTag

  4. Once you get your tag's Bluetooth MAC address, you can hit control-C to cancel the scan.

Once you know the MAC address, you can use the tools included in bluepy to easily talk to the tag. Try it out like this (first making sure the LED is blinking on the tag by pressing the switch, if it went to sleep):

root@beaglebone:~# ./sensortag.py 78:A5:04:8C:15:71 --all  
Connecting to 78:A5:04:8C:15:71  
('Temp: ', (24.28125, 20.36246974406589))
('Humidity: ', (24.686401367187493, 32.81072998046875))
('Barometer: ', (23.440818905830383, 979.6583064891607))
('Accelerometer: ', (-0.03125, 0.015625, 1.015625))
('Magnetometer: ', (-12.847900390625, 36.224365234375, 166.412353515625))
('Gyroscope: ', (-3.0059814453125, 3.082275390625, -0.98419189453125))

The sensor on the tag we'll be using for this demo is the SHT21 temperature and humidity sensor. We will use temperature to start, but we could easily expand our algorithms to take humidity into account, and adjust the heat accordingly. There are tons of other applications possible here, too!

Note also that I further modified the sensortag.py script to give us raw numerical output for the temperature and humidity, separately, using the -Y and -Z flags. This made subsequent code simpler.

DeviceHive Gateway

DeviceHive lets devices register themselves with a DeviceHive server instance, and then send along data. There are mechanisms in DeviceHive to send data and commands back to the devices as well, which could be used to update firmware, or take actions based on processing in one or more data service providers.

In our github repo is a device gateway coded with Node.js, using DeviceHive's Javascript libraries and a websockets connection.
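The full gateway lives in the repo; the sketch below is only meant to convey its shape: poll the tag with our modified sensortag.py, then push a temperature notification upstream over a WebSocket. The endpoint, device id, and exact DeviceHive message framing are placeholders here, and the real gateway uses the DeviceHive JavaScript library rather than a raw socket:

// Rough sketch of the gateway loop (placeholders throughout; see the repo for the real thing).
var execSync = require('child_process').execSync;
var WebSocket = require('ws');

var DEVICE_GUID = 'ca11ab1e-c0de-b007-ab1e-de71ce10ad00';                   // placeholder id
var ws = new WebSocket('wss://devicehive.example.wot.io/websocket/device'); // placeholder endpoint

ws.on('open', function () {
  setInterval(function () {
    // -Y is our local sensortag.py modification that prints the raw temperature value
    var celsius = parseFloat(execSync('./sensortag.py 78:A5:04:8C:15:70 -Y').toString());
    ws.send(JSON.stringify({
      action: 'notification/insert',
      deviceGuid: DEVICE_GUID,
      notification: {
        notification: 'temperature',
        parameters: { name: 'temperature', value: celsius, units: 'C' }
      }
    }));
  }, 10000);
});
ws.on('error', console.error);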

Demonstration

Here's a quick walk-through of the hardware setup, gateway code, and a demonstration of the Sensortags sending data through to the DeviceHive device management platform:

To Be Continued

That wraps up part one of this demo, which covers the hardware and device management setup. In the next installment, we'll look at the data services, and cook up the magic that will control our thermostat. Stay tuned!

UPDATE: Part Two is now published.

October 12, 2015 / Posted By: Kelly Capizzi

D. Ridgely (Ridge) Bolgiano, former Vice President, Research, and Chief Scientist at InterDigital and longtime member of its Board of Directors, passed away on October 3, 2015. Ridge was a scientist, engineer, inventor and relentlessly creative mind who helped found and grow what has become an almost $2 billion publicly traded company, owned and operated a network of radio stations, made enormous contributions to his alma mater, and lived a unique life.

At InterDigital, Ridge helped pioneer many of the developments that launched the mobile device industry, which today totals $3 trillion in value around the world. Ridge invented several of the company’s key products (one of which is part of the Museum of Modern Art’s permanent collection), made lasting contributions to mobile, and pioneered the company’s licensing program. Ridge was Vice President, Research, and Chief Scientist at InterDigital until his retirement in 2008, served on the company’s Board of Directors until 2009 and was inducted into InterDigital’s Hall of Fame in 2006. For InterDigital and its predecessor companies, he is a named inventor in 25 issued patents in the United States and over 100 more patents and applications worldwide.

Ridge graduated from Haverford College with a degree in physics, specializing in Radio Frequency Transmission and Propagation. In addition to his work as a scientist and engineer, he owned and operated several radio stations (Key Broadcasting) in the Philadelphia area.

“Not only did Ridge make very lasting contributions to InterDigital, but he helped establish some of the basic technologies that underpin the mobile industry today,” said William Merritt, President and CEO of InterDigital. “We are deeply saddened by this loss and offer his family and friends our sincerest condolences.”

October 8, 2015 / Posted By: wotio team

Monitoring is a vital tool when developing, optimizing and understanding the health of your application services and infrastructure. wot.io has several data monitoring services in our data service exchange™ and we deploy and use a few of these as part of our own monitoring system. In this blog we're outlining how we use these monitoring services with a tour of our virtual machines, message bus and third party data services in our data service exchange.

Our monitoring setup can be broken down into 3 basic parts:

  • automated deployment
  • historical metric collection
  • host checks and alerting

Automated deployment

We use the power of docker and wot.io's configuration service for automated service deployment. Each newly deployed virtual machine (VM) automatically spins up with a default set of monitoring client containers.

Historical Metrics

We use a Graphite server fronted by Tessera dashboards to collect and view our historical metrics. By default, we collect metrics for each host (collectd) and all of its running containers (dockerstats). Both containers send metrics to a Graphite server, which Tessera queries to populate its dashboards.
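Under the hood this is just Graphite's plaintext protocol: a metric path, a value, and a timestamp per line. A sketch of what one datapoint looks like on the wire (the host and metric path are placeholders; collectd and dockerstats handle this for us):

// Send a single datapoint to Graphite's plaintext listener (illustrative only).
var net = require('net');
var sock = net.connect(2003, 'graphite.example.wot.io', function () {
  var line = 'hosts.vm01.containers.sensu-client.cpu 12.5 ' + Math.floor(Date.now() / 1000) + '\n';
  sock.end(line);
});
sock.on('error', console.error);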

Let's take a look at our default dashboards that are generated when we provision a new VM. This is accomplished by posting a json dashboard definition to a Tessera host.

Default tessera dashboards
Tessera and collectd in action

Checks and alerts

The final piece of our monitoring system is Sensu. Sensu is written in Ruby, backed by RabbitMQ, and uses Nagios-style checks to alert us when bad things happen, or in some cases when bad things are about to happen. By default, sensu-client gives us a basic keepalive. We have added our own checks to notify us when other, more specific problems arise.

wot.io checks:

  • container checks: verifies that all the containers that are configured to run on that host are indeed running
  • host checks: lets us know if we are running over 90% usage on cpu, memory or disk
  • application checks: sensu-client will run all checks placed in the /checks dir of any container running on that host

We use the standard 4 Nagios levels:

  • ok: exit code 0
  • warning: exit code 1
  • critical: exit code 2
  • unknown: exit code 3

Ok, warning and unknown alerts are sent as emails and Slack posts. We reserve critical alerts for big things like containers not running or a host that has stopped sending keepalives. Critical alerts go straight to PagerDuty and our on-call team.
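To give a feel for what a Nagios-style check can look like, here is a rough sketch of a container check; the container names and the docker invocation are placeholders, not our actual check script:

#!/usr/bin/env node
// Sketch of a Nagios-style container check: the exit code signals the severity.
var execSync = require('child_process').execSync;

var expected = ['collectd', 'dockerstats', 'sensu-client'];   // hypothetical container names
var running;
try {
  running = execSync('docker ps --format "{{.Names}}"').toString().trim().split('\n');
} catch (e) {
  console.log('UNKNOWN: unable to query docker: ' + e.message);
  process.exit(3);
}

var missing = expected.filter(function (name) { return running.indexOf(name) === -1; });
if (missing.length > 0) {
  console.log('CRITICAL: containers not running: ' + missing.join(', '));
  process.exit(2);
}
console.log('OK: all expected containers are running');
process.exit(0);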

Example sensu container check
Example sensu application check

As described above, we use these tools to monitor and collect data on our systems and also make them available to customers if they have additional needs for these data services. And the integration into our deployment system automatically launches the appropriate agents, which is essential when we deploy a large number of services at once, like we did for the LiveWorx Hackathon.

October 7, 2015 / IoT, oneTRANSPORT / Posted By: Kelly Capizzi

Across the world, urban transport is feeling strain from environmental pressures and congestion. A recent video, published by ARUP, explains how smart mobility initiatives like oneTRANSPORT and Drive Smart are helping cities tackle the growing problem.

oneTRANSPORT uses Internet of Things technology to share existing transport data to enable expert developers and analytics communities to develop new public information services and tools. The project aims to improve travel experiences for customers and generate new revenues for local authorities on a truly nationwide basis. The two-year commercially-focused project was proposed by a consortium that consists of eleven partners including ARUP and InterDigital Europe.

Drive Smart is a program that offers benefits to New York City drivers through an extensive in-vehicle data collection while designed to reduce congestion, pollution and crashes. ARUP created the program alongside New York City Department of Transportation to help drivers save time and money and drive safely.

Smart mobility is about making transport networks safer, more sustainable and efficient. Initiatives such as oneTRANSPORT and Drive Smart are designed to take transport to the next level and change the world a little bit. Click here to watch how intelligent transport services can change the way cities move.

October 7, 2015 / 5G / Posted By: Kelly Capizzi

The European Association for Signal Processing Journal on Wireless Communications and Networking (EURASIP JWCN) recently published a special issue focused on 5G …, and InterDigital’s Dr. Afshin Haghighat, Member of Technical Staff, InterDigital Labs, served as a guest editor. EURASIP exists to further the efforts of researchers by providing a learned and professional platform for dissemination and discussion of all aspects of signal processing.

The special issue, titled “5G Wireless Mobile Technologies,” features numerous research articles that relate to the next-generation 5G wireless mobile system. Afshin served as a guest editor for the issue alongside six other industry and academic experts from Huawei Technologies, Institut National de la Recherche Scientifique, Hong Kong University of Science and Technology, TU Dresden, Sungkyunkwan University and McGill University. In addition to editing the collection, the group co-authored an editorial titled “Enabling 5G Mobile Wireless Technologies” that describes key 5G enabling wireless mobile technologies and discusses their potential and open research challenges. The editorial serves as an introduction to the article collection, which can be found here.

EURASIP JWCN aims to bring together science and applications of wireless communications and networking technologies with emphasis on signal processing techniques and tools. EURASIP JWCN has been an Open Access journal since 2004 and covers subject areas that include antenna systems and design, coding for wireless system, signal processing techniques and tools, ultra-wide-band systems, and much more. To learn more, please visit the journal at http://www.jwcn.eurasipjournals.com/.

October 6, 2015 / Posted By: wotio team

In my last blog post, I discussed a sample architecture for an IoT application:

Sample IoT Architecture

wherein the data is passed through a series of successive stages:

  • Acquisition - receiving data from the sensor farm
  • Enhancement - augmenting data in motion with data at rest
  • Analysis - applying machine learning and statistics to the data
  • Filtering - removing non-actionable data and noise
  • Transformation - converting it into an actionable format
  • Distribution - delivering to the end user or application

This architecture is based on a number of real world deployments that have been in production for more than a couple years. Each of these deployments share a number of problems in common relating to how the system architecture influences the tradeoffs between cost, throughput, and latency. These three factors are the most common real world constraints that must be taken into account when designing an IoT solution:

  • Cost - the money, time, and mindshare sunk into the system
  • Throughput - the volume of messages over time the system can handle
  • Latency - the time it takes for data to translate to action

At wot.io, we have found it necessary to build new software test equipment to better model the behavior of our production systems. Most existing load testing and modeling tools do not deal well with highly heterogenous distributed networks of applications. Towards this end, we have produced tooling like wotio/ripple for modeling the behavior of data services:

In the above video, I simulated an application in which 1750 messages per minute were generated in a spiky fashion similar to a couple real world systems we have encountered. Anyone who has seen a mains powered sensor farm come on after a blackout will recognize this pattern.

exchange A

This is a typical pattern which results when the device designers assume that the devices will come online at random times, or decide to lockstep the message sending to a GPS clock. This acquisition phase behavior can be very noisy depending on the environmental characteristics.

Next, we simulate the enhancement phase activity of adding data to the data in motion by querying a database. To do this, we add a 10 second delay to each of the messages. The time shifted signal looks like:

exchange B

The ripple software allows for simulating a delay ramp, wherein the delay increases over time based on the number of messages through the system as well. This can be invaluable for simulating systems that suffer from performance degradation due to the volume of data stored in the system. For this sample simulation, however, I've stuck with a fixed 10 second delay. Being able to simulate delays in processing can be invaluable when multiple streams of data must be coordinated.

Another common constraint one encounters is a cost versus throughput tradeoff. For example, you may want to license a software application that is priced per CPU. The business may only be able to afford enough CPU licenses to cover the per-minute volume, but not the instantaneous peak volume.

exchange C

For these sorts of applications, we can simulate a maximum rate limit on the application. The ripple.c exchange above demonstrates the stretching of the input signal due to queueing that data between exchanges B and C. Here, we're simulating a 40 messages per second throughput limit. Theoretically, this system could process 40 * 60 = 2400 messages per minute, which is sufficient to handle our 1750 messages per minute load, but at a cost of adding latency:

Latency over Time

Here we can see the impact of this queuing on the per-message latency over time. The above graph shows about 4 minutes of messages, and the latency of each. The reason for the shape is that messages are briefly enqueued whenever they arrive faster than the rate-limited stage can process them:

Queue B

This sawtooth graph is a result of feeding more data into the system than the rate limited process can remove it. This behavior results in highly variable latency across the lifespan of the application:

Latency Histogram

In this histogram of the 4 minute sample, you can see a spike around 10s of latency. This spike accounts for roughly 1/8th of all of the messages. The other 7/8ths of the messages, however, range from 10s of latency to over 35s of latency. This variability in latency is a classic tradeoff that many IoT systems need to make in the real world. If you are expecting to act upon this data, it is important to understand how that latency impacts the timeliness of your decision.
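To build some intuition for that tradeoff, here is a toy simulation of a bursty 1750 message per minute source feeding a stage limited to 40 messages per second. It is only an illustration, not the ripple tool itself:

// Toy model: one burst of 1750 messages per minute, drained at 40 messages per second.
var ratePerSec = 40;
var queue = [];        // arrival time (in seconds) of each queued message
var latencies = [];

for (var sec = 0; sec < 240; sec++) {
  if (sec % 60 === 0) {
    for (var i = 0; i < 1750; i++) queue.push(sec);   // the burst arrives all at once
  }
  // the rate-limited stage drains at most 40 messages this second
  queue.splice(0, ratePerSec).forEach(function (arrived) { latencies.push(sec - arrived); });
}

var worst = Math.max.apply(null, latencies);
var avg = latencies.reduce(function (a, b) { return a + b; }, 0) / latencies.length;
console.log('processed', latencies.length, 'messages, average latency',
            avg.toFixed(1) + 's, worst', worst + 's');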

By combining both delays and rate limits, along with different generator patterns, we can better develop models of how our systems behave under load long before they go to production. With wotio/ripple, we were careful to keep our test generation, application simulation, and our analysis phases decoupled. The message generator and the latency report generators are separate servers capable of being run on different hardware. As the software is written in Erlang, it is easy to distribute across a number of Erlang VMs running on a cluster, and through Erlang's built in clustering, can be coordinated from a single shell session.

The test program used to generate the above graphs and topology is as follows:

This sample file demonstrates the following features:

  • consume, Source, Filename - consumes messages from Source and logs their latency to Filename
  • pipe, Source, Sink - consume messages from Source and forward to Sink as fast as possible
  • limit, Source, Sink, Rate - consume messages from Source and forward to Sink at a maximum rate of Rate messages per second
  • delay, Source, Sink, Base, Ramp - consume messages from Source and forward to Sink with a Base delay in ms with Ramp ms delay added for each message processed
  • generate, Message, Pattern - send the sample test message (with an additional timestamp header) at the rate, in messages per second, specified by the Pattern.

In the near future, we will be adding support for message templates, sample message pools, and message filtering to the publicly released version of the tools. But I hope this gives you some additional tools in your toolbox for developing your IoT applications.

October 5, 2015 / ship iot, columbia, atmel, iot, columbia university, wot.io / Posted By: wotio team

Over the summer, wot.io visited Columbia University in New York City to participate in an evening of presentations that were part of an interesting new graduate level course they are offering on IoT. The event, organized by IoT Central, had a packed agenda full of IoT presentations and information, including some demos of Atmel devices sending data to wot.io data services.

At the event, we demoed some early versions of our Ship IoT initiative, showing how Atmel devices can be connected to multiple data services from the wot.io data service exchange. In this demonstration we used PubNub for connectivity, and routed it to wot.io data services bip.io, scriptr.io, and Circonus.

This event was particularly interesting because Steve Burr, Director of Engineering at wot.io, unboxed and connected an Atmel device live during the demo and started getting temperature readings from it. Live demos are always fun to watch! The IoT Central group recorded the event and you can watch the video below.

The entire video is full of interesting IoT information. If you're looking for specific parts, the Atmel portion starts at about 28 minutes, wot.io starts around 32 minutes, and the technical portion starts around 38:30.

October 2, 2015 / Posted By: wotio team

In looking at many different IoT applications, a fairly common architecture emerges. Like Brutalist architecture, these applications are rugged, hard, and uncompromising, with little concern for a human scale aesthetic:

At its core it is a six stage pipeline, wherein the data is processed in a sequence. Variations on this architecture can be generated by branching off at any one of the six stages, and repeating some or all of the stages for some sub-path:

The stages correspond to different application types that are typically used in IoT systems:

One of the great pleasures of working at wot.io is seeing the development of new systems architectures and their interplay with real world constraints. As Chief Scientist, I spend a lot of my time metering and monitoring the behavior of complex soft real-time systems. In addition to looking at real world systems, I also get to design new test equipment to simulate systems that may one day go into market.

One such tool is ripple, a messaging software analog to an arbitrary waveform generator. Rather than generating a signal by changing volts over time, it generates a message waveform measured in messages per second over time. Much of the behavior of distributed IoT systems is only understandable in terms of message rate and latency. In many ways, the design constraints of these systems are more like those of laying out traces on a PCB than of designing software. A tool like ripple allows us to simulate different types of load upon various combinations of application infrastructure.

Not all applications behave the same way under load, and not all data flows are created equal. Variations in message content and size, choice of partitioning scheme, differences in network topology, and hardware utilization, can all affect the amount of latency any processing stage introduces into the data flow. The variability in the different data pathways can result in synchronization, ordering, serialization, and consistency issues across the result set.

Consider a case where an application is spread across a few hundred data centers around the world. Due to variations in maintenance, physical failures, and the nature of the internet itself, it is not uncommon for an entire data center to go offline for some period of time. This sort of event can cause an immense backlog of messages from what is now the "distant past" (i.e. yesterday) to come flooding in, changing the results of the past day's analysis and reports. This problem is not limited to hardware failures; it is also common with remote satellite-based communication schemes, where a compressed batch of past data may appear all at once on a periodic basis, whenever the weather and satellite timing permit transmission.

Ripple was designed with these issues in mind, to make it easier to simulate the sorts of what-if scenarios we have encountered with real world systems. For our simulations, we use RabbitMQ as a message bus. It provides a reliable, extensible, distributed queuing system, and a protocol like AMQP is convenient for data interchange between processes because it is well supported across languages. The core functionality of ripple consists of:

  • modeling application topologies within RabbitMQ
  • creating pools of consumers which stand in for applications
  • forwarding with delays which allow for simulating different latency characteristics of an application
  • generating arbitrary patterns of messaging over time
  • simulating "noisy networks" wherein message rates vary by some random noise factor

In the next blog post, I will describe the ways to use ripple to simulate a number of different real world systems, and describe some of the architectural concepts that can address the observed behaviors.

October 1, 2015 / Posted By: wotio team

ThingWorx is an IoT platform that enables users to collect IoT data from a variety of sources and build applications to visualize and operate on that data. As we showed in a previous post, wot.io supported the LiveWorx hackathon by deploying ThingWorx instances for developers to use in developing IoT solutions. In addition to automating ThingWorx deployment, we have also been working on creating a ThingWorx Extension for submission to the ThingWorx Marketplace.

As an IoT platform that values extensibility, ThingWorx provides a number of options for connecting other systems with a ThingWorx application. In particular, partners can package adapters into reusable ThingWorx extensions. Developers creating IoT solutions with ThingWorx can then install these extensions and easily access the additional functionality they provide in a fashion that is idiomatic to ThingWorx applications. wot.io developed an extension that follows this pattern and will provide a standard way to interface with the wot.io operating environment to send or receive data from a ThingWorx application.

As we've been working on our extension, we thought we would share some of the ways we think developers might use the extension. In this video we create a simple ThingWorx mashup showing just how a developer would access and use the installed wot.io extension.

We're looking forward to getting our extension listed in the ThingWorx Marketplace and getting feedback on how it works for ThingWorx developers.

October 1, 2015 / Posted By: wotio team

As part of our involvement with ThingWorx's LiveWorx event this year, wot.io was happy to support the pre-conference LiveWorx hackathon. Participants were provided with some hardware and sensors, some suggested verticals including Smart Agriculture and Smart City challenges, and of course ThingWorx instances for them to pull their solutions together.

Part of wot.io's support was to deploy and host the 85 ThingWorx instances for the teams to work on. How did we do it?

One of the fundamental components of the wot.io operating environment (OE) is our configuration service and the associated orchestration that allows us to quickly deploy and manage integrated data services. Leveraging OpenStack's nova command line client and the popular Docker container system, the wot.io OE provides APIs that allow data services to be configured and deployed. This API can then be scripted for specific sets of data services or to deploy multiple instances of the same data service as in the case of the hackathon. This video shows the script we used to spin up the servers in Rackspace. This version creates just 5 rather than 85 instances.
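
The video shows the actual script, which drives the nova CLI and Docker. As a sketch of the general pattern, the loop below stands up a handful of identical instances against a hypothetical deploy API; the hostname, path, and payload are placeholders for illustration, not the real wot.io OE interface:

// Hypothetical sketch only -- endpoint, payload, and naming are placeholders.
var https = require("https");

function deployInstance(n, done) {
  var body = JSON.stringify({ image: "thingworx-base", name: "liveworx-" + n });
  var req = https.request({
    hostname: "oe.example.com",          // placeholder host
    path: "/api/deploy",                 // placeholder path
    method: "POST",
    headers: { "Content-Type": "application/json",
               "Content-Length": Buffer.byteLength(body) }
  }, function (res) { done(null, res.statusCode); });
  req.on("error", done);
  req.end(body);
}

for (var i = 1; i <= 5; i++) {           // 5 here; 85 for the real event
  deployInstance(i, function (err, status) {
    console.log(err || ("deployed: " + status));
  });
}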

The wot.io OE can also be used to quickly update deployed containers, either individually or en-masse. During the process of preparing for the hackathon, ThingWorx engineers discovered that they needed to revise the base ThingWorx configuration a number of times. They would simply send us a new archive file and we were then able to use it to update our core container. Once we told the configuration service to reference the new version, all of the running instances then detected the new version and updated themselves automatically. This made it easy for us to deploy updates as they came in--even right up until the event started.

In addition to deploying and hosting ThingWorx instances, we have also been working on a wot.io ThingWorx extension that will simplify the integration of ThingWorx with the wot.io OE, allowing data to be routed back and forth between other IoT platforms and thereby addressing IoT platform interoperability for large enterprise and industrial companies. You can read more about our progress on that here.

September 30, 2015 / Posted By: wotio team

For the Love of Coffee

French Press

Coffee is amazing stuff, and when brewed just right, tastes incredible! I'm a coffee aficionado, and I'm always pursuing The Perfect Cup™. Preparation technique is critical! The Specialty Coffee Association of America has very rigid standards for how to prepare coffee, designed to ensure consistent and peak quality flavor in the resulting drink. Water temperature is one of the major factors, because as any good chemist knows, various compounds dissolve at different rates in different temperatures of water. The flavor of your cup of coffee is greatly determined by the temperature of water used, and consequently, the varying fractions of coffee compounds thereby extracted.

From the SCAA standard:

Cupping water temperature shall be 200°F ± 2°F (92.2 – 94.4°C) when poured on grounds.

We are engineers. We appreciate the scientific method, and data-driven decisions. The quest for The Perfect Cup must therefore entail data collection for later analysis. This collection should be automated, because life is too short for repetitive manual processes. So let's start out by checking our existing daily brewing process' water temperature, and logging the long-term variance.

We're going to do this with a Kinoma Create, which packs an 800 MHz ARM v5t processor, WiFi, Bluetooth, a color touchscreen, sound, I/O pins, and all kinds of other goodies. It's a comprehensive development kit that lets you code with JavaScript and XML, so it's a great choice, and even more so if JavaScript is one of your competencies. This will make our temperature logging simple, and the data services available through wot.io give us easy insights into our data because the integration is already done and working. Expanding beyond our first-steps of temperature logging will be a snap, as the Kinoma Create has more I/O than we can shake a stick at. Let's get to it!

Getting Started

For this project, I used:

Kinoma Create, coffee beans, and LM-35 sensors

The Probe

First thing I did was make a probe suitable for testing something I was going to drink. It needed to be precise, non-toxic, and tolerant of rapid temperature changes.

Temperature probe built into a Pyrex test tube

The temperature sensor is the LM35 from Texas Instruments, a military-grade precision Centigrade sensor with analog output, accurate to ±0.5ºC. Since ±0.5ºC is only about ±0.9ºF, that's well within the SCAA's ±2ºF spec for brewing water.

TI LM-35 Sensors in TO-92 Packages

I attached the sensor inside a Pyrex borosilicate glass test tube, which will withstand the thermal shock inherent in measuring boiling water. We certainly don't want shattered glass shards or contaminants in our coffee! To ensure good heat transfer, I used some thermal epoxy to affix the sensor at the bottom of the tube.

LM-35 epoxied into the end of a test tube

The cable is Belden 9841, typically used for RS-485 industrial controls and DMX 512 systems. While we don't need precision 120Ω data cable for this, it has 100% foil+braid shield and will keep our analog signals nice and clean. Plus, I had a spool of it on the rack - always an advantage ;)

About that LED... It functions as a power indicator, and makes the probe look good for showing off at World Maker Faire. Normally I wouldn't stick an LED next to a precision temperature sensor. The power dissipated by the LED and current-limiting resistor will cause a slight temperature rise and throw off the measurement. But it only dissipates maybe 10 milliwatts, and coffee is really hot, so I stuck an LED in there! No worries.

Testing the Sensor

Before writing the code, I needed to be sure the sensor output matched what's claimed on the datasheet (always check your assumptions!). A quick setup on a breadboard proved the datasheet to be correct.

LM-35 in a solderless breadboard

The temperature of the sensor itself measured ~24.1ºC with a calibrated FLIR thermal camera (with an assumed emissivity of ε0.90 for the plastic TO-92 case):

Thermal image of LM-35 reading 24.1ºC

...and the output of the device was 245mV, right on target!

Multimeter reading 245.59mV

Now we know we don't need much correction factor in software, if any.
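
For the record, here's the arithmetic behind that conclusion, using the figures from the photos above:

// LM35 scale is 10 mV per degree C, so the multimeter reading implies a temperature
// we can compare against the thermal camera.
var outputMv = 245.59;            // multimeter reading, millivolts
var impliedC = outputMv / 10;     // ~24.56 C according to the sensor
var cameraC = 24.1;               // FLIR reading of the package surface
console.log((impliedC - cameraC).toFixed(2) + " C difference");  // ~0.46 C, within the +/-0.5 C spec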

The Code

Here's a very brief walkthrough of the code. You can grab it from the repo on github.

First thing you'll want to do is put your PubNub publish and subscribe keys into the code, and your channel name.

PubNub Dashboard

Grab the keys and put them in at the top of main.xml:

<variable id="PUBNUB_PUBLISH_KEY" value="'YOUR_PUB_KEY_HERE'"   />  
<variable id="PUBNUB_SUBSCRIBE_KEY" value="'YOUR_SUB_KEY_HERE'" />  
<variable id="PUBNUB_CHANNEL" value="'YOUR_CHANNEL_NAME_HERE'" />;  

PubNub Library Integration

One of the key bits to using this PubNub library is that you need to override the default application behavior. Their example came as straight JS, but I converted it to XML here, so you get to see both methods and learn some new tricks.

At the top, we include the pubnub.js library file, and then define a behavior that uses the PubNubBehavior prototype. While I won't claim to be an expert on PubNub's library, I believe we do things this way so that the PubNub library can handle the asynchronous events coming in from the message bus.

We also start into the main startup code, which resides in the onLaunch method.

<program xmlns="http://www.kinoma.com/kpr/1">  
    <include path="pubnub.js"/>
    <behavior id="ApplicationBehavior" like="PubNubBehavior">
        <method id="constructor" params="content,data"><![CDATA[
            PubNubBehavior.call(this, content, data);
        ]]></method>
        <method id="onLaunch" params="application"><![CDATA[
           ...

...and we see the rest down at the bottom, where we instantiate the new ApplicationBehavior and stick it into our main application.behavior thusly:

    <script>
        <![CDATA[
        application.behavior = new ApplicationBehavior(application, {});
        application.add( maincontainer = new MainContainer() );
        ]]>
    </script>

onLaunch Initialization

First thing we do is set up the pubnub object with our publish and subscribe keys. Note that you don't need to use keys from the same exchange - you can write to one, and read from an entirely different one. That's part of the amazing flexibility of message bus architectures like PubNub and wot.io.

After init, we subscribe to the specified channel, and set up callbacks for receiving messages (the message key) and connection events (connect). Upon connection we just fire off a quick Hello message so we can tell it's working. For receiving, we stick the message contents into a UI label element, and increment a counter, again doing both so we can tell what's going on for demonstration purposes.

You could certainly parse the incoming messages and do whatever you want with them!

        pubnub = PUBNUB.init({
            publish_key: PUBNUB_PUBLISH_KEY,
            subscribe_key: PUBNUB_SUBSCRIBE_KEY
        });
        pubnub.subscribe({
            channel : PUBNUB_CHANNEL,
            message : function(message, env, channel) {
                maincontainer.receivedMessage.string = JSON.stringify(message);
                maincontainer.receivedLabel.string = "Last received (" + ++receivedCount + "):";
            },
            connect: function pub() {
                /*
                    We're connected! Send a message.
                */
                pubnub.publish({
                    channel : PUBNUB_CHANNEL,
                    message : "Hello from wotio kinoma pubnub temperature demo!"
                });
            }
         });

Next we set up our input pins for the temp sensor:

        application.invoke( new MessageWithObject( "pins:configure", {
            analogSensor: {
                require: "analog",
                pins: {
                    analogTemp: { pin: 52 }
                }
            }
        } ) );

This uses Kinoma's BLL files, which define the pin layout for hardware modules. I created a simple one for our temp sensor. I did not have the system configure the power and ground pins; at the time I coded this, Kinoma didn't document an official way to do it (although one does exist if you dig into their codebase).

exports.pins = {  
    analogTemp: { type: "A2D" }
};

exports.configure = function() {  
    this.analogTemp.init();
}

exports.read = function() {  
    return this.analogTemp.read();
}

exports.close = function() {  
    this.analogTemp.close();
}

Lastly, we set up what is effectively the main loop. This fires off a message that will be processed by the analogSensor read method defined in the BLL file. It also sets it up to repeat with an interval of 500 milliseconds. The results are sent via a callback, /gotAnalogResult:

        /* Use the initialized analogSensor object and repeatedly
           call its read method with a given interval.  */
        application.invoke( new MessageWithObject( "pins:/analogSensor/read?" +
            serializeQuery( {
                repeat: "on",
                interval: 500,
                callback: "/gotAnalogResult"
        } ) ) );

The Results Callback

This is a message handler behavior which processes the analog value results from our periodic sensor read. It converts the reading to degrees Celsius, and fires off the data with onAnalogValueChanged and onTempValueChanged messages to whoever is listening. (We'll see who's listening down below...)

The sensor outputs 10 millivolts per degree Celsius, so 22ºC would be 220mV. This goes into our analog pin, which when read, gives a floating-point value from 0 to 1, representing 0V up to whatever the I/O voltage is set to, 3.3V or 5V. We do some conversion to get our temperature back.

You may notice that we only use a small range of the A/D converter's potential for typical temperatures, and this results in lower resolution readings. Ideally we'd pre-scale things using a DC amplifier with a gain of, say, 2 or 4, so the temperature signal uses more of the available input range.
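
As a rough illustration of that resolution point (this assumes, purely for the sake of the example, a 10-bit converter behind the 0-to-1 floating-point reading; Kinoma doesn't document the actual resolution):

// Back-of-the-envelope resolution estimate -- the 10-bit figure is an assumption.
var VREF = 3.3;            // analog reference voltage, volts
var ADC_STEPS = 1024;      // assumed 10-bit converter
var MV_PER_C = 10;         // LM35 output scale: 10 mV per degree C

var mvPerStep = (VREF * 1000) / ADC_STEPS;   // ~3.22 mV per step
var degPerStep = mvPerStep / MV_PER_C;       // ~0.32 C per step

// With a hypothetical DC amplifier of gain 4 ahead of the ADC:
var degPerStepAmplified = degPerStep / 4;    // ~0.08 C per step
console.log(degPerStep.toFixed(2), degPerStepAmplified.toFixed(2));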

    <handler path="/gotAnalogResult">
        <behavior>
            <method id="onInvoke" params="handler, message"><![CDATA[
                var result = message.requestObject;
                // Convert voltage result to temperature
                // LM35 is 10mV/ºC output; analog input is 0-1 for 0-3.3v (or 5 if set)
                // Subtract 1 degree for self-heating
                var temp = (result * 3.3 * 100) - 1;
                application.distribute( "onTempValueChanged", temp.toFixed(2) );
                application.distribute( "onAnalogValueChanged", result );
                pubnub.publish({channel:PUBNUB_CHANNEL, message:
                    {"k1-fd3b584da918": {"meta": "dont care", "tlv": [ {"name": "temperature", "value": temp.toFixed(2), "units": "C"} ] }}
                });
            ]]></method>
        </behavior>
    </handler>

The UI

Here we define the main container for the user interface. You'll see entries for the various text labels. Some of them have event listeners for the onAnalogValueChanged and onTempValueChanged events, and that's how they update the display.

    <container id="MainContainer" top="0" left="0" bottom="0" right="0">
        <skin color="white"/>

        <label left="5" top="0" string="'PubNub Temperature Telemetry Demo'">
            <style font="24px" color="red"/>
        </label>

        <label left="5" top="23" string="'Last Received (0):'" name="receivedLabel">
            <style font="20px" color="blue"/>
        </label>

        <label left="5" top="39" string="'--no message received yet--'" name="receivedMessage">
            <style font="14px" color="black"/>
        </label>

        <label left="0" right="0" top="80" string="'- - -'">
            <style font="60px" color="black"/>
            <behavior>
                <method id="onTempValueChanged" params="content,result"><![CDATA[
                    content.string = "Temp: " + result + " ºC";
                ]]></method>
            </behavior>
        </label>

        <label left="0" right="0" top="65" string="'- - -'">
            <style font="24px" color="green"/>
            <behavior>
                <method id="onAnalogValueChanged" params="content,result"><![CDATA[
                    content.string = result.toFixed(6) + " raw analog pin value";
                ]]></method>
            </behavior>
        </label>

        <picture url="'./assets/wotio_logo_500x120.png'" top="210" left="10" height="24" width="100" />
    </container>

Results

It worked well! After perfecting my water boiling technique (who would have thought that was a thing), I got a great cup with the data to prove it. Dark chocolate, caramel, hints of cherry and vanilla; earthy and full.

The messages flowed to PubNub from the Kinoma Create, and anything published to PubNub from elsewhere would show up nearly instantly on the Kinoma Create's screen. Keep reading to see how we used some data services via wot.io.

Finished demo with cup of coffee

World Maker Faire 2015

This setup was demonstrated at World Maker Faire 2015 in the Kinoma booth, where we also had a number of data services connected, scriptr.io, bip.io, and Circonus to start.

wot.io Ship IoT Data Service Exchange Diagram

These fed into Twitter and Gmail also. You can see the message flow graph created with bip.io, showing the message processing and fan-out:

bip.io workflow graph

We've written about creating these graphs before, just look through the other posts on the wot.io labs blog for several examples.

In Closing

Kinoma's Create platform pairs effortlessly with the data services available via wot.io, and the power to leverage existing expertise in JavaScript is a huge advantage when it comes time to develop and ship your product. That power extends further with wot.io partners like scriptr, where you can integrate further cloud-based JavaScript processing into your data service exchange workflow. To get started, grab a Kinoma Create and take a look at shipiot.net today!

September 29, 2015 / wot.io, maker faire new york, iot, kinoma create / Posted By: wotio team

While the organizers don’t release official attendance figures, others report that north of 90,000 makers, enthusiasts, hardware-hackers and curious onlookers made their way to the New York Hall of Science in Queens, NY, this past weekend for Maker Faire New York.

Among the sprawling rows of tents set up were companies, organizations and individuals showing off everything from:

  • 3-D printers
  • CNC machines
  • Automatic PCB Board Fabrication machines
  • a Drone Zone
  • RC robots battling each other to destruction
  • electric-powered wagons
  • maker kits
  • sonic massagers
  • electronic wearables
  • local artists, much of their work made from mixing machine-milled parts and expert hands to craft something beautiful
  • other strange, wonderful creations that don't quite fit into any category
  • and of course, you know, a 30-ft Fire Breathing monster made from recycled Airplane parts (pictured in the post header, more photos).

Big names like Google, Microsoft, and Intel were there, showing off various IoT initiatives, teaching kids how to solder, and generally helping them get started in building electronics.

And of course wot.io wouldn't miss a Maker Faire so close to home, so we were there too. We were very excited to be able to join our friends from Kinoma whose booth saw plenty of traffic.

We've used the Kinoma Create for projects in the past and it was fun to build a new one to show at the Faire. For this outing we added a temperature sensor suitable for giving a very precise reading on your favorite beverage.

Data from the temperature sensor was captured by the attached Create unit, sent through PubNub, and routed to wot.io data services bip.io (pictured on the screen), scriptr.io, and Circonus.

One of the biggest hits in the booth was the Kinoma Create-powered-robot. Kids were invited to control the robot wirelessly from another Kinoma Create. The different projects showcased by wot.io and Kinoma demonstrated how accessible the JavaScript-powered Kinoma Create platform is for makers of all ages.

It was great to see how many kids were at the Faire, getting excited about inventing and exploring new ideas and technologies, and just playing with electronics and making cool stuff. When you ignite the imagination of a kid, and give them the tools and support to build their ideas into reality, there's no telling what they're going to bring to next year's Maker Faire. Given what was on display this year, I'm pretty excited to find out.

If you're interested in deeper technical details on the demo we showed, be sure to check out our labs blog entry that explains the full setup.

September 29, 2015 / 5G, R&D / Posted By: Kelly Capizzi

A new report by Jim Kohlenberger, President of JK Strategies and former Chief of Staff in the White House Office of Science and Technology Policy, examines the power and promise of 5G technology, as well as policy recommendations to maximize the social benefits of 5G. InterDigital wasn’t involved in the report but we are, as always, interested in developments related to 5G research.  

“Mobilizing America: Accelerating Next Generation Wireless Opportunities Everywhere” proposes a comprehensive strategy to ensure U.S. global leadership in the wireless revolution.  

“The next wireless revolution will need to be about more than blazing fast speeds,” says advocacy group Mobile Future, which commissioned the report. The 5G revolution is also an opportunity to address the nation’s toughest challenges; accelerate access to more spectrum; lower barriers to mobile investment; revive American R&D; and fill the talent pipeline.  

Echoing the conclusions of the recent TIA 5G Operator Survey (which was sponsored by InterDigital), the report points to several signs that other countries are working to surpass the United States in 5G innovation, including Europe’s investment in a 5G Public-Private Partnership, South Korea’s plans to launch a 5G trial network when it hosts the 2018 Winter Olympic Games, and China’s creation of an interagency “promotion group” to coordinate 5G activities among industry and academia.  

Mobile Future will host a free webinar on its report, featuring Kohlenberger and Mobile Future Chair Jonathan Spalter, on Friday, October 2, at 1:00 pm.

September 25, 2015 / MAC, Wi-Fi, IEEE / Posted By: Kelly Capizzi

Wi-Fi privacy refers to being able to safely use your Wi-Fi device without someone tracking you. The IEEE 802 Privacy Executive Committee (EC) Study Group has suggested an amendment to the Wi-Fi standard in order to continue to ensure such privacy. Recently, InterDigital Principal Engineer and IEEE 802 EC Study Group chair Juan Carlos Zuniga discussed the suggestion with Claus Hetting in Wi-Fi Now Episode 8.

In the interview, Juan Carlos explains to Claus how Wi-Fi privacy has become a concern and why the IEEE Study group proposed the solution to update the Wi-Fi protocol to use randomly generated MAC addresses to increase security and privacy. He also elaborates on possible implications with the proposed solution and estimates when the standard could be approved and commercialized.

In addition to Wi-Fi privacy, episode 8 featured another recent Wi-Fi hot topic – Wi-Fi calling to carriers. Claus interviewed SpectrumMAX’s Vice President and Chief Technology Officer, Amir Rajwany, on the current challenges and status of Wi-Fi calling. To learn more on both topics, watch the full episode below:

September 23, 2015 / 5G, LTE / Posted By: Kelly Capizzi

InterDigital’s  Chris Cave, Director, Research and Development, recently joined RCR Wireless’ Claudia Bacco in a video interview on the path to 5G. During the interview, Chris and Claudia discuss four major topics related to the transition from LTE to 5G – spectrum, LTE evolution, network infrastructure changes and 5G use cases.

Chris anticipates that LTE will continue to evolve in the existing 4G spectrum and will be a key component of 5G systems. Key topics that will be explored during this evolution include unlicensed bands as well as M2M and device-to-device communications, which could play a major role in the connected car discussion. 5G is predicted to encompass everything you can imagine that can benefit from a wireless connection, which makes use cases extremely important. Chris explains how he simplifies the 5G use case discussion into two distinct categories - mobile broadband use cases and the connected world.

To learn more on the path to 5G, watch Chris' interview below:

September 22, 2015 / IoT, Mediatek, LinkIt ONE, shipiot, bipio, wotio, accelerometer, demo, tutorial / Posted By: wotio team

A Bike and an Idea

A very good friend of mine recently picked up a motorcycle as a first-time rider. It's a nice bike, a Honda CBR250 in gloss black, with under 3000 miles on it. She's smart, and took the safety class offered by the Motorcycle Safety Foundation, but I was still looking for ways to make sure she was going to be ok, and that I could quickly help if ever needed.

We started out by having her send me an SMS message whenever she arrived at her destination. "Made it," she would send, which worked ok. My inner sysadmin thought, "process seems repetitive; shouldn't this be automated?" And so was born this demo: a motorcycle crash alert that will both quickly transmit and permanently log a help message and GPS location if there is ever trouble!

Already having access to wot.io data service integrations like Twitter and Google Sheets, and the quick power of a bip.io workflow, I needed some hardware. Mediatek offers a dev board called the LinkIt ONE, available from seeedstudio.com. The LinkIt ONE integrates a heap of features for anyone making an Internet of Things prototype, including WiFi, Bluetooth LE, GPS, GSM, GPRS, audio, and SD card storage, all tied together by an ARM7 micro controller.

It's largely compatible with the Arduino pin headers, and can interface with Grove modules, SPI, I2C, and more. We'll be using their Arduino SDK and HDK to create our demo app, and hook this board up to the wot.io-powered bip.io data service exchange and do some really cool stuff with Twitter and Google Sheets. The only other bit I needed to add was an accelerometer, and I found a three-axis module in my parts bin.

Let's get started!

Prerequisites

Optional:

  • One motorcycle with rider.

Note! The SDK does not currently work with the latest Arduino IDE (v1.6.5 as of this writing)

Also note! If you have multiple installs of Arduino IDE like I do, you can simply rename the app with the version number to help keep them straight.

Updating Firmware

This is part of the Getting Started guide from Mediatek, but it's important so I'm calling it out specifically here. Make sure you update the firmware on your board before you begin. It's easy!

Building the Hardware

The setup for this is simple. Connect the GPS and GSM antennas to the LinkIt ONE board. Make sure to use the right antennas, and hook them to the proper plugs on the bottom, like it shows in the picture. We don't need the WiFi antenna for this demo, so I left it disconnected.

Hook the Li-Ion battery to the board so we can run it without a power cable. (Make sure it's charged; check Mediatek's docs).

And finally, we connect the accelerometer board. I used a tiny breadboard to make this easy.

Oh, and make sure the tiny switch in the middle is flipped over to SPI, not SD, or the green LED won't behave and other things may not work for this demo. Check the close-up image below.

Here you can see the detail of how we hook up the accelerometer. There are three pins connected as configuration options, using jumpers on the breadboard. These are for the self-test (disabled), g-force sensitivity (high sensitivity), and sleep mode (disabled). (Obviously you'd want to control the sleep mode programmatically on an actual product for lower power consumption, but we're keeping it simple here.)

We then have some flying leads over to the LinkIt board, for ground, +3.3v power, and the three analog outputs for x, y, and z axes.

The accelerometer breakout board I used for this demo is the MMA7361, which has three analog outputs. The chip was discontinued by Freescale, and Sparkfun no longer sells the breakout board. They have a similar one you could use, the ADXL335, which should work great. You can adapt this demo for whatever kind of accelerometer you are using, maybe even change it to a digital interface, since the LinkIt ONE board speaks I2C and SPI with ease.

Here we can see exactly where the flying leads come in for power and to the three analog inputs, A0, A1, and A2.

And finally, we neatly package up the prototype so it is self-contained and will fit under the motorcycle's seat:

That's it! The LinkIt ONE board has all the rest of the fun stuff already integrated and ready to use. Combined with some data services available from wot.io, we'll be up and running in no time!

Writing the Code

Let's walk through the code for this demo. You can get the code from our github repo to follow along.

Headers and Configuration

First, we need to include some standard headers from the Mediatek SDK. These give us access to the GPS module, and the GSM/GPRS radio for cellular data communications.

We also set up some #define statements to configure the GSM APN, the API hostname, and the Auth header.

#include <LGPS.h> 
#include <LGPRS.h> 
#include <LGPRSClient.h> 

// Change these to match your own config
#define GPRS_APN     "fast.t-mobile.com"
#define API_HOSTNAME "your_hostname_here.api.shipiot.net"
#define AUTH_HEADER  "your_auth_header_here"

Now let's define some variables we'll use later. These include the HTTP POST request template and the json data structure template. We wrap these in F(), which in Arduino-speak means "store this string in Flash, not RAM". This is good practice for long static strings to save some RAM space, but not strictly required for this small example.

We also have some global variables for building the request, for the GPRS cellular data client session, for the GPS data, and a C-style string buffer for sending the request.

String request_template = F("POST /bip/http/mediatek HTTP/1.1\r\n"  
                            "Host: " API_HOSTNAME "\r\n"
                            AUTH_HEADER "\r\n"
                            "Content-Type: application/json\r\n"
                            "Content-Length: ");
String data_template =    F("{\"x\":X,\"y\":Y,\"z\":Z,\"nmea\":\"NMEA\"}");  
String request;  
String data;  
String nmea;  
LGPRSClient c;  
gpsSentenceInfoStruct gpsDataStruct;  
char request_buf[512];  

Initializing the Board

Now on to the setup() function. This is your standard Arduino init function that will do all the one-time stuff at boot. We start out by setting the serial debug console speed to 115200 baud, call pinMode() to configure some digital output pins for the three LEDs on the board, and then set the LEDs to both red on, and the green one off.

The idea here is to use the LEDs as status info for the user. The first red LED turns off when the GPRS connects. The second one will start blinking while the GPS is acquiring a fix. And finally, the red LEDs will be off and the green LED will turn on to indicate that the system is ready. We also blink the green LED off once for every call to the HTTP endpoint, so the user knows it's working. (It would be good to check the return value from the API for this, but we don't do that in this simple demo.)

void setup() {  
  Serial.begin(115200);
  Serial.println("Starting up...");

  pinMode(0,OUTPUT);
  pinMode(1,OUTPUT);
  pinMode(13,OUTPUT);

  // Turn on red LEDs, turn off green
  digitalWrite(0,LOW);
  digitalWrite(1,LOW);
  digitalWrite(13,LOW);

You'll see throughout the code that there are Serial.print() calls, which report the status to the debug console. This is handy during development, but you'd probably remove these for a production system to save space and power.

For setting up the GSM data communications, we need to connect to the GPRS APN. The actual APN name you need to use is defined by your cellular carrier, and you set it in the #define statements at the top of the code.

  Serial.print("Connecting to GPRS APN...");
  while (!LGPRS.attachGPRS(GPRS_APN, NULL, NULL)) {
    Serial.print(".");
    delay(500);
  }
  Serial.println("Success");

Now we turn on the GPS chip, and delay for a second so it can get its bearings. (get it? get its bearings? ha.)

  Serial.print("GPS Powering up...");
  LGPS.powerOn();
  delay(1000);
  Serial.println("done");

So we've powered up the GPS and attached to the GPRS system, and we're going to call that Phase 1 complete. We turn off the first red LED, and move on to getting a GPS fix. That's the last bit of initialization to do, and we'll flip the LEDs to green when it's all done.

  // Phase 1 init complete,
  // so turn off first red LED
  digitalWrite(0,HIGH);

  waitForGPSFix();

  // Phase 2 init complete, LEDs to green
  digitalWrite(0,HIGH);
  digitalWrite(1,HIGH);
  digitalWrite(13,HIGH);

  Serial.println("Setup done");
}

Getting a GPS Fix

Let's take a look at the waitForGPSFix() function. It simply polls the GPS data until it indicates a good lock. We toggle the second red LED on every check to let the user know we're doing something and still waiting.

The GPS returns data formatted as NMEA sentences. The one we are interested in is the GPGGA sentence, which contains the location fix information. We check the character at offset 43 to see if it's a 1 - this magic number is the GPS Quality Indicator field, where 0 means not locked and 1 means locked. For example, in a sentence like $GPGGA,054218.000,2503.7173,N,12138.7356,E,1,08,1.0,18.2,M,15.0,M,,*6C (coordinates made up), the fixed-width time, latitude, and longitude fields put that quality flag - the 1 just after the E - right at offset 43.

Later on when we're initialized, we simply pass along the raw NMEA data for processing by the bip.io data workflow; you'll see that later on.

void waitForGPSFix() {  
  byte i = 0;
  Serial.print("Getting GPS fix...");
  while (gpsDataStruct.GPGGA[43] != '1') { // 1 indicates a good fix
    LGPS.getData( &gpsDataStruct );
    delay(250);
    // toggle the red LED during GPS init...
    digitalWrite(1, i++ == 0 ? LOW : HIGH);
    i = (i > 1 ? 0 : i);
  }
  Serial.println("GPS locked.");
}

Great! Now that we're all initialized, we'll have a look at the main loop() function.

The Main Loop

First off in the main loop, we read the accelerometer, and we print the accelerometer outputs to the debug console. You'll need to check your outputs and see what z-axis threshold makes sense for your particular chip. (A more sophisticated system would have an auto-calibration routine, and probably use data from all three axes.)

void loop() {  
  Serial.println("Reading accelerometer...");
  int accel_x = analogRead(A0);
  int accel_y = analogRead(A1);
  int accel_z = analogRead(A2);
  Serial.print("x: ");    Serial.print(accel_x);
  Serial.print(" \ty: "); Serial.print(accel_y);
  Serial.print(" \tz: "); Serial.println(accel_z);

Now we read the GPS data.

  Serial.print("Reading GPS: ");
  LGPS.getData( &gpsDataStruct );
  Serial.print( (char *)gpsDataStruct.GPGGA );

We take the GPS data and the accelerometer data and insert them into our json data template. Then we insert the json data into the HTTP request template, and finally turn it into a plain old C string. There's some commented-out code that will print the fully-formatted HTTP request to your debug console, to help if things aren't working.

  // Format the HTTP API request template
  nmea = String( (char *)(gpsDataStruct.GPGGA) );
  nmea.trim();
  data = data_template;
  data.replace("X", String(accel_x,DEC) );
  data.replace("Y", String(accel_y,DEC) );
  data.replace("Z", String(accel_z,DEC) );
  data.replace("NMEA", nmea );
  request = request_template + data.length() + "\r\n\r\n" + data;
  request.toCharArray(request_buf,512);

  // Uncomment this to print the full request to debug console
  // Serial.print(request);

Sending the Telemetry

Now it's time to send the data to the bip.io HTTP endpoint. We connect to the API, and if successful, write the request buffer out. Then we blink the green LED off for 250ms to give the user some feedback.

  Serial.print("Connecting to api... ");
  c.connect(API_HOSTNAME, 80);
  Serial.print("checking connection.. ");
  if (c.connected()) {
    Serial.print("Sending...");
    c.write((uint8_t *)request_buf, strlen(request_buf));
  }

  Serial.print("waiting...");

  digitalWrite(13,LOW);
  delay(250);
  digitalWrite(13,HIGH);

Finally, we optionally receive any data returned from the API and print it to debug, close the connection, and delay for our next cycle. (You could do some cool stuff in the data workflow, processing the telemetry and returning something useful, and this is where you'd check the response.) We're just delaying for a brief time, and not accounting for the time it takes to contact the API, because we don't need a precise cadence of telemetry transmission, just periodic checks.

  Serial.print("receiving...");

  // uncomment this to print the API response to debug console
  /*
  while (!c.available()) { 
    delay(1);
  }
  while (c.available()) {
    char output = c.read();
    Serial.print( output );
  }
  */

  Serial.println("Closing.");
  c.stop();

  delay(5000);
}

That's it! Now we'll take a look in the video below at how to create the bip.io workflow, and attach the telemetry output to more than one data service.

Creating the Workflow, and Testing

In this video, I will show you how to create a data workflow that accepts NMEA and accelerometer data into an HTTP endpoint, transforms the data, tests a condition, and if the condition passes, sends a fan-out of messages to both Twitter and Google Sheets endpoints.

And then, I'll show you a test of it all working!
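
Since the workflow itself is built in bip.io's visual editor, here is the same condition-and-fan-out logic expressed as a plain JavaScript sketch for illustration. The threshold value and the two output functions are hypothetical stand-ins for the Twitter and Google Sheets channels, not part of the actual workflow:

// Plain-JS sketch of the workflow logic (illustrative only -- the real thing is a bip.io graph).
var CRASH_Z_THRESHOLD = 200;   // raw analog units; calibrate against your own accelerometer

function nmeaToLatLon(gpgga) {
  // GPGGA fields: [2]=latitude ddmm.mmmm, [3]=N/S, [4]=longitude dddmm.mmmm, [5]=E/W
  var f = gpgga.split(",");
  var lat = Math.floor(f[2] / 100) + (f[2] % 100) / 60;
  var lon = Math.floor(f[4] / 100) + (f[4] % 100) / 60;
  if (f[3] === "S") lat = -lat;
  if (f[5] === "W") lon = -lon;
  return { lat: lat, lon: lon };
}

function handleTelemetry(msg) {   // msg = {x, y, z, nmea} as POSTed by the LinkIt ONE
  if (msg.z < CRASH_Z_THRESHOLD) {
    var pos = nmeaToLatLon(msg.nmea);
    var alert = "Possible crash! " + pos.lat.toFixed(5) + "," + pos.lon.toFixed(5);
    sendTweet(alert);                                          // stand-in for the Twitter channel
    appendSheetRow([Date.now(), msg.x, msg.y, msg.z, alert]);  // stand-in for Google Sheets
  }
}

// Stubs so the sketch runs standalone:
function sendTweet(text) { console.log("tweet:", text); }
function appendSheetRow(row) { console.log("sheet row:", row); }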

Conclusion

So now we have a working crash alert prototype! I think I'll work on this some more and wire it into the bike's electrical system, so it automatically runs and always has power. I've already got some ideas for additional features and improvements, and I'm sure you've thought of some of your own.

There's a lot of great hardware out there these days, but we need to be able to use the data that all these IoT devices produce for us. Easy access to data services through a unified interface is a powerful productivity catalyst, and enables even more people to bring their ideas to life.

I knew that Twitter was reliable and worked well for sending me alerts, and Google Sheets makes a good data store for later processing. I didn't have to spend any time reading their API documentation or experimenting, because wot.io's data service integrations and the bip.io workflow made it all just work. Plus, if this turns into a product, I know the support is there to scale up massively. Even better, I can add features and intelligence to it by changing the server-side workflow without ever touching deployed hardware!

Grab yourself a LinkIt ONE board, and sign up for some data services, and you'll be well on your way to shipping your own Internet of Things device like this!

Who would have thought a dev board and data service exchange could make an excellent bit of safety gear?

September 21, 2015 / Posted By: wotio team

Enterprises employ a large number of different software solutions to run their companies and have long looked to various forms of the enterprise portal to help manage the complexity of this wide array of software. As a data service exchange, wot.io is well aware of the challenge of pulling the important components of multiple data services into one view. With this in mind, we have developed an integration with JBOSS Portal, the popular open source enterprise portal system, as just one example of how you can manage that complexity.

The wot.io operating environment (OE) provides essential infrastructure to run, manage, and connect IoT applications from our partners, including device management platforms (DMPs) and a wide array of data services such as storage, analytics, scripting, Web API automation, real-time processing, visualization, and more. The main focus of an IoT solution lies in these essential data services; however, the wot.io OE itself also requires configuration and administration. We used these administration interfaces for our first portlet integrations with JBOSS.

The wot.io OE administration tools are built as individual components of HTML suitable for placement in various portal frameworks as a portlet, widget, web thing, gadget, etc. These units can be composed together to provide the set of tools most useful for a given user or use case. We have portlets for user management (adding, updating, removing), group management, API permissions, and more. The portlets are built around a hub in the browser, allowing them all to communicate efficiently with a given wot.io OE cluster over a single WebSocket connection.

Using the JBOSS portal tools, the portlets can be added to and arranged with other JBOSS portlets on a single portal page.

The design of the admin components as portlets makes the portal layout flexible, and the portlets all leverage a single log-in even though they are separate components. Communication through the hub via a persistent WebSocket connection also makes the portlets "live," in the sense that they can both send new settings to the connected wot.io OE and receive updates to dynamically refresh their status.
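
As a rough sketch of the hub idea (the names, topics, and URL below are illustrative, not the actual wot.io OE API):

// Several portlets sharing one WebSocket through a simple browser-side hub.
// Everything here (wotHub, the topic names, the URL) is illustrative.
var wotHub = (function () {
  var socket = new WebSocket("wss://oe.example.com/ws");   // placeholder URL
  var subscribers = {};
  socket.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    (subscribers[msg.topic] || []).forEach(function (cb) { cb(msg); });
  };
  return {
    subscribe: function (topic, cb) {
      (subscribers[topic] = subscribers[topic] || []).push(cb);
    },
    send: function (topic, body) {
      socket.send(JSON.stringify({ topic: topic, body: body }));
    }
  };
})();

// A user-management portlet and a group-management portlet can then share the same connection:
wotHub.subscribe("users.updated", function (msg) { /* refresh the user table */ });
wotHub.send("users.create", { name: "alice" });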

This video shows the full process for adding wot.io OE administration portlets to a JBOSS portal.

As a data service exchange, the next useful extension of this pattern is to add key components from deployed data services to the portal as well. This allows a user to include status information, reports, graphs and visualization, and other important information together in a single portal view even if the information comes from several different data services. Stay tuned for more examples of this approach and other solutions for making seamless IoT solutions.

September 16, 2015 / 5GPPP, RAN, 4G, 5G, LTE / Posted By: Kelly Capizzi

Flexibility is certainly a key principle to the foundation of 5G RAN design, but it is only one part of the story. The second principle? Fine integration of existing and new radio access technologies. This is according to InterDigital Europe’s Alan Carlton, Vice President, and Alain Mourad, Senior Manager, in the final article of a three-part series on 5G featured in RCR Wireless News’ Reader Forum.

In the article, Alan and Alain discuss the role radio access networks will play in the future of 5G. The two experts explain that a practical and viable approach builds on the principles of flexibility and fine integration. 5G will largely be an evolution of 4G, but the real difference is that 5G will aim at an inherently flexible design that can federate or integrate the multitude of RATs effectively, as stated in the article.

The pair also references two European H2020 5G Public Private Partnership projects, Xhaul and ICIRRUS. InterDigital recently announced the kick-off of Xhaul, and InterDigital Europe’s involvement as a work package leader on the project. Xhaul aims to develop a 5G integrated backhaul and fronthaul transport network to flexibly and dynamically interconnect the 5G radio access and core network functions.

Read the full RCR Wireless News article here.

September 15, 2015 / wot.io, IFA Berlin, AllSeen Alliance / Posted By: wotio team

Last week wot.io was excited to be traveling to Berlin, participating in the AllSeen Alliance booth at IFA. IFA Berlin is one of the largest consumer electronics and home appliance shows in the world, and it was an amazing experience.

The AllSeen Alliance's AllJoyn framework is designed to provide easy communication and interoperability between devices on a proximal network, like a typical home with a WiFi network. To show how different types of products, all running AllJoyn, can work together, the AllSeen Alliance booth at the show had a shared network with a dozen member companies all participating.

AllSeen Booth Demos

The booth had an amazing array of smart AllJoyn products including:

All of these products had AllJoyn built into them, making them discoverable, allowing them to send notifications, and making it possible to control and program them. And because they all spoke AllJoyn, controllers from one company, like the switches from LeGrand, could be configured to manage any of the other devices.

Cloud Services

In addition to providing the specification for all of these devices to communicate on the booth network, AllJoyn also has a provision for allowing communication between local devices and the cloud with their Gateway Agent specification. This allows devices to securely interact with cloud-based applications, such as:

  • Affinegy, providing cloud interfaces to local AllJoyn devices via their CHARIOT platform
  • Kii, providing device management and cloud services

And, of course, wot.io. Working with Two Bulls, we were able to get a feed of all notifications being sent across the local AllJoyn network. Every time a device sent a notification, it was routed to Firebase where we then pulled it into the wot.io operating environment. We then configured some data services to show what you might do using various notifications.

We wrote a script in scriptr.io to parse the incoming messages and look for specific notifications. To make it interesting, we joined a few events together, sending a temperature reading back to wot.io each time we saw a "refrigerator closed" event. This allowed us to show a real-time event in the booth and still have something interesting to graph.
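
The gist of that script looks something like the sketch below (plain JavaScript; the field names and message shape are assumptions for illustration, not the actual AllJoyn notification format):

// Illustrative sketch only -- field names ("eventType", "temperature") are assumed.
function onNotification(notification) {
  if (notification.eventType === "refrigerator_closed") {
    // join the door event with a temperature reading and pass it along
    return {
      source: notification.deviceId,
      temperature: notification.temperature + (Math.random() - 0.5),  // small random jitter, as described below
      timestamp: Date.now()
    };
  }
  return null;   // ignore everything else
}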

We then routed the incoming temperature reading to Circonus and graphed it. We also added some randomness to the temperature readings to make the graph more interesting and show some of the things you can do with scriptr.io. The resulting graphs clearly showed all of the activity during the day, and revealed some unexplained refrigerator open events overnight!

It was great to work with our fellow AllSeen Alliance members to put together so many compelling demos of the smart home that is already a reality.

Other coverage of the AllSeen Alliance at IFA Berlin:

September 11, 2015 / atmel, xplained, samd21, winc1500, i/o1, bipio, shipiot, IoT, wotio, tutorial, demo / Posted By: wotio team

Getting Started

If you're an Atmel fan like I am, you're probably excited about their Atmel Xplained prototyping and evaluation platform for Atmel AVR and ARM microcontrollers. These platforms and extension boards, like the WINC1500 WiFi module that we use in this demo, make it fast and simple to get a hardware prototype up and running.

Along with the hardware, you'll find a broad selection of example code and projects included with the Atmel Software Framework, which is a great help when first learning a new MCU or peripheral chip.

I'm going to walk you through a demo application for the SAM D21 Xplained and WINC1500, which will send temperature and light levels to a dynamic chart visualization using bip.io's powerful workflow tools. This is a potent combination for rapid development and deployment of any Internet of Things device.

(This demo may end up monitoring the hot pepper plants in my garden when it's done!)

Prerequisites

You will need:

In addition to the above objects, you will also need:

Completely optional, but delicious:

  • One hot pepper plant for datalogging

Making a bip.io Data Visualization Workflow

We're going to start out by configuring a workflow on bip.io. This will be a simple one to demonstrate the ease of communication and data visualization, but feel free to experiment and extend your own workflow once it's working.

Connecting Your Boards

The SAM D21 Xplained board has three connectors for extension board modules. To follow along with this demo, connect the WINC1500 Xplained module to the EXT1 header, and the I/O1 Xplained module to the EXT2 header. Leave EXT3 unconnected, and hook the Debug USB connector to your computer.

Your setup should look like the image below:

Developing the SAM D21 Application

You can check out the Atmel Studio project files from github and use them to follow along; the link is at the top of this article.

Building and Loading

With the project open in Atmel Studio, go to the Build menu and select Build Solution, or you can just hit F7.

When complete, you should see zero errors listed in the panel at the bottom. If there are any errors, correct them in your code, save all the files and try the build again.

When your build is complete with zero errors, you can load it to the SAM D21 Xplained board using the tools in Atmel Studio. From the Tools menu, select Device Programming:

The Device Programming dialog will appear. You need to select the Tool type and the Device itself. Tool will most likely be EDBG and Device will be something like ATSAMD21J18A. This could vary based on your details.

Click the Apply button to connect to the board and put it into programming mode. The amber Status LED will blink steadily at ~2Hz when the programming tool is connected. Additional options appear. Select the Memories item from the list on the left. Keeping all the default options, just hit the Program button.

When done, you should see that the flash verified OK, and your program will be loaded and begin running. Close the Device Programming dialog. The amber Status LED should now be on solid, and will blink off briefly when serial console output is printed by the loaded application.

Now we need a serial terminal!

Serial Terminal Setup

You'll need a serial terminal program to talk to your board. As Microsoft no longer includes the venerable HyperTerminal with Windows, you'll need to get a terminal app. You can still get HyperTerminal from Hilgraeve, or you can use the open-source Tera Term which we use for this demo.

You will first need to know which COM port number Windows assigned to your board. Open the Device Manager (right-click on Start menu in Windows 8) and look under Ports (COM & LPT) for your board (there may be others in there, too). Note the COM number, COM5 in this instance:

Open Tera Term, and select Serial Port. We will do the detailed configuration of baud rate, etc., in a moment:

Go to the Setup menu and choose Serial Port..., then make sure you have a baud rate of 115200, 8 data bits, no parity, and 1 stop bit. Standard stuff, here.

Hit OK to close the settings dialog, and then press the reset button on the SAM D21 board, right next to the debug USB connector. This will restart the board and you should see the debug console output in the terminal. It should look something like this:

You now have data points being sent once per second to your bip.io workflow! Simply pull up the Bip you created, double-click on the Data Visualization, then the Chart tab, and open the chart URL to see sensor telemetry in near-real-time.

A Note About Firmware & Drivers

The WINC1500 module has firmware loaded, and that firmware version needs to match what's expected by the Atmel Software Framework version you are building against. You may see a message something like this on the debug console:

...
(APP)(INFO)Chip ID 1502b1
(APP)(INFO)Firmware ver   : 18.0.0
(APP)(INFO)Min driver ver : 18.0.0
(APP)(INFO)Curr driver ver: 18.3.0
(APP)(ERR)[nm_drv_init][236]Firmware version mismatch!
main: m2m_wifi_init call error!(-13)
....

If you get a message like the one above, you'll need to update the firmware. You can find complete instructions in the Getting Started Guide for WINC1500 PDF on Atmel's website.

In brief, you load the WINC1500 Firmware Update Project example inside Atmel Studio, build it, and then run a batch file that loads the compiled firmware onto the WINC1500 board via the SAM D21 board. It is very simple to do, although I did run into a problem with an unquoted variable in one of the batch files and a space in the pathname.

Once the firmware is updated, and matches the ASF version you are using to build the demo project, it should all work just fine.

Conclusions

After seeing this demo, I'm sure you agree that Atmel's Xplained boards and bip.io make a potent combination for rapid prototyping and product development. With this demo as guidance, you can quickly start using the powerful data services bip.io ties together, and get your ideas flowing! With time-to-market such a critical factor these days, these tools will certainly help fuel the IoT revolution, and your future products.

Get started now! Here are those links again:

As for this demo, it may just end up monitoring my hot pepper plants...

September 10, 2015 / 3GPP, 5G, RAN / Posted By: Kelly Capizzi

The 3rd Generation Partnership Project (3GPP), the mobile broadband standards body, has re-elected Diana Pani, senior manager, InterDigital Labs group, as Vice Chair of its Radio Access Networks (RAN) Working Group (WG) 2 for a second term.

3GPP RAN WG2 is in charge of the Radio Interface architecture and protocols (MAC, RLC, and PDCP), the specification of the Radio Resource Control protocol, the strategies of Radio Resource Management, and the services provided by the physical layer to the upper layers. Diana was first elected to the two-year term of Vice Chair of 3GPP RAN WG2 in 2013 and was re-elected by acclamation during RAN2#91 from August 24-28, 2015 in Beijing, China.

Since joining InterDigital in 2004, Diana has worked on UMTS and LTE product design, research and development and standardization assignments. She is recognized as an expert in 3G and 4G cellular radio access, most notably L2/L3 protocol design as well as L1 system design.

“3GPP RAN WG2 is seen as one of the most important working groups for 5G design, as they will define the Radio Interface Architecture and Protocols,” said Dr. Byung K. Yi, Executive Vice President, InterDigital Labs, and Chief Technology Officer at InterDigital. “This re-election underscores the strength of our engineers’ ability to lead in the research community and validates our strong contributions to 3GPP over the years.”

The 3GPP unites seven telecommunications standard development organizations (ARIB, ATIS, CCSA, ETSI, TSDSI, TTA, and TTC) to produce the reports and specifications that define 3GPP technologies. 3GPP specifications and studies are contribution-driven, by member companies, in Working Groups within four Technical Specification Group levels – RAN, Service & Systems Aspects, Core Network & Terminals and GSM EDGE Radio Access Networks.

September 9, 2015 / PerceptFX, 4K, Video / Posted By: Kelly Capizzi

VideoEdge, a leading online publication dedicated to content production and delivery, recently published, “Pre-processing Could Help Push 4K/8K Through Limited Pipeline,” featuring Mari Silbey’s coverage of the Vantrix announcement that its open transcoding platform now supports InterDigital’s PerceptFX.

PerceptFX works by filtering raw video, isolating and removing bits of data that are invisible to the human eye, VideoEdge quotes from the news analysis published by Silbey, senior editor, Cable/Video at Light Reading. PerceptFX deals with the perceptual pre-processing, then hands the content off to an encoder. Vantrix Media Platform, with support from PerceptFX, can be applied to offline video and live broadcasts.

In August 2015, PerceptFX and Vantrix debuted the solution in a live transcoding environment at Cable Labs Summer Conference. Later this week, the two companies will demonstrate the Vantrix Media Platform again at the International Broadcasting Convention (IBC) in Amsterdam.

September 8, 2015 / DASH-IF, 5G, PerceptFX / Posted By: Kelly Capizzi

With 5G quickly approaching, it is important to examine the opportunities and challenges that this next generation will bring to mobile video services. Recently, several industry-leading companies discussed this particular topic at “Video Meets Mobile – The 5G Opportunity,” a public workshop co-sponsored by The DASH Industry Forum (DASH-IF).  

The workshop was broken out into seven sessions and hosted in San Diego on August 18-20, 2015 along with DASH-IF’s 11th face-to-face meeting. In the session, “New Technologies and Enablers,” pioneering video research by InterDigital was highlighted alongside contributions from industry leaders Fraunhofer, Microsoft and Qualcomm.  

Yuriy Reznik, Director, InterDigital Labs, discussed user and environment-aware media delivery in a panel session focused on new technologies and enablers that included other industry experts such as Tim Leland, Vice President, Product Management, Qualcomm; Yago Sanchez, Research Scientist, Fraunhofer; and Kilroy Hughes, Digital Media Architect, Azure Media Services, Microsoft.  

Yuriy’s presentation featured InterDigital’s perceptual pre-processor, PerceptFX, and the relevance its solutions have to current and future video industry trends such as Ultra HD and HDR. The presentation also discussed new opportunities that 5G may present for the video industry and DASH-IF.  

“The environment-aware perceptual processing approach works well with today’s latest codecs, streaming formats, and over existing networks,” said Yuriy. “The evolution of video technologies and arrival of ultra-low delay 5G networks could make the perceptual processing approach increasingly appealing, enabling additional improvements in quality and degree of realism in reproduction of videos.”  

To learn more about the event and view the full presentation, please click here.

September 8, 2015 / MPEG-DASH, DASH, DASH-IF, OTT / Posted By: Kelly Capizzi

In the past year, the video industry has seen significant growth of DASH deployments as well as its more mature implementations, including MPEG-DASH 2nd edition standard. This exponential growth makes Streaming Media’s recently published and completely vendor-driven 2015 MPEG-DASH Superguide a video industry must-read.

The Superguide opens with, “MPEG-DASH State of Affairs,” a DASH Industry Forum (DASH-IF) article written by Microsoft’s Iraj Sodagar, Principal, Multimedia Architect, and InterDigital’s Alex Giladi, Senior Manager, Video Software Architect. As the industry moves towards MPEG’s 3rd edition of DASH, Iraj and Alex discuss the technical developments of this year, DASH-IF’s recent activity and future work in MPEG-DASH beyond the 3rd edition.

Also, the two experts encourage anyone deploying or planning to deploy OTT streaming services and solutions to get involved with DASH-IF. The organization is responsible for promoting market adoption of the MPEG-DASH standard. A successful standard is one that is widely deployed by industry and enables interoperability among different vendors' services and solutions with no pain, as stated in their article.

Click here to download the 2015 MPEG-DASH Superguide and read the full article!

September 1, 2015 / iot, wot.io, complete IoT solution / Posted By: wotio team

As part of our preparation for the IoT Evolution Expo in Las Vegas in August, we were happy to be able to work with some of our IoT hardware and data service partners. Together, we built a demo showing how several data services from the wot.io data service exchange came together to make up a complete IoT solution.

This IoT Proof of Concept was based on events and readings from a coffee maker and some fans. We selected these because they are familiar and, more importantly, they demonstrate the types of instrumentation that can be applied to a wide range of use cases across many business verticals.

Multitech engineers added sensors to the coffee maker to measure the flow of water when coffee was made. These sensors were connected to a Multitech gateway, which then sent data to Stream Technologies' IoT-X platform.

Stream sent the device data to wot.io where we routed it to a set of data services that could then operate on the data. bip.io, scriptr.io, and Circonus were all configured to receive and operate on the incoming device data.

Device data was then routed to Solair where it was integrated with other information about the coffee maker in Solair's system to create a full application dashboard. This application provided all of the information you need for managing your infrastructure, combining asset data, like parts lists and schematics, with live sensor readings.

You can see a sample of the functionality provided by the various systems in the video below. Thanks to our partners for their help in putting together a great demo so quickly!

More reading on IoT Evolution:

August 17, 2015 / Posted By: wotio team

Last October, at ARM TechCon, we showed a demo of a NXP LPC1768 connected to a WS2812 24 color RGB LED. The hardware was programmed using the ARM mbed Development Platform and connected to the mbed Device Server. We used the wot operating environment to seamlessly integrate the data coming off the devices to a search engine, an analytics package, and a business intelligence platform.

In this tutorial, we are simply going to cover the basics of developing an application with the mbed compiler and a Freescale FRDM-K64F. We will connect the on board Freescale FXOS8700CQ 6 axis accelerometer and magnetometer up to a bip.io workflow.

Prerequisites

In order to follow along with this tutorial you will need:

The cost of the board is around $35, and you probably have spare cables lying around. For my setup, I used Internet Sharing on my MacBook Pro to connect the FRDM-K64F to the Internet. Optionally, you can wire your board to your switch, and it will receive a DHCP lease as part of the startup sequence.

The code for this tutorial can be imported directly from the public mbed repository.

Configuring your workflow

I am going to reuse a workflow from a prior tutorial, so if you have already done the Photon tutorial, this will feel like old hat. If you haven't, then first go to ShipIoT.net and sign in. You will then need to Create A Bip to create a new blank canvas:

blank canvas

You can then click on the central circle to Select Event Source:

select event source

Here we will select Incoming Web Hook to create an HTTP endpoint to which our FRDM-K64F will send its data. This is the starting point of our workflow. Once you select that, you will be presented with an icon in the center:

web hook icon

Above the central canvas you'll see a URL in which we want to replace Untitled with a proper path. For this workflow, we'll name it accel, which should produce a URL of the form:

http://<your_username>.api.shipiot.net/bip/http/accel

You can view this URL by clicking on the Hide/Show link icon next to the URL. The next step will be to add a Data Visualization element to the workflow, so that we can chart the values coming from the accelerometer. If you click on Add An Action, you will be presented with an options panel:

Select a Pod

If you click on the Data Visualization option, you will be presented with a list of actions.

action selection

Here we want to select View Chart to create a graphical chart of the incoming web hook data. On the main canvas, we then can connect the Incoming Web Hook icon to the Data Visualization icon by dragging from one to the other:

drag and drop edge

This will link the incoming data to the chart. To configure the parsing options, we'll open the Parser tab and create some representative JSON in the left hand panel:

{ "x": 0, "y": 0, "z": 0}

and then click the Parse button to generate the associated JSON schema. We can then return to the Setup tab and double click on the Data Visualization icon to open up its settings.

First we can set the X axis value to the time that the FRDM-K64F sent the request by setting it to a custom value of Invoke Time:

Invoke time

We can then set the Y1 and Y2 values to the values X and Y respectively:

x and y

Clicking OK will then save these settings. Opening the settings a second time will present us with a Chart tab:

chart tab

This will display the data as it comes in off of the webhook. The last thing we need to do is set the authorization settings on the URL. For our purposes we'll use HTTP Basic Authorization with a username and password of test.

Basic Auth

The important thing here is to grab a copy of the HTTP Request Header. We will need to modify the source of the program to use this value to authenticate against the server. Feel free to use any username and password you'd like. Finally click Save and your workflow will be running.

running workflow

Developing the ARM mbed Application

As a picture is worth a 1000 words, a 15:48 @ 30fps video is worth 28,440 pictures:

In this video, I cover writing the application, and demonstrate the charting interface of the workflow we created above. If you don't feel like watching the video, just import the code into your mbed Developer account, and edit the three lines commented in the source to reflect your username and authentication choices.
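As a rough guide, the values you are editing look something like the lines below. The identifiers are illustrative (the imported project may name them differently), and the Authorization value is just the header you copied from the Auth tab, which for test/test is the Base64 string shown:

// Illustrative names only -- edit the three commented lines in the imported
// project to match your own bip.io account details.
const char* BIPIO_HOST = "yourname.api.shipiot.net";   // your shipiot.net subdomain
const char* BIPIO_PATH = "/bip/http/accel";            // the web hook we named 'accel'
const char* BIPIO_AUTH = "Basic dGVzdDp0ZXN0";         // copied from the Auth tab (base64 of test:test)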

You can then compile, download, and flash your FRDM-K64F board, and everything should just work! If it doesn't, odds are good that it is a networking issue. Should your device not be able to acquire an IP address, you won't see the debug messages on the serial console. Similarly, should the webhook not work, you can check the Logs tab for reasons. Often it is merely a copy/paste bug regarding the authentication token.

Once you have it working, you can extend your workflow to perform additional actions. You can save your data to a Google Spreadsheet, or have it send a tweet to Twitter, or even control your Nest thermostat, or all of the above.

August 11, 2015 / Posted By: wotio team

Getting Started with the Photon

The Particle Photon is a small open source WiFi dev board with an embedded STM32 ARM Cortex M3 microcontroller:

It supports a number of analog and digital inputs, and with some work can communicate with devices via TWI, I2C, or SPI. For this example, we're going to connect it to a triple axis accelerometer breakout board from Sparkfun:

This board has a Freescale MMA8452Q accelerometer which supports an I2C interface. The total component cost of these two prototyping boards is about $30USD, making it rather inexpensive.

For our development environment, we will need both the Particle Dev and Particle CLI tools. All of the device driver code will be developed in the IDE, and the CLI will be used to setup the web hook to send our event data to Bip.io.

Writing a Device Driver

The Particle firmware mimics the Arduino API in a lot of ways, but there are some significant differences. These differences largely take three forms:

  • new methods specific to the Photon's hardware
  • different header files which need to be included
  • compiler is remote in the cloud!

The fact that your compiler is not local to your machine means you need to bundle your projects in a Particle Dev specific way. All of your files must live in the same directory and are sent to the compiler as a multi-part MIME POST request over HTTP. This means you must supply all of your library code each time you compile or make a change.

The code for this device driver can be found at:

https://github.com/WoTio/shipiot-photon-mma8452Q

And you can get it via git clone using:

git clone https://github.com/WoTio/shipiot-photon-mma8452Q

The data sheet for the MMA8452Q can be found at:

http://www.freescale.com/files/sensors/doc/data_sheet/MMA8452Q.pdf

And it goes without saying, but you should download and save the data sheet somewhere for future reference. This chip has a lot of features that we aren't going to use or cover, and the data sheet provides information on using it for a wide range of applications including: tap detection, portrait vs. landscape orientation detection, and freefall detection.

The first file we're going to look at is the mma8452.h file:

Here we define a C++ class that will model the state of our MMA8452 accelerometer. Our default constructor will set the I2C bus address to 0x1d. If you jumper the pads on the bottom of the accelerometer board, you can change the address to 0x1c. If you do this, you will need to supply that address to the constructor.
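The listing isn't reproduced here, but a minimal sketch of that interface looks roughly like this (member names are illustrative and may not match the repository exactly):

// mma8452.h -- sketch of the driver interface described in this post
#include "application.h"            // Particle firmware header (brings in Wire)

class MMA8452 {
public:
    // 0x1d is the default I2C address; pass 0x1c if the address jumper is bridged
    MMA8452(uint8_t address = 0x1d) : addr(address) {}

    void begin();                                     // init I2C, set scale & rate, start output
    bool available();                                 // true when a new x/y/z sample is ready
    void read(int16_t* x, int16_t* y, int16_t* z);    // fetch the latest 12-bit samples

private:
    void standby();                                   // clear the ACTIVE bit so registers can be written
    void start();                                     // set the ACTIVE bit to resume data output
    void rate(uint8_t factor);                        // output data rate factor
    void scale(uint8_t g);                            // full scale range: 2, 4, or 8 G
    void in(uint8_t reg, uint8_t* buf, size_t len);   // read bytes starting at a register
    void out(uint8_t reg, uint8_t* buf, size_t len);  // write bytes starting at a register
    uint8_t addr;
};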

In our initial setup phase, we will call begin which will initialize the I2C interface, place the board into standby mode, and then set the scale and data rate to their default values. Finally it will turn the data output on by exiting standby mode:

Setting the board to standby mode is done by clearing the low bit on the register 0x2a:

We start data flow by toggling this bit back to one:
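A minimal implementation of those two toggles might look like the following; register 0x2a is CTRL_REG1 and its low bit is the ACTIVE flag (treat this as a sketch rather than the exact code from the repository):

// Clear bit 0 of CTRL_REG1 (0x2a) to put the part into standby
void MMA8452::standby() {
    uint8_t c;
    in(0x2a, &c, 1);
    c &= ~0x01;
    out(0x2a, &c, 1);
}

// Set bit 0 of CTRL_REG1 (0x2a) to leave standby and start streaming samples
void MMA8452::start() {
    uint8_t c;
    in(0x2a, &c, 1);
    c |= 0x01;
    out(0x2a, &c, 1);
}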

The Freescale MMA8452 needs to be in standby mode to modify any of the control registers. To modify the data rate, we can write a rate factor to the control register 0x2a:

The rate factor defaults to 000, which per table 55 of the data sheet amounts to 800Hz. The wire speed on the Photon has two settings, 100kHz or 400kHz, both of which are more than sufficient by a couple orders of magnitude to support the output data of our device. And since we're going to drive this off of a 5V 1A mains wired power supply, we're not going to worry about the low power modes. We could easily lower the sample rate, as we are unlikely to update the web API that frequently. Changing the output data rate to something closer to 2x your polling frequency should be adequate.

To configure the scale (the range in G that our 12 bits represent) of the x, y, and z components, we need to write to the low two bits of the register at address 0x0e.

These bits per table 16 set 2G (00), 4G (01), or 8G (10). Currently the value of 11 is reserved, and should not be supplied. Since we're unlikely to swing this around like a dead cat, 2G is sufficient for testing. If you'd like to handle larger swings, feel free to bump the range up to 8G!
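Sketches of the two setters might look like this, assuming (per the data sheet) that the data-rate field occupies bits 3-5 of CTRL_REG1 and the full-scale field the low two bits of XYZ_DATA_CFG; both assume the caller has already put the part in standby:

// Write the output-data-rate factor into bits 3-5 of CTRL_REG1 (0x2a); 000 = 800Hz
void MMA8452::rate(uint8_t factor) {
    uint8_t c;
    in(0x2a, &c, 1);
    c = (c & ~0x38) | ((factor & 0x07) << 3);
    out(0x2a, &c, 1);
}

// Write the full-scale range into the low two bits of XYZ_DATA_CFG (0x0e):
// 00 = 2G, 01 = 4G, 10 = 8G (11 is reserved)
void MMA8452::scale(uint8_t g) {
    uint8_t fs = (g == 8) ? 0x02 : (g == 4) ? 0x01 : 0x00;
    uint8_t c;
    in(0x0e, &c, 1);
    c = (c & ~0x03) | fs;
    out(0x0e, &c, 1);
}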

Finally, we need a way to test for the availability of data, and a way to retrieve that data once we have some. To test for availability of xyz data, we check bit 3 of the status register at address 0x00.

If we were only interested in the fact that one axis changed, or only wanted to look for movement in one direction, we could query other status register bits, but this is good enough for now.

Then to read the data once available, we can make a call to read the 6 bytes starting at address 0x01. This will give us X,Y,Z in MSB order, wherein we need to trim the bottom 4 bits of the LSB byte (as the sample size is 12 bits).
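Sketched out, the availability check and the read might look like this (again illustrative, not the verbatim repository code):

// Bit 3 (ZYXDR) of the status register at 0x00 is set when new x/y/z data is ready
bool MMA8452::available() {
    uint8_t status;
    in(0x00, &status, 1);
    return (status & 0x08) != 0;
}

// Read 6 bytes starting at 0x01 (MSB/LSB pairs for x, y, z) and shift away the
// unused low 4 bits of each LSB to recover the signed 12-bit samples
void MMA8452::read(int16_t* x, int16_t* y, int16_t* z) {
    uint8_t raw[6];
    in(0x01, raw, 6);
    *x = (int16_t)((raw[0] << 8) | raw[1]) >> 4;   // arithmetic shift keeps the sign
    *y = (int16_t)((raw[2] << 8) | raw[3]) >> 4;
    *z = (int16_t)((raw[4] << 8) | raw[5]) >> 4;
}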

The actual input and output are done by a pair of routines which simply take the destination register along with an input or output byte array and its length. The available and read methods use the in method:

Whereas the standby, start, scale and rate methods use the out method to update the register values:
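Using the Wire library on the Photon, the pair of helpers could be as simple as this sketch:

// Read 'len' bytes starting at register 'reg'
void MMA8452::in(uint8_t reg, uint8_t* buf, size_t len) {
    Wire.beginTransmission(addr);
    Wire.write(reg);
    Wire.endTransmission(false);               // repeated start so the read follows immediately
    Wire.requestFrom(addr, (uint8_t)len);
    for (size_t i = 0; i < len && Wire.available(); i++) {
        buf[i] = Wire.read();
    }
}

// Write 'len' bytes starting at register 'reg'
void MMA8452::out(uint8_t reg, uint8_t* buf, size_t len) {
    Wire.beginTransmission(addr);
    Wire.write(reg);
    for (size_t i = 0; i < len; i++) {
        Wire.write(buf[i]);
    }
    Wire.endTransmission();
}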

Writing a Sketch

The application we're going to use to forward our accelerometer data to our web application will simply initialize the accelerometer and then publish an event every second with the most recent values of x, y, and z.

For the sake of our sanity, we can also log the x, y, z values to the Serial interface, which will allow us to record the values as we see them. If you supply power to the Photon via your computer's USB, the Serial interface shows up as a USB modem / serial device.
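Put together, the sketch is only a couple dozen lines; the version below is a hedged approximation of the one in the repository (the event name accel is what our webhook will subscribe to later):

// accel.ino -- publish the latest x/y/z reading once per second
#include "mma8452.h"

MMA8452 accel;                 // defaults to I2C address 0x1d
char payload[64];

void setup() {
    Serial.begin(9600);        // USB serial console for local debugging
    accel.begin();
}

void loop() {
    if (accel.available()) {
        int16_t x, y, z;
        accel.read(&x, &y, &z);
        snprintf(payload, sizeof(payload), "{\"x\":%d,\"y\":%d,\"z\":%d}", x, y, z);
        Serial.println(payload);            // log to the serial console
        Spark.publish("accel", payload);    // publish to the Particle cloud (Particle.publish on newer firmware)
    }
    delay(1000);
}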

You can understand the setup() and loop() functions in the context of the main.cpp file from the firmware. The .ino file gets translated to C++: some header files are added, and your code ends up referenced from the main() function. The activities taken by main() are:

  • set up the WLAN interface
  • wait until the WLAN is ready
  • call setup() on the first pass through the loop
  • call loop() on each iteration after that, forever

Should the WLAN interface fail to connect, the system will stay in the wait state, which means the LED will continue to blink green, and your application will not run.

Wiring up the Board

Now that we have our software ready for testing on hardware, it is a good idea to wire up the nets on a breadboard for testing.

breadboard

For testing we connect the USB to our computer so we can watch the Serial output. For the pins we connect from the Photon to the breakout board:

  • 3.3V to 3.3V
  • GND to GND
  • D0 to SDA
  • D1 to SCL

Following Sparkfun's app note, I'm using 2 x 330Ω resistors between D0 and SDA and D1 and SCL. You should see roughly 3.3V on pins 4 and 5 of the breakout board when the application is idle (I2C is active low). If you have a raw MMA8452Q, look at figure 4 on the data sheet. To wire it up, you will need:

  • 2 x 4.7kΩ pull up resistors tied to 3.3V from pin 4 (SDA) and pin 6 (SCL)
  • a 0.1µF bypass capacitor attached to pin 1 (VDDIO)
  • a 0.1µF capacitor tied to GND and pin 2 (BYP)
  • a 4.7µF bypass capacitor attached to pin 14 (VDD)

As we're going to wave the accelerometer around, I am going to fix mine to a perma-proto board. We can use a fairly simple setup:

circuit diagram

Here I'm using:

  • 5 x 6 pin female headers
  • 2 x 330Ω resistors for current limiting
  • 22 gauge wire jumpers

For the wiring, I'm going to:

  • jumper pin 1 of SV3 to pin 6 of SV1
  • jumper pin 4 of SV3 to pin 1 of SV1
  • tie one resistor to pin 6 of SV2
  • tie the other resistor to pin 5 of SV2
  • jumper the resistor tied to pin 6 of SV2 to pin 5 of SV1
  • jumper the resistor tied to pin 5 of SV2 to pin 4 of SV1

This way I can replace both the breakout board and the Photon, or simply reuse them on other projects. I've also added a 6th header on the other side of my board so I can set up a second app with some analog sensors on the right side:

perma-proto board top

The bottom of the board looks like:

perma-proto board bottom

As you can probably tell, I've reused this board for a few other projects, but that has mostly had to do with resoldering the jumpers for the I2C pins.

At this point, you should be able to set up your WiFi connection using the Particle phone app, or you can use the particle setup wifi CLI command to configure your board. Once your board is connected to your WiFi, you can use the compile and upload code using the cloud button to flash the app onto your device.

Configuring your workflow

Over at Shipiot.net, you can sign up for a free bip.io account. Bip.io is a data service that provides integrations into a number of web applications. It allows you to create automated workflows that are attached to your device's data.

For this tutorial, we will connect our data to a web based graph. We will use this graph to visualize the data. Later we could attach actions based on patterns we discover in the data. For example, if we attached the Photon and accelerometer to an object, each time the object moved, we could have Twilio send a text message to our phone, and we could record each movement to a Google Spreadsheet for later analysis.

Once you click the Create A Bip button, you will be taken to a blank workflow canvas:

blank canvas

If you click on the center circle, you will be able to Select Event Source:

select event source

For integrating with the Particle.io's Cloud API, we will select Incoming Web Hook which will allow us to process each event that is sent by their webhook to our workflow. After selecting Incoming Web Hook, your canvas should look like this:

incoming web hook

Above the canvas, there is a URL bar with a path component set to Untitled. Change this to accel so that we can have a path that looks like:

http://<your_username>.api.shipiot.net/bip/http/accel

We will need this URL to set up the webhook on the Particle.io Cloud API. Before we do that, however, we should finish configuring the remainder of the workflow so that the Cloud API doesn't error out while attempting to connect to an endpoint that doesn't exist!

Next we'll add a chart to visualize the data coming in off of the X and Y components of the accelerometer. The first thing to do is click Add An Action, and it will bring you to an action selection panel:

action selection panel

Here we will select Data Visualization, which will enable us to plot the values sent by the device. Clicking it will bring us to a subpanel:

action behavior selection panel

To view the data in chart form we'll obviously pick View Chart, but we could just as easily have generated a visualization that lets us view the data as JSON or simply see the raw data as it enters the system. This is very handy for debugging communications between elements in our workflow.

Once we've selected the View Chart option, we will be presented with a canvas with two nodes on it:

webhook + data visualization

Now by dragging and dropping from the Incoming Web Hook icon to the Data Visualization icon, we can create a data flow from one to the other:

data flow

Now all of the messages that come in at our URL will be sent to the chart. But in order for us to plot the data, we need to describe the contents of the message. Clicking on the Parser tab will bring you to a two panel interface that looks like this:

parser interface

Into the left panel, we will enter some JSON that looks like the JSON that our application sent to the API:

json in parser interface

We then click the Parse button to generate the associated JSON schema:

json schema

We can now use these values as inputs to our chart. If we double click on the icon for the data visualizer, it will bring up a control panel for setting a number of values. Scroll down to the bottom of the panel and you'll see entries for setting the value of X, Y1, and Y2. For the X value we'll use the time of the incoming request:

setting X

We can then set the Y1 and Y2 values to the accelerometer's x and y values respectively:

setting y1

Once you click OK it will save the configuration. Double clicking the icon again will present you with additional tabs, including one for the Chart:

chart tab

Here we can copy the URL and open it up in a new browser window to see the data as it comes in.

The last thing we need to do before saving our workflow is setup some credentials for the webhook. By selecting the Auth tab under the webhook panel, we can change the authentication to Basic Auth, and in my case I'm going to use test:test for submitting data to my webhook:

auth

We will also need the Authorization header later to configure the webhook on the Cloud API side of things. Clicking Save or Save and Close will start your workflow running!

running

Configuring the Cloud API

For the remainder of the setup, we will use the particle CLI tool to interact with the device and the Particle Cloud API. The particle CLI tool allows us to do basically everything we need including flashing the chip.

To compile our source code we can use the particle compile command:

particle compile photon *

This will save our firmware image to a .bin file in the local directory. We can flash this via USB or over WiFi. To flash over WiFi, we need the device ID from the listing, which we can get with particle list:

particle list

Here I'll flash my device darkstar02 using the particle flash command:

particle flash

Being able to flash your device remotely is really handy when you've installed the device somewhere. You can also provide the source files to the particle flash command, and the cloud will attempt to compile it and flash it too, but I like having a copy of the firmware for flashing again at a later date.

Once the device is done flashing and the LED is breathing teal, we can attach to the serial monitor using the particle serial monitor command:

serial monitor

As you can see I shook the device a couple times to make sure it was working. If you don't see any serial output, that probably means your sketch isn't working. Unfortunately, the serial monitor doesn't offer full debugger support.

Assuming you are seeing data in the serial monitor, you can then set up the Particle Cloud API webhook. Edit the accel.json file to contain your device details:

accel.json
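The screenshot isn't reproduced here, but the file is a small JSON document along these lines. The field names follow the Particle CLI webhook format of the era, so treat the exact keys as an assumption and check the Particle CLI documentation if the command complains; the Authorization value is the Basic Auth header from bip.io (base64 of test:test in this example), and the json block is what lifts x, y, and z to the top level of the request:

{
  "event": "accel",
  "url": "http://<your_username>.api.shipiot.net/bip/http/accel",
  "requestType": "POST",
  "headers": { "Authorization": "Basic dGVzdDp0ZXN0" },
  "json": { "x": "{{x}}", "y": "{{y}}", "z": "{{z}}" },
  "mydevices": true
}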

Once you've setup your device and url settings, you can create the webhook using the particle webhook create command:

particle webhook create accel.json

This webhook forwards all of the events published under the accel event name to the url specified in our accel.json file. We can sample these events using the particle subscribe command to view the messages as they arrive from the device:

particle subscribe

Because the webhook has the json line in it, the data sent to our URL will have the x, y, and z values extracted from the data field and placed at the top level of the event. The json sent to the webhook also contains device details, ttl, and so on.

Viewing the Data

If you remember the chart URL, after a few seconds you can pop over to the chart and see the new data as it comes in. You will need to be logged into shipiot.net to view the data. Here's a sample with me shaking the device:

results

At this point, you can go back and add new actions to your workflow, and have them trigger based on changes in the state of your device. Send a tweet. SMS your friends. Record your shaking to a spreadsheet.

August 6, 2015 / 5G, IoT / Posted By: Kelly Capizzi

InterDigital’s Rafael Cepeda, Senior Manager, examines the 5G and IoT intersection in a recent article featured on EE Times. With what he describes as 5G guidelines mostly in place, the discussion regarding the next generation of mobile networks has turned to what 5G will do for society and how it will intersect with the Internet of Things (IoT).

In his article, Rafael describes how 5G and the IoT work together as two sides of the same coin. He utilizes smart cities and the transport sector as an example of how the IoT will require a more flexible infrastructure. The enormous amount of data generated by the IoT will require the flexibility that 5G is expected to provide, and therefore the IoT will drive the dynamic configuration of the 5G network. Rafael explains that the two will work together to deliver the ultimate efficient configuration that will serve all end users’ needs, whenever and wherever.

Rafael is primarily focused on 5G research and development efforts at InterDigital Europe. InterDigital’s London-based research unit recently announced its involvement in three Horizon 2020 projects that are underway – XHAUL, POINT (iP Over IcN the betTer IP), and RIFE (aRchitecture for an Internet For Everybody). InterDigital Europe’s involvement underscores the company’s commitment to making strong contributions to the development of the next generation of mobile networks.

Click here to read the full article or to learn more about 5G and IoT, visit the vault.

August 4, 2015 / HEVC, Video, PerceptFX, h.264 / Posted By: Kelly Capizzi

Jan Ozer, one of the leading journalists and thought leaders in the streaming content space, recently wrote about HEVC Advance, and the fact that some companies are already offering technologies that claim to reduce the delivery data rate of h.264 and other codecs by as much as 50 percent with minimal impact on visual quality. His article, “The State of Video Codecs 2015,” looks at ways that new technologies could provide another way to extend the life of the h.264.

The technologies being offered do not replace the codec; instead, they aim to make it work more efficiently. In his article, Jan describes the approaches of three companies: Faroudja Enterprises, EuclidIQ and InterDigital. While all three companies offer an element of compression-enhancement, they operate at different points in the encoding workflow and may have different target markets. Faroudja Enterprises provides a video bitrate reduction technology that is employed at the front and back end of video encoding/transcoding workflows. The technology appears to be targeted toward broadcast and similar markets, as Jan stated in the article. EuclidIQ offers a compression suite that yields bitrate reductions without requiring pre- or post-processing. However, EuclidIQ does not seem to be targeting the streaming publisher directly; instead, it seeks encoding vendors.

InterDigital’s PerceptFX delivers a set of tools, a pre-processor and an artifact removal toolkit, to help content publishers improve their customers’ viewing experiences and reduce costs for content delivery and storage. The PerceptFX pre-processor eliminates imperceptible image data, enabling 4K and other resolution video to be delivered at impressive bandwidth savings. This solution offers seamless integration with existing workflows and solutions. For example, PerceptFX will demonstrate how Vantrix now enhances its open transcoding video pipeline with the PerceptFX pre-processor at the Cable Labs Summer Conference in Keystone, Colorado this week.

Going to miss Cable Labs? Check out Vantrix and InterDigital’s next demonstration at International Broadcasting Convention (IBC) in Amsterdam from September 11-15, 2015.

To learn more about InterDigital’s PerceptFX, please click here.

August 3, 2015 / Posted By: wotio team

Snow, the freshest bip.io, has just been made available through both the public repository and npm, and contains some significant improvements to its data model, general concepts, user interface and tooling. Many hundreds of improvements have distilled bip.io into a stable, easier to use and vastly more powerful tool.

While it doesn't constitute any backwards-breaking changes for older installs, the way some concepts are communicated and used may instead just break your brain. Don't worry, it's a good thing.

I'm pleased to welcome bip.io version 0.4 (Snow). Let me show you what's inside.

Channels, Be Gone!

Channels have been the biggest conceptual pain point for users. A 'Channel' has always been a container that bip.io can use to hold persistent configuration for an action or event, whether needed or not. The requirement of the old data model was to create a Channel explicitly, and then use that in a bip. In most cases, this configuration wasn't required to begin with as there was no configuration to store. This meant that you would soon fill up your lists of saved actions with junk channels while creating new bips, and feel increased pressure to maintain a mental model of how things were named, where they lived, and what bips they were used for. It's what you might call a leaky abstraction.

With that in mind, Channels have evolved into something new, called 'Presets', which you usually won't have to think about.

An intended side-effect of this change is there's now freedom to use actions how and where you like, multiple times, in the same bip. This is perfect for creating logical gates, performing calculations and transforming data. I have been literally squealing at how easy it is to build functional pipelines now. Dropping channels reduces the barrier to entry significantly, and we've been exploding here with new bip possibilities!

So Where Did They Go?

While Channels still exist under the hood, and this doesn't break any old bips that use them, they are now only necessary in very specific cases, which the User Interface now takes care of for you. All your old channels will still be available after this update, they've just been repackaged as 'Presets', which can be found by entering a node's Personalizations screen.

The way Presets work is that they take what were previously configuration options for channels, and merge those configurations with personalizations. This does a couple of important things:

  • It means that you now have the flexibility to override an otherwise protected configuration option from within a bip itself. Take Tumblr (or any blog integration) for example. Previously you would need to create one channel for every permutation of URL and post status (draft/published/image/video etc etc), appropriately label those channels, and maintain a mental picture of where they all were in the system. Now you can save those permutations as presets, if and only if you want to, or otherwise override them for specific nodes in specific bips in ways which won't have unintended side effects.

  • It clarifies the architectural intent of Channels. Merging of channel configurations and imports already happened on the server side, but this was never clear to users or developers. Aligning the experience with the API expectation means there's fewer surprises in making the leap into bip.io development.

Additionally, you'll notice the Pods screen has been completely removed. We're still playing with this idea and it might be re-born as something else in the near future. For now however, it's been disabled. Everything that could be done in the Pods screen can now be done when configuring a Bip itself.

I haven't touched on how this is modeled in the API as that's more of a developer concern. For developers, you can find updated API documentation here, and templating examples here. In a nutshell, Snow templating uses JSONPath exclusively, with channel IDs or action pointers (e.g. 0e7ab3fc-692e-4875-b484-834620d1c654 or email.smtp_forward) usable interchangeably.

RPCs Overhaul

The RPCs tab has also been dropped, and every RPC available for a channel, action or event appears as its own tab under Personalizations. Additionally, RPCs are displayed inside their own tabs with a link available if you want to break one out into its own window. This means that Pods can start composing their own interactive experiences in the context of an active graph.

Actions Quick Search

The list of saved events and actions in the left sidebar has been emptied and will not show any results until you start searching. Only action and event names will now appear in these results, not the channels themselves. Once you select an action and drag it onto the canvas, you can get to the channel information by opening the Personalizations dialog and selecting a Preset.

Search fields have also been added to the Action/Event select dialogs across the board; keyboard shortcutting and tab key awareness are a work in progress.

Native Functions

We've packaged up a handful of native system pods into something called Functions. Functions is a dropdown which you'll find on the bip editing canvas and consists of all actions from Flow Controls, Templating, Time, Math, Crypto, and a brand new Pod called Data Visualization.

You can find the full manifest of Functions in our knowledge base.

Data Visualization

Data Visualization is especially awesome because you now have the ability to track data as it travels through a bip, from raw or structured data to even displaying a chart over time. These logging and visualization functions are perfect for debugging. The actions are View Raw Data, View JSON and View Chart.

Like any other action, data visualizations can be linked to any other node in the bip, capturing and displaying whatever they receive. Once configured, double-click on the node and go to the function's appropriate RPC tab. The nodes will only show data they've received while your edit view is open.

Here's a quick video to show how to get it running. You'll also notice there's no channel or preset management involved, the full library of services is instantly and always available.

Or just install this yourself

Lots Of Love For Web Hooks

Web hooks are getting more and more use as you unlock the power of bip.io with your services and devices. We drove through some significant improvements that make integrating web hooks with your applications and devices a truly joyful affair.

Web Hook Parser

An ongoing frustration with Web Hooks is that they never really knew what data they would receive, and therefore that data couldn't be transformed very easily between different actions. Parser accepts the expected JSON payload from a device or application, has bip.io understand its structure, and then makes it available for any personalization. This can be found as a new tab which appears when configuring an Incoming Web Hook.

For example:

Makes Unbabel callback attributes available for personalization:

Although this example is simple, the Parser supports fully formed structured JSON, nested documents, arrays, and so on. Any part of this complex structure can be extracted and personalized per node.

Testing

Coupled with the Parser feature, Testing (via the Outbox) has now become much more powerful and easy. When you test a Web Hook, the payload will be pre-filled with a sample payload based on your Parser example, and a cURL string for the endpoint, including authorization headers, is provided that can simply be copied and pasted into your console.

And That's Not All Of It!

Don't think that with so many big features to talk about, we've ignored the smaller stuff. You'll find tons of small tweaks and improvements across the system, and every Pod has also received a once-over. It's fantastic to be able to go back to support tickets with fresh solutions!

In other news, more support is incoming for developers and device integrators via Enterprise ShipIoT. Expect many more pods to appear in the coming months and sign up to ShipIoT to stay abreast of Internet Of Things integrations!

The feedback we've been receiving has been fantastic, please continue to share whatever's on your mind.

Enjoy.

August 3, 2015 / Posted By: wotio team

When we got our hands on a couple of Electric Imps here in the office, we set about to see how we could use their ARM Cortex processors and WiFi capabilities. Within minutes of plugging the Imps in and loading up some example code in their cloud-based IDE, we had our first successful internet<->device communication going. Cool! That was simple enough, so onto something slightly more interesting:

In the software world, every example starts with printing 'Hello World'. Well in the device world the equivalent is the famed blinking LED. So for that we'll need:

  • an Electric IMP
  • a 330Ω resistor
  • a breadboard
  • some jumper wires
  • an LED, of course.

We're using an IMP card and the standard April Breakout Board for our little project. You can purchase one as well through that link. Wiring up the board as shown here, and again grabbing some example code that we dropped into the IMP cloud IDE, we were able to push the code to the device. Sure enough, our LED began to blink!
Manually configuring the LED to loop on and off every 5 seconds is neat 'n all. What would be more interesting, though, is to have our Imp respond to events sent over the internet. So, onto setting up our agent code to handle incoming web requests:

function requestHandler(request, response) {  
    reqBody <- http.jsondecode(request.body); 
    device.send("led", reqBody["led"]);

    // send a response back to whoever made the request
    response.send(200, "OK");
}

http.onrequest(requestHandler);  

and our device code *:

led <- hardware.pin9;  
led.configure(DIGITAL_OUT);

agent.on("led", function (value) {  
  if (value == 0) led.write(0); // if 0 was passed in, turn led off
  else led.write(1);            
});
  • note that Imp code is written in Squirrel. You can also learn more about the Squirrel language here

Now we can send an example request: curl -X GET -d '{"led":1}' https://agent.electricimp.com/XJOaOiPDb7UA

That's it! Now whenever we send a web request with {"led": 0 | 1} as the body of the message, we can send voltage through to pin9 and control the state of our LED (on or off), over the internet!
Pretty cool.

We'll leave securing the control of our device to some application logic that you can write into the agent code, which for anything more than a blinking LED you'll absolutely want to do.

With the Electric Imp we're essentially demonstrating that it is now possible with just a few lines of code to remotely manage and control a physical thing that you've deployed out in the field somewhere. We've attached a simple red LED to one of the GPIO pins, but of course you can start to imagine attaching anything to the Imp, and have that thing now be a real 'connected device'.

Tails

One other cool offering from Electric Imp is their Tails extensions which make it super-easy to start reading environmental data. Ours has sensors to read temperature, humidity, ambient light, and even the barometric pressure. They're designed to work with the April dev board, so we grabbed one and connected it up as well.
Some quick changes to the agent code:

function HttpGetWrapper (url, headers) {  
  local request = http.get(url, headers);
  local response = request.sendsync();
  return response;
}

function postReading(reading) {  
    headers <- {}   
    data <- reading["temp"] + "," + reading["humidity"] + "," + reading["light"] + "," + reading["pressure"] + "," + reading["timestamp"];
    url <- "http://teamiot.api.shipiot.net/bip/http/electricimp" + "?body=" + data;

    HttpGetWrapper(url, headers);
}

// Register the function to handle data messages from the device
device.on("reading", postReading);  
<device code omitted for brevity>  

And with that we are set up to handle reading sensor data and sending it over the internet. But to where?

Ship IoT

The url line from above is where we want to point our data to, e.g.

url <- "http://teamiot.api.shipiot.net/bip/http/electricimp" + "?body=" + data;

which is an incoming web-hook URL we set up in bip.io to receive our sensor data into the cloud. Once we have the URL set up in bip.io, we can start to realize the value of that data by connecting easily to a range of web services.

Let's setup that URL:

  1. Create an account on bip.io through Shipiot.net
  2. Set up our bip. The following video shows us how:
  3. There is no step three! We're done.

Open up the sheet in Google Drive and you'll see that our Imp is sending its four sensor readings (temperature, humidity, ambient light, and barometric pressure) right into our bip.io 'bip', which automatically handles sending and recording the data to a Google Spreadsheet.

There are dozens of different services you could also connect your device data to from within bip.io; it all depends on what use case makes sense for you.

wot.io + Electric Imp =

With the recent release of Electric Imp's BuildAPI, we were also able to set up an adapter to securely command and control our entire collection of Imps from within the wot.io Operating Environment (wot.io OE), including:

  • Send commands to activate and control a particular device. e.g:

    ["send_data", < Agent Id >, "{\"led\":1}"]

  • Send commands to read data off of any specific Imp (or read data from all of them).
  • List all of the devices - and query their current state - registered under each account.
  • Review the code running on some (or all) of the Imps.
  • Remotely update the code running on some (or all) of the Imps.
  • Remotely restart some (or all) of the Imps.
  • View the logs of every device
  • and more...

which, when connected to the range of data services offered through the wot.io Data Services Exchange, really starts to unlock the potential value of amassing connected-device data on an industrial scale.

August 3, 2015 / 5G, IoT / Posted By: Kelly Capizzi

The fifth generation wireless standard is expected to underpin new technology deployments as well as future technologies that at this time can only be imagined. Currently, we are in the earliest stages of defining what 5G will be, and opinions from throughout the mobile ecosystem are useful in outlining the eventual big picture. The Telecommunications Industry Association (TIA) recently conducted a survey, sponsored by InterDigital, which provides valuable insight into the network operator’s view of the 5G evolution.

InterDigital’s Chris Cave, Director of Research and Development, took a moment to reflect on the survey results, and where things stand today at the start of the 5G research race, in a recent article featured on VentureBeat. In the article, Chris provides six detailed reasons to support his statement that 5G may turn out to be a tale of two approaches – and maybe more. He provides evidence that at this point in the development of 5G there are different streams and motivators that need to be resolved and connected over the next few years, based on the results from the TIA operator survey.

However, Chris states that this point of view will likely change by the time the technology is fully deployed. He closes with the statement, “Say in 2025, we’ll see it [5G] as the collaborative-but-competitive, global, evolutionary effort that it will undoubtedly morph into.”

Check out Chris’ full article here or to learn more about 5G, visit the vault.

July 31, 2015 / Posted By: wotio team

We've been having some fun with the Philips Hue smart lighting system and I wanted to expand the interactivity beyond the local office WiFi network. We had an Imagination Creator Ci20 (version 1) board available, so I thought it would work as a good gateway to pull data from the Philips Hue bridge and send it to some online services with one of the wot.io data services, bip.io.

Imagination Creator Ci20

To keep it simple, I decided to share one value, the current hue setting of a single one of our lights (see the Hue documentation for details on how it defines light values). To get the value, I wrote a Perl program on the Ci20 to connect to the Hue gateway using the Device::Hue module. The program finds the correct light (we have several), pulls out the hue value, and then sends it along to our bip.io instance set up for Ship IoT. My bip then calls Numerous and updates my hue metric.

Details

First I set up the bip so I would have an endpoint to send data to. It's a simple one with a web hook (HTTP API) for input and a single call to Numerous. The Numerous pod configuration involves activating the pod with your Numerous developer API key, creating a number in Numerous, and then providing the metric ID of the number you created in the Numerous app as configuration for the pod (see the video for details).

If you're not familiar with Numerous, it's a mobile app that displays individual panels with important numbers on each panel. If you install the app and search on "wot.io" you'll find our shared "wot.io Buffalo Hue" number. Then you can see how our Hue changes and maybe set one of your lights to the same color as ours.

Once the bip is created, you have an endpoint. Next is to send data to it using the Ci20 board and a short Perl program.

The Ci20 board uses a MIPS processor and runs Debian out of the box. Add a monitor, keyboard, and mouse and you're ready to go. The board has wifi connectivity, so once the Debian desktop came up, I used the desktop GUI tool to connect the board to the same network the Hue gateway runs on.

Perl is part of the standard install. There are many ways to install the Device::Hue module; cpanminus is likely the fastest:

sudo apt-get install curl
curl -L https://cpanmin.us | perl - --sudo App::cpanminus
cpanm --notest --sudo Device::Hue

You can find the program I used in the wot.io github project. The values you need to run it are all listed at the top:

  • Base URL of the Hue bridge on your network
  • A username or "key" for the Hue bridge (instructions)
  • The name of the light you want to pull data from
  • URL of your bip endpoint and token

Once it's configured, you can run it and your hue value is updated on Numerous!

Another interesting idea to extend the project is to schedule a job in cron, store the hue value, and send updates only when the value changes. It would also be fun to set up an endpoint on the Ci20 to receive a value back and set the hue value. Maybe a project for next time, or maybe someone will beat me to it and send me a pull request.

July 30, 2015 / Posted By: wotio team

Prerequisites

In this tutorial, we will cover how to use bip.io to connect a TI Launchpad module with dedicated ARM MCU to a Google Sheet. What you'll need to follow along:

Some additional tools that are useful but not absolutely necessary (though you should definitely acquire them):

In addition to the physical goods, you will also need:

Configuring bip.io

Before going in and creating your workflow, it is a good idea to first setup a new Google Sheet for this project. The two things you should do are create a doc named ShipIoT and rename Sheet1 to accel.

Google Sheet

The integration with Google Sheets in bip.io can use both of these values to identify where to put your data. By renaming the sheet, you can easily save the data for later.

Next we'll

Create A Bip

which will become our workflow for collecting our accelerometer data in a Google Sheet. Once you click that button, you will be presented with a blank canvas:

blank bip canvas

From here, it takes about two minutes to configure the workflow:

If you didn't catch all that, click on the circle in the center, and you will be presented with a menu of event triggers:

trigger selection

Select the Incoming Web Hook option so that our CC3100 board will be able to connect to bip.io via an HTTP connection used by the library. This will generate the following view:

trigger

You can now see the center circle has changed to our Incoming Web Hook icon. We also have a URL above, broken out into parts. We need to provide the final component in the path, which is currently Untitled. For this example, let's name this sheets.

url filled out

Clicking on the Show/Hide Endpoint button will assemble the URL into one suitable for copying and pasting into your application.

url listing

Next we'll add an action that appends the data sent to the Incoming Web Hook to our Google Sheet. Click the Add an Action button to open the action selection modal:

action selection

Select Google Drive, and if this is your first time through, proceed through the activation process. When you have Google Drive selected you will be presented with a set of individual actions that this module can perform:

select action

Select the Append To A Spreadsheet action. We'll later configure the properties of this action to write to the Google Sheet we created earlier. But first we'll drag and drop from the Incoming Web Hook to the Google Drive icon to link them.

After they are linked, we'll describe the data coming from the Incoming Web Hook by selecting the Parser tab. In the left hand panel, we can type some sample data that the accelerometer will send:

json schema before

If we then hit the Parse button, it will generate a JSON schema that we can use to parse the incoming messages. There's no need to save the schema, as we will be using it in just a moment.

Next back at the Setup tab, double click on the Google Drive icon, and you'll be presented with an options screen. Fill out the fields according to how you setup your Google Sheet. In my case I named the spreadsheet ShipIoT and the worksheet accel:

drive options

If you then scroll down, you'll see a section for the New Row contents, where we will use the Custom option to select which attributes of the incoming message to populate it with. Because we configured the JSON schema before coming here, we will be presented with a list of fields that correspond to our device's data.

row attributes

Then all that is left to do is to click OK and then Save, and our new workflow is now running. We can use the Auth tab to change the authentication credentials on the endpoint. By default it uses your user name and your API Token (found under settings) as the password. Selecting Basic Auth can allow you to change the username and password which is generally a good idea if you want to make the service available to someone else.

Setting up the Board

Once you have all of your components together:

  • insert the MMA8452Q breakout board in column j
  • trim the leads on two of the 330Ω resistors
  • insert the 330Ω resistors in series with the SDA and SCL pins bridging the gap

The board should look like this:

breadboard w/ resistors

It is a good idea to leave some free holes between the resistors and the board so that you can safely probe the lines. These two resistors are there for current limiting on the data (SDA) and clock (SCL) of the I2C bus.

Next we'll add some jumpers to the board. I'm going to use red (3.3V), black (GND), yellow (SCL), and green (SDA), but you can pick your own favorite colors. My pin out will look like:

  • Red f1 to 1.01 (3.3V)
  • Black f6 to 3.02 (GND)
  • Green a2 to 1.10 (I2C1SDA)
  • Yellow a3 to 1.09 (I2C1SCL)

The breadboard will look roughly like this:

breadboard wiring

On the other side, the CC3100BOOST should be installed on top of the TIVA-C or LM4F120 Launchpad. The jumper wires should look like:

topside jumpers

If you only have male-to-male jumpers, you can wire up the back of the Launchpad as follows (remember that everything is backwards!)

underside of Launchpad

Verifying the I2C address (optional)

If you have a Bus Pirate handy, now is a great time to verify that your accelerometer is wired correctly and responding. To start, we'll make sure we have the right leads:

Once you're certain you have the correct leads for 3.3V and ground, it is time to test whether your MMA8452 is working correctly. The Bus Pirate lets us scan the I2C bus for the address of the device. If you're using the Sparkfun breakout board, there are two possible addresses: 0x1D or 0x1C. By tying the jumper on the bottom to ground, you can select the second address. To make sure you have the right address, run the Bus Pirate's I2C address-search macro and check which address responds.

Programming the Board

Now that we have verified that the MMA8452 is working, we can program the board. If you don't like typing, you can download the code for this tutorial from Github.

git clone https://github.com/WoTio/shipiot-cc3100-mma8452.git

You will need to install the directories contained within into the libraries directory for your Energia setup. On Mac OS X, you should be able to place both the ShipIoT and SFE_MMA8452Q directories into ~/Documents/Energia/libraries.

Once all of the files are installed, open the ShipIoT_CC3100_MMA8452Q.ino file in Energia, and you will see the following source:

ShipIoT_CC3100_MMA8452Q.ino source

The things you will need to change for your environment:

  • change the ssid to your network's SSID
  • change the password to your network's password
  • change the bip.io URL, user, and password
  • change the address passed to accel if necessary

If you use my credentials from the image, you will get an access denied response from the server. But points if you tried it :) Plug in the board, select the correct Tools >> Board and Tools >> Serial Port from the Energia menus, and you should be able to compile the sketch and load it onto your board.

Once running, open up the Serial Console and you should see a sequence of dots as it connects to your WiFi network and obtains an IP address, and then the device should start making HTTP requests to your URL. If everything is working correctly you should start seeing data appear in your spreadsheet.

Odds and Ends

If it all worked, you should have seen results appearing in your spreadsheet within seconds of the device finishing its programming. But in the real world not everything goes as planned. In fact, the first time I tried using the Wire library, I got no activity on any of the pins. So I hooked it up to a scope:

probes

If you look at SFE_MMA8452Q.cpp line 45, there's a special Wire.setModule(1) call which is necessary for selecting which of the I2C interfaces Energia's Wire library should use. After discovering this method, however, I was able to get something on the scope:

scope picture

In this picture the SCL is on probe 1 and the SDA is on probe 2. I2C is active low, so all you'll see if it isn't working is a 1.8V trace. You can see that the clock isn't terribly square and there is some ringing, but nothing too serious.

If you run into issues with I2C, this is a good place to start looking. And always remember to ground yourself, test w/ your multimeter, and double check with your scope. The Bus Pirate was also very useful in ensuring that the I2C communications were going through once everything was running. Macro (2) can be a very powerful tool for finding communication problems.

July 29, 2015 / IEEE, HEVC, SHVC, Image Processing / Posted By: Kelly Capizzi

The Institute of Electrical and Electronics Engineers (IEEE) Signal Processing Society’s publication, IEEE Transactions on Image Processing, is considered the flagship peer-reviewed publication in image processing… and has recently appointed our own Dr. Rahul Vanam, a staff engineer in InterDigital Labs, as Associate Editor for the 2015 – 2018 term.

IEEE Transactions on Image Processing publishes articles focused on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing through open calls for articles as well as special issues on various topics. Rahul, who specializes in multimedia signal processing as well as video and image compression/processing, joins the editorial board of this recognized journal along with industry representatives from leading companies around the world, like Google, IBM, Intel, Ricoh, Disney, and GE, among others. In addition to this achievement, Rahul serves as an area chair for the IEEE International Conference on Multimedia and Expo (ICME) 2015, publicity co-chair for the 2016 Southwest Symposium on Image Analysis and Interpretation (SSIAI), and as a key member of the IEEE MMTC Multimedia Processing for Communications Interest Group (MPCIG).

InterDigital has a dedicated video standards research team that actively contributes to the development of standardized video codecs through two areas: standards-based and prototyping-based innovation. Standards-based innovation focuses on participation and contribution to video standardization organizations, while notable achievements for the prototypes developed by the team include power-aware HEVC streaming, software-based HEVC decoder, fast HEVC encoder, and real-time SHVC decoder. For a view of some of our research in image and video processing, please visit the Vault.

July 24, 2015 / Posted By: wotio team

Setup

For this tutorial you will need to sign up for a set of accounts:

You will also need to acquire a CloudBit, based around a Freescale i.MX23 ARM 926EJ-S processor, from LittleBits. The parts we will use for this tutorial are:

  • a 5V 2amp USB power supply
  • a micro USB cable
  • p3 usb power
  • i3 button
  • w20 cloud
  • o9 bargraph

They should be assembled in series as follows:

Instructions for connecting the CloudBit to your local WiFi network for the first time can be found on the CloudBit Getting Started page. If you have already set up your CloudBit, you can change the WiFi settings by going to your CloudBit's page, opening Settings, and selecting the following option:

http://control.littlebitscloud.cc/

If you don't already have a Slack account, you can create a new team and set up your own #littlebits channel for testing:

So by now, you have signed up for Ship IoT, have a CloudBit sitting on your desk and a Slack channel ready, and are wondering what's next?

Creating our first Bip

When you first Sign In to Ship IoT, you will encounter a friendly green button that looks like this:

Clicking that button will take you to a blank canvas onto which you can install an Event Source:

By clicking on the target in the middle, you will be presented with an Event selection screen:

We'll select "Incoming Web Hook" to provision a URL to which our CloudBit will send messages. In the field that say "Untitled":

Enter a path of "littlebits" for our Event Source, and we should now have an event trigger on our canvas:

Next we will "Add An Action" which will bring us to an action selection screen:

If you scroll down a bit you will find a Slack pod which we can activate. Your first time through, it will ask you to sign into Slack and authorize Ship IoT to access your account. In the background it will provision a new access token and send you an email notifying you of that. In the future, you can deactivate this token through the Slack interface.

After you have activated the pod, you will be asked to select an action to perform:

In this case, our only option is to "Post to Channel". Selecting this action will result in another node in our bip:

Double click on the Slack icon to open up the action's preferences:

We can give the bip.io bot a name of "LittleBits":

We can select the "Channel ID" either using the "Use Preset" button which will attempt to discover the list of channels you have created, or you can supply a custom channel id:

Finally, we need to specify the "Message Text", and for this we will send the full message sent by the CloudBit by selecting "Incoming Web Hook Object":

After clicking OK, we can now link these together by dragging and dropping from the purple Event Source to the Slack action:

Now whenever a message is sent to https://yourname.api.shipiot.net/bip/http/littlebits it will be sent to our "Post to Channel" action!

Well, not exactly. We still need to allow LittleBits to send us messages. Under the "Auth" header, we can change the authentication type to "None":

Turning off auth makes our URL a shared secret that we should only share with LittleBits. Anyone with that URL will be able to send messages to our Slack channel, so some care should be taken not to share it!
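With auth set to None, you can also sanity-check the bip before the CloudBit is involved at all, for example with a quick Python snippet (substitute your own shipiot.net subdomain and path):

import requests

# Placeholder URL: your own endpoint from the Incoming Web Hook we just created.
url = "https://yourname.api.shipiot.net/bip/http/littlebits"

# A stand-in payload; a real CloudBit event will have its own structure.
r = requests.post(url, json={"test": "hello from the command line"})
print(r.status_code)

A message should show up in your #littlebits channel within a few seconds.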

Configuring a LittleBits Subscription

Configuring your CloudBit to talk to bip.io requires using the command line for a little bit. First we will need a little information from our Settings panel:

We will need to record both the Device ID and the AccessToken. These are needed to set up the subscription to our bip.io application.

Setting up a subscription requires a bit more black magic on our part. The CloudBit API Documentation describes how to make a HTTP request to register our device with a 3rd party subscriber. In our case, we would like to register our Incoming Web Hook URL as a subscriber to our CloudBit. To do so, we'll write a small bash script that uses curl, in a file we'll name "subscribe":

#!/bin/bash

DeviceID=$1
AccessToken=$2
URL=$3
EVENTS=$4

curl -XPOST \
  -H "Accept: application/vnd.littlebits.v2+json" \
  -H "Authorization: bearer $AccessToken" \
  https://api-http.littlebitscloud.cc/subscriptions \
  -d publisher_id=$DeviceID \
  -d subscriber_id=$URL \
  -d publisher_events=$EVENTS

To use this script we need only make it executable and run it:

$ chmod u+x subscribe

$ ./subscribe yourDeviceId yourAccessToken yourUrl "amplitude:delta:ignite"

This will cause the URL you supply to be contacted each time the amplitude changes from low to high. If you want the value reported periodically instead, you can just use the event name "amplitude" to get a message roughly every 750ms.

If the script works you will get a message back like:

{"publisherid":"xxxxxxxxxx","subscriberid":"http://yourname.api.shipiot.net/bip/http/littlebits","publisher_events":[{"name":"amplitude:delta:ignite"}]

This means your subscription has been registered and events are now flowing through the system.
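If you would rather make the call from Python than shell out to curl, the same subscription request can be made with the requests library. This is just a translation of the script above, so the device ID, access token, URL, and event list are still your own values:

import requests

def subscribe(device_id, access_token, url, events):
    # The same request the bash script makes against the LittleBits cloud API.
    r = requests.post(
        "https://api-http.littlebitscloud.cc/subscriptions",
        headers={
            "Accept": "application/vnd.littlebits.v2+json",
            "Authorization": "bearer " + access_token,
        },
        data={
            "publisher_id": device_id,
            "subscriber_id": url,
            "publisher_events": events,
        },
    )
    return r.json()

print(subscribe("yourDeviceId", "yourAccessToken", "yourUrl", "amplitude:delta:ignite"))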

Testing it out

If you push the button now:

A message should show up in your #littlebits channel:

You can use this same technique to drive any number of workflows through bip.io. All at the press of a button.

July 24, 2015 / Posted By: wotio team

wot.io has made it much easier to ship IoT projects with Ship IoT, currently in beta, which makes it easy to prototype smaller IoT projects with maker boards and starter kits. Ship IoT is a deployment of one of our wot.io data services, bip.io, a web API automation system that makes it easy to connect to open APIs for dozens of services and automate data workflows.

As an example of the types of projects we think users might be interested in, we put together a simple project using the Kinoma Create (which packs an 800 MHz ARM v5t processor) as the device. Using the HTTP API provided by Ship IoT Lite, we're able to send data and events from the Kinoma Create to Ship IoT Lite, and then to other services like Twitter. Here's a video showing how we did it.

The IDE used there is Kinoma Studio, which is the tool you use to deploy code to the Kinoma Create. You can find the sample code in our Kinoma project on GitHub. I shared the simple Twitter bip in Ship IoT so you can get up and running quickly.

Ship IoT is free to sign up for, so give it a try today!

July 10, 2015 / MAC, Wi-Fi, IEEE, IETF / Posted By: Kelly Capizzi

Internet privacy is becoming a large concern, as more and more devices are getting directly or indirectly connected to the Internet. Recently, the IETF and IEEE 802 announced the successful completion of three experimental mobile privacy trials – and an InterDigital engineer was a key part of it.

The IEEE 802 Privacy Executive Committee Study Group identified privacy issues related to the use of globally-unique media access control (MAC) addresses in over-the-air communications like Wi-Fi, and the risk that long-lived identifiers such as MAC addresses pose by exposing users to unauthorized tracking. Juan Carlos Zuniga, Principal Engineer at InterDigital’s Montreal R&D center, serves as chair of the study group, which proposed a solution to this privacy issue and embarked on experiments to study the implications of the solution. Earlier this week, Juan Carlos provided interviews to several wireless tech media outlets on the group’s work and recommendations for better security and privacy.

Concern arises from the fact that MAC addresses can become privacy risks by exposing users to unauthorized tracking. The uniqueness of the identifier and lack of encryption enables an easily made connection between the identifier and the user. “So you can identify the walking path, where they work, where they live, what their likely income is, what their age range is, in a scarily easy way,” Juan Carlos told CSO’s Maria Korolov. The IEEE Study group proposed the solution to update the Wi-Fi protocol to use randomly generated MAC addresses to increase security and privacy. Juan Carlos told Maria that he hopes to see his group’s recommendations incorporated in the next version of the 802.11 standard.

In the FierceWirelessTech article, “IEEE Study Group Recommends Improvements in Wi-Fi Security,” Juan Carlos explains that while the recommendation for randomized MAC addresses seems straightforward, there are still implications for commercial and enterprise networks. For example, a hotel may tie the identifier to an account so that the system can track that a guest has paid for their 24-hour Wi-Fi Service. If the identifier is changed, the system may try to charge a guest again. Juan Carlos clarifies that the IEEE group would want to avoid those types of things from happening.

To read the full CSO article, click here or to read the full FierceWirelessTech article, click here.

July 10, 2015 / IoT, oneM2M, wot.io / Posted By: Kelly Capizzi

The Internet of Things (IoT) is a major topic of conversation in the tech world right now, and InterDigital is involved in a big way – whether it’s our oneMPOWER platform, our commercial initiative wot.io, or the efforts of our various teams to partner with industry development efforts, standards bodies and external innovation. InterDigital’s Serhad Doken, Vice President, Innovation Partners, took a moment to reflect on the future of IoT, and the role of software in driving it, in an article featured on Wireless Week today.

According to Serhad, IoT is not about connecting things or producing data…it’s about what happens next. In the article, he breaks down what he believes the IoT big picture will eventually hold into ten key points. Serhad addresses how the IoT will alter enterprise, transition business models, and amplify the importance of multi-modal User Interface and User Experience. He closes the article with what he refers to as “the crown jewel potential and real promise of IoT” – rapid new service creation. Serhad explains that full IoT solutions will allow for a simplification that enables any user to design and add new services without needing (or even wanting) to fully understand the mechanics.

As a pioneer in mobile technology, InterDigital is actively taking on challenges in IoT. Most recently, the company demonstrated the latest capabilities of its oneMPOWER platform solution at Internet of Things World San Francisco in May 2015. InterDigital's oneMPOWER platform provides M2M/IoT application enabling services that include connectivity, device, data, and transaction management resulting in faster time-to-market, scalable application development and lower operation costs.  

Check out Serhad’s full article here or to learn more about IoT, visit the vault.

July 9, 2015 / MPEG-DASH, DASH, OTT / Posted By: Kelly Capizzi

Hulu, the U.S. - based OTT service, recently announced the migration to MPEG-DASH inside their various players. With the transition, DASH-compliant video segments serve more than 75 percent of Hulu’s traffic and the company expects that to increase as their services evolve.

Baptiste Coudurier, Principal Software Development Lead at Hulu, highlighted the benefits of the migration to DASH for the OTT service as well as its customers in an exclusive interview with Streaming Media Europe. The main benefits of the migration according to Coudurier: Control, flexibility, simplicity and performance. “Overall we [Hulu] have less user support to provide with DASH as we have more control on the players,” stated Coudurier in the interview. The migration to DASH is part of a major evolution for Hulu. Next steps for the company? Implementing the DASH multiperiod feature for ad insertion.

According to Alex Giladi, Senior Manager, Video Software Architect at InterDigital, this is a great illustration of the growing adoption of DASH by mainstream providers of premium content. Alex is involved in projects related to MPEG Systems in MPEG, DASH Industry Forum and SCTE. At the Streaming Media East 2015 conference, Alex spoke in the panel discussion titled, “Implementing New Technologies with MPEG-DASH.” The main focus of his discussion surrounded advanced dynamic ad insertion from DASH 1.0 to DASH 3.0. To watch his full talk along with the rest of the panel, please click here.

July 6, 2015 / IOT, IOE, 5G, SDN, NFV / Posted By: Kelly Capizzi

5G will literally be about supporting everything - that is, supporting a wireless world that is the “Internet of everything,” stated InterDigital’s Alan Carlton in the first article of his three-part series on 5G featured in RCR Wireless News’ Reader Forum.

In the article, titled “5G is coming! Wireless telecom is dead, long live wireless IT,” Carlton dives into the real 5G challenge – supporting the internet of everything. Mr. Carlton discusses how 5G will tackle the challenge of everything through a foundation of established IT thinking. The article covers the role of cornerstone technologies such as software defined networking (SDN) and network function virtualization (NFV) as well as radio technology in the emergence of 5G.

Make sure to stay tuned as RCR Wireless will publish Carlton’s two follow-up articles that will explore 5G network and radio aspects in more detail. To read the full first article, please click here.

July 1, 2015 / Posted By: wotio team

At wot.io, we're proud to be able to accelerate IoT solutions with our data service exchange™. We've put together this Ship IoT tutorial to show you how you can use the Texas Instruments-based BeagleBone Black board and its ARM Cortex-A8 processor with one of our data service providers, bip.io, giving you access to web API integration and automation with 60+ services.

This blog explains how to use the BeagleBone Black in a project that enables you to send a tweet with the press of a button! After implementing this, you should be ready to replace that button with any sensors you like, and to use bip.io to create interesting workflows with the many web services that it supports.

What you'll need

  • A beaglebone black
  • An internet connection for it
  • A breadboard
  • A pushbutton
  • A 1kOhm Resistor
  • Some wires

Step 1 - The hardware

Let's start by wiring up the button on the breadboard. The image below shows the circuit layout.

Full Setup / Beaglebone Closeup / Breadboard Closeup

The pins on the switch are spaced differently in each direction, so they will only fit in the correct orientation on the breadboard. An astute eye will also notice that a 10kOhm resistor was used in our test setup, as we didn't have any 1kOhm ones lying around!

The two connections on the P8 header are both GND (pins 2 and 12). The connection on the P9 header (pin 3) is 3.3V. What we're doing is pulling down the input pin (P8-12) so it stays at 0V until the button is pressed, at which point it goes up to 3.3V.

If you'd like to know what all of the pins on the board do, check out http://stuffwemade.net/hwio/beaglebone-pin-reference/.

Step 2 - Connecting to Ship IOT Lite

Now, before we get into writing any code, let's set up a Ship IOT Lite account, along with a bip that tweets. You'll need a twitter account to complete this step, so go to twitter.com and sign up if you don't have one. Then you can go to shipiot.net and follow the instructions in the video below.

And you're done! You can test the endpoint by filling in your username, API token, and bip name below and sending a test message with curl:

USERNAME='shipiot_username'
APITOKEN='api_token'
BIPNAME='bbbtweet'
curl https://$USERNAME:$APITOKEN@$USERNAME.shipiot.net/bip/http/$BIPNAME/ -H 'Content-Type: application/json' -d '{"title":"BBB", "body": "Check it out - I just tweeted from ShipIOT Lite."}'

Then, in a few seconds, you should see the message pop up in your twitter account! Note that twitter has a spam-prevention feature that prevents you from sending duplicate messages, so if you want to test it multiple times, make sure you change the message body each time.

Step 3 - The software

For the device code, we're going to write a simple application in Python. If you don't know Python, don't be afraid to give it a shot; it's a very straightforward and easy to read language, so even if you can't program in it you should still be able to understand what's going on.

Going through the basics of how to use the BeagleBone Black is outside the scope of this tutorial, but as long as you can SSH into it and it has an internet connection you are good to go. You can check out the getting started page for help with that. We will be listing the instructions for both Angstrom (the default BBB Linux distro) and Debian, which is a bit more full-featured.

First, we're going to install the required libraries. To do this we'll use pip, Python's package manager. But first, we'll need to install it.

On Angstrom (the default Linux distro, if you haven't changed it), run:

opkg update && opkg install python-pip python-setuptools python-smbus  

On Debian, run:

sudo apt-get update  
sudo apt-get install build-essential python-dev python-setuptools python-pip python-smbus libffi-dev libssl-dev -y  

Now, on either distro, install the dependencies that we will be using in the code:

pip install Adafruit_BBIO          # A library from Adafruit that provides easy access to the board's pin data.  
pip install requests               # A library to make HTTP requests simpler  
pip install requests[security]     # Enables SSL (HTTPS) support  

Now, the code. For now, let's just get it to run, and then we can go through it and investigate about how it works.

Create a new file called wotbutton and paste the following code into it:

#!/usr/bin/env python

###############################
### SHIP IOT ACCOUNT DETAILS ###
###############################
shipiot_username = "wotdemo"  
shipiot_token = "5139354cedaf7252c776ecf793452344"  
shipiot_bip_name = "bbbdemo"  
###############################

import Adafruit_BBIO.GPIO as gpio  
from time import sleep  
import json, requests, time, sys


def on_press():  
    ##############################
    ###### THE SHIP IOT CALL ######
    ##############################
    ## This is the important part of the integration.
    ## It shows the single HTTP call required to send the event data
    ## to bip.io, which then acts upon it according to the `bip`
    ## that was created. Note that since we are using twitter in our
    ## demo, and twitter has an anti-spam feature, we append a timestamp
    ## to the message body so that each message is unique.
    ############################
    r = requests.post(
        "https://%s.shipiot.net/bip/http/%s/" % (shipiot_username, shipiot_bip_name),
        auth=(shipiot_username, shipiot_token),
        data=json.dumps(
            {"title": "BBB", "body": "Beaglebone Black Button Pressed!\n" + time.asctime(time.localtime(time.time()))}),
        headers={"Content-Type": "application/json"}
    )
    ############################
    ############################
    if r.status_code != 200:
        print "Ship IOT Lite connection failed. Please try again"
    else:
        print "event sent!"


# Prepare to read the state of pin 12 on header P8
gpio.setup("P8_12", gpio.IN)

notifyWaiting = True  
oldState = 0  
# Program loop
while True:  
    if notifyWaiting:
        print "Waiting for button press..."
        notifyWaiting = False
    sleep(0.01) # Short delay in the infinite loop reduces CPU usage
    if gpio.input("P8_12") == 1:
        sys.stdout.write('Pressed button...')
        notifyWaiting = True
        on_press() # Calls Ship IOT Lite, as detailed above
        while gpio.input("P8_12") == 1:
            sleep(0.01)

Now, make the script executable (chmod +x wotbutton) and, at the command prompt, type:

./wotbutton

and, after a few seconds of loading libraries and initializing inputs, you should get the prompt Waiting for button press.... Now press the button, and check out your tweet!

July 1, 2015 / SAM, Wi-Fi, LTE, Carrier-Grade Wi-Fi / Posted By: Kelly Capizzi

On June 24th, InterDigital’s Bob Gazda, Senior Director of Technology Development, participated in a webinar conducted by RCR Wireless along with industry experts from Ixia, Republic Wireless and Senza Fili Consulting. The dynamic panel discussion surrounded the opportunities and limitations of carrier-grade Wi-Fi as well as security issues for evaluating Wi-Fi offload strategies and how service providers can monetize Wi-Fi. To listen to the panelists’ assessments of these topics, and to view the full webinar, please click here.

Prior to the webinar, Mr. Gazda joined RCR Wireless’ Martha DeGrasse to discuss these topics in more depth. The interview served as background for the featured report, “Strategies for Effective Wi-Fi Offload,” that RCR Wireless released on June 25, 2015. Similar to other industry leaders, InterDigital has worked extensively on the integration and coexistence of Wi-Fi and cellular networks. As stated in the report, “solutions that combine Wi-Fi and cellular may ultimately serve users and operators best.”

Currently, InterDigital offers its Smart Access Manager (SAM) solution to improve traffic management across cellular and Wi-Fi. SAM provides intelligent network selection based on operator provisioned policy, Hotspot 2.0 parameters, user preferences, and network conditions.

If you’re interested in additional information on Wi-Fi Offload or unlicensed LTE, visit the InterDigital Vault to search through our latest videos, presentations and white papers.

June 26, 2015 / Posted By: wotio team

In March, wot.io had the opportunity to be a guest in the ARM booth at Mobile World Congress in Barcelona, Spain. We showed a transportation demo with Stream Technologies providing data from trucks in the London area. We routed the data to ARM's mbed Device Server which was hosted on the wot.io operating environment. We then routed it to several data services including a Thingworx mashup, ElasticSearch and Kibana visualization, and scriptr.io script.

ARM captured the demo in this video.

June 26, 2015 / Posted By: wotio team

Welcome to the wot.io labs blog. As we work with various types of IoT products and solutions, we often have the opportunity to create some really interesting proofs of concept, demos, prototypes, and early versions of production implementations. We're going to be sharing all of those with you here so you can get a view into some of the things we're working on. Stay tuned for some interesting posts!

June 26, 2015 / 5G, Millimeter Wave, EdgeHaul / Posted By: Kelly Capizzi

The initial requirements for 5G networks highlighted the need for an increase in spectrum, but that need has now expanded. Following the LTE and 5G World Summit Conferences, IDG News published an article that addresses the telecommunication industry’s search for new frequencies in which to operate 5G networks.

In the article, InterDigital’s Robert “Bob” DiFazio, Chief Engineer, discusses the growing need to unlock new spectrum bands in the 6GHz to 100GHz range. “The use of spectrum in these bands is immensely important for 5G networks to be able to offer multiple gigabits per second,” stated DiFazio. An increase in communication speeds is expected to correlate with lower latency in mobile networks. As the article states, “there is nowhere else to go but up,” according to Samsung. However, in order to access the potential of new spectrum bands, a new generation of antennas and modulation schemes will be required as well as regulatory approval.

Earlier this week at LTE World Summit, InterDigital featured a live over-the-air demonstration of its EdgeHaul™ WiGig-based millimeter wave mesh backhaul platform for Gbps transport. The EdgeHaul solution, which is designed to operate in 60GHz unlicensed spectrum, uses adaptive phased array beam forming technology and features an antenna that is a precursor to the new generation of antennas that will be needed to increase the speed of 5G networks.

Click here to read the full article or to learn more about EdgeHaul, please visit the vault.

June 24, 2015 / Posted By: wotio team

IoT solutions have many moving parts. Devices, connectivity, device management, and data services. So we've taken all of these components, wrapped them all up into one unified environment, and provided them for you to try out. We're going to take a look at one of our core data services, scriptr.

scriptr

Scriptr, on its own, is a cloud-based javascript web API framework. It gives you an interface to write some javascript code, and then makes that code available at an HTTP endpoint in the cloud. At wot.io, we've taken that interface and wrapped it into our environment. So now, any messages that your devices send can automatically get parsed in scriptr, giving you a tool to build your business logic out of those messages.

Getting Started

To get started, check out the tutorial on getting the Philips Hue system integrated with wot.io here. Once you have that running, get the path for the Blink Demo from the email you received when signing up, or contact us for help.

In this demo, we're going to use scriptr to automatically turn on and off a lightbulb.

In your scriptr account, create a new script and call it connectedhome/blink. To find out how, please refer to the scriptr documentation. Then type in the following code and save it:

var lights = JSON.parse(request.rawBody).data[0].lights; // Retrieve the data that we are putting onto the wot.io bus  
return '["phue","set_light",3,"on",' + !(lights["3"]["state"]["on"]) + ']'; // Send a message that either turns off the light if it is on or vice versa.  

Now, run your Philips Hue integration from the previous step, using the path from the email:

python hue.py <bridge ip> <wot path>  

And that's it! Your light bulb should now turn on and off every few seconds. If you want to adjust the speed, simply change the delay in hue.py.

Demo two - color change

Let's try another simple script, where the color of the bulb changes. This time, create a script called connectedhome/changecolor, type in the following code, and save it:

return '["phue","set_light",3,"hue",' + Math.floor((Math.random()*65000)) + ']'; // Send a command to change the light bulb color  

Connect with the hue.py script again, and voila! That's all that it takes to create a production-ready deployment. There's no need to set up any servers or anything else.

June 24, 2015 / 5G / Posted By: Kelly Capizzi

From the 2015 Future of Wireless International Conference in London, Mobile World Live published an article focused on the hot topic of day one at the conference – 5G.

The article discusses the opportunities that 5G may provide to the wireless technology industry, government and regulators. According to Keysight Technologies, a United States test and measurement company, 5G is an opportunity to rethink the approach to the next phase of communications. Alan Carlton, Vice President of InterDigital Europe, forecasts 5G as a ten-year journey of innovation that will result in transformation of the industry as we know it today.

Mobile World Live also mentions InterDigital Europe’s current work with the European Commission on a 5G socioeconomic research project. The project aims to help establish European consensus for the requirements of 5G. Carlton stated that “we as an industry need to understand these requirements and there’s a lot of work to be done.”

Click here to read the full article or learn more about InterDigital’s role in 5G at the vault.

June 17, 2015 / STEM / Posted By: Kelly Capizzi

Science, technology, engineering and mathematics (STEM) education is a key to the next generation of researchers, engineers and business leaders as well as the competitiveness of our nation in an increasingly interconnected global economy. DelawareOnline recently published an article written by Bill Merritt, President and CEO of InterDigital, which emphasizes the need for an increase in support of STEM education in America.

As a mobile technology research and development company, InterDigital depends on the next generation of researchers, engineers and business leaders to emerge from the work being done in elementary, secondary and post-secondary schools. In the article, Merritt discusses the current under-performance in STEM throughout American school systems and explains the lack of interest in STEM careers even though the number of STEM-related jobs is growing twice as fast as other fields.

“That's not something we would like to have; it is something we must have if our company is going to prosper long-term. And there are literally thousands more companies like us, facing this high-skilled labor shortage, in Delaware and across the nation,” said Merritt on investments in education and research. Mr. Merritt calls on American companies, the U.S. Government, state governments and U.S. Congress to take action and make a difference in the educational opportunities for STEM.

InterDigital is committed to investing in the ideas and people of the future and has been an active supporter of the STEM community. Recently, the company's local STEM-related efforts include sponsorships of the Delaware Children's Museum Junior Engineers Program and wireless communications laboratories at Delaware State University.

To read the full article, please click here.

June 11, 2015 / 5G / Posted By: Kelly Capizzi

The fifth generation wireless standard – 5G – is anticipated to be the most critical advancement in digital society and Europe has taken significant steps to lead that development globally. Yesterday, Mobile Europe released an article that features new European research initiatives related to the development and roll out of 5G – and InterDigital Europe was highlighted as a contributor.

The article discusses the groundbreaking European Commission (EC) 5G study commissioned by Brussels that will explore the socioeconomic impact of 5G technology and potential use cases in various areas such as health, transport, social services and more. InterDigital Europe, InterDigital’s London-based 5G-focused research unit, in partnership with other 5G experts that include Real Wireless, Tech4i2 and Trinity College, Dublin will conduct this study over the next year. The insights that will be gained from this research project will be crucial to the securement of Europe as a global 5G leader.

Simon Saunders, Director of Technology, Real Wireless, who serves as a project director for the EU research project, discussed the study’s perspective with Mobile Europe. Mr. Saunders stated that “the consortium [Tech4i2, InterDigital Europe, Trinity College Dublin and Real Wireless] we have assembled to work on this project offers a uniquely informed yet independent perspective on these issues.”

As a pioneer in mobile technology, InterDigital has been at the forefront of wireless innovation in 2G, 3G, 4G/LTE and LTE-A, and now 5G. In addition to its role in the EC study, InterDigital is involved in three Horizon 2020 projects that are underway – XHAUL, POINT (iP Over IcN the betTer IP), and RIFE (aRchitecture for an Internet For Everybody). The company’s involvement underscores its commitment to fostering collaboration in the European segment and making strong contributions to the development of 5G networks.

To read the full article, please click here or learn more about InterDigital’s role in 5G at the vault.

June 8, 2015 / 5G, SDN, NFV, VR / Posted By: Kelly Capizzi

From the announcement of Facebook’s Oculus VR’s consumer headset in 2016 to the launch of Google Cardboard Expeditions, virtual reality has made its way into the topic of conversation in the tech world. VentureBeat, one of the premier online publications covering technology and startups, recently published an article surrounding virtual reality penned by InterDigital Europe’s Vice President, Alan Carlton, titled When Virtual Reality Really Hits, It Won’t Look like Google Cardboard.

In the article, Carlton discusses the role of the fifth generation wireless standard – 5G – in the evolution of virtual reality capabilities. He provides insight into what it will take in terms of core technologies such as Software Defined Networking and Network Function Virtualization in order to deliver the right virtual reality experience. Another major factor for the right experience according to Carlton? Reducing latency.

Recently, InterDigital announced InterDigital Europe’s involvement in four key European 5G initiatives, which all include specific latency-reduction goals. InterDigital foresees 5G latency requirements getting down to about 5 milliseconds, which will enable the broad uptake of virtual reality and augmented reality systems. Carlton states that there is not a single area of the system that does not come into play in driving latency.

For more information on InterDigital’s work in 5G, please visit the vault.

May 14, 2015 / NFV, Security / Posted By: Kelly Capizzi

The European Telecommunications Standards Institute (ETSI) recently signed an agreement with the Trusted Computing Group (TCG) to collaborate on the development of international standards for a secure global telecommunications infrastructure.

TCG develops, defines and promotes open, vendor-neutral, global industry standards, supportive of a hardware-based root of trust, for interoperable trusted computing platforms. On behalf of the TCG, Alec Brusilovsky, security standardization, Member of Technical Staff, InterDigital, presented activities and plans for future collaboration between the two organizations to the ETSI General Assembly in Sophia Antipolis, France earlier this year.

Mr. Brusilovsky is co-chairman of the TCG’s Trusted Mobility Solutions (TMS) Work Group, which considers various use cases, investigates security issues and makes recommendations for the best security practices in consideration of all applicable public standards, including TCG, ETSI, and IETF among others. The TCG TMS Work Group had an active role in establishing the Memorandum of Understanding (MoU) between the two standards organizations.

The MoU denotes areas of collaboration and strategy between ETSI and TCG to ensure total platform integrity, including boot, run-time, and crash integrity, using best available security technologies in telecommunication systems. ETSI and the TCG both stand to benefit from the adoption of this complementary approach to the standardization process.

“This MoU creates a much needed synergy between ETSI and TCG,” commented Mr. Brusilovsky. “It enables TCG technologies ensuring platform integrity as well as security automation to be utilized in ETSI initiatives such as NFV, CYBER, and TCLI.”

 
Pictured left to right: Mr. Simon Hicks, Chairman of ETSI General Assembly; Mr. Alec Brusilovsky, TCG TMS Chair; and Mr. Luis Jorge Romero, ETSI Director General.  

To learn more on the TCG TMS Work Group, please click here.

 

May 8, 2015 / 5G, VR / Posted By: Kelly Capizzi

Facebook’s Oculus VR recently announced that it will launch a consumer version of its virtual reality headset in 2016. Shortly following the announcement, TechNewsWorld, an ECT News Network publication, featured an article that surrounded the release as well as the future of virtual reality – and InterDigital’s Senior Director of Systems Engineering, Vincent Roy, contributed to the article.

Vincent discussed the role that the 5G wireless standard will play in the evolution of virtual reality with Quinten Plummer, longtime technology reporter. Roy stated that the projected large decrease in latency as a requirement makes the role of 5G critical to virtual reality. 5G could provide the ability for virtual reality to cut the cord and provide remote commands in real time without noticeable latency.

A number of companies and experts, including InterDigital, have started to shape their vision for the advanced services that will be enabled by 5G. At 2015 Mobile World Congress in Barcelona, InterDigital presented their vision for the advanced services enabled by 5G networks that includes tactile internet, pervasive video, eHealth services, and more.

Click here to read the full TechNewsWorld article and learn more.

May 7, 2015 / Posted By: wotio team

It's really easy to do many things at once with bip.io, but something that we see in the wild is duplication of workflows (and effort!) consisting of just minor modifications. This can get out of hand and really stacks up if you have a bunch of social channels to reach, especially if you need the same message format or filtering applied for every outgoing message. Making sweeping modifications or tweaking your strategy can get really cumbersome and tedious, really quickly!

bip.io's graph model lets you chain an unlimited number of different actions and content transformations together and create complete workflows, and the best part is - every action can potentially be parallelized! What this means is rather than having to do things in limited sequential order, bip.io can send the same message out to many different apps at one time. This is what we call a 'fanout'.

Say you have your own RSS feed and would really like to cross post updates to it, on both Twitter and Facebook. Rather than just taking the RSS entry and posting it wholesale to Twitter (and then duplicating the same flow for Facebook), we can pipe, filter and compose content into standard messages, which can be distributed to those services at the same time (fanned out). Having the flexibility to add or remove targets to this fanout dynamically means more time spent perfecting your content strategy and less time stuck in configuration.

May 5, 2015 / 5G, LAA-LTE, Wi-Fi, LTE / Posted By: Kelly Capizzi

The next major shift in mobile broadband – 5G – will enable innovative services through emerging wireless core technologies and a programmable infrastructure designed to meet new requirements. InterDigital’s Jim Miller, Director of Radio Standards, recently joined Senza Fili’s Monica Paolini to discuss one of the emerging technologies, LAA-LTE.

The article based on the conversation is titled “Fairness to Wi-Fi is Crucial to LTE Unlicensed’s Success,” and focuses on the evolution of the role of LTE unlicensed, Wi-Fi and their coexistence. Jim also discussed InterDigital’s LAA-LTE standardization efforts that are focused on 3GPP and current solutions, such as the Smart Access Manager, that provide the company with a good vantage point to assess the potential of LAA-LTE.

The interview will serve as background for the upcoming report, “LTE Unlicensed and Wi-Fi: Moving Beyond Coexistence,” being produced by Senza Fili in collaboration with RCR Wireless and set to be released on May 12, 2015. In addition, XCellAir, Ruckus Wireless, Qualcomm, Cisco, Alcatel-Lucent and Wi-Fi Alliance among other industry experts participated in interviews that helped drive the report.

To read full interview transcripts, please visit: http://content.rcrwireless.com/LTE_Unlicensed_Report_InterDigital

May 5, 2015 / 5G, Millimeter Wave, spectrum / Posted By: Kelly Capizzi

With the initial 5G requirements highlighting a need for more spectrum, many industry organizations and governments are actively working to identify potential spectrum for future mobile services while taking into account other existing and potential users of spectrum. Most recently, OFCOM, an independent regulator and competition authority for the UK communication industries, released an updated report that sets out their work plan on bands above 6GHz.

The publication, "Laying the foundations for next generation mobile services: update on bands above 6 GHz" summarizes responses to OFCOM’s January Call for Input (CFI) on spectrum above 6GHz for future mobile communications and recognizes contributions from the detailed submission of InterDigital Europe to the CFI. InterDigital Europe, which is focused on the development of 5G and IoT technologies, also participated in the OFCOM event, “Future Technology and 5G” held on March 12, 2015.

OFCOM’s recognition of InterDigital Europe’s submission underscores the importance of InterDigital’s European research operations and aligns with their mission to foster collaboration in this segment. Other collaborative efforts can be seen in the recent announcement of their key roles in multiple Horizon 2020 projects, including the 5GPPP project, XHAUL.

For more information on InterDigital’s vision for 5G, please visit the Vault.

April 24, 2015 / 5G, Standards / Posted By: Kelly Capizzi

Numerous industry forums and research projects are working today to feed the standardization of tomorrow. InterDigital has played a central role in defining global wireless standards for the industry and is now focused on the next major shift – 5G. Most recently, InterDigital participated in the IEEE Communications Society (ComSoc) Standards Activities Council’s 5G Rapid Reaction Standardization Working Meeting that was hosted on April 21, 2015.

The working meeting aimed to identify primary standards development challenges in 5G, determine standardization opportunities for the IEEE and establish pre-standardization research, study and/or working groups under the identified areas. Alex Reznik, Senior Principal Engineer, InterDigital, presented InterDigital’s vision of 5G along with the company’s ideas on standardization of 5G and other related areas. A number of major operator and telecom equipment vendors also participated in the one-day working meeting held at IEEE Standards Association Headquarters in Piscataway, NJ.  

"The standards eco-system must work towards a multitude of standards to define the key building blocks of 5G,” commented Reznik. “The coexistence, integration and harmonization across standards that complement each other is what will provide the ultimate 5G experience.”  

As a proven standards leader, InterDigital is well positioned for strong contributions to the design of 5G across multiple standard organizations and recently was awarded key roles in multiple Horizon 2020 projects, including the 5GPPP project, XHAUL. The company’s participation in this working meeting provided another opportunity to join other industry experts in shaping the direction of 5G.  

To learn more about InterDigital’s vision for 5G, please visit the InterDigital Vault.

April 22, 2015 / Wi-Fi, MAC, congestion-aware / Posted By: Kelly Capizzi

Recently in March, Liangping Ma, Member of Technical Staff of the InterDigital Labs group, demonstrated one of InterDigital’s cutting-edge network optimization technologies at the ACM Multimedia Systems Conference (MMSys), in Portland, Oregon. Ma worked jointly with Wei Chen, Senior Engineer, InterDigital, and Chien-Chung Shen, Professor of Computer and Information Sciences, University of Delaware.

The network optimization technology addressed a major challenge to video teleconferencing on mobile devices that communicate over Wi-Fi. Unlike video streaming of prepared content such as Netflix movies, video teleconferencing requires the network to provide very low latency. In addition, Wi-Fi is prone to packet losses resulting from either channel errors or network congestion, even with the retransmission mechanism in Wi-Fi. The state-of-the-art technologies are either too slow in providing feedback on packet losses (e.g., RTP-layer retransmission based on end-to-end feedback) or too inefficient in using network resources (e.g., application-layer forward error correction).

The InterDigital Labs group tackled the challenge by looking at how to enhance the Medium Access Control (MAC) layer of Wi-Fi. At first glance, it may be tempting to keep sending a lost packet until it is successfully received. This approach has two problems:

  1. When the Wi-Fi network is congested, to keep sending a lost packet will only make the congestion worse.
  2. To keep transmitting a lost packet will hide congestion-caused packet losses that are essential to higher-layer congestion control protocols such as the Google Congestion Control protocol in WebRTC.

Instead, the InterDigital Labs group proposed to first detect congestion, and then increase the number of allowed retransmissions if and only if congestion is not present. In other words, the number of allowed retransmissions is increased if and only if a packet loss is caused by channel errors. The technology significantly reduces packet losses without disrupting congestion control. The benefit is clearly demonstrated by the much smoother playback of the video.
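The decision rule itself is easy to sketch. The snippet below is only an illustration of the logic described above, not InterDigital's implementation; the congestion check and the retry limits are hypothetical stand-ins for the real MAC-layer hooks and tuning:

BASE_RETRY_LIMIT = 4        # ordinary retransmission budget (illustrative value)
EXTENDED_RETRY_LIMIT = 10   # extra retries allowed only on an uncongested channel

def detect_congestion(channel_stats):
    # Hypothetical heuristic: treat growing queueing delay as a sign of congestion.
    return channel_stats.get("queue_delay_ms", 0) > 50

def allowed_retransmissions(channel_stats):
    if detect_congestion(channel_stats):
        # Congested: keep the normal limit so congestion-caused losses stay
        # visible to higher-layer congestion control (e.g. WebRTC's).
        return BASE_RETRY_LIMIT
    # Not congested: losses are likely channel errors, so retransmit harder.
    return EXTENDED_RETRY_LIMIT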

April 13, 2015 / 5G, SDN, NFV, RCR Wireless, 5GPPP / Posted By: Kelly Capizzi

The first commercial deployments of 5G networks are predicted to begin in just five years. However, it is likely that many elements of what will be 5G will be seen much earlier than 5G access networks as we work to develop the broader capabilities. Two elements in particular that will be seen are network function virtualization and software defined networks.

InterDigital’s Alan Carlton, Vice President, InterDigital Europe, and Jim Nolan, Executive Vice President, InterDigital Solutions, recently joined Jeff Mucci, CEO and Editorial Director of RCR Wireless News, for a discussion on the role of network function virtualization and software defined networks, NFV and SDN, in 5G development.

Carlton explained that basically SDN is about creating the separation of the control plane from the data plane and NFV is about taking the service logic out of local hardware and putting it in the cloud. He provides specific examples, such as InterDigital Europe’s recent milestone 5G Infrastructure Public Private Partnership (5GPPP) win, project XHAUL, of application-level development on SDN platforms and elaborates on how NFV and SDN will form the cornerstones on which 5G will be built.

To learn more and watch the full podcast, visit RCR Wireless News’ Coders: Episode 5 – 5G Networks and Open Source Code podcast.

April 8, 2015 / STEM / Posted By: Kelly Capizzi

Science, technology, engineering and mathematics (STEM) education is key to the competitiveness of our nation in an increasingly interconnected global economy. InterDigital has been an active supporter of the STEM community and is committed to investing in the ideas and people of the future. The company’s commitment can be seen in its latest local effort - sponsorship of new wireless communications laboratories at Delaware State University.

On Tuesday, InterDigital announced the contribution of a $300,000 grant to DSU’s College of Mathematics, Natural Sciences and Technology (CMNST) during a media event held at the University’s Dover Campus. Remarks were made by Dr. Harry Williams, President, Delaware State University; Jack Markell, Governor of the State of Delaware; Thomas Carper, U.S. Senator; Dr. Noureddine Melikechi, Vice President for Research, Innovation and Economic Development & Dean of the CMNST; and William J. Merritt, President and CEO, InterDigital.

The grant funding will be used to establish three new laboratories in the University’s Mishoe Science Center: a digital and analog electronics laboratory; a wireless communications, signal processing and controls laboratory; and an advanced micro-controller design lab. The new teaching laboratories will be available for undergraduate studies at DSU beginning this fall.

“As one of the leading American companies in the field of wireless technologies, we at InterDigital depend on the work being done here at DSU and in universities across the nation to prepare young people for careers in STEM fields,” commented Merritt. “We are very pleased to be a part of the Delaware State University College of Mathematics, Natural Sciences and Technology team on a new mission to the future – a mission to ensure that our young people are well equipped to be the researchers, engineers and business leaders our economy needs.”

For more information on Delaware State University’s College of Mathematics, Natural Sciences and Technology, please click here.


William Merritt, CEO, InterDigital, providing his remarks.


(L-R): Tom Preston, general counsel, DSU; Tom Carper, U.S. Senator; Jack Markell, Governor, DE; Dr. Noureddine Melikechi, vice president of research, DSU; William Merritt, CEO, InterDigital; Harry L. Williams, president, DSU; David Turner, board of trustees chairman, DSU; Robin Christiansen, Mayor of Dover; Jannie Lau, general counsel, InterDigital, and Alton Thompson, provost, DSU with display check.


Dr. Noureddine Melikechi and William Merritt talk post grant announcement.

April 7, 2015 / Posted By: wotio team

Let’s create a bip that tracks some stock prices and saves them out to an analytics service for us. We'll use keen.io for this example, but you could just as easily use mongodb or other pods and run the data through whatever services you want as well.

This data could be anything that’s found on the web. So although this example is polling for a current stock price, any data that can be scraped from the web is fair game. We’ll show how that’s done in a bit.

First let's define which stock symbol we’d like to track. Big Blue is as good as any, so let's pick the (1) Flow Controls and (2) ‘Generate a Payload’ with (3) “IBM” as the payload.

(1)

(2)

(3)

This tells our bip to start out emitting "IBM" into whatever we want to connect to it.
We'll want to bind that "IBM" payload to a specific HTML DOM element on a specific page, so let's go ahead and use the HTML pod DOM Selector action (4) to make a web request and use the jQuery Selector to get the data we want. We'll add that action the same way we added our initial payload. (Create New Action -> HTML pod -> DOM selector)
(4)

(5)

Go ahead and add our new DOM Selector action onto the graph (5) and double-click to edit the properties (6).

(6)

Here we’re going to go out to the web to grab the latest stock price. We'll use Google Finance for that, as you can see in the URL field. We're interested in a specific DOM element, so we're going to enter #price-panel .pr span to select the page element we're interested in (the price!)

Do you see how we used the generated payload of "IBM" in the first step to build out our URL query in this step? That's how we grab a piece of dynamic data from one pod and feed it into another in whichever way we want!
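For a feel of what the DOM Selector action is doing under the hood, here is a rough stand-alone equivalent in Python using requests and BeautifulSoup. The Google Finance URL pattern is an assumption (and the page layout has surely changed since), but the CSS selector is the same one we typed into the pod:

import requests
from bs4 import BeautifulSoup

symbol = "IBM"  # the payload generated in step (3)

# Assumed URL pattern for Google Finance; in bip.io this is whatever you put
# in the DOM Selector's URL field, with the payload substituted in.
html = requests.get("https://www.google.com/finance", params={"q": symbol}).text

# Same selector we gave the pod: #price-panel .pr span
node = BeautifulSoup(html, "html.parser").select_one("#price-panel .pr span")
print(symbol, node.get_text(strip=True) if node else "price not found")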

Now that we're grabbing some data (IBM listed price), we need a place to put it. Well, that's exactly what Keen.io was built for, so let's use the keen.io pod to send our data over to our keen.io account!

Again, adding a Keen.io action to our bip is done the same way we added our Payload Generator & DOM Selector actions. (Pods -> Keen.io -> Add an Event)

Once we've added Keen.io, go ahead and add the action to the graph, and it's simply a matter of connecting the dots!
(7)

By connecting the actions in this way, as if by magic we now have available to us the transformed or collected attributes from the other actions.

Keen.io expects to receive a well-formed key:value object, so we'll marshal the Event Object field to send over a JSON object keyed on "IBM" with the value dynamically set to the Text field from our DOM Selector pod action.

With those things set, let's run our bip and then head over to our keen.io dashboard to see that everything is flowing correctly.

Viewing the latest data event we sent to keen.io (via our bip!), sure enough we see that our stock price value is getting sent correctly:

{
  "keen": {
    "timestamp": "2015-04-06T14:00:01.257Z",
    "created_at": "2015-04-06T14:00:01.257Z",
    "id": "5522916196773d1d96613471"
  },
  "IBM": "159.08"
}

Success!!

Now we can take advantage of all of the rich analytics and monitoring that Keen.io provides on that data we've sent.
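For instance, once a few events have accumulated you can pull them back out through Keen's query API. The snippet below is only a sketch: PROJECT_ID, READ_KEY and the "stock_prices" collection name are placeholders, so substitute the values from your own keen.io project.

# A sketch of reading our stock events back out of Keen's query API.
# Replace PROJECT_ID, READ_KEY and the collection name with your own values.
import requests

url = "https://api.keen.io/3.0/projects/PROJECT_ID/queries/extraction"
params = {
    "api_key": "READ_KEY",
    "event_collection": "stock_prices",
    "timeframe": "this_7_days",
}

for event in requests.get(url, params=params).json().get("result", []):
    print(event["keen"]["timestamp"], event.get("IBM"))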

To recap:
1. Pods -> Create Event -> Flow -> Generate Payload
2. My Bips -> Create a Bip -> Add Generate Payload
3. Add an Event -> HTML -> DOM Selector
4. Add an Event -> Keen.io -> Add an Event
5. Click-n-drag to connect Payload -to- DOM-Selector
5a. Click-n-drag to connect DOM-Selector -to- Keenio-Event
6. Double Click Each Pod Icon To Configure the Actions.
7. Hit Run!

Feel free to follow these steps to build up a tracker for your own portfolio.

And of course with bip.io we can connect many different services to store and mix-n-match whatever data we want, coming from whatever website or web-service we want, pretty easily.

So get out there and start bipping!

April 2, 2015 / small cell, Wi-Fi, LTE, Carrier Wi-Fi, SON / Posted By: Kelly Capizzi

ThinkSmallCell featured two companies that have raised the profile of managing large-scale small cell deployments in its recent article, Managing Large Scale Small Cell Deployments – and XCellAir, an InterDigital commercial initiative, is one of them.

In the article, ThinkSmallCell, a leading resource website for those involved in the mobile phone industry, covers potential future features and required characteristics as small cell deployments expand and interwork with Carrier Wi-Fi. XCellAir is highlighted for its view of a solution space encompassing both Wi-Fi and cellular, as well as for its entirely cloud-based solution.

XCellAir offers one of the industry's first cloud-based, multi-vendor, multi-technology mobile network management and optimization solutions. Their strategic ecosystem features a network comprised of Wi-Fi access points and LTE small cell vendors, as well as self-organizing network (SON) suppliers that will help wireless service providers unlock untapped potential of Heterogeneous Networks (HetNets) consisting of Wi-Fi and / or cellular small cells.

To learn more about the solution, please visit xcellair.com

March 31, 2015 / Posted By: wotio team

One of the nice things about bip.io’s ability to automate the web for you is that things can appear to happen for you without you having to do anything. But sometimes you want to have more control over when those events occur.

So we’ve added the ability to schedule bips to trigger on whatever schedule you want.

Say you want one of your personal bips to trigger every weekday at 5pm, and you want some of your business-oriented bips to only trigger on the third Friday of every month.
Well, you can now set a tailored, specific calendar for when each and every bip you own should run, as you can see in this example, which will run every 3rd Friday of the month at 4:30pm, starting on April 1st. (You can even tell it to run in someone else's timezone!)

Remember that this scheduling feature is on a per-bip basis.
If, however, you want to change how often all of your triggered bips are fired, you can define your trigger schedule by adjusting the crons settings in your config/*.json, like this example that runs every 15 minutes (the six-field cron expression has a leading seconds field):

"crons": {
    "trigger": "0 */15 * * * *",
      ...
},

Let us know if you find this feature useful, and be sure to share with the community the awesome bips you're scheduling!

And as always, you can stay up to date with what's going on with bip.io by following us on Twitter.

March 30, 2015 / Posted By: wotio team

When thinking about web automation we need to go beyond simple integrations and think of it as a class of programming that makes use of various distributed API's to be put to work backing parts of applications, or replacing them entirely. Where once all software was built and run on a single machine with localised libraries and runtimes, we now have the world's API's doing their one thing well, at scale, right at our fingertips.

That's pretty neat! It's an exciting future.

bip.io is purpose built to this end, and although it's a web automation framework, it's also a visual programming tool for building and easily maintaining discrete endpoints which can augment your own applications, using external and native API's.

I had the chance to build a new 'randomised integer' flow control recently and took it as an opportunity not only to build some useful Bips (essentially, distributed graphs), but also to take advantage of the 'app-like' characteristics of Bips by replacing the loading message on the bip.io dashboard. Anyone else on the team could now update the configuration as they like, no programming required. No sprint planning, no builds, no deployment plan. It was fun, quick to do and pretty useful, so here it goes!

In A Nutshell

We're going to generate the message in the dashboard loading screen

From a public Web Hook Bip that looks like this

Problem Space

So this is the most important part - Planning. The loading messages are generated randomly for each hit on the dashboard from a list of possibilities. When we look at a piece of existing code like :

$motds = Array(
  "message 1", 
  "message 2",
  "etc..."
);

define('MOTD', $motds[ array_rand($motds) ]);

... it can be distilled down to being just a 'list of strings'. One of those strings needs to be extracted randomly, stored and displayed somehow. And that's what this Bip should do.

I know we have a text templater that can contain a list, and ways to split lists of strings using flow controls. The randomisation part means there's some math involved, so the math pod is needed too. That bit should take care of the extraction requirements.

The logical steps of extracting a random string from a list will be (annotated with bipio actions) :

  • get the number of lines (templater.text_template)
  • generate a random number between 0 and the number of lines (math.random_int)
  • get every line number (flow.lsplit)
  • test that the line number matches the random number (math.eval)
  • if the line number matches the random number, then that's a random line (flow.truthy)

However, because Bips are asynchronous, distributed pipelines, there's no way to loop back all the possible outputs from that computation, so I'll need somewhere to store the result for retrieval later. For that, I'll use a Syndication container, which can store line items (syndication.list).
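If it helps to see the same logic outside of a Bip, here's a small Python sketch of what the pipeline above works out to, with the corresponding bipio actions noted in the comments:

# What the Bip computes, expressed as plain Python for comparison.
import random

motds = [                      # templater.text_template holds the list of messages
    "message 1",
    "message 2",
    "etc...",
]

random_line = random.randint(0, len(motds) - 1)   # math.random_int

# flow.lsplit emits every line; math.eval + flow.truthy keep only the one
# whose line number matches the random number we generated above.
motd = next(line for i, line in enumerate(motds) if i == random_line)

# syndication.list (in replace mode) then stores this single surviving line,
# which is what the renderer serves back to the dashboard.
print(motd)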

Select A Web Hook Trigger

Web Hooks are a certain flavor of Bip which sits passively in the system and waits for messages to arrive before performing its work. They're kind of like personal API's that can be called on demand with any ad-hoc data structure.

Find them by just going to Create A Bip > Create New Event and select Incoming Web Hook

Now I'm ready. While it's not completely necessary, it's useful to give web hooks a name. The name will become part of the URL. For this one I called it random_motd (Random Message Of The Day), because it will behave similarly to unix motd's.

Create Containers

Here's a little cheat if you're following along. Go ahead and plug this filter into the action search area: message template,math,by line,truthy,store.

It should give you a list that matches pretty closely to the actions mentioned earlier, and look similar to

Add them all into the graph. When it comes to creating the syndication container, make sure to set replace mode, rather than append mode. This will make sure that new motd's aren't appended to the container, and that its contents are replaced instead.

^^ This is really important and ties everything together in the end

Connect The Dots And Personalize

Whichever method you prefer, connecting these in advance or step by step, eventually we'll need a graph that looks like this :

I usually connect them all up first and let bip.io try to figure out how data should be transformed, but it's personal preference. I start by dragging from the Web Hook icon to templater and so on, until the syndication container is the last node in the pipeline.

For each of the nodes, double click and set up these transformations

Templater - Set the template

Math - Generate random value

Flow Control - Split lines

Math - Select Random Line

Flow Control - Test Current Line Equals The Random Line

Syndication - Replace Into Container

Select A Renderer

Almost there: we have the pipeline defined, but how is calling this endpoint going to return a random message? That's where Renderers come in. Renderers let a Web Hook Bip respond in custom ways beyond a simple 200 OK message by making use of RPC's provided by the Actions themselves, which are usually hidden.

What we want to do in this case is serve the random line that has been stored in the syndication list container back to the connecting client. Luckily, syndication.list has a renderer to do just this, so I enable it by going to the Renderer tab and hitting 'Enable' for the 'Returns List Content' renderer under 'Syndication : current motd'.

Make Sense?

Ready To Save

Because it's a web hook, you'll generally need your username and API token to call it, but I don't care about authentication for this demo, so under the Auth tab, Authentication should be set to 'none'. None auth means it's available to anyone with the link.

try it

Or install it yourself

A couple of notes if you're building this yourself...

After the Bip has been saved it will appear in your list under My Bips, and will be callable as https://{your username}.bip.io/bip/http/random_motd

You'll need to prime the endpoint by calling it once, which will set the container content and give it something to respond with. After that, you're all set.
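If you'd rather prime it from a script than from the browser, a couple of plain GET requests will do the job. The URL below is a placeholder; use the one listed under My Bips (and remember we set auth to 'none', otherwise you'd need your username and API token).

# Prime the web hook once, then read the stored message back.
# Replace the URL with your own from My Bips, e.g.
# https://{your username}.bip.io/bip/http/random_motd
import requests

url = "https://YOUR-USERNAME.bip.io/bip/http/random_motd"

requests.get(url)                 # first call runs the pipeline and fills the container
print(requests.get(url).text)     # subsequent calls return the stored random message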

Video here

Happy Bipping!

March 27, 2015 / 5G, H2020, 5GPPP / Posted By: Kelly Capizzi

As a pioneer in mobile technology, 5G research and development has been an area of special focus for InterDigital and its subsidiary, InterDigital Europe. In addition to 5G R&D, InterDigital Europe is focused on partner collaboration in Horizon 2020 (H2020) and other initiatives. The company’s mission to drive future technologies and foster collaborations can be seen in the company’s key roles in H2020 projects such as POINT (iP Over IcN the betTer IP) and RIFE (aRchitecture for an Internet For Everybody) along with its most recent milestone 5GPPP win.

Wireless Magazine, a publication that targets the UK Wireless sector, published an article yesterday that highlights InterDigital Europe’s milestone 5GPPP win for project XHAUL. In the article, editor James Atkinson underscores our experience in the space along with our involvement in this exciting project. The objective of project XHAUL is to develop a 5G integrated backhaul and fronthaul transport network to flexibly and dynamically interconnect the 5G radio access and core network functions.  

Check out the full article here and learn more about our vision of 5G on the InterDigital Vault.

March 24, 2015 / IoT, M2M, IP, oneM2M / Posted By: Patrick Van de Wille

The Internet of Things (IoT) is one of the most important areas of research for InterDigital, for a variety of reasons. For our technologists, we understand that it’s a force that will in all likelihood fundamentally alter the way the world operates, much like the advent of large-scale business computing or the Internet. For our investors, we often highlight that IoT is one of the next big opportunities that the company is working to capitalize on, through a variety of approaches including our Convida Wireless joint venture, our ONEMPOWER M2M service delivery platform, and wot.io, our startup focused on IoT data management.

So although it’s still early days for IoT, we were happy to see InterDigital included in a list of IoT market leaders when it comes to intellectual property. In a blog entry yesterday, Boston-based TechIPm, LLC, a professional research and consulting company specializing in technology and intellectual property mining and management, ranked InterDigital third in its “M2M for IoT Innovation Ranking,” just behind LG and Ericsson and ahead of Samsung, ETRI and Qualcomm, among others.

The ranking underscores the pioneering development and standardization work that InterDigital has done in IoT, including helping to drive the ETSI standard that is now a part of the OneM2M effort. That work has also been captured in a solution, the ONEMPOWER M2M service delivery platform, which was highlighted at Mobile World Congress in Barcelona.

It should be noted that InterDigital does not vouch for the accuracy of third-party research or the methodologies such researchers employ – TechIPm's conclusions are their own. Still, it's nice to be noticed!

March 17, 2015 / Posted By: wotio team

Here in New York, we get very little control over our apartment's environment. Our buildings have central heating, window A/C units are the norm, and depending on how high your building goes your windows might not even open. This makes it a perfect environment for a device like the Netatmo Weather Station (NWS). It reports on the 'vital stats' of your environment, both indoors and outdoors (temperature, atmospheric pressure, CO2 levels, and even noise levels) so you can get a good idea of what your environment is like. But better yet, with a system like Wotio you can actually put that data to use, and have your A/Cs, windows, and heaters act on that data to create a more comfortable environment. In this post we'll get our NWS connected to Wotio, and in a future one we'll use some of Wot's partner services to automate our A/C with it.

The Integration

One of the cool parts of this integration is that, unlike some of the others, the Device Management Platform for the NWS is cloud-based. This means that this integration can be run from anywhere without being limited to your local network. So spin up a cloud instance and try it!

Prerequisites

To get started, you'll need the following:

  • A Netatmo Weather Station
  • A computer, virtual machine, or cloud instance with Python 2.7 installed
  • A Wotio access token and path
  • Netatmo Credentials, Client ID, and Secret Token

To get your Netatmo Credentials, register on their developer site and create a new app (https://dev.netatmo.com/dev/createapp). Your Client Id and Client Secret will be given to you there.

The integration

For this integration, we'll be using Python. Even if you don't know it, it's a fairly easy language to read, so you can still follow what's going on.

You can find the code here:

https://github.com/WoTio/wotnetatmo

Download it, then run:

sudo python setup.py install  
wotnetatmo <netatmo username> <netatmo password> <netatmo clientid> <netatmo clientsecret> <visit http://wot.io for your wot.io path>  

Where <UUID> is a random string you've generated to identify your application. If you're doing multiple integrations and want them to communicate, this should be kept as the same value.

And that's it! Your weather station is now integrated into Wotio. The trick here is in the wotwrapper library: it takes the functions in a Python class (in this case the Netatmo class) and wraps them into Wot-style JSON commands. In this case the only function available is get_state(), which is used to push data onto the Wotio bus.
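Under the hood, the wotnetatmo script amounts to something like the sketch below. The import path and constructor arguments here are assumptions for illustration (check WoTio/wotnetatmo on Github for the real code), but the wotwrapper.connect() call mirrors the pattern used in our other integrations.

#!/usr/bin/env python
# A sketch of wrapping the Netatmo Weather Station onto Wotio.
# The "from wotnetatmo import Netatmo" import, the constructor arguments and
# the 'netatmo' module name are illustrative assumptions; see the repo for the real module.
import sys, wotwrapper
from wotnetatmo import Netatmo

# Credentials come from dev.netatmo.com, in the same order as the CLI above.
nws = Netatmo(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4])

# connect(<wotio path>, <module name>, <object to wrap>, <data function>, <publish delay>)
wotwrapper.connect(sys.argv[5], 'netatmo', nws, nws.get_state, 10)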

For detailed usage, see the code and documentation at WoTio/wotnetatmo on Github.

The Wot side

Now that we've got our station wrapped, let's check out its data stream on Wot.

We can connect to Wotio using several different protocols, but for our purposes we're going to use websockets. Why? Because we can use any number of mechanisms to view the data streams we're generating in real-time. If you don't have a websocket shell installed, you can try progrium/wssh. It's also based on Python 2.7, so if you've gotten this far in the integration it should work for you perfectly fine. So let's use it to open a websocket connection to our data stream on wotio:

wssh <visit http://wot.io for your wot.io path>  

And you should see the data stream coming from the Netatmo Weather Station, similar to the one below:

[{"Noise": 38, "Temperature": 22.2, "When": 1426618048, "Humidity": 42, "Pressure": 1002.6, "CO2": 1058, "AbsolutePressure": 1001.7, "date_max_temp": 1426595993, "min_temp": 21.7, "date_min_temp": 1426578048, "wifi_status": 60, "max_temp": 23.5}, {"date_min_temp": 1426582660, "rf_status": 44, "Temperature": 15, "date_max_temp": 1426614084, "min_temp": 10.3, "battery_vp": 5344, "max_temp": 16.5, "When": 1426618032, "Humidity": 41}]

Notice how this data is now being pushed and routed within Wotio automatically? That's what gives Wot its power. There are lots of pre-integrated services that can act on this data. You can even integrate another device (just as easily, mind you) and Wotio gives you the power to link them together so that they can communicate no matter where they are in the world; all they need is an internet connection. For example, you could have a trigger set up so that if some data point here reaches a certain threshold, some other action occurs (say, the battery running out triggers a tweet, or some other device to turn off). And all of this is production-ready, with no infrastructure for you to maintain, should you want to take your devices (or services) to market.

And if you've got another device connected as described on the blog, you should be able to send and receive commands on it as well!

Learn More

If you'd like to see a sample of the power you can get out of Wotio, visit the wot.io website and we'll call you to schedule a meeting.

March 17, 2015 / Posted By: wotio team

When looking around the smart home market, it's hard to miss the Belkin WeMo line of products. They've got many different kinds of compatible devices, from light bulbs to power sockets to space heaters, all of which have some sort of data associated with them, from the state of the device to the state of the surrounding environment. So why don't we integrate our devices into Wotio, just like we did with the Philips Hue?

Why Wot?

What if the WeMo's light sensors, bulbs, and just about every other device were already pre-integrated into a single system, including devices on other platforms? What if the only thing needed to get them working together was a short script? What if you could then tack on other services from 3rd parties (maybe SQL storage, or tweeting specific events) with little extra effort? And someone else did all of the hosting and scaling work for you? Well, then you'd be using Wot. So in this post I'll be showing you how easy it is to integrate the WeMo system into Wotio, and how Wotio can be used to leverage it.

The integration

WeMo devices can be controlled in two ways: either via their Cloud API, or via the local Device Management Platform (DMP). Unfortunately, their DMP doesn't have an open API and is (at least publicly) undocumented. So in this post we'll be seeing how to bypass their DMP and still retain control of our devices over the internet, and then in a future post, how to replace theirs with one on Wotio. If you're cringing right now, you've probably done some system integration before. But don't worry, that's what makes Wot so powerful - it makes these integrations so quick and easy that we don't have to worry about them anymore.

Prerequisites

To get started, you'll need the following:

  • A Belkin WeMo device (in this demo, we'll use an Insight Switch)
  • A computer, virtual machine, or cloud instance with Python 2.7 installed and a network connection to your WeMo devices
  • A Wotio access token and path

The integration

For this integration, we'll be using Python. Even if you don't know it, it's a fairly easy language to read, so you can still follow what's going on.

You can find the code here:

https://github.com/WoTio/wot-wemo

Download it, then from that directory run:

sudo python setup.py install  
wotwemo <visit http://wot.io for your wot.io path>  

Where <UUID> is a random string you've generated to identify your application. If you're doing multiple integrations and want them to communicate, this should be kept as the same value.

And that's it! Your WeMo devices are now integrated into Wotio. The trick here is in the wotwrapper library: it takes the functions in a Python class (in this case the WotWemo class) and wraps them into Wot-style JSON commands.

JSON Message received         | Function Called  
------------------------------------------------------------------
["list_devices"]              | wemo.list_devices()
["help","<device_name>"]      | wemo.help(device_name)

For detailed usage, see the code and documentation at WoTio/wot-wemo on Github.

The Wot side

Now that we've got our switch wrapped, let's check out its data stream on Wot, and attempt to control it over the internet.

We can connect to Wotio using several different protocols, but for our purposes we're going to use websockets. Why? Because we can use any number of mechanisms to view the data streams we're generating in real-time. If you don't have a websocket shell installed, you can try progrium/wssh. It's also based on Python 2.7, so if you've gotten this far in the integration it should work for you perfectly fine. So let's use it to open a websocket connection to our data stream on wotio:

wssh <visit http://wot.io for your wot.io path>  

And you should see the data stream coming from the insight switch! To control it, all you have to do is type a command:

["set_state","<device_name>","off"]
["set_state","<device_name>","on"]

Notice how each time you send one of these commands, you not only toggle the switch, but change the data coming out of it as well? That's what gives Wot its power. There are lots of pre-integrated services that can act on this data. You can even integrate another device (just as easily, mind you) and Wotio gives you the power to link them together so that they can communicate no matter where they are in the world; all they need is an internet connection. And all of this is production-ready, with no infrastructure for you to maintain, should you want to take your devices (or services) to market.
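You don't have to drive this from a shell, either. Here's a minimal sketch of sending the same command from Python, assuming the websocket-client package (pip install websocket-client) and the same wot.io path you used with wssh above:

# Toggle the switch programmatically over the same websocket connection.
# The URL is the wot.io path placeholder from above; <device_name> is the
# name reported by the ["list_devices"] command.
import json
from websocket import create_connection

ws = create_connection("<visit http://wot.io for your wot.io path>")
ws.send(json.dumps(["set_state", "<device_name>", "off"]))
print(ws.recv())   # the next message published on the bus reflects the new state
ws.close()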

And if you've got another device connected as described on the blog, you should be able to send and receive commands on it as well!

Learn More

If you'd like to see a sample of the power you can get out of Wotio, visit the wot.io website and we'll call you to schedule a meeting.

March 17, 2015 / STEM, NSPE / Posted By: Kelly Capizzi

Science, technology, engineering and mathematics (STEM) education is essential to the growth and development of our country. InterDigital is committed to the importance of STEM education and has actively driven support in the STEM community through its sponsorship of the Delaware Children’s Museum Junior Engineers Program. The company encourages employees to become engaged in local STEM education efforts and is proud to share one of the latest efforts by advanced video research technical staff member, Yong He.

Last Saturday, the math team of Black Mountain Middle School won first place among 40-plus Southern California middle school teams and made history with the first female student to win first place as an individual at the MATHCOUNTS Competition Series held at the University of California, Irvine.

InterDigital’s Yong He has been the competition coach for the Black Mountain Middle School math team located in Rancho Penasquitos, San Diego for the past two years. He received the 2015 California Society of Professional Engineers First Place Team Coach award from MATHCOUNTS for his outstanding contributions as a coach.

“The state competition victory was the happiest moment for every student, parent and volunteer who dedicated their time, hard work and support to the team, and I am so proud to have had a part,” stated He. “This award carries the same significance to me as that of my professional achievement awards.”

The National Society of Professional Engineers cofounded MATHCOUNTS in 1983 to empower middle school students to reach their full potential in mathematics. The MATHCOUNTS foundation offers three distinct programs: MATHCOUNTS Competition Series, The National Math Club and Math Video Challenge. The MATHCOUNTS Competition Series is the only national coaching and competition mathematics program for sixth, seventh and eighth grade students. The program has received a number of recognitions including two White House citations as an outstanding private sector initiative.

For more information on the MATHCOUNTS foundation, please visit www.mathcounts.org.

March 11, 2015 / MWC, 5G, small cell, backhaul, WiGig, HetNet / Posted By: Kelly Capizzi

With the close of Mobile World Congress (MWC) 2015, there has been an influx of media coverage surrounding the event. This post features post-MWC coverage on InterDigital and its commercial initiatives:  

March 6, 2015:

ThinkSmallCell’s Mobile World Congress 2015 – Small Cell Report mentions XCellAir, a cloud-based, multi-vendor, multi-technology mobile network management and optimization solution, in their coverage on the planning and deployment side of small cells. To learn more about XCellAir, please visit xcellair.com.   

March 9, 2015:

Rethink Wireless' Caroline Gabriel's MWC round-up article, "MWC: mobile shake-up looms, but not from 5G," highlights InterDigital's Peraso baseband system-on-chip as a notable solution for 60GHz WiGig and small cell backhaul. Gabriel goes on to mention XCellAir in the new operators section as she discusses the role of cloud platforms to support a new generation of network.

In the latest post from the Dispatches from the Wireless Front blog on toolbox.com, the executive team of XCellAir discusses their business model and its relationship to the HetNet paradigm with the blogger, wirednot.    

March 10, 2015:

Shortly following the MWC round up article, Rethink Wireless’ Gabriel published an article focused solely on InterDigital’s new product strategy.  The article highlights the four key areas that our labs organization is focused on and our two most recent commercial initiatives XCellAir and wot.io, a data service exchange.  To learn more about our labs, click here, and for more on wot.io, please visit wot.io.

March 10, 2015 / 5G, IoT, M2M, Spectrum, Networks, MWC / Posted By: Kelly Capizzi

5G is theorized to be a fundamental shift and key enabler in the future of the digital world. The shift from 4G to 5G will encompass the emergence of new technologies and approaches to the Internet that will change the way people live, work and play. The new approaches may require collaboration among vertical markets to help the wireless industry understand the consumer's needs and develop a technological network that connects people, things and services like never before.

This requirement appeared as a common theme across the panelists in the FierceWireless Technologies' "The Path to 5G: Defining the Next Generation of Wireless Networks" luncheon that took place during Mobile World Congress on March 3, 2015 at the Fira Congress Hotel in Barcelona.

Sue Marek, Editor-in-Chief, FierceWireless, moderated a dynamic panel that included the following industry executives: Asha Keddy, VP Platform Engineering Group, Intel; Tom Keahtley, SVP, Wireless Network Architecture and Design, AT&T Services; Alex Jinsung Choi, EVP and Head of Corporate R&D Center, SK Telecom; Adam Koeppe, VP of Network Technology and Planning, Verizon Communications; and Eduardo Esteves, VP, Product Management, Qualcomm.

The panelists made it clear that 5G requires a need for a much broader set of definitions beyond just the telecom industry and will require contributions from key players across industry vertical markets such as health care and transportation. Throughout the discussion, they expressed that the next generation of wireless networks will need to be defined through an open environment with a focus on the quality of service for specific user experiences and the new business models that will emerge in the market.

Alan Carlton, Vice President of InterDigital Europe, closed the luncheon with some remarks on the panel discussion and InterDigital's involvement with 5G. Currently, the company has a strong focus on 5G innovation in areas that include Air Interfaces, Networks, M2M/IoT, Spectrum Sharing, and Services Enabled by 5G Networks.

For more information on 5G, make sure to visit the InterDigital Vault at http://vault.interdigital.com.

March 9, 2015 / IoT / Posted By: Kelly Capizzi

wot.io, a leading data service exchange for connected device platforms, recently was recognized as a top technology innovator. The company was named one of ABI Research’s “Hot Tech Innovators” and was ranked seventh on IoT Nexus’ “Power Players in the Internet of Things” report in February.

ABI Research, a technology market intelligence firm, analyzed 118 of the most innovative companies in the global market that they consider to be on the cusp of imminent breakout. IoT Nexus’ “Power Players in the Internet of Things” report featured the top 50 IoT companies, of which the top ten was comprised of Cisco, Intel, Google, ARM, SeeControl, Sigfox, IBM, GE, and Spark.io, along with wot.io.

For more, please see the wot.io news release.  

March 5, 2015 / Wi-Fi, LTE, 5G, LAA-LTE / Posted By: Kelly Capizzi

Shortly after XCellAir announced the launch of their strategic ecosystem, FierceWirelessTech published a feature story on the company, "XCellAir emerges from stealth mode, launches ecosystem for LTE, Wi-Fi network optimization."

Monica Alleven, editor of FierceWirelessTech, interviewed XCellAir co-founders and proven technology experts Amit Agarwal, president; Narayan Menon, CTO and EVP of engineering; and Todd Mersch, EVP of sales and marketing, to gain insight into one of the industry's first cloud-based, multi-vendor, multi-technology mobile network management and optimization solutions.

As an InterDigital commercial initiative, the company was formed to meet the challenges presented by an evolving network landscape through enabling wireless service providers to efficiently manage, optimize and monetize their wireless networks.

Their strategic ecosystem features a network comprised of Wi-Fi access points and LTE small cell vendors, as well as self-organizing network (SON) suppliers that will help wireless service providers unlock the untapped potential of Heterogeneous Networks (HetNets) consisting of Wi-Fi and / or cellular small cells. The goal of the ecosystem is to deliver immense increases in network capacity, extended coverage and lower cost-per-bit.

FierceWirelessTech quoted Menon: "The way we've architected our solution… it's inherently multi-technology, multi-vendor. It will evolve very nicely to support 5G small cells as well." In addition, its technology has implications for Licensed Assisted Access (LAA)-LTE, also known as LTE-Unlicensed.

To learn more about the solution, please visit xcellair.com.

March 2, 2015 / MWC, oneM2M, SAM, SDP, mmW backhaul, WiGig, Live / Posted By: Kelly Capizzi

Day one at Mobile World Congress 2015 has arrived! This post will serve as your source for the action happening at our MWC 2015 booth, Hall 7, Stand 7A71.

The action includes demos of our WiGig-based millimeter-wave mesh backhaul solution, oneM2M-compliant Software Development Platform, Smart Access Manager and Perceptual Pre-Processing. Two of our commercial initiatives, wot.io and XCellAir, are also featuring their technologies. Stay tuned for more details throughout the week.

To close out a successful first day, The Mañaners, a popular local reggae band, opened up their three day set at our booth. Don’t miss the band live tomorrow and Wednesday at 5:30PM CET.

The Mañaners

February 22, 2015 / Posted By: wotio team

We're very happy to announce the latest release of bip.io, which realizes all of our community's feedback!

bip.io 0.3 ('Sansa') is an upgrade of both the user experience and the open source server, introducing a bunch of new features for authoring and sharing your Bips and for contributing to its ever-growing list of supported services (Pods).

The bip creation workflow has had a significant overhaul which places action discovery, configuration and workflow assembly at the heart of the experience without sacrificing any of the instrumentation you might already love.

We’ve taken the best bits of flow based programming tools (the non-tedious parts!) and applied them to general web automation, with more crowd intelligence under the hood so you only need to customize when it makes sense. Some of that intelligence has also been baked into our open source server (fork it on GitHub) so your own software can learn as we do. You can read a bit more about that in Scott’s recent post covering transforms synchronization - it’s one of our best features.

The changes may look drastic, but many core paradigms remain intact, now streamlined and modernized. Let's take a look at some of the bigger changes so you can get started right away. We also have a new built-in help system to refer to any time, or please reach us at support@bip.io if the going gets tough.

My Bips vs Community

Let's face it, there wasn't a lot you could actually do with Bips from their landing screen. We've split community and personal bips into dedicated areas and consolidated their representation, making some common functionality available without having to drill down into Bips themselves. Simple things like Copying, Sharing, Pausing and Testing were buried in the old system, and while those things are still actionable per Bip, they're also now available in the My Bips landing screen. The way Bips are represented also received a facelift and is consistent across the whole system, making them uniquely and consistently identifiable, embeddable, and visually very close to the graphs they represent.

All Shared Bips are now part of the Community section and are fully searchable, with a much easier and less issue-prone guided install. We'll be building out more community features over the coming months, and while we have some strong ideas about what this should look like, we can't do it without you, so drop some ideas into our Feedback And Support widget. We'll get to them faster than you can blink.

Building Your Bips

OK, this was a big one. The old workflow was pretty convoluted and required some prior knowledge of how the system worked, with certain steps in different areas depending on what you needed to do.

The point of a User Experience isn’t to just duplicate what a programmer would have to do, visually, but to create an abstraction that’s workable and easy to use for everyone. The original experience just had to go!

Here's a little before and after; these are always fun to show off, if only for posterity.



Some of the bigger changes

- Channels are now called 'Actions' and 'Events', depending on whether something is performing an action on demand or generating events or content on its own.

- All Bips are created the same way, by pressing the ‘Create A Bip’ button. Email and Web Hooks have been turned into regular event sources.

- The workspace takes up the entire screen and replacing the list of bips to the left are your saved Actions and Events. You can search, add, enable, authenticate and discover integrations from the one screen now, as you’re doing work.

- Dragging from a node and having it spawn a Channel selection and transforms has been completely dropped. To add Actions and Events, either click 'Add' from the left sidebar or drag them onto the workspace. Connect services by dragging from source to destination.

- Transforms are now called Personalizations, and they don't need to be explicitly configured. The system will do its best to map actions based on your own behavior, and when in doubt it will look to the community for suggestions, in real time.

- Hovering over an Action or Event will now give you contextual information about the item itself, including any global preferences

- The ‘source’ node is always identifiable as a pulsing beacon so it’s easy to see where source messages are emitting from

- Workspace physics behave more naturally. We know how annoying chasing icons is and will continue to work towards more predictable physics.

- Experimental - Some aspects of the Data View are now editable

- Sharing no longer requires a description

- Bips can be shared without saving, giving you the opportunity to remove personal information before sharing

- Triggers can be manually invoked with a ‘Run Now’ button, even when paused

- State and Logging tools! We’ll tell you the last time a Bip ran, and the Logs tab now includes Action/Event failures, run times and state changes

- Flow Controls now all have unique icons, so you can see how the control is affecting your integrations at a glance

And a quick demo of the improved workflow

What’s Next

Adding some much needed normality to the experience, including all the underlying server engineering, gives us a great platform on which we can concentrate on the one most important thing. Building a fantastic community!

We’ll be seeding hundreds of new shared bips over the coming weeks with the 60+ services currently supported and really fine tuning documentation and service discovery, making Bip.IO easier for you to not only learn and utilize, but also contribute to and make part of your own application.

We've had a great time receiving and implementing your feedback in building a new experience. We hope you like it, and we would love to hear your feedback, suggestions or success stories to make the next version even better!

Many Thanks
- Team Bip.IO

February 20, 2015 / Bipio, Api / Posted By: wotio team

Hey, so we just released a cool new feature into the bip.io server that we hope you’ll find useful.

Whenever you use bip.io to connect awesome web services together, it's sometimes tedious to remember how a piece of the information you're getting from one service should be mapped to another service, like when you want to create an email digest of your curated RSS feed of your favorite Twitter content. It's sort of annoying to have to remember and configure (okay, this RSS field should map to that email setting, and this input should map to that output) every single time you want to create a bip.

Well now you don’t have to.

When you connect one web service to another, you can do all sorts of interesting ‘transforms’ with that data as it flows across the graph that you create as your bip. And well, some ways of connecting those services together are frankly more interesting and common amongst all bip.io users. And so we’ve taken the most popular transforms that people are using, and made them available as the default suggestions, for everyone to use.

You don't have to use the suggested transform, of course. It's just a suggestion, after all! You're free to connect what you want, however you want. That's the power and flexibility of bip.io. When setting up your bip you can always personalize your integration by clicking on the +Variables button and choosing whatever field you want to capture.

Let’s walk through how to set it up:

When you install bip.io, there’s now an extra option in the setup process to confirm that you’d like to periodically fetch the community transforms. This will set a “transforms” option in your config/default.json, like so:

This will tell the server to periodically go and fetch the latest popular transforms, which is set as a cron job that you can also configure in the server config settings. If you already have bip.io installed, you can update to the latest build to get this feature as well.

As this is largely a community-powered feature, the more you use it, the better it gets. It's smart like that. So give it a try. Let us know if you find this aspect useful.

Enjoy.

February 12, 2015 / Posted By: wotio team

Philips really dominated the connected lighting market with the Hue system of bulbs. They give you a Device Management Platform (the "bridge") with an open API, which all the bulbs in your home - or business - connect to. While there are already tons of apps compatible with the Hue system, and there's IFTTT support for some simple triggers, how do you handle more advanced applications? What if you needed to hook up the bulbs to a light sensor to build an automated greenhouse? Or wanted to have them strobe if someone trips your alarm system? Sure, you could build the entire system manually, but that takes too long. That's what Wot.io is for.

Why Wot?

What if the Hue, light sensors, door alarms, and just about every other device were already pre-integrated into a single system? What if the only thing needed to get them working together was a short script? What if you could then tack on other services from 3rd parties (maybe SQL storage, or tweeting specific events) with little extra effort? And someone else did all of the hosting and scaling work for you? Well, then you'd be using Wot. So in this post I'll be showing you how easy it is to integrate the Philips Hue system into Wotio, and how Wotio can be used to leverage it.

The integration

When you buy the Hue system, you get a bridge to install on your local network that connects and controls all of the Hue Bulbs from a central location via a RESTful API. This is Philips' Device Management Platform - it updates the light bulbs' firmware, pushes new 'scenes' to them, and sends them commands. We will be integrating this API with Wotio to provide both a data stream and a control endpoint.

Prerequisites

To get started, you will need the following:

  • A Philips Hue bridge and light bulbs
  • A computer, virtual machine, or cloud instance with Python 2.7 installed and a network connection to the Hue bridge
  • A Wotio access token and path

The integration

For this integration, we'll be using Python. We will need two libraries. First, there's studioimaginaire/phue, which gives us a convenient way to access the bridge API. Second, there's wotio/wotwrapper. To install them, run:

sudo pip install phue  
sudo pip install wotwrapper  

Then we can write our connector. The code is this simple:

#!/usr/bin/env python
# hue.py
# A module for connecting the philips hue platform to the Wot environment
import sys, wotwrapper  
from phue import Bridge

# Initialize the bridge connection with IP address
b = Bridge(sys.argv[1])

# If the app is not registered and the button is not pressed, press the button and call connect() (this only needs to be run a single time)
b.connect()

# Wrap it onto Wotio
# connect(<wotio path>, <module name>, <object to wrap>, <function to retrieve data>, <delay between data publishes>)
wotwrapper.connect(sys.argv[2], 'phue', b, b.get_api, 10)  

Save that as hue.py, then press the link button on the bridge and run:

hue.py <ip address of bridge> <visit http://wot.io for your wot.io path>

Where <UUID> is any identifier that you specify (try to make it unique). That's it. It took only five lines of actual code! We're initializing the studioimaginaire/phue library, and passing it the bridge's IP address over the command line. We then wrap that library onto the Wot bus using wotio/wotwrapper.

So what did this wrapper actually do? Two things:

  1. It uses the b.get_api function to pump the current state of the system onto the bus (as you probably guessed)
  2. It wraps the methods of the Bridge class into Wotio-style JSON calls:
JSON Message received              | Function Called  
------------------------------------------------------------------
["phue","get_api"]                 | b.get_api()
["phue","set_light",1,"bri",254]   | b.set_light(1, 'bri', 254)

For the full documentation and code of wotwrapper, visit the wotwrapper github page. For all of the API calls, visit the phue github page. And to get a copy of this integration's code, visit wot-phue.

The Wot side

Now that we've got it wrapped, let's try and see the data stream on Wotio, and control a few lights while we're at it.

We can connect to Wotio using several different protocols, but for our purposes we're going to use websockets. Why? Because we can use any number of mechanisms to view the data streams we're generating in real-time. If you don't have a websocket shell installed, you can try progrium/wssh. It's also based on Python 2.7, so if you've gotten this far in the integration it should work for you perfectly fine. So let's use it to open a websocket connection to our data stream on wotio:

wssh <visit http://wot.io for your wot.io path>

And you should see the data stream coming from your light bulbs! So why don't we try controlling them too. All you have to do now is type a JSON message and hit enter, and you'll change the state of the bulbs:

["phue","set_light",1,"bri",254]
["phue","set_light",2,"bri",50]
["phue","set_light",1,"on",false]

Notice how each time you send one of these commands, you not only change the lights in your house but the data coming out of the bridge as well? That's what gives Wot its power. There are lots of pre-integrated services that can act on this data. You can even integrate another device (just as easily, mind you) and Wotio gives you the power to link them together so that they can communicate no matter where they are in the world; all they need is an internet connection. And all of this is production-ready, with no infrastructure for you to maintain, should you want to take your devices (or services) to market.

Learn more

If you'd like to see a sample of the power you can get out of Wotio, visit the wot.io website and we'll call you to schedule a meeting.

February 2, 2015 / bylaws / Posted By: Patrick Van de Wille

Today, our company announced that our board of directors had approved some amendments to our bylaws, in the form of an 8-K. Among the changes, there's one that, as head of investor relations, I'd like to highlight: we adjusted our bylaws to provide InterDigital with the ability to hold a virtual annual shareholder meeting. Virtual annual meetings provide shareholders with the ability not only to hear the proceedings and ask questions online, but also to vote in real time. Annual meetings can be fully virtual or hybrid, combining an in-person physical meeting with an online virtual component.

Fully virtual annual meetings have been progressing in terms of adoption recently – according to Broadridge, one of the leading virtual annual meeting technology providers, the number of companies holding fully virtual annual shareholder meetings almost doubled from 2012 to 2014. While it’s still certainly a minority of public companies, the trend is clear. And, as a company whose brand is intertwined with advanced tech R&D and