News from Industry

SIREMIS – Ongoing Updates

miconda - Fri, 07/07/2017 - 19:04
There is ongoing work on Siremis (the web management interface for Kamailio) to make it fully compatible with Kamailio 5.0.x and to update some of its legacy components. So I thought to start a discussion here and see whether some of these changes are going to impact existing installations too much, and what the best options are for the future.

Done so far:
  • 1) Implemented a JSONRPC client using UDP and unix domain sockets, to work in tandem with the jsonrpcs module as a replacement for the removed MI interface. The old JSONRPC over HTTP is still an option. (A minimal client sketch follows this list.)
  • 2) The charts system has been migrated from Open Flash Chart (OFC) to ECharts (pure HTML5/JavaScript), so Flash no longer needs to be enabled in the browser. Configuration is the same and the charts should look pretty similar. If you upgrade, apart from a different HTML view for the charts, the config files should not need to be changed. If you notice something broken, open an issue on the GitHub project linked above.
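For those wanting to test item 1), here is a minimal sketch (TypeScript/Node.js) of what a JSONRPC-over-UDP probe against the jsonrpcs module can look like. It is an illustration only: the port is made up, and the dgram_socket parameter shown in the comment is an assumption to verify against your kamailio.cfg and the jsonrpcs module documentation.

  import * as dgram from "dgram";

  // Hypothetical endpoint: assumes kamailio.cfg loads jsonrpcs with a UDP
  // datagram socket, e.g. something along the lines of
  //   modparam("jsonrpcs", "dgram_socket", "udp:127.0.0.1:9062")
  // Check the module docs for the exact parameter name and syntax.
  const KAMAILIO_HOST = "127.0.0.1";
  const KAMAILIO_PORT = 9062;

  const socket = dgram.createSocket("udp4");
  const request = JSON.stringify({
    jsonrpc: "2.0",
    method: "core.version", // any exported RPC command can go here
    id: 1,
  });

  socket.on("message", (reply) => {
    console.log("kamailio replied:", reply.toString());
    socket.close();
  });

  socket.send(request, KAMAILIO_PORT, KAMAILIO_HOST, (err) => {
    if (err) {
      console.error("send failed:", err.message);
      socket.close();
    }
  });

The same JSONRPC request body works over a unix domain socket or over the old JSONRPC-over-HTTP transport; only the carrier changes.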
Planned to be done:
  • 3) Relocate siremis/modules/ser to siremis/modules/sipadmin. This is purely to give a more suggestive name to the admin module related to the SIP services offered by Kamailio, and to pair it with the sipuser module. It is mainly a search-and-replace over PHP and XML files; however, it will have an impact if you developed your own internal extensions for Siremis and placed them in the ‘ser’ module, as you will need to do the same search-and-replace.
  • 4) Review the existing database table views and add fields for missing columns.
  • 5) Add views for other database tables, the first in my mind being the table for the rtpengine module. If you use modules with tables not yet managed by Siremis, reply and list them so a list of priorities can be set.
Should you have other things to report about Siremis that are not yet listed on the project’s issue tracker, let’s discuss them here. Testing and feedback for 1) and 2) are very much appreciated! Thanks for flying Kamailio!

Skype For Android Update Brings No User Presence

miconda - Wed, 07/05/2017 - 19:55
Apparently the latest update of Skype for Android from a few weeks ago removed the user's presence status, so no more online, do not disturb, etc. for your contacts. Either a bug or a feature, it is not clear yet, but it is reported and discussed across the web (Reddit, Microsoft forums, etc.).

Skype being one of the first RTC apps that worked cross platform (Linux, Windows, Mac OS), I got a couple of older relatives used to it, and they relied a lot on the presence status to start chatting. This change had an impact, as they didn't contact each other for a while, until they started questioning what was happening.

I am still using it from time to time as a channel for first discussions with a potential business prospect, if nothing else is convenient, but there I don't rely on presence status at all. Anyhow, the desktop version still presents the user's status.



Anyhow, the reason for blogging about this is to debate the usefulness vs. complexity of presence services. Telephony and VoIP are probably among the biggest consumers of presence, mainly related to the so-called BLF (busy lamp field), very common in the PBX world. While it was perhaps easy to implement in old PBX models, where each phone had a unique extension (number) associated with it, the implementation complexity escalated in the VoIP/IP telephony world, where a user can have many devices for the same account.

Moreover, with the rise of multi-tenant cloud-based PBX systems, the presence service became a scalability issue as well. Behind the scenes, presence is a greedy data consumer: for each state change, there is a bunch of notifications going on (a back-of-envelope sketch follows the list):

  • either one from the device to the server, then one from the server to each of the many watchers (the contacts) - the presence server model
  • or many from the device, one to each of the watchers - the end-to-end presence model
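To put rough numbers on that, here is a back-of-envelope sketch (TypeScript, purely illustrative figures) of the message volume a single user generates in each model:

  // Messages triggered by a single presence state change.
  function presenceServerMessages(watchers: number): number {
    // 1 publish from the device to the server + 1 notify per watcher
    return 1 + watchers;
  }

  function endToEndMessages(watchers: number): number {
    // the device notifies each watcher directly
    return watchers;
  }

  // A user with 200 contacts who changes status 20 times a day:
  console.log(20 * presenceServerMessages(200)); // 4020 messages, for one user

Multiply that by millions of users and the operational cost of keeping everyone's lamp up to date becomes obvious.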
I think the main reason for shrinking presence services is monetisation, or better said, the lack of it. If there were enough financial benefit, RTC providers would invest in it. Besides users not being willing to pay for it (or not being used to paying for it), the free RTC networks out there cannot do much with the data collected in this scope -- there is not much commercial value in knowing how many times you went from online to idle to do-not-disturb and back within a day. Texting, on the other hand, is where you share your needs or thoughts: the provider quickly learns you asked a friend for suggestions on a new gadget, so your next web browsing session has the ads waiting right there.
There is another reason that presence might be disabled on mobile apps -- the background traffic needed to update the status, which affects both battery life and data usage. This issue could eventually be overcome by doing presence requests only when the contact is displayed, so I don't see it as the main reason for not offering presence.
The free messaging services space is so crowded that the cost of operations is crucial. Many services started with no real-time presence (e.g., WhatsApp). Others are following now; more or less we see a movement like the low-cost airlines model - get rid of what is not making money directly, offer only the minimum!

Video Chat for a Website with WebRTC. Where to Start?

bloggeek - Mon, 07/03/2017 - 12:00

A video chat widget is something you may want to add to your website. And while WebRTC is the only alternative at the moment, there are many ways in which this can get done.

For all intents and purposes, I will assume we’re going to focus on a WordPress site (like my own, which has no video chat widget because it isn’t needed for my business).

I will base this article on this Quora question and the brief answer I’ve given there:

Which 3rd party software should I use for a video conferencing feature in my company’s intranet?

Company’s intranet using wordpress and we are looking for somehow a whitelabel solution (obviously not skype) which allow employee to be able to do video conferencing on their laptops and mobile devices. (Though it is intranet, it is hosted on the internet just that only for private use)

My answer on Quora:

You need something based on WebRTC (more about WebRTC here).

One way to go, is to search the WordPress plugins directory – webrtc – WordPress Plugins

But – most of the plugins there are too old or dead already.

Here’s a suggestion of 3 different vendors that should work well (even if they don’t have a plugin for WordPress, these can be easily created for them – either by you, a freelancer or the vendor himself once you contact him):

These services are very different and very similar to one another at the same time.

Now, if you fail to find something that meet your exact needs, contact me directly and I’ll be able to help you further.

Intranet or Internet?

The question itself is about a company’s intranet.

Two options here:

  1. The deployment itself is on premise, hosted on the company’s data center and accessible only within the company’s local network or via a VPN
  2. The deployment itself is cloud based and the use of the term Intranet is just to indicate that this is for the company employees’ own consumption and not for customers or other people

The question’s clarification mentions that this is of the second type, but… I don’t really think it matters. The answers here will apply to both.

Also remember that:

  1. You can have WebRTC installed and operated within a company’s local network
  2. You can have a local installation configured to be accessed externally if and when needed (ugly, but possible)
  3. You can decide to have the WebRTC video chat part executed and hosted over the “cloud” even though everything else (including maybe signaling) is hosted in the company’s local network
What’s the Scenario?

Video chat in a website can take different shapes and sizes.

Here are a few of these:

  • 1:1 video chat widget for potential prospects
  • Video “phone” directory of employees, accessible to people coming to the website. This can be used as a meeting point with the employee and as his online “smart” business card
  • Meeting point for users, not necessarily your employees, where they can discuss the content on the page or a specific topic
  • Virtual conference rooms for group calls. Either internal for employees only or more likely to collaborate with external people as well

This list is probably missing a few more use cases – feel free to suggest them in the comments.

I intentionally left out the more glaring contact center use cases, as these got covered as part of the story of O2’s contact center from a few months ago.

What Features do you need?

There are different requirements that you might be after.

Here are some for you to consider:

  1. 1:1 or group calls? Group calls would require investing more. I’ve seen too many instances recently where mesh was being marketed, promoted and sold as a great and usable solution for group calls (it isn’t; if someone tells you that, know you aren’t dealing with someone with any experience or enough scars). See the bandwidth sketch further below.
  2. Screen sharing. Want it? Need it?
  3. Recording. Are you interested in recording these video calls? All of them? Pick and choose? Voice only? Not at all?
  4. Video message. Can a user record and leave a message if no one’s there?
  5. Lobby. Do you need a lobby where people can select the room they want to join? Create new rooms? Find someone specific they want to chat with?
  6. Waiting room. Do you need a virtual waiting room? A place where people need to wait until a moderator or someone verifiable joins?
  7. Authentication. How do people authenticate? Is it part of the solution or will that be done by the fact that they are already on the website? Do you want to be able to have identifiers for these people (you do know we all have names – right?)
  8. Escalate from text. Do you need the ability to escalate a text chat conversation to a video one or the fact that people are on the page and pressed a button is enough to get them in?
  9. Sessions log. Is there a sessions log somewhere where you can see the sessions that were made? Might be something you need
  10. Downtime. Any status page of the service? Any ability to know when that part of the service is down? This is especially important when this is going to be one of your sales or support channels (it means money is involved)
  11. Working hours. Do you need this to appear only in specific hours of the day? Either during working hours or on off-hours
  12. Branding. Do you need this to be white labeled and carry your logo, colors, look and feel, etc. Or are you content with having the logo of a third party in there (with a link to their site of course)
  13. Mobile app. Do you need a mobile app for employees for this? For visitors? What should be the functionality of that mobile app?

Not exhaustive, but should get you started.
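On the mesh point from item 1, the uplink arithmetic is the quickest way to see why it breaks down. A rough sketch (TypeScript, with an assumed per-stream bitrate):

  const BITRATE_PER_STREAM_KBPS = 1200; // assumed bitrate for a single video stream

  function meshUplinkKbps(participants: number): number {
    // in a full mesh, each participant encodes and sends to everyone else
    return (participants - 1) * BITRATE_PER_STREAM_KBPS;
  }

  function sfuUplinkKbps(): number {
    // with a media server (SFU), each participant sends a single stream
    return BITRATE_PER_STREAM_KBPS;
  }

  for (const n of [2, 4, 6, 8]) {
    console.log(`${n} participants: mesh ${meshUplinkKbps(n)} kbps up, SFU ${sfuUplinkKbps()} kbps up`);
  }
  // At 6-8 participants, the mesh uplink alone exceeds many home and office connections.

And that is before counting the CPU cost of encoding multiple outgoing streams on each device.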

If you need to write that down, then you can use my requirements template for it.

How do you want this done?

There are a few WordPress plugins already available out there that use WebRTC and do “things”. Will these be suitable for your needs? I am not sure, but it is worth a shot.

If there aren’t, it doesn’t mean you can’t get it there. For a few hundred dollars, you can probably get a developer to wrap an existing solution as a WordPress plugin that is tailored to your needs.

And if you can’t find any existing solution that fits – you can have that developed for a larger sum of money and have it embedded as well.

I guess these are your choices if I had to put them on a graph:

Where do we go from here?

Yes. WebRTC enables putting video chat into websites.

Yes. There are solutions for it.

But somehow, people have an appetite once they see the capabilities.

When you plan on adding WebRTC video chat into your website, decide on the features you need and the budget you have. If they don’t match, either increase the budget or accommodate it by changing your feature set.

I will be talking later this month in my Virtual Coffee about the WebRTC Developer Tools Landscape. I won’t be touching WordPress directly, but this is something you don’t want to miss as it will touch the various development strategies and the tools out there at your disposal.

Join me for a free Virtual Coffee on the WebRTC Developer Tools Landscape. Register now

The post Video Chat for a Website with WebRTC. Where to Start? appeared first on BlogGeek.me.

Another One Bites the Dust: Tropo Closes its Doors to New Customers

bloggeek - Mon, 06/26/2017 - 12:00

It seems like we’re on an ongoing roller coaster when it comes to CPaaS. Tropo has just closed its doors. At least if you want to be a new customer (which is doubtful at best at this point in time).

Looking for the public announcement that got deleted? You can find it here.

Tropo and Cisco – Where it all started?

Two years ago (give or take a month), Cisco acquired Tropo. I’ve written about it at the time, trying to figure out what Cisco would focus on when it comes to Tropo’s roadmap. I guess I was quite wrong in where I saw this headed and where it went…

The two questions I did ask in that article that were probably the most important were:

Will developers be attracted to Tropo or will they look for their solution elsewhere?

Was this acquisition about the technology? Was it about the existing customer base? Was it about the developer ecosystem?

We’ve got an answer to the first question – they went elsewhere. If this was a growing business, then developers would be attracted, but it wasn’t.

Looking back at my recent update to my WebRTC API report, it made perfect sense that this would be shut down soon enough. Just look at the investment made by Tropo and then Cisco into the platform:

Nothing serious was done for developers on the WebRTC part of the platform since September 2014, and then after the acquisition, the focus of the Tropo developers shifted to Cisco Spark.

This answers my second question – this wasn’t about technology or customers or even the existing developer ecosystem of Tropo. It was about the developers of Tropo and their understanding of developer ecosystems. It was about talent acquisition for Cisco Spark. Why kill the running business that is Tropo? I really don’t know or understand. I also fail to see how such a signal can be a good one to developers who plan on using other Cisco APIs.

When an API vendor (CPaaS, WebRTC or other) discontinues its APIs or stops investing in developers. What…
Click To Tweet

I don’t know what Cisco’s rationale was for acquiring Tropo in the first place. If all they wanted was the talent, then this seems like too big of a deal to make. If it was for the platform, then they sure didn’t invest the needed resources in it to make it grow and flourish.

Now, here’s why I think this move took place, and it probably happened at around the time of the acquisition and in parallel to it.

UCaaS or CPaaS? The Shifting Focus

We’ve got two worlds in a collision course.

UCaaS – Unified Communications as a Service – the “modern” day voice and video communications sold to enterprises.

CPaaS – Communications Platform as a Service – the APIs used by developers to embed communications into their products.

UCaaS has a problem. Its problem is that it always saw the world in monochrome. If you were an organization, then ALL of your communication needs get met by UCaaS (or UC, or convergence or whatever name it had before). And it worked. Up to a point. Go to most UC or UCaaS vendors’ websites and look at their solutions or industries section.

The screenshot above was taken from Polycom’s homepage (couldn’t find one on Cisco since their move to Cisco Spark). Now, notice the industries? Education, Healthcare, …

All of these industries are now shifting focus. Many of the solutions found in these industries that used to go to expensive UC vendors are now going to niche vendors offering solutions specifically for these niches. And they are doing it by way of WebRTC. Here are a few examples from interviews I’ve done over the years on this site:

You’ll even find some of these as distinct verticals in my updated WebRTC for Business People free report.

The result? The UCaaS market is shrinking because of WebRTC and CPaaS.

The UCaaS market is shrinking because of WebRTC and CPaaS
Click To Tweet

Why is this happening? Three reasons that come to play here:

  1. Cloud. And the availability of CPaaS to build whatever it is that I want
  2. WebRTC (and CPaaS), that is downsizing this thing we call communications from a service into a feature – something to be embedded in other services
  3. The rise of the gig economy. In many cases, a business is offering communications not between its employees at all, but rather between its users. Think Uber – drivers and riders. Airbnb – hosts and travelers. Upwork – employers and freelancers.

With the rising popularity of the dominant CPaaS vendors, there’s a threat to UCaaS: at the end of the day, UCaaS is just a single example of a service that can be developed using CPaaS. And you start to see a few entrepreneurs actually trying to build UCaaS on top of CPaaS vendors.

If I had to draw the relationship here, I’d do it in this way probably:

Just the opposite of what UC vendors have imagined in the last two decades. And a really worrying possibility of what the future may look like.

What should a UCaaS vendor do in such a case?

Add APIs!

So now UCaaS vendors are trying to add APIs to serve as their CPaaS layer on top of their existing business. Most of them do it with the intent of making sure their UCaaS customers just don’t go elsewhere for communications.

Think of a bank that has an internal UCaaS deployment. He now wants to build a contact center. He has 3 ways of doing that:

  1. Go to a contact center vendor. Probably a cloud based one – CCaaS
  2. Build something using CPaaS. By himself. Using external developers. Or just someone that is already integrated with CPaaS
  3. Try and do something with his UCaaS vendor

If (3) happens, then great.

If (1) happens, then it is fine, assuming the UCaaS vendor offers contact center capabilities as well. And if he doesn’t, then there’s at least a very high probability that there’s no real competition between him and that CCaaS vendor

If (2) happens… well… that can be a threat in the long run.

And did we talk about Uber, Upwork and Airbnb? Yes. They have internal communications between their employees. Maybe there’s some UCaaS in there. But most of the communications generated by these companies don’t even involve their employees – only their customers and users. Where the hell do you fit UCaaS in there?

The APIs are there as a moat against the CPaaS vendors. Which is just where Cisco decided to place Tropo. As Cisco’s moat for its UCaaS offering – Cisco Spark. The Tropo team are there to make sure the APIs are approachable and create the buzz and developer ecosystem around it.

Quite an expensive moat if the end result was just killing Tropo in the process (the asset that was acquired).

The Surprise Announcement

Last week, Tropo had a blog post published on their site and had it removed after a few hours.

Since Tropo publishes the full post content into their RSS, and Feedly grabs the content and has it archived, the information in that post was not lost. I copied it from Feedly and placed it in a shared Google Doc – you can find it here.

The post is no longer on Tropo’s website. A few reasons why this may be:

  • Someone higher up probably decided to do this quietly by contacting the most active customers only
  • The post was published too early, and Cisco wanted to break the news a bit later
  • The post got too many queries from angry developers, so they took it down, until they rewrite it to a more positive note

That last one is what I think is the real reason. And that’s because after reading it more than once, I still find it rather confusing and not really clear.

This announcement from Tropo has two parts to it:

  1. We’ve got your back with our great Cisco Spark commitment to developers (an API portal, a marketplace, ambassadors program and an innovation fund)
  2. Tropo is dead if you aren’t on to Cisco Spark. But we will honor our SLA terms (because we must)

That first part is about CPaaS serving UCaaS. Cisco Spark is UCaaS with a layer of APIs that Cisco is now trying to grow.

It is also about damage control – if you’re a Tropo user, then this whole acquisition thing by Cisco probably ended badly for you. So they want to remind you about the alternatives they now offer.

The second part is about how they see current Tropo developers:

  • If you want to onboard to Tropo as a new customer, then tough luck. Doors are closed. Go elsewhere. Or go enroll in the Cisco Spark ISV Program
  • If you are just trying the system out and have an account in there that has no apps in production environment (i.e – you are a freeloader and not a paying customer) – you will be thrown out, no questions asked
  • If you are an existing paying customer running in production, then Tropo “will honor all contractual obligations”

This reads to me like the acquisition of AddLive by Snapchat. First, the service was acquired and closed its doors to new customers. It honored existing contracts. And it shut down its platform to all existing customers this year – once all contracts have ended.

So –

  1. Don’t expect Tropo to renew existing paying contracts
  2. Expect Cisco to push paying Tropo customers to Cisco Spark instead
  3. Tropo won’t add new features, and the existing ones, including any support for them, will deteriorate, but will still fit within the SLA they have in place
Where Will Tropo Customers Go?

Tropo was predominantly about voice and SMS.

It was also the biggest competitor of Twilio some 6-7 years ago, when both were comparable in size and focus. Or at least until Tropo shifted its focus from developers to carriers.

There are two places where Tropo customers can go:

  1. Cisco Spark. I think this won’t happen in droves. Some will be headed there, but most will probably go look for a different home
  2. Other CPaaS vendors. Mostly Twilio, but some will go to Nexmo, Plivo or Sinch. Maybe even TeleSign

The big winner here is Twilio.

The big losers are the developers who put their fate in Tropo.

CPaaS: Buyers Beware

As I always say, the CPaaS market is a buyers beware one.

Developers can and should use CPaaS to build their applications. It will reduce a lot of the upfront costs and the risks involved in starting a new initiative. It gives you the ability to explore an idea, fine tuning and tweaking it until you find the right path in the market. But it comes at a price. And I don’t mean the money you need to put in order to use CPaaS. I mean the risk involved in the fluidity of this market.

CPaaS and WebRTC APIs. A buyers beware market
Click To Tweet

The trick then, is to pick a CPaaS vendor that doesn’t only fit your feature requirements, but is also here to stay. And it is hard to do that.

Large companies are not necessarily the right approach – Cisco/Tropo as an example. Or AT&T shutting down its WebRTC APIs.

Small startups aren’t necessarily the answer either – they might shut down or pivot. SightCall pivoted. OpenClove has silently closed its doors.

Dominant players might not be a viable alternative either – if you consider Tropo such a player, and Cisco a sure bet as an acquirer, look at where it ended up.

You can reduce these risks if you know who these CPaaS vendors are and where they are headed. If you follow them for quite some time and understand if and what they are investing in. And if you need help with that, just follow my blog or contact me.

The post Another One Bites the Dust: Tropo Closes its Doors to New Customers appeared first on BlogGeek.me.

Kamailio v4.3.7 Released

miconda - Fri, 06/23/2017 - 21:10
Kamailio SIP Server v4.3.7 stable is out – a minor release including fixes in code and documentation since v4.3.6. The configuration file and database schema compatibility is preserved.

Kamailio (former OpenSER) v4.3.7 is based on the latest version of GIT branch 4.3, therefore those running previous 4.3.x versions are advised to upgrade. There is no change that has to be done to the configuration file or database structure compared with older v4.3.x.

Important Note: v4.3.7 is the last official release from branch 4.3. Consider upgrading to v4.4.x or 5.0.x as soon as possible to benefit from proper source code maintenance.

Resources for Kamailio version 4.3.7

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.3 origin/4.3

Binaries and packages will be uploaded at:

Modules’ documentation:

What is new in 4.3.x release series is summarized in the announcement of v4.3.0:

Note: the branch 4.3 is becoming an unmaintained stable branch. The maintained stable branches at this moment are 4.4 and 5.0, with v5.0.2 being released out of the latest stable branch. Therefore an alternative is to upgrade to the latest 4.4.x or 5.0.x – be aware that you may need to change the configuration files and database structures from 4.3.x to 4.4.x or 5.0.x. See more details about it at:

Thanks for flying Kamailio and enjoy the summer holidays!

Reeling in Safari on WebRTC – A Closer Look at What’s Supported

webrtchacks - Mon, 06/19/2017 - 15:17

Long have WebRTC developers waited for the day Apple would come around to WebRTC. It has not been simple for web developers and Apple due to their policy that requires web browsing functionality to use the WebKit engine along with Safari. This meant no WebRTC in Safari, no Firefox or Chrome WebRTC on iOS, no native […]

The post Reeling in Safari on WebRTC – A Closer Look at What’s Supported appeared first on webrtcHacks.

10 Tips for Choosing the Right WebRTC Open Source Media Server Framework

bloggeek - Mon, 06/19/2017 - 12:00

Too many WebRTC open source projects. Not enough good ones.

Ever gone to GitHub to search for something you needed for your WebRTC project? Great. Today, there’s almost as many WebRTC github projects as there are WebRTC LinkedIn profiles.

Today, there’s almost as many WebRTC github projects as there are WebRTC LinkedIn profiles.
Click To Tweet

Some of these code repositories really are popular and useful. Others? Less so.

Here’s the most glaring example for me –

When you just search for WebRTC on github, and let it select the “Best match” by default for you, you’ll get PubNub’s sample of using PubNub as your signaling for a simple 1:1 video call using WebRTC. And here’s the funny thing – it doesn’t even work any longer. Because it uses an old PubNub WebRTC SDK. And that’s for an area that requires less effort from you anyway (signaling).

Let’s assume you actually did find a WebRTC open source media server that you like (on github of course). How do you know that it is any good? Here are 10 different signals (not WebRTC ones) that you can use to make that decision.

Need to pick a WebRTC media server framework? Why not use my Free Media Server Framework Selection Worksheet when checking your alternatives?

Get the worksheet

1. Do You Grok the Code?

If you are going to adopt an open source media server for your WebRTC project then expect to need to dive into the code every once in a while.

This is something you’ll have to do either to get the darn thing to work, fix a bug, tweak a setting or even write the functionality you need in a plugin/add-on/extension or whatever name that media server uses for making it work.

In many of the cases I see when vendors rely on an open source WebRTC media server framework, they end up having to dig in its code, sometimes to the point of taking full ownership of it and forking it from its baseline forever.

To make sure you’re making the right decision – check first that you understand what you’re getting yourself into and try to grok the code first.

My own personal preference would be a code that has comments in it (I know I have high standards). I just can’t get myself behind the notion of code that explains itself (it never does). So be sure to check if the non-obvious parts (think bandwidth estimation) are commented properly while you’re at it.

2. Is the Code Fresh?

Apple just landed with WebRTC. And yes. We’re all happyyyyy.

But now we all need to shift our focus to H.264. Including that WebRTC media server you’ve been planning to use.

Oh – and Google? They just announced they will be migrating slowly from Plan B to Unified Plan. Don’t worry about the details – it just changes the way multiple streams are handled. Affecting most group calling implementations out there.

And there was that getStats() API change recently, since the draft specification of WebRTC finally decided on the correct definition of it, which was different from its Chrome implementation.

The end result?

Code written a year or two ago has serious issues in actually running.

Without even talking about upgrades, updates, security patches, etc.

Just baseline “make it work” stuff.

When you check that github page of the WebRTC media server you plan on adopting – make sure to look when it was last updated. Bonus points if you check what that update was about and how frequently updates take place.

3. Anyone Using It?

Nothing like making the same mistakes others are making.

Err… wrong expression.

What you want is a popular open source WebRTC media server. Anything else has a reason, and that reason won’t be that you found a diamond in the rough.

Go for a popular framework. One that is battle tested. Preferably used by big names. In production. Inside commercial products.

Why? Because it gives you two things you need:

  1. Validation
  2. Ecosystem

It gives you validation that this thing is worth something – others have already used it.

And it gives you an ecosystem of knowledge and experience around it. This can be leveraged sometimes for finding some freelancers who have already used it or to get assistance from more people in the “community”.

I wouldn’t pick a platform only based on popularity, but I would use it as a strong signal.

4. Is This Thing Documented?

What you get in a media framework for WebRTC is a sort of an engine. Something that has to be connected to your whole app. To do that, you need to integrate with it somehow – either through its APIs when you link on it – or a lot more commonly these days by REST APIs (or GraphQL or whatever). You might need both if you plan on writing your own addon/extension to it.

And to do that, you need to know the interface that was designed especially for you.

Which means something written. Documentation.

When it comes to open source frameworks, documentation is not guaranteed to be of specific quality – it will vary greatly from one project to another.

Just make sure the one you’re using is documented to a level that makes it… understandable.

If possible, check that the documentation includes:

  • Some introductory material to the makeup and architecture of the project
  • An API reference
  • A few demos and examples of how to use this thing
  • Some information about installation, configuration, maintenance, scaling, …

The more documentation the better off you will be a year down the road.

5. Is It Debuggable?

WebRTC is real time and real time is hard to debug.

It gets harder still if what you need to look at isn’t the signaling part but rather the media part.

I know you just LOVE adding your own printf and cout statements in your C++ code and try reproducing that nagging bug. Or better yet – start collecting PCAP files and… err… read them.

It would be nice though if some of that logging, debugging, etc would be available without you having to always add these statements into the code. Maybe even have a mechanism in place with different log levels – and have sensible information written into the logs so the time you’ll need to invest in finding bugs will be reduced.

Also – make sure it is easy to plug a monitoring system to the “application” metrics of the media server you are going to use. If there is no easy way to test the health of this thing in production, it is either not used in production or requires some modifications to get there. You don’t want to be in that predicament.

While at it – make sure the code itself is well documented. There’s nothing as frustrating (and as stupid) as the self explanatory code notion. People – code can’t explain itself – it does what it does. I know that the person who wrote that media server was god incarnate, but the rest of us aren’t. Your programmers are excellent, but trust me – not that good. Pick something maintainable. Something that is self explanatory because someone took the time to write some damn good comments in the tricky parts of the code. I know this one is also part of grokking the code, but it is that important – check it twice.

For me, the ability to debug, troubleshoot and find issues faster is usually a critical one for something that is going to get into my own business. Just a thought.

6. Does it Scale?

Media servers are resource hogs (check this video mini series for a quick explanation).

This means that in all likelihood, if your business becomes successful in any way, you will need more than a single media server to run at “scale”.

You can always crank it up on Amazon AWS from m4.large up to m4.16xlarge, but then what’s next?

At the end of the day, scaling of media servers comes down to adding more machines. Which is simple enough until you start dealing with group calls.

Here’s an example.

  • Assume a single machine can handle 100 participants, split into any group type (I am simplifying here)
  • And we have 10 participants on average in each call
  • Each group call can have from 2 up to… 50 participants

Now… how do we scale this thing?

How many machines do we need to put out there? When do we decide that we don’t add new calls into a running machine? Do we cascade these machines when they are fully booked? Throw out calls and try to shift them to other machines?

I don’t know, but you should. At least if you’re serious about your product.

You probably won’t find the answers to this in the open source WebRTC media server’s documentation, but you should at least make sure you have some reasonable documentation of how to run it at scale, and not as a one-off instance only.
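To make the arithmetic above concrete, here is a rough capacity sketch (TypeScript) that greedily packs whole calls onto machines, so a single call never spans two servers (i.e., no cascading). The numbers and the packing strategy are illustrative only, not a real scheduler:

  const MACHINE_CAPACITY = 100; // participants per machine, per the assumption above

  function machinesNeeded(callSizes: number[]): number {
    const machines: number[] = []; // remaining capacity of each machine
    for (const size of callSizes) {
      const idx = machines.findIndex((free) => free >= size);
      if (idx >= 0) {
        machines[idx] -= size; // the call fits on an existing machine
      } else {
        machines.push(MACHINE_CAPACITY - size); // spin up a new machine
      }
    }
    return machines.length;
  }

  // 1,000 concurrent participants in calls of 10 people each:
  console.log(machinesNeeded(Array(100).fill(10))); // 10 machines, with zero headroom

The interesting questions start exactly where this sketch stops: headroom for growing calls, draining machines for upgrades, and what to do when a 50-person call shows up and no machine has 50 free slots.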

7. What Languages Does it Use?

They wrote it in Kotlin.

Because that’s the greatest language ever. Here. Even Google just made it an official one for Android.

What can go wrong?

Two things I look for in the language being used:

  1. That the end result will be highly performant. Remember it’s a resource hog we’re dealing with here, so better start with something that is even remotely optimized
  2. That I know how to use it and have it covered by my developers

Some of these things are Node.js based while others are written in Java. Which one are your developers more comfortable with? Which one fits better with your technology plans for your company in the next 5 years?

If you need to make a decision between two media servers and the main difference between them for you is the language – then go with the one that works better for your organization.

8. Does It Fit Your Signaling Paradigm?

Three things you need in your WebRTC product:

  1. Signaling server
  2. STUN/TURN server
  3. Media server

That 3rd one isn’t mandatory, but it is if you’re reading this.

That media server needs to interact with the signaling server and the STUN/TURN server.

Sure. you can use the signaling capabilities of the media server, but they aren’t really meant for that, and my own suggestion is not to put the media server publicly out there for everything – have it controlled internally in your service. It just doesn’t make architectural sense for me.

So you’ll need to have it interacting with your signaling server. Check that they both share similar paradigms and notions, otherwise, this in itself is going to be quite a headache.

While at it – check that there’s easy integration between the media server you’re selecting and the STUN/TURN server that you’ve decided to use. This one is usually simple enough, but just make sure you’re not in for a surprise here.

9. Is the License Right For You?

BSD? MIT? APL? GPL? AGPL?

What license does this open source WebRTC media server framework come with exactly?

Interestingly, some projects switch their license somewhere along the way. Jitsi did it after its acquisition by Atlassian – moving from LGPL to a more permissive APL.

The way your business model looks, and the way you plan to deploy the service, are going to affect the types of open source licenses you will want and will be able to adopt inside your service.

There are different types of free when it comes to open source licenses.
Click To Tweet

Every piece of code you pick and use needs to adhere to these internal requirements and constraints that you have, and that also includes the media server. Since the media server isn’t something you’ll be replacing in and out of place like a used battery, my suggestion is to pick something that comes with a permissive open source license – or go for a commercial product instead (it will cost you, but will solve this nagging issue).

I’ve written about open source licenses before – check it out as well.

10. Is Anyone Offering Paid Support For It?

Yes. I know.

You’re using open source to avoid paying anyone.

And yet. If you check many of the successful and well maintained open source projects – especially the small and mid-sized ones – you will see a business model behind them. A way for those who invest their time in it to make a living. In many cases, that business model is in support and custom development.

Having paid support as an option means two things:

  1. Someone is willing to take ownership and improve this thing and he is doing it as a day job – and not as a hobby
  2. If you’ll need help ASAP – you can always pay to get it

If no one is offering any paid support, then who is maintaining this code and to what end? What would encourage them to continue with this investment? Will they just decide to stop investing in it next month? Last month? Next year?

Making the Decision

I am not sure about you, but I love making decisions. I really do.

Taking in the requirements and the constraints, understanding that there are always unknowns and partial information. And from there distilling a decision. A decisive selection of what to go with.

You can find more technical information on media servers in this great compilation made by Gustavo Garcia.

After you take the functional requirements that you have, and find a few suitable open source WebRTC media frameworks that can fit these requirements, go over this list.

See how each of them addresses the points raised. It should help you get to the answer you are seeking about which framework to go with.

Towards that goal, I also created a Media Server Framework Selection Sheet. Use it when the need comes to select an open source WebRTC media server framework for your project.

Get the worksheet

 

The post 10 Tips for Choosing the Right WebRTC Open Source Media Server Framework appeared first on BlogGeek.me.

Kamailio v4.4.6 Released

miconda - Fri, 06/16/2017 - 22:08
Kamailio SIP Server v4.4.6 stable is out – a minor release including fixes in code and documentation since v4.4.5. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio v4.4.6 is based on the latest version of GIT branch 4.4. We recommend those running previous 4.4.x versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous release of the v4.4 branch.

Resources for Kamailio version 4.4.6

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.4 origin/4.4

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in 4.4.x release series is summarized in the announcement of v4.4.0:

Note: the branch 4.4 is the previous stable branch. The latest stable branch is 5.0, at this time with v5.0.2 being released out of it. The project is officially maintaining the last two stable branches, these being 5.0 and 4.4. Therefore an alternative is to upgrade to the latest 5.0.x – be aware that you may need to change the configuration files and database structures from 4.4.x to 5.0.x. See more details about it at:

Thanks for flying Kamailio!

Kamailio v5.0.2 Released

miconda - Wed, 06/14/2017 - 23:09
Kamailio SIP Server v5.0.2 stable is out – a minor release including fixes in code and documentation since v5.0.1. The configuration file and database schema compatibility is preserved, which means you don’t have to change anything to update.

Kamailio v5.0.2 is based on the latest version of GIT branch 5.0. We recommend those running previous 5.0.x or older versions to upgrade. There is no change that has to be done to the configuration file or database structure compared with the previous release of the v5.0 branch.

Resources for Kamailio version 5.0.2

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone https://github.com/kamailio/kamailio kamailio
# cd kamailio
# git checkout -b 5.0 origin/5.0

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in 5.0.x release series is summarized in the announcement of v5.0.0:

Thanks for flying Kamailio!

WebRTC in 2017

webrtc.is - Tue, 06/13/2017 - 01:42
The road to the promised land.

For more than 6 years, we have been working on and looking forward to a simpler way to build RTC (Real Time Communications) applications on the web. In order for this technology to truly show its value, the major browser vendors needed to show up.

Now, it’s a reality!

macOS Sierra. Left: Safari Preview 32 (Safari 11.0, WebKit 12604.1.23.0.4) using H.264. Right: Chrome Version 58.0.3029.110 (64-bit), https://webrtc.github.io/samples/ using H.264.

Mobile, mobile, mobile.

Now that Apple has joined the party in earnest, does the technology have the coverage required in order for developers to make good use of WebRTC on mobile devices? Let’s find out.

Until now, in order for WebRTC to work on iOS, we were relegated to wrapping WebRTC code in Objective-C and Swift, in our native iOS apps. Basically, we had to take the Chrome code and build an app that was sent to the app store for approval and wait in line, like all the other chumps (yours truly included). Conversely, on Android we could run much of that same code from our desktop Chrome apps, on the Android device as well, within reason of course.

Now that Safari and Chrome are shipping compatible WebRTC on mobile, we get to reuse the same code, right!? Well, mostly, they are different code bases, after all.

A word about hardware acceleration.

If ubiquitous mobile video is to take off, the battery life of the device has to last more than the length of the 10 minute video call (ok, I am exaggerating a bit, but I think you get the point) and the performance needs to be at least adequate enough to distinguish facial features. My bar is set a little higher, baby steps for now.

Without h/w acceleration the CPU is likely working too hard to encode the local video and decode the inbound video + service the other processes required at the same time. That really means there needs to be hardware onboard the device dedicated to video coding. That in turn means H.264, since there are very few vendors that offer VP8 or VP9 h/w acceleration.

Question: Does this mean that mobile apps written with VP8 will not be able to deliver decent mobile video conferencing?

Answer: No, not at all, but they will likely not be as performant as those taking advantage of hardware acceleration.

Suffice to say that SVC (Scalable Video Coding) usage would be another reason why we need h/w acceleration, but that’s for another day.

Who’s using what?

The majority of desktop and mobile WebRTC apps written today are using VP8 for video.

Apple and Microsoft both use H.264, and Google uses VP8 and H.264 (it recently shipped OpenH264 on desktop and mobile). Also, many of the enterprise RTC developers are already on that H.264 bandwagon.

Question: If Apple and Microsoft devices ship with H.264, what is the case with Google Chrome on desktops and android, are they preferencing VP8?

Answer: Chrome for desktop and Android now has H.264 native. Many of the Android devices that ship today have H.264 hardware acceleration onboard. In order to understand which units have H.264 hardware acceleration, you can use the Android APIs to pull a list of available codecs, but in the case of WebRTC, you will only get H.264 in Android WebRTC if there is a h/w encoder on the device.

Is H.264 the answer for WebRTC video?

Here is a recent test:
Host 1 – (before joining):
macOS Sierra, Macbook, Safari (Technology Preview 32)

Host 2 (after joining):
Android 7, Samsung 7, Chrome 55

setRemoteDescription OperationError: Failed to set remote video description and params. Likely because Safari is not seeing H.264 on Android.

Host 1 (after joining):

According to the Chrome Status page, Chrome for Android should have H.264. So why is the session barfing when trying to set up video? The logs do not lie…

Safari – offer:
a=rtpmap:96 red/90000
a=rtpmap:98 ulpfec/90000
a=rtpmap:99 H264/90000

Chrome on android – answer:
a=rtpmap:96 red/90000
a=rtpmap:98 ulpfec/90000
a=rtpmap:97 rtx/90000

Err, huh? No H.264 in reply?
So, I updated to the latest Chrome on Android (58) and tried again…


et voilà!!
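A cheap way to catch this class of failure before setRemoteDescription() blows up is to check the answer for an H.264 rtpmap first. A minimal sketch (TypeScript, browser side):

  function sdpHasCodec(sdp: string, codec: string): boolean {
    return sdp
      .split(/\r?\n/)
      .some((line) => line.startsWith("a=rtpmap:") && line.toUpperCase().includes(codec.toUpperCase()));
  }

  async function applyAnswer(pc: RTCPeerConnection, answer: RTCSessionDescriptionInit): Promise<void> {
    if (answer.sdp && !sdpHasCodec(answer.sdp, "H264")) {
      console.warn("No H264 rtpmap in the answer - video negotiation will fail against Safari");
    }
    await pc.setRemoteDescription(answer);
  }

It does not fix anything, but it turns a cryptic OperationError into a log line that points at the missing codec.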

Next topic, paying the man!

Shipping your product with H.264 enabled means you may potentially need to deal with the MPEG-LA royalty police for H.264 royalties, but there are some grey areas.

In the case of Apple and Microsoft, where H.264 royalties are already being paid for by the parent vendor, the WebRTC developer is riding on the coattails of papa bear, at least in theory.

Cisco’s generous OpenH264 offer means that those using this binary module can do so at potentially no cost:

We will not pass on our MPEG-LA licensing costs for this module, and based on the current licensing environment, this will effectively make H.264 free for use on supported platforms.

Q: If I use the source code in my product, and then distribute that product on my own, will Cisco cover the MPEG LA licensing fees which I’d otherwise have to pay?

A: No. Cisco is only covering the licensing fees for its own binary module, and products or projects that utilize it must download it at the time the product or project is installed on the user’s computer or device. Cisco will not be liable for any licensing fees incurred by other parties.

That seems to mean (I am no lawyer) every developer shipping WebRTC apps that use the OpenH264 binary module gets a free ride. Those using some other binary, or shipping the above source code for that module, could be on the hook for those royalties. That said, since there are royalties being paid by parent vendors where devices are shipping H.264 anyway, developers may not get hassled regardless.

Summary:

So what did we learn here?

  • Apple has joined the party, now we have a full complement of browser vendors!
  • If you want to leverage WebRTC video to deliver a ubiquitous mobile and desktop experience for your users, you should likely consider including both H.264 and VP8.
  • VP8 is (still) free and powers most of the WebRTC video out there today.
  • You can make use of the Open H.264 project and get a free H.264 ride, albeit baseline AVC.
  • WebRTC on Android does not support software encoding of H.264, so unless there is local hardware acceleration, H.264 will not be in the offer.
  • H.264 is not fully enabled (or is buggy) in Chrome 55 (I was using it on a Samsung S7 Edge running Android 7), but it does work with Chrome 58.
  • WebRTC is not DOA!
  • SDP still sucks and ORTC can’t come soon enough!!

The W3C and IETF are also closing in on shipping WebRTC as a web standard; here’s a great update from Google on that as well. Latest W3C WebRTC editor’s draft, latest charter.

As a side note, it would be interesting to see something like this open sourced; VP8 / H.264 conversion without transcoding, if only to service the existing desktop apps currently running VP8 <-> mobile H.264. It would likely overwhelm the mobile device, but it would be cool if it worked!

Disclaimer: The views expressed by me are mine alone and do not necessarily represent the views or opinions of my employer.


WebRTC iOS 11 Support. Are We There Yet?

bloggeek - Mon, 06/12/2017 - 12:00

Last week WWDC was a happy occasion for those who deal with WebRTC. For the first time, we got the official word – and code – of WebRTC on Apple devices – WebRTC iOS and WebRTC Safari support is finally here.

I spent the time since then talking to a couple of developers and managers who either tinkered with Safari 11 already or have to make plans in their products for this development.

I came out a bit undecided about this whole thing. Here is where things stand.

Apple’s Coverage

The WebKit site has its own announcement – as close as we’ll ever get to an official announcement from Apple with any detail it seems.

What I find interesting with this one (go read it – I’ll wait here):

  • Google isn’t mentioned (god forbid). But their “LibWebRTC open source framework” is. Laughed my pants off at this one. The lengths companies will go to in order not to mention one another are ridiculous
  • WebRTC in WebKit isn’t mentioned. Two years ago I opined it wouldn’t move the needle. And it seems like it didn’t – or at least not enough for Apple to mention it in this WebKit announcement
  • TokBox and BlueJeans are mentioned as early beta. TokBox I understand. Can’t miss one of the dominant players in this space. But BlueJeans? That was a surprise to me
Previous Coverage

First things first, here are some posts that were published already about Apple’s release of WebRTC Safari (in alphabetical order):

I’ve ignored a few generic posts that just indicated WebRTC is out there.

Most relevant Twitter thread on this?

Testing out Safari 11's #WebRTC support on #macOS High Sierra. pic.twitter.com/zSpTfMlRj5

— Saúl Ibarra Corretgé (@saghul) June 6, 2017

Who’s Missing?

Well… the general media.

I haven’t really seen the keynote. I find the Apple keynotes irrelevant to my needs, so prefer not to waste my time on them. My understanding is that WebRTC took place there as a word in a slide. And that’s HUGE! Especially considering the fact that none of the general technology media outlets cared to mention it in any way.

Not even the general developers media.

What was mentioned instead were things like the iOS file manager. That seems a lot more transformative.

This isn’t a rant about general media and how they miss WebRTC. It is a rant about how WebRTC as a technology has not caught the attention at all, outside a little circle of people.

Is it because communications is no longer interesting?

At a large developers event here in Israel on that same week, there were 6 different tracks: Architecture, Full stack, Philosophy, Big Data, IOT, Security. No comms.

Not sure what the answer is…

On one hand, WebRTC is transformative and important. On the other hand, it is mostly ignored. #iOS
Click To Tweet
WebRTC Safari Support

So what did get included in the WebRTC Safari implementation by Apple?

  • The basics. getUserMedia and PeerConnection (a minimal sketch follows this list)
  • One-on-one voice and video calls seem to work well across browsers AND devices (including things like Safari to Edge)
  • Opus is there for audio
  • H.264 only for video. There is no VP8 or VP9. More on that later
  • The code supports Plan B, if you are into multiparty media routing
  • Data Channel is supported, but too buggy to be used apparently
  • No screen sharing. Yet
  • This is both for the desktop and mobile:
    • WebRTC Safari support is there for the macOS in the new version of safari
    • WebRTC iOS support is there for iOS 11 – via the Safari web browser, and maybe(?) inside WebViews
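For reference, “the basics” above boil down to something like this minimal sketch (TypeScript; the element id and STUN server are illustrative), which should run the same way on Safari 11 as on the other browsers:

  async function startLocalPreview(): Promise<RTCPeerConnection> {
    if (!("RTCPeerConnection" in window) || !navigator.mediaDevices?.getUserMedia) {
      throw new Error("WebRTC is not supported in this browser");
    }
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    const video = document.getElementById("local") as HTMLVideoElement;
    video.srcObject = stream;

    const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    return pc; // the offer/answer exchange over your signaling channel is out of scope here
  }

Anything beyond that (simulcast, VP8, screen sharing, a reliable data channel) is where the differences listed above start to bite.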

This is mostly expected. See below why.

WebKit, Blink and WebRTC

A long time ago, in a galaxy far far away. There was a browser rendering engine called WebKit. Everyone used WebKit. Especially Google and Apple. Google used WebKit for Chrome. And Apple used WebKit for Safari.

And then one day, in the middle of the development of WebRTC, Google decided enough is enough. They forked WebKit, renamed it to Blink, removed all excess code baggage and never looked back.

Apple never did care about WebRTC. At least not enough to make this thing happen until last week. I hope this is a move forward and a change of pace in Apple’s adoption of WebRTC.

Here’s what I think Apple did:

  • Seems like they just took the Google code for WebRTC and hammered at it until it fit nicely back into WebKit (ignoring WebRTC in WebKit in the process)
  • How did they modify it? Remove VP8. Add H.264 by hooking it up with the hardware codec in iOS and on the Mac
  • And did the rest of the porting work – connecting devices, etc.
  • Plan B is there because. Well. Google uses Plan B at the moment. And it just stands to reason that the code Apple had was Plan B
WebRTC Safari and Interoperability

When it comes to WebRTC, the question is one of browser interoperability. There aren’t many browser vendors (I am counting four major ones).

The basics seem to work fine. If you run a simple peer-to-peer call between any of the 4 browsers, you’ll get a call going. Voice and video. The lowest common denominator for that seems to be Opus+H.264 due to Safari. Otherwise, Opus+VP8 would also be a possibility.

The challenge starts when what you’re trying to do is multiparty. While H.264 is supported by all browsers, the ability to use simulcast with H.264 isn’t. At the moment, Chrome doesn’t support simulcast with H.264. The current market perception is that multiparty video without simulcast is meh.

Doing group calling?

  • Go for audio only
  • Force everyone to use H.264 if you need Safari (either as a general rule or when the specific session has someone using Safari) – and understand that simulcast won’t be available to you. See the sketch below.
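One practical way to force H.264 today is to munge the SDP offer before it is sent, dropping the VP8/VP9 payloads. A crude sketch (TypeScript) of that idea; a production version would also need to strip rtx payloads whose a=fmtp apt= points at the removed codecs:

  function forceH264Only(sdp: string): string {
    const lines = sdp.split(/\r?\n/);
    // payload types mapped to VP8 or VP9
    const banned = new Set(
      lines
        .filter((l) => /^a=rtpmap:\d+ VP[89]\//.test(l))
        .map((l) => l.match(/^a=rtpmap:(\d+)/)![1])
    );
    return lines
      .filter((l) => ![...banned].some((pt) =>
        l.startsWith(`a=rtpmap:${pt} `) || l.startsWith(`a=fmtp:${pt} `) || l.startsWith(`a=rtcp-fb:${pt} `)))
      .map((l) => l.startsWith("m=video")
        ? l.split(" ").filter((token, i) => i < 3 || !banned.has(token)).join(" ")
        : l)
      .join("\r\n");
  }

  // Usage, before handing the offer to your signaling channel:
  // const offer = await pc.createOffer();
  // await pc.setLocalDescription({ type: "offer", sdp: forceH264Only(offer.sdp!) });

It is ugly, which is part of the point: codec policy via SDP munging is exactly the kind of thing the "who blinks first" paragraph below is about.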

Now it is going to become a matter of who blinks first: Google by adding H.264 simulcasting to Chrome; or Apple by adding VP8 to Safari.

The Next Video Codec War

Which leads us to the next frontier in the video codec wars.

If you came in late to the game, then know that we had over 3 years of continuous bickering regarding the mandatory video codec in WebRTC. Here’s the last I wrote about the codec wars, when the Alliance for Open Media formed some two years ago.

At the time, both VP8 and H.264 were defined as mandatory video codecs in WebRTC. The trajectory was also quite obvious:

After H.264 and VP8, there was going to be a shift towards VP9 and then a leap towards AV1 – the new video codec currently being defined by the Alliance for Open Media.

Who isn’t in the alliance? Apple.

And it seems that Apple decided to bank on HEVC (H.265) – the successor of H.264. This is true for both iOS and macOS. This is done through hardware acceleration for both the encoder and the decoder, with the purpose of reducing storage requirements for video files and reducing bandwidth requirements for streaming.

But it goes to show where Apple will be going with WebRTC next. They will be adding HEVC support to it, ignoring VP9 altogether.

There’s a hefty cost in taking that route:

  • H.264 is simple yet expensive – when you use it, you need to pay up royalties to a single patent pool – MPEG-LA
  • HEVC is complex AND expensive – when you use it, you need to pay up royalties for MULTIPLE patent pools – MPEG-LA, HEVC Advance, Velos Media. Wondering which one you’ll need to pay for and how much? Me too

Which is why I think Apple is taking this route in the first place.

Apple has its own patents in HEVC, and is part of the MPEG-LA patent pool.

And it knows royalties is going to be complex and expensive. Which makes this a barrier for other vendors. Especially those who aren’t as vertically integrated – who needs to pay royalties here? The chipset vendor? The OS vendor? The handset manufacturer?

By embedding HEVC in iOS 11 and macOS High Sierra, Apple is doing what it does best – differentiates itself from anyone else based on quality:

  • It has hardware acceleration for HEVC. Both encoding and decoding
  • It starts using it today on its devices, and “magically” media quality improves and/or storage/network requirements decrease

Google, and Android by extension, won’t be adding HEVC. Google has taken the VP9 route. But in VP9 most solutions are software based – especially for the encoder. Which means that using VP9 eats up CPU. Results look just as good as HEVC, but the cost is higher on CPU and memory needs. Which means an “inferior” solution.

Which is exactly what Apple wants and needs. It just doesn’t care if interoperability with other devices requires lowering quality, as the blame will fall on Android and Google and not on Apple.

So…

Don’t expect to see VP9 or AV1 anytime soon in Apple’s roadmap. Not for WebRTC and not for anything else.

Dan Rayburn covers the streaming side (non-WebRTC) of this HEVC decision quite nicely on StreamingMedia.

Oh, and while at it, Jeremy Noring wrote a well-thought-out rant about this lack of support for VP8 and VP9. His suggestion? Go vote for bug #173141 on WebKit. It probably won’t help, but it will make you feel a bit better.

The only way I see this being resolved? If Google retracts their support for H.264 and just blatantly removes it from Chrome until Apple adds VP8 to Safari. I’ll be happy to see this happening.

FaceTime, Multiparty and WebRTC

Apple has FaceTime.

And FaceTime probably doesn’t use WebRTC. I am not sure if it ever will.

When Google came out with WebRTC, they had the Hangouts (now Meet) team about a year behind in its adoption of WebRTC as their underlying technology – but the intent and later execution was there.

When Microsoft came out with WebRTC, Skype didn’t support WebRTC. But they did launch Skype for Linux which is built somewhat on top of WebRTC, and Skype for Web which is taking the same route. Call it ORTC instead of WebRTC – they are one and the same as they are set to merge anyway.

Apple? Will they place FaceTime on top of WebRTC? I see no incentive there whatsoever.

Can Cisco change this? Rowan Trollope broke the news titled “Cisco and Apple Announce New Features” that WebEx and Cisco Spark now work on Safari – no download needed. I’ll translate it for you by adding the missing keyword here – WebRTC. Cisco is using WebRTC to do that. And since their stack is built atop H.264, they got that working on Safari.

Cisco and Apple here is interesting. While Cisco mentions this in the headline as if these new features were done jointly, Apple isn’t really acknowledging it. There’s no quote from an Apple representative, and at the same time, Cisco isn’t mentioned in the WebKit announcement – TokBox and BlueJeans are.

One wonders.

Back to FaceTime. Which is a 1:1 video chat service.

And the fact that many look into group video chat and other multiparty video configurations.

Will Apple care enough to support it well in WebRTC? Will it move from Plan B to Unified Plan? Will it care about simulcast? Will it invest in SVC? Will it listen and work with Cisco on its multiparty needs for the benefit of us all?

Older Devices

Apple will not be upgrading iPhone 5 devices to iOS 11. That’s a 5-year-old device.

In many requirement documents I see a request to support iPhone 4.

Will this bump the general audience out there to focus on iPhone 6 and upwards, seeing what Apple is doing as well? Will this mean vendors will need to port WebRTC on their own to support older devices?

Time will tell, but I think that switching to iPhone 6 and above and focusing there makes sense.

Chrome/Firefox support on iOS

Here’s a question for you.

If you use Chrome or Firefox on iOS. And open a URL to a site using WebRTC. Will that work?

Here’s the catch.

The reason there was no real browser for iOS that supported WebRTC until today? Apple mandates WebKit as the rendering engine on any browser app that comes to its AppStore.

Now that WebKit is getting official WebRTC support – will Chrome and Firefox add WebRTC support to their iOS browsers?

And if they do – you’ll be getting the Apple restrictions. I can just see the WebRTC developer teams at Google and Mozilla cringing at this thought.
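Either way, don’t assume anything at runtime. A quick defensive check – nothing official, just a sketch – tells you whether the browser or webview you ended up in actually exposes the WebRTC APIs:

// Does this browser/webview expose WebRTC at all? On iOS every browser wraps WebKit,
// so the answer depends entirely on what Apple exposes there.
const hasPeerConnection =
  typeof (window as any).RTCPeerConnection === 'function' ||
  typeof (window as any).webkitRTCPeerConnection === 'function';

const hasGetUserMedia =
  !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);

if (!hasPeerConnection || !hasGetUserMedia) {
  // Fall back: show an "open this link in Safari" message, or route the user to a native app.
  console.warn('WebRTC is not available in this browser/webview');
}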

This can get really interesting if and when Apple decides to add HEVC support to WebRTC in its WebKit implementation of iOS. You’ll get Chrome on iOS with H.264 and HEVC and Chrome everywhere else with H.264, VP8 and VP9.

Fun times.

What Should Developers Do?

Here’s what you’ve been waiting for. The question I’ve been asked multiple times already:

Do I need to build an app? Should I wait?

The suggestion at the moment is to wait. The question is for what, and until when exactly.

If you are looking for a closed app and planning on developing native, then go with whatever worked for you until today. This news item just isn’t for you.

If you are looking for browser support on iOS, then go with Safari and plan on enabling the H.264 video codec in your service. Don’t wait up for VP8.

If you want cross-platform app development using HTML5, then wait. Webview WebRTC support in iOS is still an unknown. If it gets there in the coming months then waiting a few more minutes probably won’t make a real difference for you anyway.

My Updated Cheat Sheet

As it is, this change with Safari, iOS and macOS required some necessary updates in my resources.

First to update is the WebRTC Device Cheat sheet. You can find the updated one in the same download page.

One last thing –

Join me and Philipp Hancke for Virtual Coffee

I planned for a different Virtual Coffee session this month. One about developer tools. It got bumped into July.

The one in June will cover the iOS announcement and its ramifications. My guest this time will be Philipp Hancke.

The session takes place on Monday, June 19 at 15:30 EDT.

It is free to join, but will not be available later as a recording (unless you are a customer).

Register now

The post WebRTC iOS 11 Support. Are We There Yet? appeared first on BlogGeek.me.

Kamailio Database Structure Description

miconda - Wed, 06/07/2017 - 10:46
The database tables created by Kamailio, along with the description of their columns, are documented in the tutorial available at:

If you haven’t read it so far, take a bit of time to check the details for the tables of the modules you are using. It can help you better understand the purpose of each field that you have to provision for Kamailio.

The database structure is created using kamdbctl; if you are starting with it now, guidelines can be found in the install tutorial:

Contributions to add more documentation for the database structure are very appreciated. You can update the xml files in the source tree under src/lib/srdb1/schema/ and then make a pull request via Kamailio’s github.com project.

Thank you for flying Kamailio!

WebRTC Ships in MacOS and iOS 11

webrtc.is - Wed, 06/07/2017 - 03:39

Update:

Looks like VP8 is not there after all, bummer. More political jostling afoot, which sucks for the development community.

This is a big deal – having Apple / Safari onboard removes what was really the final major obstacle to the adoption of this awesome standard.

More info (thanks Marc Abrams !!)…
Based on the beta for macOS High Sierra – that was made available yesterday…
https://cl.ly/3q0p1O3…
– Test samples: webrtc.github.io/samples/ (It passed most of the tests)
– Video codec support is VP8 and H.264 (I have not seen a test that shows H.265 or HEVC but I know it’s there)
– Audio codec support is Opus, ISAC16, G.722 and PCMU
– Basic datachannel support is there but none of the tests seem to work

 

AWESOME!!! This took a bit longer than many of us were expecting, but hey, better late than never!


With WebRTC, Don’t Never Ever Mix Media and Signaling

bloggeek - Mon, 06/05/2017 - 12:00

And while at it – don’t mix signaling with NAT traversal.

Somehow, many people are asking this question in different phrasings, taking different angles and approaches to it. The thing is, if you want to build a robust, production-worthy service using WebRTC, you need to split these three entities.

If you haven’t already, then I suggest you check out my free 3-part video mini course on WebRTC servers.

Now, let’s dive into the details a bit –

Signaling Servers

Signaling servers is something we all have in our WebRTC products.

Why?

Because without them there’s no call. At all. Not even a Hello World example.

It is that simple.

You can co-locate the signaling server with your application server.

Here are a few things that you probably surmised about these servers already:

  1. You can scale a single server to handle 1000’s or even 100,000’s of connections and sessions in parallel
  2. These servers must maintain state for each user connected to them, making them hard to scale out
  3. Oftentimes, decisions that take place in these servers rely on external databases
  4. Latency of a couple hundred milliseconds is fine for these servers, but it is rather easy to be abusive and have that blow out of proportion if not designed and implemented properly (a few high profile services that I use daily come to mind here)

Did I mention signaling servers are written in higher level languages? Java, Node.js, Rails, Python, PHP (god forbid), …
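To make the state point concrete, here is a bare-bones relay sketch in Node.js (assuming the ws package). It does nothing WebRTC-specific – it just forwards offers, answers and ICE candidates between peers in the same room – but note how every connection holds state in memory, which is exactly what makes scaling these servers out harder than it looks:

import { WebSocketServer, WebSocket } from 'ws';

// room name -> set of connected sockets
const rooms = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  let room: string | undefined;

  socket.on('message', (data) => {
    const msg = JSON.parse(data.toString());
    if (msg.type === 'join') {
      room = msg.room;
      if (!rooms.has(room!)) rooms.set(room!, new Set());
      rooms.get(room!)!.add(socket);
      return;
    }
    // Relay offers, answers and ICE candidates to the other members of the room.
    if (room) {
      for (const peer of rooms.get(room)!) {
        if (peer !== socket && peer.readyState === WebSocket.OPEN) {
          peer.send(JSON.stringify(msg));
        }
      }
    }
  });

  socket.on('close', () => {
    if (room) rooms.get(room)?.delete(socket);
  });
});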

NAT Traversal Servers

STUN and TURN is what I mean here.

And yes, we usually cram STUN along with TURN. TURN is the resource hog out of the two, but STUN can be attached to the same server just because they both have the same general purpose in life (to get the media flowing properly).

This is why I will ignore STUN here and focus on TURN.

Sometimes, people forget TURN. They do so because WebRTC works great between two browser tabs or two people in the same office without the need for TURN, and putting in Google’s STUN server URL is just so simple to do… that this is how they “ship” the product. Until all hell breaks loose.

TURN ends up relaying media between session participants. It does that when the participants can’t reach each other directly for one reason or another. This kind of a relay mechanism dictates two things:

  1. TURN will eat up bandwidth. And a lot of it
  2. Your preference is to place your TURN server as close to the participant as possible. It is the only way to improve media quality and reduce latency, as from that TURN server, you have more control over the network (you can pay for better routes for example)

While you might not need many TURN servers, you probably want one at each availability zone of the cloud provider you are using.
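On the client side, pointing WebRTC at that TURN deployment is just configuration. A minimal sketch – the URLs and credentials are placeholders, and real deployments usually hand out short-lived credentials from the backend:

// Configure both STUN and TURN on the peer connection (placeholder servers).
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: [
        'turn:turn.example.com:3478?transport=udp',
        'turn:turn.example.com:443?transport=tcp',
      ],
      username: 'generated-username',
      credential: 'generated-password',
    },
  ],
});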

Oh – and most NAT traversal servers I know are written in C/C++.

Media Servers

Media Servers are optional. So much so that they aren’t really a part of the specification – they’re just something you’d add in order to support certain functions. Group calls and recording are good examples of features that almost always translate into needing media servers.

The problem is that media servers are resource hogs compared to any of the other servers you’ll be needing with WebRTC.

This means that they end up scaling quite differently – a lot faster to be exact. And when they fail or crash (which happens), you still want to be able to reconnect the session nicely in front of the customer.

But the main thing is that they have different specs than the other servers here.

Which is why in most cases, media servers are placed in “isolation”.

There’s a point in placing media servers co-located with TURN servers – they scale somewhat together when TURN is needed. But I am not in favor of this approach most times, because TURN is a lot more Internet facing than the media server. And while I haven’t seen any publicity around hackers attacking media servers, it is probably only a matter of time.

And guess what? Media Servers? They are usually implemented in C/C++. They say it’s for speed.

Why Split them up?

Because they are different.

They serve different purposes.

And most likely, they need to be located in different parts of your deployment.

So just don’t. Place them in separate machines. Or VMs. Or Docker. Or whatever. Just have them logically separated and be prepared to separate them physically when the need arises.

If you want to understand more about WebRTC servers, then try out my free WebRTC server side mini course. You won’t regret it.

The post With WebRTC, Don’t Never Ever Mix Media and Signaling appeared first on BlogGeek.me.

Installing vSphere Replication from Linux CLI

TXLAB - Fri, 06/02/2017 - 18:43

Tested with VCSA 6.1 and vSphere Replication 6.1.1. OVF Tool 4.2.0 is installed on a Debian Jessie machine.

ovftool --acceptAllEulas -ds=datastore1 \
  -n=<VMname> --ipAllocationPolicy=fixedPolicy \
  --prop:password='*******' \
  --prop:ntpserver=******* \
  --vService:installation=com.vmware.vim.vsm:extension_vservice \
  vSphere_Replication_OVF10.ovf vi://vcenter01.domain.com/DC1/host/Cluster1/

 


Filed under: Networking Tagged: vmware

VSCode Syntax Highlighting For Kamailio.cfg

miconda - Fri, 06/02/2017 - 18:31
Visual Studio Code (VSCode) is an open source editor released by Microsoft, with a special focus on development. It is an Electron application, therefore it can run on many operating systems, including Linux and MacOS (hint: if you want to give it a try, be sure you disable telemetry reports in case you want strict privacy).

If you are using it for editing kamailio.cfg, you may consider installing the extension for syntax highlighting. It can be installed from the VSCode marketplace:

or from the github repository:

Editing kamailio.cfg should look like the next image (on a dark blue theme).

If you are using a different editor, note that there are kamailio.cfg syntax highlighting extensions for VIM, MCEdit and Atom. Contributions for other editors or enhancements to existing ones are very welcome!

Thank you for flying Kamailio!

Acrylic enclosure for FriendlyElec NanoPi NEO2

TXLAB - Fri, 06/02/2017 - 10:38

The NanoPi NEO2 board by FriendlyElec has several options for an enclosure in their webshop. The 3D-printed plastic enclosure is of rather poor quality, and it doesn’t hold the heatsink properly against the CPU.

The acrylic case does not include washers, which makes the whole construct too fragile, as the screws can easily damage the plastic.  Also the M2.5 screws for fixing the heatsink are too short.

So, I added the following components to the design:

  • M3*16mm  screws (4 pieces)
  • M3 washers (24 pieces)

Also the following parts came with the acrylic case:

  • M3*6mm screws (4 pieces)
  • 6.3mm plastic spacers (4 pieces)
  • 25mm female-female M3 spacers (4 pieces)
  • 6mm male-female M3 spacers (4 pieces)

As a result, we get a sturdy case that is able to sustain some rough handling, like carrying it in a toolbox among other hardware.

(scratches on my phone camera made the pictures a bit too soft)


Filed under: Networking Tagged: arm, friendlyelec, linux

Is Twilio Redefining CPaaS?

bloggeek - Mon, 05/29/2017 - 12:00

Twilio’s Jeff Lawson had a really interesting keynote at their Signal event. I think Twilio is trying to redefine what CPaaS is. If this works for them, it will make it doubly hard for their competitors.

This is going to be long, as the keynote was long and packed full of information and details that pave the road to what CPaaS is going to be in 2020.

I suggest you watch this keynote yourself –

What I loved the most? The beginning, where Jeff refers to code as making art. I have to agree. In my developer days, that was the feeling. Coding was like building with lego bricks without the instructions or sitting down to paint on T-shirts (yes – I did that in my youth). When a CEO of a company talks about coding as art and you see he truly believes it – you know that what that company is doing must be… art.

Before we Begin

One term you didn’t hear at the keynote:

CPaaS

One term that was there every other slide:

This was about developers – who the buyer is – and how software APIs are everywhere.

It was also about how CPaaS is changing and Twilio is now much bigger than that – in the traditional sense of what CPaaS means.

It wasn’t said out loud, but the low level APIs that everyone is haggling about – SMS and voice – are nice, but not where the future lies.

Twilio by the Numbers

The numbers game was reserved for the first 13 minutes of the keynote, where Jeff asserted Twilio’s distinct leadership in this market:

  • 28 billion interactions (a year)
  • 70 countries (numbers in) today; plan to grow to numbers in 100 countries “by this summer”
  • Over 900 employees; Over 400 in R&D
  • 30,000 annual deployments (the number of times a year Twilio is introducing new releases… in a continuous delivery mechanism to an operational cloud service) – this enables shipping faster with fewer mistakes. The result? 3.5 days between new product or feature launches
  • 1.6 million developer accounts on Twilio; 600,000 new accounts in the past year; doubling YoY
  • 99.999% API availability – and 99.999% success rate
  • 0 APIs killed

More about the two last bullets later.

Here’s what Twilio deployed in the past year:

To me, this is becoming hard to follow and grasp, especially when I need to look at other vendors as well.

If you look at it, you’ll see that Twilio has been working hard in multiple vectors. The main ones are Enterprise, IP communications and “legacy” telephony.

The main messages?

  1. Twilio is the largest player in the communication API space
  2. Twilio is stable and solid for the enterprise
  3. Twilio is globally available and rapidly expanding to new countries
  4. Twilio is fast to introduce new features

All this boils down to stating that a competitive advantage can be best achieved on top of Twilio.

Twilio’s New Layering Model

If you’ve been watching this space, you might have noticed that I tend to use this model to explain CPaaS feature sets:

And this is how Jeff explained it on stage in Twilio’s Signal event 2016:

Building blocks. Unrelated. Could have been placed horizontally one next to the other to get the same concept. But piling them on top of each other is great – it shows there’s lots and lots of services and features to use.

2017 brings with it a change in the paradigm and a new layers model that Jeff explained, and was later expanded with more details:

The funny thing is that this reminded me of how we explained the portfolio and API layers in our VoIP products at RADVISION more than 10 years ago. It is great to see how this translates well when shifting from on premise APIs to cloud APIs. But I digress.

Back to the layering model.

The Super Network wasn’t given much thought this time around. There were announcements and improvements in this area, but these are a given by now. For those who wish to outmaneuver Twilio by offering a better network – that’s going to be tough without the layers above it.

Then there’s the Programmable Communications Cloud, which is where most of the CPaaS vendors are. This is what I drew as my own perspective of CPaaS services. The names have changed a bit for Twilio’s services – we’ve got Programmable Chat now instead of IP Messaging. SMS has 3 separate building blocks here instead of one, and the baseline one is called Programmable SMS – keeping the lower level Communications APIs with a nice naming convention of Programmable X.

The interesting part of this story comes in the Engagement Cloud. Jeff made a point of explaining the three aspects of it: Systems, Departments and Individuals. And the thing about the Engagement Cloud is that services there are actually best practices – they aren’t “functional” in their nature. So Twilio are referring to the APIs in this layer as Declarative APIs.

The Engagement Cloud

The main difference between what’s in the Engagement Cloud and the Programmable Communications Cloud? In the Programmable Communications Cloud you know as a developer what will happen – you ask to send an SMS and the SMS is sent. With the Engagement Cloud, you ask for a message to reach someone – and you don’t really care how it is done – just that it will be done in any channel that fits best.
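To make that distinction concrete, this is roughly what the functional, Programmable Communications Cloud level looks like with Twilio’s Node.js helper library (credentials and phone numbers below are placeholders):

import twilio from 'twilio';

const accountSid = process.env.TWILIO_ACCOUNT_SID!;
const authToken = process.env.TWILIO_AUTH_TOKEN!;
const client = twilio(accountSid, authToken);

// Functional: you state exactly what should happen – send this body, from this
// number, to that number – and that is what you get.
client.messages
  .create({
    body: 'Your booking is confirmed',
    from: '+15005550006', // placeholder sender
    to: '+15005550001',   // placeholder recipient
  })
  .then((message) => console.log(message.sid));

An Engagement Cloud call, by contrast, describes an outcome – reach this person – and leaves the channel selection to the service.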

No Channel To Rule Them All

What is that “any channel that fits best”?

That’s based on what Twilio decides in the modules they offer in the Engagement Cloud, and it is where the words “best practices” were used during the event.

Best practices are powerful. As a supplier, they show you know the business of your customer to a point where you can assist him more than just by giving him the thing that he thinks he needs. It often places you as a trusted advisor, or the one to go to in order to decide what it is you are going to do. After all – you own the best practices, so why not follow them?

It is also where the most value is to be made moving forward.

SMS is probably still king when it comes to revenue in CPaaS. Not only for Twilio, but for all players in the market. And while this is nice and true, it is also a real threat to them all:

Yes. SMS is growing in use.

Yes. The stupid term A2P (Application 2 Person) is growing rapidly and it is done using SMS.

Yes. People prefer that over installing apps, receiving emails and getting push notifications.

Yes. People do read SMS messages. But I am not sure if they trust them.

Here’s a quick story for you.

Airbnb.

I use them once in awhile. I was just planning a trip with the family for July. Found the dates. Booked the flights. Found an Airbnb to stay at. Reserved a place – and was asked if I am cool with push notifications. I clicked yes. And here’s what I got the next moment on my phone:

Businesses might be advised to use SMS to reach their customers, but the price of SMS urges businesses to seek other, cheaper channels of communication at the same time.

There is no money to be had in Communications APIs in the long term.

There is already a price war at this level. Vendors trying to be “cheaper than X”. Developers complaining about the high prices of CPaaS, not understanding the real costs of developing and maintaining such systems.

What’s in Twilio’s Engagement Cloud

Which is where the Engagement Cloud comes – or more accurately – the best practices and smarts on top of just calling communications APIs.

Twilio are now offering 4 APIs in that domain:

  1. Authy – handling authentication type scenarios, including but not limited to the classic 2FA
  2. Notify – a notification service from applications and services to users. It started as SMS, continued with the addition of push notifications, but now supports multiple channels
  3. TaskRouter – a way to connect incoming tasks with workers who can handle them. Can be used to build queuing systems in contact centers
  4. Proxy – a mechanism to connect people in two separate groups – riders and drivers for example. Suitable for the sharing economy or when the “agents” aren’t “employees” or are loosely “controlled” by the organization
Omnichannel

The interesting bit here is that these all started as functional building blocks. But now the stories behind them are all about multi-channel.

SMS is great, but it isn’t the answer.

IP messaging is great, but it isn’t the answer.

Facebook messenger with its billion+ users is great, but it isn’t the answer.

XKCD says it best:

In a world like that, programmable communications need to be able to keep track of the best means to reach a person. And so Twilio’s Engagement Cloud is about becoming Omnichannel (=everywhere) with the smarts needed to pick and choose the best channel per interaction.

Are we there yet with the current Twilio offering? I don’t know. But the positioning, intent, roadmap and vision are crystal clear. And with Twilio’s current speed of execution, it is going to happen sooner rather than later.

Vendor Lock-in

The great thing about this layer of Engagement Cloud for Twilio is that it is going to be hard to replace once you start using it.

How hard is it to replace an API that sends out an SMS to a phone number with another API that does the same? Not much.

But how hard is it to replace best practices wrapped inside an API that decides what to do on its own based on context? Harder. And it gets even more so as time goes by and that module gets smarter.
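To illustrate (all names here are hypothetical – none of these classes exist as real SDKs), a functional SMS call hides comfortably behind a one-method interface, which is exactly why swapping vendors at that layer is cheap – and why best-practice decision logic baked into an API does not swap out the same way:

// Illustrative only.
interface SmsProvider {
  send(to: string, body: string): Promise<void>;
}

class TwilioSms implements SmsProvider {
  async send(to: string, body: string): Promise<void> {
    /* call Twilio's Programmable SMS here */
  }
}

class CheaperVendorSms implements SmsProvider {
  async send(to: string, body: string): Promise<void> {
    /* call the other vendor's equivalent API here */
  }
}

// The application depends only on the interface, so the vendor behind it can be
// replaced without touching the business logic.
async function notifyCustomer(provider: SmsProvider, phone: string) {
  await provider.send(phone, 'Your order has shipped');
}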

Twilio gets a better handle on its customers with the Engagement Cloud. It makes it a lot harder for developers to go for a multi-vendor strategy where they use SMS from the CPaaS vendor whose price is the lowest.

Developer’s Benefits

Why would developers use these Engagement Cloud modules from Twilio?

Because they save them a ton of time and even a lot more in headaches.

Today, there are 3 huge benefits for developers:

  1. Logic – the logic in these modules is one you get for “free” here. No need to write it on your own and make the decisions on your own
  2. Best practices – I know that your contact center is unique, but it probably adheres to similar queuing mechanisms as all the rest. You end up using similar/same best practices as others in your domain, and having these already pre-built is great. Maybe you can even improve your own business simply by using best practices and not do it alone
  3. Multiple vendors – adding channels means figuring out the APIs and craziness of multiple providers. No fun. You get a single API to rule them all. What’s not to like?

These are usually the areas that developers don’t like to deal with. That third one especially is a real pain – after you’ve done it for 2 vendors/channels – connected it to SMS and maybe Facebook Messenger – it feels boring to add the next channel. But now you don’t have to anymore. And don’t get me started on how the APIs there get deprecated and changed over time.

Machine Learning and its CPaaS Role

Twilio talked about Machine Learning in two new APIs that it is introducing: Speech Recognition and Understand

The Speech Recognition one is a bit less interesting. It is done in partnership with Google, using Google’s engine for it. The smarts on Twilio’s side here are the integration and how they are stitching these speech-to-text capabilities throughout their line of products.

Here, what Twilio is doing is acting in the most Twilio-like way – instead of developing its own speech recognition tech, or using a 3rd party that gets installed on premise, it decided to partner with Google and use their cloud-based speech recognition technology. And then make it easier for developers to consume as part of the bigger Twilio offering.

The real story lies elsewhere though – in Twilio Understand.

While Speech Recognition is a functional piece where you feed the machine with voice and get text, Understand is about modeling your use case and then having the machine parse text based on that model.

It is also where Twilio seems to have gone it alone (or embedded a third party internally), building its first real customer-facing Machine Learning based product.

In the past few years we’ve seen huge growth in this space. It started with Big Data, turned Analytics, turned Real Time Analytics, turned Decision Engines, turned Machine Learning.

Companies use these types of capabilities in many ways. Mostly internally, which Twilio has probably been doing already. But embedding machine learning and big data, making products smarter, is where we’re headed. And for me, this is the first instance I’ve seen of a CPaaS vendor taking this route.

It is still a small step, as Understand is another piece of API – a module – that you can use. And just like many of Twilio’s other APIs, you can use it as a building block integrated with its other building blocks. It is a move in the right direction to evolving into something much bigger.

LinkedIn shows that Twilio has several data scientists (the manpower you need for such tasks), though none of them was “kind enough” to offer details of their role or doings at Twilio.

Moving forward, I’d expect Twilio to hire several more people in that domain, beefing up its chops and starting to offer these capabilities elsewhere.

The only competitor at the moment who is seeing that is Cisco Spark – with their recent acquisition of MindMeld.

The great thing about machine learning? People feel and assume that it is super hard. Which means it is worth paying for.

The Enterprise

Here’s where enterprises find a home at Twilio’s Signal 2017 keynote. Best to just show it in slides:

Twilio’s API call success rate. This goes on top of its 99.999% API availability and this is where Jeff wants you to focus – not on getting an API returning an error (which would still fall under availability) but rather on how many successful results you get from the APIs.

Since Twilio launched, none of its APIs was ever deprecated or killed (haven’t checked it myself but this is what Jeff wants you to remember).

Twilio has been working hard on reaching out to enterprises. It introduced an Enterprise plan last year. Implemented ISO 27001. Added Public Key Validation. Introduced support for Enterprise SSO.

All these are great, but what I think resonates here the most are the above two items.

99.999% Success Rate

Enterprises LOVE this.

SLAs. Guarantees. All the rage.

Twilio is operating at 99.999% uptime and is happy to offer a 99.99% guarantee in its enterprise SLA:

For an enterprise to go for Twilio requires two leaps of faith:

  1. Leaping from on premise to the cloud. Yes. I am told everyone’s migrating to the cloud, but this is still painful
  2. Picking Twilio as its vendor of choice and not its natural enterprise vendors (known as ISVs)

When you pick Twilio, who’s giving you any guarantees?

Well… Twilio does. At 99.99% while maintaining 99.999% across all of its services to all of its customers.

That’s a powerful message. Especially if you couple it with 30,000 deployments a year.

0 APIs Killed

This one is REALLY interesting.

In the world of APIs where everything is in the cloud with a single copy running (it isn’t, but bear with me a second), having someone say that they offer backward compatibility to all of their APIs is huge.

The number of changes you usually need to follow with APIs on the internet is huge. If you have a product using third party APIs, then every year or two, you need to make some changes to have it continue to work properly – because the APIs you use change.

0 APIs killed means that if an enterprise writes its code today for a project it has, it won’t need to worry about changes to it due to Twilio. Now, in many cases, enterprises develop a project, launch it and then are happy to continue with it as is without further investment (or budget). Which means that this kind of a soft guarantee is important.

How does Twilio do it?

They launch products in beta and run the beta for long periods of time. During that time, they get developers to use and tinker with the APIs, collect feedback and when they feel ready, they officially launch it – at which point the API is deemed stable.

It works well because Twilio has lots and lots of customers, some willing to jump on new offerings and take the risk of having things break a bit during those beta periods.

The end result? 0 APIs killed.

Will it Blend?

I believe it will.

Twilio has introduced a new paradigm for the way it is layering its product offerings.

In the process, it repositioned all of its higher level APIs as the Engagement Cloud. It stitched these APIs to use its lower Programmable Communications APIs, adding business logic and best practices. And it is now looking into machine learning as well.

It is a powerful package with nothing comparable on the market.

Twilio is the best-of-suite approach to CPaaS – offering the largest breadth of support across this space. And it is making sure to offer powerful building blocks to make developers think twice before going for an alternative.

Twilio isn’t for everyone. And other CPaaS vendors do have their place. But increasingly, these places become niches.

Is there more?

Yes.

This analysis is long, but by no means full.

There were a lot of other aspects of the announcements and Twilio’s moves that require more thought and details. The pricing model on group Programmable Video is one of them. Third Party Add Ons in certain domains (especially for analytics) is another. Or Twilio heading into the UI layer. And then there’s serverless via Twilio Functions. This isn’t even an exhaustive list…

I won’t be going into these here, but these are things that I am actively looking at.

Contact me if you are interested in understanding more about this space.

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

The post Is Twilio Redefining CPaaS? appeared first on BlogGeek.me.

Kamailio World 2017: Videos Online

miconda - Mon, 05/22/2017 - 23:52
Video recordings of all the sessions at Kamailio World Conference 2017 were made available on the KamailioWorld YouTube channel the day after the event. You can see them at:

The channel has now more than 120 videos from past Kamailio World editions, a tremendous knowledge base showing various use cases for Kamailio and other VoIP platforms.

The slides will be collected from speakers and made available in the near future.

Enjoy!

Thank you for flying Kamailio!

WebRTC for Business People – The 2017 Edition

bloggeek - Mon, 05/22/2017 - 12:00

WebRTC for Business People – The 2017 Edition

The revamped WebRTC for Business People report is now published.

I’ve been reviewing the stats of the content here lately, and noticed that people still find their way and download my WebRTC for Business People report.

My only problem with it is that it was already old – and showing it: There’s no serious mention of the advances made by Microsoft Edge towards WebRTC support, nothing about the difficulty of finding experienced WebRTC developers.

But most of all – the use cases it mentioned. Some of them were companies that got acquired. Others were shut down. Some stagnated in place, or are now on life support.

That’s not a good way to introduce someone to the topic of WebRTC.

So a rewrite was in order here, which brought me to work with a few sponsors:

They were kind enough to make the investment needed for me to put the time and effort into it.

WebRTC for Business People – what’s in the 2017 edition?

Well…

First off – the report is still free. You can download it, print it, read it online – practically do whatever you want with it.

I’ve refreshed the visuals and updated the analysis part with data from 2015 until 2017 (the time that has passed since the last update to the report). Did you know that WebRTC is still growing linearly in all relevant parameters that you can check?

No hype here. Just solid, steady growth. The minor change in github projects trajectory? That started when Google moved their WebRTC samples and demos from Google code to github.

I wonder if this will change when Apple adds WebRTC to Safari and iOS.

I removed all the nonsense in the report about SIP and H.323. These protocols still exist, but more often than not, people don’t look at them and compare them with WebRTC – because WebRTC has gone way beyond these signaling protocols.

Oh – yes – and I completely rewrote all the vendor use cases and the segments I look at in this report. Here’s the new set of vendors in the report:

If you are interested in WebRTC, the ecosystem around it and understand how companies are using it today – in the real life – making real commercial use of it – then check out this report.

Download the report

The post WebRTC for Business People – The 2017 Edition appeared first on BlogGeek.me.
