News from Industry

Twilio’s Voice Insights for WebRTC – a line on the sand

bloggeek - Fri, 09/23/2016 - 12:00

Analytics != Operation

Twilio just announced a new addition to its growing portfolio of services. This time – Voice Insights.

What to expect in the coming days

This week Twilio announced several interesting initiatives:

  1. Country specific guidelines on using SMS
  2. A new Voice Insights service
  3. The Kurento acquisition

Add to that their recent announcement of a new Enterprise offering and the way they keep adding number choices in more countries, and what we get is an awful lot of work to cover for a single vendor in this industry.

Twilio is enhancing its services in breadth and depth at the same time, doing so while trying to reach out to new customer types. I will be covering all of these issues soon enough. Some of it here, some on other blogs where I write. Customers with an active subscription for my WebRTC PaaS report will receive a longform written analysis separately covering all these aspects later this month.

What I want to cover in this article

I already wrote about Twilio’s Kurento acquisition. This time, I want to focus on Voice Insights.

All the media outlets I’ve checked for coverage of Voice Insights were regurgitating the Twilio announcement with little to add. At most, they had callstats.io to refer to. I think a lot is missing from the current conversation. So let’s dig in.

What is Voice Insights?

Voice Insights is a set of tools that can be used to understand what’s going on under the hood. When you use a communications API platform – or build your own for that matter – the first thing you notice is the lack of visibility into what’s really happening.

Most dashboards focus on giving you the basics – what sessions you created, how long they were, how much money you owe. Others add some indication of quality metrics.

The tools under the Voice Insights title at Twilio include:

  1. Collection of all network stats, so you can check them out in the Twilio console
  2. Real time triggers on the client, telling you when network issues arise or the volume is too low/high
  3. Pre-call network test on the client
  4. User feedback collection (the Skype “how was your call quality” nag)

Some of them were already available in some form or another in the Twilio offering – such as user feedback collection.

The features here can be split into two types:

  1. Client side – the real time triggers, pre-call network test
  2. Server side – collection of network stats

Twilio gave a good introduction to all of these capabilities, so I won’t be repeating them here.

What is interesting is how they decided to implement the real time triggers – are they triggered from the backend, or by running rules directly on the device? But I digress.
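
To make this a bit more concrete, here is a minimal sketch of what a client-side trigger could look like if you built one yourself on the standard WebRTC getStats() API. The 5% packet loss threshold, the 2 second polling interval and the warning name are placeholders of my own – this is not Twilio’s actual SDK surface:

// A minimal sketch of a client-side network trigger on top of the
// standard WebRTC getStats() API (promise-based, as in spec-compliant
// browsers). Threshold, interval and warning name are placeholders.
function pollStats(pc, onWarning) {
  pc.getStats().then(function(stats) {
    stats.forEach(function(report) {
      if (report.type === 'inbound-rtp' &&
          (report.mediaType || report.kind) === 'audio') {
        var lost = report.packetsLost || 0;
        var received = report.packetsReceived || 1;
        var lossRatio = lost / (lost + received);
        if (lossRatio > 0.05) {
          onWarning({ name: 'high-packet-loss', value: lossRatio });
        }
      }
    });
  });
}

// pc is an existing RTCPeerConnection
setInterval(function() {
  pollStats(pc, function(w) { console.warn('insights trigger:', w); });
}, 2000);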

How is it priced?

Interestingly, Voice Insights is priced separately from the calling service itself.

If you want insights into the voice minutes you use on Twilio, there’s an extra charge associated with it.

Prices start at $0.004 per minute, going down to ~$0.002 per minute for those who can commit to 1 million voice minutes a month, and bottoming out at just above $0.001 per minute.

For comparison, SIP-to-SIP voice calling on Twilio starts at $0.005 per minute, making Voice Insights a rather expensive service.

Comparisons with callstats.io are necessary at this point. If you take a low tier of 10,000 voice minutes a month, callstats.io is priced at 19 EUR (based on their calculator – it can get higher or lower based on “data points”) whereas Twilio Voice Insights stands at 40 USD (that’s simply 10,000 minutes × $0.004 per minute). How the two vendors’ rates compare at higher volumes is an exercise I’ll leave for others.

Is this high? Low? Market price? I have no clue.

TokBox, on the other hand, has their own tool called Inspector and another feature called Pre-Call Test. And it is given for free as part of the service.

Where is it headed?

Voice Insights can take several directions with Twilio:

  • Extend it to support video sessions as well
  • Enhance and deepen the analytics capabilities, probably once enough feedback is received from customers on this feature
  • Switch from a paid to free offering, again, based on customer feedback
  • Unbundle it from Twilio and offer it as a stand-alone service to others – maybe to all the vendors that are using Kurento on premise?

With analytics, the sky usually isn’t the limit. It is just the beginning of the dreams and stories you can build upon a large data set. The problem is how can you take these dreams and make them come true.

Which brings us to the next issue.

The future of Analytics in Comm APIs

There’s a line drawn in the sand here. Between communications and analytics.

Analytics has a perceived value of its own – on top of enabling the interaction itself.

Will this hold water? Will other communication API vendors add such capabilities? Will they be charging extra for them?

I’ve had my share of stories around CEM (Customer Experience Management). Network equipment vendors and those handling video streaming are marketing it to their customers. Analytics on network data. This isn’t much different.

Time will tell if this is something that will become common place and desired, or just a failed attempt. I still don’t have an opinion where this will go.

Up next

Next in my quick series of articles on Twilio comes coverage of their new Enterprise plan, and how Twilio is trying to grow in breadth and depth at the same time.

 

Test and Monitor your WebRTC Service like a pro - check out how testRTC can improve your service’s stability and performance.

The post Twilio’s Voice Insights for WebRTC – a line on the sand appeared first on BlogGeek.me.

Discount on the Advanced WebRTC Architecture Course ends tomorrow

bloggeek - Thu, 09/22/2016 - 12:00

If you haven’t yet enrolled in my Advanced WebRTC Architecture course – then why wait?

I just noticed that I haven’t written any specific post here about the upcoming course, so consider this that announcement. In my defense – I sent one out a few days ago in my monthly newsletter.

Why a course on WebRTC architecture?

I’ve been working with entrepreneurs, developers, product managers and people in general on their WebRTC products for quite some time. But somehow I failed to notice that in many such discussions there were large gaps between what people thought WebRTC was and what WebRTC really is.

There’s lots of beginner’s information out there for WebRTC, but somehow it always focuses on how to use the WebRTC APIs in the browser, or on the meaning of a specific feature in the standard. There is also a large set of walk-throughs of different frameworks you can use, but no one seems to offer a path for developers to decide on their architecture – to answer the question of “what should I be choosing for my service?”

So I set out to put together a course that answers that specific question. It gives the basics of what WebRTC is, and then dives into what it means to put an architecture in place:

  • How to analyze the real requirements of your scenarios
  • The various components you will need
  • Common design patterns that crop up in popular service archetypes

What’s in the course?

The easiest way is to go through the course syllabus. It is available online here and also in PDF form.

When will the course take place?

The course is all conducted online, but not live.

It starts on October 24, and I am now in the final stretch of recording the materials, after creating them over the past two months.

Here’s how the course is designed:

  • It is built out of 7 modules
  • There are around 40 lessons; each should take you 30 minutes on average
  • If you take a lesson every working day, you should complete the course in about 2 months
  • You can go at a faster pace if you wish
  • Course materials are available online to students for a period of 2 months; this can be extended to 4 months for those who add Office Hours on top of the course

Any discount for friends and family?

Enrolling in the course is $247 USD. Adding Office Hours on top of it is an additional $150 USD.

Until tomorrow, there’s a $50 USD discount – so enroll now if you’re already certain you want to.

There are discounts for those who want to enroll as a larger group – contact me for that.

Have more questions?

Check the FAQ. I’ll be updating it as more questions come in.

If you can’t find what you need there – just contact me.

The post Discount on the Advanced WebRTC Architecture Course ends tomorrow appeared first on BlogGeek.me.

Twilio Acquires Kurento. Who will Acquire Janus?

bloggeek - Wed, 09/21/2016 - 12:00

Open source media frameworks in WebRTC are all the rage these days.

Jitsi got acquired by Atlassian early last year and now Twilio grabs Kurento.

What to expect in the coming days

Yesterday Twilio announced several interesting initiatives:

  1. Country specific guidelines on using SMS
  2. A new Voice Insights service
  3. The Kurento acquisition

Add to that their recent announcement of a new Enterprise offering and the way they keep adding number choices in more countries, and what we get is an awful lot of work to cover for a single vendor in this industry.

Twilio is enhancing its services in breadth and depth at the same time, doing so while trying to reach out to new customer types. I will be covering all of these issues soon enough. Some of it here, some on other blogs where I write. Customers with an active subscription for my WebRTC PaaS report will receive a longform written analysis separately covering all these aspects later this month.

What I want to cover in this article

What I want to cover in this part of my analysis of the recent Twilio announcements is their acquisition of Kurento.

The things I’ll be touching on are why Kurento – how it will further Twilio’s goals – and also what will happen to the many users of Kurento.

I’ll also touch on the open source media server space, and why the next runner-up in the acquisition roulette of our industry should be Janus.

But first things first.

What is Kurento?

Kurento is an open source WebRTC server-side media framework implemented on top of GStreamer. While it may not be limited to WebRTC, my guess is that most if not all of its users make use of WebRTC with it.

What does that mean exactly?

  • Open source – anyone can download and use Kurento. And many do
    • There’s a vibrant community around it: developers who use it independently, outsourcing development shops that use it in projects for their customers, and the Kurento team itself, offering free and paid support for it
    • It is distributed under the Apache license, which is quite lenient and enterprise-friendly
  • Server-side media framework – when you want to process media in WebRTC for recording, multiparty or other purposes, a server-side media framework is necessary (a minimal sketch of what that looks like follows right after this list)
  • GStreamer – another popular open source project for media processing. Just another tidbit you may want to remember
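
To give you a feel for what “processing media” with Kurento means in practice, here is a minimal sketch in Node.js using the kurento-client library: a WebRtcEndpoint looped back to the caller, with a RecorderEndpoint tapping the same stream. The media server URI and the recording path are placeholders, and ICE candidate exchange with the browser as well as error handling are omitted:

// A minimal Kurento pipeline sketch (Node.js, kurento-client).
// 'ws://localhost:8888/kurento' and the recording URI are placeholders.
var kurento = require('kurento-client');

function startRecordedEcho(sdpOffer) {
  var pipeline, webRtc, recorder;
  return kurento('ws://localhost:8888/kurento')   // connect to the media server
    .then(function(client) { return client.create('MediaPipeline'); })
    .then(function(p) {
      pipeline = p;
      return pipeline.create('WebRtcEndpoint');
    })
    .then(function(ep) {
      webRtc = ep;
      return pipeline.create('RecorderEndpoint',
          { uri: 'file:///tmp/call-recording.webm' });
    })
    .then(function(rec) {
      recorder = rec;
      return webRtc.connect(webRtc);              // loop the media back (echo)
    })
    .then(function() { return webRtc.connect(recorder); })  // tap it for recording
    .then(function() { return webRtc.processOffer(sdpOffer); })
    .then(function(sdpAnswer) {
      recorder.record();
      return sdpAnswer;   // goes back to the browser over your signaling
    });
}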

I am seeing Kurento everywhere I go. Every couple of meetings I have with companies, they either indicate that they make use of Kurento, or when you look at their service it is apparent that it uses Kurento. Somehow, it has become one of those universal packages that developers turn to when they need stuff done.

The Kurento team is running multiple activities/businesses (I might be making a few mistakes here – it is always hard to follow such internal structures):

  1. Kurento, the open source project itself
    • Assisted by research done at the Universidad Rey Juan Carlos, located in Madrid, Spain
    • Funding raised through the European Commission
    • Money received by selling support and customization services
  2. NUBOMEDIA
    • A new initiative focused on scaling and an open source PaaS offering on top of Kurento
    • You can read more about it in a guest post by Luis Lopez (the face of Kurento)
  3. elasticRTC
    • Another new initiative, but a commercial one
    • Focused on getting scalable Kurento running on AWS
  4. Naevatec / Tikal Technologies SL
    • The business side of the Kurento project, where customization and support is done for a price

Kurento has a busy team…

What did Twilio acquire exactly?

This is where things get complicated. From my understanding, reading the materials online and through a briefing held with Twilio, this is what you can expect:

  • Kurento as an open source project is left open source, untouched and un-acquired. That said, the bulk of the team maintaining Kurento (the Naevatec developers) will be moving to be Twilio employees
  • Naevatec was not acquired and will live on. A new team will need to be hired and trained. During the transition period, the Twilio team will work on the Kurento project, fulfilling any existing obligations. After that, Naevatec will supposedly have the internal manpower to take charge of that part of the business
  • elasticRTC was acquired. They will not be onboarding any new customers, but will continue supporting existing customers
    • This sounds like the story of AddLive and Snapchat (they waited for support contracts to expire and worked diligently but legally to get customers off the AddLive service)
    • That said, it seems like Twilio wants to leverage these early adopters of elasticRTC to design and build their own Twilio API offering around that domain (more on that later)
    • As I don’t believe there are many customers to elasticRTC, I don’t see this as a real blow to anyone
  • NUBOMEDIA was not mentioned in any of the announcements of the acquisition
    • I forgot to prod about it in my briefing…
    • Twilio are probably unhappy about this one, but there was nothing they could do about it
    • NUBOMEDIA is funded by multiple European projects, so was either impossible to acquire or too expensive for what Twilio had an appetite for
    • It might also have had more partners in it than just the Kurento team(s)
    • How the acquisition will affect the NUBOMEDIA project, and how much zeal Twilio’s new employees from Naevatec will have for it, is an open question

To sum things up:

Twilio acqui-hired the team behind the Kurento project and took their elasticRTC offering out of the market before it became too popular.

How will Twilio use Kurento?

I’d like to split this one into short term and long term.

Short term – multiparty calling

Twilio needed an SFU. Desperately.

In April 2015 the Twilio Video initiative was announced. Almost 18 months later and that service is still in beta. It is also still 1:1 calling or mesh for multiparty.

Something had to be done. While I am sure Twilio has been working for quite some time on a solid multiparty option, they probably had a few roadblocks, which got them to start using Kurento – or decide they need to buy that technology instead of build it internally.

Which got them to the point of the acquisition. Twilio will probably embed Kurento into their Twilio Video offer, adding three new capabilities to their platform with it:

  1. Multiparty calling, in an SFU model, and maybe an MCU one
  2. Video recording capability – a popular Kurento use case
  3. PSTN connectivity for video calling – Kurento has a SIP-Gateway component that can be used for that purpose

Long term – generic media server

In the long term, Twilio can employ the full power of Kurento and offer it in the cloud with a flexible API that pipelines media in real time.

This can be used in our new brave world of AI, Bots, IOT and AR – all them acronyms people love talking about.

It will be interesting to see how Twilio ends up implementing it and what kind of an API and an offering they will put in place, as there are many challenges here:

  • How do you do something so generic but still maintain low resource consumption?
  • How do you price it in an attractive way?
  • How do you decide which use cases to cover and which to ignore?
  • How do you design it for scale, especially if you are as big as Twilio?
  • How do you design simple yet flexible and powerful API for something so generic in nature?

This is one of the most interesting projects in our industry at the moment, and if Twilio is working towards that goal, then I envy their product managers and developers.

What will be left of the Kurento project?

That’s the big unknown. Luis Lopez, project lead of Kurento details the official stance of Kurento and Twilio on the Kurento blog. It is an expected positive looking write up, but it leaves the hard questions unanswered.

Maintaining the Kurento project

Twilio is known for their openness and the way they work with developers. That said, the Twilio github has little in the way of projects that aren’t samples written on top of the Twilio platform or open sourced projects that touch the core of Twilio. While that is understandable and expected, the question is how Twilio will treat the Kurento open source project.

Now that most of the workforce leading Kurento is becoming Twilio employees, will they work on the open source Kurento build or on internal needs and builds of Twilio? Here are a few hard questions that have no real answers to them yet:

  • What will be contributed back to the Kurento project besides stability and bug fixes?
  • If Twilio works on optimizing Kurento for higher capacities, or adds horizontal scalability modules to it – will that be open sourced or kept inside Twilio?
  • How will Twilio prioritize bugs and requests coming from the large Kurento community versus handling their own internal roadmap?

In many cases, the answer with Kurento would have been that Naevatec could simply limit access to higher level modules to paying customers – there was someone you could talk to when you wanted to purchase such modules. Now with Twilio, that route is over. Twilio are not in the business of paid support and customization of open source projects – they are in the business of cloud APIs.

There will be ongoing friction inside Twilio over the decision between investing in the open source Kurento platform and using it internally. If you thought that was bad with Atlassian acquiring Jitsi – it is doubly so here, where Twilio may find itself competing in the build-vs-buy decisions of companies whose “build” option sits on top of Kurento.

I assume Twilio doesn’t have the answers to these questions yet either.

Maintaining the business model

Kurento has customers. Not only users and developers.

These customers pay Naevatec. They pay for support hours or for customization work.

Will this be allowed moving forward?

Can the yet-to-be-hired new team at Naevatec handle the support?

What happens when someone wants to pay a large sum of money to Naevatec in order to deploy a scalable Kurento service in the cloud? Will Naevatec pick up that project? If said customer also wants to build an API platform on top of it, will that be something Naevatec will still do?

What will others who see themselves as Twilio competitors do if they made use of Kurento up until now? Especially if they were a Naevatec paying customer…

The good thing is that many of the Kurento users ended up getting paid support and customization from third party vendors. Now if only you knew which of them does a decent job…

Should TokBox be worried?

Yes and no.

Yes, because it means Twilio will be getting their multiparty story, and by that competing with TokBox. Twilio has a wider set of features as well, making them more attractive in some cases.

No, because there’s room for more players, and for video calling services at the moment, TokBox is the go-to vendor. I wonder if they can maintain their lead.

What about Janus?

I recently compared Jitsi to Kurento.

Little did I know then that Twilio decided on Kurento and was in the process of acquiring it.

I also raised the question about Janus.

To some extent, Janus is next-in-line:

  • Those I know who use the project are happy with it and its architecture – a lot more than with other, smaller open source media framework projects
  • Slack has been using Janus for a while now
  • Other vendors, some got acquired recently, also make use of it

Whether Meetecho, the company behind Janus, is willing to sell isn’t the important part. It is a matter of price points.

We’ve seen the larger vendors veer towards acquiring the technology that they are using.

Will Slack go after Janus? Maybe Vonage/Nexmo? Oracle, to beef up their own WebRTC offering?

Open source media frameworks have proven to be extremely effective in churning out commercial services on top of them. WebRTC made that happen by being its own open source initiative.

It is good to see Kurento finding a new home and growing up. Kudos to the Kurento team.

 

Learn how to design the best architecture for your WebRTC service in this new Advanced WebRTC Architecture course.

 

The post Twilio Acquires Kurento. Who will Acquire Janus? appeared first on BlogGeek.me.

How Media and Signaling flows look like in WebRTC?

bloggeek - Mon, 09/19/2016 - 12:00

I hope this will clear up some of the confusion around WebRTC media flows.

I guess this is one of the main reasons why I started my new project, an Advanced WebRTC Architecture Course. In too many conversations I’ve had recently, it seemed like people didn’t know exactly what happens with that WebRTC magic – which bits go where. While you can probably find that out by reading the specifications and the explanations around the WebRTC APIs or how ICE works, these all fail to consider the real use cases – the ones requiring media engines to be deployed.

So here we go.

In this article, I’ll be showing some of these flows. I made them part of the course – a whole lesson. If you are interested in learning more – then make sure to enroll in the course.

#1 – Basic P2P Call

We will start off with the basics and build on that as we move along.

Our entities will be colored in red. Signaling flows in green and media flows in blue.

What you see above is the classic explanation of WebRTC. Our entities:

  1. Two browsers, connected to an application server
  2. The application server is a simple web server that is used to “connect” both browsers. It can be something like the Facebook website, an ecommerce site, your healthcare provider or my own site with its monthly virtual coffee sessions
  3. Our STUN and TURN server (yes. You don’t need two separate servers. They almost always come as a single server/process). And we’re not using it in this case, but we will in the next scenarios

What we have here is the classic VoIP (or WebRTC?) triangle. Signaling flows vertically towards the server but media flows directly across the browsers.

BTW – there’s some signaling going on between the browsers and the STUN/TURN server in practically all types of scenarios. It is used to find the public IP address of the browsers at the very least. And almost always, we don’t draw this relationship (until you really need to fix a bug, STUN seems obvious and too simple to even mention).
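
As a side note, that STUN relationship really is a one-liner in the peer connection’s configuration. A minimal sketch, using Google’s free public STUN server:

// Pointing a peer connection at a STUN server.
var pc = new RTCPeerConnection({
  iceServers: [ { urls: 'stun:stun.l.google.com:19302' } ]
});

pc.onicecandidate = function(event) {
  if (event.candidate) {
    // 'srflx' candidates carry the public address learned via STUN;
    // candidates get sent to the peer over your signaling channel
    console.log(event.candidate.candidate);
  }
};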

 

Summing this one up: nothing to write home about.

Moving on…

#2 – Basic Relay Call

This is probably the main drawing you’ll see when ICE and TURN get explained.

In essence, the browsers couldn’t (or weren’t allowed) to reach each other directly with their media, so a third party needs to facilitate that for them and route the media. This is exactly why we use TURN servers in WebRTC (and other VoIP protocols).

This means that WebRTC isn’t necessarily P2P and P2P can’t be enforced – it is just a best effort thing.
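
While you can’t enforce P2P, you can enforce the opposite, which is handy for testing that your TURN setup actually works. A minimal sketch – the TURN server address and credentials here are placeholders:

// Force all media through TURN, dropping host and srflx candidates.
var pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.com:3478',
    username: 'user',
    credential: 'secret'
  }],
  iceTransportPolicy: 'relay'   // relay candidates only
});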

So far so good. But somewhat boring and expected.

Let’s start looking at more interesting scenarios. Ones where we need a media server to handle the media:

#3 – WebRTC Media Server Direct Call, Centralized Signaling

Now things start to become interesting.

We’ve added a new entity into the mix – a media server. It can be used to record the calls, manage multiparty scenarios, gateway to other networks, do some other processing on the media – whatever you fancy.

To make things simple, we’ve dropped the relay via TURN. We will get to it in a moment, but for now – bear with me please.

Media

The media now needs to flow through the media server. This may look like the previous drawing, where the media was routed through the TURN server – but it isn’t.

Where the TURN server relays the media without looking at it – and without being able to look at it (it is encrypted end-to-end) – the media server acts as a termination point for the media and for the WebRTC session itself. What we really see here are two separate WebRTC sessions – one from the browser on the left to the media server, and a second one from the media server to the browser on the right. This is important to understand – since these are two separate WebRTC sessions, you need to think about and treat them separately as well.

Another important note to make about media servers is that putting them on a public IP isn’t enough – you will still need a TURN server.

Signaling

On the signaling front, most assume that signaling continues as it always has. In that case, the media server needs to be controlled in some manner, presumably using backend-to-backend signaling with the application server.

This is a great approach that keeps things simple with a single source of truth in the system, but it doesn’t always happen.

Why? Because we have APIs everywhere. Including in media servers. And these APIs are sometimes used (and even abused) by clients running browsers.

Which leads us to our next scenario:

#4 – WebRTC Media Server Direct Call, Split Signaling

This scenario is what we usually get to when we add a media server into the mix.

More often than not, signaling will be done between the browser and the media server, while at the same time we will have signaling between the browser and the application server.

This is easier to develop and start running, but comes with a few drawbacks:

  1. Authorization now needs to take place between multiple different servers written in different technologies
  2. It is harder to get a single source of truth in the system, which means it is harder for the application server to know what is really going on
  3. Doing such work from a browser opens up vulnerabilities and attack vectors on the system – as the code itself is wide open and exposes more of the backend infrastructure

Skip it if you can.

Now let’s add back that STUN/TURN server into the mix.

#5 – WebRTC Media Server Call Relay

This scenario is actually #3 with one minor difference – the media gets relayed via TURN.

It will happen if the browsers are behind firewalls, or in special cases when this is something that we enforce for our own reasons.

Nothing special about this scenario, besides the fact that it may well happen even when your intent is to run scenario #3 – it is hard to tell your users which network to use to access your service.

#6 – WebRTC Media Server Call Partial Relay

Just like #5, this is also a derivative of #3 that we need to remember.

The relay may well happen on only one side of the media server – I hope you remember that each side is a WebRTC session of its own.

If you notice, I decided here to have signaling go directly to the media server, but I could have used backend-to-backend signaling just as well.

#7 – WebRTC Media Server and TURN Co-location

This scenario shows a different type of decision point. The challenge here is to answer the question of where to deploy the STUN/TURN server.

While we can deploy it as an independent entity that stands on its own, we can also co-locate it with the media server itself.

What do we gain by this? Less moving parts. Scales with the media server. Less routing headaches. Flexibility to get media into your infrastructure as close to the user as possible.

What do we lose? Two different functions in one box – at a time when micro services are the latest tech fad. We can’t scale them separately and at times we do want to scale them separately.

Know Your Flows

These are some of the decisions you’ll need to make if you go deploy your own WebRTC infrastructure; and even if you don’t do that and just end up going with a communication API vendor – it is worthwhile to understand the underlying nature of the service. I’ve seen more than a single startup go work with a communication API vendor only to fail due to specific requirements and architectures that had to be put in place.

One last thing – this is 1 of 40 different lessons in my Advanced WebRTC Architecture Course. If you find this relevant to you – join me and enroll in the course. There’s an early bird discount valid until the end of this week.

The post How Media and Signaling flows look like in WebRTC? appeared first on BlogGeek.me.

AstriCon 2016

miconda - Fri, 09/16/2016 - 17:34
The Asterisk Users Conference – AstriCon – is taking place in Glendale, Arizona, during September 27-29, 2016.

With a consistent group of the VoIP community using both the Kamailio and Asterisk projects, Kamailio will again have a strong presence on site this year, including participation on the expo floor, coordinated this edition by Fred Posner. Along with him, you may meet Torrey Searle, Nir Simionovich, Joran Vinzens and others who can answer your questions about Kamailio and Asterisk.

Like in past editions, several presentations will touch on the use of Kamailio and its integration with Asterisk – see the agenda.

It is definitely a must-attend event if you are looking to build flexible real time communications using Kamailio and Asterisk. Even beyond that, there are not many places around the world where you can find so much VoIP knowledge in one spot throughout the year!

ClueCon Weekly – July 27, 2016 – Chad Hart – WebRTC

FreeSWITCH - Thu, 09/15/2016 - 19:08

Chad Hart joins the ClueCon Weekly Team to talk WebRTC

ClueCon Weekly – July 13, 2016 – Rich Garboski – eTech.tv

FreeSWITCH - Thu, 09/15/2016 - 19:04


*It should be noted that the lip sync on this video is off due to bandwidth issues on our presenter’s side.

Kamailio Advanced Training, Oct 24-26, 2016, in Berlin

miconda - Thu, 09/15/2016 - 12:45
Next European edition of the Kamailio Advanced Training will take place in Berlin, Germany, during October 24-26, 2016.

The content will be based on the latest stable series of Kamailio, 4.4.x, released in March 2016 – the major version that brought a large set of new features, currently at minor release v4.4.2.

The class in Berlin is organized by Asipto and will be taught by Daniel-Constantin Mierla, co-founder and core developer of the Kamailio SIP Server project.

Read more details about the class and registration process at:

Looking forward to meeting some of you in Berlin!

IMTC: Supporting WebRTC Interoperability

bloggeek - Thu, 09/15/2016 - 12:00

Where is the IMTC focusing its efforts when it comes to WebRTC?

[Bernard Aboba, who is IMTC Director and Principal Architect for Microsoft wanted to clarify a bit what the IMTC is doing in the WebRTC Activity Group. I was happy to give him this floor, clarifying a bit the tweet I shared in an earlier post]

One of the IMTC’s core missions is to enhance interoperability in multimedia communications, with real-time video communications having been a focus of the organization since its inception. With IMTC’s membership including many companies within the video industry, IMTC has over the years dealt with a wide range of video interoperability issues, from simple 1:1 video scenarios to telepresence use cases involving multiple participants, each with multiple cameras and screens.

With WebRTC browsers now adding support for H.264/AVC as well as VP9, and support for advanced video functionality such as simulcast and scalable video coding (SVC) becoming available, the need for WebRTC video protocol and API interoperability testing has grown, particularly in scenarios implemented by video conferencing applications. As a result, the IMTC’s WebRTC Activity Group has been working to further interoperability testing between WebRTC browsers.

In the past, the IMTC has sponsored development of test suites, including a test suite for SIP over IPv6, and most recently a tool for testing interoperability of HEVC/H.265 scalable video coding. For SuperOp 2016, the WebRTC AG took on testing of WebRTC audio and video interoperability. So a logical next step was to work on development of automated WebRTC interoperability tests. Challenges include:

  1. Developing basic audio and video tests that can run on all browsers without rewriting the test code for each new browser to be supported.
  2. Developing tests covering not only basic use cases (e.g. peer-to-peer audio/video), but also advanced use cases requiring a central conferencing server (e.g. conferencing scenarios involving multiple participants, simulcast, scalable video coding, screen sharing, etc.)

For its initial work, IMTC decided to focus on the first problem. To enable interoperability testing of the VP9 and H.264/AVC implementations now available in browsers, the IMTC supported Philipp Hancke (known to the community as “fippo”) in enhancing automated WebRTC interoperability tests, now available at https://github.com/fippo/testbed. Sample code used in the automated tests is available at https://github.com/webrtc/samples.

The interoperability tests depend on adapter.js, a Javascript “shim” library originally developed by the Chrome team to enable tests to be run on Chrome and Firefox. Support for VP9 and H.264/AVC has been rolled into adapter.js 2.0, as well as support for Edge (first added by fippo in October 2015). The testbed also depends on a merged fix (not yet released) in version 2.0.2. The latest adapter.js release as well as ongoing fixes is available at https://github.com/webrtc/adapter.

With the enhancements rolled into adapter.js 2.0, the shim library enables WebRTC developers to ship audio and video applications running across browsers using a single code base. At ClueCon 2016, Anthony Minessale of Freeswitch demonstrated the Verto client written to the WebRTC 1.0 API supporting audio and video interoperability between Chrome, Firefox and Edge.
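
In practice, “a single code base” means loading adapter.js on the page and then writing only spec-style calls. A minimal sketch (adapter-latest.js is the project’s published build):

// With adapter.js loaded (e.g. from
// https://webrtc.github.io/adapter/adapter-latest.js), spec-style code
// like this runs unchanged on Chrome, Firefox and Edge.
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    // srcObject is shimmed by adapter.js where a browser lacks it
    document.querySelector('video').srcObject = stream;
  })
  .catch(function(err) {
    console.error('getUserMedia failed:', err.name);
  });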

Got questions or want to learn more about the IMTC and its involvement with WebRTC? Email the IMTC directly.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post IMTC: Supporting WebRTC Interoperability appeared first on BlogGeek.me.

Kamailio v4.4.3 Released

miconda - Wed, 09/14/2016 - 21:00
Kamailio SIP Server v4.4.3 stable is out – a minor release including fixes in code and documentation since v4.4.2. The configuration file and database schema compatibility is preserved.

Kamailio v4.4.3 is based on the latest version of GIT branch 4.4, therefore those running previous 4.4.x versions are advised to upgrade. There is no change that has to be done to the configuration file or database structure compared with the older v4.4.x releases.

Resources for Kamailio version 4.4.3:

Source tarballs are available at:

Detailed changelog:

Download via GIT:

# git clone git://git.kamailio.org/kamailio kamailio
# cd kamailio
# git checkout -b 4.4 origin/4.4

Relevant notes, binaries and packages will be uploaded at:

Modules’ documentation:

What is new in the 4.4.x release series is summarized in the announcement of v4.4.0.

Thanks for flying Kamailio!

    FreeSWITCH Week in Review (Master Branch) September 3rd – September 10th

    FreeSWITCH - Tue, 09/13/2016 - 07:56

    Mod_kazoo had some API enhancements, mod_http_cache has GET and PUT from Azure Blob services, and mod_conference added a variable called conference_join_energy_level. The FreeSWITCH configuration audit is ongoing with initial minor commits and will continue throughout the year. If you are looking to volunteer to help with that or would like more information email brian@freeswitch.org or join the Bug Hunt on Tuesdays at 12:00pm Central Time.

    Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! And, head over to freeswitch.com to learn more about FreeSWITCH support.

    New features that were added:

    • FS-9480 [mod_kazoo] Add API enhancements
    • FS-9457 [mod_http_cache] Allow GET and PUT from Azure Blob Service
    • FS-9487 [core] Add CBR param to video file recording params
    • FS-9495 [mod_conference] Add conference_join_energy_level variable

    Improvements in build system, cross-platform support, and packaging:

    • FS-9551 [mod_sofia] Compare session before setting TFLAG_SKIP_EARLY
    • FS-9488 [mod_http_cache] Fixed a compile error
    • FS-9498 [mod_conference] Try to make video writing thread more efficient

    The following bugs were squashed:

    • FS-9482 [core] Fixed a segfault on the second attempt to use uuid_media_3p
    • FS-9483 [mod_conference] Fixed a missing keyframe after re-invite
    • FS-9484 [core] Fixed a variable type format spec
    • FS-9493 [mod_conference] Fixed a possible crash when changing from normal to personal canvas on the fly
    • FS-9494 [mod_conference] Fixed issues with video avatar switching when video starts/stops
    • FS-9486 [mod_sofia] Fixed an issue with uuid_drop_dtmf switching between tone replace and digit
    • FS-9458 [mod_avmd] Set channel variable before BEEP event is fired
    • FS-6954 [core] Use channel flags to check for proxy media or bypass media
    • FS-9346 [verto_communicator] Add DTMF icon while on a video call, fixing conferences with pin number
    • FS-9497 [mod_av] Fixed an AV sync record issue

    Do you still need TURN if your media server has a public IP address?

    bloggeek - Mon, 09/12/2016 - 12:00

    Yes you do. Sorry.

This is something I bumped into recently, and I was quite surprised it wasn’t obvious – which led me to the conclusion that the WebRTC Architecture course I am launching is… mandatory. The company in question had their media server on a public IP address, thinking that this should remove their need to run a TURN server. Apparently, the only thing it removed was their connection rate.

    It is high time I write about it here, as over the past year I actually saw 3 different ways in which vendors break their connectivity:

    1. They don’t put a TURN server at all, relying on media servers with public IP addresses
    2. They don’t put a TURN server at all, assuming STUN is enough for a peer to peer based service (!)
    3. They don’t configure the TURN server they use for TCP and TLS connectivity, assuming UDP relay is more than enough

    Newsflash: THIS ISN’T ENOUGH

    I digress though. I want to explain why the first alternative is broken:

    Why a public IP address for your media server isn’t enough

    With WebRTC, traffic goes peer to peer. Or at least it should:

But this doesn’t always work, because one or both of the browsers are on private networks, so they don’t really have a public address to use – or don’t know it. If one of them has a public IP, then things should be simpler – the other end will direct traffic to that address, and through the “pinhole” that gets created, traffic can flow the other way.

The end result? If you put your media server on a public IP address – you’re set for success.

    But the thing is you really aren’t.

There’s this notion among IT and security people that you should only open the ports that need to be used. And since all traffic to the internet flows over HTTP(S), and HTTP(S) flows over TCP – you can just block UDP and be done with it.

Now, something that usually gets overlooked is that WebRTC uses UDP for its media traffic – unless TURN relay over TCP/TLS is configured and necessary, which it sometimes is. I asked a colleague of mine about the traffic they see, and got something similar to this distribution table:

    With up to 20% of the sessions requiring TURN with TCP or TLS – it is no wonder a public IP configured on a media server just isn’t enough.
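
If you want to measure this distribution on your own service, the information is available from getStats(). A hedged sketch – the stat field names follow the current spec and may differ across browser versions:

// Classify a session by inspecting the nominated ICE candidate pair.
function transportInUse(pc) {
  return pc.getStats().then(function(stats) {
    var result = 'direct (UDP)';
    stats.forEach(function(report) {
      if (report.type === 'candidate-pair' && report.nominated) {
        var local = stats.get(report.localCandidateId);
        if (local && local.candidateType === 'relay') {
          result = 'TURN relay over ' + (local.relayProtocol || 'unknown');
        }
      }
    });
    return result;
  });
}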

    Oh, and while we’re talking security – I am not certain that in the long run, you really want your media server on the internet with nothing in front of it to handle nasty stuff like DDoS.

    What should you do then?

1. Make sure you have TURN configured in your service
  • But make sure you have TCP and TLS enabled in it and present in your peer connection’s configuration (see the sketch right after this list)
  • I don’t care if you do that as part of your media server (because it is sophisticated), using a TURN server you cobbled up, or through a third party service
2. Check out my new WebRTC Architecture course
  • It covers other aspects of TURN servers, IP addresses and things imperative for a production deployment
  • The images used in this article come from the materials I’ve newly created for it
3. Test the configuration you have in place
  • Limit UDP on your test machines, and do it on live networks
  • Or just use testRTC – we have simple mechanisms in place in this service to run these specific scenarios
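
For the first item on that list, here is a minimal sketch of a peer connection configuration with TURN enabled over UDP, TCP and TLS. Hostnames, ports and credentials are placeholders – turns: on port 443 is what gets you through networks that only allow HTTPS-looking traffic:

// TURN over UDP, TCP and TLS in a single configuration.
var pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: [
        'turn:turn.example.com:3478?transport=udp',
        'turn:turn.example.com:3478?transport=tcp',
        'turns:turn.example.com:443?transport=tcp'
      ],
      username: 'user',
      credential: 'secret'
    }
  ]
});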

    Whatever you do though, don’t rely on a public IP address in your media server to be enough.

    The post Do you still need TURN if your media server has a public IP address? appeared first on BlogGeek.me.

    WebRTC media servers in the Cloud: lessons learned (Luis López Fernández)

    webrtchacks - Fri, 09/09/2016 - 13:32

Media servers, server-side media handling devices, continue to be a popular topic of discussion in WebRTC. One reason for this is that they are the most complex elements in a VoIP architecture, which lends itself to differing approaches and misunderstandings. Putting WebRTC media servers in the cloud and reliably scaling them is even harder. Fortunately there are […]

    The post WebRTC media servers in the Cloud: lessons learned (Luis López Fernández) appeared first on webrtcHacks.

    Should you use Kurento or Jitsi for your multiparty WebRTC video conference product?

    bloggeek - Mon, 09/05/2016 - 12:00

Kurento or Jitsi; Kurento vs Jitsi – is this the ultimate head-to-head comparison of open source media servers for WebRTC?

    Yes and no. And if you want an easy answer of “Kurento is the way to go” or “Jitsi will solve all of your headaches” then you’ve come to the wrong place. As with everything else here, the answer depends a lot on what it is you are trying to achieve.

Since this is something that gets raised quite often these days by the people I chat with, I decided to share my views here. To do that, the best way I know is to start by explaining how I’ve compartmentalized these two projects in my mind:

    Jitsi Videobridge

    The Jitsi Videobridge is an SFU. It is an open source one, which is currently owned and maintained by Atlassian.

    The acquisition of the Jitsi Videobridge serves Atlassian in two ways:

    1. Integrating Jitsi Videobridge into HipChat while owning the technology (it took the better part of the last 18 months)
    2. Showing some open source love – they did change the license of Jitsi from LGPL to APL

    Here’s the intro of Jitsi from its github page:

    Jitsi Videobridge is an XMPP server component that allows for multiuser video communication. Unlike the expensive dedicated hardware videobridges, Jitsi Videobridge does not mix the video channels into a composite video stream, but only relays the received video channels to all call participants. Therefore, while it does need to run on a server with good network bandwidth, CPU horsepower is not that critical for performance.

    I emphasized the important parts for you. Here’s what they mean:

    • XMPP server component – a decision was made as to the signaling of Jitsi. It was made years ago, where the idea was to “compete” head-to-head with Google Hangouts. So the choice was made to use XMPP signaling. This means that if you need/want/desire anything else, you are in for a world of pain – doable, but not fun
    • does not mix the video channels – it doesn’t look into the media at all, nor can it process raw video in any way
    • only relays the received video – it is an SFU

    Put simply – Jitsi is an SFU with XMPP signaling.

If this is what you’re looking for, then this baby is for you. If you don’t want/need an SFU, or you use a different signaling protocol, better start elsewhere.

    You can find outsourcing vendors who are happy to use Jitsi and have it customized or integrated to your use case.

    Kurento

Kurento is a kind of media server framework. This too is open source, but it is maintained by Kurento Technologies.

    With Kurento you can essentially build whatever you want when it comes to backend media processing: SFU, MCU, recording, transcoding, gateway, etc.

    This is an advantage and a disadvantage.

    An advantage because it means you can practically use it for any type of use case you have.

    A disadvantage because there’s more work to be done with it than something that is single purpose and focused.

    Kurento has its own set of vendors who are happy to support, customize and integrate it for you, one of which are the actual authors and maintainers of the Kurento code base.

    Which one’s for you? Kurento or Jitsi?

Both frameworks are very popular, each having at the very least tens of independent installations and integrations done on top of it, running in production services.

    Kurento or Jitsi? Kurento or Jitsi? Not always an easy choice, but here’s where I draw the line:

    If what you need is a pure SFU with XMPP on top, then go with Jitsi. Or find some other “out of the box” SFU that you like.

    If what you need is more complex, or necessitates more integration points, then you are probably better off using Kurento.

    What about Janus?

    Janus is… somewhat tougher to explain.

    Their website states that it is a “general purpose WebRTC Gateway”. So in my mind it will mostly fit into the role of a WebRTC-SIP gateway.

That said, I’ve seen more than a single vendor using it in totally different ways – anything from an SFU to an IOT gateway.

I need to see more evidence of production services using it for multiparty, as opposed to a gateway component, before suggesting it as a solid alternative.

    Oh – and there are other frameworks out there as well – open source or commercial.

    Where can I learn more?

    Multiparty and server components are a small part of what is needed when going about building a WebRTC infrastructure for a communication service.

In the past few months, I’ve noticed a growing number of challenges and misunderstandings around what WebRTC really is and how it works. People tend to focus on the obvious side – the browser APIs that WebRTC has – and forget to think about the backend infrastructure for it, something that is just as important, if not more.

    It is why I’ve decided to launch an online WebRTC Architecture course that tackles these types of questions.

    Course starts October 24, priced at $247 USD per student. If you enroll before October 10, there’s a $50 discount – so why wait?

    The post Should you use Kurento or Jitsi for your multiparty WebRTC video conference product? appeared first on BlogGeek.me.

    Kamailio – 15 Years of Development

    miconda - Sat, 09/03/2016 - 12:34
Fifteen years ago, on September 3, 2001, inside the Fraunhofer Fokus Research Institute, the first commit to the source code repository of Kamailio was made by Andrei Pelinescu-Onciul.

Here are the references to the first three commits:

# git log --pretty=format:"%h%x09%an%x09%ad%x09%s" --reverse | head -3

512dcd9 Andrei Pelinescu-Onciul Mon Sep 3 21:27:11 2001 +0000 Initial revision
888ca09 Andrei Pelinescu-Onciul Tue Sep 4 01:41:39 2001 +0000 parser seems to work
e60a972 Andrei Pelinescu-Onciul Tue Sep 4 20:55:41 2001 +0000 First working release

The project was initially named SIP Express Router (aka SER); years later – after a fork, a rename and a merge – it converged into what is now the Kamailio project. It has been a fabulous journey so far, in a more than ever challenging market of real time communications.

Well known for its performance, flexibility and stability, Kamailio has set a relevant footprint in open source and open communications, enabling entities world wide to prototype, launch new services, build scalable businesses, and research and innovate in real time communications. Moreover, the project has succeeded in creating an amazing community of users and contributors, the real engine behind its successful evolution.

It is time to celebrate the moment – everyone involved deserves it. Thank you all!

In a few months the project will deliver v5.0.0, its 16th public major release, with a restructuring of the source tree to match modern approaches and more flexibility in choosing the language for building the desired SIP routing rules. Stay tuned!

Thank you for flying Kamailio!

    FreeSWITCH Week in Review (Master Branch) August 20th – August 27th

    FreeSWITCH - Tue, 08/30/2016 - 20:15

    It was a quiet week in the code with some minor build updates and improvements. The FreeSWITCH configuration audit has begun with initial minor commits and will continue throughout the year. If you are looking to volunteer to help with that or would like more information email brian@freeswitch.org or join the Bug Hunt on Tuesdays at 12:00pm Central Time.

    Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! And, head over to freeswitch.com to learn more about FreeSWITCH support.

    Improvements in build system, cross platform support, and packaging:

    • FS-9442 [Debian] Tweak the packages to properly install the debug symbols via freeswitch-all-dbg and freeswitch-meta-all-dbg
    • FS-8608 [configuration] Beginning the default configuration and a first step is establishing that parameters should have dashes

    The following bugs were squashed:

    • FS-9443 [core] Fixed a segfault caused by SDP in a verto.invite with missing ICE candidates
    • FS-9447 [mod_avmd] Increased the number of samples to skip to avoid false beep detection on some voicemails for Windows
    • FS-9452 [libsofia] Fixed the true/false logic for using destination flag
    • FS-7706 [mod_callcenter] Hangup agent channel if we failed to bridge it with member channel

    VUC – SER-Kamailio At 15 Years

    miconda - Mon, 08/29/2016 - 17:16
This week on Friday, September 2, 2016, we will join the VoIP Users Conference (VUC), moderated by Randy Resnick, for an open discussion about the evolution of the Kamailio project.

On September 3, 2001, the first commit was pushed for what was then the SIP Express Router (SER) project at the Fraunhofer FOKUS Research Institute in Berlin, Germany – the project that evolved over time into what is now Kamailio.

A lot of great things happened along the way, and there were also some not-very-pleasant moments, but hey, that’s life! Join us to listen or share such moments – if Kamailio made your life easy or hard, if you have a funny story to tell or a photo/video to share, you are welcome on board!

We started the celebration at the Kamailio World Conference 2016; now we gather online for a non-technical debate on whether the project has succeeded in delivering on its promises.

Several people have already confirmed their participation, like Alex Balashov, Daniel-Constantin Mierla and Fred Posner. We expect VUC regulars such as James Body to be around.

Anyone can connect to VUC and listen to the audio or watch the video session via SIP or YouTube live streaming. For more details see:

Should you want to actively participate in the discussion, contact us via email at registration [at] kamailio.org in order to plan the structure a bit. Last minute joining is also possible, but it is a matter of capacity for the video conferencing system.

Thank you for flying Kamailio!

    Will there ever be a decentralized web?

    bloggeek - Mon, 08/29/2016 - 12:00

    No. Yes. Don’t know.

I’ve recently read an article at iSchool@Syracuse – for lack of a better term on my part, pundits opining about the decentralized web.

    It is an interesting read. Going through the opinions there, you can divide the crowd into 3 factions:

    1. We want privacy. Also we hate governments and monopolies. This is the largest group
    2. There’s this great tech we can put in place to make the internet more robust
    3. We actually don’t know

    I am… somewhat split across all of these three groups.

    #1 – Privacy, Gatekeepers and Monopolies

    Like any other person, I want privacy. On the other hand, I want security, which in many cases (and especially today) comes at the price of privacy. I also want convenience, and at the age of artificial intelligence and chat bots – this can easily mean less privacy.

As for governments and monopolies – I don’t think these will change due to a new protocol or a decentralized web. The web started as something decentralized and utopian to some extent. It degraded to what it is today because governments caught on and because companies grew inside the internet to become monopolies. Can we redesign it all in a way that doesn’t allow governments to rule over the data flowing through it, or monopolies to form? I doubt it.

I am taking part now in a few projects where location matters. Where you position your servers, how you architect your network, and even how you communicate your intent with governments – all of these can make or break your service. I just can’t envision how protocols can change that on a global scale – or how the powers that be, who need to promote and push these things, will actively do so.

I think it is a good thing to strive for, but something that is going to be very challenging to achieve:

    • Most powerful services today rely on big data = no real privacy (at least not in front of the service you end up using). This will always cause tension between our design for privacy versus our desire for personalization and automation
    • Most governments can enforce rules in the long run in ways that catch up with protocols – or simply abuse weaknesses in products
    • Popular services bubble to the top, in the long run making them into monopolies and gatekeepers by choice – no one forces us to use Google for search, and yet most of us view search on the web and Google as synonymous

#2 – Tech

    Yes. Our web is client-server for the most part, with browsers getting their data fix from backend servers.

    We now have technologies that can work differently (WebRTC’s data channel is one of them, and there are others still).

We can and should work on making our infrastructure more robust. More impregnable to malicious attackers and less prone to errors. We should make it scale better. And yes, decentralization is usually a good design pattern to achieve these goals.

    But if at the end of the day, the decentralized web is only about maintaining the same user experience, then this is just a slow evolution of what we’re already doing.

    Tech is great. I love tech. Most people don’t really care.

    #3 – We just don’t know

    As with many other definitions out there, there’s no clear definition of what the decentralized web is or should be. Just a set of opinions by different pundits – most with an agenda for putting out that specific definition.

I really don’t know what that is or what it should be. I just know that our web today is centralized in many ways, but in other ways it is already rather decentralized. I have this website hosted somewhere (I am clueless as to where), I write these words from my home in Israel, and the site is served either directly or from a CDN to different locations around the globe – all through a set of intermediaries, some of which I specifically selected (and pay for, or use for free). To me, that’s rather decentralized.

At the end of the day, the work being done by researchers to find ways of utilizing our existing protocols to offer decentralized, robust services, or to define and develop new protocols that are inherently decentralized, is fascinating. I had my share of it in my university days. This field is a great place to research and learn about networks and communications. I can’t wait to see how these efforts will evolve our everyday networks.

     

     

    The post Will there ever be a decentralized web? appeared first on BlogGeek.me.

    SIPit 32

    miconda - Fri, 08/26/2016 - 15:32
The next SIPit – the SIP Interoperability Test Event – will be held at the University of New Hampshire Interoperability Laboratory, in Durham, New Hampshire, USA, during September 12-16, 2016.

SIPit facilitates testing your SIP implementations – a gathering of SIP professionals that develop phones, PBXes, servers or other SIP applications, enabling peer-to-peer and multiparty tests.

There is a great testbed of various NAT networks for those of you working on NAT traversal issues, including IPv6 in the network, as well as an extensive set of tests for TLS. This year, there will also be a focus on STIR – the new secure identity handling in SIP.

Olle E. Johansson explains why you should participate in these slides: Participate in SIPit from Olle E Johansson

To register and learn more details, go to the main SIPit web site.

Over the past 15 years, Kamailio-SER participated in many of the SIPit events, which is reflected in the robustness of the application. This edition, at least Olle will be there again to ensure the latest version stays rock solid.

Many of the automatic tests on site are built using Kamailio – go there and hammer it!

Thank you for flying Kamailio!
