SDP has been a frequent topic, both here on webrtcHacks as well as in the discussion about the standard itself. Modifying the SDP in arcane ways is referred to as SDP munging. This post gives an introduction into what SDP munging is, why it's done and why it should not be done. This is not […]
The post Not a Guide to SDP Munging appeared first on webrtcHacks.
Finding a good WebRTC course is tricky. Finding a training program that teaches you more than the basics about WebRTC isn't simple. Here are a few questions to guide you in finding that course you want.
First off – I am biased. I have created a WebRTC training and have been running it successfully for a couple of years now, teaching IT workers about WebRTC. I'll try to be as objective as possible in this article. The main thing I ask of you? Do your own research, and feel free to use my questions below as a guide to your quest after the best WebRTC training course.
Without much ado, here are the 6 questions you need to ask yourself about the WebRTC training you are planning to enroll to:
1. What was the last date the WebRTC course was updated?
This is probably the most important question to ask.
WebRTC is a moving target. Ever changing.
There are 3 separate axes that need to be tackled when learning WebRTC:
The standard is still changing. WebRTC 1.0 will hopefully be completed this year. The changes are minor, but they still occur. And once they are over, we will start talking about WebRTC NV – the Next Version of WebRTC. Which will inject new learnings around WebRTC.
Browsers are changing. Especially Chrome. But not only. They have their own implementations of WebRTC, slightly different than the standard. And they are crawling ever so slowly towards being spec-compliant. On top of that, they have their own features, nuances and experiments going on; of things that might or might not end up as part of WebRTC.
The Ecosystem around WebRTC is what you should really be interested in. Not many developers use WebRTC directly. Most use third party open source or commercial frameworks so they see less of the WebRTC API surface itself. Selecting which framework to use, and how they are going to affect your architecture and future growth is the hard part.
All this boils down to this:
If the WebRTC training you are going to enroll in is more than 6-12 months old, it isn't going to help you that much.
2. Does it cover more than the WebRTC API surface?
WebRTC is multidisciplinary. It spans across different concepts, and is a lot more than just the APIs the browser publishes.
How is the course you're planning to take tackling that?
While many of the WebRTC courses focus on the API surface, they fail to understand the reality of WebRTC: Most WebRTC developers don't interact directly with WebRTC APIs, but rather use third parties – either in the form of open source or commercial frameworks for signaling and media servers; or in the form of full managed services (think TokBox or Twilio). In such cases, it is critical for the students to understand and grok WebRTC from a perspective of the whole architecture and less so in what each and every API in WebRTC does (something that may change from one Chrome release to another).
Things you'll need covered in order to write a decent application that is production ready:
Then there's the part of how you boil it down to an actual solution. What components to use and why.
WebRTC has a set of building blocks, but you need to know which ones to use to fit the specific model you want to operate.
An interesting tidbit to check – does the training include aspects of group sessions or broadcasting? These require a look beyond the basics of WebRTC API calls.
Make sure the WebRTC course you take isn't too focused on the APIs and isn't too focused on the standard specification.
3. Is the instructor who created that WebRTC training available for questions?
Assume that WebRTC is going to be challenging to grok.
And with an online course you are mostly on your own. Unless there's a bigger framework at play.
Here are a few things that can help you out:
And one last thing – do you even know who the instructor is?
An important part in learning WebRTC is the ability to ask questions interactively. Make sure that is part of the training you enroll in.
4. How long is the course?
An hour? Two hours? Four hours?
More doesn't always mean better, but with WebRTC here's the thing – there's quite a lot of ground to cover. And there are three ways to do that:
That third option means that a WebRTC course, at least a decent one, should take more than a full day of training – well above 10 hours of information.
If you want to really learn WebRTC, make sure the course you take has enough hours in it to give you the knowledge you need.
5. What are students saying about the course?
Do people like the course? Do they feel it got them what they needed?
Look at the testimonials of the WebRTC courses: you will immediately notice the frustration of students with the freshness of the courses – most of them are 3-5 years old. This makes them useless. Interestingly, students are less worried about the price (these are cheap courses) – they are a lot more worried about the time they wasted.
Check what companies are sending their employees to take that course. Are they just sampling it out, or sending multiple employees? What do these employees have to say about the course after taking it?
You will be able to find many answers to the other questions here just by reading the reviews of students.
If you are going to invest your time on an online WebRTC training, make sure to read testimonials and reviews about that training.
6. Is the course suitable for your purpose?
Just need to understand in broad strokes what WebRTC is and what it does? Are you after a deep understanding of WebRTC and how to develop or test it properly? What about offering support or ops for a WebRTC application?
Each of these has a different set of needs. Each needs a view of WebRTC from a different angle.
Which angle do you need and how well does it align with the angle of that course you are looking at?
Make sure the WebRTC course is aligned (as much as possible) with the type of work you're expected to do.
Looking for a WebRTC course? Ask yourself: What should a good online WebRTC training include? A good WebRTC training should include information about WebRTC APIs, STUN/TURN servers, media servers (SFU, MCU), signaling servers and the state of the ecosystem and browser support.
A course focusing only on the WebRTC API or showing how a specific simple "hello world" application works won't suffice.
Ask yourself the following questions about the course to understand if it is for you:
* What was the last date the WebRTC course was updated?
* Does it cover more than the WebRTC API surface?
* Is the instructor who created that WebRTC training available for questions?
* How long is the course?
* What are students saying about the course?
* Is the course suitable for your purpose?
Yes. Some courses are targeted more towards developers while others focus on ops and support.
If you are looking for a WebRTC course, be sure to check that the course is aligned with your job description.
There are several WebRTC training courses out there. Be sure to sift through them and find the one that is most suitable for you.
Interested? Check out my own WebRTC courses:
The post 6 questions to ask about your WebRTC training appeared first on BlogGeek.me.
Most business owners will agree that it's become much harder to justify paying the increasingly exorbitant lease rates for office space in most major cities in North America. Even Canada isn't exempt.
Once a haven for US companies looking to hire cheaper Canadian labor, Vancouver now has the lowest commercial vacancy rate. To add insult to injury, it also has the highest price of gasoline in North America.
CBRE's Canada Q2 Quarterly Statistics Report said that downtown Vancouver's office vacancy rate was 2.6 percent in 2019's second quarter, down from 4.7 percent one year previously, making it the hottest commercial office space market in North America on par with Toronto, beating out third-place San Francisco, where the vacancy rate is 3.6 percent.
Coworking Growth
Growth in commercial office space worldwide is also being spurred by coworking. We now see coworking facilities in a large number of major cities across the globe, although the number of new coworking space openings does appear to be slowing down when compared to the previous year.
Our projections show that in 2019 growth will be slower than the previous year, although the industry continues to grow at a strong pace. While most of the industry growth can be attributed to new spaces, a large portion is owed to existing spaces diversifying their services or acquiring businesses, and expanding into smaller, niche markets that generally have stronger, more close-knit communities. – Coworking Resources
Coworking is obviously not free. It does reduce the overhead and headache of having to manage your own office (lease, insurance, maintenance, etc.) but if an organization made use of coworking facilities full-time, it could likely be more expensive than a comparable stand-alone office space, per square foot.
It doesn't take a genius to see that not only are office spaces getting harder to find, but they are also the most expensive they have ever been. For staff who are interested in raising a family, getting to this expensive office is also costly. This sounds like a lose-lose proposition. Why are we doing this again?
Coworking + remote work | FTW!
Unsurprisingly, IT organizations and software organizations that have no real need for dedicated physical locations appear to be shuttering offices and opting for coworking + remote work models.
Automattic, Gitlab, Shopify (just to name a few) have successfully made this transition; in fact, some of these companies were purposefully built as distributed companies from the get-go.
Various reports and studies have been done which seem to indicate that everyone wants to work from home. In a recent study, Buffer published the State of Remote Work where 2,500 remote workers surfaced some interesting statistics:
Zapier has also published a report(*) on the subject and the findings are quite similar in that it points to knowledge workers' desire to work remotely:
Microsoft (Japan) is also researching work routines and recently published findings on a 4-day work week experiment, which increased productivity by up to 39.9%. This could very well increase even more if they adopted a virtual coworking model for the other 4 days.
Concerns
Now that we have set the stage for what looks to be an unstoppable trend, let's take a look at why this is not a no-brainer.
I interviewed a few companies (ranging from small to large) and asked them what their position was on remote work. Some business owners and team members expressed concerns.
Some of these concerns are legitimate, and it could be that they will not be overcome even with the best remote work processes.
Case in point – In 2012, Marissa Mayer was hired as CEO of Yahoo! and was charged to return the former powerhouse to its glory days. Among the many things she had to fix were company culture and productivity. According to sources close to Yahoo!, it was made clear that many of those working at the company were not getting their jobs done when working from home. A review of VPN logins and source repository access logs revealed how little work was being accomplished while Yahoo! staff were working from home.
In 2013, an internal letter was issued in which the company mandated that remote work was to be all but banned. Here is an excerpt from that letter…
To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side. That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings. Speed and quality are often sacrificed when we work from home. We need to be one Yahoo!, and that starts with physically being together.
Some of Mayer's staff, the press, and many other groups let her have it; no one seemed to be impressed. It could be said that Mayer had little choice. She had to do whatever she could to turn the company around and for her, that meant taking some drastic measures. In a Forbes post, Yahoo! commented further…
"This isn't a broad industry view on working from home – this is about what's right for Yahoo, right now."
At first blush, it would seem this was more about timing and the position Yahoo! found themselves in at the time. They did what they thought needed to be done to influence behavior.
This seems like an extreme case, but the same sentiment can be found in other IT and SaaS organizations worldwide. Some of these companies are the creators of the communications software and services we use for remote work every day, and they openly promote the "work from anywhere" mantra in their own product marketing. It might seem a little hypocritical, but it is happening for many of the same reasons we have shown.
Remote Work – Benefits
Now that we have heard the concerns, let's talk about the potential upside. Here are some high-level benefits:
One in four knowledge workers find their commute to be among the most stressful parts of their job.*
Obviously, these benefits can contribute to a more attractive and economical approach to building a business, as long as you can overcome the concerns.
Taking the plunge
If you are still with me and undeterred, you are not alone. Personally, I have been working remotely 100% for several years in various roles with teams all over the world. I have learned a few things along the way. Here are the cliff notes.
Remote Work Guide: A good place to start is by creating a "remote work guide" document that embodies some or all of the elements listed here along with your own spin on things. Your teams may not have experienced working remotely before, so they will need some guidance and direction. This is also where you set expectations, e.g. working hours, always-on video, etc. It could be an addendum to your existing company handbook or a completely new document; keep in mind it will grow with your company. (Note: Many miss this step and it's likely the single most important contributing factor to a successful remote work strategy for your company or organization.)
Small Teams: You are going to need some time to plan your rollout and decide which processes and tools are going to work best for your various teams. When your teams are first getting started, parcel off smaller project teams that are tech-savvy and preferably have experience using online collaboration tools. Their experience will pave the way for everyone else. Once you have a good process that seems to be working, you can roll it out in stages for everyone else.
Always-on Video Conferencing: This may sound a bit creepy but it can actually be quite effective in preserving team spirit, fending off FOMO and helping with the isolation that some feel when working remotely. It can be done in pairs, teams or even using a water cooler approach where team members drop in and out during the day. You can even use it to bridge branch offices, like a window into each remote office. Let's be honest, organizations are going to see a bit more opposition when introducing this concept, so it will need to be actively managed. As the business leader, you will need to actively work with team members to encourage participation (e.g. by leading a weekly all-hands meeting or asking them to join or lead regular video calls, etc). If managed properly this idea can be a great communications centerpiece.
Weekly all-hands Video Conference: This is less about remote work and more just good business practice. I have seen this work well in traditional and remote businesses, but few business leaders do it. Weekly highlights are shared by the CEO with support from other leaders in the organization. A master slide deck is prepared in Google Slides, with input from various departments. Friday afternoons are a good time as it ends the week on a high note (and a serious note if things need attention) and helps start the next week off with a positive sentiment.
Coworking Passes: In addition to virtual coworking, it's a good idea to include at least one or two days a week of onsite coworking for those that feel they need to get out of the house and be around other professionals. This has been widely adopted by some of the larger distributed organizations. Going completely virtual can be a bit of a shock to the system; this helps ease the transition and keeps everyone feeling like they are still human.
Offsite Team Events: With the reduction or elimination of in-person face time, team-building exercises now become more important. Organize quarterly or semi-annual gatherings at your favorite coworking establishment or pick a fun recreational location. If your company is large enough, you can divide these meets into geographical pods. Schedule at least one all-hands meeting per year with some fun events to ensure everyone feels like they are part of the organization. Do yourself a favor and don't leave this to the last minute, or you will have a poor turnout, piss people off and defeat the purpose.
Collaboration, Productivity & Automation Tools
There are literally dozens of team collaboration tools you can use to empower your remote workers. Try as many as you can. Select tools that are intuitive and self-explanatory; this will cut down on the learning curve. Make sure the vendors you select provide mobile support so your teams can be connected via phone or tablet.
Here are some that I have used and have found work well for remote teams, in no particular order:
As this remote work thing matures, we will see more purpose-built applications that aim to bring our teams closer together, virtually.
We are already seeing some activity in this space with the recent capital raise by Tandem, which has a sidecar collaboration application that works pretty well with Slack.
Tandem – virtual coworking app
Another is Sococo, which looks more like a virtual workspace with web conferencing. They take an interesting approach to how they visualize the virtual office and how team members work together. I actually think this is an intuitive idea, although it does feel a wee bit recreational. To be fair, I have not used the service.
Sococo – virtual workspace
It is expected these solutions that personalize and aid remote teams in working better together will certainly evolve. It is still unclear if customers would opt in for purpose-built applications or just use several disparate applications to do the same job; time will tell.
The next post will speak to the future of remote work. We will be touching on AI & bots, VR & AR in the remote work realm, some of which are being used today and some are not far off at all.
If you work in a distributed company, I'd like to hear from you. What tools do you use today and how are they working for you? How often do you use video/web conferencing as part of your daily routine? If you prefer sharing your comments or questions privately, feel free to shoot me a text message or call anytime: (877) 897-1952 (Note: All calls will be recorded).
None of the ideas expressed in this post are shared, supported, or endorsed in any manner by my employer.
Most developers should just use libwebrtc that Google supplies for their WebRTC mobile SDK. Which exact release to pick and at what pace to update is a more nuanced decision one needs to make.
* I’ll be using SDK and library as well as mobile WebRTC SDK and mobile WebRTC library interchangably in this article, so bear with me
In the release notes of WebRTC M80 (=the changes made to WebRTC in the upcoming Chrome 80), Google added an interesting deprecation announcement:
Deprecating binary mobile libraries
The webrtc.org open source repository contains platform implementations for Windows, Mac, iOS and Android. These are primarily utilized for automated testing. Browsers and other applications that embed WebRTC often have developed their own highly optimized platform code with custom capture/render components matching the applications architecture.
We have decided to discontinue the distribution of precompiled libraries for Android and iOS. The script for creating the AAR library can be found here, the build script for iOS is located here.
Let's try to decipher this deprecation and explain it, and then see what developers should be doing (and are doing already).
Official WebRTC precompiled libraries for Android and iOS
To understand this announcement, we first need to understand what these WebRTC precompiled mobile libraries are exactly.
From the start, it was possible to use WebRTC on mobile. Google introduced WebRTC in Android Chrome in July 2013, less than a year after Chrome 23 was released on desktop with WebRTC support. Since then, the codebase for libwebrtc (Google's implementation of WebRTC) has included support for mobile.
Up until 2016, Google did not offer any compiled binaries. Developers had to figure out the build process and handle it on their own. Several GitHub repositories held precompiled WebRTC binaries for mobile and were somewhat popular.
In November 2016, Google introduced the official WebRTC precompiled libraries for Android and iOS, which they have maintained up until today.
Most of the vendors out there who are building applications or even SDKs (think CPaaS vendors such as Twilio or Nexmo) make use of libwebrtc as the basis of the VoIP stack implementation they run in their own clients. This was true BEFORE Google announced official WebRTC precompiled mobile SDKs and it will continue to be the case even now, after Google discontinues the distribution of these mobile SDKs.
How did we get here?
Discontinuing the distribution of the WebRTC mobile libraries
First off, it is important to state and understand: Google uses the same WebRTC codebase that goes into Chrome in the Google Meet and Google Duo mobile applications running on Android and iOS.
There is no plan or incentive for Google to stop maintaining the libwebrtc codebase for mobile operating systems.
That being said, Google just stopped distribution of its WebRTC mobile libraries.
Why?
Because for all intents and purposes they were useless.
All vendors I know who run their products in production for mobile either use a third party SDK (open source or commercial) or have their own custom build of libwebrtc.
This is the case partially because the precompiled binaries from Google are somewhat useless. Here's the official CocoaPod for Google's WebRTC project:
The version mentioned here is 1.1.29400. What exactly does this relate to? It is hard to tell – the numbering doesn't obviously map to a Chrome milestone or a WebRTC release, so developers couldn't easily know which version of WebRTC they were actually getting or how fresh it was.
This made the binaries useless without giving them any real chance in life, which led to their discontinuation.
The Google WebRTC team had two alternatives here:
They chose discontinuation. Probably because of what I'll be sharing with you next.
What WebRTC mobile SDK should you use now?
This is the real question. It is the one developers had to deal with before, during and now after the age of Google's official precompiled mobile libraries for WebRTC.
There are two routes to take here for any developer who needs a WebRTC SDK (I am ignoring those using higher level abstractions such as SDKs provided by CPaaS vendors):
Between these two alternatives, the majority of the developers are choosing option (1). Why? Because let’s face it – no other library today offers the same feature richness, quality and interoperability with what runs in the browser that everyone uses.
There are a multitude of alternatives to Google's libwebrtc, but they are all lacking in at least one way (probably more):
I am sure I've left a few more gaps out of that list.
Ask yourself why Edge is now based on Chromium and using Google's WebRTC almost verbatim, or why Apple is relying on Google's libwebrtc for a lot of its own implementation of WebRTC in Safari.
That said, there are very good reasons for using libraries other than Google's libwebrtc:
For the majority of the developers out there, libwebrtc is the right SDK to use on mobile.
Best practices in using Google's libwebrtc mobile SDK
If you are going to use libwebrtc, what is it that you should be doing then?
Here are the best practices I've seen of companies using the libwebrtc mobile SDK in production:
Use Google's libwebrtc implementation. This is by far the most comprehensive and popular library for client-side WebRTC implementations. Other alternatives exist, but you need to understand what you sign up for when you opt for using them.
What version of Google's WebRTC should I use for my mobile application?
The best practice here is to pick something that is new but not too new. Pick one of the latest releases that is considered to be stable. Don't upgrade immediately to the latest release, as that is time-consuming. Make it a point of upgrading your libwebrtc 2-4 times a year.
Are there client-side WebRTC libraries other than the one Google publishes?
Yes, there are. Pion and GStreamer come to mind in the open source scene. I'd seriously consider the reasons for not using Google's libwebrtc in favor of anything else though, mainly due to its feature richness and immediate interoperability with Chrome and all other browsers.
Reduce your risks with WebRTC
Looking to lower your risks and shorten your time to market on that WebRTC project you're working on?
I can help you with this; when it comes to WebRTC and communication technologies, I help my clients get the answers they need and make sure their project doesn't get delayed.
Contact me if you are interested.
The post How to pick the right WebRTC mobile SDK build for your application appeared first on BlogGeek.me.
Register to the two free webinars I am hosting this month in areas around supporting WebRTC with Talkdesk and Poly.
I am shifting gears this year. Looking back at last year, what I've noticed is that there's been a shift in what clients are asking of me. Many of them are more interested in issues that are support related rather than architecture or development. While a lot of the work I do revolves around assisting with defining architectures and dealing with roadmaps of products, there's been an ongoing increase in the questions related to supporting WebRTC.
This led to a few changes in the things that I have on offer:
Somehow, I found myself scheduling two separate free webinars for this month with partners that are around WebRTC support.
Talkdesk and how to support WebRTC-based call centers
At testRTC, we created a product in 2019 to help support teams analyze network issues for their users. Our first client for this product was Talkdesk, who was kind enough to share their experience with us in a nice testimonial.
On Tuesday next week, João Gaspar from Talkdesk will join me in a webinar titled How to analyze WebRTC network issues in minutes and not hours (or days). In this webinar, I'll explain a bit about the challenges WebRTC poses when it comes to connectivity from a support perspective, and João will share with us what Talkdesk are doing today to assist their users.
I've learned a lot from working with João and his team last year, and I am sure this will be interesting to you as well.
How to analyze WebRTC network issues in minutes and not hours (or days)
Tuesday, January 21, 2020
14:00-14:45 EST; 11:00-11:45 PST
Register here
Poly and picking the right headset to improve WebRTC session quality
In the last year I've had a lot of conversations with support engineers. The people who end up needing to troubleshoot, figure out and explain issues to their users. Many of these issues end up being related to network connectivity. This made me create the new Supporting WebRTC course (now open for all to enroll). One thing I wanted to add there but had no clue about is headsets.
Headsets are this thing that I have at home and use for most of my conference calls. But I never really gave them a second thought. The last pair I purchased at the local computer equipment store, not even making an informed decision about what I needed.
That led me to reach out to Poly to get a briefing about headsets and how they affect quality in WebRTC, which led me to understand that this boring topic known as headsets is quite fascinating. Obviously, I used what I learned in that briefing to create the lesson I needed in my course.
The great thing, though, is that Richard Kenny from Poly (who briefed me) was kind enough to accept joining a webinar about this topic.
Picking the best headset for your next WebRTC session
Tuesday, January 28, 2020
14:00-14:45 EST; 11:00-11:45 PST
Register here
How are you handling your support efforts with WebRTC?
The people who usually follow me here are developers or product managers. Seldom are they support-oriented. I know that based on the comments and conversations I have on and off this website.
My suggestion to you is to go check what your support team is challenged with. What is keeping them up at night. What is it they need assistance with. What knowledge are they missing.
And then once you do, see if these webinars might be useful to them so you can share this with them. Let’s make 2020 the year we start solving more of the connectivity issues for our customers.
The post Supporting WebRTC: Two webinars coming your way (with Talkdesk & Poly) appeared first on BlogGeek.me.
WebRTC isn't like Node.js or TensorFlow. Its purpose isn't adoption in general, but rather adoption in browsers. If you believe otherwise, then there's a problem of expectations you need to deal with.
As we are starting 2020, with what is hopefully going to be an official spec for WebRTC 1.0, it is time for a bit of reflection. I started this off when writing about Google's WebRTC roadmap and I'd like to continue it here about WebRTC goals and expectations.
When I explain what WebRTC is, I start off with the fact that it is two things at the same time:
The open source project angle is interesting.
Is WebRTC an open source project?
The main codebase we have for WebRTC today is the one maintained by Google at webrtc.org. There are other open source projects that implement the spec, but none to this level of completeness and quality.
By the ecosystem and use of WebRTC, one may think that this is just another popular open source project, like Node.js or TensorFlow.
It isn't.
If I had to depict Node.js, it would be something like this:
How would I draw a diagram of WebRTC? Probably something like this:
From an administrative point of view, WebRTC is part of Blink, Chromium's rendering engine. Blink is part of Chromium, the open source part of Chrome. And Chromium is what Chrome uses as its browser engine.
WebRTC isn't exactly an independent project, sitting on its own, living the life.
Need an example why? WebRTC's version releases follow the version releases of Chrome in terms of numbering and release dates. But mobile doesn't follow the exact same set of rules. Olivier wrote it quite eloquently just recently:
"For web developers, release notes are very good and detailed. But for iOS and Android developers… I expect the same level of information."
There's an expectation problem here…
WebRTC isn't like other open source projects that stand on their own, independent from what is around them. WebRTC is a component inside Chrome. A single module.
The WebRTC team at Google are assisting developers using the codebase elsewhere. It took a few years, but we now have build scripts that can build WebRTC separately and independently from Chromium. We have official pre-compiled mobile libraries for WebRTC from Google, albeit not a 1:1 match to the official WebRTC/Chromium releases.
At the end of the day, the WebRTC team at Google are probably being measured internally at Google by how they contributed to Chrome, Google's WebRTC-based services AND to the web as a whole. Less so to the ecosystem around their codebase. If and how WebRTC gets adopted and used in mobile-first applications or inside devices and sensors is harder to count and measure – and probably interests Google management somewhat less.
Who contributes to WebRTC?
I took the liberty of checking the commit history of the WebRTC git project over the years, creating the graph below:
There were various different emails associated with the committers, but they fell into these broad categories:
It is safe to say that the majority of committers throughout the years are Googlers, and that the ones who aren't Googlers aren't contributing all that much.
Is that because Google is protective about the codebase, as it goes right into Chrome which serves over a billion users? Or is it because people just don't want to commit? Maybe the ecosystem around WebRTC is too small to support more contributors? Might there be other reasons?
One wonders how such a popular project has so few external contributors while there are many developers who enjoy it.
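If you want to run a similar tally yourself, here is a rough sketch of how it can be done against a local checkout of the webrtc repository. The domain buckets are my own simplification, and the script only counts raw commits, not the size of the contributions.

```javascript
// A rough sketch: count commits per author email domain in a local checkout of
// https://webrtc.googlesource.com/src. Run with Node.js from inside the checkout.
const { execSync } = require('child_process');

const emails = execSync('git log --format=%ae', { encoding: 'utf8' })
  .trim()
  .split('\n');

const byDomain = {};
for (const email of emails) {
  const domain = email.split('@')[1] || 'unknown';
  byDomain[domain] = (byDomain[domain] || 0) + 1;
}

// Print the 10 most frequent author domains.
console.log(
  Object.entries(byDomain)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 10)
);
```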
Is webrtc.org Google's RTC or ours?
A few years back, Google introduced a new programming language – Go (or Golang). It is getting quite a following (and its own WebRTC implementation, though unrelated to this article).
In May 2019, quite a stir was raised due to a post published by Chris Siebenmann titled Go is Google's language, not ours. Interestingly enough, if you replace the word "Go" with "WebRTC" in this article – it rings true in many ways.
Golang has over 2,000 lines in its CONTRIBUTORS file versus WebRTC's 100+ AUTHORS. While Golang identifies individual contributors, WebRTC uses wildcard "corporate" contributions (I wouldn't count too many contributors in these corporates though). WebRTC is smaller, and I dare say more centralized.
The simple answer to those who complain is going to be the same – "this is an open source project, feel free to fork it".
For WebRTC, I'd add to this that what goes into the API layer is what the W3C and IETF decide. So Google isn't in direct control over the future of WebRTC – just of its main implementation, which needs to adhere to the specification.
Then there are the Node.js community forks that took place over the years (the latest one from 2017). These disputes, technical and political, always seem to get resolved and merged back into the main project. In hindsight, these just seem like attempts to influence the direction of the project.
Can this be done for WebRTC?
It already occurred with the introduction (and slow death) of ORTC. ORTC (Object-RTC) started and was actively pushed by Microsoft, ending with most of what they wanted to do wrapped up into WebRTC (and probably causing a lot of the delays we've had with reaching WebRTC 1.0).
What does that mean to you?
Should you complain about Google? Maybe, but it won't help.
For Google, it makes sense to push WebRTC into Chrome as that is its main objective. Google is improving in tooling and capabilities of using WebRTC outside of Chrome, but this objective will always come second to Chrome's needs and Google's services.
As an open source project, you are free to use or not use it. You're not paying for it, so what would you be complaining about?
Google has invested and is still investing heavily in WebRTC. It is their prerogative to do so, especially as they are the only ones doing it today.
You should make an educated decision, weighing your requirements, risks and challenges, when developing a service that makes use of WebRTC.
The post Google’s WebRTC goals – a problem of expectations appeared first on BlogGeek.me.
Google's plans for WebRTC have either changed or finally been revealed. Where? In its internal WebRTC roadmap.
WebRTC is many things.
On one hand, it is a standard specification at the W3C (and is reaching the 1.0 milestone).
On the other hand, it is an open source project. While there are a few such projects today, the most important one is Google's webrtc.org. This is the code that gets into Chrome itself and the one being adopted by many (simply because it is already highly optimized for the main scenarios. And… it is free).
Google made it super simple for companies to adopt its WebRTC implementation – it uses a BSD open source license, making it quite permissive.
In the last 8 years, we've been treated like royalty, having access to a world-class media engine implementation for free.
The WebRTC roadmap we've seen so far from Google had 3 types of features in it:
At all times, these were available to everyone.
Google's intent in open sourcing WebRTC
When WebRTC was first introduced, it was about who has the balls to take something that up until that point was considered a core competency and make it freely available. This was a piece of technology that video conferencing companies protected fiercely, battling it out in their sales and marketing pitches, each claiming to have superior media quality. At the time, media quality wasn't in the "good enough" position that it is today:
Google took the calculated risk at the time:
Other vendors just followed along for the ride, making minor contributions here and there. Today, the leading (and only) media engine out there for WebRTC is still the Google one. At least in any meaningful way. So much so that Google's "competitors" are using Google's WebRTC stack directly in their products.
Where has this led Google?
WebRTC is a huge success. All modern browsers now support it. They interoperate (to a good extent). Today, in every industry and market where live or real time media is needed, WebRTC is playing an important role.
But what about Google and WebRTC? What success did Google exert from WebRTC?
Not a lot. Or at least not enough.
Google uses WebRTC in the following services it offers:
Let's see how well Google fared in each.
Hangouts / Google Meet
I use these two services almost on a daily basis. My calendar meetings default to them simply because they are so easy to schedule with Google Calendar. They offer what I need without any of the complexity.
But.
When you read or hear discussions about the video conferencing market, the vendors mentioned are usually Zoom and Cisco. Maybe Microsoft Teams or Skype for Business. Also Bluejeans and Pexip. A few others. Google isn't one of the top vendors that come to mind here. Even though their service is rather good.
Did I mention that almost all their competitors are using WebRTC as well?
Duo
Duo. Google's answer to Apple's FaceTime.
It is a standalone video calling app available on Android and iOS. It isn't installed by default on most smartphones and users need to actively find it, install it and make a decision on using it. Not an easy feat.
Why hasn't Google nailed and bolted it smack into Android? Probably due to carriers and not wanting to hurt their feelings (and Google's relationship with them). Otherwise, it makes no sense for Google to try and compete with the likes of FaceTime with one hand tied behind their backs.
Anyways… Duo is quite popular. Even on iPhone. It is ranked #7 in the social apps in the Apple App Store. This is higher than Houseparty (positioned somewhere at #17-20), which is rather interesting considering the high engagement Houseparty sees for its users.
Google doesn't share any stats on usage of Duo. The only thing we know is downloads and the number of people who ranked it – two stat points that are useless for social networks. This is quite telling about the real usage numbers – not publishing them means they aren't on par with the competition.
Curious myself, I've put out a quick poll on Twitter:
This is most definitely NOT the way to know or understand usage, but it is interesting.
My audience is probably tech savvy. Those answering the poll are highly likely to know about WebRTC. And still. We have over 50% who never tried it and 13% who use it. I'd consider 13% quite a lot and surprising. But it isn't scratching the surface of where it should be given that Google owns and controls Android.
Stadia
Google Stadia is something totally different. It is cloud gaming. The game is being processed and rendered in "the cloud" and gets streamed in real time to your device using WebRTC. Google even made modifications to its WebRTC implementation to make it a better fit for gaming.
The concept is great. The technology is solid. The experience is said to be good (if you're close enough to the data center and have a good network connection).
From the media, it seems like there are hurdles and challenges to the Stadia launch – articles like the one titled "Stadia's biggest problem? Google" or the one titled "Google Duo is the best video calling service you're not using" are rather common. Especially when put in comparison to the Apple Arcade launch.
Looking at Google Play store numbers for the Stadia app, things look rather disappointing: below 1M installs so far:
I have this feeling Google expected more.
Cloud gaming is still new and nascent. It will take time to happen and mature.
Look at an adjacent industry: Netflix introduced streaming in 2007. It took them 3-4 years for the stock to take notice and the service to mature enough to make a dent in the industry. Whereas today, every other production studio is launching their own streaming service.
Will Google have the patience with Stadia to get there or will it end up shutting it down like many other "experiments" it has been running throughout the years? The thought itself is making it hard for Google to entice game developers to jump on its platform.
Chrome Remote Desktop
Google apparently has a remote desktop service. It makes use of WebRTC's screen sharing capability and is called Chrome Remote Desktop.
While I haven't used it myself, this does seem to have quite a following. 10M+ installs on Android, and the Chrome extension shows ~4.8M users.
There is no apparent business model as the service is offered freely, and while the market has similar paid services, it doesn't seem to be big enough to attract a company like Google. This isn't interesting enough to justify an investment in WebRTC itself by Google.
YouTube Live
YouTube has the ability to host live events. And it does that with the help of WebRTC.
That said, its use of WebRTC isn't an impressive one – it is just a window into the service if you want to broadcast from your browser. It isn't used for live streaming to the users themselves. There's more on the technical side of it on webrtcHacks, where they analyze what goes on the wire with YouTube Live.
Here's the thing – just like Chrome Remote Desktop, this is Google exploiting a technology that is there. It isn't about leading the industry or the market with it. And as with Chrome Remote Desktop, it isn't of enough value to make it worth their while to invest in making WebRTC itself better.
–
WebRTC is now part of HTML5 and part of what browsers need to do, so Google needs to invest in having it in Chrome. How much to invest is the real challenge.
To WebRTC or not to WebRTC?
Meet, Duo and Stadia seem to be the leading factors in whatever Google is doing in WebRTC, other than dealing with complaints and feedback from the community.
Google Meet
Google Meet is using VP9. It is one of the only group calling services running in production at scale that have made that shift.
By harnessing WebRTC and owning its roadmap, Google is able to experiment and build their service faster than others can on WebRTC.
Two interesting examples we've had in the past year –
1. At Kranky Geek 2018, Google showed an experiment of using WebAssembly with WebRTC to improve video switching in a conference by distinguishing noise and speech:
Did it find its way into Google Meet? Maybe.
Then there's the new captioning feature in Google Meet, which Gustavo nicely explains. It uses the data channel in WebRTC to send back the results. Assuming anything in WebRTC needed to change to make this work better, Google could do that, as it owns the WebRTC roadmap.
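Just to make the pattern concrete, here is a rough sketch of what receiving transcription results over a data channel could look like on the client. The channel label and message format below are made up for illustration – Meet's internals aren't public.

```javascript
// Illustrative only: the 'captions' label and JSON payload are hypothetical,
// not Meet's actual protocol. It just shows the general pattern of shipping
// server-side speech-to-text results back over a WebRTC data channel.
const pc = new RTCPeerConnection(); // in practice, your existing connection

function renderCaption(speaker, text) {
  console.log(`${speaker}: ${text}`); // a real app would update the UI instead
}

pc.ondatachannel = ({ channel }) => {
  if (channel.label !== 'captions') return;
  channel.onmessage = ({ data }) => {
    const { speaker, text } = JSON.parse(data);
    renderCaption(speaker, text);
  };
};
```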
Google Meet, being predominantly a browser-based experience, will need to rely on changes made directly into WebRTC or things that can be bolted on top using WebAssembly.
Google Duo
Google Duo is a mobile-first service. It has browser support via Duo for Web, but for the most part, it is meant to be used on your smartphone.
Last month, Google announced some new features in Pixel phones, but also 3 machine learning based improvements for Duo:
Auto-framing:
"Auto-framing keeps your face centered during your Duo video calls, even as you move around, thanks to Pixel 4's wide-angle lens. And if another person joins you in the shot, the camera automatically adjusts to keep both of you in the frame."
We've seen Facebook do that in Portal, and a few video conferencing vendors have added it to their room systems.
Packet loss concealment:
"When a bad connection leads to spotty audio, a machine learning model on your Pixel 4 predicts the likely next sound and helps you to keep the conversation going with minimum disruptions."
Packet loss concealment using machine learning is something not many are doing (or publishing that they are doing).
Background blur:
"you can now apply a portrait filter as well. You'll look sharper against the gentle blur of your background, while the busy office or messy bedroom behind you goes out of focus."
Another nice feature, which is available in other services such as Zoom.
From the looks of it, auto-framing and background blur rely on hardware-based capabilities of the Pixel devices. Packet loss concealment… a lot less so.
Could we see machine learning based packet loss concealment find its way into the WebRTC codebase? (where it makes the most sense to add it instead of as an external piece of software). Not soon…
Google Stadia
For Google Stadia, Google went with QUIC instead of SCTP for the controls. It decided to make use of WebRTC for the live streaming itself.
But it wasn't enough. It needed the low latency of WebRTC to be even lower. So it added a Chrome experiment that enables it to reduce the playout delay in WebRTC. A few of my clients have already adopted it and are happy with the results for their own use case.
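For reference, the experiment surfaces in Chrome as a playoutDelayHint attribute on RTCRtpReceiver. It is non-standard and subject to change, so treat the following as a sketch rather than a stable API (and assume pc is an existing RTCPeerConnection):

```javascript
// A minimal sketch: ask Chrome for the smallest playout buffer it can manage
// on every incoming track. playoutDelayHint is experimental and non-standard,
// so feature-detect before using it.
pc.getReceivers().forEach((receiver) => {
  if ('playoutDelayHint' in receiver) {
    receiver.playoutDelayHint = 0; // value is in seconds; 0 means "as low as possible"
  }
});
```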
Google also tweaked and improved the VP9 decoder to make it work with 4K 60fps streams.
In the case of Stadia, the changes need to be made inside the WebRTC codebase to apply well for its service anywhere.
What is changing with Google's strategy about WebRTC in 2020?
WebRTC 1.0 is "out". Almost.
The latest CR (Candidate Recommendation) is dated December 13. Hopefully the last one before we go to the next step. It is interesting to look at the original charter of WebRTC:
It took somewhat longer to get here than originally expected, but we're almost there.
Google held its internal milestone of WebRTC 1.0 code complete two months back.
What now?
Besides housekeeping, bug fixes, and talking about WebRTC NV (the next version), I think a lot will change internally at Google around how they can make more of their investment in WebRTC and stay or become more competitive in the market. This being an open source project means that some features will need to be kept out of the open source codebase. Like the new packet loss concealment mechanism in Google Duo.
How is that achievable?
The leading factor is going to be adding more flexibility and control to developers over what WebRTC is and how it operates. Ideally by using WebAssembly and in the future by using WebTransport and WebCodecs, two new initiatives that will unbundle a lot of what WebRTC is.
This gives the ability to take improvements out of the baseline implementation and introduce them as proprietary features.
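To give a feel for the direction, here is a very rough sketch of that unbundling: encoding frames with WebCodecs and shipping them yourself over WebTransport instead of through WebRTC's built-in media pipeline. Both APIs were only early drafts at the time, so the shapes below follow the later proposals, and the URL is a placeholder.

```javascript
// Rough sketch only - not something that would have run as-is when this was written.
async function startUnbundledSender(url) {
  const transport = new WebTransport(url); // e.g. 'https://media.example.com:4433/session' (placeholder)
  await transport.ready;
  const writer = transport.datagrams.writable.getWriter();

  const encoder = new VideoEncoder({
    output: (chunk) => {
      // A real implementation would add packetization, sequencing and FEC;
      // here the encoded bytes are simply copied into a datagram.
      const payload = new Uint8Array(chunk.byteLength);
      chunk.copyTo(payload);
      writer.write(payload);
    },
    error: (e) => console.error('encode error:', e),
  });
  encoder.configure({ codec: 'vp8', width: 640, height: 480 });
  return encoder;
}
```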
The demarcation line of what will go into the WebRTC codebase by Google and what will be kept out of it is going to be the use of machine learning and artificial intelligence. Whenever a feature makes use of learned machine learning models, Google will most probably try to keep that implementation out of WebRTC. Why? Because it has the greatest value and the highest investment today.
Should this worry you?
Maybe, but it is to be expected.
Google has invested heavily in WebRTC. Without this investment nothing that we see and use today in WebRTC and take for granted would have been possible.
It is even surprising that it lasted this long…
WebRTC closes the basic gaps and requirements of media engines. It is good enough. If you want to improve upon it, differentiate or be at the cutting edge of the WebRTC technology, you will need to invest in it yourself as well. Relying only on Google isn't an option. And probably never really was.
Here's to an interesting and eventful 2020 with WebRTC!
The post Google’s private WebRTC roadmap for 2020 = AI appeared first on BlogGeek.me.
Review of Chrome's migration to WebRTC's Unified Plan, how false metrics may have misguided this effort, and what that means moving forward.
Continue reading Is everyone switching to Unified Plan? at webrtcHacks.
Conference calls were always complex. WebRTC might have made joining them simpler, but it does come with its own set of headaches.
I've been in the industry for the last 20 years or so (a dinosaur by now). I had my share of conference calls that I joined or scheduled. As humans, we tend to remember the bad things that happened. The outliers. There are many of those with conferencing.
When I saw this Dilbert strip the other day, it resonated well with the "Supporting WebRTC" course I've been working on these past few months:
One of the things I am dabbling with now in the course is media quality issues. This was spot on. So of course I had to share it on Twitter, which immediately got a colleague to remind me of this great Avengers mock video conference:
The funny thing is that this still occurs today, even if people will let you believe networks are better and these problems no longer exist. They do. Unless you are Zoom – Zoom always works. At least until it doesn't…
What can possibly go wrong?
This one was just published today, so I couldn't resist…
A modern WebRTC service today will have a few potential failure points:
Let's try to break these down a bit.
1. The cloud vendor's infrastructure
Here's a secret. AWS breaks from time to time. So do Azure and Google and Digital Ocean and practically everyone else.
Some of these failures are large and public ones. A lot more are smaller and silent ones that aren't even reported in the main status pages of these cloud vendors. We see that in testRTC – as I am writing these words, we are struggling with a network or resource issue with one of the cloud vendors that we are using, which affects one of our services (thankfully, we're still running for most of our customers).
Your service might be unreachable or experiencing bad media quality because of the cloud vendor you are using. Fortunately, in most cases, these are issues that don't last long. Unfortunately, these issues are out of your control.
2. Your own infrastructure
This one is obvious but sometimes neglected. What you run in your backend and how the client devices are configured to use it has a profound effect on the quality of experience for your users.
I've seen anything from poor ICE server configuration, through bad scaling decisions, to machines that just need a reboot.
WebRTC has a lot of moving parts. You need to give them good care and attention.
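ICE configuration is a good example of a small piece of your own infrastructure that quietly breaks things for users behind restrictive networks. Here is a minimal sketch of the client side of it – the server URLs and credentials are placeholders, not real servers:

```javascript
// A minimal sketch of client-side ICE configuration. A missing, misconfigured
// or unreachable TURN server is one of the most common self-inflicted
// connectivity problems. All URLs and credentials below are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: 'turn:turn.example.com:443?transport=tcp', // TURN over TCP/443 helps behind strict firewalls
      username: 'user',
      credential: 'secret',
    },
  ],
});

pc.oniceconnectionstatechange = () => {
  // 'failed' here often points at unreachable or misconfigured TURN servers
  console.log('ICE connection state:', pc.iceConnectionState);
};
```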
3. The user's network
Now we head towards the things that you have no control over… and primarily that is the user's network.
You. don't. have. control. over. what. network. your. customer. uses.
He might be over a poor 3G connection (yes, we still have those). Or just be too far from the closest WiFi hotspot he is connected through. Or any other set of stupid issues.
In enterprises, problems can easily include restrictive firewall configurations or use of an HTTP proxy or a VPN.
Then there's the congestion on the user's network based on what OTHER people are doing on it.
Here, what you'll need is the ability to understand the issue and explain it to the user, to help him squeeze more out of the network he is using.
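Much of that understanding starts with the statistics WebRTC already exposes. As a rough, illustrative sketch (field availability varies across browsers and versions, and a real monitoring setup would sample these periodically and ship them to your backend), assuming pc is an existing RTCPeerConnection:

```javascript
// A rough sketch of pulling a few network-health indicators out of getStats().
// Treat it as illustrative rather than production-ready.
async function sampleNetworkHealth(pc) {
  const report = await pc.getStats();
  report.forEach((stat) => {
    if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
      console.log('video packets lost:', stat.packetsLost, 'jitter:', stat.jitter);
    }
    if (stat.type === 'candidate-pair' && stat.state === 'succeeded') {
      console.log('round-trip time (s):', stat.currentRoundTripTime);
    }
  });
}
```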
4. The user's browser
Here's another challenging one.
The first one is a bit obvious – modern browsers automatically upgrade. This means you will end up with a new browser running your app one day without Apple, Google, Microsoft or Mozilla calling you to ask if you agree to that. And yes – these upgrades may well change behavior for customers and affect media quality.
Then there's the opposite one – in enterprise environments, IT administrators sometimes lock browser versions and don't let them upgrade automatically.
The biggest challenge we're now facing, though, is Google experiments, like the one conducted with mDNS in WebRTC. Google is conducting experiments in Chrome on live users sporadically. You have no control over these and no indication where and how they are conducted. The whole purpose of this is to surface issues. Problem is, you won't know if it breaks things for you until someone complains (or unless you monitor your deployment closely).
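The mDNS experiment is a good example of something you can at least detect on your own: when it is active, host candidates carry ephemeral ".local" hostnames instead of private IP addresses. A small sketch (again assuming pc is an existing RTCPeerConnection):

```javascript
// Log ICE candidates that have been obfuscated with mDNS (".local" hostnames
// instead of local IP addresses) - a quick way to see whether the experiment
// is active for a given user.
pc.onicecandidate = ({ candidate }) => {
  if (candidate && candidate.candidate.includes('.local')) {
    console.log('mDNS host candidate:', candidate.candidate);
  }
};
```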
5. The user's device
The device the user uses affects quality. Obviously.
Tried recently to use an iPhone 4 with a WebRTC service?
The CPU, memory, software and other processes your user has on the device will affect quality. Add to that the fact that certain devices and peripherals behave differently and have their own known (or unknown) issues with WebRTC, and you get another minor headache to deal with.
The things we can control in our WebRTC conference calls
Here's where we started – a modern WebRTC service today will have a few potential failure points:
In WebRTC calls, you can control your own infrastructure. And you can build it to work around many cloud vendor infrastructure issues.
You can try to add logic that deals with the user's device.
You can probably deal with many of the user's browser issues by more testing and running their unstable and developer preview releases.
The things we can't control in our WebRTC conference calls
The main thing you can't control is the user's network.
What you can do here is to provide better support, assisting your users in finding out the issues that plague their network and suggesting what they can do about it.
Two things you will need to get that done: tooling and knowledge.
The tooling side I'll probably touch on in a future article. The knowledge part is something I have a solution for.
How can you better serve your customer?
In the last few months I've been working on the creation of a new "Supporting WebRTC" course. This course is geared towards support people who get complaints from users about their service and need to understand how to help them out.
The course started through conversations with support teams in widely known providers of WebRTC services, which turned into a suggested agenda that later turned into a real course.
There are already close to 6 hours of content split into 33 lessons, with more to be added in the next month or so.
I've decided to open up registration to the course to everyone and not limit it to the pre-launch users I've shared it with. I feel it is the right time and that the content there is rock solid.
If you want to improve your knowledge or your support team's knowledge of WebRTC, with a focus on getting them to make your users happy and keep using your service, then check out my course.
Register to the Supporting WebRTC course
The post WebRTC conference calls. What could possibly go wrong? appeared first on BlogGeek.me.
WebRTC isn't only about guest access or even interoperability. It is about the whole infrastructure and service.
My article last month about guest access, the use of WebRTC for it AND how it is now used for "interoperability" between Microsoft and Cisco had its nice share of feedback and comments. Both on the article and off of it in private conversations. I think there is another trend that needs to be explained, which in a way is a lot more important. This one is about video conferencing hardware being dominated by HTTP and WebRTC. This, in turn, is affecting how modern video infrastructure is also shifting towards WebRTC.
Where video conferencing hardware meets WebRTC
Check out this recent session from Kranky Geek last month. Here, Nissar Mahamood from Lifesize explains how WebRTC got integrated into their latest meeting room systems (=hardware), getting it to 4K resolutions.
It is a good session for anyone who is looking at embedded platforms and systems or needs to customize WebRTC for his own needs, using it outside of a web browser.
There are two things in this video that surprised me, for two very different reasons:
I started seeing more and more developers using GStreamer as part of the technology stack they use with WebRTC. On Linux, your best bet with processing media using open source is either ffmpeg or GStreamer. Due to the real time nature of WebRTC, GStreamer is often the more sought after approach. In the past year or so, it also added WebRTC transport, making it a more viable option.
In many cases, the use of GStreamer is for connecting non-WebRTC content to WebRTC or getting content from WebRTC to restream it elsewhere. Lifesize has done something slightly different with it:
As the illustration above from their Kranky Geek session shows, Lifesize replaced the media engine (voice and video engines) part of WebRTC with their own, built on top of GStreamer. They don't use the WebRTC parts of GStreamer, but rather its "original" parts, replacing what's in WebRTC with their own.
It is surprising, as many would use WebRTC specifically for its media engine implementations and throw away its other components. Why did they take that route? Probably because their existing systems already used GStreamer that is heavily customized, or at the very least fine-tuned, for their needs. It made more sense to keep that investment than to try and reintroduce it into something like WebRTC.
This approach, of taking the WebRTC source code and modifying it to fit a need isnāt an easy route, but it is one that many are taking. More on that later.
Selecting Node.js as the client application environment
We've been so focused on development with WebRTC on browsers and mobile that embedded non-mobile platforms are usually neglected. These have their own set of frameworks when it comes to WebRTC.
The one selected by Lifesize was Node.js:
They created a Node.js wrapper that interfaces directly with the WebRTC native C++ "API", with an effort to expose the same JS API they get in the browser for WebRTC.
Why? Their meeting room systems now use HTML for their visual rendering, and the application logic is driven by JavaScript.
Why JavaScript?
Because of Atwoodās Law
any application that can be written in JavaScript, will eventually be written in JavaScript
Lifesize simply turned their application into one that can be written in JavaScript.
This is doubly true when you factor in the need to support web browsers where you have WebRTC with a JS API on top anyways.
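To make this more concrete, here is a minimal sketch of what application code can look like once such a wrapper exists. The addon name and STUN server are placeholders I made up for illustration; the point is that the calls mirror the browser's WebRTC JS API, so the same logic can run on a device and in a browser.

```typescript
// Hypothetical sketch: a native addon exposing a browser-like WebRTC API to Node.js.
// "native-webrtc-addon" is a placeholder module name, not a real package.
import { RTCPeerConnection } from "native-webrtc-addon";

async function createOfferSdp(): Promise<string> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.org:3478" }], // placeholder STUN server
  });

  // Same calls you would make in the browser.
  const channel = pc.createDataChannel("control");
  channel.onopen = () => channel.send("hello from the room system");

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return pc.localDescription!.sdp;
}
```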
The hidden assertion of WebRTC cloud infrastructure
What I like about the slide above is the cloud with the wording "Lifesize Cloud Service" in it. The fact that Lifesize is connecting to it via Node.js speaks volumes about where we are and where we're headed versus where we're coming from.
A few years ago, this cloud service would have been based on H.323 or SIP signaling.
H.323 is now a dead end (something that is hard for me to say or think – I've been "doing" H.323 for the better part of my 13 years at RADVISION). SIP is used everywhere, but somehow I don't see a bright future for it outside of PSTN connectivity (aka SIP Trunking).
Lifesize may or may not be using SIP here (SIP over WebSocket in this case) – due to the nature of their service. What I like about this is how there is a transition from WebRTC at the edge of the network towards WebRTC as the network itself. Let me try and explain –
Video conferencing vendors started off looking at WebRTC as a way to get into browsers. Or as a piece of open source code to gut and reuse elsewhere. If one wanted to connect a room system or a software client to a guest (or a user) connecting via WebRTC on a web browser, this would be the approach taken:
(I made up that term transcoding gateway just for this article)
You would interconnect them via a gateway or a media server. Signaling would be translated from one end to the other, and media would be transcoded as well. This, of course, is expensive and wasteful. It doesn't scale.
With the growing popularity of WebRTC and the increasing use and demand for browser connectivity to video conferences, there was/is no other way than to rethink the infrastructure to make it fit for purpose – have it understand and work with WebRTC not only at the edge.
Thatās when vendors start trying to fit WebRTC paradigms into their infrastructure:
(guess what? Translating gateway? Also made up just for this article)
Things they do at this stage?
There are a lot of other minor nuances that need to be added and implemented at this stage. While some of these changes are nagging and painful, others are important. Adding SRTP simply means adding encryption and security – something that is downright mandatory in this day and age.
The illustration also shows where we focused on making the changes in this round – on the devices themselves. Weāve āupgradedā our legacy phone into a smartphone. In reality, the intent here is to make the devices we have in the network WebRTC-aware so they require a lot less translation in the gateway component.
Once a vendor is here, he still has that nagging box in the middle that doesnāt allow direct communication between the browser and the rest of his infrastructure. It is still a pain that needs to be maintained and dealt with. This becomes the last thing to throw out the window.
At this last stage, vendors go āall inā with WebRTC, modifying their equipment and infrastructure to simply communicate with WebRTC directly.
This migration takes place because of three main reasons:
That third reason is why once a decision to upgrade the infrastructure of a vendor and modernize it takes place, there is a switch towards adopting WebRTC wholeheartedly.
This isn't just Lifesize
Microsoft took the plunge when adding Skype for Web and went all in with Microsoft Teams.
With their hardware devices for Teams they simply support web technologies in the device, with WebRTC, which means a theoretical ability to support any WebRTC infrastructure deployed out there and not only Teams.
We've seen the same from Cisco recently.
BlueJeans and Highfive both live and breathe web technologies.
Forgot to mention you? Put a comment belowā¦
There were other good Kranky Geek sessions around this topic this year and last year. Here are a few of them:
Hereās what seems to be the winning software stack that gets shoved under the hood of video conferencing hardware these days. It comes in two shapes and sizes:
Linux
This gives a vendor a hardware platform where web development is enabled.
Android
This diverts from the web development approach a bit (while it does allow for it). That said, it opens up room for third party applications to be developed and delivered alongside the main interface.
Linux or Android, which one will it be? Depends on what your requirements are.
A word about Zoom in this context
Why isn't Zoom using WebRTC properly?
I donāt know. But I can make an educated guess.
It all relates to my previous analysis of Zoom and WebRTC.
Zoom was stuck with the guest access paradigm, and taking even the first step was too expensive for them for some reason. Placing that interworking element to connect their infrastructure to web-enabled Zoom clients didn't scale well with pure WebRTC. It required video transcoding and probably a few more hurdles.
At their size, with their business model and with the amount of guest access use they see with the Zoom client on PCs, it just didnāt scale economically. So they took the WASM route that they are following today.
It got them onto browsers, with limited quality, but workable. It got them an understanding of WASM and video processing in WASM that not many companies have today.
And it put them on an intersection in how they operate in the future.
Would they:
If I were the CTO of Zoom, I am not sure which of these routes Iād pick at this point in time. Not an easy decision to make, with a lot to gain and lose in each approach.
Need help figuring this out?
This whole domain is challenging. Getting WebRTC to work on devices, around devices, in new or existing infrastructure. Deciding how to define and build a hardware solution.
Contact me if you need help figuring this out.
The post The software inside video conferencing hardware is… WebRTC appeared first on BlogGeek.me.
Our best Kranky Geek event ever. Or is it just that I have a short memory?
Earlier this month marked the highlight of the year for me. It happens every year now since 2015. The Kranky Geek event takes place in San Francisco. The event started by mistake and had become an immensely taxing and enjoyable undertaking for me.
WebRTC is a niche of an industry that is here to change the world and challenge how we communicate online with each other in real time. Kranky Geek became a place where our WebRTC niche meets, mingles and discusses many aspects of what it is that we're doing. A lot of it is technology – the learnings people had and the scars they have to show for it. Some of it is more future looking, where new requirements are shared and semi-pitches are made. It is also a place where we get to talk and interact with the people behind the browser implementations.
I decided to share this slide about how niche WebRTC is:
This shows Stack Overflow Trends for WebRTC, VoIP and SIP. It is the percentage of questions each month that carry these technologies as tags. WebRTC is higher than either SIP or VoIP by a factor of 3, which is nice. But overall, we're still talking about 0.05% of the questions, which isn't much. WebRTC is a niche, but an important one (at least to me).
What is Kranky Geek all about
Kranky Geek is about the current state and the immediate future of the WebRTC ecosystem. It is first and foremost an event for developers.
Hereās what I understood at a client meeting earlier in that same week. After the meeting, the client comes to me and tells me how he is using the videos from past Kranky Geek events. Whenever there is a technical detail or a topic he knows is covered by one of our past sessions, he just goes and searches the videos to find that 2-3 minutes he needs.
It got me thinking. It is quite similar to how I use it. I end up referring people to a specific Kranky Geek video at least once a month if not more.
In the end, we are into learning and expanding the knowledge available out there about WebRTC.
The obligatory thanks
The Kranky Geek event isn't funded by the audience's tickets. These are practically free. We have a low registration fee that is a kind of seriousness fee, which makes it easier to estimate the actual attendance rates we will see. That fee ends up being donated to good causes. In the case of Kranky Geek, we've been giving that money to GDI.
The event is only possible due to its sponsors.
–
There are a few people and companies that I need to thank for the Kranky Geek 2019 event.
First, to my partners in crime – Chris and Chad. Our different opinions and dispositions make a good mix for running Kranky Geek.
To Google and the Chrome WebRTC team at Google.
Google have been there with us from the beginning. They assist us tremendously with the logistics, their attendance and their sessions throughout the years.
To our sponsors of the event:
Their contribution is an important part of us being able to do this every year. I am also very happy that without exception, they treat their speaking slot and our rigorous process and dry runs seriously.
We had a new type of sponsor this year: vendors who wanted to be part of the event but didn't speak (they came after we had a full agenda already).
Voximplant is a CPaaS vendor with WebRTC technology – one you should follow closely if you arenāt already.
Jamm just came out of stealth, and wanted to do that as part of our event.
What you can find in this year's Kranky Geek sessions
We started off planning the event with a lot of AI in mind. This is what we had last year, and the trend obviously continues this year as well. It will probably still be a trend 5 years from now.
When we actually looked at our agenda, we found a nice mix of WebRTC topics, covering things from WebRTC specifications and best practices, through customizing and modifying WebRTC in production to new use cases and AI.
It is good we did a dry run with all of our speakers, since I didn't really have the time and attention to listen to them during the event itself. I learned a lot of new things about WebRTC from the dry runs we had, and I am sure you will find some very interesting and useful sessions here as well.
All of the videos are already available on YouTube and I encourage you to both subscribe and watch our 2019 playlist:
See you next year?
Maybe.
We never really know if we will be having a next event. This is part of the fact that we're not professional event organizers. We do it because we enjoy it. We also rely on others to make this happen.
If you are interested in a Kranky Geek 2020, then do one of the following things (or all of them):
The post Kranky Geek SF 2019 – post event summary appeared first on BlogGeek.me.
There are different ways to deal with interoperability. With WebRTC, the one selected is relying on the browser and offering guest access. Interestingly, while the industry is headed in that direction, the elephants are also headed… elsewhere.
When I first started with this blog, over 7 years ago, I wasn't really sure where I was headed with it. What I did know is that I had to write something about WebRTC to get it off my chest. WebRTC was the reason I stopped working at RADVISION and moved on. You see, as the CTO of my business unit I was told there's no budget to invest in researching what we can do with WebRTC. Somehow, the future wasn't important enough, which got me to understand there's no future for a CTO there either.
I ended up deciding to write three posts – what is WebRTC, why signaling is irrelevant, and what a future meeting room would look like.
That third article? Here it is, from March 2012: The Post-WebRTC Video Conferencing Room System
Weāre still slowly crawling towards that goal.
A short history lesson: the early days
For many years video meetings were an in-company luxury. A dubious luxury at that.
Most video conferencing systems were based on a signaling protocol called H.323 and were *supposed* to be "interoperable". This didn't work that well, and in the end, companies tended to purchase all of their hardware from a single vendor. Multi-vendor was possible, but always at a loss of features or capabilities – either because these were proprietary to begin with or because interoperability is such an elusive target.
What was a person to do when he needed to communicate with someone *outside* the company? Dial his phone number. If video was what was needed, then the IT department had to be involved – on both sides. Fooling around with dialing plans, checking that the video conferencing devices interoperate, and then hand holding the users throughout that session. This happened not only in regular companies but also when the companies in question were video conferencing vendors.
Most systems at this point were hardware based. You had to purchase āmeeting roomsā and install them.
The system was totally broken.
Rise of the federation
At some point a new concept started cropping up. If I recall correctly, Microsoft came up with it, in their Microsoft Lync service. The idea was to create federations.
Microsoft Lync was a semi-standards based service. It was SIP based in nature, but different – connecting to it was harder than connecting to other SIP devices and services as a lot of the spec was proprietary. Being Microsoft, they had a largish software-based market share, but one that was left unconnected.
Each company installed, operated and managed its own Microsoft Lync service. You couldnāt just reach out to another user on another installation directly. What you could do is involve the IT people (on both ends – yes), and get them to configure both installations to be aware of one another. This was referred to as a federation.
Think about it.
Thousands and thousands of installations. Each an island of its own. Each time you wanted to reach out to someone from a new island, you had to ask permission and get it setup – to federate with that other island of install base.
And guess what? This never really worked either. Not in real life. And not even for the video conferencing vendors themselves.
The friction was just too high to make this useful for the workforce.
Introducing the software client
Until a couple of years ago, video conferencing was a thing for hardware devices.
20 years ago? These devices were mostly built around DSPs and weird embedded operating systems.
15-20 years ago? The vendors learned about Linux and were comfortable enough to use it (!) for an embedded application such as video conferencing. The main concern was usually the real time nature necessary in encoding and decoding video.
About 15 years ago, the notion of being able to use a software client on a Windows operating system to join a video conference (not conduct a meeting – just join one) started to crop up.
The idea was this:
This brought with it the headaches of having to deal with unmanaged networks – having employees (mainly managers) connect from their home, coffee shops or the occasional crappy hotel network.
This new capability started changing the business model around video conferencing. How do you license the software in a world where what was sold was hardware through channels and VARs?
What it also did was change behavior patterns. People now didnāt go to meeting rooms to join a call – they joined from wherever they wanted. Once the video client was installed in their PC they were relatively free.
It had another use case to it: technically, you could get someone to connect as a guest to a meeting. All he needed to do was install the specific software client of the specific video conferencing vendor from the specific landing page of the specific enterprise who purchased the video conferencing system and connect.
If you conducted a meeting with a company who had an installation of a specific vendor, then meeting with another company using the same video vendor usually meant you didnāt have to install the client again – unless it needed an upgrade of sorts.
Since these were early days, there were many installation issues with these clients. When it worked it was great, but when it didn't…
Enter the cloud
At around the same point in time, cloud services started taking potshots at the video conferencing industry. They didn't call it video conferencing but rather web conferencing. Why? Because the center of the service wasn't an on-premise hardware video system installation, but rather a software-based cloud service.
It wasnāt as performant and the quality was lower, but it was easier to use. Sadly, video conferencing companies didnāt see it as an existential threat.
Anyway, these services assumed that all users download and install a software client to connect to these web conferences.
Since this was their bread and butter, the idea of having guests connect became more prevalent and acceptable.
At any given point in time, I had on my laptop at least 3 such software clients. Services like WebEx, GoToMeeting and AT&T Connect.
Two challenges these services faced:
Out of these two challenges, Zoom came and solved the first one. For the most part, the first experience of a user with Zoom is by being invited to a Zoom meeting. By someone. Not necessarily an employee in a company who licensed Zoom – just by someone.
The change in business model, as well as the focus on the first time experience (making it simple), got Zoom to where it is today.
The problem that remained though is the software installation piece. That's friction, and the browser-based solution that Zoom is offering is still subpar compared to what can be done in a browser.
The WebRTC guest access
In the past 5 years, what we've seen is that every video conferencing vendor except for Zoom has made it towards WebRTC.
Vendors still offer software clients for ongoing use of their service and for providing an improved experience, but all of them have WebRTC access as well.
Need to have someone join a session? Create a calendar invite and get a meeting link. That link will allow you to either install a software client or just use the browser with WebRTC.
This has become the norm to the point that in many cases, I get invited to meetings just by receiving a URL on one messaging service or another.
Just in the last year we've seen UCaaS vendors joining this game by offering their own video conferencing services, usually called Meetings:
The race towards having video bolted on top of voice meetings and web conferences now relies on WebRTC support and guest access as key features.
The nice thing about this? Thereās no need to interoperate, federate or connect the islands of services. Need someone to join a meeting? Just send them a link. They wonāt need to install anything, just click and be connected. Magic.
Today – almost all services offer simple to use guest access via the browser using WebRTC.
Room systems "interoperability" in 2020
This all leads to this interesting announcement by Microsoft and Cisco. In two carefully crafted posts/announcements, the two companies appear to be collaborating more than ever. The plan?
Offer direct guest access for a room system of one vendor to meetings of the other vendor.
What does that mean? If you are invited to a Microsoft Teams session as a guest, you should be able to join it from a Cisco WebEx Room device. And vice versa.
There is no federation here – just pure use of an existing room system to join "any" meeting.
From Ciscoās announcement:
Cisco and Microsoft are working together on a new approach that enables a direct guest join capability from one anotherās video conferencing device to their respective meeting service web app (WebRTC based).
From Microsoftās announcement:
Cisco and Microsoft are working together on a new approach that enables meeting room devices to connect to meeting services from other vendors via embedded web technologies. Microsoft and Cisco will be enabling a direct guest join capability from their respective video conferencing device to the web app for the video meeting service.
A few interesting initial thoughts:
It is about time we got there.
The post Video meetings guest access: the new frontier of interoperability appeared first on BlogGeek.me.
We are now almost 8 years into WebRTC, and it seems like the same mistakes developers made 8 years ago are still being made today. Here are some common WebRTC mistakes that I see on a daily basis.
Last week, I took a quick business trip to Beijing for Agora.ioās RTC Expo event. I was invited by Agora.io to present there about a WebRTC topic, and I decided on āCommon WebRTC mistakes and how to avoid themā. Why? Because it fits nicely with the fact that Iāve been promoting my WebRTC course recently, but also because it is an issue that crops up on a weekly basis.
RTC Expo is an interesting event. To begin with, it is a local event in China. It runs in three separate tracks and it was well attended – the rooms were usually filled to the brim during sessions. The number of foreigners could be counted on the fingers of a single hand. Agora.io offered live translation there, automated using Google Translate. During every session, the spoken words were transcribed and then translated to either Chinese or English, showing both languages to the side of the big screen. The results were mixed, and at times funny. It allowed understanding the gist of what was said but required some grasp of the language spoken by the presenter.
For my own presentation, I decided to go with a simple structure:
This structure gave me the ability to fit the content to the length of the session quite nicely, while driving home the three main concerns:
There are a lot more mistakes, but these definitely make it to the top of the list.
If you are interested in learning more, then here is the deck I used:
Common WebRTC mistakes and how to avoid them (RTC Expo 2019) from Tsahi Levent-levi
When the video of the session is published, I will add it here as well. And if you are interested in solving such issues and reducing the risks of your WebRTC project, then I can always suggest my WebRTC courses.
The post Common WebRTC mistakes and how to avoid them [Slidedeck] appeared first on BlogGeek.me.
I am in the process of launching a WebRTC support course, alongside my WebRTC training for developers. This is in part taking place because of the work we've been doing at testRTC lately.
Supporting a technology is different than developing it. This is something I learned only recently. It is something I should have known some 20 years ago already. You learn something new every day.
I was always on the software development track. Be it as a developer, project lead, product manager or CTO. It was all about defining, designing, implementing and maintaining communication software. On good days, I interacted with product managers and developers. On bad days, I had to deal with support people (not because they are bad people, but because it meant we had product issues and bugs to deal with). On really bad days, I had to talk to a client who was on an escalation path.
A lot of that work with clients and support teams is frustrating as hell for developers. Oftentimes, there are two disconnected conversations going on, where both sides try to talk to each other but somehow thereās a mismatch in the languages.
This was never a fun experience for me.
Learning the trade of technical support
Earlier this year, at testRTC, where I am a co-founder and the CEO, we partnered with Talkdesk, developing a new product to suit their needs. For the first time, my customers weren't other developers, devops or entrepreneurs but rather support teams. What we essentially built was a network testing tool for WebRTC, which enabled Talkdesk's support team to more easily collect and analyze network statistics from their clients. The end result for Talkdesk? This greatly reduced their turnaround time on incidents. This product is now being trialed by a few other customers, which is great.
I learned a lot from this experience – working with support teams, understanding their challenges and getting feedback from them on our initial alpha release and from there to the product launch itself.
At roughly the same timeframe, I found myself consulting more to support teams through BlogGeek.me, which was a different experience. The main bulk of my consulting either revolves around architecture and troubleshooting development issues in communication technologies, or around roadmapping and strategizing communication products. The people you deal with are different in each case, and trying to assist support people, instead of making them go away as I did as a developer in my distant past, is an interesting experience (something that I should have experienced years back, when I was still young and beautiful).
Where is all that leading to?
New upcoming Supporting WebRTC course
My next pet project at BlogGeek.me is a new course. This one is geared towards support people.
It isnāt a subset of the developers WebRTC courses that are already available, but rather a brand new course, created and recorded from scratch.
Why?
Because support teams need something different.
They donāt really need to know the internals of SRTP, or a detailed explanation of the patent situation of video codecs, or a lot of other technicalities. What they need is a basic understanding of WebRTC and then a lot of information around how things fail (as opposed to how they work).
If you want a peek at the agenda for this course, then it is available here.
I am in the process of creating the materials for the course and will switch gears towards recording and putting this live in two or three weeks.
There are 3 options here:
Today, I have 3 WebRTC courses for developers:
If you want to learn more about them, you can check the course syllabus (PDF).
Are you an employee and not a decision maker?
I think this doesn't happen enough:
The part not happening enough is employees asking to take classes. Asking to get trained in technologies they need to get their job done. Why do I think that? Because I used to be like that as a developer myself. I was passive, waiting for things to happen to me, rarely going and asking for the tools to assist me in my work.
More often than not, I see managers interested in enrolling their employees in my courses. From time to time, there will be a developer who thinks this is important enough to go and ask for permission to take the course – or even more – go suggest the company send the whole team to enroll in the course.
Think you need this course but donāt think management will approve? Try asking them. You might be surprised by the reply you get.
The post Are you supporting WebRTC or developing with WebRTC? appeared first on BlogGeek.me.
I find myself looking at streaming platforms somewhat more lately. A topic that crops up from time to time is access to āopen dataā. Many write about the merits of open data but a lot less is written about the challenges related to making such data accessible and available.
Iāve asked Tom Camp, technical author and developer at Ably Realtime, a data stream network and realtime API management platform, to give a few pointers around the challenges in accessing open data streams.
Why realtime open data is useful
A well-known example illustrating the benefits of realtime open data is Transport for London and the "Citymapper effect". Deloitte estimates that the 13,000 developers who started using this data created 600+ apps (including Citymapper), contributing £130m to the city's economy within just a few years of the scheme's launch. So it's surprising large-scale examples like this are so rare (if you know of any similar success stories / good sources of realtime data please comment at the end of this article). The EU's data commission has also noted a distinct lack of publicly available, value-generating data sources (think traffic data, weather information, realtime financial updates) due to the costs involved in realtime distribution. In the UK, the Office of National Statistics (the ONS) has noted a widespread lack of data sources in realtime. Headlines aside, ask most developers and you'll get the same answer.
By allowing developers to publish and consume realtime open data feeds on Ably's API Streamer (a realtime API Management Platform), Ably's Open Data Streaming Program aims to make public realtime data easier to work with. Work setting this in motion has involved identifying the most useful, publicly-available realtime data, converting it to a single realtime feed, and inputting it to the Ably Hub, which then re-distributes it to users (for free) in whichever realtime protocol and data structure they need. The process brought us into contact with hundreds of "open" realtime data sets, and we soon became veterans in identifying and solving common problems developers experience when trying to consume realtime data feeds. Recurring obstacles range from a lack of "real" realtime information, to a lack of protocol support, to heterogeneous data structures.
Below we isolate three key potential problems to bear in mind when accessing "realtime" data sources, and share what we learnt about how to overcome them.
1. Polling takes up time and resources
Despite the fact many online experiences (B2C, C2C and B2B) now take place in realtime, we still see a lack of push-based realtime APIs. Developers have to poll for data if they want updates in near realtime. The internet's infrastructure is built on REST APIs, which fall short in terms of providing event-driven online experiences.
Let's take transport systems as an example. Although transport systems are subject to change at any minute, even here we notice a lack of realtime APIs that would be better suited to reflect this. When we looked into this we found just 2 out of 10 cities provided actual realtime APIs. As it happens, these were the two cities with some of the best journey-planning and transport sharing apps.
How do realtime APIs help? Consider an application which is meant to keep end-users updated with train arrival times, subject to change (as the city dwellers amongst us know), at any moment. Using pull-based protocols, those wanting to receive the information will need to poll the providerās endpoint every few seconds for current information, with obvious impacts on server load as well as usability.
Leave it too long and you risk missing information on a train arriving at a different platform, and have the end user miss the train:
Make it too short, and youāre using a lot of bandwidth making requests for unchanged information, with each message also having a fairly large overhead:
What can we do about it? We can recommend data be provided using push-based systems, to lighten the engineering load both for producers, who only need to provide the initial connection point, and for subscribers, who no longer need to worry about intermittently polling the provider's endpoint. The result is instantaneous updates and far lower bandwidth costs.
Unlike pull systems, push bandwidth costs remain sustainable even when thousands of developers start using the data. For developers wishing to add realtime to their apps, look out for push-based APIs, such as WebSockets and MQTT, that allow for persistent, bidirectional connections. But while we are persuading data producers of the benefits of providing these, we can – up to an extent – stick with long-polling BUT optimize how we long-poll for maximal efficiency.
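To illustrate the difference, here is a rough sketch of the two approaches for the train departures example above. The endpoints are placeholders invented for illustration, not a real provider's API.

```typescript
// Sketch only: placeholder endpoints, not a real transport provider's API.

// Pull: poll every few seconds, paying the request overhead even when nothing changed.
function pollDepartures(render: (data: unknown) => void): void {
  setInterval(async () => {
    const res = await fetch("https://transport.example.org/api/departures"); // placeholder
    render(await res.json());
  }, 5000);
}

// Push: subscribe once, receive an update only when a departure actually changes.
function subscribeToDepartures(render: (data: unknown) => void): void {
  const socket = new WebSocket("wss://transport.example.org/realtime/departures"); // placeholder
  socket.onmessage = (event) => render(JSON.parse(event.data));
}
```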
2. Data structures are fragmented
Developers looking for realtime updates have to spend a lot of time familiarizing themselves with each provider's chosen protocol, be that HTTP or something like STOMP, working out its implementation, and how to convert this data into a unified format suited to a particular app or service. More widely though, and again using transport as an example, there is also a fundamental lack of standardization in the way transport providers structure their data. Some companies provide extended information – carriage formation, up-to-the-minute ETAs, and seat availability – while others scrape by with the bare minimum of a time and a transport mode ID. A lack of standards across sectors means developers wanting to expand the reach of their app (i.e. all developers) eventually come up against a host of additional problems to solve. With each new data structure developers need to work out which data corresponds to what and how to correlate similar data, in addition to allowing for varying degrees of accuracy.
A good illustration of this lack of cohesion is the variety of options for what has caused a disruption. GTFS Realtime includes twelve possible reasons for delays. NationalRail on Darwin, however, has a whopping 496 options (I kid you not). If open data is to have a meaningful impact on different sectors, we recommend industry-wide agreements on what data to provide. For developers, in the meantime, it's a matter of knowing how to sift through the sources.
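As a small illustration of what that sifting looks like in practice, here is a sketch that collapses two very different "cause of disruption" vocabularies into one coarse app-level type. The payload shapes and mappings below are simplified assumptions, not a complete treatment of either feed.

```typescript
// Sketch: normalizing heterogeneous "cause of disruption" fields into one app-level type.
type DisruptionCause = "weather" | "technical" | "industrial-action" | "unknown";

function normalizeCause(provider: "gtfs-rt" | "darwin", raw: number | string): DisruptionCause {
  if (provider === "gtfs-rt") {
    // GTFS Realtime uses a short, fixed enum of cause codes.
    const map: Record<number, DisruptionCause> = { 8: "weather", 3: "technical", 4: "industrial-action" };
    return map[raw as number] ?? "unknown";
  }
  // Darwin exposes hundreds of free-form reason codes; bucket them coarsely by keyword.
  const text = String(raw).toLowerCase();
  if (text.includes("weather") || text.includes("flood")) return "weather";
  if (text.includes("fault") || text.includes("signal")) return "technical";
  if (text.includes("strike")) return "industrial-action";
  return "unknown";
}
```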
3. Some data sets are more open than others
Most pull-based systems I've encountered don't seem to be designed to handle large numbers of requests, which inherently reduces the value of the data as it becomes less accessible. Many transport data providers impose heavy rate limits and restrictions on data usage. For example, UK train operator NetworkRail has a limit of 500 people using their queues at any one time. TFL's RESTful API is limited to 500 requests a minute. I think that public data providers need to impose generous limits. For developers, so as not to get caught out when your app scales, it's a wise precaution to bear in mind that you will likely need higher loads than you are anticipating. Here and elsewhere, before you dive into building an app, it's best to read the small print around your chosen data source, gauging how it fits in both with other data sources and your use case.
–
Ably is a global cloud network for streaming data and managing the full lifecycle of realtime APIs. Read more about concepts, design patterns and protocols underpinning realtime engineering on the Ably Engineering blog.
Finally, if you know of realtime data feeds that would benefit from being on the Ably Hub, get in touch – tom@ably.io
The post Data APIs: How to make the most of ‘public’ realtime data sources appeared first on BlogGeek.me.
Looking at the future of CPaaS, the lines are blurring in the cloud communication API future. And this isnāt only about UCaaS and CCaaS.
I've been asked recently by multiple clients to analyze for them the future of specific technologies they are developing. The process was very interesting and provided a lot of insights – some of them things that hadn't been obvious to me to begin with.
It got me into thinking. What if I do the same around CPaaS? Looking at what the future of cloud communication APIs looks like, what vendors are after, what they pitch and brief analysts about, and what their customers are looking for.
I decided to do exactly that, ending up writing this article and creating a new comparison sheet and eBook (this eBook/sheet combo can be found in my WebRTC Course paid-for ebooks section).
–
When looking at what the future holds in the CPaaS domain, there are many aspects to review. If this topic interests you, then you should probably also read these other 4 articles Iāve written previously:
Now that weāre on āthe same pageā, hereās where I see things heading for communication APIs.
Want to figure out exactly what each vendor is doing in each of these future trajectories? You can purchase my CPaaS Vendors Comparison.
nocode
There's this new trend of making software development all-encompassing. It boils down to a single non-word used for it known as #nocode
Here are some of the things people like saying about this trend:
As creating things on the internet becomes more accessible, more people will become makers. It's no longer limited to the <1% of engineers that can code resulting in an explosion of ideas from all kinds of people. #NoCode
— Shaheer Ahmed (@Boringcuriosity) September 13, 2019
The best code you could write is #nocode at all
— Denis Anisimov (@dbanisimov) September 14, 2019
Interestingly, the place where you see people talk the most about #nocode is in the third party API space. Now that we've made integrating with third parties simpler via APIs, it is time to make it even more so by requiring fewer development skills to do so.
This has been a long time coming to the communication API space as well.
Weāve had visual IVRs for quite some time, and weāve seen in the past 2-3 years many of the CPaaS vendors adding visual drag and drop tools. Twilio calls their tool Twilio Studio, while the rest of the industry settled on the name Flow.
Who is doing it today with CPaaS?
Others, like Nexmo, opted for releasing a Node-RED package, giving developers more flexibility in the integration points their Flow tool has to offer.
What I fail to understand is why so little activity is taking place in the serverless trend. It is as if CPaaS vendors knowingly decide NOT to offer these and instead jump directly towards the visual drag & drop flow tool.
Look at the diagram above. It shows why I believe it is a mistake to skip the serverless opportunity. We started with APIs, to simplify the task of inhouse development, then went towards the cloud so we don't need to install complex systems. We've seen a shift towards serverless (think AWS Lambda), where developers can focus on their use case and not think too much about the whole non-functional infrastructure stuff. Then came the visual drag and drop tools, which made life even simpler, as for many scenarios there is no more need to code anything – just express your intents by connecting dots to boxes.
Developers end up using ALL of the tools given to them. They will use a visual drag & drop tool to speed up development when the flow is easier to express in that tool. Theyāll write code when necessary. And they will use serverless functions to reduce the effort of scaling and maintenance if that is needed. So why not give them all of these tools?
CPaaS vendors are doing APIs and moving towards visual. The serverless part is an internal implementation which most don't expose to their customers. Why? I am not sure.
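To show what exposing it could look like, here is a hedged sketch of a serverless-style handler a CPaaS platform might invoke on an inbound message. The event shape and the reply contract are hypothetical and invented for illustration; they do not match any specific vendor's API.

```typescript
// Hypothetical sketch of a serverless CPaaS hook: the platform invokes this function on an
// inbound message, and the developer never runs or scales a server.
interface InboundMessageEvent {
  from: string;
  to: string;
  body: string;
  channel: "sms" | "whatsapp" | "email";
}

export async function onInboundMessage(event: InboundMessageEvent): Promise<{ reply: string }> {
  if (event.body.trim().toUpperCase() === "STOP") {
    return { reply: "You have been unsubscribed." };
  }
  // Anything else gets handed off to an agent or a bot flow.
  return { reply: `Thanks! We received your ${event.channel} message and will get back to you.` };
}
```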
What should you expect in the coming years?
Visual Flow tools will become an integral part of any CPaaS offering, with more widget types being added into these tools – supporting new features, adding new channels or integrating with external third parties.
Omnichannel
Omnichannel is the biggest thing in CPaaS at the moment.
There are two reasons for this:
Why is SMS crap? Because in the last week or so Iāve received so much spam on SMS related to the election here in Israel that it made that channel useless. I am sure I am not the only one and that this isnāt only in Israel.
SMS is being marketed to marketers as the channel that gets the highest attention rate from the spammed audience. What it gets is the highest deliverability – maybe. Definitely not the highest attention. This makes SMS great for transactional messages but I am not sure how good it is for sales or marketing promotions if done in the current stupid carpet-bombing tactics.
How does omnichannel change that? It doesnāt. But the social networks that act as channels treat their users better than carriers, which means they are guarding the entry to their garden from sales people and marketers, trying to bake the rules of permission marketing into the engagement. This is done by things like manually approving message templates, not letting businesses send unsolicited messages, forcing identity on the sender, allowing users to mark crap they receive as spam, etc.
It does one more thing – it brings the game into a new field which is murkier than SMS today. There are many channels already, with a promise of more channels to come in the future. Will you develop it on your own or rely on a third party CPaaS vendor for that? Most will choose the CPaaS vendor approach.
Timing is also good. Social networks are opening up their APIs, giving CPaaS vendors (and other vendors) access to their users, in an effort to enhance their usefulness to their users and to have more monetization options on their platform. They are doing that while trying really hard not to piss off their users, so spam levels are low and will be kept that way for years to come.
Omnichannel is the leading force of future CPaaS growth. This is where most invest their focus on, and where thereās an easy path for migrating SMS revenue/engagement from.
Email
Email was always shunned. Akin to fax. A relic of a bad past.
But it isnāt.
Most of my business revolves around the ability to reach people via email. And it mostly works for me (donāt like my content? unsubscribe).
It isnāt a replacement for SMS messages. Not really. But it has many uses of its own. Especially if you factor omnichannel. Businesses need to communicate with their customers and prospects, and doing that only over SMS or WhatsApp is a limited worldview. Thereās email as well.
Some CPaaS platforms already had email integrations and capabilities to some extent. Twilio has taken it to a whole new level with the acquisition of SendGrid. Did Twilio decide on this acquisition to increase their bottom line and appeal to Wall Street? Were they after an operation with less costs attached to it to increase their revenue per share? Was it a genuine strategic move towards email?
Doesn't matter anymore. Email is part of the game of CPaaS. I don't think many agree with me on that. The reason it is becoming part of CPaaS is that we need to look at communications holistically. As we head towards the enterprise with CPaaS, email is yet another channel of interaction – same as SMS, WhatsApp and others. Being better at email means answering more of the needs of enterprise communications, which means appealing more in a vendor selection process.
Email will take a bigger and more important position in CPaaS. The more omnichannel becomes the norm, the more customers will ask about Email support and capabilities.
Streaming media to third parties
We call it AI – Artificial Intelligence. If we're not overly hyped, then ML – Machine Learning. And if we're true to ourselves, then most of it is probably statistics, sometimes sprinkled with a bit of machine learning.
CPaaS is too generic and broad to be able to cover all possible algorithms and models. What do you want to do with that recorded voice call? Transcribe it? Translate to another language? Maybe do some emotion analysis? Find intents? Summarize? Look for action items?
Too many alternatives, with too much data to train from to get a good enough model. And then each scenario needs its own data to train for and get a specialized model to use.
The end result?
CPaaS vendors offer a few out-of-the-box integrations with popular features and frameworks. The known culprits are speech-to-text and text-to-speech. Or just connectivity to AWS or Google machine learning algorithms in the speech analytics domain.
Another approach which is gaining a lot of traction is to be able to stream the media itself to any third party – be it an on premise/proprietary machine learning model or a cloud based machine learning API. Usually over a WebSocket, but sometimes on top of other transport mechanisms.
The name of the game here? Simplicity and real time.
Enabling easy access to the media streams is key. The easier it is to access the media streams and integrate them with third parties that do machine learning, the more attractive the CPaaS vendor will be moving forward.
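As a rough sketch of the pattern – forking a call's audio over a WebSocket to an analysis service – here is what the integration side could look like. The audio frame callback (provided by the media server or CPaaS side) and the ML endpoint are assumptions made for illustration.

```typescript
// Sketch: forwarding a call's audio frames to a third-party analysis service over a WebSocket.
import WebSocket from "ws";

function streamAudioToAnalyzer(
  onAudioFrame: (handler: (frame: Buffer) => void) => void // assumed hook from the media server
): void {
  const analyzer = new WebSocket("wss://ml.example.org/v1/transcribe"); // placeholder endpoint

  analyzer.on("open", () => {
    // Forward raw audio frames (e.g. 20ms chunks of PCM) as they arrive from the call.
    onAudioFrame((frame) => analyzer.send(frame));
  });

  analyzer.on("message", (data) => {
    console.log("partial transcript:", data.toString());
  });
}
```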
Chatbots and voicebots
The digital transformation of enterprises is a transition that has been taking place for over a decade now and will continue for many years to come. Part of that transition is figuring out how businesses communicate with users. Part of that communication needs to be relegated to bots.
Why?
Iāve written about this trend and its reasoning when reviewing the two recent acquisitions of Cisco and Vonage in this space.
There are startups focusing solely on the bots industry, which is great. But in many ways, this is part of what a CPaaS vendor can offer – enablement of communications at scale.
Some CPaaS vendors today integrate directly or indirectly with bot frameworks such as Dialogflow or have built their own bot infrastructure. Moving forward, expect to see this more.
Enabling easy creation and configuration of chatbots and voicebots will be an important feature in CPaaS. The better tooling a CPaaS vendor has in this space, the easier it will be for them to retain enterprise customers looking to better communicate with their users.
UCaaS and CPaaS
Acronyms might be confusing in this section and the next, so follow closely (or skip altogether)
UCaaS vendors are looking at CPaaS as a potential growth opportunity.
Vonage has seen that first with the acquisition of Nexmo.
Since then weāve had Cisco acquire Tropo (and botch that one), RingCentral introducing developer APIs and 8×8 acquiring Wavecell.
There are definite synergies at the infrastructure level of UCaaS and CPaaS, though it is a bit less obvious what synergies there are on the frontend/application/business side. They do exist, but just a bit harder to see.
UCaaS vendors are adding APIs and points of integration to their service because it makes sense. Everyone's doin' it in one way or another. It isn't CPaaS, but in some minor cases it can replace the need for using CPaaS.
What you donāt see, is CPaaS vendors heading towards UCaaS. Yet.
And you donāt see any successful independent UCaaS vendor using a 3rd party CPaaS vendor to operate all of its communication infrastructure. Yet.
For UCaaS, CPaaS is a growth potential. For CPaaS, UCaaS is just another use case. The lines are blurring between these two domains but not enough to matter.
CCaaS and CPaaS
Cloud contact centers take the exact opposite power play to UCaaS.
Many of the cloud based contact centers are using CPaaS and not their own infrastructure.
Twilio decided to build a contact center solution – Twilio Flex. In a way, it competes with some of its own customers. As successful companies grow large, they go after adjacencies, and the contact center is an adjacency to CPaaS.
Will Twilio succeed with Flex? Too early to know.
Will more CPaaS vendors introduce contact center solutions? Probably not, but they are being bunched up and consolidated as larger entities – just see what Vonage and 8×8 have been doing in their acquisitions.
Twilio Flex is a singular occurrence. The norm would be other larger communication players who have CCaaS, acquiring smaller CPaaS players. The end result? A blurring of the lines between the various communication vendors.
For Twilio, Flex might be just the beginning. If this bet succeeds, Twilio will find the appetite to look at other adjacent enterprise applications it could build or acquire and make its own.
M2M / IOT
This. isn't. part. of. CPaaS.
Or is it?
Iāll start by splitting this one into two areas:
Twilio has their Programmable Wireless offering, which at its core is a modern M2M solution (for me M2M and IOT are one and the same).
In this domain, communication is needed between devices. Less human intervention for the most part, so some of the requirements are different.
But this is still communications.
CPaaS will redefine M2M/IOT as one of the use cases it covers. I donāt see a reason why CPaaS vendors wouldnāt take that route in an effort to grow their product line horizontally.
IOT – serverless infrastructure for real-time messaging
I tried to find a name for this subdomain and settled on what vendors like PubNub, Pusher and Ably end up with (or something in-between). There's a set of vendors offering a kind of general purpose managed messaging that developers can use when they build their apps.
These vendors are settling on something like serverless infrastructure for real-time messaging as a name.
Serverless because it sounds modern, advanced and cool (marketing asked for that).
Infrastructure because this is what they have.
Real-time messaging because this is what they do.
How is that related to CPaaS? It isn't, directly. Because no CPaaS vendor offers a "serverless infrastructure for real-time messaging".
Hereās a surprising thing.
All of the CPaaS vendors who support WebRTC have a global backend real-time messaging infrastructure already. It is used to drive signaling across the network.
It might be more centralized. It might be slightly slower. It might be simplistic.
But at the end of the day – it is a serverless infrastructure for real-time messaging.
These CPaaS vendors can slap an API on top of that infrastructure and offer that as yet another distinct service. And they will. Either by inhouse development or through acquisitions.
Serverless infrastructure for real-time messaging will be wrapped into CPaaS.
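What might that API amount to? Probably something as small as a publish/subscribe surface riding on the signaling backbone these vendors already operate. The sketch below is illustrative only; the interface and names are assumptions, not any vendor's actual SDK.

```typescript
// Sketch: a minimal publish/subscribe surface a CPaaS vendor could expose on top of the
// real-time signaling infrastructure it already runs. All names are illustrative.
interface RealtimeMessagingClient {
  subscribe(channel: string, handler: (message: unknown) => void): Promise<void>;
  publish(channel: string, message: unknown): Promise<void>;
}

async function demo(client: RealtimeMessagingClient): Promise<void> {
  await client.subscribe("orders", (message) => {
    console.log("order update", message);
  });
  await client.publish("orders", { id: 42, status: "shipped" });
}
```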
Cloud native, no hybrid
There were attempts in the past by CPaaS vendors to offer both cloud and on premise alternatives.
Some are probably doing it still.
The vendors that see more growth though are cloud native and offer no on premise alternative.
Things arenāt going to change here.
The future of CPaaS is cloud. Hybrid is a nice idea, but until cloud vendors themselves offer an easy (and cost-effective) path towards that goal, the hybrid model makes less sense – it becomes too expensive to develop and maintain.
Measurements and SLAs
Quality across vendors, carriers, networks, infrastructures, time of day, day of the week or any other parameter you wish to use is variable at best. CPaaS vendors are "supposed" to handle that. They track and optimize media quality and connectivity across their services. They strive to maintain high uptime and reliability. Some even use quality as a reason for opting for their service.
At some point, TokBox and Twilio started offering quality measurement tools. TokBox introduced Inspector, a way for its users to troubleshoot network issues of recent sessions. Twilio launched Voice Insights, offering its users a quality dashboard of the calls conducted through its service.
A similar aspect is the use of SLAs as part of the service – a binding definition of what the customer should expect from the service and what happens when that expectation isn't met. These apply mostly to the enterprise plans of some of the CPaaS vendors.
Why am I mentioning it here? Because I see it happening. It is what got Talkdesk to pick testRTC for a network testing tool (I am a co-founder at testRTC). It is also an issue that causes a lot of challenges for customers – understanding the quality their own users experience.
Measurements and SLAs will take bigger roles in customers' buying decisions. As the market evolves and matures, expect to see more of these capabilities crop up in CPaaS offerings. It will happen due to pressure from competitors, but more likely due to pressure from enterprise customers.
Vying towards the Programmable Enterprise
We're shifting from on premise to the cloud. From analog to digital. From siloed solutions towards highly integrated ones. This migration changes the requirements of the enterprise and the types of tools it would require.
I think we will end up with the Programmable Enterprise. One where the software used is highly integratable. Many of these early trends we now see in CPaaS will trickle and find their way across all enterprise software.
Want to figure out exactly what each vendor is doing in each of these future trajectories? You can purchase my CPaaS Vendors Comparison.
The post Future of CPaaS; a look ahead appeared first on BlogGeek.me.
An analysis of the most popular open-source WebRTC repos on GitHub with a review of how WebRTC itself is doing there.
Continue reading and the WebRTC Open Source Popularity Contest Winner is⦠at webrtcHacks.
Some updates you might want to be aware of.
This is going to be about updates on things that are going on that you may want to be aware of. Mainly:
Kranky Geek 2019 is coming up fast.
Date is set to Friday, November 15 2019
At our traditional location: Googleās office at 345 Spear St, San Francisco
We are going to continue this year in our look at WebRTC and machine learning in communications as our main theme.
Want to register for Kranky Geek?
Registration for the Kranky Geek event is now open.
We've got limited room, so you should register earlier rather than later.
There's a token registration fee ($10) – it is how we make sure everyone has a place to sit during the event.
Want to speak at Kranky Geek?
If you're into sharing your knowledge and experience with others, then how about speaking at Kranky Geek?
We're working on the agenda at the moment, and are looking for speakers to join us. Each year we get one or two such requests that end up quite well. Need examples? Check out last year's Facebook session on Portal or maybe Discord on their infrastructure.
Want to try this out? Contact us.
Want to sponsor Kranky Geek?
We get to do Kranky Geek on a yearly basis thanks to our great sponsors.
Our sponsors this year include:
This leaves room for one or two more sponsors. If you'd like to help us out, and show off your brand where it matters when it comes to WebRTC, then let us know.
Meet me in person
In the next couple of months I'll be traveling. If you'd like to meet, ping me.
October 24-25, Beijing
I'll be heading to Beijing for Agora.io's RTC 2019 event.
My session at the event is "Common WebRTC mistakes and how to avoid them". I still need to work on my presentation.
If youāre in Beijing for the event, it would be great to see you in person.
November 11-16, San Francisco
Kranky Geek takes place November 15. I'll be in San Francisco for the duration of that week.
My time in San Francisco is usually limited and hectic, but I am always happy to catch up and talk when I can find an open slot for it.
If you are interested in meeting up – just tell me.
Available WebRTC related sponsorships
There are sponsorship opportunities available if you want to highlight your products, services or even job listings. These are available not directly on BlogGeek.me, but rather in a few partner domains:
Thereās now an orderly media kit you can review for the webrtcHacks and WebRTC Weekly sponsorships. Check it out.
New testRTC product: Network Testing
At testRTC, we launched a new product a few months back – Network Testing.
While our other products are geared towards developers, testers and IT, this new product caters to support teams.
What it does is connect to your backend directly (there's an onboarding/integration associated with this product) and then run a battery of network tests from the machine on which you use our service. It ends up providing the information it gathers to both the person running the test and your support team.
This was developed with the help of Talkdesk, one of our first clients for this product. Check out the testimonial we did with Talkdesk using testRTCās Network Testing.
Interested in learning more? Contact us @ testRTC
A new WebRTC course – for support teams
I have started working on a new course called "Supporting WebRTC". The purpose of this course is to assist support teams that need to deal with issues related to WebRTC to better understand and handle them.
This comes as I celebrate my 500 course students in my developer focused Advanced WebRTC training.
Anyways, ping me if you're interested in learning more about the new Supporting WebRTC course – or if you even want to be there during the prelaunch, providing feedback as I create the lessons.
Revamping my consulting pages
This is how the menu bar on my website looked until yesterday:
And this is how it looks now:
Iāve replaced the ānon-performingā and somewhat cluttered Workshops/Consulting combo with the more usual Products/Services alternative.
Why the change?
Because many of my services have gone unnoticed. I found that out while speaking to clients and potential clients. So it made sense to change the structure. Another reason is the recent launch of my ebooks section – while these are part of the WebRTC Course website (along with the courses themselves), I wanted to be able to share everything on my main site – BlogGeek.me.
Iāve decided to make this change available now and not wait for it, but these pages will be updated soon. I have commissioned a few unique illustrations for these new pages and canāt wait to get them up.
Hereās a glimpse of one of the concept sketches I received (this one for the courses):
Doing something with communications? I am here to help.
The post Kranky Geek, WebRTC sponsorships and other updates around my services appeared first on BlogGeek.me.