The NanoPi NEO2 board by FriendlyElec has several enclosure options in their webshop. The 3D-printed plastic enclosure is of poor quality, and it doesn't hold the heatsink firmly against the CPU.
The acrylic case does not include washers, which makes the whole assembly fragile, as the screws can easily damage the plastic. Also, the M2.5 screws for fixing the heatsink are too short.
So, I added the following components to the design:
The following parts also came with the acrylic case:
As a result, we get a sturdy case that is able to sustain some rough handling, like carrying it in a toolbox among other hardware.
(scratches on my phone camera made the pictures a bit too soft)
Twilio’s Jeff Lawson had a really interesting keynote at their Signal event. I think Twilio is trying to redefine what CPaaS is. If this works for them, it will make it doubly hard for their competitors.
This is going to be long, as the keynote was long and packed full of information and details that pave the road to what CPaaS is going to be in 2020.
I suggest you watch this keynote yourself –
What I loved the most? The beginning, where Jeff refers to code as making art. I have to agree. In my developer days, that was the feeling. Coding was like building with lego bricks without the instructions or sitting down to paint on T-shirts (yes – I did that in my youth). When a CEO of a company talks about coding as art and you see he truly believes it – you know that what that company is doing must be… art.
Before we Begin

One term you didn't hear at the keynote:
CPaaS
One term that was there every other slide:
This was about developers, who the buyer is, and how software APIs are everywhere.
It was also about how CPaaS is changing and Twilio is now much bigger than that – in the traditional sense of what CPaaS means.
It wasn't said out loud, but the low level APIs that everyone is haggling over – SMS and voice – are nice, but not where the future lies.
Twilio by the Numbers

The numbers game was reserved for the first 13 minutes of the keynote, where Jeff asserted Twilio's distinct leadership in this market:
More about the last two bullets later.
Here’s what Twilio deployed in the past year:
To me, this is becoming hard to follow and grasp, especially when I need to look at other vendors as well.
If you look at it, you’ll see that Twilio has been working hard in multiple vectors. The main ones are Enterprise, IP communications and “legacy” telephony.
The main messages?
All this boils down to stating that a competitive advantage can be best achieved on top of Twilio.
Twilio's New Layering Model

If you've been watching this space, you might have noticed that I tend to use this model to explain CPaaS feature sets:
And this is how Jeff explained it on stage in Twilio’s Signal event 2016:
Building blocks. Unrelated. Could have been placed horizontally one next to the other to get the same concept. But piling them on top of each other is great – it shows there’s lots and lots of services and features to use.
2017 brings with it a change in the paradigm and a new layers model that Jeff explained, and was later expanded with more details:
The funny thing is that this reminded me of how we explained the portfolio and API layers in our VoIP products at RADVISION more than 10 years ago. It is great to see how this translates well when shifting from on premise APIs to cloud APIs. But I digress.
Back to the layering model.
The Super Network wasn't given much attention this time around. There were announcements and improvements in this area, but these are a given by now. For those who wish to outmaneuver Twilio by offering a better network – that's going to be tough without the layers above it.
Then there's the Programmable Communications Cloud, which is where most of the CPaaS vendors are. This is what I drew as my own perspective of CPaaS services. The names have changed a bit for Twilio's services – we've got Programmable Chat now instead of IP Messaging. SMS has 3 separate building blocks here instead of one, and the baseline one is called Programmable SMS – keeping the lower level Communications APIs with a nice naming convention of Programmable X.
The interesting part of this story comes in the Engagement Cloud. Jeff made a point of explaining the three aspects of it: Systems, Departments and Individuals. And the thing about the Engagement Cloud is that services there are actually best practices – they aren’t “functional” in their nature. So Twilio are referring to the APIs in this layer as Declarative APIs.
The Engagement Cloud

The main difference between what's in the Engagement Cloud and the Programmable Communications Cloud? In the Programmable Communications Cloud you know as a developer what will happen – you ask to send an SMS and the SMS is sent. With the Engagement Cloud, you ask for a message to reach someone – and you don't really care how it is done – just that it will be done in any channel that fits best.
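To make the functional/declarative distinction concrete, here is a minimal sketch using Twilio's Node.js helper library. The Programmable SMS call is the familiar functional API; for the declarative side I use Twilio Notify as the illustration – the service SID, identity and binding behavior shown here are my own placeholders, not something taken from the keynote.

```javascript
const twilio = require('twilio');
const client = twilio('ACCOUNT_SID', 'AUTH_TOKEN'); // placeholder credentials

// Functional API: you pick the channel. An SMS goes out – nothing more.
client.messages.create({
  to: '+15005550006',
  from: '+15005550001',
  body: 'Your order has shipped',
});

// Declarative API: you state the outcome ("notify this user") and the
// platform's bindings decide whether that means SMS, push or another channel.
// The service SID and identity are hypothetical.
client.notify
  .services('ISxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
  .notifications.create({
    identity: 'user-42',
    body: 'Your order has shipped',
  });
```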
No Channel To Rule Them All

What is that "any channel that fits best"?
That’s based on what Twilio decides in the modules they offer in the Engagement Cloud, and it is where the words “best practices” were used during the event.
Best practices are powerful. As a supplier, they show you know the business of your customer to a point where you can assist him with more than just the thing he thinks he needs. It often positions you as a trusted advisor, or the one to go to when deciding what to do next. After all – you own the best practices, so why not follow them?
It is also where the most value is to be made moving forward.
SMS is probably still king when it comes to revenue in CPaaS. Not only for Twilio, but for all players in the market. And while this is nice and true, it is also a real threat to them all:
Yes. SMS is growing in use.
Yes. The stupid term A2P (Application 2 Person) is growing rapidly and it is done using SMS.
Yes. People prefer that over installing apps, receiving emails and getting push notifications.
Yes. People do read SMS messages. But I am not sure if they trust them.
Here’s a quick story for you.
Airbnb.
I use them once in a while. I was just planning a trip with the family for July. Found the dates. Booked the flights. Found an Airbnb to stay at. Reserved a place – and was asked if I am cool with push notifications. I clicked yes. And here's what I got the next moment on my phone:
Businesses might be advised to use SMS to reach their customers, but the price of SMS pushes businesses to seek other, cheaper channels of communication at the same time.
There is no money to be had in Communications APIs in the long term.
There is already a price war at this level. Vendors trying to be “cheaper than X”. Developers complaining about the high prices of CPaaS, not understanding the real costs of developing and maintaining such systems.
What's in Twilio's Engagement Cloud

Which is where the Engagement Cloud comes in – or more accurately, the best practices and smarts on top of just calling communications APIs.
Twilio are now offering 4 APIs in that domain:
The interesting bit here is that these all started as functional building blocks. But now the stories behind them are all about multi-channel.
SMS is great, but it isn’t the answer.
IP messaging is great, but it isn’t the answer.
Facebook messenger with its billion+ users is great, but it isn’t the answer.
XKCD says it best:
In such a world, programmable communications need to be able to keep track of the best means to reach a person. And so Twilio's Engagement Cloud is about becoming Omnichannel (=everywhere) with the smarts needed to pick and choose the best channel per interaction.
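To illustrate what such smarts might look like, here's a toy channel picker. This is entirely my own sketch of the concept – Twilio hasn't published the logic behind its modules:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Hypothetical omnichannel routing: prefer the channel the user actually
// engages with, and fall back to SMS as the channel that reaches any phone.
function pickChannel(user) {
  const engagesWithPush =
    user.pushOptIn && Date.now() - user.lastPushOpenedAt < 7 * DAY_MS;
  if (engagesWithPush) return 'push';            // cheap, and the user reads it
  if (user.messengerLinked) return 'messenger';  // rich OTT channel
  return 'sms';                                  // universal fallback
}
```

The point isn't the specific rules – it is that this decision moves out of your code and into the vendor's module.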
Are we there yet with the current Twilio offering? I don’t know. But the positioning, intent, roadmap and vision is crystal clear. And with Twilio’s current speed of execution, it is going to happen sooner rather than later.
Vendor Lock-in

The great thing about this layer of Engagement Cloud for Twilio is that it is going to be hard to replace once you start using it.
How hard is it to replace an API that sends out an SMS to a phone number with another API that does the same? Not hard at all.
But how hard is it to replace best practices wrapped inside an API that decides what to do on its own based on context? Harder. And getting even more so as time goes by and that module gets smarter.
Twilio gets a better handle on its customers with the Engagement Cloud. It makes it a lot harder for developers to go for a multi-vendor strategy where they use SMS from the CPaaS vendor whose price is the lowest.
Developers' Benefits

Why would developers use these Engagement Cloud modules from Twilio?
Because they save them a ton of time and even a lot more in headaches.
Today, there are 3 huge benefits for developers:
These areas are usually the ones developers don't like to deal with. That third one especially is a real pain – after you've done it for 2 vendors/channels – connected it to SMS and maybe Facebook Messenger – it feels boring to add the next channel. But now you don't have to anymore. And don't get me started on how the APIs there get deprecated and change over time.
Machine Learning and its CPaaS Role

Twilio talked about Machine Learning in two new APIs that it is introducing: Speech Recognition and Understand.
The Speech Recognition one is a bit less interesting. It is done in partnership with Google, using Google's engine for it. The smarts on Twilio's side here are the integration and how they are stitching these speech-to-text capabilities throughout their line of products.
Here, what Twilio is doing is acting in the most Twilio-like approach – instead of developing its own speech recognition tech, or using a 3rd party that gets installed on premise, it decided to partner with Google and use their cloud-based speech recognition technology. And then make it easier for developers to consume as part of the bigger Twilio offering.
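The way this surfaced in the product is speech input on TwiML's <Gather> verb. Here's a minimal sketch using the twilio-node helper – the webhook URL is a placeholder of mine:

```javascript
const VoiceResponse = require('twilio').twiml.VoiceResponse;

// Answer a call and transcribe what the caller says; Twilio posts the
// transcription back to the action URL as the SpeechResult parameter.
const response = new VoiceResponse();
const gather = response.gather({
  input: 'speech',
  action: '/handle-speech', // placeholder webhook in your app
  language: 'en-US',
});
gather.say('How can I help you today?');

console.log(response.toString()); // the TwiML document returned to Twilio
```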
The real story lies elsewhere though – in Twilio Understand.
While Speech Recognition is a functional piece where you feed the machine with voice and get text, Understand is about modeling your use case and then having the machine parse text based on that model.
It is also where Twilio seems to have gone at it alone (or embedded a third party internally), building its first real customer-facing Machine Learning based product.
In the past few years we’ve seen huge growth in this space. It started with Big Data, turned Analytics, turned Real Time Analytics, turned Decision Engines, turned Machine Learning.
Companies use these types of capabilities in many ways. Mostly internally, and Twilio has probably been doing that already. But embedding machine learning and big data to make products smarter is where we're headed. And for me, this is the first instance I've seen of a CPaaS vendor taking this route.
It is still a small step, as Understand is another piece of API – a module – that you can use. And just like many of Twilio's other APIs, you can use it as a building block integrated with its other building blocks. It is a move in the right direction toward evolving into something much bigger.
LinkedIn shows that Twilio has several data scientists (the manpower you need for such tasks), though none of them was "kind enough" to offer details of their role or doings at Twilio.
Moving forward, I’d expect Twilio to hire several more people in that domain, beefing up its chops and starting to offer these capabilities elsewhere.
The only competitor at the moment who is seeing that is Cisco Spark – with their recent acquisition of MindMeld.
The great thing about machine learning? People feel and assume that it is super hard. Which means it is worth paying for.
The Enterprise

Here's where enterprises find a home at Twilio's Signal 2017 keynote. Best to just show it in slides:
Twilio's API call success rate. This goes on top of its 99.999% API availability, and this is where Jeff wants you to focus – not on getting an API returning an error (which would still fall under availability) but rather on how many successful results you get from the APIs.
Since Twilio launched, none of its APIs was ever deprecated or killed (I haven't checked it myself, but this is what Jeff wants you to remember).
Twilio has been working hard on reaching out to enterprises. It introduced an Enterprise plan last year. Implemented ISO 27001. Added Public Key Validation. Introduced support for Enterprise SSO.
All these are great, but what I think resonates here the most are the above two items.
99.999% Success Rate

Enterprises LOVE this.
SLAs. Guarantees. All the rage.
Twilio is operating at 99.999% uptime and is happy to offer a 99.99% guarantee in its enterprise SLA:
For an enterprise to go for Twilio requires two leaps of faith:
When you pick Twilio, who’s giving you any guarantees?
Well… Twilio does. At 99.99% while maintaining 99.999% across all of its services to all of its customers.
That’s a powerful message. Especially if you couple it with 30,000 deployments a year.
0 APIs Killed

This one is REALLY interesting.
In the world of APIs where everything is in the cloud with a single copy running (it isn’t, but bear with me a second), having someone say that they offer backward compatibility to all of their APIs is huge.
The number of changes you usually need to follow with APIs on the internet is huge. If you have a product using third party APIs, then every year or two, you need to make some changes to have it continue to work properly – because the APIs you use change.
0 APIs killed means that if an enterprise writes its code today for a project, it won't need to worry about changes to that code due to Twilio. Now, in many cases, enterprises develop a project, launch it, and then are happy to continue with it as-is without further investment (or budget). Which means that this kind of soft guarantee is important.
How does Twilio do it?
They launch products in beta and run the beta for long periods of time. During that time, they get developers to use and tinker with the APIs, collect feedback and when they feel ready, they officially launch it – at which point the API is deemed stable.
It works well because Twilio has lots and lots of customers, some willing to jump on new offerings and take the risk of having things break a bit during those beta periods.
The end result? 0 APIs killed.
Will it Blend?

I believe it will.
Twilio has introduced a new paradigm for the way it is layering its product offerings.
In the process, it repositioned all of its higher level APIs as the Engagement Cloud. It stitched these APIs to use its lower Programmable Communications APIs, adding business logic and best practices. And it is now looking into machine learning as well.
It is a powerful package with nothing comparable on the market.
Twilio is the best-of-suite approach to CPaaS – offering the largest breadth of support across this space. And it is making sure to offer powerful building blocks to make developers think twice before going for an alternative.
Twilio isn’t for everyone. And other CPaaS vendors do have their place. But increasingly, these places become niches.
Is there more?

Yes.
This analysis is long, but by no means complete.
There were a lot of other aspects of the announcements and Twilio’s moves that require more thought and details. The pricing model on group Programmable Video is one of them. Third Party Add Ons in certain domains (especially for analytics) is another. Or Twilio heading into the UI layer. And then there’s serverless via Twilio Functions. This isn’t even an exhaustive list…
I won’t be going into these here, but these are things that I am actively looking at.
Contact me if you are interested in understanding more about this space.
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
WebRTC for Business People – The 2017 Edition
The revamped WebRTC for Business People report is now published.
I’ve been reviewing the stats of the content here lately, and noticed that people still find their way and download my WebRTC for Business People report.
My only problem with it is that it was already old – and showing it: There’s no serious mention of the advances made by Microsoft Edge towards WebRTC support, nothing about the difficulty of finding experienced WebRTC developers.
But most of all – the use cases it mentioned. Some of them were companies that got acquired. Others were shut down. Some stagnated in place, or are now on life support.
That’s not a good way to introduce someone to the topic of WebRTC.
So a rewrite was in order here, which brought me to work with a few sponsors:
They were kind enough to make the investment needed for me to put the time and effort into it.
WebRTC for Business People – what's in the 2017 edition?

Well…
First off – the report is still free. You can download it, print it, read it online – practically do whatever you want with it.
I've refreshed the visuals and updated the analysis part with data from 2015 through 2017 (the time that has passed since the last update to the report). Did you know that WebRTC is still growing linearly in all relevant parameters that you can check?
No hype here. Just solid, steady growth. The minor change in the GitHub projects trajectory? That started when Google moved their WebRTC samples and demos from Google Code to GitHub.
I wonder if this will change when Apple adds WebRTC to Safari and iOS.
I removed all the nonsense in the report about SIP and H.323. These protocols still exist, but more often than not, people don’t look at them and compare them with WebRTC – because WebRTC has gone way beyond these signaling protocols.
Oh – yes – and I completely rewrote all the vendor use cases and the segments I look at in this report. Here's the new set of vendors in the report:
If you are interested in WebRTC, the ecosystem around it, and understanding how companies are using it today – in real life, making real commercial use of it – then check out this report.
Download the report
PC Engines GmbH has recently released a new board, APU3. The difference from APU2 is that two mPCIe slots are suitable for 3G or LTE modems, whereas APU2 had only one such slot. This article explains how to utilize two HUAWEI ME909 LTE modems, and it’s applicable to other modems too.
One of the LTE modems has to occupy the slot which is otherwise usable for mSATA storage. So, the board has to use the SD card for booting, and Voyage Linux is designed for such a setup. The scripts in this article are tested against Voyage Linux version 0.11.0 (Build Date 20170122).
As with APU2, the Linux kernel assigns ttyUSB port numbers randomly, so two ME909 modems produce 10 ttyUSB devices with random numbers which change after a reboot.
The modems have identical serial numbers “0123456789ABCDEF”, and the only thing that allows distinguishing them reliably is the PCI slot number of the corresponding USB controller.
Luckily, the APU3 board slots designed for LTE modems – J14 (mSATA/mPCIe 3) and J15 (mPCIe 2) – are attached to different USB controllers. The third slot, J16 (mPCIe 1), shares a USB controller with J15.
The USB EHCI controller at PCI device 00:12.0 is attached to J14, and the controller at 00:13.0 is attached to J15 and J16.
So, the udev rules require a small shell script that translates the DEVPATH variable into the PCI slot and function number; the resulting string will persistently distinguish the devices attached to the USB interfaces in J14 and J15:
```sh
cat >/etc/udev/devpath_to_pcislot <<'EOT'
#!/bin/sh
echo ${DEVPATH} | sed -r \
  -e 's,^\/[^\/]+\/[^\/]+\/[0-9af]{4}:[0-9af]{2}:,,' \
  -e 's,\/.+,,' -e 's,\.,,g'
EOT

# udev runs the helper via PROGRAM, so it has to be executable
chmod +x /etc/udev/devpath_to_pcislot

cat >/etc/udev/rules.d/99-wwan.rules <<'EOT'
SUBSYSTEM=="tty", ATTRS{idVendor}=="12d1", ATTRS{idProduct}=="15c1", PROGRAM="/etc/udev/devpath_to_pcislot" SYMLINK+="ttyWWAN%c{1}_%E{ID_USB_INTERFACE_NUM}"
SUBSYSTEM=="net", ATTRS{idVendor}=="12d1", ATTRS{idProduct}=="15c1", PROGRAM="/etc/udev/devpath_to_pcislot" NAME="lte%c{1}"
EOT
```

After rebooting, you can see the "lte120" and "lte130" network interfaces, and the devices suitable for configuring the modems: "/dev/ttyWWAN120_02" and "/dev/ttyWWAN130_02". There are a few other TTY interfaces for various purposes, as explained in HUAWEI documentation.
Time to start another ongoing project. This time – my Monthly Virtual Coffee sessions about WebRTC, CPaaS, APIs and comms in general.
Some time in 2015-2016, I decided to host Virtual Coffee sessions. Once a month, I’d pick a subject, create a presentation and host a meeting with my customers. All of them. It was open for questions and it was fun. It stopped because… I don’t know. It just did.
Ever since then, I wanted to do something similar. I found I like talking and interacting with people, and I want to do it more.
Which is why I am now announcing the new Virtual Coffee with Tsahi.
Here's how it will go down:

I won't be using this blog to publish future sessions – sorry.
The sessions will be announced through Crowdcast (the service I started using for such events lately), so follow me there. And through my newsletter, so if you’re not subscribed – do it now.
What topics will I cover?

I really don't know…
If you want something specific – drop me a line.
Our 1st Virtual Coffee together

The first topic I want to tackle?
CPaaS, WebRTC, Differentiation and M&A

When? May 23 @ 15:30 EDT
There are over 20 different CPaaS vendors out there, and that number is growing and shrinking at the same time:
I want to take the time to review some of these M&A activities, as well as show how different vendors are trying to differentiate themselves from the rest of the crowd.
Join me for this Virtual Coffee with Tsahi
Oh – if you have questions for this already – just ask them on Crowdcast once you register.
See you there!
WebRTC establishes peer-to-peer connections between web browsers. To do that, it uses a set of techniques known as Interactive Connectivity Establishment, or ICE. ICE allows clients behind certain types of routers that perform Network Address Translation, or NAT, to establish direct connections. (See the WebRTC glossary entry for a good introduction.) One of the first problems is for […]
What is WebRTC and What is it Good For?

This 7-minute video provides a quick introduction to WebRTC and demonstrates why it is growing in importance and popularity.
Covered in this video:
WebRTC is an HTML5 specification that you can use to add real time media communications directly between browsers and devices.
Simply put:
WebRTC enables voice and video communication to work inside web pages.
And you can do that without needing any plugins to be installed in the browser.
WebRTC was announced in 2011 and since then it has steadily grown in popularity and adoption.
By 2016, there were an estimated 2 billion installed browsers enabled to work with WebRTC. From a traffic perspective, WebRTC has seen an estimated billion-plus minutes and 500 terabytes of data transmitted every week from browser communications alone. Today, WebRTC is widely popular for video calling, but it is capable of so much more.
A few things worth mentioning:
It is important to understand where we are coming from: if you wanted to build anything that allowed voice or video calling a few years ago, you most probably used C/C++ for that. This meant long development cycles and higher development costs.
WebRTC changes all that: it takes the need for C/C++ and replaces it with a JavaScript API.
WebRTC comes with a JavaScript API layer on top that you can use inside the browser. This makes it far easier to develop and integrate real time communications anywhere. Internally, WebRTC is still mostly implemented using C/C++, but most developers that use WebRTC won't need to dig deep into these layers in order to develop their applications.
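To give a feel for that API surface, here is a minimal sketch of the browser side – capturing the camera and microphone and creating a connection offer. Signaling, which carries that offer to the other peer, is deliberately left to the application:

```javascript
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

navigator.mediaDevices
  .getUserMedia({ audio: true, video: true })
  .then((stream) => {
    // Feed each captured track into the peer connection.
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    return pc.createOffer();
  })
  .then((offer) => pc.setLocalDescription(offer))
  // From here, the offer travels over your own signaling channel
  // (WebSocket, HTTP, anything) to the remote peer.
  .catch((err) => console.error('Failed to set up the call:', err));
```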
Availability

WebRTC today is available in most modern browsers. Chrome, Firefox and Microsoft Edge support it already, while Apple is rumored to be in the process of adding WebRTC to Safari.
You can also take WebRTC and embed it into an application without the need for a browser at all.
Media and access

What WebRTC does is allow access to devices. You can access the microphone of your device, the camera on your phone or laptop – or the screen itself: you can capture the user's screen and then have it shared or recorded remotely.
Whatever WebRTC does, it does in real time, enabling live interactions.
WebRTC isn't limited to voice and video. It allows sending arbitrary data of any type.
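A data channel rides on the same peer connection, so that arbitrary data gets the same low-latency, peer-to-peer path as the media. A minimal sketch:

```javascript
const pc = new RTCPeerConnection();

// Any application data can flow here: game state, file chunks, chat…
const channel = pc.createDataChannel('app-data');

channel.onopen = () => channel.send(JSON.stringify({ x: 10, y: 20 }));
channel.onmessage = (event) => console.log('peer sent:', event.data);
```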
There are several reasons WebRTC is a great choice for real time communications

So what other choice do you really have besides using WebRTC?
The ideas around WebRTC and what you can use it for are limitless. So go on – start building whatever you need, and use WebRTC for that.
Embed this video on your own site for free! Just copy and paste the code below…
<iframe src="https://player.vimeo.com/video/217448338" width="640" height="360" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
Contact centers are still the main adopters of WebRTC. This is clearly reflected in my infographic of the WebRTC state of the market 2017.
Motto:“This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”
Western Union telegraph company memo, 1877.
Think you know how WebRTC fits in a contact center? Check out The Complete WebRTC Contact Center Uses Swipefile.
Recently, Jaroslav from iCORD told me the stats they now see from the contact center deployment they have at O2 Czech Republic, who also happen to be their parent company.
How is O2 CZ making use of WebRTC in their Contact Center?

What they did isn't the classic approach you will see to WebRTC in contact centers, but rather something slightly different. If you are a customer of O2 CZ and you are thinking of making a purchase on their website, you have the option to leave a number for them to immediately get back to you:
And yes – there is also an "exit intent" on that sales page, so if you try to leave this page, it will appear as a popup.
How is a phone call related to WebRTC you ask? Well… it isn’t. Unless you factor in the fact that we now know what web page the user is on.
What happens next is that a contact center agent calls the user back, and the user sees something new in his browser – a shared space between him and the agent who just called.
This shared space will enable the agent to browse the same page the customer was on, and move on from there elsewhere. It also includes annotations – the agent can draw or mark things on the screen. One last thing – the user will see the video of the agent, but will not share his video.
See? They even haggle and write down discount prices right on the webpage.
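For the technically curious: one way to get that asymmetric setup in today's WebRTC API is a receive-only video transceiver on the customer's side. This is a sketch of the general technique, not necessarily how iCORD built it:

```javascript
// Customer side: receive the agent's video without sending any.
const pc = new RTCPeerConnection({ iceServers: [/* your STUN/TURN servers */] });
pc.addTransceiver('video', { direction: 'recvonly' });

pc.ontrack = (event) => {
  // Render the agent's incoming video inside the shared-space UI.
  document.querySelector('#agent-video').srcObject = event.streams[0];
};
```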
Now, if the interaction started with a phone call, the agent in the contact center can instruct the customer to go to the O2 CZ website and enter a PIN code there – and magically get to the same experience.
Here’s a diagram to show the communication channels we now have between the customer and the contact center agent:
Why this approach?
But was this effective? Was it worth the effort?
O2 CZ have been running this contact center service throughout 2016, and took the time to analyze the results. They did so only for sales related calls – the money makers.
Here’s what they found out:
Using this approach is much more efficient than a simple phone call.
Let's stop right here for a second and let that statement sink in.
We’re talking about a contact center.
Of a mid-sized European carrier (4 million subscribers).
The type of company that I am told over and over would NOT adopt WebRTC because it does not support Internet Explorer 4. Oh. And this specific service falls back to Flash if the customer's browser doesn't support WebRTC, and degrades even further – to static screenshots and PDF file sharing – for those who don't even support Flash.
And they are already doing it for a full year.
Successfully.
In production.
In front of live customers.
Who would have thought that a non-startup company that isn't located in Silicon Valley and isn't operated by 16-year olds would be capable of doing such a ridiculous thing as deploying WebRTC in production, directly where money gets negotiated with customers.
— end of rant —
Back to the results.
Call length on average dropped

It takes 30% less time to negotiate and close a deal than a regular phone call, and considerably less than text chat. This may seem a bit backwards – the fact that chat takes the longest and a video session the shortest – but that's the experience of this contact center.
How about actually closing a deal and making a sale? WebRTC closes deals 25% more often than regular phone calls. Chat is slightly less successful than WebRTC, but more successful than phone. These values were measured on sessions landing at sales agents' desks, once the irrelevant and redirected ones were filtered out.
And customer satisfaction? Over 20% rated the service 5 stars at the end of the interaction, and 7% left a positive textual evaluation of the service. Compared to the traditional IVR system, that's really high.
Where does this lead us?

And if you are looking for more information about the O2 CZ deployment details – especially the technical ones – Jaroslav will be happy to have a conversation with you.
How do you find good WebRTC outsourcing talent?
At least once a week.
That’s about the current rate in which I bump into a hiring or talent question related to WebRTC.
Recently, I got a few calls with companies that went through the process of working with an outsourcing vendor who developed their app and got stuck.
Sometimes it was due to bad blood between the two companies. But more often than not, it was because the company that approached me wasn't happy with the delivered results. The application that was developed just didn't really work as expected. Looking at some of these apps, it was readily apparent that the developers were clueless about WebRTC. Things like wrong NAT traversal configurations (or none at all), or the use of mesh media delivery for large multiparty video sessions, are the most obvious warning signs here.
If I had to think about why this is so, my guess is that it boils down to three reasons:
When you go and ask an outsourcing vendor to build you a service, the answer you will get is "sure thing". And then a price and a timeline. That's their business, and most would often use that project as their springboard into another domain of expertise. Many of these outsourcing vendors won't invest in learning new technologies without a customer paying for that investment.
This means that a lot of the market for WebRTC outsourcing is a market of lemons. Which is why it is so important you check and validate your prospective WebRTC outsourcing vendor before signing an agreement with him.
Picked a WebRTC outsourcing vendor? Here are a few quick telltale signs that will help you determine just how knowledgeable he is about WebRTC:
Here are 6 questions to ask yourself before you hire a WebRTC outsourcing vendor.
#1 – Do I know my own requirements?

There are two parts to knowing your requirements from the product:
For that, I suggest you use something like my WebRTC requirements template.
#2 – Am I their first WebRTC customer?

This is a biggie.
Try. Not. To be. Their FIRST. Customer. That does. WebRTC.
Don’t be their first customer doing WebRTC.
Make sure you’re not the first one they build a WebRTC product for.
Their first WebRTC project? You shouldn’t be the one they do it for.
Got the point?
One more time if you missed it:
I knew that picture (and font) would come in handy some day.
#3 – Has the team working for me built a WebRTC product before?

This one is somewhat tricky, and I must say – a bit new in my list of top questions to ask a WebRTC outsourcing vendor.
If you've been reading this from the start instead of skimming through, you might have seen the number 12,000. This number is higher than the number of profiles on LinkedIn that have the term WebRTC in them anywhere. It means that with some of these WebRTC outsourcing vendors, the people put on your project might not be the ones who know WebRTC – those are already fully booked by other clients – or they might have gone elsewhere (with the demand for WebRTC developers, I wouldn't be surprised to see them learn the trade at one vendor and move on to the next).
I’ve seen it happen once or twice before.
So make sure that not only does the vendor know WebRTC well – it is also placing the right people on your project. And understand that there are times when not the whole team must know WebRTC to develop a successful project.
#4 – Can I validate what they build for me?

Developers who don't know and understand WebRTC won't be able to deliver a commercial product for you.
If they don’t understand the server side of WebRTC and its implications (check my free mini course on WebRTC server side), then the end result will run great between you and your pal sitting next to you, but when you take it to production it will fail spectacularly.
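The classic telltale here is NAT traversal: an app configured with no TURN server will work fine on the office LAN and then die in the field. A correct configuration looks something like this – the URLs and credentials below are placeholders:

```javascript
// Without a TURN fallback, calls fail for users behind symmetric NATs and
// strict firewalls – the "works next to each other, fails in production" bug.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.org:3478' },
    {
      urls: 'turn:turn.example.org:3478?transport=udp',
      username: 'demo',        // placeholder – prefer short-lived credentials
      credential: 'secret',
    },
  ],
});
```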
Things to look for:
While some of these can be solved just by more testing (and focused testing – one where the tester actually knows what to look for), there are times when the architecture selected for the product is just all wrong. It should have been apparent from the get-go that it wouldn't hold water.
But anyway – make sure you've got a plan in place on what and how to test, to validate that the thing that was given to you as the finished good is actually the finished good – and not finished for good.
#5 – Should I ask for something On Premise or CPaaS based?

This goes back to #1, but slightly different. Probably should have placed it as #2.
Developing your own product from scratch will be more expensive than using a CPaaS vendor. CPaaS vendors are those vendors that take the whole hassle of real time communications, wrap it with their nice API and manage it all for you (and yes, I wrote a report about them).
Whenever I sit down with an entrepreneur who wants a product, I start there when it comes to vendor and technology stack selection – trying to understand his restrictions and requirements. Oftentimes, entrepreneurs are deterred by the seemingly high pricing of CPaaS vendors. Especially at the beginning – when they believe they will get to a million monthly active subscribers within a month. Well… it won't happen to you. And if it does, a VC or two will probably be happy to foot that bill, understanding you probably found a real boon.
What should you do?
Someone needs to be the owner of this project on your end.
Yes. You have a WebRTC outsourcing vendor developing this thing for you, but you need someone to have that vendor behave and deliver.
That someone needs to understand WebRTC well enough to handle the requirements and the discussions with the vendor on all the issues that will arise along the way.
I’d also recommend having that someone on the payroll and not external.
If you don't have such a someone, then you've effectively selected yourself for that job. Congrats!
Do Your Homework

If you plan on starting a project that makes use of WebRTC, and you plan on using a WebRTC outsourcing vendor for it, start by doing your homework.
Make sure you have the answers to the questions above.
And if you need help along the way – with the requirements, the architecture, the vendor to select, the process – you know where to find me.
My physical machine runs Debian Jessie, and it has several LXC containers (mostly Debian and Ubuntu). Now I needed to test some software under CentOS, and I bumped into the following error when installing Apache HTTP server:
```
Downloading packages:
httpd-2.4.6-45.el7.centos.4.x86_64.rpm             | 2.7 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : httpd-2.4.6-45.el7.centos.4.x86_64                  1/1
Error unpacking rpm package httpd-2.4.6-45.el7.centos.4.x86_64
error: unpacking of archive failed on file /usr/sbin/suexec;590112cd: cpio: cap_set_file
  Verifying  : httpd-2.4.6-45.el7.centos.4.x86_64                  1/1

Failed:
  httpd.x86_64 0:2.4.6-45.el7.centos.4
```

The thing is that, by default, "/usr/share/lxc/config/centos.common.conf" defines the following capability drops:
```
lxc.cap.drop = mac_admin mac_override setfcap setpcap
lxc.cap.drop = sys_module sys_nice sys_pacct
lxc.cap.drop = sys_rawio sys_time
```

So, the setfcap capability is required in order to install Apache. Use the following lines in your "/var/lib/lxc/NAME/config" to flush the previously defined drops and set up a new list:
```
# flush all defined drops and define a new list
lxc.cap.drop =
lxc.cap.drop = mac_admin mac_override setpcap
lxc.cap.drop = sys_module sys_nice sys_pacct
lxc.cap.drop = sys_rawio sys_time
```

Then restart the container, and "yum install httpd" should run as expected.
Security is… complex. Even with WebRTC.
I’ve always been one to praise the security measures placed in WebRTC.
While WebRTC is a secure protocol by nature, it seems that browsers take different approaches to who needs to take responsibility for any additional means of security.
The gist of it:
Seriously – what’s not to like?
Recently though, I started thinking about it. How do browser vendors think about security? How much do they take it upon themselves to be the guardians of their users? Their trusted guide in the big bad world that is the Internet?
Which brings me to the big one –
Are browser vendors responsible for the actions of their users when it comes to WebRTC?
It seems that they have different approaches and concepts to this one.
Google Chrome

Motto: Users are stupid and should be protected
That's how I'd put their mindset into words.
getUserMedia

Chrome has long been one to clamp down on where and when WebRTC can be used.
They started off with voice and video working on both HTTP and HTTPS, but on HTTP, camera and microphone grants were never remembered, so the user's approval was required each and every time.
They shifted towards HTTPS only. You can’t access the microphone or the camera in an HTTP page.
Persistence

The decision a user made is persistent. If you granted a domain access to your microphone or camera – Chrome remembers it – for eternity. Your only way of revoking that is by clicking the camera icon on the address bar (if you can even notice it):
Oh, and for persistency – Chrome offers you two choices:
No middle-ground here.
Screen sharing

You can share your screen with Chrome.
But it will ask the user each time for his permission.
And to enable screen sharing, you will first need to create a Chrome Extension for your web app and have the user install it. Not a biggie, but a hurdle.
Now, to publish a Chrome Extension on the Chrome Web Store, you’ll need to pay a small $5 fee.
Why? Fraud – obviously:
You see, screen sharing is considered by Google (and most other browsers) as more of a security threat than camera and microphone access.
By forcing the Chrome Extension, Google raises the bar against abuse, and can theoretically remove any abusive accounts and extensions with better traceability to their source.
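For reference, the heart of such an extension is a single call to Chrome's desktopCapture API. A sketch, with the extension/page message-passing glue omitted:

```javascript
// Inside the extension: open Chrome's source picker, then hand the resulting
// streamId to getUserMedia with Chrome's desktop-capture constraints.
chrome.desktopCapture.chooseDesktopMedia(['screen', 'window'], (streamId) => {
  navigator.mediaDevices
    .getUserMedia({
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: streamId,
        },
      },
    })
    .then((stream) => console.log('sharing stream', stream.id));
});
```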
The only real downside of it? I now have over 10 icons on my toolbar in Chrome, and most of them are for screen sharing on different services. Once in a while I remove a few of them to declutter my browser. Yuck.
Mozilla Firefox

Motto: Users are intelligent
Maybe. But not all of humanity. Or even the billion or two that use browsers.
getUserMedia

In Firefox, getUserMedia will work on HTTP.
I'm not sure if persistence can be configured in Firefox for HTTP websites. I guess it is akin to herd immunity in vaccination: since Chrome is THE browser, developers make sure their WebRTC service works on Chrome (let's call it Chrome-first?), so their services start by running only on HTTPS anyway.
Persistence

Anyway, Philipp Hancke wrote a great post about getUserMedia timing across browsers. Here's how the timing looks for appear.in, from the moment getUserMedia is called until it completes:
Firefox tends to take longer to complete its getUserMedia calls. Philipp attributes it to this little UI design decision in Firefox:
In Firefox, if you want the decision (allow/disallow) to be persisted, you need to opt in for it. And on appear.in, most people don't opt in.
This is great, especially for the Don’t Allow option (it is quite a hassle to remove that restriction from Chrome once you decided not to allow such access in a session).
Screen sharing

For screen sharing, Firefox used to have a whitelist of domains you had to register on to get screen sharing to work.
From Firefox 52, this restriction has been removed. Mozilla wrote a post about it, explaining to their millions of users around the world the dangers.
I am not sure about you, but I learned early on as a developer catering to developers that other developers are stupid (if you are a developer, then I am sorry, but bear with me – and read this one while you're at it). So when I wrote code for developers, I made sure that if they screwed things up, we crashed spectacularly. The reasoning was: the sooner we crash, the faster our customers (who are developers) will fix their bugs – and do so during development – so they won't get into deadlocks or weird crashes in production that are way harder to find. These were the good old days of C programming.
Now… if developers are stupid, then what would mere users do about their understanding of security and threats?
In Firefox, they need to read and understand that yellowish warning when all they want to do is share their screen now – after all – people are waiting for them to do so in the session already.
With such a warning… I am not sure I am going to be in a trusting mood no matter the site.
While I mostly prefer Firefox's approach to getUserMedia permissions, I think Chrome does a better job on screen sharing with its extensions mechanism.
Microsoft Edge

Microsoft Edge has started to support WebRTC (finally).
While I am in the process of installing my Creators Update (where I am promised proper support for WebRTC), it will take more time than I have to get some nice screenshots of what Edge is doing.
So I asked Philipp Hancke (like I do about these things).
Here’s what I got:
Download the WebRTC Device Cheat Sheet to learn more on how to get WebRTC to as many devices and environments as possible.
Are Browser Vendors Responsible for Our WebRTC Actions?

Yes they are.
Just as browser vendors push HTTPS everywhere, remove Flash from the web, and protect against known phishing sites, they also need to protect users from the abuse of WebRTC.
The first step is by not allowing developers to do stupid (by forcing encryption and DTLS-SRTP for example). The second one and just as important is by not allowing users to do stupid.
And I have a couple of bonuses waiting for you in this WebRTC course launch.
I’ve been thinking lately on how to make this course available throughout the year, but still “launch” it as a live program once or twice every year. The idea here is to get as many people as possible into the course and improve our current market state (which is rather abysmal):
I always say that WebRTC sits between Web and VoIP, but I guess this says it best.
You can find a million people whose profiles contain either "VoIP" or "HTML5". If you go into specifics, you'll find hundreds of thousands of people with either "SIP" or "Node.js". But "WebRTC"? Only 11,874 righteous people. We're a pretty small industry. And those with enough understanding and knowledge of WebRTC? Probably less than that.
What are people challenged with?

The request that comes up almost every time someone contacts me through the blog? It is about finding an experienced WebRTC developer. Here are a few "sound bites" from these emails I am getting:
if we were to hire someone to build our own platform – what qualifications in a programmer would I need to look for?!!
We are needing to develop video chat and having a difficult time finding a qualified developer to create this
I am seeking a WebRTC engineer to do a peer review on a WebRTC app I had developed in oversees (west Russia.)
A couple of thoughts about this
And since the market is so slim on resources (around 12,000 people know WebRTC out of a million who know VoIP – when all VoIP projects are adding WebRTC these days), demand and supply don’t match.
My WebRTC course and its bonuses

Tomorrow, my Advanced WebRTC Architecture course officially launches. If you haven't enrolled already, then you should seriously consider doing so.
The previous round had almost 100 students going through it with some very positive feedback.
There are going to be a few bonus materials that I will be giving for anyone who enrolls today (or already enrolled):
#1 – 2 live lessons

There are going to be 2 special live lessons taking place. They will be recorded for those who can't join live. But the lessons as well as the recordings will only be available as part of the course bonuses.
LIVE Lesson 1: Philipp Hancke – Video Quality in WebRTC: The audio and video quality WebRTC provides is amazing. Well, most of the time at least. Sometimes, the video gets pixelated and the audio even starts dropping out. What is going on here, and why is bandwidth estimation still a problem?
LIVE Lesson 2: Bradley T. Hughes – How to deploy TURN on AWS? TURN servers are boring. They do nothing but relay data. However, they are necessary in WebRTC. Here's how appear.in's global TURN infrastructure works – and what you should think about when deploying your own.
So…
2 live lessons.
With top industry experts.
Recorded and available only for you.
#2 – The Perfect WebRTC Developer Profile ebook

Recently I've been asked multiple times about CVs and profiles and stuff. It goes both ways:
I had my own thoughts about it, but decided to take a different route on this one. I went and asked top developers and "recruiters" who have been working with WebRTC for quite some time now. I asked them about the ideal WebRTC developer and what they'd look for in a CV. I collected the answers and created an ebook out of it:
Who’s in there? Amir Zmora, Arin Sime, Chad Hart, Emil Ivov, Gustavo García, Iñaki Baz Castillo and Philipp Hancke.
You’ll get to see what they think about WebRTC developers and what it means to be a WebRTC professional.
#3 – WebRTC Course FAQ

There are a lot of popular questions out there about WebRTC. You can find them lurking on the webrtc-discuss forum, Stack Overflow, Quora and elsewhere. But what are the answers? And how should you go about finding them?
What I did in the past few weeks was collect questions and map them to the course lessons. To these questions I provided short and clear answers for you, packaging it all in a neat document.
Now, you can use these questions to tackle specific issues you bump into – or to check how much you understood of the course lessons. Hell – if you need to recruit someone – you might as well use them as good questions to gauge experience.
What if you are not sure?

Besides looking at the testimonials from previous students, I can suggest checking out two things:
Bonuses will go away in 48 hours.
After that, the only price plan available for the course will be the Plus price plan and it will only include the Office Hours for the initial duration of this course.
My suggestion?
Enroll now to the Advanced WebRTC Course
NanoPi NEO2 by FriendlyElec is a new sub-$20 Linux microcomputer, built on the Allwinner H5 SoC, providing a Gigabit Ethernet and a USB 2.0 interface. Additional interfaces are also possible via expansion headers (some soldering work needed). The board is equipped with 512MB of DDR3 RAM.
It is highly recommended to buy the heatsink alongside the board. The CPU heats up quite significantly, and it needs cooling. With a "stress -c 4" CPU load test, "armbianmonitor -m" shows the core temperature rising up to 75C. The board sustains long-term load under such conditions. But with a fan, the core temperature drops below 40C, and the power consumption drops significantly too.
The plastic 3D-printed enclosure is of little use. First, it's quite easy to break when you insert the board. Also, it does not hold the heatsink in place properly.
So, I ended up using the original cardboard packaging as a base for the board, just to avoid extra touching of the electronic circuits, and to secure the USB power cable:
The Armbian nightly image booted without problems. So far, I have noticed the following minor problems with it:
Network traffic tests with tcpkali (debs, deb build scripts) demonstrated that the CPU is able to saturate the Gigabit Ethernet port with TCP traffic, reaching above 900Mbps throughput.
All in all, this board looks much more reliable than the Orange Pi Zero: it can work for long hours with a USB WiFi dongle, whereas the OPi Zero was hanging after a few minutes of work (using the same USB power cable, power source, and dongle).